Clinical rating: the human expertise at the heart of the system

In a DAWBA assessment, the "input" is the information collected from parents, teachers and young people by interview and questionnaire; the "output" is the set of diagnoses (DSM-5, DSM-IV or ICD-10) assigned to each child or young person - or no diagnosis in many instances. For some assessment packages, the link from input to output is fully automated, relying entirely on computer algorithms. The DAWBA is not like this; for most research studies and nearly all clinical work, it is semi-automated, using computer programs to facilitate the work of experienced clinical raters. The final diagnostic decision is made by the DAWBA clinical rater, not by the computer. Because they don't carry out the interviews themselves, clinical raters can potentially be based anywhere in the world. Data gathered in London can be rated in Sydney, or vice versa.

Stage 1: Provisional diagnoses generated by computer
The answers to the structured questions from the interviews and questionnaires are fed into a computerised diagnostic algorithm. This algorithm predicts how likely it is that an experienced clinical rater would assign the child operationalised ICD-10, DSM-IV or DSM-5 diagnoses. The computer prediction is never an absolute "no" or "yes"; instead, the individual is assigned to one of six probability bands, ranging from less than 0.1% likely to more than 70% likely.
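
To make the banding step concrete, here is a minimal Python sketch. Only the two outer cut-offs (less than 0.1% likely and more than 70% likely) are stated above; the three interior thresholds in the sketch are hypothetical placeholders, not the actual DAWBA values.

```python
# Minimal sketch of assigning a predicted probability to one of six bands.
# The 0.001 and 0.70 cut-offs come from the description above; the three
# interior cut-offs (0.005, 0.03, 0.15) are hypothetical placeholders.
BAND_CUTOFFS = [0.001, 0.005, 0.03, 0.15, 0.70]

def probability_band(p: float) -> int:
    """Map a predicted diagnosis probability (0-1) to a band from
    1 (less than 0.1% likely) to 6 (more than 70% likely)."""
    for band, cutoff in enumerate(BAND_CUTOFFS, start=1):
        if p < cutoff:
            return band
    return 6

print(probability_band(0.0005))  # 1: less than 0.1% likely
print(probability_band(0.85))    # 6: more than 70% likely
```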

In the case of ADHD and hyperkinesis, a prediction that there is a high probability of a diagnosis will only be made when there is evidence of pervasive symptoms and impact, as judged by reports of home and school behaviour. For emotional and conduct disorders, a prediction that there is a high probability of a diagnosis can be made on the basis of symptoms and impact in just one setting.
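
A minimal sketch of this pervasiveness rule, assuming the evidence from each setting can be reduced to a single yes/no for "symptoms and impact present" (the real algorithm is more nuanced):

```python
def high_probability_allowed(disorder_group: str,
                             home_evidence: bool,
                             school_evidence: bool) -> bool:
    """Illustrative rule: may the algorithm predict a high probability?

    For ADHD/hyperkinesis, symptoms and impact must be pervasive,
    i.e. reported at both home and school; for emotional and conduct
    disorders, evidence from one setting is enough.
    """
    if disorder_group in ("adhd", "hyperkinesis"):
        return home_evidence and school_evidence
    return home_evidence or school_evidence
```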

Stage 2: The clinical rater reviews all relevant information
A specially designed computer program allows the clinical rater to move rapidly backwards and forwards between three sorts of information on each child (pictured schematically in the sketch after this list):

1. The provisional diagnoses made by the computer algorithm.
  
2. Summaries of the answers given by the parent, teacher and young person to the structured questions. There is one summary for each area of symptoms - separation anxiety, depression, PTSD etc.
  
3. The transcripts of all the answers to open-ended questions.
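
Purely to fix ideas, these three sorts of information could be pictured as a record like the following; the field names are invented for illustration and do not reflect the dawba.net data model.

```python
from dataclasses import dataclass

@dataclass
class RatingView:
    """Hypothetical view of one child's data as seen by the rater."""
    provisional_diagnoses: dict[str, int]         # diagnosis -> probability band (1-6)
    symptom_summaries: dict[str, dict[str, str]]  # symptom area -> informant -> summary
    transcripts: dict[str, list[str]]             # informant -> open-ended answers
```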

Stage 3: Definitive diagnoses generated by the clinical rater
The rater decides on the definitive diagnoses and keys them into the computerised database. To get a clearer idea of the rating process, you can look at the dawba.net rating screens for two demonstration subjects. Try it out!

Clinical raters perform four major tasks that a computer cannot:

1. The raters use the transcripts to check whether respondents appear to have understood the fully structured questions. This is particularly valuable for relatively unusual symptoms such as obsessions and compulsions. Even when parents or young people say "yes" to questions asking about such symptoms, a description of the problem in their own words often makes it clear that they are not describing what a clinician would consider to be an obsession or compulsion.
  
2. The raters consider how to interpret conflicts of evidence between informants. Reviewing the transcripts and the interviewer's comments often helps decide whose account to prioritise: it may be clear that one respondent gives a convincing account of symptoms, whereas the other respondent minimises all symptoms in a defensive way. Conversely, one respondent may clearly be exaggerating.
  
3. The raters aim to catch those emotional, conduct and hyperactivity disorders that slip through the "operationalised" net. When the child has a clinically significant problem that does not meet operationalised diagnostic criteria, the clinician can assign a "not otherwise specified" (NOS) diagnosis such as "anxiety disorder, NOS" or "disruptive behavior disorder, NOS".
  
4. The raters rely primarily on the transcripts to diagnose less common disorders where the relevant symptoms are so distinctive that respondents' descriptions are often unmistakable.

Training of clinical raters

The success of the DAWBA depends crucially on the quality of the clinical raters. They need to be well trained in the current diagnostic schemes that emphasise operationalised diagnoses based on phenomenology rather than presumed aetiology or psychodynamics. There is an online manual to help new raters train themselves. This provides an opportunity for raters to familiarise themselves with the DAWBA and its computer programs, and then work through a series of practice cases with accompanying notes that explain how to deal with common difficulties and 'grey' cases. Supervision during training is of great value. Even after their initial training, new raters need supervision for difficult cases. Experienced raters also benefit from regular consensus meetings that provide opportunities to discuss difficult cases.

Because of the large investment of time needed to train raters to acceptable standards, clinics or research projects that plan to use the DAWBA on a limited basis may be better off 'subcontracting' their ratings to experienced raters. This is straightforward, since the computerised rating packages allow raters to generate diagnoses on children they have never seen and who may live on a different continent. The only restriction is that the raters need to be able to read the language the transcripts are written in. As a rough guide, it is probably not worth training as a rater unless you will be using the DAWBA to assess at least 250 low-risk subjects (e.g. from a community sample), 100 high-risk subjects (e.g. from residential children's homes), or 50 clinic cases. Clinics and projects wishing to 'subcontract' their clinical ratings can contact us at support@youthinmind.com for suggestions about possible contacts.
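
As a purely illustrative encoding of the rough guide above (the threshold numbers come from the text; the helper itself is not part of any DAWBA software):

```python
# Minimum caseloads from the rough guide above.
TRAINING_THRESHOLDS = {"low_risk": 250, "high_risk": 100, "clinic": 50}

def worth_training_own_rater(sample_type: str, n_subjects: int) -> bool:
    """Rough guide: is training an in-house rater likely worthwhile?"""
    return n_subjects >= TRAINING_THRESHOLDS[sample_type]

print(worth_training_own_rater("clinic", 30))     # False -> consider subcontracting
print(worth_training_own_rater("low_risk", 400))  # True
```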

