Clinical rating: the human expertise at the heart of the system

In a DAWBA assessment, the "input" is the information collected from parents, teachers and young people by interview and questionnaire; the "output" is the set of diagnoses (DSM-5, DSM-IV or ICD-10) assigned to each child or young person, or no diagnosis in many instances. For some assessment packages, the link from input to output is fully automated, relying entirely on computer algorithms. The DAWBA is not like this: for most research studies and nearly all clinical work, it is semi-automated, using computer programs to facilitate the work of experienced clinical raters. The final diagnostic decision is made by the DAWBA clinical rater, not by the computer. Because they do not carry out the interviews themselves, clinical raters can potentially be based anywhere in the world; data gathered in London can be rated in Sydney, or vice versa.
Stage 1: Provisional diagnoses generated by computer

In the case of ADHD and hyperkinesis, a prediction that there is a high probability of a diagnosis is only made when there is evidence of pervasive symptoms and impact, as judged by reports of behaviour at both home and school. For emotional and conduct disorders, a prediction that there is a high probability of a diagnosis can be made on the basis of symptoms and impact in just one setting.
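The DAWBA's actual scoring algorithms are not reproduced here. Purely to illustrate the pervasiveness rule described above, the following Python sketch shows one way such a high-probability prediction could be expressed; the function name, parameters and boolean structure are hypothetical assumptions for illustration, not part of the DAWBA software.

```python
# Illustrative sketch only: the real DAWBA algorithms are more detailed and
# are not published in this form. All names here are hypothetical.

def high_probability_prediction(disorder: str,
                                symptoms_home: bool, impact_home: bool,
                                symptoms_school: bool, impact_school: bool) -> bool:
    """Return True if a high probability of diagnosis would be flagged."""
    home = symptoms_home and impact_home
    school = symptoms_school and impact_school
    if disorder in ("ADHD", "hyperkinesis"):
        # Pervasiveness required: symptoms and impact in BOTH settings.
        return home and school
    if disorder in ("emotional", "conduct"):
        # Symptoms and impact in a single setting are sufficient.
        return home or school
    raise ValueError(f"unknown disorder group: {disorder}")


# ADHD with impact reported at home only is not flagged as high probability,
# whereas a conduct disorder with the same single-setting picture is.
print(high_probability_prediction("ADHD", True, True, False, False))     # False
print(high_probability_prediction("conduct", True, True, False, False))  # True
```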
Stage 2: The clinical rater reviews all relevant information
Stage 3: Definitive diagnoses generated by the clinical rater

Clinical raters perform four major tasks that a computer cannot match:
Training of clinical raters

The success of the DAWBA depends crucially on the quality of the clinical raters. They need to be well trained in the current diagnostic schemes, which emphasise operationalised diagnoses based on phenomenology rather than presumed aetiology or psychodynamics. An online manual helps new raters train themselves: it provides an opportunity to become familiar with the DAWBA and its computer programs, and then to work through a series of practice cases with accompanying notes that explain how to deal with common difficulties and 'grey' cases. Supervision during training is of great value. Even after their initial training, new raters need supervision for difficult cases. Experienced raters also benefit from regular consensus meetings that provide opportunities to discuss difficult cases.

Because of the large investment of time needed to train raters to acceptable standards, clinics or research projects that plan to use the DAWBA on a limited basis may be better off 'subcontracting' their ratings to experienced raters. This is straightforward, since the computerised rating packages allow raters to generate diagnoses on children they have never seen and who may live on a different continent. The only restriction is that raters must be able to read the language the transcript is written in. As a rough guide, it is probably not worth training as a rater unless you will be using the DAWBA to assess at least 250 low-risk subjects (e.g. from a community sample), 100 high-risk subjects (e.g. from residential children's homes), or 50 clinic cases.

Clinics and projects wishing to 'subcontract' their clinical ratings can contact us at support@youthinmind.com for suggestions about possible contacts.

Last modified: 15/02/22