UK PET Chemistry Workshop on Radiotracer Dispensing Oct 21, 2022 09:00 AM - 05:00 PM — KCL, London
Scottish Radiotherapy Research Forum Nov 10, 2022 10:00 AM - 04:00 PM — University of Stirling
IPEM Advanced Neuro MRI Nov 15, 2022 09:00 AM - 05:00 PM — Birmingham


SINAPSE experts from around Scotland have developed ten online modules designed to explain medical imaging. They are freely available and are intended for non-specialists. **Unfortunately, these modules do not currently work in browsers.**

Edinburgh Imaging Academy at the University of Edinburgh offers the following online programmes through a virtual learning environment:

Neuroimaging for Research MSc/Dip/Cert

Imaging MSc/Dip/Cert

PET-MR Principles & Applications Cert

Applied Medical Image Analysis Cert

Online Short Courses

Reliability of the modified Rankin Scale across multiple raters - Benefits of a structured interview

Author(s): J. T. L. Wilson, A. Hareendran, A. Hendry, J. Potter, I. Bone, K. W. Muir

Background and Purpose - The modified Rankin Scale (mRS) is widely used to assess global outcome after stroke. The aim of the study was to examine rater variability in assessing functional outcomes using the conventional mRS, and to investigate whether use of a structured interview (mRS-SI) reduced this variability.

Methods - Inter-rater agreement was studied among raters from 3 stroke centers. Fifteen raters were recruited who were experienced in stroke care but came from a variety of professional backgrounds. Patients at least 6 months after stroke were first assessed using conventional mRS definitions. After completion of initial mRS assessments, raters underwent training in the use of a structured interview, and patients were re-assessed. In a separate component of the study, intrarater variability was studied using 2 raters who performed repeat assessments using the mRS and the mRS-SI. The design of the latter part of the study also allowed investigation of possible improvement in rater agreement caused by repetition of the assessments. Agreement was measured using the kappa statistic (unweighted and weighted using quadratic weights).

Results - Inter-rater reliability: Pairs of raters assessed a total of 113 patients on the mRS and mRS-SI. For the mRS, overall agreement between raters was 43% (kappa = 0.25, kappa(w) = 0.71), and for the structured interview overall agreement was 81% (kappa = 0.74, kappa(w) = 0.91). Agreement between raters was significantly greater on the mRS-SI than the mRS (P < 0.001). Intrarater reliability: Repeatability of both the mRS and mRS-SI was excellent (kappa = 0.81, kappa(w) >= 0.94).

Conclusions - Although individual raters are consistent in their use of the mRS, inter-rater variability is nonetheless substantial. Rater variability on the mRS is thus particularly problematic for studies involving multiple raters. There was no evidence that improvement in inter-rater agreement occurred simply with repetition of the assessment. Use of a structured interview improves agreement between raters in the assessment of global outcome after stroke.
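The kind of agreement statistic reported above can be computed directly from two raters' scores. The sketch below is a minimal illustration (not the study's code or data): `cohens_kappa` is a hypothetical helper, and the mRS grades are invented for demonstration. Unweighted kappa counts only exact matches, while quadratic weighting credits near-misses, which is why the weighted values in the abstract are higher than the unweighted ones.

```python
import numpy as np

def cohens_kappa(r1, r2, n_categories, weights=None):
    """Cohen's kappa for two raters scoring the same subjects.

    weights=None gives unweighted kappa; weights="quadratic" gives
    quadratically weighted kappa, which penalizes disagreements by
    the squared distance between the two grades.
    """
    # Build the joint (confusion) probability matrix of the two ratings.
    conf = np.zeros((n_categories, n_categories))
    for a, b in zip(r1, r2):
        conf[a, b] += 1
    conf /= conf.sum()

    # Disagreement weights: 0 on the diagonal (exact agreement).
    i, j = np.indices((n_categories, n_categories))
    if weights == "quadratic":
        w = ((i - j) / (n_categories - 1)) ** 2
    else:
        w = (i != j).astype(float)

    observed = (w * conf).sum()                                   # observed disagreement
    expected = (w * np.outer(conf.sum(axis=1), conf.sum(axis=0))).sum()  # chance disagreement
    return 1 - observed / expected

# Hypothetical mRS grades (0-5) from two raters on ten patients.
rater1 = [0, 1, 2, 2, 3, 4, 5, 3, 1, 2]
rater2 = [0, 1, 2, 3, 3, 4, 5, 2, 1, 2]
print(round(cohens_kappa(rater1, rater2, 6), 3))                       # unweighted
print(round(cohens_kappa(rater1, rater2, 6, weights="quadratic"), 3))  # quadratic weights
```

With these invented grades the two raters disagree only by one grade on two patients, so the quadratically weighted kappa comes out much closer to 1 than the unweighted value, mirroring the kappa vs. kappa(w) gap in the results.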

Full version: Available here


ISSN: 0039-2499
Publication Year: 2005
Periodical: Stroke
Periodical Number: 4
Volume: 36
Pages: 777-781