Author(s)

J. T. L. Wilson, A. Hareendran, A. Hendry, J. Potter, I. Bone, K. W. Muir

ISSN

0039-2499

Publication year

2005

Periodical

Stroke

Issue

4

Volume

36

Pages

777-781

Abstract

Background and Purpose – The modified Rankin Scale (mRS) is widely used to assess global outcome after stroke. The aim of this study was to examine rater variability in assessing functional outcomes using the conventional mRS, and to investigate whether use of a structured interview (mRS-SI) reduces this variability.

Methods – Inter-rater agreement was studied among raters from 3 stroke centers. Fifteen raters were recruited who were experienced in stroke care but came from a variety of professional backgrounds. Patients at least 6 months after stroke were first assessed using conventional mRS definitions. After completion of the initial mRS assessments, raters underwent training in the use of a structured interview, and patients were reassessed. In a separate component of the study, intrarater variability was examined using 2 raters who performed repeat assessments with the mRS and the mRS-SI. The design of this latter part of the study also allowed investigation of whether rater agreement improved simply through repetition of the assessments. Agreement was measured using the kappa statistic (unweighted, and weighted using quadratic weights).

Results – Inter-rater reliability: pairs of raters assessed a total of 113 patients on the mRS and mRS-SI. For the mRS, overall agreement between raters was 43% (kappa = 0.25, kappa_w = 0.71); for the structured interview, overall agreement was 81% (kappa = 0.74, kappa_w = 0.91). Agreement between raters was significantly greater on the mRS-SI than on the mRS (P < 0.001). Intrarater reliability: repeatability of both the mRS and the mRS-SI was excellent (kappa = 0.81, kappa_w >= 0.94).

Conclusions – Although individual raters are consistent in their use of the mRS, inter-rater variability is nonetheless substantial; rater variability on the mRS is thus particularly problematic for studies involving multiple raters. There was no evidence that inter-rater agreement improved simply with repetition of the assessment. Use of a structured interview improves agreement between raters in the assessment of global outcome after stroke.
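The abstract reports agreement as Cohen's kappa, both unweighted and weighted with quadratic weights. As a general illustration of how quadratically weighted kappa is computed for an ordinal scale such as the mRS (this is a minimal generic sketch, not the authors' analysis code; the function and variable names are my own):

```python
def quadratic_weighted_kappa(rater_a, rater_b, categories):
    """Cohen's kappa with quadratic disagreement weights.

    rater_a, rater_b: parallel lists of ratings for the same subjects.
    categories: the ordered list of possible ratings (e.g. mRS grades 0-5).
    Assumes at least two distinct categories appear in the marginals.
    """
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}
    n = len(rater_a)

    # Observed joint proportions: obs[i][j] = fraction of subjects
    # rated category i by rater A and category j by rater B.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater_a, rater_b):
        obs[index[a]][index[b]] += 1.0 / n

    # Marginal distributions for each rater.
    pa = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]

    # Quadratic disagreement weight: penalty grows with the squared
    # distance between the two assigned grades.
    num = 0.0  # weighted observed disagreement
    den = 0.0  # weighted chance-expected disagreement
    for i in range(k):
        for j in range(k):
            d = (i - j) ** 2 / (k - 1) ** 2
            num += d * obs[i][j]
            den += d * pa[i] * pb[j]

    return 1.0 - num / den
```

With quadratic weights, disagreements one grade apart are penalized far less than disagreements several grades apart, which is why the weighted kappas in the abstract (0.71 and 0.91) are much higher than the unweighted ones on a 6-point ordinal scale.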