Unfolding the phenomenon of interrater agreement: a multicomponent approach for in-depth examination was proposed

Detailed Description

Bibliographic Details
Main authors: Björn Slaug (author), Oliver Schilling (author)
Document type: Article (Journal)
Language: English
Published: 2012
In: Journal of clinical epidemiology
Year: 2012, Volume: 65, Issue: 9, Pages: 1016-1025
ISSN:1878-5921
DOI:10.1016/j.jclinepi.2012.02.016
Online access: Publisher, full text: http://dx.doi.org/10.1016/j.jclinepi.2012.02.016
Authors: Björn Slaug, Oliver Schilling, Tina Helle, Susanne Iwarsson, Gunilla Carlsson and Åse Brandt
Description
Abstract:
OBJECTIVE: The overall objective was to unfold the phenomenon of interrater agreement: to identify potential sources of variation in agreement data and to explore how they can be statistically accounted for. The ultimate aim was to propose recommendations for in-depth examination of agreement to improve the reliability of assessment instruments.
STUDY DESIGN AND SETTING: Using a sample in which 10 rater pairs had assessed the presence/absence of 188 environmental barriers with a systematic rating form, a raters × items data set was generated (N=1,880). In addition to common agreement indices, relative shares of agreement variation were calculated. Multilevel regression analysis was carried out, using rater and item characteristics as predictors of agreement variation.
RESULTS: Following a conceptual decomposition, the agreement variation was statistically disentangled into relative shares. The raters accounted for 6-11%, the items for 32-33%, and the residual for 57-60% of the variation. Multilevel regression analysis showed that barrier prevalence and raters' familiarity with standardized instruments had the strongest impact on agreement.
CONCLUSION: Supported by a conceptual analysis, we propose an approach for in-depth examination of agreement variation as a strategy for increasing the level of interrater agreement. By identifying and limiting the most important sources of disagreement, instrument reliability can ultimately be improved.
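The decomposition described in the abstract, splitting agreement variation into rater-pair, item, and residual shares, can be illustrated with a minimal simulation. This is only a sketch: the effect sizes below are hypothetical, and a crude marginal-means calculation stands in for the paper's multilevel regression model.

```python
import numpy as np

rng = np.random.default_rng(42)
n_pairs, n_items = 10, 188  # as in the study: 10 rater pairs x 188 barriers

# Hypothetical effect sizes, chosen only for illustration
pair_effect = rng.normal(0.0, 0.05, n_pairs)   # rater-pair component
item_effect = rng.normal(0.0, 0.12, n_items)   # item (barrier) component
base_agreement = 0.80

# Probability of agreement per cell, then simulated 0/1 agreement (N = 1,880)
p = np.clip(base_agreement + pair_effect[:, None] + item_effect[None, :], 0.05, 0.95)
agree = rng.binomial(1, p).astype(float)

def variance_shares(a):
    """Crude two-way decomposition: shares of the total agreement
    variation attributable to rows (rater pairs), columns (items),
    and the residual, from the variances of the marginal means."""
    total = a.var()
    row_share = a.mean(axis=1).var() / total   # rater pairs
    col_share = a.mean(axis=0).var() / total   # items
    resid_share = max(1.0 - row_share - col_share, 0.0)
    return row_share, col_share, resid_share

pairs, items, resid = variance_shares(agree)
print(f"rater pairs: {pairs:.0%}, items: {items:.0%}, residual: {resid:.0%}")
```

In this toy decomposition the three shares sum to one by construction; the paper's multilevel analysis additionally regresses the components on rater and item characteristics (e.g., barrier prevalence, familiarity with standardized instruments) to explain them.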
Description: Viewed on 09.07.2019
Description: Online resource