Show simple item record

dc.contributor.advisor: Yoon, Myeongsun
dc.contributor.advisor: Willson, Victor
dc.creator: Cao, Qian
dc.date.accessioned: 2013-10-02T21:28:47Z
dc.date.available: 2015-05-01T05:57:08Z
dc.date.created: 2013-05
dc.date.issued: 2013-04-28
dc.date.submitted: May 2013
dc.identifier.uri: https://hdl.handle.net/1969.1/149310
dc.description.abstract: Interrater agreement and interrater reliability have attracted extensive attention because judgments from multiple raters are subjective and may vary from rater to rater. Five methods or indices are commonly proposed for evaluating them: percentage of agreement, the kappa coefficient, the correlation coefficient, the intraclass correlation coefficient (ICC), and generalizability (G) theory. In this study, we introduce these methods and discuss their advantages and disadvantages for evaluating interrater agreement and reliability. We then rank the five indices by how frequently they have been used in practice over the past five years. Finally, we illustrate how to apply the five methods under different circumstances and provide SPSS and SAS code for analyzing interrater agreement and reliability. Applying these methods to data from the Parent-Child Interaction System of global ratings (PARCHISY), we conclude the following: (1) ICC has been the most frequently used method for evaluating interrater reliability over the past five years, while G theory has been the least frequently used; based on the criteria, the G coefficients indicate interrater reliability similar to weighted kappa and ICC on most items. (2) When reliability is high, the different methods give consistent indications of interrater reliability under their respective criteria; when the methods disagree, ICC and the G coefficient indicate better interrater reliability under the criteria and remain consistent with each other. (en)
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.subject: interrater agreement (en)
dc.subject: interrater reliability (en)
dc.subject: Kappa (en)
dc.subject: ICC (en)
dc.subject: generalizability theory (en)
dc.title: Interrater Agreement and Reliability of Observed Behaviors: Comparing Percentage Agreement, Kappa, Correlation Coefficient, ICC and G Theory (en)
dc.type: Thesis (en)
thesis.degree.department: Educational Psychology (en)
thesis.degree.discipline: Educational Psychology (en)
thesis.degree.grantor: Texas A&M University (en)
thesis.degree.name: Master of Science (en)
thesis.degree.level: Masters (en)
dc.contributor.committeeMember: Kwok, Oi-Man
dc.type.material: text (en)
dc.date.updated: 2013-10-02T21:28:47Z
local.embargo.terms: 2015-05-01
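
As a rough illustration of how three of the indices named in the abstract (percentage agreement, Cohen's kappa, and a Shrout and Fleiss ICC) can be computed, the sketch below uses Python with NumPy on invented ratings for ten subjects scored by two raters. It is not the SPSS or SAS code provided in the thesis, and the data are hypothetical.

# Illustrative (hypothetical) ratings: 10 subjects scored by 2 raters on a 1-5 scale.
import numpy as np

rater1 = np.array([3, 4, 2, 5, 3, 4, 1, 2, 5, 3])
rater2 = np.array([3, 4, 3, 5, 3, 4, 2, 2, 5, 4])

# Percentage agreement: proportion of subjects given identical scores.
pct_agree = np.mean(rater1 == rater2)

# Cohen's kappa: chance-corrected agreement for two raters on categorical scores.
categories = np.union1d(rater1, rater2)
p1 = np.array([np.mean(rater1 == c) for c in categories])   # rater 1 marginals
p2 = np.array([np.mean(rater2 == c) for c in categories])   # rater 2 marginals
p_obs = pct_agree
p_exp = np.sum(p1 * p2)                                      # agreement expected by chance
kappa = (p_obs - p_exp) / (1 - p_exp)

# ICC(2,1): two-way random effects, single rater, absolute agreement
# (Shrout and Fleiss), built from a two-way ANOVA decomposition.
Y = np.column_stack([rater1, rater2]).astype(float)          # n subjects x k raters
n, k = Y.shape
grand = Y.mean()
ss_rows = k * np.sum((Y.mean(axis=1) - grand) ** 2)          # between-subjects SS
ss_cols = n * np.sum((Y.mean(axis=0) - grand) ** 2)          # between-raters SS
ss_total = np.sum((Y - grand) ** 2)
ss_err = ss_total - ss_rows - ss_cols                        # residual SS
msr = ss_rows / (n - 1)
msc = ss_cols / (k - 1)
mse = ss_err / ((n - 1) * (k - 1))
icc_2_1 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

print(f"percentage agreement = {pct_agree:.2f}")
print(f"Cohen's kappa        = {kappa:.2f}")
print(f"ICC(2,1)             = {icc_2_1:.2f}")

With this toy data the two raters agree exactly on 7 of 10 subjects, so percentage agreement is 0.70, while kappa and the ICC adjust for chance agreement and for the raters' means and variances, which is why the thesis compares the indices rather than relying on any single one.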

