Consensus estimate: based on the assumption that two (or more) coders can come to exact agreement (percent agreement, kappa)
Consistency estimate: based on the assumption that coders need not agree exactly, but must remain consistent within their own understanding (Pearson's r, Spearman's rho, Cronbach's alpha; two or more observers)
Perfect correlation between observations is possible even when the observers assign different values
e.g. obs 1 codes 4, 6, 8; obs 2 codes 5, 7, 9; obs 3 codes 6, 8, 10. The values never match, but they rise and fall together, so the relative magnitudes are the same (see the sketch below).
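A minimal Python sketch of this example, assuming NumPy is available: the three observers' ratings correlate perfectly even though they never assign the same value.

```python
import numpy as np

obs1 = np.array([4, 6, 8])
obs2 = np.array([5, 7, 9])
obs3 = np.array([6, 8, 10])

# Every pairwise Pearson correlation is 1.0, even though the observers
# never assign the same value (exact percent agreement would be 0%).
print(np.corrcoef([obs1, obs2, obs3]))
```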
Consensus
Percent agreement (does not account for agreement by chance)
Cohen's kappa (takes chance agreement into account)
Consistency
correlation coefficient (does not take variance between coders into account)
Cronbach's alpha (corrects for variance; can assess more than two coders, as in the sketch below)
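A minimal sketch of Cronbach's alpha for this use, treating each coder as an "item" and each coded unit as a case; the function name and ratings matrix are hypothetical (the data reuse the three-observer example above, so alpha comes out to 1.0).

```python
import numpy as np

def cronbach_alpha(ratings):
    """ratings: 2D array-like, rows = coded units, columns = coders."""
    ratings = np.asarray(ratings, dtype=float)
    n_coders = ratings.shape[1]
    coder_vars = ratings.var(axis=0, ddof=1)      # variance of each coder's ratings
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of per-unit totals
    return (n_coders / (n_coders - 1)) * (1 - coder_vars.sum() / total_var)

# The three observers above: perfectly consistent, so alpha = 1.0
print(cronbach_alpha([[4, 5, 6], [6, 7, 8], [8, 9, 10]]))
```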
Kappa
P_A = observed agreement (proportion of cases on which the coders agree)
P_C = agreement expected by chance (the product of the coders' "yes" percentages plus the product of their "no" percentages)
Kappa = (P_A - P_C) / (1 - P_C)
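A minimal sketch of this formula for a 2x2 yes/no table; the function name and the cell counts in the usage line are hypothetical.

```python
def cohens_kappa_2x2(yes_yes, yes_no, no_yes, no_no):
    n = yes_yes + yes_no + no_yes + no_no
    p_a = (yes_yes + no_no) / n                     # observed agreement
    # chance agreement: product of the "yes" marginals plus product of the "no" marginals
    p_c = ((yes_yes + yes_no) * (yes_yes + no_yes)
           + (no_no + no_yes) * (no_no + yes_no)) / n**2
    return (p_a - p_c) / (1 - p_c)

# 70% raw agreement, but half of that is expected by chance: kappa = 0.4
print(cohens_kappa_2x2(20, 5, 10, 15))
```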
Bias: “tendency of a measurement process to over- or under-estimate the value of a population parameter”
Marginal homogeneity
kappa becomes more conservative as marginal homogeneity increases
i.e. where the coders' marginals (the yes/no totals at the side of the table) are nearly the same: high percentage agreement, but a lower kappa (illustrated below)
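A hypothetical illustration: both tables below show 60% observed agreement, but kappa is far lower where the coders' "yes" marginals match (70 and 70) than where they differ (90 vs. 50). The small helper repeats the kappa sketch above so this snippet runs on its own.

```python
def kappa(a, b, c, d):  # a = yes/yes, b = yes/no, c = no/yes, d = no/no
    n = a + b + c + d
    p_a = (a + d) / n
    p_c = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
    return (p_a - p_c) / (1 - p_c)

print(kappa(50, 20, 20, 10))  # homogeneous marginals (70/70 yes): kappa ~ 0.05
print(kappa(50, 40, 0, 10))   # heterogeneous marginals (90 vs. 50 yes): kappa = 0.20
```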
Trait prevalence
kappa becomes more conservative as the prevalence of a trait becomes very high or very low
i.e. where a single cell (e.g. the yes/yes agreement cell) holds notably more cases than the others (illustrated below)
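A hypothetical illustration of the prevalence effect: both tables below show 90% observed agreement, but when nearly every case lands in the yes/yes cell, kappa collapses (here even below zero).

```python
def kappa(a, b, c, d):  # same helper as above, repeated so this runs standalone
    n = a + b + c + d
    p_a = (a + d) / n
    p_c = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
    return (p_a - p_c) / (1 - p_c)

print(kappa(45, 5, 5, 45))  # balanced prevalence: kappa = 0.80
print(kappa(90, 5, 5, 0))   # extreme prevalence: kappa ~ -0.05
```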