Reliability
- a measure of the stability/consistency of an instrument
- Procedures
  - Alternate forms & test-retest reliability
  - Interrater reliability
    - behavioral observations from 2+ observers (agreement sketch below)
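
A minimal sketch of how agreement between two observers could be quantified, using made-up behavioral codes: simple percent agreement plus Cohen's kappa, which corrects agreement for chance. The data, category labels, and the choice of kappa are illustrative assumptions, not part of the original notes.

```python
import numpy as np

# Hypothetical codes assigned by two observers to the same six observations.
obs_a = np.array(["on-task", "off-task", "on-task", "on-task", "off-task", "on-task"])
obs_b = np.array(["on-task", "off-task", "off-task", "on-task", "off-task", "on-task"])

# Percent agreement: proportion of observations coded identically.
p_o = np.mean(obs_a == obs_b)

# Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_e is chance agreement.
categories = np.union1d(obs_a, obs_b)
p_e = sum(np.mean(obs_a == c) * np.mean(obs_b == c) for c in categories)
kappa = (p_o - p_e) / (1 - p_e)

print(f"percent agreement = {p_o:.2f}, Cohen's kappa = {kappa:.2f}")
```
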
  - Test-retest reliability
    - test stability: the same instrument administered at least twice over time
    - one version of the instrument
  - Internal consistency reliability
    - scores are consistent across the items of the instrument
    - Kuder-Richardson split-half test
      - KR-20, KR-21 (worked sketch below)
      - split the responses in half and correlate the two halves
      - right-or-wrong (binary) scores
      - responses not influenced by speed
      - items measure a common factor
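
A minimal worked sketch of KR-20 and KR-21 on a made-up matrix of right/wrong (0/1) item scores; the data are hypothetical and sample (n-1) variances are assumed.

```python
import numpy as np

# Rows = examinees, columns = items scored 1 (right) or 0 (wrong).
scores = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 0, 1],
])

k = scores.shape[1]                 # number of items
total = scores.sum(axis=1)          # each examinee's total score
var_total = total.var(ddof=1)       # variance of total scores
p = scores.mean(axis=0)             # proportion answering each item correctly
q = 1 - p

# KR-20 uses the item-level p*q values.
kr20 = (k / (k - 1)) * (1 - np.sum(p * q) / var_total)

# KR-21 assumes equally difficult items and needs only the mean total score.
m = total.mean()
kr21 = (k / (k - 1)) * (1 - (m * (k - m)) / (k * var_total))

print(f"KR-20 = {kr20:.2f}, KR-21 = {kr21:.2f}")
```
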
    - Spearman-Brown formula
      - estimates reliability when the test length is changed (sketch below)
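
A short sketch of the Spearman-Brown prophecy formula, r_new = n*r / (1 + (n - 1)*r), which projects reliability when test length changes by a factor n; with n = 2 it also steps a split-half correlation up to full-test length. The example values are invented.

```python
def spearman_brown(r: float, n: float) -> float:
    """Projected reliability of a test n times as long as the original."""
    return (n * r) / (1 + (n - 1) * r)

# Halves of a test correlate at 0.70 -> estimated full-length reliability ~0.82.
print(spearman_brown(0.70, 2))

# Doubling the length of a test with reliability 0.75 -> ~0.86.
print(spearman_brown(0.75, 2))
```
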
    - Coefficient alpha / Cronbach's alpha
      - for continuous or Likert-type items (strongly agree to strongly disagree); sketch below
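
A minimal sketch of coefficient (Cronbach's) alpha on a hypothetical matrix of Likert-type responses (1 = strongly disagree ... 5 = strongly agree); the data are made up.

```python
import numpy as np

# Rows = respondents, columns = items on one scale.
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
])

k = responses.shape[1]
item_vars = responses.var(axis=0, ddof=1)        # variance of each item
total_var = responses.sum(axis=1).var(ddof=1)    # variance of total scores

# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```
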
  - Alternate forms reliability
    - two equivalent forms of the instrument measuring the same variable (correlation sketch below)
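
Both the test-retest (stability) coefficient and the alternate-forms (equivalence) coefficient are typically reported as a Pearson correlation between two sets of scores from the same people; a minimal sketch with invented scores:

```python
import numpy as np

time1 = np.array([82, 75, 90, 68, 77, 85])   # first administration (or Form A)
time2 = np.array([80, 78, 88, 70, 75, 87])   # second administration (or Form B)

r = np.corrcoef(time1, time2)[0, 1]
print(f"reliability coefficient = {r:.2f}")
```
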
Validity
- the instrument accurately assesses what it is intended to measure
- Standards
  - Evidence on internal structure
    - construct validity
    - are scores related to the items as expected? (item-total sketch below)
    - scores support the theory
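
One simple, illustrative check on internal structure is the corrected item-total correlation: each item should relate to the rest of the scale in the direction the construct predicts. The responses below are hypothetical, and this is only one of several internal-structure analyses (factor analysis is more common).

```python
import numpy as np

# Hypothetical Likert responses; the last item is worded in the reverse direction.
responses = np.array([
    [4, 5, 4, 2],
    [3, 3, 2, 4],
    [5, 5, 4, 1],
    [2, 3, 2, 5],
    [4, 4, 5, 2],
    [3, 4, 3, 3],
])

# Corrected item-total correlation: each item vs. the sum of the *other* items.
for i in range(responses.shape[1]):
    item = responses[:, i]
    rest_total = np.delete(responses, i, axis=1).sum(axis=1)
    r = np.corrcoef(item, rest_total)[0, 1]
    print(f"item {i + 1}: corrected item-total r = {r:+.2f}")
```
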
  - Evidence on relations to other variables
    - criterion-related validity
      - concurrent
      - predictive
    - examines measures outside of the test
    - predicts an outside criterion (sketch below)
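
A criterion-related validity coefficient is usually the correlation between test scores and an external criterion, measured at about the same time (concurrent) or later (predictive). The scores below are invented for illustration.

```python
import numpy as np

admission_test = np.array([520, 610, 480, 700, 560, 650])   # predictor scores
first_year_gpa = np.array([2.8, 3.2, 2.5, 3.8, 3.0, 3.4])   # criterion measured later

r = np.corrcoef(admission_test, first_year_gpa)[0, 1]
print(f"predictive validity coefficient = {r:.2f}")
```
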
  - Evidence on response processes
    - fit between the construct being measured and the nature of the responses
    - comparative interviews
  - Evidence on the consequences of testing
    - evidence supporting both the intended and unintended consequences of testing
    - benefits or liabilities of testing (consequences)
  - Evidence on test content
    - test content relates to what the instrument is intended to measure
    - panel of experts/judges (content validity ratio sketch below)
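
One common way to turn expert-panel judgments about content into a number is Lawshe's content validity ratio, CVR = (n_e - N/2) / (N/2), where n_e is the number of panelists rating an item "essential" and N is the panel size. This particular index is an illustrative choice, not something named in the original notes.

```python
def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    """Lawshe's CVR for a single item, ranging from -1 to +1."""
    return (n_essential - n_panelists / 2) / (n_panelists / 2)

# 9 of 10 judges rate an item essential -> CVR = 0.8; 5 of 10 -> CVR = 0.0.
print(content_validity_ratio(9, 10))
print(content_validity_ratio(5, 10))
```
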
- Did the author check for reliability?
- What kind of reliability was reported?
- Was an appropriate type used?
- Were the reliability values (coefficients) reported?
- Were they positive/high coefficients?
- Did the author check for validity?
- What type of validity was reported?
- Was more than one type reported?
- Was validity evidence reported with appropriate statistics?
- Was the evidence strong?