Scale Reliability and Validity
Reliability
- the degree to which the measure of a construct is consistent or dependable. If we use this scale to measure the same construct multiple times, it should produce the same result every time, assuming the underlying phenomenon is not changing.
Reliability implies consistency, but not accuracy.
Estimating Reliability
Test-retest Reliability
- a measure of consistency between 2 measurements/tests of the same construct administered to the same sample at 2 different points in time.
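In practice, test-retest reliability is typically estimated as the correlation between the two administrations. A minimal sketch in Python, using made-up scores:

```python
# Illustration: test-retest reliability as the Pearson correlation
# between two administrations of the same scale (data are invented).
import numpy as np

# Scores for 8 respondents at time 1 and time 2
time1 = np.array([12, 15, 11, 18, 14, 16, 13, 17])
time2 = np.array([13, 14, 12, 17, 15, 16, 12, 18])

# np.corrcoef returns the correlation matrix; [0, 1] is r(time1, time2)
r_test_retest = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability: {r_test_retest:.2f}")
```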
Split-half Reliability
- a measure of consistency between 2 halves of a construct measure.
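Split-half reliability is commonly computed by correlating the scores on the two halves. Because a half-length test underestimates the reliability of the full scale, the standard Spearman-Brown correction (an addition here, not mentioned above) is usually applied. A sketch with invented item data:

```python
# Sketch: split-half reliability with the Spearman-Brown correction
# for full test length (all item data are made up).
import numpy as np

# Rows = 6 respondents, columns = 6 items of one scale
items = np.array([
    [4, 5, 4, 5, 4, 5],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 4],
    [5, 5, 5, 4, 5, 5],
    [1, 2, 1, 1, 2, 1],
    [4, 4, 3, 4, 4, 3],
])

# Split items into odd- and even-numbered halves and total each half
half1 = items[:, 0::2].sum(axis=1)
half2 = items[:, 1::2].sum(axis=1)

r_half = np.corrcoef(half1, half2)[0, 1]
# Spearman-Brown: estimated reliability of the full-length scale
r_full = 2 * r_half / (1 + r_half)
print(f"Half-test r = {r_half:.2f}, Spearman-Brown corrected = {r_full:.2f}")
```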
Inter-rater Reliability (inter-observer reliability)
- a measure of consistency between 2 or more independent raters/observers of the same construct.
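For categorical ratings, one standard index of inter-rater reliability is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A self-contained sketch (kappa is one common choice among several; the ratings are invented):

```python
# Sketch: inter-rater reliability via Cohen's kappa for two raters
# assigning categorical codes to the same 10 observations.
import numpy as np

rater1 = np.array(["yes", "no", "yes", "yes", "no", "yes", "no", "no", "yes", "yes"])
rater2 = np.array(["yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"])

p_observed = np.mean(rater1 == rater2)  # proportion of exact agreements

# Agreement expected by chance, from each rater's marginal proportions
categories = np.union1d(rater1, rater2)
p_expected = sum(np.mean(rater1 == c) * np.mean(rater2 == c) for c in categories)

kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"Cohen's kappa: {kappa:.2f}")
```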
Internal Consistency Reliability
- a measure of consistency between different items of the same construct.
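The most widely used index of internal consistency is Cronbach's alpha. A minimal sketch computing it from the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), with made-up item scores:

```python
# Sketch: Cronbach's alpha for a 4-item scale (data are invented).
import numpy as np

# Rows = respondents, columns = items of one construct
items = np.array([
    [4, 5, 4, 5],
    [2, 1, 2, 2],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [1, 2, 1, 1],
    [4, 4, 3, 4],
], dtype=float)

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)       # variance of each item
total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```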
Sources of Unreliable Observations
: 1) observer's/researcher's subjectivity; 2) asking imprecise or ambiguous questions; 3) asking questions about issues that respondents are not very familiar with or do not care about.
Validity (construct validity)
- refers to the extent to which a measure adequately represents the underlying construct that it is supposed to measure.
Validity can be assessed using theoretical or empirical approaches, and should ideally be assessed using both.
Translational Validity
- theoretical assessment of validity that focuses on how well the idea of a theoretical construct is translated into or represented in an operational measure.
Face Validity
- refers to whether an indicator seems to be a reasonable measure of its underlying construct "on its face".
Criterion-related validity
- an empirical assessment of validity that examines how well a given measure relates to one or more external criteria, based on empirical observations.
4 Subtypes of Criterion-related Validity
: 1) convergent; 2) discriminant; 3) concurrent; 4) predictive validity
Content Validity
- an assessment of how well a set of scale items matches with the relevant content domain of the construct that it is trying to measure.
Convergent Validity
- refers to the closeness with which a measure relates to (or converges on) the construct that it is purported to measure.
Exploratory Factor Analysis
- a data reduction technique that aggregates a given set of items into a smaller set of factors based on their bivariate correlations, using a statistical technique called principal components analysis.
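A small simulation can make the idea concrete: items driven by the same underlying factor correlate highly, so the eigen-decomposition of their correlation matrix (the principal-components step) recovers far fewer factors than items. All data below are simulated for illustration:

```python
# Sketch of the principal-components step behind exploratory factor
# analysis: six items generated from two latent factors reduce to
# two dominant components (all data simulated).
import numpy as np

rng = np.random.default_rng(0)
n = 200
f1 = rng.normal(size=n)   # latent factor 1
f2 = rng.normal(size=n)   # latent factor 2

# Six items: three driven by each factor, plus noise
items = np.column_stack([
    f1 + 0.4 * rng.normal(size=n), f1 + 0.4 * rng.normal(size=n),
    f1 + 0.4 * rng.normal(size=n), f2 + 0.4 * rng.normal(size=n),
    f2 + 0.4 * rng.normal(size=n), f2 + 0.4 * rng.normal(size=n),
])

# Eigen-decompose the item correlation matrix (the PCA step)
corr = np.corrcoef(items, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]  # sort descending

print("Eigenvalues:", np.round(eigvals, 2))
# Two eigenvalues well above 1 -> the six items reduce to two factors
```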
Discriminant Validity
- refers to the degree to which a measure does not measure (or discriminates from) other constructs that it is not supposed to measure. Usually, convergent validity and discriminant validity are assessed jointly for a set of related constructs, as in the sketch below.
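A sketch of that joint assessment: with simulated data for two constructs measured by three items each, within-construct correlations should be high (convergent evidence) and across-construct correlations low (discriminant evidence):

```python
# Sketch: convergent and discriminant validity read off an
# inter-item correlation matrix (all data simulated).
import numpy as np

rng = np.random.default_rng(3)
n = 300
construct_a = rng.normal(size=n)
construct_b = rng.normal(size=n)

# Three noisy items per construct
a_items = np.column_stack([construct_a + 0.5 * rng.normal(size=n) for _ in range(3)])
b_items = np.column_stack([construct_b + 0.5 * rng.normal(size=n) for _ in range(3)])

corr = np.corrcoef(np.hstack([a_items, b_items]), rowvar=False)
within_a = corr[:3, :3][np.triu_indices(3, k=1)].mean()  # convergent evidence
across = corr[:3, 3:].mean()                             # discriminant evidence

print(f"Mean within-construct r: {within_a:.2f} (should be high)")
print(f"Mean across-construct r: {across:.2f} (should be low)")
```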
Predictive Validity
- the degree to which a measure successfully predicts a future outcome that it is theoretically expected to predict.
Concurrent Validity
- examines how well a measure relates to another concrete criterion that is presumed to occur simultaneously.
Theory of Measurement
Classical Test Theory (true score theory)
- a psychometric theory that examines how measurement works, what it measures, and what it does not measure.
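The core of classical test theory (a standard result, not stated above) is the decomposition of every observed score into a true score plus error, X = T + E, with reliability defined as the ratio Var(T)/Var(X). A simulation sketch with made-up numbers:

```python
# Sketch of classical test theory's core decomposition:
# observed score X = true score T + error E,
# reliability = Var(T) / Var(X).
import numpy as np

rng = np.random.default_rng(1)
true_scores = rng.normal(50, 10, size=10_000)   # T
error = rng.normal(0, 5, size=10_000)           # E, random measurement error
observed = true_scores + error                  # X = T + E

reliability = true_scores.var() / observed.var()
print(f"Reliability (Var(T)/Var(X)): {reliability:.2f}")  # ~ 100/125 = 0.80
```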
Measurement Errors
Random Error
- the error that can be attributed to a set of unknown and uncontrollable external factors that randomly influence some observations but not others. By increasing variability in observations, random error reduces the reliability of measurement.
Systematic Error
- error introduced by factors that systematically affect all observations of a construct across an entire sample. By shifting the central tendency measure, systematic error reduces the validity of measurement.
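The contrast between the two error types can be shown directly: in the simulation below (all values invented), random error inflates the standard deviation while leaving the mean roughly intact, and a constant systematic error shifts the mean while leaving the spread unchanged:

```python
# Simulation contrasting the two error types: random error inflates
# variability (hurting reliability), systematic error shifts every
# observation by a constant (hurting validity).
import numpy as np

rng = np.random.default_rng(2)
true_scores = rng.normal(50, 10, size=10_000)

noisy = true_scores + rng.normal(0, 8, size=10_000)  # random error
biased = true_scores + 5                             # systematic error

print(f"True:       mean={true_scores.mean():.1f}, sd={true_scores.std():.1f}")
print(f"Random err: mean={noisy.mean():.1f}, sd={noisy.std():.1f}")   # sd grows
print(f"Systematic: mean={biased.mean():.1f}, sd={biased.std():.1f}") # mean shifts
```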
A measure can be reliable but not valid, and a measure can be valid but not reliable.