Chapter 7- Scale Reliability & Validity
Social science constructs are harder to measure: they are abstract rather than directly observable, and they are often multi-dimensional.
Two things to consider when using scales to measure constructs: (1) whether the scales indeed measure the unobservable construct we want to measure, so that they are "valid", and (2) whether they measure the intended construct consistently and precisely, so that they are "reliable".
Important to remember that a measure can be reliable but not valid, and likewise a measure can be valid but not reliable; good research requires measures that are both reliable and valid.
Reliability is the degree to which the measure of a construct is consistent or dependable. Note that reliability implies consistency but not accuracy.
Split-half reliability is "a measure of consistency between two halves of a construct measure." For instance, if you have a ten-item measure of a given construct, randomly split those ten items into two sets of five (equal halves, not odd-sized sets) and examine the consistency between scores on the two halves. (Bhattacherjee, 2012)
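The split described above can be sketched in code. This is a minimal illustration, not from the chapter: the function name is mine, the data in the usage example is made up, and the Spearman-Brown step (correcting the half-test correlation up to full test length) is the conventional companion to a split-half estimate.

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def split_half_reliability(responses, seed=0):
    """responses: one list of item scores per respondent.
    Randomly split the items into two halves, sum each half per
    respondent, correlate the two half-scores, then apply the
    Spearman-Brown correction: r_full = 2r / (1 + r)."""
    n_items = len(responses[0])
    items = list(range(n_items))
    random.Random(seed).shuffle(items)
    half1, half2 = items[: n_items // 2], items[n_items // 2 :]
    s1 = [sum(r[i] for i in half1) for r in responses]
    s2 = [sum(r[i] for i in half2) for r in responses]
    r = pearson(s1, s2)
    return 2 * r / (1 + r)  # Spearman-Brown corrected estimate
```

For perfectly consistent respondents (every item answered identically), the two half-scores correlate perfectly and the corrected estimate is 1.0.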
Validity "refers to the extent to which a measure adequately represents the underlying construct that it is supposed to measure." (Bhattacherjee, 2012)
Translational validity is a theoretical assessment of how well the idea of a theoretical construct is translated into, or represented by, its measure. It breaks down into two subtypes: face validity and content validity.
Criterion-related validity comes in four subtypes: convergent, discriminant, concurrent, and predictive validity. These focus on how well a given measure relates to one or more external criteria. (Bhattacherjee, 2012)
Discriminant validity refers to the degree to which a measure does not measure (i.e., discriminates from) other constructs that it is not supposed to measure. (Bhattacherjee, 2012)
Predictive validity is the degree to which a measure successfully predicts a future outcome that it is theoretically expected to predict. (Bhattacherjee, 2012)
Convergent validity refers to the closeness with which a measure relates to the construct that it is purported to measure.
Concurrent validity examines how well one measure relates to another concrete criterion that is presumed to occur simultaneously.
Content validity is an assessment of how well a set of scale items matches the relevant content domain of the construct that it is trying to measure. (Bhattacherjee, 2012)
Face validity refers to whether an indicator seems to be a reasonable measure of its underlying construct. (Bhattacherjee, 2012)
Test-retest reliability is "a measure of consistency between two measurements (tests) of the same construct administered to the same sample at two different points in time" (Bhattacherjee, 2012).
Internal consistency reliability is "a measure of consistency between different items of the same construct." (Bhattacherjee, 2012)
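The most common internal consistency statistic is Cronbach's alpha, which compares the variance of individual items to the variance of the total score. A minimal sketch (alpha itself is standard psychometrics, but the function name and the choice of population variance are my assumptions):

```python
import statistics

def cronbach_alpha(responses):
    """responses: one list of k item scores per respondent.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(responses[0])
    items = list(zip(*responses))  # transpose: one tuple of scores per item
    item_vars = [statistics.pvariance(item) for item in items]
    totals = [sum(r) for r in responses]
    return k / (k - 1) * (1 - sum(item_vars) / statistics.pvariance(totals))
```

When items move together perfectly across respondents, alpha reaches 1.0; uncorrelated items drive it toward 0.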
Inter-rater reliability is a measure of consistency between two or more independent raters (observers) of the same construct. Think of a set of categories: the observers independently assign each observation to a category, and we assess how well their assignments agree. (Bhattacherjee, 2012)
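For two raters sorting observations into categories, agreement is often quantified with Cohen's kappa, which discounts the agreement expected by chance. A minimal sketch under that assumption (Cohen's kappa is a standard choice here, but it is not named in the chapter, and the function name is mine):

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa for two raters assigning categories to the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Chance agreement from each rater's marginal category frequencies.
    expected = sum(c1[c] * c2[c] for c in set(c1) | set(c2)) / (n * n)
    return (observed - expected) / (1 - expected)
```

Perfect agreement gives kappa = 1; agreement no better than chance gives kappa = 0.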
Classical test theory, or true score theory, is a psychometric theory that examines how measurement works, what it measures, and what it does not measure. It models an observed score as the sum of a true score and error: X (observed score) = T (true score) + E (error).
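The X = T + E model can be illustrated with a small simulation. Under the standard assumption that true scores and errors are independent, reliability equals var(T) / var(X); the parameter values and function name below are illustrative choices of mine, not from the chapter.

```python
import random
import statistics

def simulate_ctt(n=10_000, true_sd=1.0, error_sd=0.5, seed=42):
    """Simulate observed scores X = T + E under classical test theory
    and estimate reliability as var(T) / var(X). With independent T and E,
    var(X) = var(T) + var(E), so the theoretical value here is
    1.0 / (1.0 + 0.5**2) = 0.8."""
    rng = random.Random(seed)
    T = [rng.gauss(0, true_sd) for _ in range(n)]   # true scores
    E = [rng.gauss(0, error_sd) for _ in range(n)]  # random error
    X = [t + e for t, e in zip(T, E)]               # observed scores
    return statistics.pvariance(T) / statistics.pvariance(X)
```

With a large sample, the simulated ratio lands close to the theoretical reliability of 0.8, showing how random error dilutes the observed-score variance.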
Two forms of error can occur: random error and systematic error.
Systematic error is introduced by factors that affect all observations of a construct across an entire sample in a systematic manner.
Random Error can be attributed to a set of unknown or uncontrollable external factors that randomly influence some observations but not others. (Bhattacherjee, 2012)