Introduction to psychological assessment
Factor Analysis (a data-reduction technique that aims to identify underlying dimensions, or factors, which are linear combinations of the original variables)
Exploratory Factor Analysis (EFA) - is often used to assess measurement equivalence and involves conducting separate factor analyses in each group and comparing the resultant factor solutions for evidence of equivalence
Can be easily conducted in most statistical software applications
EFA techniques have certain shortcomings as a method to establish equivalence, such as low power to detect small differences across cultures, especially when using long tests
Makes use of correlation coefficients
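A minimal sketch of this group-wise EFA comparison in Python, assuming a pandas DataFrame df with the item columns and a "group" column (all names are illustrative): fit a factor analysis separately per group and compare the loading matrices, for example with Tucker's congruence coefficient.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

def group_loadings(df: pd.DataFrame, items: list, n_factors: int) -> dict:
    """Fit an EFA separately in each group and return item-by-factor loading matrices."""
    loadings = {}
    for name, sub in df.groupby("group"):
        fa = FactorAnalysis(n_components=n_factors, random_state=0)
        fa.fit(sub[items])
        loadings[name] = fa.components_.T  # transpose to items x factors
    return loadings

def tucker_phi(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Tucker's congruence coefficient, computed factor by factor."""
    num = (a * b).sum(axis=0)
    den = np.sqrt((a ** 2).sum(axis=0) * (b ** 2).sum(axis=0))
    return num / den
```

Congruence coefficients of roughly .95 or higher are commonly taken as evidence that factors are similar across groups; in practice the group solutions are usually rotated to a common target (e.g. Procrustes rotation) before comparison.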
Confirmatory factor analysis (CFA) - a way of testing how well measured variables, typically test item scores, represent a smaller number of underlying constructs
Is used to provide a strong confirmatory test of our measurement theory
Is more flexible than EFA and can address various levels of equivalence
Normally uses covariances as the basis of analysis, rather than correlation coefficients
Is used to identify group membership as a possible contaminant, that is, an unwanted source of variance in test scores
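As an illustration only, a single-group CFA of this kind could be specified in Python with the third-party semopy package, which accepts lavaan-style model syntax; the construct and item names below are assumptions, and "data" stands for a pandas DataFrame of item scores.

```python
import semopy

# One latent construct measured by four observed items (lavaan-style syntax).
model_desc = "anxiety =~ item1 + item2 + item3 + item4"

model = semopy.Model(model_desc)
model.fit(data)           # estimates loadings, intercepts and (co)variances
print(model.inspect())    # table of parameter estimates
```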
Logic behind using CFA to study equivalence
In statistical analyses of test-score data, we use implicit or explicit measurement models to explain how test items relate to the underlying psychological characteristics that these items are supposed to measure
When using factor-analytic techniques to study measurement bias, these measurement models are represented as mathematical models that capture information about the relationships between items and the latent or underlying constructs
As such, bias analyses in respect of factor structures investigate systematic group-related differences in any of a number of important statistical parameter estimates
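In standard CFA notation (not taken from the source), the measurement model referred to above can be written as:

```latex
x_{ij} = \tau_j + \lambda_j \xi_i + \delta_{ij}
```

where x_ij is person i's score on item j, τ_j the item intercept, λ_j the factor loading, ξ_i the latent construct score, and δ_ij the residual; bias analyses then ask whether the intercepts, loadings, or residual variances differ systematically across groups.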
Levels of measurement equivalence
Functional equivalence (exists when the same constructs exist within each of the cultural groups)
Structural equivalence (Implies that the same indicators or items can be used to measure the theoretical constructs)
Metric equivalence (implies that equal intervals exist between the numeric values of the scale in all groups being compared)
Full-score equivalence (the highest form of equivalence; implies that the scale of the underlying dimension shares the same metric and the same origin between cultural groups)
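A common way to formalise the difference between the last two levels, assuming the group scales are linearly related:

```latex
\text{Metric equivalence: } X_A = X_B + c \quad (\text{same measurement unit, origins may differ}) \\
\text{Full-score equivalence: } X_A = X_B \quad (\text{same unit and same origin, so scores are directly comparable})
```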
Testing measurement equivalence
Configural invariance (the same pattern of factor loadings holds in each group; this is the weakest form of equivalence, as it provides only limited justification that scores may be equated across groups)
Metric invariance (a stronger form of equivalence that exists when the factor loadings are equal across groups)
Scalar invariance (tests for the equality of the observed variables' intercepts on the latent constructs)
Factor variance-covariance invariance (if a test measures more than one factor, it would be useful to also investigate whether the relationships between latent factors are the same across the groups being tested)
Error variance invariance (Implies that random and systematic errors of measurement are similar across groups. It is also called uniqueness invariance)
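The nested sequence of multi-group CFA constraints behind the levels above can be summarised as follows (standard notation, where Λ are the loadings, τ the intercepts, Φ the factor variance-covariance matrix, Θ the error variances, and g indexes the groups):

```latex
\begin{aligned}
\text{Configural: } & \text{same pattern of freely estimated loadings in every group} \\
\text{Metric: }     & \Lambda^{(g)} = \Lambda \\
\text{Scalar: }     & \Lambda^{(g)} = \Lambda,\ \tau^{(g)} = \tau \\
\text{Factor variance-covariance: } & \Phi^{(g)} = \Phi \\
\text{Error variance (uniqueness): } & \Theta^{(g)} = \Theta
\end{aligned}
```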
Prediction Bias and Equivalence
Prediction Bias (exists when there are systematic group-related differences in the way in which criterion scores are predicted for different subgroups)
Prediction equivalence (exists when the expected criterion score for two test-takers from different groups is the same when their predictor scores are the same)
Moderated multiple-regression (MMR) analyses are normally used to establish prediction bias (see the sketch at the end of this section)
In this approach, test-takers' scores on some outcome variable (e.g. academic performance) are regressed against a combination of their predictor scores (e.g. matric results), their group membership (e.g. gender), and a variable that combines these two (e.g. an interaction variable such as matric × gender)
The analysis results tell us whether it is possible to predict the criterion (e.g. performance) in the same way for persons from different groups. If the results suggest we cannot, this phenomenon is called differential prediction
Bias in prediction exists when the regression equations (y = a + bx) used to predict a criterion in two or more groups differ significantly in terms of their intercepts, their slopes, their standard errors of estimate, or any combination of these
We also take intercept bias and slope bias into consideration
Fairness must be taken into account when investigating prediction bias, in accordance with the Employment Equity Act (No 55 of 1998)
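A minimal sketch of the MMR test described above, using statsmodels; the variable names (performance, matric, gender) and the DataFrame df are illustrative assumptions:

```python
import statsmodels.formula.api as smf

# Criterion regressed on the predictor, group membership, and their interaction.
# 'matric * C(gender)' expands to matric + C(gender) + matric:C(gender),
# which is the usual MMR specification.
mmr = smf.ols("performance ~ matric * C(gender)", data=df).fit()
print(mmr.summary())

# A significant interaction term points to slope bias (the predictor-criterion
# slope differs across groups); a significant group main effect with equal
# slopes points to intercept bias.
```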