FACTOR ANALYSIS
RM2 LECTURE 4
What is Factor Analysis good for?
To explore patterns of correlation among measured variables
To see whether different measures are aspects of a common dimension.
To get a feel for what those factors might be
To see how many separate factors are present in a complex data set.
What is Factor Analysis NOT?
It is not a direct way of modelling data like regression. (Although output from Factor Analysis can be used to create models)
It does not tell you what the factors mean, it just shows you which measures are 'grouped together' in some way.
Correlation Matrix
The aim is to reduce the R-matrix (the correlation matrix of all variables) to a smaller set of uncorrelated dimensions.
Factor Loading
Orthogonal Factors
= people can vary along them independently
Factor loadings
represent the weight of a variable on a factor. They are stored in a factor matrix, which tells us how much each question on a questionnaire contributes to each factor.
Initial Considerations
Check that variables correlate reasonably well with each other (r > .3)
Avoid Multicollinearity
(you don't want questions that are highly correlated and are essentially asking the same thing)
Avoid Singularity
(where some variables are perfectly correlated, r = 1)
Screen the R-Matrix and eliminate any variables that obviously cause concern.
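The screening step above can be sketched in Python with NumPy (the data here are made-up, purely to illustrate; the r > .3 and multicollinearity thresholds follow the guidelines in these notes):

```python
import numpy as np

# Toy questionnaire data: 100 respondents x 4 items (illustrative only).
rng = np.random.default_rng(42)
latent = rng.normal(size=(100, 1))                # one shared trait
items = latent + 0.8 * rng.normal(size=(100, 4))  # items = trait + noise
items[:, 3] = items[:, 2]                         # item 4 duplicates item 3 -> singularity

R = np.corrcoef(items, rowvar=False)              # the R-matrix

# Screen the off-diagonal correlations only.
off = R[~np.eye(4, dtype=bool)]
low = np.abs(off) < 0.3    # items that barely correlate with anything
high = np.abs(off) > 0.9   # multicollinearity / singularity suspects

print("correlations below .3:", int(low.sum()))
print("worryingly high correlations present:", bool(high.any()))
```

Items flagged in `high` (here the duplicated pair, which correlates at r = 1) are the ones to eliminate before factoring.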
Solutions
Determinant
- indicator of multicollinearity, should be
greater than 0.00001
Kaiser-Meyer-Olkin (KMO)
- Measures sampling adequacy - do you have enough people in your sample? Should be
greater than 0.5
Bartlett's Test of Sphericity
- Tests whether the R-matrix is an identity matrix, i.e. that all the correlations are zero. Should be
significant at p < .05
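Two of these checks can be sketched with NumPy (toy data again; the Bartlett statistic uses the standard formula χ² = −(n − 1 − (2p + 5)/6)·ln|R| with p(p − 1)/2 degrees of freedom, compared against a chi-square table — the KMO computation is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
latent = rng.normal(size=(n, 1))
data = latent + rng.normal(size=(n, p))   # five items sharing one trait
R = np.corrcoef(data, rowvar=False)

# Determinant: indicator of multicollinearity, should exceed 0.00001.
det_R = np.linalg.det(R)
print("determinant:", det_R, "->", "OK" if det_R > 0.00001 else "multicollinearity!")

# Bartlett's test of sphericity: chi-square statistic and degrees of freedom.
chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(det_R)
df = p * (p - 1) / 2
print("Bartlett chi2 =", round(chi2, 1), "df =", df)
```

A large chi-square relative to its degrees of freedom gives the significant (p < .05) result required before proceeding.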
Principal Component Analysis (PCA)
Takes a cloud of data points and finds the principal axes of that cloud.
These axes are at right angles to each other.
Principal components are sometimes called "eigenvectors", each with an associated "eigenvalue".
Eigenvectors
A way of reducing complex, multidimensional data sets to a manageable set of 'components'
Eigenvalues
An eigenvalue indicates how much variance its factor/eigenvector accounts for; factors with small eigenvalues explain little and are ignored.
Factor Extraction
Kaiser's Extraction
- Retain factors with
eigenvalues > 1
(ignore if they go below 1)
Scree Plot
- Cattell (1966): use the 'point of inflexion', which indicates the
last eigenvalue that you should consider
when extracting factors. Factors on the steep part of the plot are meaningful contributors; those on the flat 'scree' are not.
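The extraction step can be sketched as an eigen-decomposition of the R-matrix with NumPy, applying Kaiser's eigenvalue > 1 rule (the simulated two-factor data are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 300, 6
f1 = rng.normal(size=(n, 1))
f2 = rng.normal(size=(n, 1))
# First three items load on factor 1, last three on factor 2.
data = np.hstack([f1 + rng.normal(size=(n, 3)),
                  f2 + rng.normal(size=(n, 3))])
R = np.corrcoef(data, rowvar=False)

# PCA: eigen-decomposition of the correlation matrix, largest eigenvalue first.
eigenvalues, eigenvectors = np.linalg.eigh(R)
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Kaiser's rule: retain components with eigenvalue > 1.
n_retained = int(np.sum(eigenvalues > 1))
print("eigenvalues:", np.round(eigenvalues, 2))
print("factors retained:", n_retained)

# Loadings of each item on the retained components.
loadings = eigenvectors[:, :n_retained] * np.sqrt(eigenvalues[:n_retained])
```

Plotting `eigenvalues` against component number gives the scree plot; the point of inflexion here falls after the second component, agreeing with Kaiser's rule.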
Rotation
Factor rotation
- maximising the loading of a variable on one factor while minimising its loading on all other factors.
Orthogonal Rotation
- factors are uncorrelated.
E.g.
VARIMAX
Oblique Rotation -
Factors are allowed to intercorrelate.
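Varimax can be sketched as below — a standard SVD-based implementation of Kaiser's iterative algorithm (the loading matrix `A` is a made-up example, not real questionnaire output):

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-8):
    """Orthogonal (varimax) rotation of a (items x factors) loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        # Kaiser's criterion: maximise the variance of squared loadings.
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - (gamma / p) * rotated @ np.diag(np.sum(rotated ** 2, axis=0)))
        )
        rotation = u @ vt
        if s.sum() < var * (1 + tol):
            break
        var = s.sum()
    return loadings @ rotation

# Unrotated loadings: every item loads on both factors at once...
A = np.array([[0.7,  0.7],
              [0.7,  0.6],
              [0.6, -0.7],
              [0.7, -0.6]])
B = varimax(A)
# ...after rotation each item should load mainly on one factor,
# while each item's communality (sum of squared loadings) is unchanged.
```

Because the rotation matrix is orthogonal, communalities are preserved; only the distribution of loading across factors changes.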
Reliability
Split-Half method
- splits the questionnaire into two random halves, calculates scores and correlates them.
Cronbach's Alpha
- splits the questionnaire into all possible halves, calculates the scores, correlates them and averages the correlation for all splits. Ranges from 0 (no reliability) to 1 (completely reliable)
Interpreting Cronbach's Alpha
Reliable if
a > .7
(Kline, 1999)
α depends on the number of items: more questions tend to produce a bigger α
Treat sub-scales separately.
Remember to reverse score reverse phrased items!
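Cronbach's alpha is usually computed not by literally averaging all split-half correlations but with the equivalent variance formula α = k/(k − 1) · (1 − Σ item variances / variance of the total score). A sketch with NumPy, on made-up scale data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha from an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy scale: 5 items, each driven by one trait plus noise (illustrative only).
rng = np.random.default_rng(7)
trait = rng.normal(size=(500, 1))
scores = trait + 0.7 * rng.normal(size=(500, 5))
alpha = cronbach_alpha(scores)
print("Cronbach's alpha:", round(alpha, 3))
```

With strongly inter-correlated items like these, α comfortably exceeds Kline's .7 criterion; remember that reverse-phrased items must be reverse-scored first, or they will drag α down.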
Test-retest methods
- correlate scores from two testing sessions; results can be distorted by practice effects and by changes in mood state between sessions.