Module 4: Assessing Research Claims
INDEPENDENT VARIABLE (IV): manipulated by the researcher (participants are randomly assigned to different levels) in an experiment. Only for causal research.
CONSTANT: remains stable across all participants (e.g. MVA victims as an inclusion criterion).
RANDOM ASSIGNMENT: each participant has an equal chance of being assigned to any level of the IV = equivalent groups. Groups are presumed equivalent at the beginning of the experiment; any difference afterward is caused by the IV.
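The equivalence logic above can be sketched in a few lines. This is an illustrative simulation, not any study's actual procedure: the trait scores and group sizes are made up for the example.

```python
import random
import statistics

random.seed(42)

# Hypothetical participant pool with a pre-existing trait score
# (e.g. baseline anxiety); the numbers are illustrative only.
participants = [random.gauss(50, 10) for _ in range(200)]

# Random assignment: shuffle the pool, then split into the two IV levels.
random.shuffle(participants)
group_a, group_b = participants[:100], participants[100:]

# With a reasonable N, the groups start out near-equivalent on the trait,
# so any post-treatment difference can be attributed to the IV.
print(statistics.mean(group_a), statistics.mean(group_b))
```

The two printed means land close together, which is the "presume groups are equivalent at the start" assumption in action.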
TYPES OF VARIABLES:
EXTRANEOUS VARIABLE: undesirable; influences the relationship between the variables of interest. Affects the outcome of the experiment by adding variance to the data (harder to find a significant effect, so it increases the likelihood of a false negative). Control by R.A. (becomes a CV if no RA)!
CONFOUNDING VARIABLE: undesirable; covaries with the levels of the IV, so you can't tell whether the change is due to the variable of interest or the confound. Impacts whether or not you can draw a causal claim. Happens when there is NO R.A.
PARTICIPANT VARIABLE: what you're interested in that is inherent in a participant. CON: groups start out different and there is no random assignment. Can only measure, not manipulate = only correlational claims, not causal.
Ex: Gender, SES, Weight, etc.
3 TYPES OF CLAIMS
ASSOCIATION CLAIM: argues that 2+ variables are correlated in some way; not necessarily causal (i.e. possibly a spurious correlation). Lets you predict where someone sits on one variable based on another. Stronger correlation = more accurate prediction (but some prediction error is inevitable).
Ex: Sleep deprivation tied to low GPA in post-secondary students.
CAUSAL CLAIM: when 2+ variables are correlated such that Variable X causes the associated change in Variable Y (positive, negative, or curvilinear). Requires three criteria:
1. Covariance:
The variables are significantly correlated.
2. Temporal Precedence:
The causal variable came first; the outcome came later.
3. Internal Validity:
No alternative explanations exist for the relationship. Achieved through experimental controls and random assignment across all groups.
FREQUENCY CLAIM: the rate/level of a particular variable being measured (not manipulated). No inferences or causal conclusions.
Ex: 45% of retired Canadians live in a long-term care facility.
4 TYPES OF VALIDITY
CONSTRUCT VALIDITY: does the operationalization of an abstract construct actually capture that construct? The difference between inference at the operational vs. conceptual level.
STATISTICAL VALIDITY: did the researcher draw proper conclusions from the data?
a.) Proper treatment of data
(used the right measurement scale and statistics, tested for variance, was N big enough?)
b.) Soundness of conclusions
(find what they expected?)
TYPE I ERROR = false alarm (concluded something happened, but it didn't). Study wasn't set up properly (construct validity, sampling, etc.).
TYPE II ERROR = failed to find something that you should have. Not enough statistical power; N not high enough >> run a power analysis.
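A power analysis answers "how big does N need to be?". Below is a minimal sketch using the textbook normal approximation for a two-sided, two-sample test; it is not any statistics package's API, and the effect size and sample sizes are illustrative.

```python
import math

def normal_cdf(x):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_sample(d, n_per_group, z_crit=1.959964):
    """Approximate power of a two-sided two-sample z-test at alpha = .05.

    d is Cohen's d (standardized effect size); groups are equal-sized.
    Power = P(reject H0 | the effect is real) = 1 - P(Type II error).
    """
    noncentrality = d * math.sqrt(n_per_group / 2)
    return 1 - normal_cdf(z_crit - noncentrality)

# A medium effect (d = .5) is underpowered at n = 20 per group,
# but reaches roughly the conventional 80% power near n = 64 per group.
print(power_two_sample(0.5, 20))
print(power_two_sample(0.5, 64))
```

Low power means a real effect will often be missed (a Type II error), which is why an underpowered null result is weak evidence of no effect.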
EXTERNAL VALIDITY: generalizability of results to the real world.
c.) ECOLOGICAL VALIDITY:
The experience of participants in the lab maps onto the real world.
Mundane Realism vs. Psychological Realism:
(Aronson: participants captured by the experiment will behave more naturally - Milgram experiment, Stanford Prison Experiment - wanting to perform) > achieved via a cover story or reward.
a.) REPLICATION:
With different populations, tasks, and settings.
b.) CONVERGING EVIDENCE:
Different sources converging on the same answer.
(1. Does it happen in the real world? [CV] 2. Does something that affects Behaviour A also affect Behaviour B? [DV])
INTERNAL VALIDITY: ruled out all alternative explanations for the findings (IV manipulation, random assignment, counterbalancing, controls). Needed in order to make a causal claim.
ELLIOT ET AL:
Tested ideas across different populations (US, Germany, university + high school), in cultures where red = danger. Used control colours.
Is maze running heritable? A maze with 17 forks (which must be learned), run by "Bright" and "Dull" rats selectively bred for 21 generations (starting from Gen 0). Measured the average # of errors made by each rat generation. CONCLUSION: heredity is responsible for intelligence.
Construct validity (is it really intelligence?). Statistical validity (are there just a few really bright outlier rats? By Gen 7, almost every single Bright rat performed better than the Dull rats).
VARIABLE: something that varies across at least 2 levels (e.g. cell phone use vs. no use).
OPERATIONALIZATION: concretizing an abstract concept (e.g. levels of depression). Standardizes the measure, thereby allowing for replication.