PSYU3330 - Measurement & Research Applications in Psychology
Week 1
Science & The Scientific Method
Definition: Seeking to understand the world as objectively as possible (complete objectivity is unattainable).
Key Features
Testable hypotheses
Definitive experiments
Debate & falsifiability
Error detection (empirical & logical)
Scientific: Empirical observation, logical reasoning
Non-Scientific
Tenacity (belief perseverance)
Intuition (gut feeling)
Authority (trusting experts)
Rational Method (logic without observation)
Empiricism (knowledge through experience)
The Research Process
Establish Research Question
Review Literature & Form Hypothesis
Define & Operationalize Variables
Identify Participants
Select Research Strategy
Select Research Design
Measure Data
Analyze Data
Evaluate/Update Theory
Measurement & Errors
Why is Measurement Important?
Research is only valid if variables are measurable.
Complex constructs (e.g., intelligence, self-esteem) require carefully designed instruments.
Types of Measurement Errors
Random Error: Unpredictable fluctuations, affects precision, leads to Type II errors (false negatives).
Systematic Error: Consistent bias, affects accuracy, leads to Type I errors (false positives).
Precision vs. Accuracy
Precision = Consistency of repeated measurements.
Accuracy = Closeness to the true value.
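A minimal sketch of the distinction above, assuming NumPy is available; the true value, error sizes, and seed are invented for illustration. Random error scatters measurements around the true value (accurate on average, but imprecise), while systematic error shifts them consistently away from it (precise, but inaccurate).
    import numpy as np

    rng = np.random.default_rng(0)
    true_value = 100.0   # hypothetical true score
    n = 1000             # repeated measurements

    # Random error only: unbiased but noisy -> accurate on average, less precise
    random_only = true_value + rng.normal(loc=0, scale=5, size=n)

    # Systematic error only: consistent +8 bias -> precise but inaccurate
    systematic_only = true_value + 8.0 + rng.normal(loc=0, scale=0.5, size=n)

    print(f"Random error only:     mean={random_only.mean():.1f}, sd={random_only.std():.1f}")
    print(f"Systematic error only: mean={systematic_only.mean():.1f}, sd={systematic_only.std():.1f}")
    # Random error: mean near 100 (accurate) but larger sd (imprecise)
    # Systematic error: small sd (precise) but mean near 108 (inaccurate)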
Correlation vs. Causation
Correlational Research
Identifies relationships but does not establish cause-effect.
Problems
Directionality problem: Which variable affects the other?
Third variable problem: An unknown factor might be influencing both variables.
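A small simulation of the third-variable problem, assuming NumPy; the coefficients and sample size are made up. X and Y are each driven by an unmeasured Z and never influence one another, yet they still correlate.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000

    z = rng.normal(size=n)                 # unmeasured third variable
    x = 0.7 * z + rng.normal(size=n)       # X depends on Z only
    y = 0.7 * z + rng.normal(size=n)       # Y depends on Z only (not on X)

    r_xy = np.corrcoef(x, y)[0, 1]
    print(f"Correlation between X and Y: {r_xy:.2f}")   # clearly nonzero (~0.3)
    # X and Y correlate even though neither causes the other.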
Experimental Research
IV = manipulated, DV = measured.
Control & experimental conditions to determine causality.
Must control for confounding variables to avoid misleading conclusions.
Week 2
Psychological Assessment (Why we test)
Purpose: Standardized assessment of behaviour, attitudes and abilities
Key goals: Reduce bias, ensure fairness, and enable comparisons
Core Concept: Psychological constructs are inferred from measurable behaviour
Links to: Score normalization (making raw scores meaningful)
Score Normalization
Raw score: Direct test score (not meaningful alone)
Derived score: Transformed for meaningful interpretation
Key transformations
Percentile Rank:
% of people scoring below a given score (non-linear)
Standard Scores (linear):
z-scores allow comparisons across tests
Common Transformations (z-based); worked sketch below
T-score: (z × 10) + 50
IQ score: (z × 15) + 100
Stanine/Sten Scores: Coarse standard-score bands (standard nine = 9 bands; standard ten = 10 bands)
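A minimal sketch of the transformations above, assuming NumPy; the norm group and raw score are invented. It computes a z-score against a norm sample, then derives a T-score, an IQ-style score, and a percentile rank.
    import numpy as np

    rng = np.random.default_rng(2)
    norm_group = rng.normal(loc=50, scale=10, size=1000)   # hypothetical norm sample
    raw = 62.0                                             # one person's raw score

    # Standard (z) score: linear transformation of the raw score
    z = (raw - norm_group.mean()) / norm_group.std()

    t_score = z * 10 + 50     # T-score:  mean 50, SD 10
    iq_score = z * 15 + 100   # IQ-style: mean 100, SD 15

    # Percentile rank: % of the norm group scoring below the raw score (non-linear)
    percentile_rank = (norm_group < raw).mean() * 100

    print(f"z = {z:.2f}, T = {t_score:.1f}, IQ = {iq_score:.1f}, PR = {percentile_rank:.0f}")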
Links to: Reliability
Reliability
Formula
Reliability coefficient (rxx) = True Score Variance / Observed Score Variance
High rxx = more true-score variance, less error variance
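Worked example (invented numbers): if true score variance = 80 and observed score variance = 100, then rxx = 80 / 100 = .80, i.e. 80% of observed score variance reflects true differences and 20% is error.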
Types of reliability
Test-Retest -> Same test given twice. Error source: Time sampling
Alternate Forms -> Two equivalent tests. Error source: Content sampling
Internal Consistency
Split-half: Splitting test into two halves
Cronbach's Alpha: Checks item correlations
Inter-rater reliability -> Agreement between raters. Error source: Rater differences
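A minimal sketch of the two internal-consistency estimates above, assuming NumPy; the item responses are simulated from a shared ability plus item-specific noise. It computes a split-half correlation and Cronbach's alpha from an item-by-person matrix.
    import numpy as np

    rng = np.random.default_rng(3)
    n_people, n_items = 200, 10

    # Simulated item responses: shared 'true' ability plus item-specific noise
    ability = rng.normal(size=(n_people, 1))
    items = ability + rng.normal(scale=1.0, size=(n_people, n_items))

    # Split-half: correlate totals from odd-numbered vs even-numbered items
    odd_total = items[:, ::2].sum(axis=1)
    even_total = items[:, 1::2].sum(axis=1)
    split_half_r = np.corrcoef(odd_total, even_total)[0, 1]

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)
    k = n_items
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    print(f"Split-half r = {split_half_r:.2f}, Cronbach's alpha = {alpha:.2f}")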
Error Variance (Understanding Inconsistencies in Scores)
Observed Score = True Score + Error
Sources of error
Time Sampling: External factors (test-retest)
Content sampling: Different questions (alternate forms, split-half)
Rater differences: Subjectivity (inter-rater reliability)
Content Heterogeneity: Construct too broad (Cronbach's alpha)
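A small simulation of Observed Score = True Score + Error, assuming NumPy; the variances and seed are invented. Two testings share the same true scores but receive independent error, and the resulting test-retest correlation comes out close to true-score variance / observed-score variance.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 10_000

    true = rng.normal(loc=100, scale=np.sqrt(80), size=n)   # true-score variance ~ 80
    err1 = rng.normal(scale=np.sqrt(20), size=n)             # error variance ~ 20
    err2 = rng.normal(scale=np.sqrt(20), size=n)             # fresh error at retest

    time1 = true + err1   # Observed = True + Error (first testing)
    time2 = true + err2   # same true score, new error (retest)

    r_xx = np.corrcoef(time1, time2)[0, 1]
    print(f"Test-retest reliability = {r_xx:.2f}  (expected about 80 / (80 + 20) = 0.80)")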
Improving Reliability (reducing errors)
Increase test length: More items average out random error (see the sketch at the end of this list)
Use clear scoring criteria: Reduces rater subjectivity
Standardize test conditions: Minimizes time-related factors
Train raters: Improves inter-rater reliability
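A rough sketch of why longer tests help, assuming NumPy; item noise, sample size, and item counts are invented. Averaging more items cancels out more random error, so scores based on more items track the true score more closely.
    import numpy as np

    rng = np.random.default_rng(5)
    n_people = 2000
    true = rng.normal(size=n_people)

    def observed_score(true, n_items, rng):
        # Each item = true score + independent random error; the test score is the item mean
        noise = rng.normal(scale=1.5, size=(true.shape[0], n_items))
        return (true[:, None] + noise).mean(axis=1)

    for n_items in (5, 20, 80):
        score = observed_score(true, n_items, rng)
        r = np.corrcoef(score, true)[0, 1]
        print(f"{n_items:3d} items: correlation with true score = {r:.2f}")
    # More items -> random errors average out -> observed scores reflect true scores better.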