Midterm 1 Study Guide: Part 1
Types of Research Design
Experimental
Random assignment of participants to conditions and the manipulation of independent variables
Characterized by Two Factors:
Random Assignment
Participants are assigned to conditions such that each has an equal chance of being placed in any condition
Manipulation
Systematic control, variation, or application of independent variables to different groups of participants
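The random-assignment idea above can be sketched in code. This is a minimal illustration, not a standard library routine; the function name and the "training"/"control" condition labels are made up for the example.

```python
import random

def random_assignment(participants, conditions, seed=None):
    # Shuffle the pool, then deal round-robin: every participant has an
    # equal chance of landing in each condition, and group sizes stay balanced.
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

groups = random_assignment(range(20), ["training", "control"], seed=1)
```

With 20 participants and two conditions, each group gets 10 people, and which group any one person ends up in is determined by chance alone.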
Types of Experimental Methodology
Laboratory Experiments
Uses random assignment and manipulation of independent variables to ensure control and increase internal validity
Usually in a controlled, artificial environment
High internal validity: Controlled conditions allow researchers to rule out alternative explanations
Low external validity: Findings may not always apply to real-world settings, due to the artificial nature of the experiment
Researchers selected participants with real work experience to make the study more applicable to real-world scenarios in order to improve generalizability
Field Experiments
Conducted in real world settings to make the study more realistic
Still use random assignment and manipulation of variables to control for outside influences
Aim to balance realism and scientific control
Challenges: Hard to find real-world workplaces that allow random assignment and experimental manipulation
Companies don't want disruptions to their work, which makes field experiments rare in I/O psychology
Quasi-Experiments
Similar to field experiments but do not use random assignment
How They Work
Manipulate an independent variable (like training vs. no training)
Instead of random assignment, use pre-existing groups (e.g., departments or work teams)
Controlling Bias
Researchers check for pre-existing differences (like experience, gender, or performance) and adjust for them to keep the study fair
Observational Methods
Often referred to as correlational designs as they rely on correlational statistics for analysis
Also called descriptive as they only describe relationships between variables
Why Descriptive Research isn't an Experiment
No random assignment of participants, no manipulation of independent variables, and limited ability to draw causal conclusions; it can only determine whether a relationship exists
Importance of Descriptive Research
Helps identify patterns and relationships that can later be tested with experiments
Some findings are valuable even without causality
Like the Periodic Table in chemistry or workplace attitude surveys in I/O psychology
Useful for predicting behaviour, even if it cannot establish direct cause-and-effect
Why Observational Research Can't Infer Causality
Researchers do not control variables or assign conditions randomly
Reverse causation is possible (e.g., does job satisfaction improve performance, or does good performance increase satisfaction?)
Data Collection Techniques
Naturalistic Observation
Watching people in their natural environment to study their behaviour; used in both labs and real-world settings
The behaviour can be observed, counted, or measured
Types of Naturalistic Observation
Participant Observation
Used more in sociology/anthropology
The observer blends in with the participants
Unobtrusive Naturalistic Observation
More common in I/O psychology
The researcher stays out of the way and observes without interfering
Example: A consultant might give a leadership-style questionnaire and then observe staff meetings to see how different leaders interact.
Challenges of Unobtrusive Observation
Even when trying to be unnoticed, people may change their behavior if they know they are being watched.
This can affect the accuracy of the data collected.
Case Studies
In-depth study of one person, group, company, or society
Uses interviews, historical analysis, and research on writings or policies
Focuses on describing rather than proving cause and effect
Uses: Freud used case studies to analyze his clients
I/O Psychology: Used to study company structures or business leaders (e.g., a Fortune 500 CEO)
Consulting Firms: Use case studies to showcase their work and attract clients
Strengths: Provide detailed and rich information; help describe typical or exceptional individuals or companies
Limitations: Based on only one individual or organization; hard to apply findings to other people or businesses (limited external validity)
Archival Research
Definition: Using existing (secondary) data collected by individuals or organizations for general or specific purposes
Quality Dependence
The quality of research depends on the quality of its original data
Researchers cannot fix issues in a weak data set
Benefits: Saves time, access to high-quality data, more variables available, and includes both cross-sectional and longitudinal data
Main Concern: Lack of control over data quality
Surveys
A way to collect information by asking people questions
Researchers select a group of participants (sample) and give them a questionnaire
Why are they useful? Help gather data on attitudes and beliefs about workplaces, supervisors, coworkers, etc.
The most commonly used data collection method in I/O psychology.
Self-Administered Questionnaires
Surveys that people fill out on their own without a researcher present
Can be mailed, emailed or given in person
Where are they used? Research studies, workplaces
Why are they useful? Easy to distribute, can be given to large groups, anonymous answers encourage honesty
Downsides: Low response rates, no clarification (can't ask for help)
Interviews
A type of survey where the researcher asks questions orally, usually done face-to-face but can also be done over the phone
Benefits: Higher response rates, clearer answers
Common Uses in I/O Psych: Understanding employee attitudes or concerns, checking whether applicants are qualified, and assessing potential for a promotion
Challenge: Interviews can be time-consuming
Technological Advances in Survey Technology
Web-based and mobile surveys are replacing paper surveys
Benefits: Convenient for participants to complete on their own time; data collected is immediately available and easy to analyze
Research shows that well-designed web surveys can be just as good as paper ones; the two methods often yield similar measurement properties, such as reliability and accuracy.
New Survey Tools
SurveySignal: A mobile app that sends short survey links at random or fixed times to participants' phones.
Experience Sampling Methodology (ESM): A technique in which participants are contacted at specific times to answer questions about their feelings and behaviour.
Use of ESM: ESM helps researchers understand moment-to-moment attitudes and emotions.
Used in studies about job satisfaction, emotional regulation, and workplace behaviour.
Measurement
Assigning numbers to objects or events to represent their characteristics or qualities
I/O Psych: Measuring intelligence, attitudes, or satisfaction
Examples:
Cognitive Ability: A test that measures intelligence by asking questions
Job Satisfaction: A survey that measures how happy people are with their job or school
Goal: Use these numbers in statistical analysis to answer research questions
Challenges: Measuring attitudes and emotions is tricky; many sources of error, such as a bad mood or personal problems, can affect measurements
Why It's Harder: In psychology there are no perfect tools to measure constructs like creativity or motivation, making measurements less accurate
Importance: Crucial to develop accurate and reliable ways to measure things in I/O psych
Reliability
Consistency or stability of a measure
Why It's Important: Tests must be reliable; measurement error reduces a test's prediction accuracy
Test-Retest Reliability
Measures stability of a test over time
Measurement error causes scores to vary
Parallel Forms Reliability
Compares two equivalent forms of a test to see if they measure the same thing. Can be used for large classes to avoid cheating
Ensures the tests are measuring the same construct
Reliability Types and Concepts
Interrater Reliability
The consistency with which multiple raters rate the same behaviour or person
Important for performance appraisals
Measured by correlation between different raters' scores (similar to parallel forms)
Internal Consistency
Measures how well the items on a test relate to each other
Test items should measure the same thing
Split-Half Reliability: Split the test into two parts and compare scores
Inter-Item Reliability: Check how well items correlate with each other (e.g., Cronbach's alpha)
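Cronbach's alpha can be computed directly from its standard formula: alpha = k/(k-1) × (1 − sum of item variances / variance of total scores). Below is a minimal sketch; the function name and the three-item, five-respondent data set are hypothetical.

```python
import statistics

def cronbach_alpha(item_scores):
    # item_scores: one inner list per test item, aligned across respondents.
    k = len(item_scores)
    totals = [sum(vals) for vals in zip(*item_scores)]        # per-respondent total score
    sum_item_var = sum(statistics.pvariance(item) for item in item_scores)
    return k / (k - 1) * (1 - sum_item_var / statistics.pvariance(totals))

# Hypothetical ratings: three items answered by five respondents
items = [
    [2, 4, 3, 5, 1],
    [3, 4, 3, 5, 2],
    [2, 5, 4, 4, 1],
]
alpha = cronbach_alpha(items)
```

Because these example items rise and fall together across respondents, alpha comes out well above the 0.70 guideline mentioned below, indicating the items measure the same construct.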
Reliability Guidelines:
Researchers aim for a reliability of at least 0.70
Reliability vs Validity: Reliability is important, but we also focus on validity: how well a test measures what it's supposed to measure and how well it predicts outcomes
Validity of Tests, Measures, and Scales
Construct Validity: Measures the extent to which a test measures the underlying construct it was designed for (e.g., intelligence, motivation)
Types of Evidence:
Content Validity: Test should cover a representative sample of the material or quality being assessed
Criterion-Related Validity: Focuses on whether the test predicts attitudes, behaviour, or performance accurately
Types:
Predictive: How well a test predicts future performance or behaviour
Concurrent: How well a test predicts behaviour or performance at the same time it's taken (two things measured at once)
Convergent and Divergent Validity
Convergent: Test should show a strong relationship with other similar constructs
Divergent: Test should NOT show a strong relationship with dissimilar constructs (e.g., job satisfaction and organizational commitment)
Statistics
Descriptive: Summarizes and describes a data set, often in a single number (drawing conclusions about hypotheses is the job of inferential statistics)
Measures of Central Tendency: Mean, median, and mode
Measures of Dispersion (Variability): Range, variance, SD
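The central-tendency and dispersion measures above can all be computed with Python's standard library. The satisfaction ratings here are made up for illustration.

```python
import statistics

# Hypothetical job-satisfaction ratings on a 1-10 scale
scores = [3, 5, 5, 6, 7, 8, 8, 8, 10]

# Central tendency
mean = statistics.mean(scores)           # arithmetic average
median = statistics.median(scores)       # middle value when sorted
mode = statistics.mode(scores)           # most frequent value

# Dispersion
score_range = max(scores) - min(scores)  # highest minus lowest
variance = statistics.pvariance(scores)  # population variance
sd = statistics.pstdev(scores)           # population standard deviation
```

Note the choice between population (`pvariance`/`pstdev`) and sample (`variance`/`stdev`) versions; research data treated as a sample normally uses the latter.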
Shapes of Distributions:
Normal: Bell-shaped curve; most observations cluster around the mean, with fewer observations farther from the mean
About 68% of observations fall within 1 standard deviation of the mean
About 99.7% of observations fall within 3 standard deviations of the mean
Mean and median are the same value
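These percentages can be checked by simulation: draw many values from a standard normal distribution and count how many fall within k standard deviations of the mean. A quick sketch (the seed and sample size are arbitrary choices):

```python
import random

# Draw 100,000 values from a standard normal distribution (mean 0, SD 1)
rng = random.Random(42)
draws = [rng.gauss(0.0, 1.0) for _ in range(100_000)]

def share_within(k):
    # Proportion of draws within k standard deviations of the mean
    return sum(abs(x) <= k for x in draws) / len(draws)
```

With this many draws, `share_within(1)` lands near 0.68 and `share_within(3)` near 0.997, matching the percentages above.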
Correlation and Regression
Correlation: Measures the strength and direction of the relationship between two variables
Values range from -1.0 to +1.0; the sign indicates a negative or positive relationship
Positive Correlation: High scores on one variable go with high scores on the other (e.g., participation and job satisfaction)
Negative Correlation: High scores on one variable go with low scores on the other (e.g., job satisfaction and absenteeism)
Magnitude: The absolute value of a correlation ranges from 0 to 1, with larger values indicating stronger relationships; correlations of -.50 and +.50 have the same magnitude
Scatterplots: Show relationships; perfect correlations are rare in psychology
Regression: Used to predict one variable from another, based on their correlation
Coefficient of Determination (r²): Percentage of variance in the criterion variable accounted for by the predictor
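Correlation, r², and a simple regression line can be computed from first principles. A minimal sketch; the `pearson_r` helper and the participation/satisfaction numbers are hypothetical.

```python
import math

def pearson_r(x, y):
    # Strength and direction of the linear relationship between x and y
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: meeting participation (x) and job satisfaction (y)
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

r = pearson_r(x, y)       # positive: higher participation, higher satisfaction
r_squared = r ** 2        # share of variance in y accountedted for by x

# Least-squares regression line for predicting y from x
mx, my = sum(x) / len(x), sum(y) / len(y)
slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
intercept = my - slope * mx
```

For this data r² works out to 0.60, so participation accounts for 60% of the variance in satisfaction, and the line predicts satisfaction as intercept + slope × participation.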
Meta-Analysis: A statistical and methodological technique used for quantitative literature reviews.
Combines findings from many studies (e.g., 25, 100, or 1,000+) to estimate the true relationship between variables (e.g., job satisfaction and job performance) more accurately than any few individual studies could.
Requires a thorough literature review to ensure all relevant data is included for an accurate analysis.
Provides solid estimates of relationships between constructs.
Summarizes large bodies of research, improving reliability and generalizability.
Quality of the meta-analysis depends on the thoroughness of the review and data collection.
If not done carefully, the results may be inaccurate or flawed.
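At the core of combining study results is a weighted average of the observed correlations, with larger samples counting for more. A bare-bones sketch (real meta-analyses also correct for artifacts such as unreliability and range restriction, which this omits; the study data is hypothetical):

```python
def weighted_mean_r(studies):
    # studies: list of (sample size n, observed correlation r) pairs.
    # Sample-size weighting gives larger studies more influence on the estimate.
    total_n = sum(n for n, _ in studies)
    return sum(n * r for n, r in studies) / total_n

# Hypothetical correlations between job satisfaction and job performance
studies = [(50, 0.30), (200, 0.25), (120, 0.35)]
mean_r = weighted_mean_r(studies)
```

Here the pooled estimate sits closest to the 0.25 reported by the largest study (n = 200), illustrating why a meta-analytic estimate is more stable than any single small study.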