Quantitative
Research designs
Experimental methods
- Manipulate levels/conditions of the IV and measure the effect of manipulation on the DV to identify causal impact
Independent measures/Between-subjects - different groups of pps are used at each level/condition of the IV
- +tives: avoids order/carry-over effects (such as practice or fatigue)
- -tives: needs more participants, so more time-consuming; difficult to identify whether the condition is driving the effect (i.e. a pp in condition C may just be a bad dancer and not be affected by what we are manipulating); differences between pps may affect results (i.e. age, gender, social background)
Repeated measures/Within-subjects - each pp is tested at each level/condition of the IV
- +tives: same pps in each condition means individual differences are reduced, fewer pps needed so less time consuming
- -tives: problems with carry over effects (taking part in condition A may then influence condition B), issues with practice effects (practice/fatigue effect)
- To solve these negatives: counterbalance the order of conditions (e.g. some pps do condition B or C first)
+tives: the only type of research that allows causal relationships to be inferred; allows tight control of extraneous variables
-tives: often artificial, meaning low ecological validity; may not be generalisable to the real world; the experimenter may influence the results
Control groups
- Used to establish a cause-and-effect relationship by isolating the effect of an independent variable.
- Receives either no treatment, a standard treatment whose effect is already known, or a placebo
- Helps ensure the internal validity of your research.
- For successful control group: ensure all confounding variables are accounted for, use double-blinding, randomize subjects
Matched pairs - Each condition uses different participants, but they are matched in terms of important characteristics, e.g., gender, age, intelligence, etc.
- +tives: reduces individual differences because participants are matched, avoids order effects
- -tives: if 1 pp drops out you lose 2 pps' data, very time-consuming, impossible to match people exactly unless they are twins!
Other variables
- An extraneous variable is any variable that you are not intentionally studying
- A confounding variable is a third variable that influences both the independent and dependent variables. Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.
- Extraneous variables become confounding variables when they offer an alternative explanation for changes in scores on the dependent variable, reducing the internal validity of your results.
Quasi-experiments
- Involves the manipulation of an independent variable without the random assignment of participants to conditions or orders of conditions
- Identify different levels of the IV in the population and compare their performance
- PPs are not randomly assigned to conditions, but conditions are formed from already available groups/categories
- e.g. a group of psychotherapists have started using a new therapy with their patients whilst others have stuck with the old protocol; you can use these pre-existing groups to study the differences between the new therapy and the old protocol = no random assignment
+tives:
- Good for when random allocation would not be suitable/ethical (smoking - wouldn't randomly allocate people to smoke)
- Higher external validity than most true experiments, as they often involve real-world interventions instead of artificial laboratory settings
- Higher internal validity than non-experimental studies, as they allow confounding variables to be controlled better than other non-experimental designs
-tives:
- Other factors related to the 'natural' grouping may not be measured (smokers may drink more alcohol).
- Does not necessarily allow causal inferences like true experiments
- Other controls might make it less ecologically valid
- Lower internal validity than true experiments, can be difficult to ensure all confounding variables are controlled - randomisation may help
Correlational research design
- A correlational research design measures a relationship between two variables without the researcher controlling either of them. Aims to find either a positive (variables change in the same direction), negative (variables change in opposite directions) or no correlation
- Correlation does not imply causation
- Examples: case-control studies (a comparison between two groups, one of which experienced a condition while the other did not), observational studies
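A minimal sketch (in Python, with made-up hours_studied and exam_score values as the two hypothetical variables) of how such a relationship can be quantified with a Pearson correlation:

import numpy as np
from scipy import stats

# Two variables measured for the same participants, with no manipulation.
hours_studied = np.array([2, 4, 5, 7, 8, 10, 12])       # illustrative data
exam_score = np.array([50, 55, 60, 65, 72, 80, 85])     # illustrative data

r, p_value = stats.pearsonr(hours_studied, exam_score)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
# r > 0: variables change in the same direction; r < 0: opposite directions;
# r near 0: no linear correlation. A strong r still does not imply causation.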
Things to consider
Randomization
- when participants are assigned to a treatment group at random.
Counterbalancing (randomizing or reversing the order of treatments among subjects) is often used in repeated-measures design to ensure that the order of treatment application doesn’t influence the results of the experiment.
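A minimal sketch (in Python, with hypothetical participant IDs and conditions A/B/C) of random assignment for a between-subjects design and counterbalanced condition orders for a repeated-measures design:

import random
from itertools import permutations

participants = [f"P{i:02d}" for i in range(1, 13)]
conditions = ["A", "B", "C"]

# Randomization: shuffle participants, then deal them into the conditions.
random.shuffle(participants)
groups = {c: participants[i::len(conditions)] for i, c in enumerate(conditions)}
print(groups)

# Counterbalancing: rotate through every possible order of conditions so that
# order/practice effects are spread evenly across participants.
orders = list(permutations(conditions))
schedule = {p: orders[i % len(orders)] for i, p in enumerate(sorted(participants))}
print(schedule)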
Systematic variation
- Variation due to the experimenter doing something in one condition but not the other
Unsystematic variation
- Variation resulting from random factors that exist between the experimental conditions (natural differences in ability, time of day, etc.)
A double-blind study withholds each subject’s group assignment from both the participant and the researcher performing the experiment.
Meta-analysis
- A quantitative technique for synthesizing the results of multiple studies of a phenomenon into a single result by combining the effect size estimates from each study into a single estimate of the combined effect size or into a distribution of effect sizes.
Strengths:
- Provides organised approach for handling large numbers of studies
- The process is systematic and documented in great detail allowing for readers to evaluate the decisions/conclusions
- Allows researchers to examine an effect across a collection of studies more systematically than a qualitative (narrative) review
Weaknesses:
- Consumes a great deal of time and effort
- Have been criticised for aggregating studies that are too different
- Some argue that objective coding used ignores the context of each individual study
- Including low-quality studies impacts the mean effect size
Bias:
- Publication bias. The problem here is that "positive" studies are more likely to go to print.
- Search bias. The search for studies can produce unintentionally biased results. This includes using an incomplete set of keywords or varying strategies to search databases. Also, the search engine used can be a factor.
- Selection bias. Researchers must clearly define criteria for choosing from the long list of potential studies to be included in the meta-analysis to ensure unbiased results.
Main objectives:
- Evaluate effects in different subsets of participants
- Create new hypotheses to inspire future studies
- Overcome limitations of small sample sizes
- Establish statistical significance across studies
- When you take many studies into consideration at once, the statistical power (and hence the significance found) is much greater than with one study alone. This is important because it increases confidence in the validity of any observed differences and the reliability of the findings
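A minimal sketch (in Python, with made-up effect sizes and variances for three studies) of the simplest way per-study estimates can be combined, a fixed-effect inverse-variance weighted mean:

import math

effect_sizes = [0.42, 0.30, 0.55]   # illustrative per-study estimates (e.g. standardised mean differences)
variances = [0.04, 0.09, 0.02]      # illustrative sampling variances

# Each study is weighted by the inverse of its variance, so larger,
# more precise studies contribute more to the combined estimate.
weights = [1 / v for v in variances]
combined = sum(w * d for w, d in zip(weights, effect_sizes)) / sum(weights)
se = math.sqrt(1 / sum(weights))

print(f"Combined effect size = {combined:.2f}")
print(f"95% CI = [{combined - 1.96 * se:.2f}, {combined + 1.96 * se:.2f}]")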
Descriptive
- Aims to accurately and systematically describe a population, situation or phenomenon. It can answer what, where, when and how questions, but not why questions.
- Unlike in experimental research, the researcher does not control or manipulate any of the variables, but only observes and measures them.
- e.g. surveys, naturalistic observations, case studies
Cross-sectional
- A cross-sectional study is a type of research design in which you collect data from many different individuals at a single point in time. In cross-sectional research, you observe variables without influencing them.
- +tives: cheap and quick, captures a specific moment in time, collect data from a large pool of subjects and compare differences between groups
- -tives: difficult to establish cause-and-effect relationships, cannot analyse behaviour over a period of time or establish long-term trends
Longitudinal
- A longitudinal study (or longitudinal survey, or panel study) is a research design that involves repeated observations of the same variables (e.g., people) over short or long periods of time (i.e., uses longitudinal data).
- Retrospective study: you collect data on events that have already happened.
- Prospective study: you choose a group of subjects and follow them over time, collecting data in real time.
Graphs
Boxplots can show:
- Medians
- Quartiles
- Possible outliers
Bar charts can show:
- Means
- Medians
- Modes
- Range etc
- But only one group at a time!
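A minimal sketch (in Python with matplotlib, using made-up scores for two groups) of both chart types:

import matplotlib.pyplot as plt
import numpy as np

group_a = np.array([12, 14, 15, 15, 16, 18, 19, 30])   # 30 is a likely outlier
group_b = np.array([10, 11, 13, 14, 14, 15, 17, 18])

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

# Boxplot: shows medians, quartiles and possible outliers.
ax1.boxplot([group_a, group_b])
ax1.set_xticklabels(["Group A", "Group B"])
ax1.set_title("Boxplot")

# Bar chart: one summary statistic (here the mean) per bar.
ax2.bar(["Group A", "Group B"], [group_a.mean(), group_b.mean()])
ax2.set_title("Bar chart of means")

plt.tight_layout()
plt.show()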
Psychometrics
Validity
- The extent to which findings or conclusions of a study are actually measuring what they claim to be measuring.
External Validity
- Refers to the extent to which the results of a study can be generalized to other settings (ecological validity), other people (population validity) and over time (historical validity).
- Can be improved by setting experiments in a more natural setting and using random sampling to select participants.
- There is an inherent trade-off between external and internal validity; the more applicable you make your study to a broader context, the less you can control extraneous factors in your study.
Ecological Validity
- Refers to the extent to which the results and conclusions are generalisable to real life.
- Whether data is generalisable to the real world, based on the conditions research is conducted under and procedures involved.
Temporal Validity
- Refers to the extent to which the findings and conclusions of study are valid when we consider the differences and progressions that come with time.
- This is high when research findings successfully apply across time (certain variables in the past may no longer be relevant now or in the future).
Population Validity
- Refers to the extent to which the sample can be generalised to similar and wider populations.
- A type of external validity which describes how well the sample used can be generalised to a population as a whole
- Depends on the choice of population and on the extent to which the study sample mirrors that population.
Threats to External Validity
- Sampling bias: when the sample is not representative of the population
- Experimenter effect: characteristics/behaviours of the experimenter(s) unintentionally influence the outcomes
- Hawthorne effect: participants change their behaviour because they know they are being studied.
- Situation effect: Factors like the setting, time of day, location, researchers’ characteristics, etc. limit generalizability
How to counter threats to external validity:
- Replications counter almost all threats by enhancing generalizability to other settings, populations and conditions.
- Field experiments counter testing and situation effects by using natural contexts.
- Probability sampling counters selection bias by making sure everyone in a population has an equal chance of being selected for a study sample.
- Recalibration or reprocessing also counters selection bias using algorithms to correct weighting of factors (e.g., age) within study samples.
Internal Validity
- Refers to whether the effects observed in a study are due to the manipulation of the independent variable and not some other factor.
- Can be improved by controlling extraneous variables, using standardized instructions, counterbalancing, and eliminating demand characteristics and investigator effects.
- It is the extent to which you can be confident that the causal relationship established in your experiment cannot be explained by other factors
Content Validity
- refers to the extent to which a study or test measures up against all the elements of a construct
- is it measuring the appropriate content?
Construct Validity
- does the test relate to underlying theoretical constructs?
- refers to the extent to which a study or test measures the concept which it claims to
- Whether a measure successfully measures the concept it is supposed to (e.g. does a questionnaire measure IQ, or something related but crucially different?).
Convergent Validity
- Refers to the extent to which measures of constructs that should theoretically be related are in fact found to relate to each other.
Discriminant/Divergent Validity
- Refers to the extent to which constructs that should not have a relationship are indeed found to be unrelated to one another.
Criterion Validity
- refers to the extent to which the results and conclusions are valid compared with other measures
- the relationship to other measures
Concurrent Validity
- Refers to the extent to which the results and conclusions concur with other studies and evidence
- E.g. Milgram (1963) studied the effects of obedience to authority. Milgram's results concurred with many replications of the study, so Milgram's study was high in concurrent validity.
- Whether a measure is in agreement with pre-existing measures that are validated to test for the same [or a very similar] concept (gauged by correlating measures against each other).
Predictive Validity
- Refers to the extent to which the results and conclusions can be used to predict real life applications of the study. Predictive validity is established through repeated results over time.
- Does the test predict later performance on a related criterion?
Face Validity
- whether the test appears (at face value) to measure what it claims to. This is the least sophisticated measure of validity.
- A measure of whether it looks subjectively promising that a tool measures what it's supposed to
- Having face validity does not mean that a test really measures what the researcher intends to measure, but only in the judgment of raters that it appears to do so. Consequently it is a crude and basic measure of validity.
Reliability
Internal Reliability
- This describes the internal consistency of a measure (i.e. consistency within itself), such as whether the different questions (known as ‘items’) in a questionnaire are all measuring the same thing.
- Commonly assessed for self-report measures, such as psychometric tests and questionnaires
Internal Consistency
- Assesses the correlation between multiple items in a test that are intended to measure the same construct.
- Using a multi-item test where all the items are intended to measure the same variable.
Split Half method
- You randomly split a set of measures into two sets. After testing the entire set on the respondents, you calculate the correlation between the two sets of responses.
Average inter-item correlation
- For a set of measures designed to assess the same construct, you calculate the correlation between the results of all possible pairs of items and then calculate the average.
- Cronbach's alpha is the most commonly used statistic for internal consistency
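A minimal sketch (in Python, with a made-up respondents x items matrix for a single construct) of Cronbach's alpha computed from the item variances and the total-score variance:

import numpy as np

items = np.array([      # 6 respondents x 4 questionnaire items (illustrative)
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)
total_variance = items.sum(axis=1).var(ddof=1)

# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")   # ~0.7 or above is usually taken as acceptable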
Parallel Forms Reliability
- Used to assess the consistency of the results of two tests constructed in the same way from the same content domain.
- Measures the correlation between two equivalent versions of a test.
- If you want to use multiple different versions of a test (for example, to avoid respondents repeating the same answers from memory), you first need to make sure that all the sets of questions or measurements give reliable results.
External Reliability
- The extent to which a measure is consistent from one use to another
Test-retest reliability
- Measuring a property that you expect to stay the same over time.
- Measures the consistency of results when you repeat the same test on the same sample at a different point in time. You use it when you are measuring something that you expect to stay constant in your sample.
Interrater/interobserver reliability
- Multiple researchers making observations or ratings about the same topic.
- Measures the degree of agreement between different people observing or assessing the same thing. You use it when data is collected by researchers assigning ratings, scores or categories to one or more variables.
- Cohen's kappa coefficient (κ) is a statistic that is used to measure inter-rater reliability (and also intra-rater reliability) for qualitative (categorical) items.
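A minimal sketch (in Python, using scikit-learn and two hypothetical raters' categorical judgements of the same observations) of Cohen's kappa:

from sklearn.metrics import cohen_kappa_score

rater_1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater_2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]

# kappa corrects raw agreement for the agreement expected by chance:
# 1 = perfect agreement, 0 = chance-level agreement, < 0 = worse than chance.
kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.2f}")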