Research Methods
Sampling
Random sample:
- get the names of everyone in the target population (TP) and put them in a hat, or use a computer program to select names at random (see the sketch below).
:check: Potentially unbiased: this means CVs/EVs are controlled. Enhances internal validity.
:red_cross: Time-consuming and may not work: a complete list of the TP is hard to obtain. Also, some Pps may refuse to take part.
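- A minimal Python sketch of the computer-selection step for a random sample (the names and sample size are made up for illustration):
```python
import random

# Hypothetical names standing in for a complete list of the target population (TP)
target_population = ["Amir", "Beth", "Carlos", "Dina", "Ella", "Femi", "Grace", "Hana"]

# Every member has an equal chance of being picked
random_sample = random.sample(target_population, k=3)
print(random_sample)
```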
Systematic sample:
- put all the names in a computer program and use a system, e.g., pick every 3rd person, to select the sample (see the sketch below).
:check: Unbiased: the first item is usually selected at random. Objective method.
:red_cross: Time and effort: a complete list of the population is required. Might as well use random sampling.
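- A minimal sketch of systematic selection from a made-up sampling frame, taking every 3rd person from a random starting point:
```python
import random

# Made-up sampling frame (a numbered list of the TP); every 3rd person is taken,
# starting from a randomly chosen position so the first item is selected by chance
sampling_frame = ["P01", "P02", "P03", "P04", "P05", "P06",
                  "P07", "P08", "P09", "P10", "P11", "P12"]
interval = 3
start = random.randrange(interval)
systematic_sample = sampling_frame[start::interval]
print(systematic_sample)
```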
Stratified sample:
- where the sample represents the TP in the correct proportions, e.g., if a TP of 200 is 90% male and 10% female, then a sample of 20 must contain 18 males and 2 females. Pps within each stratum are then selected at random (see the sketch below).
:check: Representative method: the characteristics of the TP are represented. Generalisability is more likely than other methods.
:red_cross: Stratification isn't perfect: the strata can't reflect all the ways people differ. Complete representation isn't possible.
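- A rough sketch of the 90%/10% example above, with made-up stratum members:
```python
import random

# Made-up TP of 200 split into strata: 90% male (180), 10% female (20)
strata = {
    "male":   [f"M{i}" for i in range(180)],
    "female": [f"F{i}" for i in range(20)],
}
sample_size = 20
population_size = sum(len(group) for group in strata.values())

stratified_sample = []
for group in strata.values():
    # each stratum contributes in proportion to its share of the TP (here 18 males, 2 females)
    n = round(sample_size * len(group) / population_size)
    stratified_sample += random.sample(group, n)

print(stratified_sample)
```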
Volunteer/ self-selecting sample:
- people choose (volunteer) to take part themselves, e.g., by responding to an advert.
:check: Pps are willing: Pps have selected themselves and know how much time and effort is involved. Likely to engage more than people stopped in the street.
:red_cross: Volunteer bias: Pps may share certain traits, e.g., being especially keen or more responsive to cues, so generalisation is limited.
Opportunity sample:
- where a psychologist selects available participants, e.g., own students. This is very common.
:check: Quick method: convenient because you just make use of the closest people. This makes it cheaper and one of the most popular sampling methods.
:red_cross: Inevitably biased: unrepresentative of the TP as it's drawn from a very specific area. This means that the findings can't be generalised.
Types of experiments
Lab experiments:
- a controlled environment where EVs and CVs are regulated
- Pps go to the researcher
:check: EVs and CVs can be controlled: this means that the effect of EVs and CVs on the DV can be minimized. Cause-and-effect between the IV and DV can be demonstrated, high internal validity.
:check: Can be easily replicated: greater controls mean less chance that new EVs are introduced. This means that findings can be confirmed, supporting their validity.
:red_cross: May lack generalisability: the controlled environment may be rather artificial and Pps are aware they are being studied. Thus behavior may not be 'natural' and can't be generalised to everyday life, giving low external validity.
:red_cross: Demand characteristics (DC) may be a problem: the findings may be explained by these cues rather than the effect of the IV.
Field experiment:
- a natural setting
- the researcher goes to Pps
:check: More natural environment: Pps more comfortable and behavior more authentic. Results may be more generalizable to everyday life.
:check: Pps are unaware of being studied: they are more likely to behave as they normally do. The study has greater external validity.
:red_cross: More difficult to control CVs/EVs: observed changes in the DV may not be due to the IV, but to CVs/EVs instead. It's more difficult to establish cause-and-effect than in the lab.
:red_cross: There are ethical issues: Pps in a field experiment may not have given informed consent. This is an invasion of privacy, which raises ethical issues.
Natural experiment:
- the IV isn't manipulated by the experimenter; something or someone else causes the IV to vary.
:check: May be the only practical/ethical option: it may be unethical to manipulate the IV. A natural experiment may be the only way causal research can be done for some topics.
:check: Greater external validity: involves real-world issues, such as the effect of natural disasters on stress levels. This means the findings are more relevant.
:red_cross: The natural event may only occur rarely: many natural events are 'one-offs' and this reduces the opportunity for research. This may limit the scope for generalising findings to other similar situations.
:red_cross: Pps aren't randomly allocated: the experimenter has no control over which Pps are placed in which condition. This may result in CVs that aren't controlled.
Quasi-experiment:
- IV is based on a pre-existing difference between people
:check: Often high control: often carried out under controlled conditions and therefore shares some of the strengths of lab experiments. This means, for example, replication is possible.
:check: Comparisons can be made between people: the IV is a difference between people. This means that comparisons between different types of people can be made.
:red_cross: Pps aren't randomly allocated: the experimenter has no control over which Pps are placed in which condition. Pp variables may have caused the change in the DV, acting as a CV.
:red_cross: Causal relationships aren't demonstrated: the researcher doesn't manipulate/control the IV. So, we can't say for certain that any change in the DV was due to the IV.
Experimental Method
Research issues:
- Extraneous and Confounding variables:
extraneous variables (EVs) - 'nuisance' variables that may make it more difficult to detect an effect.
confounding variables (CVs) - vary systematically with the IV, so we can't be sure whether any observed change in the DV is due to the CV or the IV.
- Demand characteristics (DC): cues from the researcher or research situation that may reveal the study's aims, leading Pps to change their behavior.
- Investigator effects: any effect of the researcher's behavior on the outcome of the research and also on design decisions.
- Randomisation: this limits investigator effects by using chance, e.g., randomly generate the order of a list of words in a memory test, or randomly order the conditions in an experiment (see the sketch after this list).
- Standardisation: as far as possible participants should get the same experience: information, environment, and instructions, to limit EV. Everything should be controlled.
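- A minimal sketch of randomisation by computer, assuming a made-up word list and two generic conditions:
```python
import random

# Made-up word list for a memory test; shuffling the order by chance limits
# investigator effects on how the materials are arranged
word_list = ["cat", "river", "lamp", "cloud", "spoon", "ladder"]
random.shuffle(word_list)
print(word_list)

# The same idea can randomly order the conditions of an experiment
conditions = ["condition A", "condition B"]
random.shuffle(conditions)
print(conditions)
```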
Pilot studies and more:
- Pilot studies: small-scale trial run of an investigation to 'road-test' procedures so they can be modified.
- Control groups/conditions: used for comparison; they act as a baseline to establish causation.
- Single blind and double blind:
single blind - Pps don't know the aims of the study, so DC are reduced
double blind - neither the Pps nor the researcher knows the aims of the study, reducing DC and investigator effects (IE)
Key concepts:
- Aims: a general expression of what the researcher intends to investigate
- Operationalised hypothesis: a clear, testable statement of what the researcher believes to be true, with the IV and DV defined in measurable (operationalised) terms.
directional hypothesis - predicts which condition will do better; used when earlier research supports it
non-directional hypothesis - states that there will be a difference between the results of the 2 conditions, but not which will do better or worse.
- Formula for writing hypotheses:
- non-directional: there will be a difference in ... (DV) between/for/when ... (IV) (participants who do this ... and participants who do this ...)
- directional: participants who do this ... (IV) will do better on this ... (DV) than participants who do this ... (IV)
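- Worked example (hypothetical; the energy-drink study is invented purely to illustrate the formula):
directional - participants who drink an energy drink will recall more words from a list of 20 than participants who drink water
non-directional - there will be a difference in the number of words recalled from a list of 20 between participants who drink an energy drink and participants who drink water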
Experimental Designs
Independent Measures Design (IMD):
- one group does condition A, and one group does condition B
- Pps should be randomly allocated to experimental groups (see the sketch after this branch)
:check: No order effects: Pps are only tested once, so they can't improve through practice or become tired/bored. This controls an important CV.
:check: Won't guess aim: Pps only tested once, so unlikely to guess research aims. Therefore behavior may be more 'natural'
:red_cross: Pp variables: the Pps in the 2 groups are different, acting as EVs/CVs. This may reduce the validity of the study.
:red_cross: Less economical: need 2x as many Pps as repeated measures for same data. More time spent recruiting which is expensive.
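- A minimal sketch of the random allocation mentioned above, assuming made-up participant IDs:
```python
import random

# Shuffle the (made-up) Pps and split them in half, so chance decides
# which group does condition A and which does condition B
participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]
random.shuffle(participants)

half = len(participants) // 2
condition_a = participants[:half]
condition_b = participants[half:]
print("Condition A:", condition_a)
print("Condition B:", condition_b)
```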
Matched Pairs Design:
- 2 groups of Pps are used, but they are related to each other by being paired on participant variables (PVs) that matter for the experiment (see the sketch after this branch)
:check: Pp variables: Pps matched on a variable that is relevant to the experiment. This controls Pp variables and enhances the validity of the results
:check: No order effects: Pps are only tested once, so there are no practice or fatigue effects. This enhances the validity of the results.
:red_cross: Matching isn't perfect: matching is time-consuming and can't control all relevant variables. Pps can never be matched on every Pp variable.
:red_cross: More Pps: need 2x as many Pps as RMD for same data. More time is spent on recruiting which is expensive.
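- A rough sketch of matching on a single made-up PV (a memory pre-test score) and then allocating within each pair by chance:
```python
import random

# Made-up pre-test scores used as the matching variable
pretest_scores = {"P01": 12, "P02": 19, "P03": 15, "P04": 11, "P05": 18, "P06": 14}

# Rank Pps by the matching variable and take adjacent Pps as matched pairs
ranked = sorted(pretest_scores, key=pretest_scores.get)
pairs = [ranked[i:i + 2] for i in range(0, len(ranked), 2)]

condition_a, condition_b = [], []
for pair in pairs:
    random.shuffle(pair)          # chance decides who goes where within a pair
    condition_a.append(pair[0])
    condition_b.append(pair[1])

print("Condition A:", condition_a)
print("Condition B:", condition_b)
```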
Repeated Measures Design (RMD):
- same Pps in all conditions of an experiment
- the order of conditions should be counterbalanced to avoid order effects (see the sketch at the end of this branch)
:check: Pp variables controlled: the same Pps take part in both conditions, so their characteristics are identical. This controls an important CV.
:check: Fewer Pps: 1/2 the number of Pps is needed than in IMD. Less time spent recruiting Pps
:red_cross: Order effects are a problem: Pps may do better or worse when doing a similar task twice. This reduces the validity of the results.
:red_cross: Pps guess aims: Pps may change their behavior. This may reduce the validity of the results.
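- A minimal sketch of one way to counterbalance (an AB/BA split), assuming made-up participant IDs:
```python
import random

# Half the Pps do condition A first, the other half do condition B first,
# so order effects balance out across the sample
participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]
random.shuffle(participants)              # chance decides who gets which order

orders = {}
for i, pp in enumerate(participants):
    orders[pp] = ["A", "B"] if i % 2 == 0 else ["B", "A"]

for pp, order in orders.items():
    print(pp, "->", " then ".join(order))
```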
Correlations
:check: Useful starting point for research: by assessing the strength and direction of a relationship, correlations measure how 2 variables are related. If variables are strongly related it may suggest hypotheses for future research
:check: Relatively economical: unlike lab studies, there is no need for a controlled environment and can use secondary data. So correlations are less time-consuming than experiments.
:red_cross: No cause-and-effect: correlations are often presented as causal when they only show how 2 variables are related. This leads to false conclusions about the causes of behavior.
:red_cross: Intervening variables: another untested variable may explain the relationship between co-variables. This may also lead to false conclusions.
- illustrates the strength and direction of an association between 2 co-variables (a worked sketch follows at the end of this branch)
- Scattergram: 1 co-variable is on the x-axis, and the other is on the y-axis.
- Types of correlations:
- positive - co-variables increase together
- negative - one co-variable increases, the other decreases
- zero - no relationship between variables.
- Differences between correlations and experiments: in an experiment the researcher manipulates the IV and records the effect on the DV; in a correlation, there is no manipulation, so cause-and-effect can't be demonstrated
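- A small sketch of checking the strength and direction of a correlation with made-up co-variable data (Pearson's r via Python's statistics module, 3.10+):
```python
from statistics import correlation   # Pearson's r, available in Python 3.10+

# Made-up co-variables: hours revised and test score for 8 Pps
hours_revised = [1, 2, 3, 4, 5, 6, 7, 8]
test_score = [35, 40, 48, 50, 61, 63, 70, 78]

r = correlation(hours_revised, test_score)
# close to +1 = strong positive; close to -1 = strong negative; near 0 = no relationship
print(f"r = {r:.2f}")
```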