Research Method
Research process
Validity and Reliability
Sampling methods
Aims and hypothesis
Types of experiment
Types of Observation
Sampling participants
1. Develop an aim and hypothesis
2. Selection of research method
3. Operationalization of variables/defining measurements
4. Ethical considerations
5. Recruit participants
6. Collect data
7. Analyse data
8. Evaluate the quality of the research
An aim identifies the purpose of the investigation. It is a straightforward expression of what the researcher is trying to find out from conducting an investigation.
A hypothesis is a statement of prediction. It predicts the relationship between the variables in an experiment or correlation.
directional hypothesis
non-directional hypothesis
alternative hypothesis
null hypothesis
An experiment is a method used in science to test whether or not a theory (a thought) is true. It involves changing one thing (IV), keeping everything else the same (control) and taking measurements (DV). This allows the person doing the experiment to see the effect this one variable has on the measurement.
variables
Independent variable (IV) = the variable that is being changed (this is the variable the experimenter thinks will affect the measurements).
Dependent Variable (DV) = the variable that is measured (its measurement depends on changes to the IV).
Extraneous Variable = a variable that will affect the results if it is not controlled.
Control variable = a variable that has been controlled by the experimenter so it does not affect the results.
different types of experiment
Lab Experiment = an experiment that happens in a controlled environment like a laboratory
Field Experiments = an experiment in a natural environment, but there is still an independent variable being changed by the researcher. There is likely to be some effort to control confounding variables.
Natural Experiments = an experiment which happens in a natural environment, but the researcher does not change the variable; it changes naturally.
Experimental design
1. Independent Measures Design: there is a different group of participants for each condition of the independent variable. Each participant takes part in only one condition of the IV (see the allocation sketch after this list).
2. Repeated Measures Design: each participant takes part in all conditions of the IV.
3. Matched-Pairs Design: each participant takes part in only one condition of the IV, like an independent measures design. However, participants are put into pairs with people who have similar characteristics.
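A minimal sketch (standard-library Python; the participant and condition names are made up) of random allocation for an independent measures design, so that each participant experiences only one condition of the IV:

import random

def allocate_independent_measures(participants, conditions):
    """Randomly assign each participant to exactly one condition of the IV."""
    shuffled = list(participants)
    random.shuffle(shuffled)                      # unbiased ordering
    groups = {condition: [] for condition in conditions}
    for i, person in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(person)   # deal participants out evenly
    return groups

# Hypothetical example: two conditions of an IV "noise level"
print(allocate_independent_measures(["P1", "P2", "P3", "P4"], ["quiet", "loud"]))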
Naturalistic = investigation in a natural environment, where the researcher has no control over who participates or the situation.
Controlled = part of an experiment, as it happens in an environment that the researcher can control and the participants are pre-selected.
Covert = participant is unaware they are being observed.
Overt = participant is aware they are being observed.
Structured = the behaviors that will be observed are operationalized. A limited number of pre-determined and well-defined behaviors will be observed. Could use time sampling and event sampling.
Unstructured = observer notes what they see e.g. common, important and unusual things.
Participant observation = the researcher often joins the group they are studying or at least interacts with them. It is usually naturalistic and covert and involves the researcher recording the data as they interact with the participants.
Non-participant observation = researcher observes the participants from a distance e.g. watching through a one-way mirror or using a video camera.
*Criticism: researcher bias; can be improved by blind studies / inter-rater reliability.
Event Sampling: tallying the number of times an operationalized (well-defined) behavior happens (see the sketch below).
Time Sampling: the duration of the observation is decided first and then divided into intervals; the behavior performed by the participant is recorded for each interval.
*operationalization is a process of defining the measurement of a phenomenon that is not directly measurable, though its existence is indicated by other phenomena.
Standardization: keeping the experience of the investigation the same for every participant.
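To make the two techniques concrete, a rough sketch (the behavior codes and observation log are hypothetical) of how event sampling tallies every occurrence of an operationalized behavior, while time sampling records what is happening at fixed intervals:

from collections import Counter

# Hypothetical observation log: (seconds from start, operationalized behavior code)
log = [(3, "smile"), (7, "talk"), (12, "smile"), (31, "talk"), (44, "smile")]

# Event sampling: tally every occurrence of each well-defined behavior
event_tally = Counter(code for _, code in log)

# Time sampling: note the most recent behavior at each fixed interval
def time_sample(log, duration=60, interval=15):
    samples = {}
    for t in range(0, duration, interval):
        seen = [code for second, code in log if second <= t]
        samples[t] = seen[-1] if seen else None
    return samples

print(event_tally)        # Counter({'smile': 3, 'talk': 2})
print(time_sample(log))   # behavior recorded at 0, 15, 30 and 45 seconds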
Volunteer = participants are invited to the study through adverts, e.g. via email or notices; those who reply become the sample ("respond to the advert").
Opportunity = participants are chosen because they are available, e.g. university students.
Random = all members of the population are allocated numbers and a fixed number of them are selected in an unbiased way.
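As a small sketch (a hypothetical population of 100 people, standard-library Python), random sampling can be done by numbering every member of the population and drawing a fixed number of them without bias:

import random

population = [f"Person {n}" for n in range(1, 101)]   # every member of the population is numbered 1-100
sample = random.sample(population, 10)                # 10 members drawn in an unbiased way
print(sample)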
Self-report
1. Questionnaires: usually a paper-and-pen test, though it could be online. There is no interaction between the participant and the researcher, and more often than not questions are closed and assessed using rating scales, with a standardized way of assessing the data.
2. Interviews: a research method using verbal questions asked directly, typically face-to-face.
Types of question
- Closed question: a question where there is only a limited number of possible answers.
- Rating scale: a rating is made along a scale (a scoring sketch follows the list below).
structured
unstructured
semi-structured
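A hedged illustration of "a standardized way of assessing the data": a sketch that scores closed, rating-scale answers with one fixed rule (the items, the 1-5 scale and the reverse-scored question are assumptions, not from the notes):

# Hypothetical responses from one participant to three closed questions on a 1-5 rating scale
responses = {"Q1": 4, "Q2": 2, "Q3": 5}

# Q2 is assumed to be worded negatively, so it is reverse-scored (1 <-> 5, 2 <-> 4)
reverse_scored = {"Q2"}
item_scores = [6 - value if item in reverse_scored else value
               for item, value in responses.items()]

total = sum(item_scores)   # the same scoring rule is applied to every participant
print(total)               # 4 + 4 + 5 = 13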
Validity
1. Definition
A study is only valid if it measures what it is supposed to measure, i.e. it is accurate.
-In an experiment, this means that only the IV causes the change in the DV.
If a study has high external validity it means the results can be applied to other people and other cultures.
2. Things which decrease validity
Extraneous variables which affect the DV
Demand characteristics
Subjective and biased interpretation of qualitative data
Participant differences, e.g. when using an independent measures design
3. Types of validity
a. Internal Validity: the extent to which the study measures what it is supposed to.
b. Ecological Validity: extent to which the results of the study can be applied to real life.
c. Population Validity: extent to which the results of the study can be applied to other people.
d. Predictive Validity: extent to which the results of a test can predict performance/behavior.
e. Convergent Validity: extent of agreement between tests measuring the same variable.
Reliability
Types of reliability
- External Reliability: extent to which the results of the study can be replicated.
- Internal Reliability: the extent to which the results of the study are consistent.
Things which affect reliability
- The participant's mood and motivation
- How objectively the participant's data is interpreted
- If a procedure is not very standardized it might be difficult for other researchers to do the same investigation with other participants.
- If only a very narrow sample was used, the results may not be replicable with other samples of people.
Assessing reliability
Split-Half Method: a questionnaire is split into two halves and the participant's scores on the two halves are compared; consistent scores indicate internal reliability.
Test-Retest Method: involves giving the participant the same test on two separate occasions; similar results on both occasions indicate reliability.
Inter-Rater Reliability involves two or more researchers rating an observation or the contents of an interview (qualitative data). The researchers then compare their ratings, for example by checking whether they correlate; if the ratings are consistently similar, the results are considered reliable.
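One simple way to check this, sketched below with made-up codings, is the proportion of observation intervals on which two raters agree (researchers could equally correlate numeric ratings):

# Hypothetical codings of the same six observation intervals by two researchers
rater_a = ["aggressive", "play", "play", "aggressive", "play", "play"]
rater_b = ["aggressive", "play", "aggressive", "aggressive", "play", "play"]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
agreement_rate = agreements / len(rater_a)
print(agreement_rate)   # 5 of 6 intervals match (~0.83), so the coding looks fairly reliable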
Correlation
Correlation means association - more precisely it is a measure of the extent to which two variables are related.
types of correlation
positive
negative
zero
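To make "the extent to which two variables are related" concrete, a sketch (the revision-hours data are made up) that computes Pearson's r for a small data set; r near +1 indicates a positive correlation, near -1 a negative one, and near 0 a zero correlation:

from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient: covariance of x and y over the product of their spreads."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)

# Hypothetical data: hours of revision vs. test score
hours  = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 70]
print(round(pearson_r(hours, scores), 2))   # close to +1, i.e. a strong positive correlation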
Advantages and disadvantages of correlation
Adv.
Correlation lets researchers study variables that could not ethically be introduced in a lab experiment (e.g. lung cancer).
Disadv.
Correlation cannot show cause and effect: because it describes the natural relationship between two naturally occurring variables, you cannot tell whether changing one would change the other, or how.
The third-variable problem: an unmeasured third variable may be responsible for the relationship.
Ethical issues
Problems in research that raise concerns about the welfare of participants, or that may have a negative impact on society; they can arise from aspects of the procedure or the nature of the study.
Types of ethical considerations
Deception
Debriefing
Right to withdraw
Presumptive consent
Ethical guidelines
Informed consent
Confidentiality
Privacy
Protection