SURVEY & EXPERIMENTAL RESEARCH
Survey Research
- A research method that uses set questionnaires or interviews to collect information about people’s opinions, habits, and behaviors in an organized way.
Questionnaire Survey
- A tool made up of a set of questions designed to collect answers from people in a consistent way. The questions can be open-ended or fixed-choice.
Self-administered postal surveys
- The same questionnaire is posted to a large number of people, and willing respondents can complete the survey at their convenience and return it in prepaid envelopes.
Group Administered Survey
- A group of people is gathered in one place at the same time, and each person fills out the survey on their own without talking to others.
Interview Survey
- Interviews are more personal than questionnaires and are done by trained interviewers following a standard set of questions. Unlike a questionnaire, the interviewer may have special instructions and space to note their own observations.
Face to Face
- The interviewer talks directly with the person, asking questions and writing down their answers.
Focus Groups
- A small group of people (usually 6–10) is interviewed together, with the interviewer acting as a facilitator for the discussion.
Telephone Interview
- Interviewers call people on the phone, usually chosen at random from a directory, to ask a set of standard survey questions.
Bias
Non-Response Bias
- If most people don’t respond to a survey, there may be a systematic reason for the low response rate, which makes the study’s results less trustworthy.
Sample Bias
- Phone surveys that call random numbers leave out people with unlisted or mobile numbers, and anyone who can’t answer the phone during the survey.
Social Desirability Bias
- When people answer questions in a way that makes them look good rather than telling the truth.
Recall Bias
- When asking about events from long ago, people might not remember their actions or reasons accurately, or their memories may have changed over time.
Common Method Bias
- A spurious relationship that can appear between variables measured at the same time using the same instrument, such as a questionnaire in a cross-sectional survey.
Experimental Research
Basic Concepts
Treatment
- In experiments, the treatment is the intervention (something new to try or experience) given to subjects in the treatment group.
Treatment Manipulation
- The researcher’s deliberate variation of the treatment across groups. This helps rule out other explanations for what caused the results, and the validity of an experiment depends on how well the treatment was manipulated.
Post-Test
- Any measurements taken after the treatment is given.
Pretest
- Any measurements taken before the treatment is given.
Random Selection
- The process of randomly drawing a sample from a population or a sampling frame.
Random Assignment
- A method of randomly putting people into either the treatment group or the control group.
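A minimal sketch of random assignment (the participant IDs and group sizes below are hypothetical, for illustration only):

```python
import random

def random_assignment(participants, seed=None):
    """Randomly split participants into a treatment group and a control group."""
    rng = random.Random(seed)       # seeded only so the example is reproducible
    shuffled = participants[:]      # copy so the original roster is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

# 20 hypothetical participants split into two groups of 10
groups = random_assignment(list(range(1, 21)), seed=42)
print(len(groups["treatment"]), len(groups["control"]))  # 10 10
```

Because each participant has an equal chance of landing in either group, differences between people tend to balance out across the two groups.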
Experimental Designs
- Designs typically compare two groups: one that gets the treatment and one that doesn’t. The two main types are a design that measures people before and after the treatment, and one that measures only after. Some versions add extra steps to adjust for differences between people.
Pretest-posttest control group design
- People are randomly placed into a treatment group or a control group. Everyone is measured once at the start, then the treatment group gets the treatment, and afterward everyone is measured again to see what changed.
Posttest only control group
- This design is a simpler version of the pretest-posttest setup, but it skips the initial measurement before the treatment.
Covariates Design
- A special version of the pretest-posttest design where the first measurement looks at related factors (covariates) instead of the main outcomes being studied.
Factorial Design
- The simplest is the 2 × 2 factorial design, which consists of two treatments, each with two levels (such as high/low or present/absent).
Main Effect
- These occur when the outcome differs noticeably across the levels of one factor, no matter the levels of the other factors.
Interaction Effect
- This happens when the effect of one factor changes depending on the level of another factor.
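To see the difference between a main effect and an interaction, consider hypothetical mean outcomes for the four cells of a 2 × 2 factorial design (the numbers are invented for illustration, not from the text):

```python
# Hypothetical cell means: factor A and factor B, each absent (0) or present (1)
means = {
    ("A0", "B0"): 10, ("A0", "B1"): 20,
    ("A1", "B0"): 30, ("A1", "B1"): 60,
}

# Main effect of A: average outcome with A present minus average with A absent
main_a = (means[("A1", "B0")] + means[("A1", "B1")]) / 2 \
       - (means[("A0", "B0")] + means[("A0", "B1")]) / 2   # 30.0

# Main effect of B, computed the same way across the levels of A
main_b = (means[("A0", "B1")] + means[("A1", "B1")]) / 2 \
       - (means[("A0", "B0")] + means[("A1", "B0")]) / 2   # 20.0

# Interaction: the effect of B differs depending on the level of A
b_effect_at_a0 = means[("A0", "B1")] - means[("A0", "B0")]  # 10
b_effect_at_a1 = means[("A1", "B1")] - means[("A1", "B0")]  # 30
interaction = b_effect_at_a1 - b_effect_at_a0               # 20
```

Here both factors have main effects, and because B’s effect is larger when A is present (30 vs. 10), the two factors also interact.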
Perils of Experimental Designs
Weak or missing theories
: Without solid theories, hypotheses may be random, unclear, or meaningless.
Unreliable measurements
: Tools used to measure outcomes are often untested or inconsistent, making comparisons across studies difficult.
Poor research design
: Experiments may use the wrong variables, lack controls, or have unequal treatments, making results questionable.
Inappropriate tasks
: Subjects may face unrealistic or confusing tasks, leading to results that are hard to interpret or compare.