Research Design
5. Popular Research Designs
1. Experimental studies:
are those that are intended to test cause-effect relationships (hypotheses) in a tightly controlled setting by separating the cause from the effect in time, administering the cause to one group of subjects (the 'treatment group') but not to another group (the 'control group'), and observing how the mean effects vary between subjects in these two groups. In a true experimental design, subjects must be randomly assigned to the treatment and control groups; if random assignment is not followed, the design becomes quasi-experimental.
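As a rough illustration of this logic only, the Python sketch below randomly assigns simulated subjects to a treatment or control group and compares the mean outcomes of the two groups. The subject count, outcome data, and treatment effect are invented for the example and are not from the source.

```python
# Illustrative sketch (assumed data): random assignment plus comparison of group means.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_subjects = 40
subjects = np.arange(n_subjects)

# Random assignment: shuffle subjects, then split them into two equal groups.
rng.shuffle(subjects)
treatment_group = subjects[: n_subjects // 2]
control_group = subjects[n_subjects // 2 :]

# Simulated dependent variable; a small hypothetical effect is added for the treatment group.
outcome = rng.normal(loc=50, scale=10, size=n_subjects)
outcome[treatment_group] += 5  # hypothetical treatment effect

# Compare mean effects between the two groups with an independent-samples t-test.
t_stat, p_value = stats.ttest_ind(outcome[treatment_group], outcome[control_group])
print(f"treatment mean = {outcome[treatment_group].mean():.1f}")
print(f"control mean   = {outcome[control_group].mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```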
2. Field surveys:
are non-experimental designs that do not control for or manipulate independent variables or treatments, but measure these variables and test their effects using statistical methods. Field surveys capture snapshots of practices, beliefs, or situations from a random sample of subjects in field settings through a survey questionnaire or, less frequently, through structured interviews.
a. Cross-sectional field surveys:
Independent and dependent variables are measured at the same point in time (e.g., using a single questionnaire).
b. Longitudinal field surveys:
Dependent variables are measured at a later point in time than the independent variables.
The strengths of field surveys are their external validity (since data is collected in field settings), their ability to capture and control for a large number of variables, and their ability to study a problem from multiple perspectives or using multiple theories.
3. Secondary data analysis:
is an analysis of data that has previously been collected and tabulated by other sources.
4. Case research:
is an in-depth investigation of a problem in one or more real-life settings (case sites) over an extended period of time. Data may be collected using a combination of interviews, personal observations, and internal or external documents.
5. Focus group research:
is a type of research that involves bringing together a small group of subjects (typically six to ten people) at one location and having them discuss a phenomenon of interest for a period of 1.5 to 2 hours. The discussion is moderated and led by a trained facilitator, who sets the agenda, poses an initial set of questions for participants, and makes sure that the ideas and experiences of all participants are represented.
6. Action research:
assumes that complex social phenomena are best understood by introducing interventions or 'actions' into those phenomena and observing the effects of those actions.
7. Ethnography:
is an interpretive research design inspired by anthropology that emphasizes that a research phenomenon must be studied within the context of its culture.
Key attributes of a research design
1. Internal validity:
also called causality, examines whether the observed change in a dependent variable is indeed caused by a corresponding change in the hypothesized independent variable, and not by variables extraneous to the research context. Causality requires three conditions:
Covariation of cause and effect
Temporal precedence
No plausible alternative explanation (or spurious correlation)
Certain research designs, such as laboratory experiments, are strong in internal validity by virtue of their ability to manipulate the independent variable (cause) via a treatment and observe the effect (dependent variable) of that treatment after a certain point in time, while controlling for the effects of extraneous variables.
Although higher in internal validity than other methods, laboratory experiments are by no means immune to threats to internal validity; they are susceptible to history, testing, instrumentation, regression, and other threats.
2. External validity:
or generalizability refers to whether the observed associations can be generalized from the sample to the population (population validity), or to other people, organizations, contexts, or time (ecological validity).
3. Construct validity:
examines how well a given measurement scale measures the theoretical construct that it is expected to measure. For instance, construct validity must assure that a measure of empathy is indeed measuring empathy and not compassion, which may be difficult since these constructs are somewhat similar in meaning. Construct validity is assessed in positivist research based on correlational or factor analysis of pilot test data.
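A minimal sketch of this kind of check, using invented pilot data for hypothetical 'empathy' and 'compassion' items and scikit-learn's exploratory factor analysis; the item names, sample size, and library choice are assumptions for illustration, not part of the source.

```python
# Illustrative sketch (assumed data): correlational and factor analysis of pilot test data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulated pilot data: 100 respondents, 3 "empathy" items and 3 "compassion" items.
latent_empathy = rng.normal(size=(100, 1))
latent_compassion = rng.normal(size=(100, 1))
empathy_items = latent_empathy + 0.3 * rng.normal(size=(100, 3))
compassion_items = latent_compassion + 0.3 * rng.normal(size=(100, 3))
items = np.hstack([empathy_items, compassion_items])

# Correlational analysis: items of the same construct should correlate highly with
# each other (convergent validity) and less with the other construct (discriminant validity).
print(np.round(np.corrcoef(items, rowvar=False), 2))

# Exploratory factor analysis: with two factors, empathy and compassion items should
# load on separate factors if the scale measures two distinct constructs.
fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(items)
print(np.round(fa.components_.T, 2))  # item loadings on the two factors
```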
Improving internal and external validity
Controls are required to assure the internal validity (causality) of research designs, and can be accomplished in five ways:
1. Manipulation:
The researcher manipulates the independent variables in one or more levels (called 'treatments'), and compares the effects of the treatments against a control group where subjects do not receive the treatment. This type of control is achieved in experimental or quasi-experimental designs but not in non-experimental designs such as surveys.
2. Elimination:
The elimination technique relies on eliminating extraneous variables by holding them constant across treatments, such as by restricting the study to a single gender or a single socio-economic status.
3. Inclusion:
In the inclusion technique, the role of extraneous variables is considered by including them in the research design and separately estimating their effects on the dependent variable, such as via factorial designs where one factor is gender (male versus female).
4. Statistical control:
In statistical control, extraneous variables are measured and used as covariates in the statistical analysis.
5. Randomization:
The randomization technique is aimed at canceling out the effects of extraneous variables through a process of random sampling, if it can be assured that these effects are of a random (non-systematic) nature.
a. Random selection:
Where a sample is selected randomly from a population.
b. Random assignment:
Where subjects selected in a non-random manner are randomly assigned to treatment groups.
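A minimal sketch of these two randomization techniques; the population size, sample size, and group labels are invented for illustration and are not taken from the source.

```python
# Illustrative sketch (assumed data): random selection followed by random assignment.
import numpy as np

rng = np.random.default_rng(7)

# a. Random selection: draw a sample of 20 subjects at random from a population.
population = np.arange(1000)                        # e.g., IDs of all population members
sample = rng.choice(population, size=20, replace=False)

# b. Random assignment: subjects (however selected) are randomly assigned to treatment
#    groups, cancelling out non-systematic extraneous effects across the groups.
groups = rng.permutation(["treatment"] * 10 + ["control"] * 10)
assignment = dict(zip(sample.tolist(), groups.tolist()))
print(assignment)
```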
What is research design?
Research design is a comprehensive plan for data collection in an empirical research project.
Three processes required in research design
The data collection process
The instrument development process
The sampling process
Two categories of data collection methods
1. Positivist methods:
are aimed at theory testing. Laboratory experiments and survey research are two examples. Positivist methods employ a deductive approach to research, starting with a theory and testing theoretical postulates using empirical data.
2. Interpretive methods:
employ an inductive approach that starts with data and tries to derive a theory about the phenomenon of interest from the observed data.