Scholarly practice, Research Design notation; ROX - slide 29 in L4,…
Scholarly practice
Applying "science"
Evidence-based practice
Interpretive research
applied ethnography
qualitative methods: phenomenology, grounded theory, narrative, etc.
Analytical research
experimental research
the researcher assigns conditions, or manipulates exposure to some hypothesised cause (e.g., new treatment)
Four defining features: (1) two or more conditions (experimental vs. control group); (2) random assignment of subjects; (3) observation/measurement; (4) between-group analysis (compare outcomes)
Goal: causal inference = the outcome is attributable to the exposure (remember: correlation is not causation)
Three conditions need to be met for causal inference:
Covariation/correlation: variation in exposure (i.e., independent variable) is associated with variation in outcome (or dependent variable)
Temporal precedence: variation in the exposure (the hypothesised cause) precedes variation in the outcome
Plausible alternative explanations for the relationship between exposure and ‘outcome’ can be ruled out.
random assignment
a method for assigning participants to the different conditions by chance --> The main purpose of random assignment is to rule out plausible alternative explanations (e.g., selection threat / pre-existing group differences).
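A minimal sketch of how random assignment could be carried out (the participant IDs and group labels below are hypothetical, not from the course materials):

```python
import random

def randomly_assign(participants, conditions=("treatment", "control"), seed=None):
    """Assign each participant to a condition purely by chance.

    Shuffling before splitting means pre-existing differences between people
    should, on average, be spread evenly across groups, which is what rules
    out selection threats.
    """
    rng = random.Random(seed)
    shuffled = participants[:]          # copy so the original list is untouched
    rng.shuffle(shuffled)
    groups = {condition: [] for condition in conditions}
    for i, person in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(person)
    return groups

# Example: 8 hypothetical participants split into two equal-sized conditions
print(randomly_assign([f"P{i}" for i in range(1, 9)], seed=42))
```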
If subjects aren't blind, there are threats to internal validity:
Diffusion of treatment: This occurs when subjects in different groups communicate with each other and learn information not intended for them.
Compensatory equalization: This occurs when those providing the treatment attempt to give some of the advantages the treatment group has to members of the control group.
Compensatory rivalry: This occurs when members of the control group try to gain some of the benefits of the treatment group.
Resentful demoralization: This refers to control group subjects underperforming because they resent being denied the treatment.
observational studies
Observational studies are categorized based on time: they may be retrospective (looking back in time), cross-sectional (a snapshot in time), or prospective (looking forward in time)
cross-sectional study
Purpose: to estimate the prevalence of some condition or variable of interest in a given population and to identify potential ‘risk’ factors.
limitations: they are carried out at one time point and give no indication of the sequence of events (and thus of plausible causal explanations)
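A rough illustration of the prevalence idea (hypothetical numbers, not from any study); note that nothing in this calculation tells us which came first, the 'risk' factor or the condition:

```python
def prevalence(cases, sample_size):
    """Proportion of the sample that has the condition at one point in time."""
    return cases / sample_size

# Hypothetical cross-sectional survey: 120 of 1,500 respondents report the condition
print(f"Prevalence: {prevalence(120, 1500):.1%}")   # 8.0%
```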
Observational studies are no less interested in prediction, or cause and effect, and are necessary when conditions, such as exposure to some kind of traumatic event, cannot be assigned. The researcher can't manipulate exposure!
circle of methods
RCTs aren't enough on their own; rather, information from multiple sources (case studies, ethnography, observations, etc.) is needed to determine causal relationships
Evidence-based practice
3 principles
- Practice should be based on the best available evidence, and the best or most trustworthy evidence comes from controlled clinical observations (e.g., RCT).
- Science is cumulative: claims about ‘what works’ should be based on systematic reviews that summarize the best available evidence.
- Evidence is necessary but not sufficient for decision-making: Clinical decision-making requires consideration of patients' values and preferences.
Issues with EBP
- The nature of research-based knowledge
Research knowledge usually takes the form of generalizations and applying these to particular situations is rarely straightforward.
- what works is not the same as what matters!!!
- The nature of professional practice
OT practice involves more than selecting a line of action from a smorgasbord of so-called evidence-based options to treat clinical problems.
- The authoritarian nature of EBP
the conceptual distinction between thinking and doing
which reinforced a power distinction (in ancient times the 'doers' were slaves...) --> creates dogma/stigma
- We can't separate research and practice.
Measurement in research
reliability
the term reliability means “repeatability” or “consistency”. A measure is considered reliable if it would give us the same result over and over again
A reliability of .5 means that about half of the variance of the observed score is attributable to truth and half is attributable to error. A reliability of .8 means the variability is about 80% true ability and 20% error. And so on. https://conjointly.com/kb/theory-of-reliability/
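A minimal simulation sketch of that idea (classical test theory: observed score = true score + error; all numbers below are made up):

```python
import random
import statistics

random.seed(1)

# Simulate observed scores as true ability plus random measurement error
true_scores = [random.gauss(50, 10) for _ in range(10_000)]
errors      = [random.gauss(0, 10) for _ in range(10_000)]
observed    = [t + e for t, e in zip(true_scores, errors)]

# Reliability = variance of true scores / variance of observed scores
reliability = statistics.variance(true_scores) / statistics.variance(observed)
print(round(reliability, 2))   # roughly 0.5: half the observed variance is 'truth', half is error
```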
construct validity
construct validity involves generalizing from your program or measures to the concept of your program or measures.
Inferential statistics
6 key concepts
Parameters
numerical characteristic of a population (e.g., a population of people, events, or objects)
- A population is the entire set of possible cases --> can also be infinite in size
- When we have data on the entire population (like a census), we don't need inferential statistics (we know what the true population parameter is)
the population parameter we want to estimate is the ‘true effect’ of some intervention (which is unknowable).
- the population is theoretical; it refers to the complete set of ‘effects’ observed over countless repeated trials.
- An RCT produces a measure of effect, which is an estimate of the true effect
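A small sketch of that point (made-up numbers): a single study only gives an estimate of the unknowable 'true effect', but over many repeated trials the estimates centre on the true population parameter.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.30   # the population parameter: unknowable in real life

def run_trial(n_per_group=50):
    """One hypothetical RCT: the observed effect is the difference in group means."""
    treatment = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(n_per_group)]
    control   = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
    return statistics.mean(treatment) - statistics.mean(control)

single_trial = run_trial()
many_trials  = [run_trial() for _ in range(2_000)]

print(f"One trial's estimate:      {single_trial:.2f}")                    # varies around 0.30
print(f"Mean over repeated trials: {statistics.mean(many_trials):.2f}")    # close to 0.30
```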
Measures of effect size
simple way of quantifying the strength of the relationship between two variables.
- quantifying the effect of a particular intervention, relative to some comparison
- "how well does it work"
- an important tool for interpreting results; promotes a more scientific approach
Risk Ratio
measure of risk: the ratio of the probability of a certain event happening in one group to the probability of the same event happening in another group (dichotomous outcomes)
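A minimal sketch of the calculation (hypothetical counts, not from any trial):

```python
def risk_ratio(events_exposed, n_exposed, events_control, n_control):
    """Ratio of the risk (probability) of the event in the exposed/treatment
    group to the risk in the comparison group."""
    risk_exposed = events_exposed / n_exposed
    risk_control = events_control / n_control
    return risk_exposed / risk_control

# Hypothetical trial: 10/100 events with the new treatment vs 20/100 with usual care
rr = risk_ratio(10, 100, 20, 100)
print(rr)   # 0.5: the event is half as likely in the treatment group
```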
- Effect size on its own means nothing
- It depends on context whether an effect is meaningful (e.g., less pain, increased survival, etc.)
- E.g., survival might be so important that a small effect matters, whereas in another context it might not
Confidence Intervals
See written notes :)
- A way of quantifying uncertainty/imprecision
- A 95% confidence interval is a range of values either side of the estimate between which we can be 95% confident that the true value lies
Only if the confidence interval excludes values that could be deemed clinically significant (i.e., meaningful effect sizes) would it be reasonable to conclude that a study has demonstrated no effect.
When 'no difference' falls within the CI, people sometimes conclude that the intervention had no effect, which isn't the best interpretation. If this happens, the study needs to be repeated with a larger sample!
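A rough sketch of how a 95% CI for a difference in means could be computed (normal-approximation formula with made-up outcome scores; a real analysis would use dedicated statistical software):

```python
import statistics

def ci_for_mean_difference(group_a, group_b, z=1.96):
    """Approximate 95% CI for the difference in means (normal approximation)."""
    diff = statistics.mean(group_a) - statistics.mean(group_b)
    se = (statistics.variance(group_a) / len(group_a)
          + statistics.variance(group_b) / len(group_b)) ** 0.5
    return diff - z * se, diff + z * se

# Hypothetical outcome scores for treatment vs control
treatment = [6, 7, 5, 8, 7, 6, 9, 7, 6, 8]
control   = [5, 6, 4, 6, 5, 7, 5, 6, 4, 6]
low, high = ci_for_mean_difference(treatment, control)
print(f"95% CI for the difference: {low:.2f} to {high:.2f}")
# If the interval is wide and includes both 'no difference' and clinically
# important values, the study is inconclusive rather than proof of no effect.
```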
Theory is key to understanding how we make a difference, for whom, when, and under what circumstances
Slide 41 in L6 has good notes about things to consider after reading a study / generalizability
- social and cultural factors; relational processes make a difference - the solution space
Attrition (mortality): refers to participants dropping out of or leaving a study, which means that the results are based on a biased sample of only the people who did not choose to leave
Maturation: the possibility that mental or physical changes occur within the participants themselves that could account for the evaluation results (change because of natural reasons)
Different approaches to research (interpretive vs. critical, etc.) serve different purposes. We can't find 'the best' one; we want the one that is best suited to the question at hand.