Randomised Controlled Trials in Mental Health
What is an RCT?
A study which randomly assigns participants to an intervention or control group.
Interventions can include drugs, therapies, surgical procedures, or training
Looks at the effect of group condition (intervention or control) on pre-specified outcomes (e.g. symptoms of schizophrenia)
NICE suggested: "A randomised controlled trial is often the most appropriate type of study to assess the efficacy or effectiveness of an intervention."
Different Designs of RCT
Parallel group trial
Strengths
Randomisation reduces selection bias
Blinding (best case: participants don’t know their group, and experimenters don’t know either)
Can demonstrate causation
Results can be combined for systematic reviews/meta-analysis
Weaknesses
Expensive and difficult
Results may not translate to the real world (e.g. stringent inclusion and exclusion criteria, such as excluding comorbidity)
Ethical implications (e.g. with placebo)
Retention in the trial can be difficult (e.g. over longer follow up – high drop-out rates)
Diagram
Cross-over trial
Diagram
Where all participants take part in each condition
Strengths
More ethical?
Reduces between-patient variability (each participant acts as their own control)
Weaknesses
Carry-over effects (can include a wash-out period to counteract this)
When would you use a cross-over trial?
Chronic/stable conditions e.g. Asthma
Treatment should not be curative (if the first condition cures participants, the second condition cannot show anything) – so not the best design for mental health conditions
Cluster trial
Diagram
Randomising groups rather than individual participants, e.g. one ward receives treatment as usual while another receives the intervention
Strengths
Improved experience of the trial for participants/groups/teams
Reduced risk of contamination (between groups that could affect each other)
Easier to recruit a whole team/group
Weaknesses
Requires a larger sample (because participants within a cluster are more similar to each other – see the design-effect sketch below)
Increased chance of selection bias (e.g. because of the area clusters are drawn from)
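A rough Python sketch of the design-effect calculation often used to show how much a cluster trial inflates the required sample size; the ICC, cluster size and base sample size below are assumed, illustrative numbers, not values from any particular trial.

```python
# Illustrative sketch: why cluster randomisation tends to need a larger sample.
# The design effect DEFF = 1 + (m - 1) * ICC is a standard approximation, where
# m is the average cluster size and ICC is the intracluster correlation.
# All numbers below (ICC, cluster size, base n) are made-up assumptions.

def design_effect(cluster_size: int, icc: float) -> float:
    """Inflation factor for a cluster-randomised trial."""
    return 1 + (cluster_size - 1) * icc

n_individual = 128      # participants per arm if randomising individuals (assumed)
icc = 0.05              # assumed intracluster correlation (e.g. patients on one ward)
cluster_size = 20       # assumed average number of patients per ward

deff = design_effect(cluster_size, icc)
n_cluster_trial = n_individual * deff

print(f"Design effect: {deff:.2f}")
print(f"Participants per arm needed: {round(n_cluster_trial)} "
      f"(vs {n_individual} with individual randomisation)")
```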
When would you use a cluster trial?
When the chance of contamination is high
When it is preferable for the team to have only one intervention
Types of trial
Explanatory trial
Tests whether an intervention works under optimal situations
Narrower inclusion/exclusion criteria e.g. No comorbid conditions
Smaller sample size
Standardised intervention, delivered by investigators
Demonstrates efficacy (whether a specific intervention works for a specific condition)
Results may be less generalisable
e.g. First trial of DBT (44 participants), DBT delivered by investigators
Pragmatic trial
Tests whether an intervention works in the real world
Broader inclusion/exclusion criteria e.g. includes comorbid conditions
Larger sample size (see the power sketch after this list)
Intervention may be different and delivered by treating clinicians
Demonstrates effectiveness
Results are broadly generalisable
e.g. Pragmatic trial of DBT (80 participants), DBT delivered by clinicians in the NHS
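A rough Python sketch (assuming statsmodels is installed) of how the required sample size grows when the expected effect is smaller, which is one common reason pragmatic trials need more participants; the effect sizes, power and alpha below are assumed illustrative values, not figures from the DBT trials.

```python
# Illustrative power calculation: a smaller standardised effect (as often seen in
# real-world, pragmatic settings) needs more participants to detect. Effect sizes,
# power and alpha are assumed values for illustration only.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()

for label, effect_size in [("explanatory trial (optimal conditions, larger assumed effect)", 0.6),
                           ("pragmatic trial (real-world conditions, smaller assumed effect)", 0.3)]:
    n_per_arm = power_analysis.solve_power(effect_size=effect_size, power=0.8, alpha=0.05)
    print(f"{label}: ~{round(n_per_arm)} participants per arm")
```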
Randomisation
Random assignment of participants to groups
Equal chance
Groups should be the same except for the treatment (so that any difference following the intervention can be causally attributed to the intervention)
Randomisation methods (see the sketch after this list):
Simple randomisation (e.g. flipping a coin, computer-generated random numbers) – better for large trials (in small trials it could result in unequal group sizes)
Block randomisation – randomises participants in blocks so that group sizes stay equal (however, it could still result in group differences, e.g. in demographic factors)
Stratified randomisation – balances demographic characteristics, e.g. gender (by conducting two separate randomisations, one in females and one in males)
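A minimal Python sketch of the three randomisation methods above; the block size, strata and labels are illustrative assumptions, not a prescribed procedure.

```python
import random

random.seed(42)  # fixed seed so the illustrative output is reproducible

# Simple randomisation: each participant assigned independently, like a coin flip.
# In small trials this can leave the two arms unequal in size.
def simple_randomisation(n_participants):
    return [random.choice(["intervention", "control"]) for _ in range(n_participants)]

# Block randomisation: allocate in shuffled blocks (blocks of 4 here, an assumed size)
# so the two arms stay balanced throughout recruitment.
def block_randomisation(n_participants, block_size=4):
    allocations = []
    while len(allocations) < n_participants:
        block = ["intervention", "control"] * (block_size // 2)
        random.shuffle(block)
        allocations.extend(block)
    return allocations[:n_participants]

# Stratified randomisation: a separate (block) randomisation within each stratum,
# e.g. one list for female and one for male participants.
def stratified_randomisation(participants):
    allocations = {}
    for stratum in set(p["gender"] for p in participants):
        members = [p for p in participants if p["gender"] == stratum]
        for person, arm in zip(members, block_randomisation(len(members))):
            allocations[person["id"]] = arm
    return allocations

print(simple_randomisation(10))
print(block_randomisation(10))
print(stratified_randomisation([{"id": i, "gender": g}
                                for i, g in enumerate(["F", "M"] * 5)]))
```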
The control group
Lee et al., 1965
Inclusion of a control group means we can attribute changes in outcome to the treatment we are testing
We can rule out:
Placebo effect
People improve over time
People improve by being in a study (Hawthorne effect)
Types of control group
Placebo group
Treatment as usual
Active control
Blinding
Pildal et al., 2007
Participant/Investigator often blinded
Knowledge of being in the active or control arm could influence outcomes and inflate treatment effects
Trials that are not double blinded give larger estimates of treatment effects than double-blinded trials – especially for subjective outcomes
Types of blinding
Double-blind: participants and investigators - Cannabinoid medication vs. placebo to treat ADHD
Single-blind: participants or investigators - Befriending vs. activity booklet
Open trials: Unblinded - Therapeutic community vs treatment as usual
Results
CONSORT diagram
Analysing trial data
Pre-specified:
Primary outcome (main outcome the paper is looking at)
Secondary outcomes (others that are thought to be important)
Intention to treat (analysis)
Includes every participant who was randomised (regardless of whether they deviated from the protocol – this analysis is closer to the real world, since it reflects what happens in routine practice; see the sketch below)
Per protocol (analysis)
Includes only those who adhered to the protocol
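A minimal Python sketch (using pandas) contrasting the two analysis sets; the DataFrame columns and scores are hypothetical, made up for illustration.

```python
# Minimal sketch of intention-to-treat vs per-protocol analysis sets.
# The columns (group, adhered, outcome) and values are hypothetical.
import pandas as pd

trial = pd.DataFrame({
    "participant": range(1, 9),
    "group":       ["intervention"] * 4 + ["control"] * 4,
    "adhered":     [True, True, False, True, True, False, True, True],  # followed protocol?
    "outcome":     [12, 9, 15, 10, 16, 14, 15, 17],                     # e.g. symptom score
})

# Intention to treat: analyse everyone as randomised, regardless of adherence.
itt = trial.groupby("group")["outcome"].mean()

# Per protocol: analyse only those who adhered to the protocol.
per_protocol = trial[trial["adhered"]].groupby("group")["outcome"].mean()

print("ITT group means:\n", itt)
print("Per-protocol group means:\n", per_protocol)
```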
How do you deal with missing data?
Multiple imputation (estimating what a person's missing data might have been, based on the outcomes of the other participants)
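A minimal sketch of one way multiple imputation can be approximated in Python, assuming scikit-learn is available: IterativeImputer estimates the missing value from the other participants' data, and running it several times with different seeds gives several plausible completed datasets, which would then each be analysed and the results pooled. The scores below are made up.

```python
# Minimal multiple-imputation-style sketch for a missing follow-up score (assumed data).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Columns: baseline score, follow-up score (np.nan = dropped out before follow-up).
scores = np.array([
    [20.0, 14.0],
    [25.0, 18.0],
    [22.0, np.nan],   # missing follow-up
    [30.0, 24.0],
    [27.0, np.nan],   # missing follow-up
])

imputed_datasets = []
for seed in range(5):  # five imputations (an arbitrary, assumed number)
    imputer = IterativeImputer(sample_posterior=True, random_state=seed)
    imputed_datasets.append(imputer.fit_transform(scores))

# Each imputed dataset now has plausible follow-up scores filled in.
for i, dataset in enumerate(imputed_datasets):
    print(f"Imputation {i + 1}: follow-up mean = {dataset[:, 1].mean():.1f}")
```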
Validity
Internal validity
Does the treatment have an effect?
I.e. could the observed effect have been due to differences between the treatment and control groups rather than the treatment itself?
Can be shown in explanatory trials
External validity
Is this effect generalisable to the general population?
Can be shown more by pragmatic trials
Ethical Considerations
Lancet Psychiatry, 2016
Informed consent (information sheet) – but what about recruitment of patients under section?
Placebo – As long as there is no risk of serious harm
Restricting access to potentially lifesaving treatments (more in physical health)
How to critique trials
The Lancet
Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial
The PACE Trial
Patients within the trial complained and suggested there must have been something wrong with it; when it was looked into, numerous flaws were found
Critique:
Selection bias: Included patients did not meet definition of ME
They changed their study design:
Broadened definition of recovery
Reporting of the primary outcome
Problems with trial methodology can lead to bias
Bias is a deviation from the truth in results:
Underestimation – false negative
Overestimation – false positive
Sources of bias in RCTs
Centre for Reviews and Dissemination, 2008
Randomisation
Allocation concealment
Blinding
Attrition bias (drop-outs)
Control group
Results:
Selective reporting
Appropriate analysis
Randomisation
Generating the sequence
The same sort of participants should receive each intervention
The sequence (of randomisation) should not be predictable
E.g. a computerised list of random numbers
Allocation concealment
Schulz, 1995
How you store the randomisation list
The group each participant is allocated to needs to be concealed from investigators
Otherwise investigators could channel those with a better prognosis to the experimental group and those with a poorer prognosis to the control group
“Some physicians took sequentially numbered, opaque, sealed envelopes to the hot light…in the radiology department for deciphering of assignments”
Could lead to inflated treatment effects
Blinding
Lack of adequate blinding can inflate treatment effects
Consider the level of blinding and how this may have affected results
Double-blind – least bias
Open trial – greatest bias
Attrition bias (drop-outs)
It can be difficult to keep patients in trials
The more missing data, the less valid the trial's conclusions
Data not missing at random can bias treatment effects
Results
Selective reporting
Are primary/secondary outcomes pre-specified?
Are all outcomes reported?
Appropriate statistical analysis?
Primary analysis: Intent to Treat
Analysis of baseline-adjusted scores/reporting of a time × treatment effect (see the sketch at the end of this list). Look out for analysis of endpoint scores
Is an effect size reported?
Were the drop-outs adequately accounted for? Look out for LOCF
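A minimal Python sketch (assuming statsmodels is installed) of a baseline-adjusted analysis and a simple standardised effect size, rather than a bare comparison of endpoint scores; the simulated data and the assumed treatment effect are illustrative only.

```python
# Minimal sketch of a baseline-adjusted analysis (ANCOVA-style): the follow-up score
# is regressed on treatment group while adjusting for baseline. Data are simulated,
# with an assumed treatment effect of -4 points, purely for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 40
baseline = rng.normal(25, 5, n)
group = np.repeat(["control", "intervention"], n // 2)
followup = 5 + 0.7 * baseline - 4 * (group == "intervention") + rng.normal(0, 3, n)

trial = pd.DataFrame({"baseline": baseline, "group": group, "followup": followup})

# Baseline-adjusted treatment effect (with confidence interval) from the model output.
model = smf.ols("followup ~ baseline + C(group)", data=trial).fit()
print(model.summary().tables[1])

# A simple standardised effect size (Cohen's d on follow-up scores) for comparison.
diff = trial.groupby("group")["followup"].mean().diff().iloc[-1]
pooled_sd = trial.groupby("group")["followup"].std().pow(2).mean() ** 0.5
print(f"Cohen's d (unadjusted): {diff / pooled_sd:.2f}")
```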