PROBABILITY AND SIGNIFICANCE - Coggle Diagram
use of statistical tables
calculated & critical values
once a statistical test has been carried out, the result is a number - the
calculated value
(/observed value)
to check
statistical significance
the calculated value must be compared with the
critical value
- a number which tells us whether or not we can reject the null hypothesis & accept the alternative hypothesis
each statistical test has its own
table of critical values
developed by statisticians
for some statistical tests, the calculated value must be equal to or greater than the critical value; for other tests it must be equal to or less than the critical value
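the two decision rules above can be sketched as a small helper. this is an illustrative sketch, not a library function - the rule names and example tests in the comments are assumptions for illustration:

```python
# Two decision rules for comparing a calculated value with a critical value.
# For tests like chi-square or Spearman's rho, significance requires
# calculated >= critical; for tests like Mann-Whitney U or the sign test,
# it requires calculated <= critical.

def is_significant(calculated: float, critical: float, rule: str) -> bool:
    """Return True if the null hypothesis can be rejected."""
    if rule == "greater_or_equal":   # e.g. chi-square, Spearman's rho
        return calculated >= critical
    if rule == "less_or_equal":      # e.g. Mann-Whitney U, sign test
        return calculated <= critical
    raise ValueError(f"unknown rule: {rule}")

# e.g. a chi-square calculated value of 5.2 against a critical value of 3.84
print(is_significant(5.2, 3.84, "greater_or_equal"))   # True -> reject the null
```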
using tables of critical values
there are 3 criteria to help the researcher decide which critical value to use:
one-tailed/two-tailed test?
- use a one-tailed test if the hypothesis was directional & a two-tailed test for a non-directional hypothesis. the probability level doubles when a two-tailed test is used, as it is a more conservative prediction
no. of ppts in the study
usually appears as the
N
value on the table. for some tests
degrees of freedom
(df)
are calculated instead
levels of significance
(or
p
value)
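using a table of critical values amounts to picking a row (from N or df) and a column (from the significance level & number of tails), then comparing. the sketch below uses standard published chi-square critical values for df 1 to 3; the function name and structure are assumptions for illustration:

```python
# Standard chi-square critical values, keyed by (df, p) - the same lookup
# a researcher does by eye in a published table.
CHI_SQUARE_CRITICAL = {
    (1, 0.05): 3.841, (1, 0.01): 6.635,
    (2, 0.05): 5.991, (2, 0.01): 9.210,
    (3, 0.05): 7.815, (3, 0.01): 11.345,
}

def chi_square_significant(calculated: float, df: int, p: float = 0.05) -> bool:
    """Chi-square rule: reject the null if calculated >= critical."""
    return calculated >= CHI_SQUARE_CRITICAL[(df, p)]

# the same calculated value can be significant at 0.05 but not at 0.01:
print(chi_square_significant(4.2, 1))        # True  (4.2 >= 3.841)
print(chi_square_significant(4.2, 1, 0.01))  # False (4.2 < 6.635)
```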
levels of significance
0.05 is the standard level in psych research
but more stringent levels of significance may be used (e.g. 0.01) in studies where there may be a human cost, e.g. drug trials - or 'one-off' studies which for whatever reason can't be repeated in the future
the null hypothesis
researchers begin research by writing a hypothesis.
may be directional/non-directional depending on how confident the researcher is in the outcome of the investigation
a hypothesis which states a difference/correlation between conditions is sometimes referred to as an
alternative hypothesis
because it's an alternative to the
null hypothesis
the null hypothesis states there is 'no difference/correlation' between conditions:
statistical test
determines which hypothesis is 'true' & thus whether we reject/accept the null hypothesis
levels of significance & probability
statistical tests work on the basis of probability rather than certainty.
all statistical tests employ a
significance level
- the point at which a researcher can claim they've discovered a large enough difference/correlation within the data to claim an effect has been found
i.e. point at which researcher can reject the null hypothesis & accept the alternative hypothesis
usual level of
significance
in psych is
0.05 (or 5%)
properly written as
p ≤ 0.05
(p stands for probability)
means the probability that the observed effect (the result) occurred when there's no effect in the population is equal to or less than 5%.
means even when researchers claim to have found a significant difference/correlation, there's still up to a 5% chance it isn't true for the target population from which the sample was drawn
psychologists can never be 100% certain about a particular result as they've not tested all members of the population under all possible circumstances
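the meaning of p ≤ 0.05 can be seen by simulation: run many studies where the null hypothesis really is true and count how often a test still comes out 'significant' - about 5% of the time. this sketch uses a two-sample z-test with a known population SD; the sample size, number of trials, and seed are assumed for illustration:

```python
import math
import random

random.seed(1)

def one_null_study(n: int = 30) -> bool:
    """One study with no real effect; True means a 'significant' result."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]   # drawn from the SAME population
    diff = sum(a) / n - sum(b) / n
    z = diff / math.sqrt(2 / n)                  # SE of the difference (sigma = 1)
    return abs(z) >= 1.96                        # two-tailed cut-off for p <= 0.05

trials = 5000
false_positives = sum(one_null_study() for _ in range(trials))
print(f"type 1 rate: {false_positives / trials:.3f}")   # close to 0.05
```

even with no effect anywhere, roughly 1 study in 20 'finds' one - which is exactly the up-to-5% risk described above.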
type 1 and type 2 errors
because researchers can't be 100% certain they've found statistical significance, it's possible that the wrong hypothesis may be accepted
Type 1 error
= the null hypothesis is rejected & the alternative hypothesis accepted when it should've been the other way around because, in reality, the null hypothesis is 'true'
often referred to as an optimistic error/false positive as the researcher claims to have found a significant difference/correlation when one doesn't exist
type 2 error
= the reverse of a type 1 error. the null hypothesis is accepted when it should have been rejected because, in reality, the alternative hypothesis is true. this is a pessimistic error/'false negative'
we're more likely to make a type 1 error if the significance level is too lenient (too high), e.g. 0.1 (10%) rather than 5%
a type 2 error is more likely if the significance level is too stringent (too low), e.g. 0.01 (1%), as a potentially significant value may be missed
psychologists prefer the 5% level of significance as it best balances the risk of making a type 1 or type 2 error
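the trade-off can also be simulated: under the null a lenient cut-off produces more type 1 errors, while under a real effect a stringent cut-off produces more type 2 errors. a sketch using the standard two-tailed z cut-offs (1.645 for p ≤ 0.10, 1.96 for p ≤ 0.05, 2.576 for p ≤ 0.01); the effect size, sample size, and seed are assumptions for illustration:

```python
import math
import random

random.seed(2)

def z_for_study(true_effect: float, n: int = 30) -> float:
    """z-statistic for one two-group study (known sigma = 1)."""
    a = [random.gauss(true_effect, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    return (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)

cutoffs = {0.10: 1.645, 0.05: 1.960, 0.01: 2.576}
trials = 2000
results = {}
for p, z_crit in cutoffs.items():
    # type 1: null true (effect 0) but result called significant
    type1 = sum(abs(z_for_study(0.0)) >= z_crit for _ in range(trials)) / trials
    # type 2: real effect present (0.5 SD, assumed) but result called non-significant
    type2 = sum(abs(z_for_study(0.5)) < z_crit for _ in range(trials)) / trials
    results[p] = (type1, type2)
    print(f"p<={p}: type 1 rate {type1:.3f}, type 2 rate {type2:.3f}")
```

the type 1 rate falls and the type 2 rate rises as the cut-off gets more stringent - the 5% level sits between the two extremes, which is the balance the note above describes.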