Introduction to Hypothesis Testing
Hypothesis testing is a statistical technique that uses sample data to evaluate a hypothesis about a population.
For example:
A researcher completes a research study and then uses a hypothesis test to evaluate the results.
The goal of the hypothesis test is to determine what happens to the population after the treatment is administered.
If the treatment has any effect, it adds a constant to each score.
Step 1: State the hypothesis
Begin by stating a hypothesis about the unknown population. There are two hypotheses:
Null hypothesis states that the treatment will not change the population. Notation: H0
Alternative hypothesis states that there is a change or difference in the dependent variable when the treatment is applied. Notation: H1
Step 2: Set the Criteria for a Decision
The researcher will use the data from the sample to evaluate (not prove) the null hypothesis.
Sample means that are likely to be obtained when H0 is true are those close to the population mean specified by the null hypothesis.
Sample means that are very unlikely to be obtained if H0 is true are those very different from the null hypothesis.
Alpha level: The probability value that is used to define the concept "very unlikely" in a hypothesis test.
The level of significance is how researchers define what is high or low probability.
The critical region consists of the extreme sample values.
These values are very unlikely to be obtained if the null hypothesis is true.
The boundaries of the critical region are determined by the alpha level, or level of significance.
If the sample data fall in the critical region, the null hypothesis is rejected.
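As a concrete sketch (not part of the original notes), the boundaries of the critical region for a two-tailed test can be computed with Python's standard library:

```python
from statistics import NormalDist

# For a two-tailed test, the critical region boundaries are the z-values
# that cut off alpha/2 in each tail of the standard normal distribution.
alpha = 0.05  # level of significance
z_critical = NormalDist().inv_cdf(1 - alpha / 2)  # upper boundary; lower is -z_critical

print(round(z_critical, 2))  # 1.96 for alpha = .05
```

Sample means whose z-scores fall beyond ±1.96 land in the critical region, so H0 is rejected.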
Step 3: Compute Z-score
Test statistic: Sample data converted into a single value that is used to test the hypotheses.
Z score in the frame of the hypothesis test:
z = (M − µ) / σM, where σM is the standard error of the mean
or: z = (actual difference between the sample mean M and the hypothesized µ) / (standard distance expected between M and µ with no treatment effect)
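A minimal Python sketch of the z-score computation, using a hypothetical population (µ = 80, σ = 10) and sample (n = 25, M = 85) chosen only for illustration:

```python
def z_score(sample_mean, mu, sigma, n):
    """z = (M - mu) / standard error, where the standard error is sigma / sqrt(n)."""
    standard_error = sigma / n ** 0.5
    return (sample_mean - mu) / standard_error

# Hypothetical numbers: population mu = 80, sigma = 10; sample of n = 25 with M = 85.
z = z_score(85, 80, 10, 25)
print(z)  # 2.5
```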
Errors and Uncertainty
Type I errors: Occurs when a researcher rejects a null hypothesis that is actually true.
In research terms, the researcher concludes that a treatment has an effect when it really does not.
The alpha level determines the probability of obtaining sample data in the critical region even though the null hypothesis is true.
A Type I error can lead to a false report.
Type II error: Occurs when a researcher fails to reject a null hypothesis that is in fact false.
A Type II error means that the hypothesis test has failed to detect a real treatment effect.
The probability of a Type II error is represented by the symbol β.
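The meaning of the alpha level can be checked with a small simulation (a sketch with made-up population parameters): when H0 is true, sample means should land in the critical region about alpha of the time, and each such rejection is a Type I error.

```python
import random
from statistics import NormalDist

random.seed(0)  # reproducible
alpha = 0.05
z_crit = NormalDist().inv_cdf(1 - alpha / 2)

mu, sigma, n = 100, 15, 25  # hypothetical untreated population
trials, rejections = 10_000, 0
for _ in range(trials):
    # Draw every sample from the untreated population, so H0 is true
    # and every rejection is a Type I error.
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    m = sum(sample) / n
    z = (m - mu) / (sigma / n ** 0.5)
    if abs(z) > z_crit:
        rejections += 1

print(rejections / trials)  # close to alpha, i.e. about 0.05
```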
Step 4: Make a Decision
Two possible outcomes:
1. The sample data are located in the critical region. The sample is not consistent with H0, and our decision is to reject the null hypothesis.
2. The sample data are not in the critical region. Because the data do not provide strong evidence that the null hypothesis is wrong, we fail to reject the null hypothesis.
A significant result means that the null hypothesis has been rejected.
A result is said to be significant, or statistically significant, if it is very unlikely to occur when the null hypothesis is true.
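The four steps can be combined into one small decision function; this is a sketch with hypothetical numbers, not a prescribed procedure:

```python
from statistics import NormalDist

def two_tailed_decision(sample_mean, mu, sigma, n, alpha=0.05):
    """Reject H0 if the sample's z-score falls in the critical region."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    z = (sample_mean - mu) / (sigma / n ** 0.5)
    return "reject H0" if abs(z) > z_crit else "fail to reject H0"

# Hypothetical population mu = 80, sigma = 10, samples of n = 25.
print(two_tailed_decision(85, 80, 10, 25))  # z = 2.5 > 1.96 -> reject H0
print(two_tailed_decision(82, 80, 10, 25))  # z = 1.0 < 1.96 -> fail to reject H0
```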
Factors that influence hypothesis test
The larger the variability, the lower the likelihood of finding a significant treatment effect.
A large difference in mean indicates that the treated sample is noticeably different from the untreated population and usually supports a conclusion that the treatment effect is significant.
The number of scores in the sample affects the hypothesis test: reducing the number of scores increases the standard error and therefore reduces the z-score.
Directional (One Tailed) Hypothesis Tests
In this hypothesis test, the statistical hypotheses specify either an increase or a decrease in the population mean. In other words, they indicate the direction of the effect being measured.
It is always good to begin by stating the hypotheses (whether or not the treatment has an effect).
Directional (one-tailed) hypothesis tests use the symbols < and > (or ≤ and ≥) to describe the predicted effect.
For directional (one-tailed) hypothesis tests, defining the critical region is often the easiest way to determine whether the sample values provide convincing evidence that the treatment has an effect.
The difference between one- and two-tailed hypothesis tests lies in the criteria for rejecting H0.
A one-tailed test allows you to reject the null hypothesis when the difference between the sample and the population is relatively small, provided the difference is in the predicted direction.
A two-tailed test requires a relatively large difference, regardless of direction.
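The contrast between the two critical boundaries can be seen directly (a sketch using Python's standard library):

```python
from statistics import NormalDist

alpha = 0.05
one_tailed = NormalDist().inv_cdf(1 - alpha)      # about 1.64
two_tailed = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96

# The one-tailed boundary sits closer to the center of the distribution,
# so a smaller difference (in the predicted direction) is enough to reject H0.
print(round(one_tailed, 2), round(two_tailed, 2))  # 1.64 1.96
```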
Effect Size
The effect size is a measurement of the absolute magnitude of a treatment effect, independent of the size of the samples being used.
The formula is:
d = (Mtreatment − µno treatment) / σ
or: mean difference / standard deviation
Cohen's d (the formula above) standardizes effect size by measuring the mean difference in terms of the standard deviation.
Cohen also suggested criteria for evaluating the size of a treatment effect:
d = .2 - small effect
d = .5 - medium effect
d = .8 - large effect
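Cohen's d from the formula above, with hypothetical numbers chosen only for illustration:

```python
def cohens_d(mean_treatment, mu_no_treatment, sigma):
    """Cohen's d: the mean difference measured in standard deviation units."""
    return (mean_treatment - mu_no_treatment) / sigma

# Hypothetical: treated sample mean 85, untreated population mean 80, sigma 10.
d = cohens_d(85, 80, 10)
print(d)  # 0.5 -> a medium effect by Cohen's criteria
```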
Statistical Power
The power of a statistical test is the probability that the test will correctly reject a false null hypothesis.
In other words, a powerful test will detect a treatment effect if one really exists.
The formula is: power = 1 − β
Power is usually calculated before researchers conduct a study.
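Power = 1 − β can be computed directly for a z-test. This sketch assumes a hypothetical true treatment effect (untreated µ = 80, true treated mean 85, σ = 10, n = 25):

```python
from statistics import NormalDist

def power_two_tailed(mu0, mu_true, sigma, n, alpha=0.05):
    """Probability of rejecting H0 when the true population mean is mu_true."""
    se = sigma / n ** 0.5
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    upper = mu0 + z_crit * se  # boundaries of the critical region
    lower = mu0 - z_crit * se
    sampling = NormalDist(mu_true, se)  # sampling distribution of M under the real effect
    return (1 - sampling.cdf(upper)) + sampling.cdf(lower)

print(round(power_two_tailed(80, 85, 10, 25), 2))  # about 0.71, so beta is about 0.29
```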
Factors that affect Power
The size of the sample affects power: a larger sample produces greater power for a hypothesis test, because it reduces the standard error and makes it more likely that a real treatment effect will place the sample mean in the critical region.
Reducing the Alpha level also reduces power.
Changing from a two-tailed to a one-tailed test increases power by placing a larger proportion of the treatment distribution in the critical region.
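All three factors can be verified with a small parameterized power function (a sketch with hypothetical population numbers):

```python
from statistics import NormalDist

def power(mu0, mu_true, sigma, n, alpha=0.05, tails=2):
    """Power of a z-test: probability of rejecting H0 when the true mean is mu_true."""
    se = sigma / n ** 0.5
    z_crit = NormalDist().inv_cdf(1 - alpha / tails)
    sampling = NormalDist(mu_true, se)
    p = 1 - sampling.cdf(mu0 + z_crit * se)
    if tails == 2:
        p += sampling.cdf(mu0 - z_crit * se)  # rejections in the other tail
    return p

# Larger sample -> greater power.
assert power(80, 85, 10, 25) < power(80, 85, 10, 100)
# Smaller alpha level -> lower power.
assert power(80, 85, 10, 25, alpha=0.01) < power(80, 85, 10, 25, alpha=0.05)
# One-tailed test (predicting the right direction) -> greater power than two-tailed.
assert power(80, 85, 10, 25, tails=1) > power(80, 85, 10, 25, tails=2)
```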