Chapter 9: Introduction to the t-statistic
The t-statistic: an alternative to z
Substitute the sample variance (s^2) or sample standard deviation (s) in place of the unknown population value to create the estimated standard error: s_M = √(s^2/n).
The t statistic, t = (M − μ)/s_M, is used to test hypotheses about an unknown population mean when the value of the population standard deviation (σ) is unknown.
As sample size increases, so does the value for degrees of freedom (df = n-1), and s^2 becomes a better estimate of the population variance.
As df gets very large, the t distribution gets closer in shape to a normal z-score distribution (bell-shaped, symmetrical, mean of zero).
The t distribution has more variability than a normal z distribution, especially when df values are small: it is flatter and more spread out because both the numerator & the denominator of t vary from sample to sample.
As sample size and df increase, the variability in the distribution decreases, and it more closely resembles a normal distribution.
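One way to see this convergence is to compare critical values as df grows. A minimal sketch in Python using scipy.stats; the df values chosen are just illustrative:

```python
from scipy import stats

for df in (5, 30, 120, 1000):
    t_crit = stats.t.ppf(0.975, df)   # two-tailed critical value, alpha = .05
    print(f"df = {df:4d}: t critical = {t_crit:.3f}")

print(f"z critical  = {stats.norm.ppf(0.975):.3f}")   # 1.960, for comparison
```

With df = 5 the critical t is about 2.571; by df = 1000 it is nearly identical to the z value of 1.960.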
Hypothesis tests with the t statistic
When the obtained difference between the data & the hypothesis (the numerator) is much greater than the difference expected by chance (the standard error in the denominator), we obtain a large value for t (whether positive or negative).
In this case, we conclude that the data are not consistent with the hypothesis and the decision is to reject the null hypothesis.
When the difference is small relative to the standard error, we obtain a t statistic near zero, and the decision is to fail to reject the null hypothesis.
All you need to compute a t-statistic is a null hypothesis and a sample from the unknown population.
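For example, a minimal sketch of a complete one-sample t test in Python; the sample data and the hypothesized mean (μ = 10) are made up for illustration, and scipy's built-in test is used only as a cross-check:

```python
import numpy as np
from scipy import stats

sample = np.array([12.1, 9.8, 11.4, 10.9, 13.2, 10.5, 11.8, 12.6])
mu = 10.0                              # hypothesized population mean (H0)

n = sample.size
M = sample.mean()
s = sample.std(ddof=1)                 # sample standard deviation (df = n - 1)
s_M = s / np.sqrt(n)                   # estimated standard error
t = (M - mu) / s_M                     # obtained difference / expected error

print(f"t = {t:.3f}, df = {n - 1}")
t_check, p = stats.ttest_1samp(sample, mu)   # cross-check with scipy
print(f"scipy: t = {t_check:.3f}, p = {p:.4f}")
```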
Assumptions of the t-test
1) The values in the sample must consist of independent observations.
2) The population sampled must be normal.
Violating this assumption has little practical effect on the results obtained for a t statistic, especially when the sample size is relatively large.
Any factor that influences the standard error affects the likelihood of rejecting the null hypothesis and finding a significant treatment effect.
The larger the sample variance, the larger the estimated standard error, and the less likely we are to obtain a significant treatment effect.
The larger the sample, the smaller the estimated standard error; large samples therefore tend to produce bigger t statistics and are more likely to yield significant results.
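A small sketch showing how the estimated standard error s_M = √(s^2/n) responds to both factors; the variance and sample-size values below are illustrative:

```python
import math

def estimated_standard_error(s2, n):
    """Estimated standard error s_M = sqrt(s^2 / n)."""
    return math.sqrt(s2 / n)

for s2 in (4, 16):                # smaller vs. larger sample variance
    for n in (4, 16, 64):         # increasing sample size
        s_M = estimated_standard_error(s2, n)
        print(f"s^2 = {s2:2d}, n = {n:2d} -> s_M = {s_M:.3f}")
```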
Measuring effect size for the t statistic
Cohen's estimated d
The numerator (M − μ) measures the magnitude of the treatment effect.
The sample std. dev. in the denominator standardizes the mean difference into standard-deviation units: estimated d = (M − μ)/s, so d = 1.00 means the treatment shifted the mean by one full standard deviation.
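A one-line computation makes the scaling concrete; the values for M, μ, and s below are hypothetical:

```python
# Hypothetical values: sample mean M, hypothesized mean mu, sample std. dev. s.
M, mu, s = 12.3, 10.0, 2.5
d = (M - mu) / s                  # Cohen's estimated d
print(f"estimated d = {d:.2f}")   # 0.92 -> a shift of almost one std. dev.
```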
If we can measure how much of the variability in the scores is explained by the treatment, we obtain a measure of the size of the treatment effect.
Percentage of variance accounted for by the treatment: r^2 = (variability accounted for by the treatment effect) / (total variability) = t^2 / (t^2 + df)
0.01 = small effect
0.09 = medium effect
0.25 = large effect
*only slightly affected by changes in the size of the sample
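A sketch of the r^2 computation, using the formula r^2 = t^2/(t^2 + df) above; the t and df values are hypothetical:

```python
# Hypothetical test outcome: t = 3.00 with df = 15.
t, df = 3.00, 15
r2 = t**2 / (t**2 + df)           # proportion of variance accounted for
print(f"r^2 = {r2:.3f}")          # 9 / 24 = 0.375 -> a large effect
```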
Confidence interval
A confidence interval is a range of values centered around a sample statistic.
To construct a confidence interval, estimate the t value that bounds the desired level of confidence and solve the t equation for the unknown population mean: μ = M ± t·s_M (see the sketch below).
To gain more confidence in your estimate, you must increase the width of the interval. Conversely, to have a smaller, more precise interval, you must give up confidence.
A bigger sample gives you more information about the population and allows you to make a more precise estimate (a narrower interval).
*Since confidence intervals are influenced by sample size, they do not provide an unqualified measure of absolute effect size and are not an adequate substitute for Cohen's d or r^2.
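To make the width-versus-confidence trade-off concrete, here is a minimal sketch that builds intervals at two confidence levels from the same sample statistics; the values for M, s_M, and df are hypothetical, and scipy supplies the critical t values:

```python
from scipy import stats

# Hypothetical sample statistics: mean M, estimated standard error s_M, and df.
M, s_M, df = 12.3, 0.82, 15

for confidence in (0.80, 0.95):
    t_crit = stats.t.ppf(1 - (1 - confidence) / 2, df)   # two-tailed critical t
    lo, hi = M - t_crit * s_M, M + t_crit * s_M
    print(f"{confidence:.0%} CI for mu: ({lo:.2f}, {hi:.2f})")
# The 95% interval is wider than the 80% interval: more confidence, less precision.
```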