The t Test for Two Independent Samples
Introduction to the Independent-Measures Design
independent-measures research design: a research design that uses a separate group of participants for each treatment condition (or for each population); also called a between-subjects design.
between-subjects design: another name for the independent-measures research design.
repeated-measures research design (within-subjects design): a design in which the two sets of data are obtained from the same group of participants.
The Role of Sample Variance and Sample Size in the Independent-Measures t Test
Error and the Role of Individual Differences
Individual differences among participants contribute to variability in a sample, which in turn affects the standard error. Random assignment in an independent-measures design helps minimize bias between groups, but pretesting or using matched samples can further ensure that groups are equivalent before treatments are applied.
Hypothesis Tests with the Independent-Measures t Statistic
Assumptions Underlying the Independent-Measures t Formula
The two populations from which the samples are selected must be normal.
The two populations from which the samples are selected must have equal variances.
The observations within each sample must be independent.
Hartley’s F-Max Test: To determine whether the homogeneity of variance assumption is satisfied in an independent-measures test, you can compare the sample variances because they should be similar if the assumption holds. Alternatively, you can use a statistical test such as Hartley’s F-max test, which provides a more objective way to check equality of variances for two or more groups.
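The F-max statistic described above is simply the largest sample variance divided by the smallest. A minimal sketch (the data and function name are made up for illustration; the critical value would still need to be looked up in an F-max table):

```python
def f_max(*samples):
    """Hartley's F-max: largest sample variance divided by smallest."""
    def sample_variance(xs):
        n = len(xs)
        mean = sum(xs) / n
        return sum((x - mean) ** 2 for x in xs) / (n - 1)
    variances = [sample_variance(s) for s in samples]
    return max(variances) / min(variances)

# Hypothetical data: similar means, but group2 is much more spread out.
group1 = [4, 6, 5, 7, 4, 6]
group2 = [3, 8, 2, 9, 4, 7]
print(round(f_max(group1, group2), 2))  # 5.66
```

A value near 1.00 indicates similar variances; a large value (compared to the table's critical value for the given number of groups and df) suggests the homogeneity assumption is violated.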
Directional Hypotheses and One-Tailed Tests
Step 1: State the Hypotheses and Select the Alpha Level
Step 2: Locate the Critical Region
Step 3: Collect the Data and Calculate the Test Statistic
Step 4: Make a Decision
The Hypotheses and the Independent-Measures t Statistic
Calculating the Estimated Standard Error
Each of the two sample means represents its own population mean, but in each case there is some error. M1 approximates μ1 with some error. M2 approximates μ2 with some error.
For the independent-measures t statistic, we want to know the total amount of error involved in using two sample means to approximate two population means. To do this, we will find the error from each sample separately and then add the two errors together.
Pooled Variance: One method for correcting the bias in the standard error is to combine the two sample variances into a single value
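The pooled variance combines the two samples' sums of squares, weighted by their degrees of freedom: sp^2 = (SS1 + SS2)/(df1 + df2). A small sketch with made-up data:

```python
def pooled_variance(sample1, sample2):
    """Pooled variance: s_p^2 = (SS1 + SS2) / (df1 + df2)."""
    def ss(xs):  # sum of squared deviations from the sample mean
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs)
    df1, df2 = len(sample1) - 1, len(sample2) - 1
    return (ss(sample1) + ss(sample2)) / (df1 + df2)

# Hypothetical samples: SS1 = 8 with df = 2, SS2 = 20 with df = 3.
print(pooled_variance([1, 3, 5], [2, 4, 6, 8]))  # (8 + 20) / 5 = 5.6
```

Because it weights by df, the pooled variance always falls between the two sample variances, closer to the variance of the larger sample.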
The Formulas for an Independent-Measures Hypothesis Test
The basic structure of the t statistic is the same for both the independent-measures and the single-sample hypothesis tests. In both cases,
t = (actual difference between the sample data and the hypothesis) / (difference expected between the sample data and the hypothesis if there is no treatment effect)
The independent-measures t is basically a two-sample t that doubles all the elements of the single-sample t formulas.
Estimated Standard Error: s(M1−M2) = √(s1^2/n1 + s2^2/n2); with the pooled variance correction, s(M1−M2) = √(sp^2/n1 + sp^2/n2)
The Hypotheses for an Independent-Measures Test
the null hypothesis for the independent-measures test is: H0: μ1 − μ2 = 0 (no difference between the population means)
the alternative hypothesis states that there is a mean difference between the two populations: H1: μ1 − μ2 ≠ 0
The Final Formula and Degrees of Freedom: t = [(M1 − M2) − (μ1 − μ2)] / s(M1−M2), with df = df1 + df2 = (n1 − 1) + (n2 − 1)
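Putting the pieces together, the full computation can be sketched as follows (the data are invented for illustration; under H0, the population mean difference μ1 − μ2 is 0):

```python
import math

def independent_t(sample1, sample2):
    """Return (t, df) for an independent-measures t test of H0: mu1 - mu2 = 0."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    ss1 = sum((x - m1) ** 2 for x in sample1)
    ss2 = sum((x - m2) ** 2 for x in sample2)
    df = (n1 - 1) + (n2 - 1)
    sp2 = (ss1 + ss2) / df                # pooled variance
    se = math.sqrt(sp2 / n1 + sp2 / n2)   # estimated standard error
    t = ((m1 - m2) - 0) / se              # H0: mu1 - mu2 = 0
    return t, df

# Hypothetical samples: M1 = 10, M2 = 4, SS1 = SS2 = 8.
t, df = independent_t([8, 10, 12], [2, 4, 6])
print(round(t, 2), df)  # 3.67 4
```

The obtained t is then compared with the critical value for df = 4 at the chosen alpha level to make the decision in Step 4.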
Effect Size and Confidence Intervals for the Independent-Measures t
Explained Variance and r^2
By measuring exactly how much of the variability can be explained, we can obtain a measure of how big the treatment effect actually is. The calculation of r^2 for the independent-measures t is exactly the same as it was for the single-sample t
r^2 = t^2/(t^2 + df)
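The r^2 formula above is a one-line computation; the example values here are made up:

```python
def r_squared(t, df):
    """Percentage of variance accounted for: r^2 = t^2 / (t^2 + df)."""
    return t ** 2 / (t ** 2 + df)

# Hypothetical result: t = 3.00 with df = 16 explains 36% of the variance.
print(round(r_squared(3.0, 16), 2))  # 0.36
```

By common guidelines, r^2 = 0.01 is a small effect, 0.09 medium, and 0.25 large.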
Confidence Intervals for Estimating μ1 − μ2
As with the single-sample t, the first step is to solve the t equation for the unknown parameter. For the independent-measures t statistic, we obtain
μ1 − μ2 = (M1 − M2) ± t·s(M1−M2)
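A minimal sketch of that interval, using made-up values; the critical t (here 2.776 for df = 4 at 95% confidence) would normally come from a t distribution table:

```python
def confidence_interval(m1, m2, se, t_crit):
    """Interval estimate of mu1 - mu2: (M1 - M2) +/- t_crit * se."""
    diff = m1 - m2
    return diff - t_crit * se, diff + t_crit * se

# Hypothetical values: M1 - M2 = 6, se = 1.633, t_crit = 2.776 (df = 4, 95%).
lo, hi = confidence_interval(10, 4, 1.633, 2.776)
print(round(lo, 2), round(hi, 2))  # 1.47 10.53
```

Note that the interval is centered at the sample mean difference and widens as the standard error or the confidence level increases.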
Cohen’s Estimated d
d = mean difference/standard deviation = (μ1 − μ2)/σ; because the population values are unknown, the estimated d uses sample values: estimated d = (M1 − M2)/√(pooled variance)
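The estimated d above can be computed directly from the two samples (the data here are invented for illustration):

```python
import math

def estimated_d(sample1, sample2):
    """Estimated Cohen's d = (M1 - M2) / sqrt(pooled variance)."""
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = sum(sample1) / n1, sum(sample2) / n2
    ss1 = sum((x - m1) ** 2 for x in sample1)
    ss2 = sum((x - m2) ** 2 for x in sample2)
    sp2 = (ss1 + ss2) / ((n1 - 1) + (n2 - 1))
    return (m1 - m2) / math.sqrt(sp2)

# Hypothetical samples: mean difference 6, pooled variance 4, so d = 3.0.
print(estimated_d([8, 10, 12], [2, 4, 6]))  # 3.0
```

By Cohen's guidelines, d = 0.2 is a small effect, 0.5 medium, and 0.8 large.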
Confidence Intervals and Hypothesis Tests
Estimation not only quantifies the size of a treatment effect but also provides information about its significance. A 95% confidence interval that does not include zero indicates that the null hypothesis can be rejected with 95% confidence, while a confidence interval that does include zero means the null hypothesis cannot be rejected.