Week 29: The Single-Factor Within-Subjects Analysis of Variance

Let’s change focus a bit this week and look at some ANOVA-related tests for dependent samples. We can start with the single-factor within-subjects analysis of variance!

When Would You Use It?
The single-factor within-subjects analysis of variance is a parametric test used to determine if, in a set of k dependent samples, at least two samples represent populations with different mean values.

What Type of Data?
The single-factor within-subjects analysis of variance requires interval or ratio data.

Test Assumptions

  • The sample of subjects has been randomly chosen from the population it represents.
  • The distribution of data in the underlying populations for each experimental condition/factor is normal.
  • The assumption of sphericity is met.

Test Process
Step 1: Formulate the null and alternative hypotheses. The null hypothesis claims that the k population means are equal. The alternative hypothesis claims that at least two of the k population means are different.

Step 2: Compute the test statistic, an F-value. To do so, calculate the following sums of squares values for between-conditions (SSBC), between-subjects (SSBS), and the residual (SSR). Let X_ij denote subject i's score in condition j (with i = 1, …, n subjects and j = 1, …, k conditions), let X̄_j be the mean of condition j, X̄_i the mean of subject i, and X̄_G the grand mean:

$$SS_{BC} = n\sum_{j=1}^{k}\left(\bar{X}_{j} - \bar{X}_{G}\right)^2$$

$$SS_{BS} = k\sum_{i=1}^{n}\left(\bar{X}_{i} - \bar{X}_{G}\right)^2$$

$$SS_{R} = \sum_{i=1}^{n}\sum_{j=1}^{k}\left(X_{ij} - \bar{X}_{G}\right)^2 - SS_{BC} - SS_{BS}$$
Then compute the mean squares for between-conditions (MSBC), between-subjects (MSBS), and the residual (MSR):

$$MS_{BC} = \frac{SS_{BC}}{k-1} \qquad MS_{BS} = \frac{SS_{BS}}{n-1} \qquad MS_{R} = \frac{SS_{R}}{(n-1)(k-1)}$$
Finally, compute the F statistic by calculating the ratio:

$$F = \frac{MS_{BC}}{MS_{R}}$$
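To make the bookkeeping in Step 2 concrete, here is a minimal NumPy sketch (the function name and layout are my own; it assumes a complete n × k score matrix with no missing data):

```python
import numpy as np

def within_subjects_anova(scores):
    """F statistic for a single-factor within-subjects ANOVA.

    scores: an (n subjects) x (k conditions) array of interval/ratio data.
    Returns the F statistic plus the numerator and denominator df.
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand_mean = scores.mean()
    cond_means = scores.mean(axis=0)   # mean of each condition (column)
    subj_means = scores.mean(axis=1)   # mean of each subject (row)

    ss_bc = n * np.sum((cond_means - grand_mean) ** 2)  # between-conditions
    ss_bs = k * np.sum((subj_means - grand_mean) ** 2)  # between-subjects
    ss_total = np.sum((scores - grand_mean) ** 2)
    ss_r = ss_total - ss_bc - ss_bs                     # residual

    ms_bc = ss_bc / (k - 1)                             # MSBC
    ms_r = ss_r / ((n - 1) * (k - 1))                   # MSR
    return ms_bc / ms_r, k - 1, (n - 1) * (k - 1)
```

For example, the tiny 3 × 2 matrix [[1, 3], [2, 4], [3, 8]] gives F = 9.0 with 1 and 2 degrees of freedom, which you can verify by hand against the formulas above.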
Step 3: Obtain the p-value associated with the calculated F statistic. The p-value indicates the probability of a ratio of MSBC to MSR equal to or larger than the observed ratio in the F statistic, under the assumption that the null hypothesis is true. Unless you have software, it probably isn’t possible to calculate the exact p-value of your F statistic. Instead, you can use an F table (such as this one) to obtain the critical F value for a prespecified α-level. To use this table, first determine the α-level. Find the degrees of freedom for the numerator (MSBC has k − 1 degrees of freedom) and locate the corresponding column on the table. Then find the degrees of freedom for the denominator (MSR has (n − 1)(k − 1) degrees of freedom) and locate the corresponding set of rows on the table. Find the row specific to your α-level. The value at the intersection of the row and column is your critical F value.
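If software is available, the critical value and exact p-value can be pulled from the F distribution directly. Here is a sketch using SciPy's F distribution; the degrees of freedom match a k = 3, n = 30 design, and the observed F of 5.0 is just a placeholder:

```python
from scipy.stats import f as f_dist

alpha = 0.05
df_num = 2      # k - 1, e.g. k = 3 conditions
df_denom = 58   # (n - 1)(k - 1), e.g. n = 30 subjects

# Critical F value for the prespecified alpha-level
f_crit = f_dist.ppf(1 - alpha, df_num, df_denom)

# Exact p-value for an observed F statistic (5.0 is a placeholder)
p_value = f_dist.sf(5.0, df_num, df_denom)
```

`ppf` inverts the cumulative distribution to give the critical value, and `sf` (the survival function, 1 − CDF) gives the upper-tail probability, which is exactly the p-value described above.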

Step 4: Determine the conclusion. If the p-value is larger than the prespecified α-level (or the calculated F statistic is smaller than the critical F value), fail to reject the null hypothesis (that is, retain the claim that the population means are all equal). If the p-value is smaller than the prespecified α-level (or the calculated F statistic is larger than the critical F value), reject the null hypothesis in favor of the alternative.

The example I want to look at today comes from a previous semester’s STAT 213 grades. The class had two midterms and one final. Taking a sample of n = 30 students from this class, I wanted to see if the average test grades were all similar across the three tests or if there were some statistically significant differences. Let α = 0.05.

H0: µmidterm1 = µmidterm2 = µfinal
Ha: at least one pair of population means differs



For this case, the critical F value is 3.15. Since the computed F value is larger than the critical F value, we reject H0 and conclude that at least two test grades have population means that are statistically significantly different.
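Since I can’t share the actual grade data here, the full procedure can be sketched end-to-end on simulated grades instead (the distribution parameters and per-test difficulty shifts below are made up for illustration, not the real STAT 213 numbers):

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(42)            # simulated grades, not real class data
n, k = 30, 3                               # 30 students, 3 tests
ability = rng.normal(75, 10, size=(n, 1))  # each student's overall level
shift = np.array([0.0, 3.0, -2.0])         # hypothetical difficulty shift per test
grades = np.clip(ability + shift + rng.normal(0, 5, size=(n, k)), 0, 100)

grand = grades.mean()
ss_bc = n * np.sum((grades.mean(axis=0) - grand) ** 2)  # between-conditions
ss_bs = k * np.sum((grades.mean(axis=1) - grand) ** 2)  # between-subjects
ss_r = np.sum((grades - grand) ** 2) - ss_bc - ss_bs    # residual
f_stat = (ss_bc / (k - 1)) / (ss_r / ((n - 1) * (k - 1)))
p_value = f_dist.sf(f_stat, k - 1, (n - 1) * (k - 1))
# Reject H0 at alpha = 0.05 if p_value < 0.05
```

Because the same students take every test, the between-subjects variability (how strong each student is overall) is pulled out into SSBS rather than being lumped into the error term, which is what makes the within-subjects design more powerful than its between-subjects counterpart.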