# How to do a hypothesis test for the difference between means when both population variances are known (in Python, using SciPy)

## Task

Assume we have two samples, $x_1, x_2, \ldots, x_{n_1}$ and $x'_1, x'_2, \ldots, x'_{n_2}$, that come from normally distributed populations with known variances, and that the two sample means are $\bar{x}$ and $\bar{x}'$, respectively. We might want to ask whether the difference $\bar{x}-\bar{x}'$ is significantly different from, greater than, or less than zero.

Related tasks:

- How to compute a confidence interval for the difference between two means when both population variances are known
- How to do a hypothesis test for a mean difference (matched pairs)
- How to do a hypothesis test for a population proportion
- How to do a hypothesis test for population variance
- How to do a hypothesis test for the difference between two proportions
- How to do a hypothesis test for the mean with known standard deviation
- How to do a hypothesis test for the ratio of two population variances
- How to do a hypothesis test of a coefficient’s significance
- How to do a one-sided hypothesis test for two sample means
- How to do a two-sided hypothesis test for a sample mean
- How to do a two-sided hypothesis test for two sample means

## Solution

We’re going to use fake data here, but you can replace our fake data with your real data below. You will need not only the samples but also the known population standard deviations.

```python
sample1 = [ 5,  8, 10,  3,  6,  2]
sample2 = [13, 20, 16, 12, 18, 15]
population1_sd = 2.4
population2_sd = 3
```

We must compute the sizes and means of the two samples.

```python
import numpy as np

n1 = len(sample1)
n2 = len(sample2)
sample1_mean = np.mean(sample1)
sample2_mean = np.mean(sample2)
```

We choose a value $0 \le \alpha \le 1$ as the probability of a Type I error (a false positive, finding we should reject $H_0$ when it’s actually true). We will use $\alpha=0.05$ in this example.
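Equivalently, one can reject $H_0$ when the test statistic exceeds a critical value determined by $\alpha$. As a quick sketch (assuming a two-tailed test at $\alpha=0.05$), the critical value can be computed from the normal quantile function:

```python
from scipy import stats

alpha = 0.05
# Two-tailed critical value: reject H0 when |test statistic| exceeds this
critical_value = stats.norm.ppf(1 - alpha/2)
print(critical_value)  # approximately 1.96
```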

### Two-tailed test

In a two-tailed test, the null hypothesis is that the difference is zero, $H_0: \bar{x} - \bar{x}' = 0$. We compute a test statistic and $p$-value as follows.
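Concretely, the test statistic computed in the code below is the standard two-sample $z$ statistic for known variances,

$$z = \frac{\bar{x} - \bar{x}'}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}},$$

where $\sigma_1$ and $\sigma_2$ are the known population standard deviations. Under $H_0$, $z$ follows a standard normal distribution.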

```python
from scipy import stats

test_statistic = ( (sample1_mean - sample2_mean) /
                   np.sqrt(population1_sd**2/n1 + population2_sd**2/n2) )
2*stats.norm.sf(abs(test_statistic))  # two-tailed p-value
```

```
1.8204936819059392e-10
```

Our $p$-value is less than $\alpha$, so we have sufficient evidence to reject the null hypothesis. The difference between the means is significantly different from zero.
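If you need to run this test on several datasets, the steps above can be wrapped in a small helper function. This is only a sketch; the function name `two_sample_z_test` is our own and not part of SciPy:

```python
import numpy as np
from scipy import stats

def two_sample_z_test(s1, s2, sd1, sd2, alpha=0.05):
    """Two-tailed z test for a difference of means with known population SDs.
    Returns the test statistic, the p-value, and whether to reject H0."""
    z = ( (np.mean(s1) - np.mean(s2)) /
          np.sqrt(sd1**2/len(s1) + sd2**2/len(s2)) )
    p = 2*stats.norm.sf(abs(z))
    return z, p, p < alpha

z, p, reject = two_sample_z_test([5, 8, 10, 3, 6, 2],
                                 [13, 20, 16, 12, 18, 15], 2.4, 3)
print(z, p, reject)
```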

### Right-tailed test

In a right-tailed test, the null hypothesis is $H_0: \bar{x} - \bar{x}' \le 0$. That is, we are testing whether the difference is greater than zero.

The code is very similar to the previous example, differing only in how the $p$-value is computed. We repeat the code the examples have in common to make them easier to copy and paste.

```python
from scipy import stats

test_statistic = ( (sample1_mean - sample2_mean) /
                   np.sqrt(population1_sd**2/n1 + population2_sd**2/n2) )
stats.norm.sf(test_statistic)  # right-tailed p-value
```

```
0.9999999999089754
```

Our $p$-value is greater than $\alpha$, so we do not have sufficient evidence to reject the null hypothesis. We would continue to assume that the difference in means is less than or equal to zero.

### Left-tailed test

In a left-tailed test, the null hypothesis is $H_0: \bar{x} - \bar{x}' \ge 0$. That is, we are testing whether the difference is less than zero.

The code is very similar to the previous example, differing only in how the $p$-value is computed. We repeat the code the examples have in common to make them easier to copy and paste.

```python
from scipy import stats

test_statistic = ( (sample1_mean - sample2_mean) /
                   np.sqrt(population1_sd**2/n1 + population2_sd**2/n2) )
stats.norm.sf(-test_statistic)  # left-tailed p-value
```

```
9.102468409529696e-11
```

Our $p$-value is less than $\alpha$, so we have sufficient evidence to reject the null hypothesis. The difference between the means is significantly less than zero.
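As a sanity check, note that `stats.norm.sf(-z)` equals `stats.norm.cdf(z)` by the symmetry of the standard normal distribution, so the left-tailed $p$-value can equivalently be computed with the CDF. A self-contained sketch using the same fake data:

```python
import numpy as np
from scipy import stats

sample1 = [ 5,  8, 10,  3,  6,  2]
sample2 = [13, 20, 16, 12, 18, 15]
z = ( (np.mean(sample1) - np.mean(sample2)) /
      np.sqrt(2.4**2/len(sample1) + 3**2/len(sample2)) )
left_p = stats.norm.sf(-z)   # as in the example above
same_p = stats.norm.cdf(z)   # equivalent, by symmetry of the normal
print(left_p, same_p)
```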

Content last modified on 24 July 2023.


Contributed by Elizabeth Czarniak (CZARNIA_ELIZ@bentley.edu)