R | Function "t.test()" vs "qnorm()": issue with the 95 percent confidence interval

In R, I loaded the mtcars data and tried to find the 95 percent confidence interval of the variable mpg with qnorm():

data(mtcars)
with(mtcars, qnorm(p = 0.975, mean = mean(mpg), sd = sd(mpg)))

which returns:

31.90323

And:

with(mtcars, qnorm(p = 0.025, mean = mean(mpg), sd = sd(mpg)))

returns:

8.525658

But with the t.test() function, the result is different:

with(mtcars, t.test(mpg, conf.level = 0.95))

I get:

95 percent confidence interval:
17.91768 22.26357

I don't understand why that is. Please enlighten me.

r


Solution 1:[1]

The confidence interval given by t.test() is for the mean, not for the variable itself. To see this, replace the standard deviation of the variable, sd(mpg), with the standard deviation of the mean (the standard error), sd(mpg)/sqrt(n):

Proof:

with(mtcars, qnorm(p = 0.975, mean = mean(mpg), sd = sd(mpg)/sqrt(length(mpg))))
with(mtcars, qnorm(p = 0.025, mean = mean(mpg), sd = sd(mpg)/sqrt(length(mpg))))

which returns:

22.17882
18.00243

It's not far from the values given by t.test(), but still not equal. Why is that?
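The remaining gap comes from the distribution used: t.test() builds the interval from Student's t distribution with n - 1 degrees of freedom, whereas qnorm() uses the normal distribution. With only 32 observations, the t quantile (about 2.04) is slightly larger than the normal quantile (1.96), so the t interval is a bit wider. A short sketch using qt() in place of qnorm() reproduces the t.test() interval exactly:

```r
data(mtcars)
n  <- nrow(mtcars)                      # 32 observations
se <- sd(mtcars$mpg) / sqrt(n)          # standard error of the mean
# t.test() uses the t distribution with n - 1 degrees of freedom:
ci <- mean(mtcars$mpg) + qt(c(0.025, 0.975), df = n - 1) * se
ci
# 17.91768 22.26357  -- matches the interval reported by t.test()
```

As n grows, qt(0.975, df = n - 1) converges to qnorm(0.975), so the two approaches agree for large samples.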

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Idr SEK