#### sailakshmisuresh

##### New Member

We have studied that when building confidence intervals, the standard error is multiplied by the respective critical value. In the aforementioned example, since the standard deviation is 2%, shouldn't we divide that by the square root of n to get the standard deviation of the sampling distribution (the standard error; more like the sampling distribution of the sample standard deviation)? So whatever the sample size might be, shouldn't its square root be used to divide the 2%? I am also unable to understand why the 2% is multiplied by $60. There should be no need to multiply, right, since 0.02 is the standard deviation we already found?

Can we instead write it as:

60 ± 1.96 × (0.02/√n)
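To make my confusion concrete, here is a minimal sketch comparing the two readings of the interval. The sample size n = 25 is hypothetical (the example does not give one); the $60 price, 2% standard deviation, and 1.96 critical value are from the example.

```python
import math

price = 60.0   # current price in dollars (from the example)
sigma = 0.02   # 2% standard deviation (from the example)
z = 1.96       # 95% critical value
n = 25         # hypothetical sample size, not given in the example

# Textbook reading: scale the 2% by the $60 price to get dollar volatility.
textbook = (price - z * sigma * price, price + z * sigma * price)

# My proposed reading: divide the 2% by sqrt(n), as for a standard error.
proposed = (price - z * sigma / math.sqrt(n),
            price + z * sigma / math.sqrt(n))

print(textbook)  # (57.648, 62.352)
print(proposed)  # (59.99216, 60.00784)
```

The two readings clearly give very different intervals, which is exactly what I am trying to reconcile.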

Would be grateful if someone could clarify this doubt!

Thanks