A couple of weeks ago, I recommended some basic statistical changes in a review I was performing, drawing the author’s attention to the incorrect use of the standard error in their analyses. Today I received their answer: “Why would this matter? The measure would be the same.”
But it actually CANNOT be the same! The standard deviation (SD) is a measure of variability: in a normal distribution, about 95% of observations fall within 2 standard deviations of the mean. But if we want to estimate how precisely our sample mean estimates the population mean (that is, how much the mean would vary from sample to sample), then we should calculate the standard error (SE). The SE depends on both the standard deviation and the sample size (SE = SD/√n), so it decreases as the sample size increases (which gives a very nice short error bar in a graph!!), while the SD does not systematically change as we increase the sample size.
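To make this concrete, here is a minimal simulation sketch in Python (using NumPy; the population mean of 100 and SD of 15 are made-up values purely for illustration). It draws increasingly large samples from the same normal distribution: the sample SD hovers around 15 regardless of sample size, while the SE of the mean keeps shrinking.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population parameters, chosen only for illustration
population_mean, population_sd = 100.0, 15.0

for n in (10, 100, 1000, 10000):
    sample = rng.normal(population_mean, population_sd, size=n)
    sd = sample.std(ddof=1)   # sample standard deviation: stays near 15
    se = sd / np.sqrt(n)      # standard error of the mean: shrinks with n
    print(f"n={n:>6}  mean={sample.mean():7.2f}  SD={sd:6.2f}  SE={se:5.2f}")
```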
Basically, if we want to describe how scattered the data are, we should use the SD; if we want to describe the uncertainty around the estimate of a mean, we should use the SE of the mean.
What actually concerns me is that these doubts are more common than they should be, considering that we all use these measures in our papers. For instance, the ± sign is widely used to attach to the observed mean a value that could be either the SD or the SE, since not all authors identify which measure of dispersion was used. That is no doubt a mistake by the authors, but what about the editors and the journals? Doesn’t anyone revise this?