What Does Standard Deviation Measure?
Standard deviation (SD) measures how much individual values in a dataset differ from the mean. A low standard deviation means values cluster tightly around the mean. A high standard deviation means they are spread far apart.
Consider two sets of exam scores, both with a mean of 70: one where the scores cluster tightly around 70, and one where they range widely from failing to near-perfect.
The mean alone tells you the average. The standard deviation tells you whether that average is representative of most values, or whether the data is chaotic and the mean hides extreme variation.
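To make this concrete, here is a minimal sketch in Python. The two score sets are invented for illustration; any pair of datasets with equal means but different spreads would show the same effect.

```python
import statistics

# Hypothetical data: two classes, both averaging 70, with very different spreads
tight = [68, 69, 70, 70, 71, 72]    # scores cluster near the mean
spread = [40, 55, 70, 70, 85, 100]  # scores scattered far from the mean

print(statistics.mean(tight), statistics.mean(spread))  # both means are 70
print(statistics.stdev(tight))   # small SD: the mean is representative
print(statistics.stdev(spread))  # large SD: the mean hides wide variation
```

Both classes report the same average, but the standard deviation immediately reveals which average you can trust.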
Population vs Sample Standard Deviation
There are two versions of the formula, and using the wrong one is a common mistake:
Population Standard Deviation (σ)
Use this when your data is the entire population, every member of the group you care about. For example: the exact heights of all 30 students in a single classroom, or the price of every item in a vending machine. The formula divides the sum of squared deviations by N: σ = √( Σ(xᵢ − μ)² / N ).
Sample Standard Deviation (s)
Use this when your data is a sample drawn from a larger population, which is most real-world research. The formula divides by N−1 instead of N: s = √( Σ(xᵢ − x̄)² / (N − 1) ). This is called Bessel's correction, and it compensates for the fact that a sample tends to underestimate the true spread of the full population.
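Python's statistics module provides both versions, which makes the distinction easy to check. A minimal sketch, using six example scores:

```python
import statistics

data = [72, 85, 90, 68, 79, 95]

# Population SD: divides by N. Use when the data IS the whole population.
sigma = statistics.pstdev(data)

# Sample SD: divides by N-1 (Bessel's correction). Use for a sample.
s = statistics.stdev(data)

print(sigma, s)  # the population SD is always the smaller of the two
```

Calling the wrong function silently gives a plausible but biased answer, which is exactly why this mistake is so common.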
Step-by-Step Calculation: Worked Example
Dataset: test scores for a sample of 6 students.
Values: 72, 85, 90, 68, 79, 95
Step 1, Calculate the mean: (72 + 85 + 90 + 68 + 79 + 95) / 6 = 489 / 6 = 81.5
Step 2, Find each deviation from the mean: −9.5, 3.5, 8.5, −13.5, −2.5, 13.5
Step 3, Square each deviation: 90.25, 12.25, 72.25, 182.25, 6.25, 182.25
Step 4, Sum the squared deviations: 90.25 + 12.25 + 72.25 + 182.25 + 6.25 + 182.25 = 545.5
Step 5, Divide by N−1 (sample variance): 545.5 / 5 = 109.1
Step 6, Take the square root (standard deviation): √109.1 ≈ 10.4
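The six steps above can be checked directly in Python:

```python
import math

scores = [72, 85, 90, 68, 79, 95]  # the six sample test scores

mean = sum(scores) / len(scores)          # Step 1: 81.5
deviations = [x - mean for x in scores]   # Step 2: distance of each score from the mean
squared = [d ** 2 for d in deviations]    # Step 3: squared deviations
total = sum(squared)                      # Step 4: 545.5
variance = total / (len(scores) - 1)      # Step 5: divide by N-1 -> 109.1
sd = math.sqrt(variance)                  # Step 6: square root -> ~10.4

print(mean, total, variance, round(sd, 1))
```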
This means most scores fall within roughly 10 points of the mean of 81.5, between about 71 and 92, which matches the dataset well.
What Standard Deviation Tells You
For data that follows a normal (bell-shaped) distribution, standard deviation has a precise meaning through the 68-95-99.7 rule (also called the empirical rule):
- About 68% of values fall within 1 standard deviation of the mean
- About 95% fall within 2 standard deviations
- About 99.7% fall within 3 standard deviations
For the test score example (mean = 81.5, s = 10.4):
- About 68% of scores should fall between 71.1 and 91.9
- About 95% between 60.7 and 102.3
- About 99.7% between 50.3 and 112.7
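The 68-95-99.7 percentages are not arbitrary; they can be recovered from the normal distribution's error function. A quick sketch:

```python
import math

def within_k_sd(k):
    """Fraction of a normal distribution within k standard deviations of the mean."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(k, round(within_k_sd(k) * 100, 1))  # prints ~68.3, ~95.4, ~99.7
```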
Common Real-World Applications
- Finance, volatility: The standard deviation of daily stock returns is a direct measure of price volatility. A stock with SD = 2% moves much more unpredictably than one with SD = 0.3%.
- Quality control: Manufacturers use SD to monitor production consistency. If the SD of component dimensions exceeds a threshold, the process may be out of control.
- Medicine, clinical trials: SD is reported alongside the mean in clinical studies to show how consistent the treatment response was across patients.
- Education: Standardised test scores (SAT, IQ) are designed so the SD equals a specific value (e.g. IQ: mean = 100, SD = 15), allowing direct comparison across test-takers.
- Weather forecasting: A temperature forecast with low SD means high confidence; high SD means greater uncertainty.
Standard Deviation vs Variance
Variance is simply the square of the standard deviation (s²). Both measure spread, but they have different units. If your data is in centimetres, the variance is in cm², an awkward unit. Standard deviation is in the same unit as your original data, making it much easier to interpret in real-world contexts.
Variance is commonly used in mathematical derivations and statistical tests (like ANOVA) because variances add nicely when combining independent datasets. For communication and interpretation, standard deviation is almost always preferred.
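A short sketch illustrating the units point (the height data is hypothetical):

```python
import statistics

heights_cm = [160.0, 165.0, 170.0, 175.0, 180.0]  # hypothetical heights in cm

var = statistics.variance(heights_cm)  # units: cm^2, awkward to interpret
sd = statistics.stdev(heights_cm)      # units: cm, same as the data

print(var, sd)
assert abs(sd ** 2 - var) < 1e-9  # variance is exactly the square of SD
```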
Z-Scores: Standard Deviation in Action
A z-score tells you how many standard deviations a particular value sits above or below the mean: z = (x − x̄) / s.
Z-scores allow you to compare values from different distributions on a common scale. In the test score example, the student who scored 95 has z = (95 − 81.5) / 10.4 ≈ +1.30. A z-score of +1.30 corresponds to approximately the 90th percentile of a normal distribution, meaning the student scored better than about 90% of their peers.
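A sketch of the z-score calculation and its percentile, assuming a normal distribution (the normal CDF is built from `math.erf`):

```python
import math

mean, s = 81.5, 10.4  # from the worked example

def z_score(x, mean, sd):
    """How many standard deviations x lies above (+) or below (-) the mean."""
    return (x - mean) / sd

def normal_percentile(z):
    """Cumulative probability below z for a standard normal distribution."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

z = z_score(95, mean, s)  # ~ +1.30
print(round(z, 2), round(normal_percentile(z) * 100, 1))
```

The same two functions work for any distribution summarised by a mean and SD, which is what makes z-scores a common scale.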