What's the difference between statistical significance and practical significance?
#1
I see this confusion all the time in research papers and business reports. People get excited about finding statistical significance but forget to ask if the effect size actually matters in the real world.

This is the classic gap between statistical and practical significance. A drug might show a statistically significant improvement over placebo, but if it only reduces symptoms by 1%, is that really meaningful?

How do you explain this distinction to people who aren't statisticians? And what are some examples where statistical significance was found but practical significance was lacking?
#2
The statistical significance vs practical significance distinction is crucial in medicine. I've seen drugs get approved based on statistically significant results that show maybe a 1% improvement over placebo.

But when you look at the actual effect size, it's tiny. Patients might not even notice the difference. Yet because it's statistically significant (usually with a large sample size), it gets marketed as effective.

This is also where misconceptions about statistical power come in. A study can be powered to detect tiny effects, but that doesn't mean those effects matter in real life. We need to look at confidence intervals and effect sizes, not just p-values.
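
To make that concrete, here's a rough simulation (the numbers are invented, not taken from any real trial): with 50,000 patients per arm, a true effect of only 0.03 standard deviations comes back highly "significant" even though the standardized effect size is trivial.

```python
# Rough simulation (invented numbers, not from any real trial):
# a tiny effect becomes "statistically significant" with a huge sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 50_000                                          # patients per arm
placebo = rng.normal(loc=0.00, scale=1.0, size=n)   # change in symptom score
drug    = rng.normal(loc=0.03, scale=1.0, size=n)   # true effect: 0.03 SD

t_stat, p_value = stats.ttest_ind(drug, placebo)
cohens_d = (drug.mean() - placebo.mean()) / np.sqrt(
    (drug.var(ddof=1) + placebo.var(ddof=1)) / 2
)
print(f"p-value:   {p_value:.2g}")    # typically well below 0.05
print(f"Cohen's d: {cohens_d:.3f}")   # ~0.03, trivially small
```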
#3
In business analytics, this comes up all the time. We'll run an A/B test on a website and get a statistically significant result with p < 0.05. But the actual difference might be 0.1 percentage points of conversion rate - from 2.0% to 2.1%.

That's statistically significant if you have enough traffic, but is it practically significant? Implementing the change might cost time and money, and a 0.1-point improvement might not be worth it.

I always tell my team: statistical significance tells you the effect is probably real. Practical significance tells you if the effect is big enough to care about. You need both.
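
Here's a quick sketch of what I mean, using made-up traffic numbers and a standard two-proportion z-test from statsmodels: with half a million visitors per arm, a 2.0% vs 2.1% split easily clears p < 0.05 even though the lift is a tenth of a percentage point.

```python
# Made-up A/B numbers: 2.0% vs 2.1% conversion, half a million visitors per arm.
from statsmodels.stats.proportion import proportions_ztest

visitors_a, visitors_b = 500_000, 500_000
conversions_a = int(visitors_a * 0.020)   # control: 2.0%
conversions_b = int(visitors_b * 0.021)   # variant: 2.1%

z_stat, p_value = proportions_ztest(
    count=[conversions_b, conversions_a],
    nobs=[visitors_b, visitors_a],
)
lift = conversions_b / visitors_b - conversions_a / visitors_a
print(f"p-value: {p_value:.2g}")     # clears p < 0.05 comfortably
print(f"absolute lift: {lift:.3%}")  # ~0.100 percentage points
```

Whether that lift is worth shipping depends on what the change costs, which is exactly the practical-significance question the p-value can't answer.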
#4
I think part of the problem is that "significant" in everyday language means "important" or "meaningful." But in statistics, it just means "unlikely to be due to chance."

So when people hear "statistically significant," they think it must be important. But a tiny effect can be statistically significant with a large enough sample.

This is why I prefer reporting confidence intervals instead of just p-values. The confidence interval shows the range of plausible effect sizes. If the entire interval represents trivial effects, then even statistical significance doesn't matter much.
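
As a rough illustration (hypothetical counts, plain Wald interval for a difference in proportions): report the interval for the effect, then ask whether anything in that range would actually change the decision.

```python
# Hypothetical counts; plain (Wald) 95% interval for a difference in proportions.
from math import sqrt
from scipy.stats import norm

conv_a, n_a = 10_000, 500_000   # control: 2.0% conversion
conv_b, n_b = 10_500, 500_000   # variant: 2.1% conversion

p_a, p_b = conv_a / n_a, conv_b / n_b
diff = p_b - p_a
se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
z = norm.ppf(0.975)             # 95% two-sided critical value

low, high = diff - z * se, diff + z * se
print(f"difference {diff:.3%}, 95% CI [{low:.3%}, {high:.3%}]")
# If the whole interval sits below whatever lift would actually pay for the
# change, "significant" alone isn't a reason to ship it.
```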