Why does a significant p-value feel misleading when distributions look similar?
#1
I’ve been working on a project where I need to compare two groups, and I ran a t-test that came back significant. But when I plotted the data, the distributions looked almost identical. I’m second-guessing myself now—is a significant p-value alone ever enough to feel confident? I’m worried I’m missing something obvious about what the test is actually telling me.
Reply
#2
That tension is real. When the plots look nearly identical yet the p-value is significant, it can feel like a mismatch.
Reply
#3
A significant p-value means the t-test detected a difference in means under the assumed model, but it says nothing about practical importance or how big the difference is.
Reply
#4
Check the effect size and confidence intervals rather than only the p-value. If the effect is tiny, it may be statistically significant with a large sample but not practically meaningful.
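To see this concretely, here is a minimal sketch with simulated data (the groups, sample sizes, and the 0.05 shift are made up for illustration): with a large enough sample, a tiny mean difference gives a very small p-value while Cohen's d stays negligible.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Two large hypothetical samples differing only by a tiny mean shift.
a = rng.normal(0.0, 1.0, 50_000)
b = rng.normal(0.05, 1.0, 50_000)

t, p = stats.ttest_ind(a, b)

# Cohen's d: standardized mean difference using the pooled SD.
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
d = (b.mean() - a.mean()) / pooled_sd

print(f"p = {p:.2e}, Cohen's d = {d:.3f}")
```

The p-value comes out far below 0.05, yet d is around 0.05, well under the conventional "small effect" threshold of 0.2, which is exactly the significant-but-not-meaningful pattern.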
Reply
#5
Maybe you are chasing a difference in means while the data vary in spread. The two groups could have similar-looking distributions, but one has a slightly higher center.
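One way to separate those two questions is to test center and spread independently. A sketch with simulated groups (names and parameters are illustrative only), using Levene's test for spread alongside the t-test on means:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical groups: identical spread, slightly shifted center.
a = rng.normal(0.0, 1.0, 10_000)
b = rng.normal(0.1, 1.0, 10_000)

# Levene's test asks "do the spreads differ?", the t-test asks
# "do the centers differ?" -- they can disagree.
stat, p_spread = stats.levene(a, b)
t, p_mean = stats.ttest_ind(a, b)

print(f"spread p = {p_spread:.3f}, mean p = {p_mean:.2e}")
```

Here the mean test fires while the spread test does not, matching the "same shape, slightly higher center" picture.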
Reply
#6
Maybe the framing is off. Are you asking whether the groups differ, or whether the method is suitable?
Reply
#7
I am not sure the p-value alone should calm your nerves. Statistical significance does not guarantee real-world relevance or clean separation.
Reply
#8
Try a bootstrap interval or a non-parametric view to quantify the overlap instead of just chasing a p-value. If you can point to an effect size and the amount of overlap, you can reframe the question.
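A minimal sketch of both ideas on simulated data (the groups and the 0.3 shift are assumptions for illustration): a percentile bootstrap CI for the mean difference, plus the probability of superiority P(b > a) as a direct overlap summary.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical data standing in for the two groups.
a = rng.normal(0.0, 1.0, 500)
b = rng.normal(0.3, 1.0, 500)

# Percentile bootstrap CI for the difference in means:
# resample each group with replacement and recompute the difference.
n_boot = 5_000
diffs = np.empty(n_boot)
for i in range(n_boot):
    diffs[i] = (rng.choice(b, b.size).mean()
                - rng.choice(a, a.size).mean())
lo, hi = np.percentile(diffs, [2.5, 97.5])

# Overlap summary: the chance a random draw from b exceeds a
# random draw from a (the probability of superiority).
p_sup = (b[:, None] > a[None, :]).mean()

print(f"95% bootstrap CI for mean difference: [{lo:.3f}, {hi:.3f}]")
print(f"P(b > a) = {p_sup:.3f}")
```

A P(b > a) near 0.5 means heavy overlap even when the CI excludes zero, which is a much more honest picture of "how separated are these groups" than the p-value alone.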
Reply