Comments - p-value and level of significance explained - Data Science Central
<p><em>Comment by Ryan H. (2019-05-06):</em></p>
<blockquote>If that probability is too low, we reject the null hypothesis, that is, we say that based on current evidence and testing, the null hypothesis is not true.</blockquote>
<p></p>
<p>Perhaps it's just a matter of semantics but I think it's important to note that a hypothesis test cannot prove or disprove a null hypothesis. You can only reject or fail to reject it. If you reject the null, all you are saying is that your sample data suggests strong evidence in favour of your alternative hypothesis and that the sample mean is statistically different from the null's hypothesized true mean.</p>
<p>When you set a significance level of, say, 0.05, you are setting the probability of committing a type I error, which is rejecting the null when in fact it is true. Again, we don't know whether the null is true or not, but if it were true and we rejected it, that would be a type I error. At 0.05, if we were to take many, many samples, we'd expect about 5% of them to have 95% confidence intervals that do not capture the true mean.</p>
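<p>That coverage claim is easy to check by simulation. The sketch below (my own illustration, not from the original comment; the distribution parameters are arbitrary) draws many samples from a normal distribution with a known mean, builds a 95% z-interval around each sample mean, and counts how often the interval misses the true mean. The miss rate should come out near 5%.</p>

```python
# Hypothetical simulation: repeat sampling many times and count how often
# a 95% confidence interval for the mean fails to capture the true mean.
import random
import statistics

random.seed(42)
true_mean, sigma, n, trials = 100.0, 15.0, 50, 10_000
z = 1.96  # two-sided 95% critical value for a known-sigma z-interval

misses = 0
for _ in range(trials):
    sample = [random.gauss(true_mean, sigma) for _ in range(n)]
    m = statistics.mean(sample)
    half_width = z * sigma / n ** 0.5  # CI half-width with known sigma
    if not (m - half_width <= true_mean <= m + half_width):
        misses += 1

print(f"Miss rate: {misses / trials:.3f}")  # expect a value close to 0.05
```

<p>Any single real study is one draw from this process, which is exactly why rejecting the null is a probabilistic statement rather than a proof.</p>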
<p>So, our doubt here is that our single sample might be one of the 5% that doesn't capture the true mean, leading us to reject the null when in fact we shouldn't have. This is why you can't prove or disprove the null.</p>
<p><em>Comment by Janet Dobbins (2017-12-04):</em></p>
<p>I recently attended the ASA’s Symposium on Statistical Inference with Peter Bruce, the founder of Statistics.com. In a chat with the co-chair, Peter asked, partly tongue-in-cheek, whether the real problem was too much research chasing too few real results. <i>Scientific American Online’s</i> opinion editor liked the topic, and the result was the following article: <a href="https://blogs.scientificamerican.com/observations/are-scientists-doing-too-much-research/" target="_blank" rel="noopener">https://blogs.scientificamerican.com/observations/are-scientists-doing-too-much-research/</a></p>