<p>Comments - Significance of p-value - Data Science Central</p>
<p>Dan Butorovich, 2018-03-28:</p>
<p>Amlan,</p>
<p></p>
<p>The p-value can be important in small-scale research, but it loses meaning when you have n > 10,000. Effect size is also needed to see whether your p-value has any real impact. For chi-square, effect size is estimated by phi: phi = sqrt(chi-square / n). In both the first and second cases, your phi is small, meaning that although you had a significant result, the effect was small. In practical terms, if you react to this result, you may end up costing the company more in fixing the issue than it is worth. The p-value in your second case is .00004, far less than your alpha of .05 - a significant result. The phi, however, is .18, which is a fairly small effect size. Statistical significance is different from real-world impact - something we have to keep in mind. Always look for effect size along with the p-value. The p-value is just one of several outcome parameters we look at in statistics.</p>
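<p>The phi calculation above fits in a few lines of Python. The chi-square statistic and sample size below are hypothetical, chosen only so that phi comes out to the .18 mentioned in the comment:</p>

```python
from math import sqrt

def phi_effect_size(chi_square, n):
    """Phi effect size for a chi-square statistic computed on n observations."""
    return sqrt(chi_square / n)

# Hypothetical numbers: a chi-square of 16.2 on n = 500 observations
# gives phi = 0.18 - a "significant" test, but a small effect.
print(phi_effect_size(16.2, 500))
```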
<p>Joseph F Lucke, 2018-03-18:</p>
<p>This post demonstrates the poverty of frequentist statistical inference. First, the procedure does not answer the question. The engineers' question was: given the data y, what is Pr(theta >= .03 | y)? The hypothesis-testing procedure instead addresses: what is Pr(y | theta = .03)? Answering the first question by way of the second can lead to error.</p>
<p>Consider the Bayesian approach.</p>
<p>Assume the propensity to failure follows a beta density.</p>
<p>Assume further the engineers' beliefs are beta(.3, 9.7), so that the mean propensity to failure is 3% and they are 95% confident that the propensity lies between 0 and 18%. Their prior probability that the propensity (designated theta) is greater than 3% is 28%.</p>
<p>According to the first scenario, the observed failure rate is 20 out of 500. By Bayes' theorem, the posterior density is beta(20+.3, 480+9.7). The posterior mean is 4%, and they are now 95% confident that the propensity lies between 2% and 7%. Their posterior Pr(theta > .03 | y = 20 out of 500) = .88, more than three times their prior probability of 28% - in a business and safety climate, this might be worrisome.</p>
<p>According to the second scenario, the observed failure rate is 50 out of 500. By Bayes' theorem, the posterior density is beta(50+.3, 450+9.7). The posterior mean is 10%, and the engineers are now 95% confident that the propensity lies between 8% and 13%. Their posterior Pr(theta > .03 | y = 50 out of 500) is essentially 1. In this scenario, the engineers can clearly conclude that the propensity to failure exceeds the 3% threshold.</p>
<p>The Bayesian approach gives a clear answer to the question posed. The engineers' prior information is taken into account. The interval (a credible interval) is interpreted naturally as the probability that an uncertain parameter falls within fixed bounds, not as one interval in a hypothetical infinite sequence of intervals that cover the unknown parameter 95% of the time (the author makes this common misinterpretation of frequentist confidence intervals).</p>
<p>Furthermore, the Bayesian and the frequentist disagree on the import of the evidence from the first scenario. The frequentist makes the dogmatic claim that there is insufficient evidence to worry. The Bayesian makes the more nuanced claim that the evidence is sufficiently, but not definitively, worrisome.</p>
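<p>The posterior updates above can be checked numerically in pure Python. This is only a sketch: the trapezoid-rule tail integration is my own illustration device, not part of the original argument, and the counts echo the two scenarios discussed in the comment:</p>

```python
from math import lgamma, exp, log

def beta_logpdf(x, a, b):
    """Log-density of a Beta(a, b) distribution, normalized via lgamma."""
    return (lgamma(a + b) - lgamma(a) - lgamma(b)
            + (a - 1) * log(x) + (b - 1) * log(1 - x))

def beta_tail(threshold, a, b, steps=100_000):
    """Pr(theta > threshold) under Beta(a, b), by trapezoidal integration."""
    upper = 1 - 1e-9
    h = (upper - threshold) / steps
    total = 0.5 * (exp(beta_logpdf(threshold, a, b))
                   + exp(beta_logpdf(upper, a, b)))
    for i in range(1, steps):
        total += exp(beta_logpdf(threshold + i * h, a, b))
    return total * h

# Prior Beta(.3, 9.7): mean 3%, Pr(theta > .03) roughly .28
print(beta_tail(0.03, 0.3, 9.7))

# Scenario 1: 20 failures / 500 trials -> posterior Beta(20.3, 489.7)
print(20.3 / 510)                      # posterior mean, about 4%
print(beta_tail(0.03, 20.3, 489.7))    # roughly .88

# Scenario 2: 50 failures / 500 trials -> posterior Beta(50.3, 459.7)
print(beta_tail(0.03, 50.3, 459.7))    # essentially 1
```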
<p>aman Ullah, 2018-03-14:</p>
<p>Hi Amlan,</p>
<p></p>
<p>Thank you. I still have some ambiguity; could you clarify the following?</p>
<ol>
<li>Is the p-value called the significance probability, and alpha the significance level?</li>
<li>Is there an inverse relationship between the p-value and the sample size?</li>
<li>I made some changes to my previous question to reduce ambiguity:</li>
</ol>
<p>If the p-value = 1% (not alpha), then which one is correct?</p>
<ol>
<li>The probability that the null hypothesis is true is 1 in 100</li>
<li>The probability that the null hypothesis is false is 1 in 100</li>
<li>The probability that the alternative hypothesis is true is 1 in 100</li>
<li>None of the above</li>
</ol>
</ol>
<p>Amlan Kumar Pradhan, 2018-03-13:</p>
<p>Hi Aman, you can calculate the p-value in Excel, or you can use an online p-value calculator. For the second part, you probably mean alpha = 1%, which I will take as a significance level of 0.01. If my understanding is correct, then the fourth option is the appropriate one.</p>
<p>aman Ullah, 2018-03-12:</p>
<p>Hi Amlan,</p>
<p>It is a very interesting example. Would you answer the following questions?</p>
<p>How do you find the p-value (is there a formula)?</p>
<p>If the p-value = 1%, then which one is correct?</p>
<p>1. The probability that the null hypothesis is true is 1 in 100</p>
<p>2. The probability that the null hypothesis is false is 1 in 100</p>
<p>3. The probability that the alternative hypothesis is true is 1 in 100</p>
<p>4. The probability of getting data as extreme as ours is 1 in 100, if the null hypothesis is true</p>
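<p>Option 4 is the standard definition, and it can be illustrated by simulation: generate data under the null many times and count how often a result at least as extreme as the observed one appears. The 20-of-500 counts below are hypothetical, echoing the post's first scenario:</p>

```python
import random

random.seed(0)
n, p0, observed = 500, 0.03, 20   # null rate 3%, observed 20 failures
trials = 5000

# Fraction of null-simulated datasets at least as extreme as the observation;
# this converges to the one-sided p-value as trials grows.
extreme = sum(
    sum(random.random() < p0 for _ in range(n)) >= observed
    for _ in range(trials)
)
print(extreme / trials)
```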