<p><strong>Featured Discussions - Data Science Central</strong> (feed retrieved 2020-01-29, https://www.datasciencecentral.com/forum/topic/list?feed=yes&amp;xn_auth=no&amp;featured=1)</p>
<p><strong>Suitability of Augmented Analytics...</strong> by <a href="https://www.datasciencecentral.com/profile/AhmedSalman">Ahmed Salman</a>, 2020-01-07</p>
<p>Dear All, </p>
<p>It would be really great if someone could answer the following question in some detail:</p>
<p>According to Gartner, "Augmented Analytics" is the future of data and analytics.</p>
<p>Given that an "augmented analytics system" can analyze both structured and unstructured data, my question is: where can it be applied, and where can it not?</p>
<p>I think a clear, broad overview of the applications of "augmented analytics" would be helpful for us.</p>
<p>Best regards,</p>
<p>Salman</p>
<p><strong>Creating Polynomial Features in ML using sklearn</strong> by <a href="https://www.datasciencecentral.com/profile/VishalKapur">Vishal Kapur</a>, 2019-12-26</p>
<p>I have 10 features and all of them are numeric.</p>
<p>Can polynomial features only be used on continuous variables, and not on discrete ones?<br/> Out of the 10 features, which should I pick for creating polynomial features, and by what criteria?</p>
<p>Should I take independent features for creating polynomial features, or features that are highly correlated with the dependent Y variable?</p>
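<p>For reference, a minimal sketch of sklearn's PolynomialFeatures on a small numeric matrix (the feature values below are made up for illustration):</p>

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# Two numeric features; a degree-2 expansion adds squares and the interaction term.
X = np.array([[1, 2],
              [3, 4]])
poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly.fit_transform(X)
# Columns: x1, x2, x1^2, x1*x2, x2^2
```

<p>Note that PolynomialFeatures expands every column you pass it; correlation with the target is not considered, so any feature selection has to happen separately.</p>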
<p></p> NLP: POS Tagger for French languagetag:www.datasciencecentral.com,2019-11-25:6448529:Topic:9104872019-11-25T00:09:45.556ZAneleyhttps://www.datasciencecentral.com/profile/Aneley
<p>Hi,</p>
<p></p>
<p>I'm new to NLP, and I'm looking for a POS tagger for French.</p>
<p>I have already used spaCy, but the results are not optimal for French.</p>
<p>Can you help me?</p>
<p>Thank you</p>
<p><strong>Simulating Distributions with One-Line Formulas, even in Excel</strong> by <a href="https://www.datasciencecentral.com/profile/VincentGranville">Vincent Granville</a>, 2019-11-10</p>
<p>If you don't like using black-box R functions, or you don't have access to these functions, here are simple options to simulate deviates from various distributions. They can even be implemented in Excel! You first need to simulate uniform deviates on [0, 1]. If you don't trust the function available in your programming language, here is a good alternative:</p>
<p><br/> rnd = 1000<br/> for (n=0; n &lt; 20000; n++) {<br/> &nbsp;&nbsp;rnd = (10232193 * rnd + 3701101) % 54198451371<br/> &nbsp;&nbsp;Rand = rnd / 54198451371<br/> }</p>
<p>This code produces 20,000 deviates of a uniform distribution on [0, 1]. The deviates are stored in the variable named Rand. The symbol % stands for the modulo operator.</p>
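<p>For convenience, here is the same recurrence transcribed into Python (a direct transcription of the pseudocode above; no claim is made here about the statistical quality of these particular constants):</p>

```python
# Linear congruential generator from the post: rnd -> (a * rnd + c) mod m.
m = 54198451371
rnd = 1000                     # seed
deviates = []
for n in range(20000):
    rnd = (10232193 * rnd + 3701101) % m
    deviates.append(rnd / m)   # uniform deviate on [0, 1)
```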
<p><a href="https://storage.ning.com/topology/rest/1.0/file/get/3706746208?profile=original" target="_blank" rel="noopener"><img src="https://storage.ning.com/topology/rest/1.0/file/get/3706746208?profile=RESIZE_710x" class="align-center"/></a></p>
<p>Now, assuming Rand, Rand1 and Rand2 are uniform deviates on [0, 1], here is how to sample deviates from various other distributions:</p>
<p><strong>Normal(0, 1) and log-normal deviates</strong>:</p>
<ul>
<li><span style="text-decoration: underline;">Normal</span>: x = sqrt(-2* log(Rand1)) * cos(2* Pi *Rand2) </li>
<li><span style="text-decoration: underline;">Log-normal</span>: y = exp(x)</li>
</ul>
<p><strong>Exponential deviates of parameter Lambda:</strong></p>
<ul>
<li>x = - log(1 - Rand) / Lambda</li>
</ul>
<p><strong>Geometric deviates of parameter P:</strong></p>
<ul>
<li>if (Rand < P) { x = 0 } else { x = int(log(1 - Rand) / log(1 - P)) }</li>
</ul>
<p><strong>Power law deviates with exponent B, on [0, A]:</strong></p>
<ul>
<li>x = A * Rand^(1 / B)</li>
</ul>
<p>Do you know any simple formula to generate other types of deviates?</p>
<p><strong>Hybrid method of Data Envelopment Analysis with Supervised Learning</strong> by <a href="https://www.datasciencecentral.com/profile/BagusPrabowoAji">Bagus Prabowo Aji</a>, 2019-11-10</p>
<p>Dear members of data science central,</p>
<p>I look forward for any suggestions from anyone, related to my paper about convenience store performance measurement.</p>
<p><strong>Background Problems</strong>: Convenience stores have recently become a popular place for Indonesians to shop for daily necessities. This has boosted the growth in the number of convenience stores and encouraged management to improve performance in the face of tight business competition, while the performance of a convenience store is largely determined by the efficiency of its various product categories. In this context, benchmarking through Data Envelopment Analysis (DEA) is a well-known method for measuring a company's efficiency that can be used to measure firm performance. However, DEA has limitations in handling large amounts of data; supervised learning techniques can be used as an alternative to overcome this.</p>
<p><strong>Main Objectives</strong>: This study provides an integrated model that applies the benchmarking concept and a supervised learning technique to measure the performance of a convenience store by considering the efficiency of its various product categories.</p>
<p><strong>Novelty</strong>: This is the first study to utilize an SVM algorithm based on DEA for measuring the performance of a local convenience store.</p>
<p><strong>Research Methods</strong>: The proposed approach has several steps. First, calculate efficiency scores for the product categories using the DEA method. Second, use the efficiency score as the class feature of the data set to train the SVM model with 5-fold cross-validation, then predict the efficiency score on the test set. Finally, evaluate the number of efficient and inefficient product categories to determine the performance of the convenience store.</p>
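<p>As a rough illustration of the SVM training step only (the paper's actual data and DEA scores are not shown here, so everything below is synthetic stand-in data), 5-fold cross-validation in scikit-learn might look like:</p>

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins: 4 numeric features per product category, and a binary
# efficient/inefficient label that, in the paper, would come from DEA scores.
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = (X.sum(axis=1) > 2.0).astype(int)

# 5-fold cross-validated accuracy of an RBF-kernel SVM.
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
```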
<p><strong>Conclusion</strong>: The proposed method has been successfully established and shown to be valid in predicting the efficiency of product categories to measure convenience store performance. Furthermore, this research indicates that the local convenience store has 39.4% inefficient product categories, while the other 60.6% are efficient.</p>
<p><strong>Data science degree</strong> by <a href="https://www.datasciencecentral.com/profile/Lucky441">Lucky</a>, 2019-10-16</p>
<p>Dear forum members,</p>
<p></p>
<p>I started working as a customer data insight analyst after working as a consultant in a different domain for 14 years.</p>
<p>I got this job because I know general SQL and Python and am formally educated in mathematics and computer applications.</p>
<p></p>
<p>My job involves customer churn analysis, and my company mostly uses Excel/Tableau. I am exploring a few Python libraries like pandas, but due to the pressure to produce outputs I am not able to implement data science concepts like predictive analysis, so I end up working in Excel.</p>
<p></p>
<p>In my company there is no data scientist, and people are inclined to use Excel. I aspire to become a data scientist but am not formally educated in data science.</p>
<p></p>
<p>Can anyone advise whether taking a data science degree would speed up my ability to apply data science techniques in my company?</p>
<p></p>
<p>Regards,</p>
<p>Lucky </p>
<p><strong>Diminishing returns in econometrics</strong> by <a href="https://www.datasciencecentral.com/profile/JeremyHorne638">Jeremy Horne</a>, 2019-10-15</p>
<p>I was wondering if anyone here has much experience building econometric models, specifically in calculating diminishing returns, as there are many different ways to go about this. For simplicity, I have previously used <span>an exponential decay (e to the power of -(a*x), where a is the rate of diminishing returns and x is the rate of media spend), but there are many other ways to model this (e.g. linear-log models, Multiplicative Competitive Interaction) and I'd be interested to hear of people's experiences as to which of these have worked well.</span></p>
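<p>For illustration, one common way to turn the exponential-decay idea into a response curve is response = saturation * (1 - exp(-a * spend)); the function and parameter names below are hypothetical, not from any particular model:</p>

```python
import math

def diminishing_response(spend, a=0.5, saturation=100.0):
    """Response rises with spend but flattens out; 'a' sets how quickly."""
    return saturation * (1.0 - math.exp(-a * spend))

# Each extra unit of spend buys less incremental response than the last.
```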
<p><strong>Recommendation on a data visualization book</strong> by <a href="https://www.datasciencecentral.com/profile/JamesAustin">James Austin</a>, 2019-09-30</p>
<p>I'm looking for the best data visualization book to own. Any recommendations? Thanks in advance.</p>
<p><strong>Optimization algo</strong> by <a href="https://www.datasciencecentral.com/profile/OleksiiKulish">Oleksii Kulish</a>, 2019-09-26</p>
<p>Hi all.<br/>Reading an article about Elo ratings, I have a question. The probability that team A wins is a sigmoid function: 1 / (1 + exp(RankB - RankA)). After the game, the ranks are updated as Rank_new = Rank_old +- K * (outcome - probability), where the outcome is 1 for a win and 0 for a loss.<br/><br/>My main question is how I can use, for example, a neural network (or another algorithm) to find the K parameter that minimizes the binary cross-entropy. I suspect K shouldn't be constant (I want it to depend on the player's initial rating).<br/><br/>The main problem I can't get past is that after updating the ratings, we need to use the new input rating to compute the probability, so the input has to be updated every epoch.<br/></p>
<p><strong>Insight in data</strong> by <a href="https://www.datasciencecentral.com/profile/IlanPerez">Ilan Perez</a>, 2019-09-18</p>
<p>I have a situation with a client. They have 4 sources of data and want to create a single metric out of these four values to gain a generalised insight into how the company is doing overall.</p>
<p></p>
<p>The problem is that each source has a completely different scale, so they are not really comparable.</p>
<p></p>
<p>Source A has a scale in the millions, whereas Source B's scale is in the hundreds.</p>
<p></p>
<p>Further to this, we wanted to weight each source, as some provide more value than others.</p>
<p></p>
<p>We decided to scale all four between 0 and 1 using this formula</p>
<p>z<sub>i</sub> = (x<sub>i</sub> - min(x)) / (max(x) - min(x))</p>
<p>and while it works, I am confused as to what insight I can get out of the numbers.</p>
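<p>A minimal sketch of that min-max scaling in plain Python (the sample values are made up; they just mimic sources on very different scales):</p>

```python
def min_max_scale(xs):
    """Rescale a list so its minimum maps to 0 and its maximum to 1."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

source_a = [1_200_000, 3_500_000, 2_000_000]  # scale in the millions
source_b = [150, 900, 400]                    # scale in the hundreds

scaled_a = min_max_scale(source_a)
scaled_b = min_max_scale(source_b)
# Both now live on [0, 1], so a weighted sum of them is at least well-defined.
```

<p>One caveat worth flagging to the client: each scaled value is relative to that source's own min and max, so a month's score can change whenever a new extreme appears, which complicates month-to-month comparisons.</p>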
<p></p>
<p>Here is the google sheet I am preparing with</p>
<p><a href="https://docs.google.com/spreadsheets/d/1Eua7tmqD3B0l3M04QnXDcU5HCAmFIfP65lsA7l52604/edit?usp=sharing">https://docs.google.com/spreadsheets/d/1Eua7tmqD3B0l3M04QnXDcU5HCAmFIfP65lsA7l52604/edit?usp=sharing</a></p>
<p></p>
<p>If you look at cells H14 and H15, can you say that March was about 3 times worse than Feb because the March score was 1.1 and the Feb score was 3.2?</p>
<p></p>
<p>Thanks in advance</p>