<p><strong>Steps to calculate centroids in cluster using K-means clustering algorithm</strong>, by Sunaina, 2018-03-08</p>
<p>In this blog I will go into a bit more detail about the K-means method and explain how we can calculate the distance between centroids and data points to form clusters.</p>
<p>Consider the data set below, which contains the coordinates of the data points on a graph.</p>
<p><strong>Table 1:</strong></p>
<p><img class="alignnone wp-image-177" src="https://sunainasblog.files.wordpress.com/2018/03/untitled.png?w=736" alt="" width="671" height="124"/></p>
<p>We can randomly choose two initial points as the centroids and, from there, start calculating the distance of each data point from them.</p>
<p>For now we will consider that D2 and D4 are the centroids.</p>
<p>To start, we calculate the distances with the help of the Euclidean distance formula, which is</p>
<p><strong>√((x1-x2)² + (y1-y2)²)</strong></p>
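<p>In code, this distance can be computed as below; this is a minimal sketch, and the example coordinates are hypothetical, since the actual values from Table 1 appear only in the images.</p>
<pre><code>import math

def euclidean_distance(p, q):
    """Euclidean distance between two 2-D points p and q."""
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

# Hypothetical example points (not the values from Table 1):
print(euclidean_distance((1, 1), (4, 5)))  # 5.0
</code></pre>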
<p><strong><em>Iteration 1:</em></strong></p>
<p><em><strong>Step 1:</strong></em> We need to calculate the distance between the initial centroids and the other data points. Below I have shown the calculation of the distance of data point D1 from the initial centroids D2 and D4.</p>
<p><img class="alignnone wp-image-179" src="https://sunainasblog.files.wordpress.com/2018/03/untitled1.png?w=736" alt="" width="620" height="210"/></p>
<p>After calculating the distances for all data points, we get the values below.</p>
<p><strong>Table 2:</strong></p>
<p><img class="alignnone wp-image-181" src="https://sunainasblog.files.wordpress.com/2018/03/untitled2.png?w=736" alt="" width="631" height="98"/></p>
<p><strong><em>Step 2:</em></strong> Next, we group each data point with the centroid it is closer to. From the table above, we can see that D1 is closer to D4, as that distance is smaller, so D1 belongs to D4. Similarly, D3 and D5 belong to D2. After grouping, we need to calculate the mean of the grouped values from Table 1.</p>
<p><strong>Cluster 1: (D1, D4) Cluster 2: (D2, D3, D5)</strong></p>
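<p>The grouping in step 2 can be sketched as follows; the coordinates here are made-up stand-ins, since the real values from Tables 1 and 2 are shown only in the images.</p>
<pre><code>import math

def nearest_centroid(point, centroids):
    """Return the index of the centroid closest to `point`."""
    distances = [math.dist(point, c) for c in centroids]
    return distances.index(min(distances))

# Hypothetical data points and two initial centroids (e.g. D2 and D4):
points = {"D1": (2, 0), "D3": (3, 4), "D5": (4, 6)}
centroids = [(2, 4), (2, 2)]
clusters = {name: nearest_centroid(p, centroids) for name, p in points.items()}
</code></pre>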
<p><em><strong>Step 3:</strong></em> Now we calculate the mean values of the clusters created; these mean values become the new centroids, and each centroid moves accordingly on the graph.</p>
<p><img class="alignnone wp-image-183" src="https://sunainasblog.files.wordpress.com/2018/03/untitled3.png?w=736" alt="" width="640" height="88"/></p>
<p>From the above table, we can say the new centroid for cluster 1 is (2.0, 1.0) and for cluster 2 is (2.67, 4.67).</p>
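<p>The mean update in step 3 amounts to a coordinate-wise average, as in this small sketch (again with illustrative coordinates, not the ones from Table 1).</p>
<pre><code>def centroid_of(points):
    """New centroid = coordinate-wise mean of the cluster's points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Hypothetical cluster of two points:
print(centroid_of([(1, 0), (3, 2)]))  # (2.0, 1.0)
</code></pre>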
<p><em><strong>Iteration 2: </strong></em></p>
<p><em><strong>Step 4:</strong></em> The Euclidean distances are calculated again, this time from the new centroids. Below is the table of distances between the data points and the new centroids.</p>
<p><img class="alignnone wp-image-184" src="https://sunainasblog.files.wordpress.com/2018/03/untitled4.png?w=736" alt="" width="629" height="136"/></p>
<p>We can see that the cluster memberships have changed: cluster 1 now has the data points D1, D2 and D4, while cluster 2 has D3 and D5.</p>
<p><em><strong>Step 5:</strong></em> Calculate the mean values of the new clusters from Table 1, as we did in step 3. The table below shows the mean values.</p>
<p><img class="alignnone wp-image-185" src="https://sunainasblog.files.wordpress.com/2018/03/untitled5.png?w=736" alt="" width="638" height="84"/></p>
<p>Now we have the new centroid values as follows:</p>
<p><strong>cluster 1 ( D1, D2, D4)</strong> <strong>- (1.67, 1.67) and cluster 2 (D3, D5) - (3.5, 5.5)</strong></p>
<p>This process is repeated until the centroid values stop changing, and the latest clusters are taken as the final cluster solution.</p>
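<p>Putting the steps together, the whole procedure (assign each point to its nearest centroid, recompute the means, repeat until the centroids stop moving) can be sketched as below. The sample data is hypothetical, since the original Table 1 values are only in the images.</p>
<pre><code>import math

def kmeans(points, centroids, max_iter=100):
    """Simple 2-D k-means: iterate assignment and mean update until stable."""
    for _ in range(max_iter):
        # Steps 1-2: assign each point to its nearest centroid
        clusters = [[] for _ in centroids]
        for p in points:
            distances = [math.dist(p, c) for c in centroids]
            clusters[distances.index(min(distances))].append(p)
        # Step 3: recompute each centroid as the mean of its cluster
        new_centroids = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else c
            for cl, c in zip(clusters, centroids)
        ]
        if new_centroids == centroids:  # converged: centroids are constant
            return centroids, clusters
        centroids = new_centroids
    return centroids, clusters

# Hypothetical data points and two initial centroids:
pts = [(1, 1), (2, 1), (4, 5), (5, 6), (1, 2)]
final_centroids, final_clusters = kmeans(pts, [(1, 1), (4, 5)])
</code></pre>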
<p><strong>Clustering in Power BI using R</strong>, by Sunaina, 2018-03-07</p>
<p>Here, I've used the famous <a href="https://en.wikipedia.org/wiki/Iris_flower_data_set">Iris Flower dataset</a> to show the clustering in Power BI using R. I've used the <strong>K-means</strong> clustering method to show the different species of Iris flower.</p>
<p><em><strong>About the dataset</strong>: </em>The Iris dataset has 5 attributes (Sepal length, Sepal width, Petal width, Petal length, Species). The 3 species are named <strong><em>Setosa</em></strong>, <em><strong>Versicolor</strong></em> and <em><strong>Virginica</strong></em>. Since petal length and petal width are similar within each species, I have used Petal Length for the x axis and Petal Width for the y axis of the plot.</p>
<p><strong><em>K-means Clustering</em>: </strong>K-means is a non-hierarchical, iterative clustering technique. We start by randomly assigning the data points to clusters. We know there are 3 different species in our data set, so I have chosen 3 clusters. The algorithm assigns each data point to one of these 3 clusters, then calculates the Euclidean distance between each data point and the cluster centroids. The centroids are then recomputed according to these assignments. This is done iteratively until the clusters become stable and no data points need to be moved.</p>
<p><strong><em>R visual: </em></strong>In the visual we can see how the species are separated after clustering. Here cluster 1 is <strong>Setosa</strong>, cluster 2 is <strong>Versicolor</strong> and cluster 3 is <strong>Virginica</strong>. We can also see that the algorithm wrongly assigned a few data points between Versicolor and Virginica.</p>
<p><em><strong>Drawback:</strong></em> After clustering, a few data points belonging to Setosa appear in Versicolor and vice versa. However, this method is better suited to unsupervised learning and to large datasets.</p>
<p><img class="alignnone size-large wp-image-156" src="https://sunainasblog.files.wordpress.com/2018/02/cluster.png?w=736" alt="" width="736" height="365"/></p>
<p><strong><em>Code:</em></strong></p>
<ul>
<li>library(ggplot2)  # plotting<br/> set.seed(20)  # reproducible random initial centroids<br/> # cluster on Petal Length and Petal Width (columns 3 and 4), k = 3<br/> km <- kmeans(dataset[ ,3:4], centers = 3, nstart = 20)  # 'km' avoids shadowing the built-in iris dataset<br/> Clusters <- as.factor(km$cluster)<br/> ggplot(dataset, aes(PetalLength, PetalWidth, color = Clusters)) + geom_point(shape = 17, size = 4)</li>
</ul>
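<p>For readers working outside Power BI, a roughly equivalent clustering can be sketched in Python with scikit-learn, which bundles the Iris data. This is an illustration under those assumptions, not the Power BI script itself.</p>
<pre><code>from sklearn.datasets import load_iris
from sklearn.cluster import KMeans

iris = load_iris()
X = iris.data[:, 2:4]  # petal length and petal width, as in the R script

# 3 clusters for the 3 species; fixed random_state for reproducibility
km = KMeans(n_clusters=3, n_init=20, random_state=20).fit(X)
print(km.cluster_centers_)  # one (petal length, petal width) centroid per cluster
</code></pre>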