NEO Share

Sharing The Latest Tech News


k-means Clustering Algorithm: Explained and Implemented

January 27, 2021 by systems

Manmohan Dogra

This article will help you understand another unsupervised ML algorithm to solve clustering problems. Let’s begin.


k-means is one of the simplest unsupervised learning algorithms for solving the well-known clustering problem. It is an iterative algorithm that tries to partition the dataset into a pre-defined number, 'k', of distinct, non-overlapping subgroups (clusters). The main idea is to define k centers, one for each cluster. These centers should be placed carefully, because different locations lead to different results; the better choice is to place them as far away from each other as possible. The next step is to take each point in the data set and associate it with the nearest center.

When no point is pending, the first step is completed and an early grouping is done. At this point, we need to re-calculate k new centroids as the barycenters of the clusters resulting from the previous step.

After we have these k new centroids, a new binding has to be done between the same data set points and the nearest new center. A loop has been generated: as a result of this loop, the k centers change their location step by step until no more changes are made, i.e., the centers do not move anymore. Finally, this algorithm aims at minimizing an objective function known as the squared-error function:

J(V) = Σ_{j=1..c} Σ_{i=1..cj} ||xi − vj||²

where:

  • ‘||xi − vj||’ is the Euclidean distance between data point xi and center vj.
  • ‘cj’ is the number of data points in the jth cluster.
  • ‘c’ is the number of cluster centers.
Clusters with blue centers.
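As a quick check on the definition above, the squared-error objective can be computed directly with NumPy (a minimal sketch with made-up points; `squared_error` is a name chosen here, not from the article):

```python
import numpy as np

def squared_error(X, V, labels):
    """Sum of squared Euclidean distances between each point
    and the center of the cluster it is assigned to."""
    return sum(np.sum((X[labels == j] - V[j]) ** 2) for j in range(len(V)))

# Toy data: two obvious clusters, with their true centers as V.
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
V = np.array([[0.0, 0.5], [10.0, 10.5]])
labels = np.array([0, 0, 1, 1])

print(squared_error(X, V, labels))  # each point contributes 0.25, so 1.0
```

Moving either center away from its cluster mean can only increase this value, which is exactly what the algorithm exploits.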

Let
X = {x1, x2, ..., xn} be the set of data points and
V = {v1, v2, ..., vc} be the set of cluster centers.

  1. Randomly select ‘c’ cluster centers.
  2. Calculate the distance between each data point and each cluster center.
  3. Assign each data point to the nearest cluster center.
  4. Recalculate the new cluster centers using the formula

     vi = (1/ci) Σ xj   (sum over the data points xj assigned to the ith cluster)

where ‘ci’ represents the number of data points in the ith cluster.

5. Recalculate the distance between each data point and the newly obtained cluster centers.

6. If no data point was reassigned, stop; otherwise repeat from step 3.
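The steps above can be sketched in plain NumPy (a minimal illustration, not a production implementation; the function and argument names are chosen here, and edge cases such as empty clusters are not handled):

```python
import numpy as np

def kmeans(X, c, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: randomly select c data points as the initial cluster centers.
    V = X[rng.choice(len(X), size=c, replace=False)]
    labels = None
    for _ in range(max_iter):
        # Step 2: distance between every data point and every center.
        dists = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2)
        # Step 3: assign each point to its nearest center.
        new_labels = dists.argmin(axis=1)
        # Step 6: stop if no data point was reassigned.
        if labels is not None and np.array_equal(new_labels, labels):
            break
        labels = new_labels
        # Step 4: new center = mean (barycenter) of each cluster.
        V = np.array([X[labels == j].mean(axis=0) for j in range(c)])
    return V, labels

X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
V, labels = kmeans(X, 2)  # converges to centers near [0, 0.5] and [10, 10.5]
```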

We can also use the ‘elbow method’ to find the number of clusters to form. Even if we already know it, this helps validate the choice visually. Below is an example.

4 different lines show 4 possible clusters
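One common way to draw such an elbow plot (a sketch assuming scikit-learn and matplotlib are available, as they are in Colab; the synthetic blob data is a stand-in) is to fit k-means for a range of k and plot the within-cluster sum of squares, exposed as `inertia_` in scikit-learn:

```python
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data with 4 well-separated clusters.
X, _ = make_blobs(n_samples=300, centers=4, random_state=42)

wcss = []
ks = range(1, 11)
for k in ks:
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    wcss.append(km.inertia_)  # within-cluster sum of squared distances

plt.plot(list(ks), wcss, marker="o")
plt.xlabel("number of clusters k")
plt.ylabel("WCSS (inertia)")
plt.title("Elbow method")
plt.show()
```

The curve drops steeply until k reaches the true number of clusters, then flattens; the "elbow" marks a reasonable choice of k.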

Implementing k-means in Google Colab (Python) using sklearn.
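The original notebook is not reproduced here, but a minimal sklearn version might look like the following (the synthetic blob data stands in for the article's dataset, which is an assumption):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic 2-D data with 4 clusters.
X, _ = make_blobs(n_samples=300, centers=4, random_state=42)

kmeans = KMeans(n_clusters=4, init="k-means++", n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

print(kmeans.cluster_centers_)  # one center per cluster
print(kmeans.inertia_)          # squared-error objective at convergence
```

`init="k-means++"` spreads the initial centers out, which addresses the sensitivity to random initialization discussed below.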

Advantages:

  1. The k-means algorithm is fast, robust, and easier to understand than several other clustering algorithms.
  2. It is relatively efficient: O(tknd), where n is the number of objects, k is the number of clusters, d is the number of dimensions of each object, and t is the number of iterations. Normally, k, t, d << n.
  3. It gives the best results when the data sets are distinct or well separated from each other.
Disadvantages:

  1. Euclidean distance measures can unequally weight underlying factors.
  2. The learning algorithm provides only a local optimum of the squared-error function.
  3. Randomly choosing the cluster centers may not lead to a fruitful result.
  4. It is applicable only when the mean is defined, i.e., it fails for categorical data.
  5. It is unable to handle noisy data and outliers.
  6. The algorithm fails for non-linear data sets.
  7. The learning algorithm requires a priori specification of the number of cluster centers.

The k-means algorithm can be used in a variety of applications such as market segmentation, document clustering, and image segmentation.

Usually, when we undertake a cluster analysis, the goal is either to:

  1. Get a meaningful intuition of the structure of the data we’re dealing with.
  2. Cluster-then-predict: build different models for different subgroups, if we believe there is wide variation in the behaviors of the subgroups.
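As a rough sketch of the cluster-then-predict idea (the synthetic data, helper names, and model choice here are illustrative assumptions, not from the article): cluster first, then fit one model per subgroup, and route new points through the model of their cluster.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Two subgroups with very different input/output behavior:
# around x≈0 the relationship is y = 2x, around x≈10 it is y = -3x.
X = np.vstack([rng.normal(0, 1, (100, 1)), rng.normal(10, 1, (100, 1))])
y = np.concatenate([2 * X[:100, 0], -3 * X[100:, 0]])

# Step 1: find the subgroups with k-means.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Step 2: fit a separate model for each subgroup.
models = {c: LinearRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
          for c in np.unique(km.labels_)}

def predict(x_new):
    """Predict with the model belonging to x_new's cluster."""
    c = km.predict(x_new)[0]
    return models[c].predict(x_new)[0]

print(predict(np.array([[10.0]])))  # routed to the y = -3x subgroup
```

A single linear model fit on all the data would average the two regimes away; the per-cluster models capture each regime separately.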

k-means clustering is one of the most popular clustering algorithms and is usually the first thing practitioners apply when solving clustering tasks, to get an idea of the structure of the dataset. The goal of k-means is to group data points into distinct, non-overlapping subgroups. However, it suffers as the geometric shapes of the clusters deviate from spherical shapes. Moreover, it doesn't learn the number of clusters from the data and requires it to be pre-defined.

Filed Under: Artificial Intelligence
