NEO Share
AFFINITY PROPAGATION:

December 28, 2020 by systems

Photo by Markus Spiske on Unsplash


  • Affinity propagation is a clustering algorithm in which the complete dataset is viewed as a network, with each data point a node in that network.
  • The algorithm iteratively evaluates how well suited each point is to be a representative of another point (i.e., how suited a particular point is to act as an exemplar for another point, using information about other prospective representatives in the data).
  • Unlike clustering algorithms such as k-means or k-medoids, affinity propagation does not require the number of clusters to be determined or estimated before running the algorithm.
  • A dataset is described using a small number of exemplars: members of the input set that are representative of clusters. Messages sent between pairs of points express how well suited one sample is to be the exemplar of the other, and each message is updated in response to the values received from other pairs.
  • This updating continues iteratively until convergence, at which point the final exemplars are chosen and we obtain the final clustering.
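The behaviour described above can be sketched with scikit-learn's `AffinityPropagation` (a minimal sketch on hypothetical toy data, not the code from the notebooks linked below; note that no number of clusters is passed in):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Toy data: two loose groups of 2-D points (hypothetical values).
X = np.array([[1, 2], [1, 4], [1, 0],
              [4, 2], [4, 4], [4, 0]], dtype=float)

# No n_clusters argument: affinity propagation chooses the number
# of clusters (and the exemplars) on its own.
model = AffinityPropagation(random_state=0).fit(X)

print(model.cluster_centers_indices_)  # indices of the chosen exemplars
print(model.labels_)                   # cluster assignment for each point
```

The exemplars reported by `cluster_centers_indices_` are actual members of the input set, as the bullet above describes, rather than synthetic centroids as in k-means.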

ALGORITHM:

  • The first step is to create the similarity matrix from the given data.
  • The similarity matrix is formed from the distances between pairs of points: similarity is the negative of the sum of squared differences between the points (the negative squared Euclidean distance).
  • Then calculate the responsibility matrix using the similarity matrix and the availability matrix; the availability matrix is initialised to zero.
  • Once the responsibility matrix is calculated, use it to update the availability matrix that was initialised to zero.
  • Once we have both the availability and responsibility matrices, add them to obtain the criterion matrix, which decides the final clusters.
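The five steps above can be written out directly in NumPy (a simplified sketch, not the author's notebooks; the function name and the choice of the median similarity as the diagonal "preference" are my assumptions):

```python
import numpy as np

def affinity_propagation(X, damping=0.5, n_iter=200):
    # Steps 1-2: similarity = negative sum of squared distances.
    # The diagonal ("preference") is set to the median similarity,
    # a common default that influences how many exemplars emerge.
    n = len(X)
    S = -((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    S[np.arange(n), np.arange(n)] = np.median(S)

    R = np.zeros((n, n))  # responsibility matrix
    A = np.zeros((n, n))  # availability matrix, initialised to zero

    for _ in range(n_iter):
        # Step 3: responsibility
        #   r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        idx = AS.argmax(axis=1)
        first = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf          # mask the row maximum
        second = AS.max(axis=1)                  # second-largest per row
        max_excl = np.repeat(first[:, None], n, axis=1)
        max_excl[np.arange(n), idx] = second
        R = damping * R + (1 - damping) * (S - max_excl)

        # Step 4: availability
        #   a(i,k) = min(0, r(k,k) + sum of positive responsibilities
        #                  that other points send to candidate k)
        Rp = np.maximum(R, 0)
        Rp[np.arange(n), np.arange(n)] = R.diagonal()
        col = Rp.sum(axis=0)
        A_new = np.minimum(0, col[None, :] - Rp)
        A_new[np.arange(n), np.arange(n)] = col - R.diagonal()
        A = damping * A + (1 - damping) * A_new

    # Step 5: criterion matrix C = A + R; each point's exemplar is
    # the column with the largest criterion value in its row.
    return (A + R).argmax(axis=1)
```

Each returned label is the index of the exemplar chosen for that point, so points sharing a label belong to the same cluster.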

When updating the messages, it is important that they be damped to avoid numerical oscillations that arise in some circumstances. Each message is set to λ times its value from the previous iteration plus (1 − λ) times its prescribed updated value, where the damping factor λ is between 0 and 1. In all of our experiments (3), we used a default damping factor of λ = 0.5.
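Read concretely, the damped update rule is a weighted average of the old and new message values (a one-line sketch; `damped_update` is a hypothetical helper name, not part of any library):

```python
def damped_update(old_message, new_message, damping=0.5):
    # Keep `damping` of the previous value and take (1 - damping)
    # of the freshly computed value.
    return damping * old_message + (1 - damping) * new_message

print(damped_update(2.0, 4.0))  # with damping = 0.5 this averages to 3.0
```

With λ close to 1 the messages change slowly but stably; with λ close to 0 they track the prescribed updates almost exactly and can oscillate.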

IMPLEMENTATION:

https://colab.research.google.com/drive/1CfVKsR1MJkrx85f7SA62X9JryH4e5-Xa?usp=sharing

https://colab.research.google.com/drive/1KUBixlLWRcA6R75gSMd-FXKIaDG4v291?usp=sharing

References:

http://utstat.toronto.edu/reid/sta414/frey-affinity.pdf

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.490.7628&rep=rep1&type=pdf

Filed Under: Machine Learning
