Hello Everyone 👋,
Machine learning and its applications are advancing day by day, and it is becoming hard to recall all the basic concepts on a daily basis.
Hence, I am introducing the Machine Learning Algorithm Cheat Sheet series, in which we will revisit the core concepts behind each machine learning algorithm, which will be helpful for cracking data science interviews and projects.
It will be a point-to-point explanation for quick revision and understanding of machine learning algorithms.
So hold tight!
Support Vector Machines
- A supervised learning algorithm that can be used for both classification and regression problems.
- It works by finding the line, or more generally the hyperplane, that separates the training data into classes.
- In SVM we plot each data point in n-dimensional space, where n is the number of features and the value of each feature is a coordinate. Classification is then performed by finding the hyperplane that differentiates the two classes.
- Support Vector Machines try to maximize the distance between the classes involved; this is called margin maximization. If the line with the largest margin between the two classes is identified, there is a good chance that the SVM will generalize well to unseen data.
- SVMs are categorized as follows –
# Linear SVM’s-
☞ Linear SVMs are classifiers that separate the training data with a straight hyperplane in the original feature space. They work well when the classes are linearly separable, so the hyperplane can be placed directly between the two classes without transforming the features.
# Non-Linear SVM’s-
☞ In non-linear SVMs the training data cannot be separated by a hyperplane in the original input space.
☞ An SVM kernel is a function that takes a low-dimensional input space and transforms it into a higher-dimensional space, so that data points which are not separable become separable. Computing these extra features implicitly, instead of adding them manually, is known as the Kernel Trick.
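The difference between linear and non-linear SVMs can be seen with a small, illustrative sketch using scikit-learn (the dataset and parameters here are my own choices, not from the post): on concentric circles, which no straight hyperplane can separate, an RBF-kernel SVM should clearly beat a linear one.

```python
# Minimal sketch: linear vs. non-linear SVM on data that is not
# linearly separable (two concentric circles).
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two concentric circles: no straight line can separate the classes.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Linear SVM: searches for a separating hyperplane in the original space.
linear_svm = SVC(kernel="linear").fit(X_train, y_train)

# RBF kernel: the kernel trick implicitly maps the points to a
# higher-dimensional space where they become separable.
rbf_svm = SVC(kernel="rbf").fit(X_train, y_train)

print("linear kernel accuracy:", linear_svm.score(X_test, y_test))
print("rbf kernel accuracy:", rbf_svm.score(X_test, y_test))
```

On this data the linear kernel hovers near chance level, while the RBF kernel separates the circles almost perfectly.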
Advantages of Support Vector Machines
- SVM offers strong classification performance (accuracy) on the training data.
- SVM generalizes well, so it tends to classify future (unseen) data correctly.
- SVM does not make strong assumptions about the underlying data distribution.
- Thanks to margin maximization, SVM is comparatively resistant to over-fitting, even in high-dimensional spaces.
Disadvantages of Support Vector Machines
- Training time is high on large, high-dimensional datasets.
- It does not give good results when there is noise in the dataset (i.e., when the classes overlap).
- SVM does not directly provide probability estimates; they have to be computed using an expensive internal (k-fold) cross-validation.
- SVM models are somewhat harder to visualize and interpret because of the complexity of the formulation.
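To make the probability-estimate caveat concrete, here is a small sketch (dataset and parameters are illustrative) using scikit-learn's `SVC` with `probability=True`, which fits the extra calibration step via internal cross-validation and therefore slows training down:

```python
# Sketch: SVM probability estimates require an extra calibration step.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# probability=True triggers internal cross-validation to calibrate
# the decision values into probabilities (this makes training slower).
clf = SVC(probability=True, random_state=0).fit(X, y)

proba = clf.predict_proba(X[:3])
print(proba)  # one row per sample; each row sums to 1
```

Without `probability=True`, `SVC` only exposes raw decision values via `decision_function`, not class probabilities.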
Applications of Support Vector Machine
- Stock Market forecasting by various financial institutions.
- Face detection: classifying parts of an image as face or non-face and drawing a bounding box around each face.
- Biomedical imaging, such as the classification of cancer cells and proteins.
- Handwriting recognition: SVMs are widely used to recognize handwritten characters.
- Classification of text documents.
- Crucial for cases where very high predictive power is required.
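As a taste of the text-classification use case, here is a tiny, illustrative sketch (the documents, labels, and pipeline are my own toy example) that feeds TF-IDF features into a linear SVM with scikit-learn:

```python
# Toy sketch: text classification with TF-IDF features and a linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny illustrative corpus (real applications use thousands of documents).
docs = [
    "cheap pills buy now",
    "meeting at noon tomorrow",
    "win money fast today",
    "project status report attached",
]
labels = ["spam", "ham", "spam", "ham"]

# TF-IDF turns each document into a sparse feature vector;
# LinearSVC then finds a separating hyperplane in that space.
clf = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(docs, labels)

print(clf.predict(["buy cheap pills"]))
```

High-dimensional sparse text features are exactly the regime where linear SVMs tend to shine, which is why they remain a standard baseline for document classification.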
ℹ Scikit learn implementation of Support Vector Machines Algorithm can be found here.
ℹ R Implementation of Support Vector Machines Algorithm can be found here.
With the above info, I hope you have a better understanding of the Support Vector Machines algorithm and will be able to crack any interview question related to SVMs.
If you like this post, please follow me. If you notice any mistakes in the reasoning, formulas, animations, or code, please let me know.