Below is the simplest machine learning formula. The algorithm is based on the basic straight-line equation taught in high school:
Y = AX + B
Have you forgotten it? That's fine; it is a very simple formula. In this tutorial I explain why this simple formula can be used to make predictions.
The above formula only works on a single-variable data set, but in real life most data sets have multiple variables. You can develop multivariate algorithms from the same simple formula.
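To make this concrete, here is a minimal single-variable sketch: fitting the slope A and intercept B of Y = AX + B by least squares. The data points are made up for illustration.

```python
# Least-squares fit of y = a*x + b on a tiny made-up data set.

def fit_line(xs, ys):
    """Return slope a and intercept b minimising squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # a = covariance(x, y) / variance(x), the classic closed form
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]          # exactly y = 2x + 1
a, b = fit_line(xs, ys)
print(a, b)                     # → 2.0 1.0
```

Once a and b are learned, predicting for a new input is just evaluating the line: `a * x_new + b`.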
Polynomial regression is the sister of linear regression, but it can capture the relationship between the input and output variables more accurately, even when that relationship is not linear.
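One way to see why it is the "sister" of linear regression: polynomial regression is just linear regression on expanded features. The sketch below fits y = c0 + c1·x + c2·x² by solving the normal equations directly; the data set is synthetic and follows y = x² + 1 exactly.

```python
# Polynomial regression as linear regression on features [1, x, x^2].

def solve3(M, v):
    """Solve a 3x3 linear system M c = v by Gaussian elimination."""
    n = 3
    A = [row[:] + [v[i]] for i, row in enumerate(M)]  # augmented matrix
    for col in range(n):
        # pick the row with the largest pivot, then eliminate below it
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    c = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        c[r] = (A[r][n] - sum(A[r][k] * c[k] for k in range(r + 1, n))) / A[r][r]
    return c

xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [x ** 2 + 1.0 for x in xs]     # a perfect quadratic, no noise

# Build the normal equations (X^T X) c = X^T y.
feats = [[1.0, x, x * x] for x in xs]
XtX = [[sum(f[i] * f[j] for f in feats) for j in range(3)] for i in range(3)]
Xty = [sum(f[i] * y for f, y in zip(feats, ys)) for i in range(3)]

c0, c1, c2 = solve3(XtX, Xty)
print(round(c0, 6), round(c1, 6), round(c2, 6))   # → 1.0 0.0 1.0
```

Adding the x² column is the only thing that changed relative to plain linear regression; the fitting machinery is identical.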
Logistic regression
Logistic regression is built on linear regression and uses the same simple linear equation. It is a widely used, powerful, and popular machine learning algorithm that predicts categorical variables. Starting from the concept of binary classification, logistic regression can be extended to multi-class classification tasks.
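Here is a minimal sketch of that idea: the same A·X + B line is passed through a sigmoid to give a probability, and the two parameters are trained by gradient descent on the log-loss. The tiny 1-D data set and learning rate are made up for illustration.

```python
import math

# Minimal logistic regression for binary classification,
# trained by stochastic gradient descent on a made-up data set.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

xs = [0.5, 1.0, 1.5, 3.0, 3.5, 4.0]
ys = [0, 0, 0, 1, 1, 1]          # class 0 for small x, class 1 for large x

a, b = 0.0, 0.0                  # the same a*x + b, now inside a sigmoid
lr = 0.5
for _ in range(2000):
    for x, y in zip(xs, ys):
        p = sigmoid(a * x + b)
        # gradient of the log-loss with respect to a and b
        a -= lr * (p - y) * x
        b -= lr * (p - y)

predictions = [1 if sigmoid(a * x + b) >= 0.5 else 0 for x in xs]
print(predictions)               # → [0, 0, 0, 1, 1, 1]
```

The multi-class extension mentioned above is typically done by training one such binary classifier per class (one-vs-rest) or by replacing the sigmoid with a softmax.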
Neural networks are becoming more and more popular today. A neural network can handle much more complex data sets. It also builds on the same straight-line equation, but developing this algorithm is far more involved than the previous ones.
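To show how the straight-line equation reappears: each neuron computes the same weighted sum plus bias, then applies a non-linear activation, and a network stacks several neurons. The weights below are hand-picked (not trained) purely to illustrate the forward pass, using the classic XOR function, which no single straight line can separate.

```python
import math

# A single neuron = the linear part (weights . inputs + bias)
# followed by a sigmoid activation.
def neuron(inputs, weights, bias):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias  # the linear part
    return 1.0 / (1.0 + math.exp(-z))                        # the activation

# A two-layer forward pass computing XOR with hand-set weights.
def xor_net(x1, x2):
    h1 = neuron([x1, x2], [20, 20], -10)    # behaves like OR
    h2 = neuron([x1, x2], [-20, -20], 30)   # behaves like NAND
    return neuron([h1, h2], [20, 20], -30)  # AND of the two hidden units

print(round(xor_net(0, 0)), round(xor_net(0, 1)),
      round(xor_net(1, 0)), round(xor_net(1, 1)))   # → 0 1 1 0
```

In a real network these weights would be learned by backpropagation rather than set by hand; that training procedure is where most of the extra complexity lives.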
If your ML algorithm performs poorly
What if you spend all your time developing an algorithm and it does not solve the problem as you intended? How can you fix it? First you need to find out where the problem is. Is the algorithm wrong? Is there insufficient data to train the model? Do you need more features? That looks like a lot of possibilities, right? But if you don't diagnose the problem first and then move in the right direction, you will waste countless hours.
On the other hand, a heavily skewed data set is another challenge. For example, suppose you are solving a classification problem in which 95% of the examples are positive and only 5% are negative. If you simply guess that every prediction is positive, you already get 95% accuracy. So if an ML algorithm reports 90% accuracy on this data, is it really doing well? Even blind guessing is more accurate.
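The arithmetic behind that trap is worth seeing once. With the 95/5 split above, the constant "always positive" guess reaches 95% accuracy while catching none of the rare negative cases, which is why metrics like per-class recall matter on skewed data:

```python
# Why raw accuracy misleads on a skewed data set.

labels = [1] * 95 + [0] * 5            # 95 positives, 5 negatives

guesses = [1] * len(labels)            # guess "positive" every time
accuracy = sum(g == y for g, y in zip(guesses, labels)) / len(labels)

# Recall on the rare negative class: how many negatives did we catch?
caught = sum(1 for g, y in zip(guesses, labels) if y == 0 and g == 0)
negative_recall = caught / 5

print(accuracy, negative_recall)       # → 0.95 0.0
```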
This is one of the oldest and most popular unsupervised learning algorithms. It does not make predictions like the previous ones; it simply groups data based on similarity. It is more like trying to understand the current data more effectively. Then, when the algorithm receives new data, it determines which cluster the new data belongs to based on its characteristics. This algorithm has other important applications as well: it can be used to reduce the dimensionality of images.
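A minimal sketch of the clustering idea, using k-means (the classic example of this family) on made-up 1-D data with two obvious groups. It alternates two steps: assign each point to its nearest center, then move each center to the mean of its assigned points.

```python
# A minimal k-means sketch on made-up 1-D data.

def kmeans(points, centers, iters=10):
    clusters = [[] for _ in centers]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Update step: move each center to its cluster's mean.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
centers, clusters = kmeans(points, centers=[0.0, 10.0])
print(sorted(centers))        # → two centers near 1.0 and 9.0
```

Classifying a new point is then just the assignment step again: pick the nearest learned center.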
Think about it: to train an image classification model, we need to feed a large number of pictures into the algorithm. Ultra-high-resolution pictures place a heavy burden on it and slow down the entire training process (do we really need such a high resolution just to recognize what the picture shows?). In this case, lower-dimensional images let us process the data much faster. This is just one example; you can imagine many other uses along the same lines.
Anomaly detection is another core task of machine learning. It is used for credit card fraud detection, manufacturing defect detection, and even cancer cell detection. It can be done with a Gaussian (normal) distribution, or with something as simple as a probability formula.
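A minimal sketch of the Gaussian approach: fit a normal distribution to measurements known to be normal, then flag any point whose probability density falls below a threshold. The data set and the threshold epsilon below are made up for illustration.

```python
import math

# Gaussian anomaly detection on a made-up set of "normal" measurements.

def gaussian_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) \
        / (sigma * math.sqrt(2 * math.pi))

normal_data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0]
mu = sum(normal_data) / len(normal_data)
sigma = math.sqrt(sum((x - mu) ** 2 for x in normal_data) / len(normal_data))

epsilon = 0.01                     # flag anything this unlikely (hand-chosen)
def is_anomaly(x):
    return gaussian_pdf(x, mu, sigma) < epsilon

print(is_anomaly(10.0), is_anomaly(15.0))   # → False True
```

In practice epsilon is tuned on a validation set containing a few known anomalies, and with multiple features you would fit one Gaussian per feature (or a multivariate Gaussian).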
Recommendation systems are everywhere. If you buy something on 1688, it recommends other products you might like; Station B recommends videos you might enjoy; Facebook recommends people you may know. As you can see, they really are everywhere.