

Let's understand what bias and variance are in a machine learning model.
Before proceeding further, let's recap the important components of building a machine learning model.
To build a model we need a training dataset, a development dataset and a test dataset. Once we have collected, cleaned and organised an appropriate dataset, we build our ML model. The outcome will be an accuracy value on both the training set and the development set.
Say we get an accuracy of 75% on the training set and 40% on the development set. These are really not good numbers, but let's understand what they mean.
An accuracy of 75% on the training set means there is 25% error, and this error is the bias of the model on the training set.
An accuracy of 40% on the development set means 60% error. The 35% gap between training error and development error is called the variance of the model (the difference between performance on data the model has seen and data it has not).
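Here is a minimal Python sketch of that arithmetic, using the example numbers above:

```python
# Bias = error on the training set; variance = gap between train and dev accuracy.
train_accuracy = 0.75
dev_accuracy = 0.40

bias = 1.0 - train_accuracy               # 25% error on the training set
variance = train_accuracy - dev_accuracy  # 35% gap between train and dev

print(f"bias: {bias:.0%}, variance: {variance:.0%}")
# bias: 25%, variance: 35%
```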
There is a key parameter we haven't included so far, and haven't even defined in the first place. It is not easy to define in most cases either: the optimum error rate, also called the acceptable error rate. Setting an optimum error rate for a machine learning problem is tricky, since we do not have an existing performance to compare against. Let's try to define one now.
For a task humans can perform, we can measure human performance and use the human error rate as the optimum error rate. For example, if I can identify a cat in an image correctly every single time, my error is 0, and the optimum error rate is 0 as well.
If I am given blurry images or images in shadow, I will not get it right every time. If my error rate is, say, 5%, then the optimum error rate of the system becomes 5%.
Setting this at the beginning lets us break down the bias more accurately.
If the training set accuracy is 75% and the optimum error rate is 5%, then the avoidable bias is 25% − 5% = 20%. This is the part we can improve.
Now, the relationship between bias and variance:
When bias is high, there is huge scope for improving the model itself.
When variance is high, there is huge scope for improvement by training the model with more data, provided the bias is acceptable and within limits.
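Putting the breakdown together, here is a small helper that follows these rules of thumb. It is an illustrative sketch under the definitions above, not a standard API; the function name and the "which to fix first" comparison are my assumptions:

```python
def diagnose(train_acc, dev_acc, optimal_error=0.0):
    """Break model error into avoidable bias and variance.

    optimal_error is the optimum (e.g. human-level) error rate.
    The "which to fix first" rule below is a rough heuristic.
    """
    train_error = 1.0 - train_acc
    dev_error = 1.0 - dev_acc
    avoidable_bias = train_error - optimal_error
    variance = dev_error - train_error
    if avoidable_bias > variance:
        focus = "reduce bias (bigger model, better features)"
    else:
        focus = "reduce variance (more data, regularization)"
    return avoidable_bias, variance, focus

# Running example: 75% train accuracy, 40% dev accuracy, 5% optimum error rate.
bias, var, focus = diagnose(0.75, 0.40, optimal_error=0.05)
print(f"avoidable bias: {bias:.0%}, variance: {var:.0%} -> {focus}")
# avoidable bias: 20%, variance: 35% -> reduce variance (more data, regularization)
```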
How to reduce bias?
To reduce bias, we can do the following (a quick code sketch follows the list):
Make the model more complex so it can fit the data better.
Modify the input features: study feature importances and add or remove features accordingly.
Reduce or eliminate regularization.
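As a concrete illustration of those three levers, here is a sketch using scikit-learn's MLPClassifier; the layer sizes and alpha values are assumed for the example, not recommendations:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.neural_network import MLPClassifier

# A high-bias baseline: a small network with strong L2 regularization (alpha).
small_model = MLPClassifier(hidden_layer_sizes=(8,), alpha=1.0)

# Lever 1: make the model more complex so it can fit the data better.
# Lever 3: dial regularization down (alpha near 0 effectively eliminates it).
bigger_model = MLPClassifier(hidden_layer_sizes=(128, 64), alpha=1e-5)

# Lever 2: use feature importances to decide which inputs to keep.
selector = SelectFromModel(RandomForestClassifier(n_estimators=100))
# selector.fit(X_train, y_train); X_reduced = selector.transform(X_train)
```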
How to reduce variance?
To reduce variance, the first and most effective method is to increase the amount of training data.
If we do not have the liberty of collecting more data, we can (a sketch follows the list):
Add regularization.
Reduce the number of nodes and/or layers.
Reduce the number of features.
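The same knobs can be turned the other way; again an illustrative sketch, with assumed values for alpha, the layer size and k:

```python
from sklearn.feature_selection import SelectKBest
from sklearn.neural_network import MLPClassifier

# Add regularization: raise the L2 penalty (alpha).
# Reduce nodes and/or layers: a single, smaller hidden layer.
regularized_model = MLPClassifier(hidden_layer_sizes=(32,), alpha=0.1)

# Reduce the number of features: keep only the k most informative ones.
selector = SelectKBest(k=20)
# selector.fit(X_train, y_train); X_small = selector.transform(X_train)
```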
But doing these may in turn increase the bias. As a tradeoff, use a larger network so that the bias is not affected too quickly, and maintain a balance between bias and variance.
The reason for maintaining a balance between the two is to avoid either overfitting or underfitting the ML model.
Overfitting — the model works well on the training data only; on the evaluation set its results are poor.
Underfitting — the model works poorly on the training data, and yields similarly poor results on the evaluation set.
If the model yields good results on both the training set and the development set, then there is neither overfitting nor underfitting.