## Learning to write custom loss functions using wrapper functions and OOP in Python

A neural network learns to map a set of inputs to a set of outputs from training data. It does so using an optimization algorithm such as gradient descent, stochastic gradient descent, AdaGrad, AdaDelta, or a more recent algorithm such as Adam, Nadam, or RMSProp. The 'gradient' in gradient descent refers to the error gradient: after each iteration, the network compares its predicted outputs to the true outputs and calculates the 'error'. Typically, with neural networks, we seek to minimize this error. The objective function used to minimize the error is therefore referred to as a cost function or loss function, and the value it calculates is referred to simply as the 'loss'. Typical loss functions used in various problems include:

a. Mean Squared Error

b. Mean Squared Logarithmic Error

c. Binary Crossentropy

d. Categorical Crossentropy

e. Sparse Categorical Crossentropy
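To make two of these concrete, Mean Squared Error and Binary Crossentropy can be sketched in plain Python with hand-picked numbers (a minimal illustration of the formulas, not the TensorFlow implementations):

```python
import math

def mean_squared_error(y_true, y_pred):
    # Average of the squared differences between targets and predictions
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_crossentropy(y_true, y_pred):
    # Negative log-likelihood of binary labels under predicted probabilities
    return -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for t, p in zip(y_true, y_pred)) / len(y_true)

print(mean_squared_error([1.0, 2.0, 3.0], [1.5, 2.0, 2.0]))  # 0.41666...
print(binary_crossentropy([1, 0, 1], [0.9, 0.1, 0.8]))
```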

In TensorFlow, these loss functions are already included, and we can call them as shown below.

1. Loss function as a string

```python
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
```

or,

2. Loss function as an object

```python
from tensorflow.keras.losses import mean_squared_error

model.compile(loss=mean_squared_error, optimizer='sgd')
```

The advantage of using a loss object is that we can pass parameters along with it, such as a threshold. Note that `mean_squared_error` is a plain function and takes no such parameters; for a parameterized loss we use one of the loss classes, for example `Huber`, whose `delta` parameter is the threshold between its quadratic and linear regions.

```python
from tensorflow.keras.losses import Huber

model.compile(loss=Huber(delta=1.0), optimizer='sgd')
```
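The pattern behind such parameterized loss objects can be sketched in plain Python: the class stores the parameter at construction time, and uses it each time the object is called with `y_true` and `y_pred`. This is a simplified illustration of the idea, not the actual Keras `Loss` API:

```python
class HuberLoss:
    """Simplified Huber-style loss: quadratic for small errors, linear for large ones."""

    def __init__(self, delta=1.0):
        # The threshold is stored on the object, so it travels with the loss
        self.delta = delta

    def __call__(self, y_true, y_pred):
        total = 0.0
        for t, p in zip(y_true, y_pred):
            error = abs(t - p)
            if error <= self.delta:
                total += 0.5 * error ** 2                         # quadratic region
            else:
                total += self.delta * (error - 0.5 * self.delta)  # linear region
        return total / len(y_true)

loss_fn = HuberLoss(delta=1.0)
print(loss_fn([1.0, 2.0], [1.5, 4.0]))  # 0.8125
```

Because the parameter lives on the object, the training loop only ever sees a callable of two arguments, which is exactly the shape Keras expects.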

To create a loss using a function, we first name the loss function; it must accept two parameters, y_true (the true label/output) and y_pred (the predicted label/output).

```python
def loss_function(y_true, y_pred):
    # ***some calculation***
    return loss
```

As an example, we will write a loss function named `my_rmse`. The aim is to return the root mean square error between the target (`y_true`) and the prediction (`y_pred`).
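Before writing the TensorFlow version, the calculation itself can be sketched in plain Python (the actual implementation would use Keras backend operations so that it works on tensors):

```python
import math

def my_rmse(y_true, y_pred):
    # Root of the mean of the squared differences
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    return math.sqrt(mse)

print(my_rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # ~1.1547
```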

**Formula of RMSE:**
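Written out, with $n$ the number of samples:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_{\mathrm{true},i} - y_{\mathrm{pred},i}\right)^2}$$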