A “loss function” is a mathematical function that quantifies the difference between predicted values and actual values in a machine learning or statistical model.

It serves as a measure of how well the model is performing. The goal during the training of a model is to minimize the value of the loss function.

Here are expressions for different cases of loss functions:

**Mean Squared Error (MSE)**:

\[ \text{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 \]

where \( n \) is the number of samples, \( y_i \) is the actual value, and \( \hat{y}_i \) is the predicted value.

**Mean Absolute Error (MAE)**:

\[ \text{MAE} = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i| \]

**Cross-Entropy Loss (Logarithmic Loss) for Binary Classification**:

\[ \text{CrossEntropy} = -\frac{1}{n} \sum_{i=1}^{n} \left[ y_i \cdot \log(\hat{y}_i) + (1 - y_i) \cdot \log(1 - \hat{y}_i) \right] \]

where \( y_i \) is the true label (0 or 1), and \( \hat{y}_i \) is the predicted probability of the positive class.

**Hinge Loss for Support Vector Machines (SVM)**:

\[ \text{HingeLoss} = \frac{1}{n} \sum_{i=1}^{n} \max(0, 1 - y_i \cdot \hat{y}_i) \]

where \( y_i \) is the true class label (either -1 or 1), and \( \hat{y}_i \) is the predicted score.
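As an illustrative sketch, all four losses above can be computed directly in plain Python (the function names here are my own, not from any particular library):

```python
import math

def mse(y, y_hat):
    """Mean Squared Error: average of squared residuals."""
    return sum((yi - pi) ** 2 for yi, pi in zip(y, y_hat)) / len(y)

def mae(y, y_hat):
    """Mean Absolute Error: average of absolute residuals."""
    return sum(abs(yi - pi) for yi, pi in zip(y, y_hat)) / len(y)

def binary_cross_entropy(y, p_hat):
    """Cross-entropy for binary labels y in {0, 1} and predicted probabilities."""
    return -sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
                for yi, pi in zip(y, p_hat)) / len(y)

def hinge_loss(y, scores):
    """Hinge loss for labels y in {-1, +1} and raw predicted scores."""
    return sum(max(0.0, 1 - yi * si) for yi, si in zip(y, scores)) / len(y)

print(mse([1, 2, 3], [1, 2, 4]))                 # (0 + 0 + 1) / 3 = 1/3
print(mae([1, 2, 3], [1, 2, 4]))                 # 1/3
print(binary_cross_entropy([1, 0], [0.9, 0.1]))  # -log(0.9), about 0.105
print(hinge_loss([1, -1], [0.8, -2.0]))          # (0.2 + 0.0) / 2 = 0.1
```

Note how each function is a direct transcription of its formula: a per-sample term, summed and divided by \( n \).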

These are just a few examples, and the choice of the loss function depends on the nature of the problem (regression, classification, etc.) and the desired characteristics of the model.
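To illustrate the point that training minimizes the loss, here is a minimal gradient-descent sketch (the data, learning rate, and iteration count are made-up illustrative choices) that fits a one-parameter model \( \hat{y} = w x \) by repeatedly stepping against the gradient of the MSE:

```python
# Toy data generated by y = 2x, so the loss-minimizing weight is w = 2.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

w = 0.0    # initial guess
lr = 0.05  # learning rate (illustrative choice)
n = len(xs)

for _ in range(200):
    # d(MSE)/dw = (2/n) * sum((w*x_i - y_i) * x_i)
    grad = (2 / n) * sum((w * x - y) * x for x, y in zip(xs, ys))
    w -= lr * grad  # step opposite the gradient to reduce the loss

print(round(w, 4))  # converges to ~2.0
```

Swapping MSE for another differentiable loss only changes the `grad` line; the descent loop itself stays the same, which is why the loss function is treated as a pluggable component of training.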