Log Loss

Log loss, also known as cross-entropy loss, is a performance metric for classification tasks. It measures how far the predicted probabilities are from the actual outcomes.
For example, in a binary classification task where the classes are “positive” and “negative”, a predicted probability is produced for each class. If the actual outcome is “positive”, the log loss for that example is:
-log(predicted probability of the positive class)
If the predicted probability of the positive class is 0.9, the log loss would be -log(0.9) ≈ 0.105. A lower log loss indicates that the predicted probabilities are closer to the actual outcomes.
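As a quick check of the binary example above, here is a minimal Python sketch; the helper name binary_log_loss is illustrative, not from any particular library:

```python
import math

def binary_log_loss(y_true, p_positive):
    """Log loss for a single binary example.

    y_true: 1 if the actual outcome is the positive class, else 0.
    p_positive: predicted probability of the positive class.
    """
    if y_true == 1:
        return -math.log(p_positive)
    return -math.log(1.0 - p_positive)

# Actual outcome is "positive" and the model predicts 0.9 for the positive class.
print(binary_log_loss(1, 0.9))  # ~0.105
```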
In another example, consider a multi-class classification task with 3 classes (A, B, C). A predicted probability is produced for each class, and the actual outcome is one-hot encoded (1 for the true class, 0 for the others). The log loss is calculated as:
-Sum(one-hot actual outcome * log(predicted probability of each class))
For example, if the actual outcome is “A” (one-hot encoded as [1, 0, 0]) and the predicted probabilities are [0.1, 0.3, 0.6], the log loss would be -(1 * log(0.1) + 0 * log(0.3) + 0 * log(0.6)) = -log(0.1) ≈ 2.30.
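The same multi-class calculation can be sketched in a few lines of Python (again, the helper name multiclass_log_loss is illustrative):

```python
import math

def multiclass_log_loss(y_true_one_hot, predicted_probs):
    """Log loss for a single example: -sum(y_k * log(p_k)) over classes k."""
    return -sum(y * math.log(p) for y, p in zip(y_true_one_hot, predicted_probs))

# Actual outcome is class "A" (one-hot [1, 0, 0]); predicted probabilities for A, B, C.
print(multiclass_log_loss([1, 0, 0], [0.1, 0.3, 0.6]))  # ~2.30
```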
In summary, log loss heavily penalizes confident incorrect predictions and rewards confident correct predictions with a small loss. It is commonly used as a performance metric in machine learning competitions because it is sensitive to predicted probabilities near 0 and 1, which is important in classification tasks.
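In practice the metric is usually computed with a library rather than by hand; for instance, scikit-learn provides sklearn.metrics.log_loss, which averages the per-example log loss over a dataset:

```python
from sklearn.metrics import log_loss

# True labels and predicted class probabilities (columns ordered A, B, C).
y_true = ["A", "B", "C", "A"]
y_pred = [
    [0.7, 0.2, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.2, 0.7],
    [0.1, 0.3, 0.6],  # the example above: true class A predicted with only 0.1
]

print(log_loss(y_true, y_pred, labels=["A", "B", "C"]))
```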