Underfitting

What is Underfitting?

Underfitting occurs when a machine learning model is not able to capture the underlying trend in the data. It can happen for a variety of reasons, including using a model that is too simple for the data or not having enough data to train it. Underfitting leads to poor performance not only on unseen data but also on the training data itself, which is the telltale sign that distinguishes it from overfitting.
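To make that symptom concrete, here is a minimal sketch using scikit-learn on a made-up, sine-shaped dataset; the data and numbers are purely illustrative, but the pattern of a low training score and a low test score is what underfitting looks like in practice.

```python
# A linear model fit to clearly non-linear (sine-shaped) data scores poorly
# on the training set AND on held-out data -- the signature of underfitting.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(300, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=300)  # non-linear target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LinearRegression().fit(X_train, y_train)

# Both scores are low. (An overfit model, by contrast, would score high on
# the training data but low on the test data.)
print("Train R^2:", model.score(X_train, y_train))
print("Test  R^2:", model.score(X_test, y_test))
```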
Here are two examples of underfitting:
Example 1: Predicting Housing Prices
Imagine you are trying to build a machine learning model to predict housing prices in a certain city. You have a dataset with information on various houses in the city, including their size, number of bedrooms, and location. You decide to use a linear regression model to make the predictions, as it is a simple and well-understood model.
However, after training the model and testing it on a set of unseen data, you find that the model is not very accurate. It consistently underpredicts the prices of houses in certain neighborhoods, and overpredicts the prices of houses in others.
Upon closer examination, you realize that the housing prices in the city are not well-represented by a linear trend. Instead, the prices are influenced by a variety of factors, such as the quality of schools in the area, the proximity to amenities like parks and restaurants, and the overall desirability of the neighborhood. A linear regression model, which assumes the price is a simple straight-line combination of its inputs, cannot capture these non-linear effects and interactions, and therefore underfits the data.
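The sketch below mimics this situation on synthetic data; the feature names and the pricing formula are assumptions chosen for illustration, not a real housing dataset. The interaction between size and neighborhood desirability is exactly the kind of structure a plain linear fit cannot express.

```python
# Synthetic housing-like data: price depends non-linearly on size and on a
# neighborhood "desirability" score, so a plain linear fit misses structure.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
size_sqft = rng.uniform(500, 4000, n)
bedrooms = rng.integers(1, 6, n)
desirability = rng.uniform(0, 1, n)  # stand-in for schools, amenities, etc.

# Assumed pricing rule: desirable neighborhoods amplify the price per sqft.
price = (100 * size_sqft * (1 + 2 * desirability**2)
         + 5000 * bedrooms
         + rng.normal(scale=20000, size=n))

X = np.column_stack([size_sqft, bedrooms, desirability])
X_train, X_test, y_train, y_test = train_test_split(X, price, random_state=1)

linear = LinearRegression().fit(X_train, y_train)
# The size-by-desirability interaction is left unexplained by the linear fit.
print("Linear model test R^2:", linear.score(X_test, y_test))
```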
Example 2: Classifying Email Spam
Another example of underfitting is in the task of classifying email spam. You have a dataset of emails, each labeled as either spam or not spam. You decide to use a decision tree classifier to make the predictions, as it is a simple and interpretable model.
However, after training the model and testing it on a set of unseen emails, you find that the model is not very accurate. It frequently misclassifies spam emails as not spam, and vice versa.
Upon closer examination, you realize that the model is not able to capture the nuances in the emails that distinguish spam from non-spam. For example, spam emails often contain certain words or phrases that are not commonly found in non-spam emails, such as “earn money fast” or “double your income.” A heavily constrained decision tree, for example one limited to only a few splits, considers just a handful of words and cannot capture these patterns, and therefore underfits the data.
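The sketch below illustrates the same idea with a deliberately restricted tree, a single-split “stump,” trained on a handful of made-up emails; the messages, labels, and depth limit are all assumptions for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Tiny, made-up email set (1 = spam, 0 = not spam).
emails = [
    "earn money fast with this offer",
    "double your income working from home",
    "free prize claim now",
    "meeting moved to thursday afternoon",
    "please review the attached report",
    "lunch on friday with the team",
]
labels = [1, 1, 1, 0, 0, 0]

# max_depth=1 allows exactly one split, so the classifier can look at only a
# single word -- far too little capacity for this task.
stump = make_pipeline(
    CountVectorizer(),
    DecisionTreeClassifier(max_depth=1, random_state=0),
)
stump.fit(emails, labels)

# With only one word to go on, the stump cannot represent the many phrases
# that signal spam, so its judgments on new messages are unreliable.
print(stump.predict(["claim your free prize today",
                     "quarterly report attached for review"]))
```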
In both of these examples, the models were unable to capture the underlying trends in the data and therefore underfit it, which led to poor performance and inaccurate predictions. To remedy underfitting, you may need to use a more complex model, engineer more informative features, or increase the amount of data available for training.
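As a rough illustration of the first remedy, the sketch below gives the same kind of non-linear data to two models: a plain linear regression and a linear regression on polynomial features. The dataset and the polynomial degree are arbitrary choices for demonstration; the point is only that added capacity closes the gap left by an underfit model.

```python
# Increasing model capacity (here via polynomial features) lets the model
# capture the curve that the plain linear fit missed.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(300, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=300)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

simple = LinearRegression().fit(X_train, y_train)
flexible = make_pipeline(
    PolynomialFeatures(degree=7),
    LinearRegression(),
).fit(X_train, y_train)

print("Linear     test R^2:", simple.score(X_test, y_test))    # low: underfits
print("Polynomial test R^2:", flexible.score(X_test, y_test))  # much higher
```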