Feature Scaling

The predictors in a dataset often have very different magnitudes. For example, in a ‘user’ dataset the ‘age’ feature will have positive values, normally in single or double digits, but if the same dataset also contains salary, its values can easily be in five or six figures. We will discuss some techniques to normalise the variables so that all features have the same or similar […]
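As a quick illustration of the idea (a minimal sketch, not code from the article itself), min-max scaling rescales each feature to the [0, 1] range so that ‘age’ and ‘salary’ become comparable in magnitude:

```python
import numpy as np

def min_max_scale(x):
    """Rescale a numeric array linearly to the [0, 1] range."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min)

# Hypothetical 'user' data: two features on very different scales.
ages = np.array([18, 25, 40, 60])
salaries = np.array([25_000, 48_000, 90_000, 150_000])

print(min_max_scale(ages))      # both columns now span [0, 1]
print(min_max_scale(salaries))
```

Scikit-learn offers the same transform as `sklearn.preprocessing.MinMaxScaler`, which also remembers the training-set min and max for reuse on new data.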


Handling Outliers

Outliers are values that differ extremely from the other values in the dataset. To work with outliers we have to answer two questions: first, how do we define an outlier, and second, how do we handle outliers? Let’s take a look at the two questions separately. Outlier Identification Before handling outliers, it is important to first establish which data points […]
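One common identification rule (shown here as a minimal sketch, not the article's own code) is Tukey's IQR fence: flag any point that falls outside [Q1 − 1.5·IQR, Q3 + 1.5·IQR]:

```python
import numpy as np

def iqr_outliers(x, k=1.5):
    """Return a boolean mask flagging points outside Tukey's fences."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return (x < lower) | (x > upper)

values = np.array([10, 12, 11, 13, 12, 95])
print(values[iqr_outliers(values)])  # → [95.]
```

Raising `k` (3.0 is a common choice) makes the rule more conservative, flagging only the most extreme points.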


Encoding Categorical Variables

Machine learning models cannot train on categorical variables directly, so the variables need to be encoded into a numerical format. In this article we will discuss different encoding techniques. One Hot Encoding In this technique we replace each categorical variable with multiple dummy variables, where the number of new variables depends on the cardinality of the categorical variable. The dummy variables take binary values where […]
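As a brief sketch of one-hot encoding (using a made-up ‘city’ column, not data from the article), pandas' `get_dummies` turns a single categorical column of cardinality 3 into 3 binary dummy columns:

```python
import pandas as pd

# Hypothetical categorical feature with cardinality 3.
df = pd.DataFrame({"city": ["London", "Paris", "London", "Rome"]})

# Each category becomes its own binary column.
dummies = pd.get_dummies(df["city"], prefix="city")
print(dummies)
```

Scikit-learn's `sklearn.preprocessing.OneHotEncoder` does the same job inside a pipeline; with `get_dummies(..., drop_first=True)` you can drop one dummy to avoid perfectly collinear columns in linear models.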


Missing Data Imputation

The most common issue faced during feature engineering is handling missing data. It is important to handle missing data because otherwise machine learning libraries like Scikit-learn will not be able to work with your data. Before we look at the various ways to handle missing data, we first need to analyse its causes and patterns. Causes can be several, ranging […]
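As a minimal sketch of the simplest imputation strategy (a baseline, not the article's recommended method), mean imputation replaces each missing value with the mean of the observed values:

```python
import numpy as np

def impute_mean(x):
    """Replace NaNs with the mean of the observed values."""
    x = np.asarray(x, dtype=float)
    mean = np.nanmean(x)  # mean computed over non-missing entries only
    return np.where(np.isnan(x), mean, x)

# Hypothetical 'age' column with two missing entries.
ages = np.array([22, np.nan, 35, 41, np.nan])
print(impute_mean(ages))
```

Scikit-learn packages this (and median/most-frequent variants) as `sklearn.impute.SimpleImputer`; mean imputation is easy but shrinks the variance of the feature, which is one reason the analysis of causes and patterns matters.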
