Bayesian methods of hyperparameter optimization

In addition to random search and grid search for selecting optimal hyperparameters, we can use Bayesian optimization to select the optimal hyperparameters for an algorithm.

In this case study, we will be using the BayesianOptimization library to perform hyperparameter tuning. The library has very good documentation, which you can find here: https://github.com/fmfn/BayesianOptimization

You will need to install the Bayesian optimization module. Starting a command with an exclamation point runs it as a shell command, so use this to install the module from our notebook in the cell below.

In [1]:
#! pip install bayesian-optimization
In [2]:
import warnings
warnings.filterwarnings('ignore')
from sklearn.preprocessing import LabelEncoder
import numpy as np
import pandas as pd
import lightgbm
from bayes_opt import BayesianOptimization
from catboost import CatBoostClassifier, cv, Pool
In [3]:
import os
os.listdir()
Out[3]:
['flight_delays_test.csv.zip',
 '.DS_Store',
 'Bayesian_optimization_case_study.ipynb',
 'flight_delays_train.csv.zip',
 '.ipynb_checkpoints']

How does Bayesian optimization work?

Bayesian optimization works by constructing a posterior distribution of functions (Gaussian process) that best describes the function you want to optimize. As the number of observations grows, the posterior distribution improves, and the algorithm becomes more certain of which regions in parameter space are worth exploring and which are not, as seen in the picture below.

As you iterate over and over, the algorithm balances its needs for exploration and exploitation while taking into account what it knows about the target function. At each step, a Gaussian process is fitted to the known samples (points previously explored), and the posterior distribution, combined with an exploration strategy such as UCB (Upper Confidence Bound) or EI (Expected Improvement), is used to determine the next point that should be explored (see the gif below).
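
To make the acquisition idea concrete, here is a minimal sketch (not part of this case study, and using scikit-learn's GaussianProcessRegressor rather than the internals of bayes_opt) of a UCB-style step: fit a Gaussian process to the points observed so far, then pick the candidate with the highest posterior mean plus kappa times the posterior standard deviation.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Points explored so far and their (made-up) target values
X_observed = np.array([[1.0], [2.0], [2.5]])
y_observed = np.array([1.2, 1.9, 2.1])

# Fit a Gaussian process to the known samples
gp = GaussianProcessRegressor().fit(X_observed, y_observed)

# Score a grid of candidates with a UCB-style acquisition:
# posterior mean + kappa * posterior standard deviation
candidates = np.linspace(0.0, 3.0, 100).reshape(-1, 1)
mu, sigma = gp.predict(candidates, return_std=True)
kappa = 2.0  # larger kappa favors exploration, smaller favors exploitation
next_point = candidates[np.argmax(mu + kappa * sigma)]
print(next_point)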

Let's look at a simple example

The first step is to create an optimizer. It needs two things:

  • the function to optimize
  • the bounds of its parameters

The function is the procedure that computes the quality metric of our model. The important thing is that the optimizer always maximizes the function's value, so if your metric is one where smaller is better (such as a loss), don't forget to return its negative.
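
For instance, a hypothetical objective built around a smaller-is-better metric (a squared error here) would simply return its negative, so that maximizing the returned value minimizes the error:

# Hypothetical example: the squared error is smaller-is-better, so we return
# its negative and let the optimizer maximize it (the best point is x = 2).
def neg_squared_error(x):
    return -(x - 2.0) ** 2

neg_opt = BayesianOptimization(neg_squared_error, {'x': (-5, 5)})
neg_opt.maximize(init_points=3, n_iter=5)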

Here we define the simple function we want to optimize.

In [4]:
def simple_func(a, b):
    return a + b

Now we define the bounds of the parameters to optimize and pass them, along with the function, to the Bayesian optimizer.

In [5]:
optimizer = BayesianOptimization(
    simple_func,
    {'a': (1, 3),
    'b': (4, 7)})

These are the main parameters of the optimizer's maximize method:

  • n_iter: This is how many steps of Bayesian optimization you want to perform. The more steps, the more likely you are to find a good maximum.

  • init_points: This is how many steps of random exploration you want to perform. Random exploration can help by diversifying the exploration space.

Let's run an example where the optimizer searches for the values of a and b that maximize the target, using 3 steps of random exploration (init_points=3) and 2 steps of Bayesian optimization (n_iter=2).

In [6]:
optimizer.maximize(init_points=3, n_iter=2)
|   iter    |  target   |     a     |     b     |
-------------------------------------------------
|  1        |  8.469    |  2.76     |  5.709    |
|  2        |  6.07     |  1.623    |  4.447    |
|  3        |  7.624    |  1.843    |  5.781    |
|  4        |  9.131    |  2.802    |  6.329    |
|  5        |  9.949    |  2.963    |  6.986    |
=================================================

Great, now let's print the best parameters and the associated maximized target. Since a + b can be at most 3 + 7 = 10 within these bounds, a target near 10 means the optimizer has done its job.

In [7]:
print(optimizer.max['params']);optimizer.max['target']
{'a': 2.9626873726514082, 'b': 6.986142465358723}
Out[7]:
9.94882983801013

Test it on real data using LightGBM

The dataset we will be working with is the famous flight departures dataset. Our modeling goal is to predict whether a flight's departure will be delayed by 15 minutes or more, based on the other attributes in our dataset. As part of this modeling exercise, we will use Bayesian hyperparameter optimization to identify the best parameters for our model.

You can load the zipped csv files just as you would regular csv files using Pandas read_csv. In the next cell, load the train and test data into two separate dataframes.

In [8]:
train_df = pd.read_csv('flight_delays_train.csv.zip')
test_df = pd.read_csv('flight_delays_test.csv.zip')

Print the top five rows of the train dataframe and review the columns in the data.

In [9]:
train_df.head()
Out[9]:
Month DayofMonth DayOfWeek DepTime UniqueCarrier Origin Dest Distance dep_delayed_15min
0 c-8 c-21 c-7 1934 AA ATL DFW 732 N
1 c-4 c-20 c-3 1548 US PIT MCO 834 N
2 c-9 c-2 c-5 1422 XE RDU CLE 416 N
3 c-11 c-25 c-6 1015 OO DEN MEM 872 N
4 c-10 c-7 c-6 1828 WN MDW OMA 423 Y

Use the describe function to review the numeric columns in the train dataframe.

In [10]:
train_df.describe()
Out[10]:
DepTime Distance
count 100000.000000 100000.00000
mean 1341.523880 729.39716
std 476.378445 574.61686
min 1.000000 30.00000
25% 931.000000 317.00000
50% 1330.000000 575.00000
75% 1733.000000 957.00000
max 2534.000000 4962.00000

Notice that DepTime is the departure time represented numerically in 24-hour hhmm format (for example, 1934 is a 7:34 PM departure).

The response variable is 'dep_delayed_15min', a categorical column, so we need to map its 'Y' (yes) and 'N' (no) values to 1 and 0. Run the code in the next cell to do this.

In [11]:
#train_df = train_df[train_df.DepTime <= 2400].copy()
y_train = train_df['dep_delayed_15min'].map({'Y': 1, 'N': 0}).values

Feature Engineering

Use the functions defined below to create additional features for the model. Run the cell to add the functions to your workspace.

In [12]:
def label_enc(df_column):
    df_column = LabelEncoder().fit_transform(df_column)
    return df_column

def make_harmonic_features_sin(value, period=2400):
    value *= 2 * np.pi / period 
    return np.sin(value)

def make_harmonic_features_cos(value, period=2400):
    value *= 2 * np.pi / period 
    return np.cos(value)

def feature_eng(df):
    df['flight'] = df['Origin']+df['Dest']
    df['Month'] = df.Month.map(lambda x: x.split('-')[-1]).astype('int32')
    df['DayofMonth'] = df.DayofMonth.map(lambda x: x.split('-')[-1]).astype('uint8')
    df['begin_of_month'] = (df['DayofMonth'] < 10).astype('uint8')
    df['middle_of_month'] = ((df['DayofMonth'] >= 10) & (df['DayofMonth'] < 20)).astype('uint8')
    df['end_of_month'] = (df['DayofMonth'] >= 20).astype('uint8')
    df['DayOfWeek'] = df.DayOfWeek.map(lambda x: x.split('-')[-1]).astype('uint8')
    df['hour'] = df.DepTime.map(lambda x: x/100).astype('int32')
    df['morning'] = df['hour'].map(lambda x: 1 if (x <= 11)& (x >= 7) else 0).astype('uint8')
    df['day'] = df['hour'].map(lambda x: 1 if (x >= 12) & (x <= 18) else 0).astype('uint8')
    df['evening'] = df['hour'].map(lambda x: 1 if (x >= 19) & (x <= 23) else 0).astype('uint8')
    df['night'] = df['hour'].map(lambda x: 1 if (x >= 0) & (x <= 6) else 0).astype('int32')
    df['winter'] = df['Month'].map(lambda x: x in [12, 1, 2]).astype('int32')
    df['spring'] = df['Month'].map(lambda x: x in [3, 4, 5]).astype('int32')
    df['summer'] = df['Month'].map(lambda x: x in [6, 7, 8]).astype('int32')
    df['autumn'] = df['Month'].map(lambda x: x in [9, 10, 11]).astype('int32')
    df['holiday'] = (df['DayOfWeek'] >= 5).astype(int) 
    df['weekday'] = (df['DayOfWeek'] < 5).astype(int)
    df['airport_dest_per_month'] = df.groupby(['Dest', 'Month'])['Dest'].transform('count')
    df['airport_origin_per_month'] = df.groupby(['Origin', 'Month'])['Origin'].transform('count')
    df['airport_dest_count'] = df.groupby(['Dest'])['Dest'].transform('count')
    df['airport_origin_count'] = df.groupby(['Origin'])['Origin'].transform('count')
    df['carrier_count'] = df.groupby(['UniqueCarrier'])['Dest'].transform('count')
    df['carrier_count_per_month'] = df.groupby(['UniqueCarrier', 'Month'])['Dest'].transform('count')
    df['deptime_cos'] = df['DepTime'].map(make_harmonic_features_cos)
    df['deptime_sin'] = df['DepTime'].map(make_harmonic_features_sin)
    df['flightUC'] = df['flight']+df['UniqueCarrier']
    df['DestUC'] = df['Dest']+df['UniqueCarrier']
    df['OriginUC'] = df['Origin']+df['UniqueCarrier']
    return df.drop('DepTime', axis=1)
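
As a quick illustrative check (not part of the original notebook), the harmonic sin/cos encoding wraps around midnight: departure times just before and just after midnight land close together on the circle, even though their raw 0-2400 values are far apart.

# Illustrative check: 23:50 and 00:10 map to nearby points on the circle,
# while their raw DepTime values differ by 2340.
print(make_harmonic_features_cos(2350), make_harmonic_features_cos(10))
print(make_harmonic_features_sin(2350), make_harmonic_features_sin(10))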

Concatenate the training and testing dataframes, then apply the feature engineering function to the combined dataframe.

In [13]:
full_df = pd.concat([train_df.drop('dep_delayed_15min', axis=1), test_df])
full_df = feature_eng(full_df)

Apply label encoding to the categorical string columns of the full dataframe.

In [14]:
for column in ['UniqueCarrier', 'Origin', 'Dest','flight',  'flightUC', 'DestUC', 'OriginUC']:
    full_df[column] = label_enc(full_df[column])

Split the new full dataframe into X_train and X_test.

In [15]:
X_train = full_df[:train_df.shape[0]]
X_test = full_df[train_df.shape[0]:]
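
As a quick sanity check (illustrative, not part of the original notebook), the training slice should have exactly as many rows as y_train has labels.

# The first train_df.shape[0] rows of full_df belong to the training set
print(X_train.shape, X_test.shape, len(y_train))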

Create a list of the categorical features.

In [16]:
categorical_features = ['Month',  'DayOfWeek', 'UniqueCarrier', 'Origin', 'Dest','flight',  'flightUC', 'DestUC', 'OriginUC']

Let's build a LightGBM model to test the Bayesian optimizer.

LightGBM is a gradient boosting framework that uses tree-based learning algorithms. It is designed to be distributed and efficient with the following advantages:

  • Faster training speed and higher efficiency.
  • Lower memory usage.
  • Better accuracy.
  • Support of parallel and GPU learning.
  • Capable of handling large-scale data.

First, we define the function we want to maximize: it computes the cross-validated AUC of a LightGBM model for a given set of hyperparameters.

Some parameters, such as num_leaves, max_depth, min_child_samples, and min_data_in_leaf, must be integers, so we cast them to int inside the function.

In [17]:
def lgb_eval(num_leaves,max_depth,lambda_l2,lambda_l1,min_child_samples, min_data_in_leaf):
    params = {
        "objective" : "binary",
        "metric" : "auc", 
        'is_unbalance': True,
        "num_leaves" : int(num_leaves),
        "max_depth" : int(max_depth),
        "lambda_l2" : lambda_l2,
        "lambda_l1" : lambda_l1,
        "num_threads" : 20,
        "min_child_samples" : int(min_child_samples),
        'min_data_in_leaf': int(min_data_in_leaf),
        "learning_rate" : 0.03,
        "subsample_freq" : 5,
        "bagging_seed" : 42,
        "verbosity" : -1
    }
    # Build the LightGBM dataset, flagging the categorical features
    lgtrain = lightgbm.Dataset(X_train, y_train, categorical_feature=categorical_features)
    # 3-fold stratified cross-validation, up to 1000 boosting rounds,
    # stopping early after 100 rounds without improvement
    cv_result = lightgbm.cv(params,
                            lgtrain,
                            1000,
                            early_stopping_rounds=100,
                            stratified=True,
                            nfold=3)
    # Return the mean AUC of the final boosting round
    return cv_result['auc-mean'][-1]

Apply the Bayesian optimizer to the function we created in the previous step to identify the best hyperparameters. We will run 10 iterations and set init_points = 2.

In [18]:
lgbBO = BayesianOptimization(lgb_eval, {'num_leaves': (25, 4000),
                                                'max_depth': (5, 63),
                                                'lambda_l2': (0.0, 0.05),
                                                'lambda_l1': (0.0, 0.05),
                                                'min_child_samples': (50, 10000),
                                                'min_data_in_leaf': (100, 2000)
                                                })

lgbBO.maximize(n_iter=10, init_points=2)
|   iter    |  target   | lambda_l1 | lambda_l2 | max_depth | min_ch... | min_da... | num_le... |
-------------------------------------------------------------------------------------------------
[LightGBM] [Warning] min_data_in_leaf is set=505, min_child_samples=6107 will be ignored. Current value: min_data_in_leaf=505
|  1        |  0.7223   |  0.03887  |  0.003152 |  14.69    |  6.107e+0 |  505.5    |  405.2    |
|  2        |  0.7231   |  0.004475 |  0.008803 |  57.32    |  7.356e+0 |  118.8    |  2.192e+0 |
|  3        |  0.7436   |  0.004355 |  0.03196  |  51.95    |  1.352e+0 |  1.841e+0 |  3.88e+03 |
|  4        |  0.7324   |  0.02473  |  0.02422  |  51.4     |  4.36e+03 |  994.2    |  1.328e+0 |
|  5        |  0.7432   |  0.02839  |  0.03516  |  18.7     |  64.02    |  1.853e+0 |  948.8    |
|  6        |  0.7223   |  0.01357  |  0.04566  |  12.81    |  123.0    |  101.3    |  3.973e+0 |
|  7        |  0.7434   |  0.04223  |  0.01168  |  10.35    |  6.855e+0 |  1.371e+0 |  1.294e+0 |
|  8        |  0.732    |  0.007544 |  0.001537 |  45.54    |  4.475e+0 |  1.012e+0 |  2.402e+0 |
|  9        |  0.7438   |  0.02456  |  0.02504  |  19.98    |  6.247e+0 |  1.708e+0 |  279.5    |
|  10       |  0.7435   |  0.03543  |  0.04737  |  61.25    |  1.717e+0 |  1.766e+0 |  3.487e+0 |
|  11       |  0.7434   |  0.03038  |  0.04502  |  52.94    |  6.804e+0 |  1.968e+0 |  743.0    |
|  12       |  0.7429   |  0.03706  |  0.00544  |  21.96    |  1.339e+0 |  1.891e+0 |  3.9e+03  |
=================================================================================================

Print the best result by using the '.max' attribute.

In [19]:
lgbBO.max
Out[19]:
{'target': 0.7438005519841256,
 'params': {'lambda_l1': 0.02455687654875783,
  'lambda_l2': 0.025035946161529106,
  'max_depth': 19.982497552680655,
  'min_child_samples': 6246.860342083871,
  'min_data_in_leaf': 1707.6593837504079,
  'num_leaves': 279.48617722724066}}
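
A possible next step (a sketch, not part of the original notebook): the optimizer reports every parameter as a float, so cast the integer-valued ones back to int before reusing them to train a final LightGBM model.

# Recover the best parameters found and restore the integer-valued ones
best_params = lgbBO.max['params'].copy()
for key in ['num_leaves', 'max_depth', 'min_child_samples', 'min_data_in_leaf']:
    best_params[key] = int(best_params[key])
print(best_params)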

Review the result at each step by using the '.res' attribute; '.res[0]' returns the first step.

In [20]:
lgbBO.res[0]
Out[20]:
{'target': 0.7222787313087559,
 'params': {'lambda_l1': 0.0388736633361515,
  'lambda_l2': 0.0031520289777631494,
  'max_depth': 14.690049908083786,
  'min_child_samples': 6107.492174889771,
  'min_data_in_leaf': 505.4889513707348,
  'num_leaves': 405.20004771848215}}
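
To scan the whole optimization history rather than a single step (a small illustrative loop, not part of the original notebook), iterate over '.res', which holds one dictionary per step:

# List the cross-validated AUC reached at each optimization step
for i, step in enumerate(lgbBO.res):
    print(i, round(step['target'], 4))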