Joblib is a set of tools that provides lightweight pipelining in Python, including utilities for efficiently saving and loading Python objects.
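As a quick illustration of those persistence utilities, here is a minimal, hypothetical sketch of saving and reloading a plain Python object with Joblib (the dictionary and filename are illustrative only, not part of the recipe below):
# minimal sketch: persist an arbitrary Python object with Joblib
from joblib import dump, load
import numpy as np

data = {'weights': np.arange(5), 'label': 'example'}
dump(data, 'example_object.joblib')        # serialize to disk
restored = load('example_object.joblib')   # deserialize
print(restored['label'])                   # example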
This recipe includes the following topics:
- Load the classification dataset (Pima Indians Diabetes) from GitHub
- Split columns into feature columns (X) and the target column (Y)
- Split data into train and test subsets using train_test_split
- Instantiate the classification algorithm: LogisticRegression
- Call fit() to train the model on the training dataset
- Save model to disk using Joblib: dump
- Load model from disk using Joblib: load
- Evaluate the model by calling score() on the unseen test dataset
# import modules
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
# joblib is now a standalone package; sklearn.externals.joblib was removed in recent scikit-learn releases
from joblib import dump
from joblib import load
# read data file from github
# dataframe: pimaDf
gitFileURL = 'https://raw.githubusercontent.com/andrewgurung/data-repository/master/pima-indians-diabetes.data.csv'
cols = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
pimaDf = pd.read_csv(gitFileURL, names = cols)
# convert into numpy array for scikit-learn
pimaArr = pimaDf.values
# Let's split columns into the usual feature columns(X) and target column(Y)
# Y represents the target 'class' column whose value is either '0' or '1'
X = pimaArr[:, 0:8]
Y = pimaArr[:, 8]
# set test size to 33%
test_size = 0.33
# set seed to create a reproducible set of random data
seed = 7
# split data into train and test subset
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=test_size, random_state=seed)
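# note: for classification problems, train_test_split also accepts a stratify argument
# (e.g. stratify=Y) to keep class proportions similar in both splits; it is not used here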
# instantiate the classification algorithm: LogisticRegression
# the liblinear solver converges on this small, unscaled dataset; the default solver may need more iterations or feature scaling
model = LogisticRegression(solver='liblinear')
# call fit() to train the model
model.fit(X_train, Y_train)
# save model to disk using Joblib: dump
filename = 'trained_model_2.sav'
dump(model, filename)
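# optional: joblib's dump() also takes a compress argument, which shrinks the saved file at the cost of some CPU time
# a variant not run in this recipe: dump(model, 'trained_model_2_compressed.sav', compress=3)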
# in a different notebook
# load the saved model from disk using Joblib: load
loaded_model = load(filename)
# evaluate the model on unseen data
accuracy = loaded_model.score(X_test, Y_test)
# display accuracy on the unseen test set (score() returns a fraction between 0 and 1)
print("Accuracy: %.3f" % accuracy)
Accuracy: 0.756
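The reloaded estimator behaves exactly like the one that was trained, so it can also be used for prediction. A short sketch reusing the variables above (the slice of X_test simply stands in for genuinely new observations):
# predict class labels with the reloaded model
# X_test[:5] is just a stand-in for new, unseen rows
predictions = loaded_model.predict(X_test[:5])
print(predictions)   # predicted class labels (0 or 1)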