Classification with Python

In this notebook we try to practice all the classification algorithms that we learned in this course.

We load a dataset using Pandas library, and apply the following algorithms, and find the best one for this specific dataset by accuracy evaluation methods.

Lets first load required libraries:

In [1]:
import itertools
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter
import pandas as pd
import numpy as np
import matplotlib.ticker as ticker
from sklearn import preprocessing
%matplotlib inline
Waiting for a Spark session to start...
Spark Initialization Done! ApplicationId = app-20190427050906-0000
KERNEL_ID = 6b30441f-08d3-4e8e-8c11-14e702516a74

About dataset

This dataset is about past loans. The Loan_train.csv data set includes details of 346 customers whose loan are already paid off or defaulted. It includes following fields:

Field Description
Loan_status Whether a loan is paid off on in collection
Principal Basic principal loan amount at the
Terms Origination terms which can be weekly (7 days), biweekly, and monthly payoff schedule
Effective_date When the loan got originated and took effects
Due_date Since it’s one-time payoff schedule, each loan has one single due date
Age Age of applicant
Education Education of applicant
Gender The gender of applicant

Lets download the dataset

In [2]:
!wget -O loan_train.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_train.csv
--2019-04-27 05:09:09--  https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_train.csv
Resolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 67.228.254.193
Connecting to s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)|67.228.254.193|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 23101 (23K) [text/csv]
Saving to: 'loan_train.csv'

loan_train.csv      100%[===================>]  22.56K  --.-KB/s    in 0.09s   

2019-04-27 05:09:10 (260 KB/s) - 'loan_train.csv' saved [23101/23101]

Load Data From CSV File

In [3]:
df = pd.read_csv('loan_train.csv')
df.head()
Out[3]:
Unnamed: 0 Unnamed: 0.1 loan_status Principal terms effective_date due_date age education Gender
0 0 0 PAIDOFF 1000 30 9/8/2016 10/7/2016 45 High School or Below male
1 2 2 PAIDOFF 1000 30 9/8/2016 10/7/2016 33 Bechalor female
2 3 3 PAIDOFF 1000 15 9/8/2016 9/22/2016 27 college male
3 4 4 PAIDOFF 1000 30 9/9/2016 10/8/2016 28 college female
4 6 6 PAIDOFF 1000 30 9/9/2016 10/8/2016 29 college male
In [4]:
df.shape
Out[4]:
(346, 10)

Convert to date time object

In [5]:
df['due_date'] = pd.to_datetime(df['due_date'])
df['effective_date'] = pd.to_datetime(df['effective_date'])
df.head()
Out[5]:
Unnamed: 0 Unnamed: 0.1 loan_status Principal terms effective_date due_date age education Gender
0 0 0 PAIDOFF 1000 30 2016-09-08 2016-10-07 45 High School or Below male
1 2 2 PAIDOFF 1000 30 2016-09-08 2016-10-07 33 Bechalor female
2 3 3 PAIDOFF 1000 15 2016-09-08 2016-09-22 27 college male
3 4 4 PAIDOFF 1000 30 2016-09-09 2016-10-08 28 college female
4 6 6 PAIDOFF 1000 30 2016-09-09 2016-10-08 29 college male

Data visualization and pre-processing

Let’s see how many of each class is in our data set

In [6]:
df['loan_status'].value_counts()
Out[6]:
PAIDOFF       260
COLLECTION     86
Name: loan_status, dtype: int64

260 people have paid off the loan on time while 86 have gone into collection

Lets plot some columns to underestand data better:

In [7]:
# notice: installing seaborn might takes a few minutes
!conda install -c anaconda seaborn -y
Solving environment: done


==> WARNING: A newer version of conda exists. <==
  current version: 4.5.11
  latest version: 4.6.14

Please update conda by running

    $ conda update -n base -c defaults conda



## Package Plan ##

  environment location: /opt/ibm/conda/miniconda3

  added / updated specs: 
    - seaborn


The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    openssl-1.0.2r             |       h7b6447c_0         3.2 MB  anaconda
    libgcc-ng-8.2.0            |       hdf63c60_1         7.6 MB  anaconda
    certifi-2018.8.24          |           py35_1         139 KB  anaconda
    conda-4.5.11               |           py35_0         1.0 MB  anaconda
    seaborn-0.9.0              |           py35_0         378 KB  anaconda
    ca-certificates-2019.1.23  |                0         126 KB  anaconda
    ------------------------------------------------------------
                                           Total:        12.4 MB

The following packages will be UPDATED:

    ca-certificates: 2017.08.26-h1d4fec5_0 --> 2019.1.23-0       anaconda
    certifi:         2018.1.18-py35_0      --> 2018.8.24-py35_1  anaconda
    conda:           4.5.11-py35_0         --> 4.5.11-py35_0     anaconda
    libgcc-ng:       7.2.0-h7cc24e2_2      --> 8.2.0-hdf63c60_1  anaconda
    openssl:         1.0.2o-h20670df_0     --> 1.0.2r-h7b6447c_0 anaconda
    seaborn:         0.8.0-py35h15a2772_0  --> 0.9.0-py35_0      anaconda


Downloading and Extracting Packages
openssl-1.0.2r       | 3.2 MB    | ##################################### | 100% 
libgcc-ng-8.2.0      | 7.6 MB    | ##################################### | 100% 
certifi-2018.8.24    | 139 KB    | ##################################### | 100% 
conda-4.5.11         | 1.0 MB    | ##################################### | 100% 
seaborn-0.9.0        | 378 KB    | ##################################### | 100% 
ca-certificates-2019 | 126 KB    | ##################################### | 100% 
Preparing transaction: failed

# >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<<

    Traceback (most recent call last):
      File "/opt/ibm/conda/miniconda3/lib/python3.5/site-packages/conda/exceptions.py", line 819, in __call__
        return func(*args, **kwargs)
      File "/opt/ibm/conda/miniconda3/lib/python3.5/site-packages/conda/cli/main.py", line 78, in _main
        exit_code = do_call(args, p)
      File "/opt/ibm/conda/miniconda3/lib/python3.5/site-packages/conda/cli/conda_argparse.py", line 77, in do_call
        exit_code = getattr(module, func_name)(args, parser)
      File "/opt/ibm/conda/miniconda3/lib/python3.5/site-packages/conda/cli/main_install.py", line 11, in execute
        install(args, parser, 'install')
      File "/opt/ibm/conda/miniconda3/lib/python3.5/site-packages/conda/cli/install.py", line 253, in install
        handle_txn(unlink_link_transaction, prefix, args, newenv)
      File "/opt/ibm/conda/miniconda3/lib/python3.5/site-packages/conda/cli/install.py", line 282, in handle_txn
        unlink_link_transaction.execute()
      File "/opt/ibm/conda/miniconda3/lib/python3.5/site-packages/conda/core/link.py", line 223, in execute
        self.verify()
      File "/opt/ibm/conda/miniconda3/lib/python3.5/site-packages/conda/common/io.py", line 46, in decorated
        return f(*args, **kwds)
      File "/opt/ibm/conda/miniconda3/lib/python3.5/site-packages/conda/core/link.py", line 200, in verify
        self.prepare()
      File "/opt/ibm/conda/miniconda3/lib/python3.5/site-packages/conda/core/link.py", line 192, in prepare
        stp.remove_specs, stp.update_specs)
      File "/opt/ibm/conda/miniconda3/lib/python3.5/site-packages/conda/core/link.py", line 282, in _prepare
        mkdir_p(transaction_context['temp_dir'])
      File "/opt/ibm/conda/miniconda3/lib/python3.5/site-packages/conda/gateways/disk/__init__.py", line 60, in mkdir_p
        makedirs(path)
      File "/opt/ibm/conda/miniconda3/lib/python3.5/os.py", line 241, in makedirs
        mkdir(name, mode)
    PermissionError: [Errno 13] Permission denied: '/opt/ibm/conda/miniconda3/.condatmp'

`$ /opt/ibm/conda/miniconda3/bin/conda install -c anaconda seaborn -y`

  environment variables:
                 CIO_TEST=<not set>
          CONDA_PKGS_DIRS=/home/spark/shared/user-libs/python3
               CONDA_ROOT=/opt/ibm/conda/miniconda3
                     PATH=/opt/ibm/conda/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin
                          :/usr/bin:/sbin:/bin
      PIXIEDUST_LIBS_PATH=/home/spark/shared/user-libs/spark2/
               PYTHONPATH=/home/spark/shared/user-libs/python3:/opt/ibm/third-
                          party/libs/python3:/opt/ibm/image-libs/python3:/opt/ibm/spark/python:/
                          opt/ibm/spark/python/lib/py4j-0.10.6-src.zip
            PYTHONSTARTUP=/opt/ibm/kernelScript/python/shell.py
       REQUESTS_CA_BUNDLE=<not set>
     SPARK_DIST_CLASSPATH=/opt/ibm/connectors/idax/*:/opt/ibm/connectors/cloudant/*:/opt/ibm/con
                          nectors/db2/*:/opt/ibm/connectors/others-db-drivers/*:/opt/ibm/connect
                          ors/stocator/*:/opt/ibm/connectors/s3/*:/opt/ibm/image-
                          libs/common/*:/opt/ibm/image-libs/spark2/*:/opt/ibm/third-
                          party/libs/batch/*
            SSL_CERT_FILE=<not set>

     active environment : None
       user config file : /home/spark/shared/.condarc
 populated config files : 
          conda version : 4.5.11
    conda-build version : not installed
         python version : 3.5.4.final.0
       base environment : /opt/ibm/conda/miniconda3  (read only)
           channel URLs : https://conda.anaconda.org/anaconda/linux-64
                          https://conda.anaconda.org/anaconda/noarch
                          https://repo.anaconda.com/pkgs/main/linux-64
                          https://repo.anaconda.com/pkgs/main/noarch
                          https://repo.anaconda.com/pkgs/free/linux-64
                          https://repo.anaconda.com/pkgs/free/noarch
                          https://repo.anaconda.com/pkgs/r/linux-64
                          https://repo.anaconda.com/pkgs/r/noarch
                          https://repo.anaconda.com/pkgs/pro/linux-64
                          https://repo.anaconda.com/pkgs/pro/noarch
          package cache : /home/spark/shared/user-libs/python3
       envs directories : /home/spark/shared/.conda/envs
                          /opt/ibm/conda/miniconda3/envs
               platform : linux-64
             user-agent : conda/4.5.11 requests/2.18.4 CPython/3.5.4 Linux/4.4.0-145-generic ubuntu/18.04 glibc/2.27
                UID:GID : 1000:2000
             netrc file : None
           offline mode : False


An unexpected error has occurred. Conda has prepared the above report.
Upload successful.
In [8]:
import seaborn as sns

bins = np.linspace(df.Principal.min(), df.Principal.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'Principal', bins=bins, ec="k")

g.axes[-1].legend()
plt.show()
In [9]:
bins = np.linspace(df.age.min(), df.age.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'age', bins=bins, ec="k")

g.axes[-1].legend()
plt.show()

Pre-processing: Feature selection/extraction

Lets look at the day of the week people get the loan

In [10]:
df['dayofweek'] = df['effective_date'].dt.dayofweek
bins = np.linspace(df.dayofweek.min(), df.dayofweek.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'dayofweek', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()

We see that people who get the loan at the end of the week dont pay it off, so lets use Feature binarization to set a threshold values less then day 4

In [11]:
df['weekend'] = df['dayofweek'].apply(lambda x: 1 if (x>3)  else 0)
df.head()
Out[11]:
Unnamed: 0 Unnamed: 0.1 loan_status Principal terms effective_date due_date age education Gender dayofweek weekend
0 0 0 PAIDOFF 1000 30 2016-09-08 2016-10-07 45 High School or Below male 3 0
1 2 2 PAIDOFF 1000 30 2016-09-08 2016-10-07 33 Bechalor female 3 0
2 3 3 PAIDOFF 1000 15 2016-09-08 2016-09-22 27 college male 3 0
3 4 4 PAIDOFF 1000 30 2016-09-09 2016-10-08 28 college female 4 1
4 6 6 PAIDOFF 1000 30 2016-09-09 2016-10-08 29 college male 4 1

Convert Categorical features to numerical values

Lets look at gender:

In [12]:
df.groupby(['Gender'])['loan_status'].value_counts(normalize=True)
Out[12]:
Gender  loan_status
female  PAIDOFF        0.865385
        COLLECTION     0.134615
male    PAIDOFF        0.731293
        COLLECTION     0.268707
Name: loan_status, dtype: float64

86 % of female pay their loans while only 73 % of males pay their loan

Lets convert male to 0 and female to 1:

In [13]:
df['Gender'].replace(to_replace=['male','female'], value=[0,1],inplace=True)
df.head()
Out[13]:
Unnamed: 0 Unnamed: 0.1 loan_status Principal terms effective_date due_date age education Gender dayofweek weekend
0 0 0 PAIDOFF 1000 30 2016-09-08 2016-10-07 45 High School or Below 0 3 0
1 2 2 PAIDOFF 1000 30 2016-09-08 2016-10-07 33 Bechalor 1 3 0
2 3 3 PAIDOFF 1000 15 2016-09-08 2016-09-22 27 college 0 3 0
3 4 4 PAIDOFF 1000 30 2016-09-09 2016-10-08 28 college 1 4 1
4 6 6 PAIDOFF 1000 30 2016-09-09 2016-10-08 29 college 0 4 1

One Hot Encoding

How about education?

In [14]:
df.groupby(['education'])['loan_status'].value_counts(normalize=True)
Out[14]:
education             loan_status
Bechalor              PAIDOFF        0.750000
                      COLLECTION     0.250000
High School or Below  PAIDOFF        0.741722
                      COLLECTION     0.258278
Master or Above       COLLECTION     0.500000
                      PAIDOFF        0.500000
college               PAIDOFF        0.765101
                      COLLECTION     0.234899
Name: loan_status, dtype: float64

Feature befor One Hot Encoding

In [15]:
df[['Principal','terms','age','Gender','education']].head()
Out[15]:
Principal terms age Gender education
0 1000 30 45 0 High School or Below
1 1000 30 33 1 Bechalor
2 1000 15 27 0 college
3 1000 30 28 1 college
4 1000 30 29 0 college

Use one hot encoding technique to conver categorical varables to binary variables and append them to the feature Data Frame

In [17]:
Feature = df[['Principal','terms','age','Gender','weekend']]
Feature = pd.concat([Feature,pd.get_dummies(df['education'])], axis=1)
Feature.drop(['Master or Above'], axis = 1,inplace=True)
Feature.head()
Out[17]:
Principal terms age Gender weekend Bechalor High School or Below college
0 1000 30 45 0 0 0 1 0
1 1000 30 33 1 0 1 0 0
2 1000 15 27 0 0 0 0 1
3 1000 30 28 1 1 0 0 1
4 1000 30 29 0 1 0 0 1

Feature selection

Lets defind feature sets, X:

In [18]:
X = Feature
X[0:5]
Out[18]:
Principal terms age Gender weekend Bechalor High School or Below college
0 1000 30 45 0 0 0 1 0
1 1000 30 33 1 0 1 0 0
2 1000 15 27 0 0 0 0 1
3 1000 30 28 1 1 0 0 1
4 1000 30 29 0 1 0 0 1

What are our lables?

In [19]:
y = df['loan_status'].values
y[0:5]
Out[19]:
array(['PAIDOFF', 'PAIDOFF', 'PAIDOFF', 'PAIDOFF', 'PAIDOFF'], dtype=object)

Normalize Data

Data Standardization give data zero mean and unit variance (technically should be done after train test split )

In [20]:
X= preprocessing.StandardScaler().fit(X).transform(X)
X[0:5]
Out[20]:
array([[ 0.51578458,  0.92071769,  2.33152555, -0.42056004, -1.20577805,
        -0.38170062,  1.13639374, -0.86968108],
       [ 0.51578458,  0.92071769,  0.34170148,  2.37778177, -1.20577805,
         2.61985426, -0.87997669, -0.86968108],
       [ 0.51578458, -0.95911111, -0.65321055, -0.42056004, -1.20577805,
        -0.38170062, -0.87997669,  1.14984679],
       [ 0.51578458,  0.92071769, -0.48739188,  2.37778177,  0.82934003,
        -0.38170062, -0.87997669,  1.14984679],
       [ 0.51578458,  0.92071769, -0.3215732 , -0.42056004,  0.82934003,
        -0.38170062, -0.87997669,  1.14984679]])

Classification

Now, it is your turn, use the training set to build an accurate model. Then use the test set to report the accuracy of the model You should use the following algorithm:

  • K Nearest Neighbor(KNN)
  • Decision Tree
  • Support Vector Machine
  • Logistic Regression

Notice:

  • You can go above and change the pre-processing, feature selection, feature-extraction, and so on, to make a better model.
  • You should use either scikit-learn, Scipy or Numpy libraries for developing the classification algorithms.
  • You should include the code of the algorithm in the following cells.

K Nearest Neighbor(KNN)

Notice: You should find the best k to build the model with the best accuracy.
warning: You should not use the loan_test.csv for finding the best k, however, you can split your train_loan.csv into train and test to find the best k.

In [21]:
# split train_loan
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=4 )
print ('Train set:', X_train.shape,  y_train.shape)
print ('Test set:', X_test.shape,  y_test.shape)
Train set: (276, 8) (276,)
Test set: (70, 8) (70,)
In [22]:
# import library
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics

# try with 10 different values of k to find the best one
Ks = 10
mean_acc = np.zeros((Ks-1))
std_acc = np.zeros((Ks-1))
ConfustionMx = [];
for n in range(1,Ks):
    #Train Model and Predict  
    neigh = KNeighborsClassifier(n_neighbors = n).fit(X_train,y_train)
    yhat=neigh.predict(X_test)
    mean_acc[n-1] = metrics.accuracy_score(y_test, yhat)
    std_acc[n-1]=np.std(yhat==y_test)/np.sqrt(yhat.shape[0])

# accuracy
print(mean_acc)

# Plot model accuracy for Different number of Neighbors
plt.plot(range(1,Ks),mean_acc,'g')
plt.fill_between(range(1,Ks),mean_acc - 1 * std_acc,mean_acc + 1 * std_acc, alpha=0.10)
plt.legend(('Accuracy ', '+/- 3xstd'))
plt.ylabel('Accuracy ')
plt.xlabel('Number of Nabors (K)')
plt.tight_layout()
plt.show()

# result
print( "The best accuracy was with", mean_acc.max(), "with k=", mean_acc.argmax()+1 )
[ 0.67142857  0.65714286  0.71428571  0.68571429  0.75714286  0.71428571
  0.78571429  0.75714286  0.75714286]
The best accuracy was with 0.785714285714 with k= 7
In [23]:
# train model with k=7
neigh = KNeighborsClassifier(n_neighbors = 7).fit(X_train,y_train)
neigh
Out[23]:
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
           metric_params=None, n_jobs=1, n_neighbors=7, p=2,
           weights='uniform')

Decision Tree

In [24]:
# import library
from sklearn.tree import DecisionTreeClassifier
# create an instance of the DecisionTreeClassifier called loanTree
loanTree = DecisionTreeClassifier(criterion="entropy", max_depth = 4)
# training
loanTree.fit(X_train,y_train)
loanTree
Out[24]:
DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=4,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=None,
            splitter='best')

Support Vector Machine

In [25]:
# import library
from sklearn import svm
# training
clf = svm.SVC()
clf.fit(X_train, y_train)
Out[25]:
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
  decision_function_shape='ovr', degree=3, gamma='auto', kernel='rbf',
  max_iter=-1, probability=False, random_state=None, shrinking=True,
  tol=0.001, verbose=False)

Logistic Regression

In [26]:
# import library
from sklearn.linear_model import LogisticRegression
# training
LR = LogisticRegression(C=0.01).fit(X_train,y_train)
LR
Out[26]:
LogisticRegression(C=0.01, class_weight=None, dual=False, fit_intercept=True,
          intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
          penalty='l2', random_state=None, solver='liblinear', tol=0.0001,
          verbose=0, warm_start=False)

Model Evaluation using Test set

In [27]:
from sklearn.metrics import jaccard_similarity_score
from sklearn.metrics import f1_score
from sklearn.metrics import log_loss

First, download and load the test set:

In [28]:
!wget -O loan_test.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_test.csv
--2019-04-27 05:15:35--  https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_test.csv
Resolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 67.228.254.193
Connecting to s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)|67.228.254.193|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3642 (3.6K) [text/csv]
Saving to: 'loan_test.csv'

loan_test.csv       100%[===================>]   3.56K  --.-KB/s    in 0s      

2019-04-27 05:15:36 (648 MB/s) - 'loan_test.csv' saved [3642/3642]

Load Test set for evaluation

In [29]:
test_df = pd.read_csv('loan_test.csv')
test_df.head()
Out[29]:
Unnamed: 0 Unnamed: 0.1 loan_status Principal terms effective_date due_date age education Gender
0 1 1 PAIDOFF 1000 30 9/8/2016 10/7/2016 50 Bechalor female
1 5 5 PAIDOFF 300 7 9/9/2016 9/15/2016 35 Master or Above male
2 21 21 PAIDOFF 1000 30 9/10/2016 10/9/2016 43 High School or Below female
3 24 24 PAIDOFF 1000 30 9/10/2016 10/9/2016 26 college male
4 35 35 PAIDOFF 800 15 9/11/2016 9/25/2016 29 Bechalor male
In [30]:
# Pre-processing loan_test
test_df['due_date'] = pd.to_datetime(test_df['due_date'])
test_df['effective_date'] = pd.to_datetime(test_df['effective_date'])
test_df['dayofweek'] = df['effective_date'].dt.dayofweek
test_df['weekend'] = test_df['dayofweek'].apply(lambda x: 1 if (x>3)  else 0)
test_df['Gender'].replace(to_replace=['male','female'], value=[0,1],inplace=True)
Feature_test = test_df[['Principal','terms','age','Gender','weekend']]
Feature_test = pd.concat([Feature_test,pd.get_dummies(test_df['education'])], axis=1)
Feature_test.drop(['Master or Above'], axis = 1,inplace=True)
test_X = Feature_test
test_X = preprocessing.StandardScaler().fit(test_X).transform(test_X)
test_X[0:5]
Out[30]:
array([[ 0.49362588,  0.92844966,  3.05981865,  1.97714211, -4.12310563,
         2.39791576, -0.79772404, -0.86135677],
       [-3.56269116, -1.70427745,  0.53336288, -0.50578054, -4.12310563,
        -0.41702883, -0.79772404, -0.86135677],
       [ 0.49362588,  0.92844966,  1.88080596,  1.97714211, -4.12310563,
        -0.41702883,  1.25356634, -0.86135677],
       [ 0.49362588,  0.92844966, -0.98251057, -0.50578054,  0.24253563,
        -0.41702883, -0.79772404,  1.16095912],
       [-0.66532184, -0.78854628, -0.47721942, -0.50578054,  0.24253563,
         2.39791576, -0.79772404, -0.86135677]])
In [31]:
# Pre-processing loan_test (cont)
test_y = test_df['loan_status'].values
test_y[0:5]
Out[31]:
array(['PAIDOFF', 'PAIDOFF', 'PAIDOFF', 'PAIDOFF', 'PAIDOFF'], dtype=object)

KNN

In [33]:
# predicted y
yhat_knn = neigh.predict(test_X)

# jaccard
jaccard_knn = jaccard_similarity_score(test_y, yhat_knn)
print("KNN Jaccard index: ", jaccard_knn)

# f1_score
f1_score_knn = f1_score(test_y, yhat_knn, average='weighted')
print("KNN F1-score: ", f1_score_knn)
KNN Jaccard index:  0.685185185185
KNN F1-score:  0.645381013197

Decision tree

In [34]:
# predicted y
yhat_dt = loanTree.predict(test_X)

# jaccard
jaccard_dt = jaccard_similarity_score(test_y, yhat_dt)
print("DT Jaccard index: ", jaccard_dt)

# f1_score
f1_score_dt = f1_score(test_y, yhat_dt, average='weighted')
print("DT F1-score: ", f1_score_dt)
DT Jaccard index:  0.518518518519
DT F1-score:  0.538580246914

SVM

In [35]:
# predicted y
yhat_svm = clf.predict(test_X)

# jaccard
jaccard_svm = jaccard_similarity_score(test_y, yhat_svm)
print("SVM Jaccard index: ", jaccard_svm)

# f1_score
f1_score_svm = f1_score(test_y, yhat_svm, average='weighted')
print("SVM F1-score: ", f1_score_svm)
SVM Jaccard index:  0.814814814815
SVM F1-score:  0.786195286195

Logistic regression

In [36]:
# predicted y
yhat_lg = LR.predict(test_X)
yhat_lg_prob = LR.predict_proba(test_X)

# jaccard
jaccard_lg = jaccard_similarity_score(test_y, yhat_lg)
print("LR Jaccard index: ", jaccard_lg)

# f1_score
f1_score_lg = f1_score(test_y, yhat_lg, average='weighted')
print("LR F1-score: ", f1_score_lg)

# logloss
logloss_lg = log_loss(test_y, yhat_lg_prob)
print("LR log loss: ", logloss_lg)
LR Jaccard index:  0.740740740741
LR F1-score:  0.630417651694
LR log loss:  0.603787127219
/opt/ibm/conda/miniconda3/lib/python3.5/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
  'precision', 'predicted', average, warn_for)

Report

You should be able to report the accuracy of the built model using different evaluation metrics:

Algorithm Jaccard F1-score LogLoss
KNN 0.6852 0.64538 NA
Decision Tree 0.5185 0.53858 NA
SVM 0.8148 0.7862 NA
LogisticRegression 0.740741 0.630418 0.60379

Want to learn more?

IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: SPSS Modeler

Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at Watson Studio

Thanks for completing this lesson!

Author: Saeed Aghabozorgi

Saeed Aghabozorgi, PhD is a Data Scientist in IBM with a track record of developing enterprise level applications that substantially increases clients’ ability to turn data into actionable knowledge. He is a researcher in data mining field and expert in developing advanced analytic methods like machine learning and statistical modelling on large datasets.


Copyright © 2018 Cognitive Class. This notebook and its source code are released under the terms of the MIT License.