Logistic Regression

Author

Ken Pu

1 A dataset

We will work on the iris dataset.

  • Three species of iris: Setosa, Versicolour, and Virginica.
  • Four measurements of the sepal and petal dimensions of 150 samples.
from sklearn import datasets
import pandas as pd

iris = datasets.load_iris()
dataset = pd.DataFrame(
  iris['data'],
  columns=['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
)
dataset['species'] = iris['target']
dataset = dataset.sample(frac=1)
dataset.head()
sepal_length sepal_width petal_length petal_width species
128 6.4 2.8 5.6 2.1 2
50 7.0 3.2 4.7 1.4 1
94 5.6 2.7 4.2 1.3 1
95 5.7 3.0 4.2 1.2 1
67 5.8 2.7 4.1 1.0 1

2 A simplified classification problem

For the species 0 and 1, can we tell them apart based on the sepal dimensions?

This is a binary classification problem based on two attributes.

df = dataset[(dataset.species == 0) | (dataset.species == 1)]
x_df = df[['sepal_length', 'sepal_width']]
y_df = df['species']
import matplotlib.pyplot as pl
c = ['blue' if s == 0 else 'red' for s in y_df]
pl.scatter(x_df['sepal_length'], x_df['sepal_width'], c=c);

3 The model of binary classification using logistic regression

The training data is given as:

  • Input: \(\{x_i\}\) where each \(x_i\in\mathbb{R}^n\).
  • Output: \(\{y_i\}\) where each \(y_i\in\{0, 1\}\).

The model is given as:

\[ p_i = f(x_i|\theta) \]

where \(p_i\) is the probability that \(y_i=1\).

The logistic model

Let \(w\in\mathbb{R}^n\) be a vector and \(b\in\mathbb{R}\) a scalar that together define the hyperplane \(P(w, b)\) in \(\mathbb{R}^n\), where:

  • \(w\) is the normal vector.
  • \(b\) is the offset.

\[ P(w,b) = \{x: w^Tx+b = 0\} \]

The displacement of \(x_i\) from the hyperplane \(P(w,b)\) is given by:

\[ \delta_i = w^Tx_i + b \]

Furthermore,

\[ \begin{eqnarray} \mathrm{if}\ \delta_i < 0 &\implies& y_i = 0 \\ \mathrm{if}\ \delta_i > 0 &\implies& y_i = 1 \end{eqnarray} \]

The displacement \(\delta_i\) is called the logit of \(x_i\). The logit can be any real number: \(\delta_i\in(-\infty, \infty)\).

So, to convert the logit into a probability, we use the sigmoid function:

\[ \mathrm{sigmoid}(u) = \frac{1}{1 + e^{-u}} \]
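As a quick numeric check of the decision rule above, the sign of the logit determines which side of \(0.5\) the probability falls on (a minimal sketch, assuming only that torch is installed):

import torch

# negative logits map below 0.5, positive logits above 0.5
logits = torch.tensor([-5.0, -1.0, 0.0, 1.0, 5.0])
print(torch.sigmoid(logits))
# approximately: 0.0067, 0.2689, 0.5000, 0.7311, 0.9933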

Finally, putting everything together, we have:

\[ f(x_i) = \mathrm{sigmoid}(w^Tx_i + b) \]

import torch
import torch.nn as nn
class LogisticRegression(nn.Module):
    def __init__(self, features=2):
        super().__init__()
        self.w = nn.Parameter(torch.randn(size=(features,)))
        self.b = nn.Parameter(torch.randn(size=()))
        self.activation = nn.Sigmoid()
    def forward(self, x):
        return self.activation(x @ self.w + self.b)
model = LogisticRegression()
x_input = torch.tensor(x_df.values, dtype=torch.float32)
y_true = torch.tensor(y_df, dtype=torch.float32)
model(x_input[:5])
tensor([0.9970, 0.9936, 0.9951, 0.9939, 0.9957], grad_fn=<SigmoidBackward0>)

4 Evaluating model performance with binary cross-entropy

We have two sets of outputs:

  • True output: \(y_i\in\{0, 1\}\)
  • Predicted probability: \(p_i\in[0, 1]\).
y_i p_i
0 1.0 0.997021
1 1.0 0.993624
2 1.0 0.995098
3 1.0 0.993948
4 0.0 0.995708

4.1 The binary cross-entropy:

\[ L(p, y) = -y\log(p)-(1-y)\log(1-p) \]

from torch.nn.functional import binary_cross_entropy


with torch.no_grad():
    p = model(x_input)
    L = binary_cross_entropy(p, y_true, reduction='none').numpy()
    df = pd.DataFrame({
        'y_i': y_true,
        'p_i': p,
        'L_i': L,
    })
df.head()
y_i p_i L_i
0 1.0 0.997021 0.002984
1 1.0 0.993624 0.006396
2 1.0 0.995098 0.004914
3 1.0 0.993948 0.006070
4 0.0 0.995708 5.451027
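As a sanity check, the per-sample loss can also be computed directly from the formula above. This sketch reuses model, x_input, y_true, and binary_cross_entropy from the previous cells, and should agree with the library values up to floating-point error:

with torch.no_grad():
    p = model(x_input)
    # binary cross-entropy written out from the formula
    L_manual = -(y_true * torch.log(p) + (1 - y_true) * torch.log(1 - p))
    L_library = binary_cross_entropy(p, y_true, reduction='none')
    print(torch.allclose(L_manual, L_library))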

5 Train the model

model = LogisticRegression()
loss = binary_cross_entropy
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
epochs = 100
for epoch in range(epochs):
    optimizer.zero_grad()
    L = loss(model(x_input), y_true)
    L.backward()
    optimizer.step()
    if epoch % (epochs // 10) == 0:
        # L carries grad history, so detach before converting to numpy
        print(L.detach().numpy())
3.1235023
0.503605
0.47179082
0.44397813
0.4195141
0.3978672
0.37860397
0.36136955
0.3458722
0.33187073
model.w, model.b
(Parameter containing:
 tensor([ 0.7795, -2.3339], requires_grad=True),
 Parameter containing:
 tensor(3.0383, requires_grad=True))
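Before plotting the boundary, we can sanity-check the trained model by thresholding the predicted probabilities at 0.5 and measuring the training accuracy (a small sketch reusing model, x_input, and y_true from above):

with torch.no_grad():
    p = model(x_input)
    # predict class 1 whenever the probability exceeds 0.5
    y_pred = (p > 0.5).float()
    accuracy = (y_pred == y_true).float().mean()
    print(accuracy)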

6 Plot the classification boundary

7 A bit of linear algebra on plotting the surface

Let’s go through the linear algebra that computes the classification boundary. Since the classification boundary is in \(\mathbb{R}^2\), the boundary is a line.

Logistic regression models the boundary line by the normal vector \(w\) and offset \(b\), and the line is given by: \[ L = \{x\in\mathbb{R}^2: w^Tx + b = 0\} \]

The nice thing is that this equation for \(L\) generalizes to higher dimensions.

To plot \(L\), we need to transform it into another form:

\[ L = \{x_0 + vt: t\in\mathbb{R}\} \] where \(x_0, v\in\mathbb{R}^2\).

So, we need to find:

  1. A point already on \(L\): \(x_0\in\mathbb{R}^2\),
  2. the direction vector of \(L\): \(v\in\mathbb{R}^2\).

7.1 Finding a point on \(L\)

Note

Claim:

\(x_0 = -\frac{w}{\|w\|^2}b\) is on \(L\)

Proof:

Just check:

\[ \begin{eqnarray} w^T x_0 + b &=& -w^T\left(\frac{w}{\|w\|^2}b\right) + b \\ &=& -\frac{w^T w}{\|w\|^2}b + b \\ &=& -b + b \\ &=& 0 \end{eqnarray} \]

7.2 Finding the direction vector of \(L\)

Note

Claim: for any \(u, v\in\mathbb{R}\), \[ \left[\begin{array}{c} u \\ v \end{array} \right]^T \left[\begin{array}{c} -v \\ u \end{array} \right] = 0 \]

Therefore, given \(w = [w_0, w_1]\), the direction vector is simply \(v = [-w_1, w_0]\).

7.3 Plotting \(L\)

Now that we have: \(L = \{x_0 + vt\}\),

We just need two points on \(L\):

  • \(p_1 = x_0 + vt_1\)
  • \(p_2 = x_0 + vt_2\)

We pick \(t_1\) and \(t_2\) to be two arbitrary values.

pl.plot([p1[0], p2[0]], [p1[1], p2[1]], ...)
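Putting Sections 7.1–7.3 together, here is one possible sketch that draws the boundary on top of the scatter plot from Section 2. It assumes the trained binary model from Section 5 (with its w and b), as well as x_df, c, and pl from the earlier cells; the choice of t1, t2 and the 0.5 axis padding are arbitrary.

import numpy as np

# read off the trained parameters from Section 5
w = model.w.detach().numpy()
b = model.b.detach().numpy()

# a point on the boundary: x0 = -(w / ||w||^2) * b
x0 = -w / (w @ w) * b
# a direction vector perpendicular to w
v = np.array([-w[1], w[0]])

# two arbitrary parameter values
t1, t2 = -10.0, 10.0
p1 = x0 + v * t1
p2 = x0 + v * t2

pl.scatter(x_df['sepal_length'], x_df['sepal_width'], c=c)
pl.plot([p1[0], p2[0]], [p1[1], p2[1]], 'k--')
# keep the view focused on the data points
pl.xlim(x_df['sepal_length'].min() - 0.5, x_df['sepal_length'].max() + 0.5)
pl.ylim(x_df['sepal_width'].min() - 0.5, x_df['sepal_width'].max() + 0.5);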

8 Generalizing to multiple classes

We will now perform classification on all three species based on the sepal length and sepal width.

With multiple classes, the model needs to compute multiple probabilities.

\[ f(x|\theta) = \left[\begin{array}{c} p_1 \\ p_2 \\ p_3 \\ \end{array}\right] \]

where each \(p_k\) is the probability that the measurements \(x\) come from species \(k\).

Thus: \(p_1+p_2+p_3 = 1\).

8.1 Logits of classification

The model consists of the model parameters \((W, b)\), where:

  • \(W\in\mathbb{R}^{2\times 3}\)
  • \(b\in\mathbb{R}^3\).

Given an input \(x\in\mathbb{R}^2\) (treated as a row vector), we define

\[ v = xW + b \]

From the dimensions, we can tell that \(v\in\mathbb{R}^3\). Here \(v\) is the vector of logits. In order to convert the logits into probabilities, we use the softmax function.

8.2 Softmax function

\[ \mathrm{softmax}:\mathbb{R}^n\to[0,1]^n \]

For \(p = \mathrm{softmax}(v)\), we have:

\[p_i = \frac{e^{v_i}}{\sum_k e^{v_k}}\]
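A small sketch of the softmax computation (assuming torch), comparing the formula above with the built-in torch.softmax:

v = torch.tensor([2.0, 1.0, 0.1])             # a vector of logits
p_manual = torch.exp(v) / torch.exp(v).sum()  # softmax from the formula
p_builtin = torch.softmax(v, dim=-1)
print(p_manual, p_builtin)                    # both sum to 1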

8.3 A general model

\[ f(x|W,b) = \mathrm{softmax}(xW + b) \]

Note, \(W\) and \(b\) can be designed to accommodate any input dimension and number of classes.

9 Linear Layer With Activation Function

The model:

\[f(x|W, b) = \mathrm{softmax}(xW + b)\]

is called the linear layer.

The function softmax is called the activation function.

Both are supported by PyTorch.

class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(2, 3)
    def forward(self, x):
        return nn.functional.softmax(self.linear(x), dim=-1)
x_input = torch.tensor(
    dataset[["sepal_length", "sepal_width"]].values,
    dtype=torch.float32)

y_true = torch.tensor(dataset['species'], dtype=torch.float32)
#
# Use the model to do some classification WITHOUT training.
#

model = Classifier()
p_pred = model(x_input)

p_pred[:5]
tensor([[8.0284e-01, 4.4152e-04, 1.9672e-01],
        [8.2812e-01, 1.8536e-04, 1.7169e-01],
        [7.8302e-01, 9.0312e-04, 2.1607e-01],
        [7.9710e-01, 6.1849e-04, 2.0228e-01],
        [7.8712e-01, 7.7459e-04, 2.1211e-01]], grad_fn=<SliceBackward0>)
#
# The true labels
#

y_true[:5]
tensor([2., 1., 1., 1., 1.])
#
# The one-hot encoding of the true labels
#

one_hots = nn.functional.one_hot(
    y_true.to(torch.int64),
    3
)

one_hots[:5]
tensor([[0, 0, 1],
        [0, 1, 0],
        [0, 1, 0],
        [0, 1, 0],
        [0, 1, 0]])
  • y_true is a tensor of float32.
  • y_true.to(torch.int64) converts each element to int64, so it can be used as an index tensor
  • functional.one_hot(index_tensor, num_classes) converts the index tensor elements to their one-hot encodings.

10 Cross Entropy

Let \(p_i = [p_{i1}, p_{i2}, p_{i3}]\) be the predicted probabilities, and \(y_i\) the true label.

We need a loss function \(L(p_i, y_i)\) to assess the quality of the prediction.

10.1 Cross entropy loss of one-hot encodings

Let \(\mathbf{b}\) be a one-hot encoding vector over \(k\) classes, and \(\mathbf{p} = [p_1, p_2, \dots, p_k]\) be the vector of classification probabilities.

The cross entropy loss is given by:

\[ L_i = \mathrm{crossentropy}(\mathbf{p},\mathbf{b}) = - \sum_{j=1}^k b_j\cdot \log(p_j) \]

crossentropies = -torch.sum(torch.log(p_pred) * one_hots, axis=-1)
df = pd.DataFrame(p_pred.detach())
df['y_true'] = y_true.int()
df['crossentropy' ] = crossentropies.detach()
df.head().round(2)
0 1 2 y_true crossentropy
0 0.80 0.0 0.20 2 1.63
1 0.83 0.0 0.17 1 8.59
2 0.78 0.0 0.22 1 7.01
3 0.80 0.0 0.20 1 7.39
4 0.79 0.0 0.21 1 7.16

11 Training the linear layer with activation function

  • PyTorch CrossEntropyLoss() constructs a loss function \(f(\mathrm{logits}, \mathrm{indexes})\)

  • It computes the softmax inside the loss function.

  • It computes the one-hot vectors inside the loss function, as the sketch below illustrates.
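To see that this is consistent with Section 10, the following sketch compares CrossEntropyLoss applied to raw logits with the softmax-plus-one-hot cross entropy computed by hand; it reuses the untrained Classifier model, x_input, y_true, and one_hots defined above:

with torch.no_grad():
    # raw logits from the linear layer inside the Classifier
    logits = model.linear(x_input)

    # by hand: softmax, then -sum(one_hot * log p), averaged over samples
    p = nn.functional.softmax(logits, dim=-1)
    manual = -torch.sum(one_hots * torch.log(p), dim=-1).mean()

    # library: CrossEntropyLoss on the logits and integer class labels
    library = nn.CrossEntropyLoss()(logits, y_true.to(torch.int64))

    print(manual, library)  # the two values should agree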

model = nn.Linear(2, 3)
loss = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x_input = torch.tensor(
    dataset[['sepal_length', 'sepal_width']].values,
    dtype=torch.float32
)

y_true = torch.tensor(
    dataset['species'].values,
    dtype=torch.int64
)
epochs = 1000
for epoch in range(epochs):
    optimizer.zero_grad()
    pred = model(x_input)
    l = loss(pred, y_true)
    l.backward()
    optimizer.step()
    
    if epoch % (epochs//10) == 0:
        # detach before converting to numpy, since l carries grad history
        print(epoch, l.detach().numpy().round(3))
print(epoch, l.detach().numpy().round(3))
0 1.164
100 0.716
200 0.626
300 0.584
400 0.559
500 0.541
600 0.528
700 0.517
800 0.509
900 0.501
999 0.495

12 Visualizing the 3-way classification

model.eval()
p_pred = model(x_input)
y_pred = p_pred.argmax(axis=1)
print("Predicted classes", y_pred[:5])
print("True classes", y_true[:5])
Predicted classes tensor([2, 2, 1, 1, 1])
True classes tensor([2, 1, 1, 1, 1])
torch.sum(y_pred == y_true) / y_true.shape[0]
tensor(0.7800)
colormap = {
    0: 'red',
    1: 'blue',
    2: 'orange',
}
c = [colormap[y] for y in y_true.numpy()]
pl.scatter(x_input[:,0], x_input[:, 1], c=c)
pl.title('True species classes');

import numpy as np

x_min, y_min = x_input.min(axis=0).values.numpy()
x_max, y_max = x_input.max(axis=0).values.numpy()

x_range = np.linspace(x_min, x_max, 100)
y_range = np.linspace(y_min, y_max, 100)
xx, yy = np.meshgrid(x_range, y_range)

X = np.concatenate([
    xx.reshape(100, 100, 1),
    yy.reshape(100, 100, 1)
], axis=-1)

X.shape
(100, 100, 2)
input = torch.tensor(X.reshape(-1, 2), dtype=torch.float32)
logits = model(input)
output = logits.argmax(axis=-1)
output.shape
torch.Size([10000])
coordinates = X.reshape(-1, 2)

for c in [0, 1, 2]:
    pl.scatter(
        coordinates[output==c, 0],
        coordinates[output==c, 1],
        c=colormap[c],
        alpha=0.2,
        edgecolor='none',
        s=5,
    )
    
c = [colormap[y] for y in y_true.numpy()]
pl.scatter(x_input[:,0], x_input[:, 1], c=c)
pl.title('True species classes');