kNN Classifier from Scratch (numpy only)
k-Nearest Neighbors
k-Nearest Neighbors is a supervised machine learning algorithm used for regression and classification, and it is also commonly used for missing-value imputation. The technique "groups" data according to the similarity of its features.
kNN has only one hyper-parameter: the size of the neighborhood (k):
- k is the number of neighbors the algorithm compares a point with. Most of the time, at least for classification and imputation, k is chosen odd so that ties between classes are less likely.
- the bigger the k, the less sharply defined the classification areas: with k = 1 every training point carves out its own tiny region, while a very large k smooths predictions toward the overall majority class.
Distance is the key factor in deciding which neighbors are closest, and it shapes the size and characteristics of the neighborhoods. The most commonly used metric is Euclidean distance, which is simple: it measures the straight-line distance between two points. But it is not suited to every problem. Depending on your needs, you may pick one of the following distance measures (a small numpy sketch of each one follows the list):
- Euclidean: the shortest (straight-line) distance between two points. It is sensitive to feature scale, so it may be misleading when features are not normalized. It's typically used in face recognition.
- Taxicab or Manhattan: the sum of the absolute differences of the Cartesian coordinates of two points. It works the way a car has to drive around 'blocks' to reach its destination: in a two-dimensional setting it simply adds the horizontal and vertical distances.
- Minkowski: a generalization of both Euclidean and Manhattan distance; with its exponent p set to 2 it reduces to Euclidean, and with p = 1 to Manhattan.
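For concreteness, here is a minimal numpy sketch of the three metrics for a single pair of points (the sample values and the exponent p are only illustrative):
[code language='python']
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([4.0, 6.0])

euclidean = np.sqrt(np.sum((a - b)**2))          # straight-line distance
manhattan = np.sum(np.abs(a - b))                # sum of absolute coordinate differences
p = 3                                            # illustrative exponent
minkowski = np.sum(np.abs(a - b)**p)**(1.0 / p)  # p=2 gives Euclidean, p=1 gives Manhattan
[/code]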
The number of features influences kNN significantly: the more features, the more 'unique' each neighborhood becomes. It also affects speed, because every train/test distance (roughly n x m x d operations) has to be computed before the closest k neighbors can be determined.
The kNN Algorithm
The most efficient way to implement the algorithm is in vectorized form: instead of computing distances point by point, we compute the whole distance matrix at once and then sort it to find the shortest distances.
1.- Create a matrix with all the distances. The size of the matrix is i x j where i = rows in training set and j = rows in testing set.
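The vectorized computation below relies on the expansion ||a - b||² = ||a||² - 2·(a·b) + ||b||², so the entire distance matrix comes out of a single matrix multiplication plus two vectors of squared norms, rather than a loop over every train/test pair.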
[code language='python']
import numpy as np

def knn(xTrain, xTest, k):
    """
    Finds the k nearest neighbors of xTest in xTrain.
    Input:
        xTrain = n x d matrix. n=rows and d=features
        xTest = m x d matrix. m=rows and d=features (same number of features as xTrain)
        k = number of nearest neighbors to be found
    Output:
        indices = k x m matrix with the xTrain row indices of the k closest neighbors of each xTest point
        distances = k x m matrix with the corresponding sorted distances
    """
    # Vectorized Euclidean distances (n x m): ||a-b||^2 = ||a||^2 - 2*a.b + ||b||^2
    distances = -2 * xTrain @ xTest.T + np.sum(xTest**2, axis=1) + np.sum(xTrain**2, axis=1)[:, np.newaxis]
    # Because of floating-point precision, some very small values can come out
    # slightly negative, so clip them to zero before taking the square root.
    distances[distances < 0] = 0
    # For speed you can skip the square root (it does not change the ordering),
    # but apply it if you want exact distances.
    distances = distances**.5
    indices = np.argsort(distances, 0)  # per column, training indices sorted by distance
    distances = np.sort(distances, 0)   # distances sorted along axis 0
    # Return only the top-k closest neighbors.
    return indices[0:k, :], distances[0:k, :]
[/code]
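As a quick sanity check, here is an illustrative call on a tiny made-up dataset (the toyTrain/toyTest arrays and their values are purely hypothetical):
[code language='python']
toyTrain = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 3.0], [8.0, 8.0]])  # 4 training points, 2 features
toyTest = np.array([[1.5, 2.5], [7.5, 8.5]])                           # 2 test points, 2 features
idx, dist = knn(toyTrain, toyTest, k=2)
print(idx.shape, dist.shape)  # both (2, 2): one row per neighbor, one column per test point
[/code]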
2.- Sort the matrix by columns. Since the distance from every testing point to every training point is now in one matrix, we can sort the indices within each column to find each testing point's closest k neighbors.
3.- Get the y-label that repeats the most (classification) or the average of the y-labels (regression). In other words, find the training points closest to each testing point and take the mode of their labels for classification or the mean for regression (a regression sketch appears after the classification code below).
4.- Create a new array with the predicted label of each testing point. The array has the same size as the y of the testing set.
[code language='python']
def knn_predictions(xTrain, yTrain, xTest, k=3):
    """
    Input:
        xTrain = n x d matrix. n=rows and d=features
        yTrain = n x 1 array. n=rows with label values
        xTest = m x d matrix. m=rows and d=features
        k = number of nearest neighbors to be found
    Output:
        predictions = predicted labels, i.e. predictions[i] is the predicted label of xTest[i, :]
    """
    indices, distances = knn(xTrain, xTest, k)
    yTrain = yTrain.flatten()
    rows, columns = indices.shape
    predictions = list()
    for j in range(columns):
        temp = list()
        for i in range(rows):
            cell = indices[i][j]
            temp.append(yTrain[cell])  # label of the i-th closest neighbor of test point j
        # the key function brings the mode: the label that repeats the most
        predictions.append(max(temp, key=temp.count))
    predictions = np.array(predictions)
    return predictions
[/code]
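The function above covers classification (the mode of the neighbor labels). As mentioned in step 3, a regression variant would simply average them instead; here is a minimal sketch, reusing the knn() helper and assuming yTrain holds continuous values:
[code language='python']
def knn_regression(xTrain, yTrain, xTest, k=3):
    # Same neighbor search as before, but the prediction is the mean
    # of the k nearest labels instead of the mode.
    indices, distances = knn(xTrain, xTest, k)
    yTrain = yTrain.flatten()
    # yTrain[indices] is a k x m matrix of neighbor labels; average down each column
    return np.mean(yTrain[indices], axis=0)
[/code]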
5.- Calculate the accuracy of the predicted labels. Compare the predicted y of the testing set with the actual y of the testing set. If accuracy is low, we can try to improve it by changing k.
[code language='python']
#from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics
import matplotlib.pyplot as plt

#first check which k works best
Ks = 15
mean_acc = np.zeros((Ks-1))
std_acc = np.zeros((Ks-1))
for n in range(1, Ks):
    #train the model and predict
    #neigh = KNeighborsClassifier(n_neighbors=n).fit(xTrain, yTrain)
    #yhat = neigh.predict(xTest)
    yhat = knn_predictions(xTrain, yTrain, xTest, n)
    mean_acc[n-1] = metrics.accuracy_score(yTest, yhat)
    std_acc[n-1] = np.std(yhat == yTest) / np.sqrt(yhat.shape[0])

print("The best accuracy was:", np.round(mean_acc.max()*100, 2), "% with k=", mean_acc.argmax()+1)

plt.plot(range(1, Ks), mean_acc, 'g')
plt.fill_between(range(1, Ks), mean_acc - 1 * std_acc, mean_acc + 1 * std_acc, alpha=0.05)
plt.legend(('Accuracy', '+/- 1 std'))
plt.ylabel('Accuracy')
plt.xlabel('Number of Neighbors (k)')
plt.tight_layout()
plt.show()
[/code]
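If you prefer to keep the evaluation numpy-only as well, the same accuracy can be computed without sklearn; a one-line sketch, assuming yhat and yTest are both 1-D label arrays:
[code language='python']
accuracy = np.mean(yhat == yTest)  # fraction of correct predictions, same as metrics.accuracy_score
[/code]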