The first two cells are already programmed to run without errors. Post suitable code that would run without errors in cells 1.1, 1.2, 1.3 and 1.4, based on the descriptions below. This is a Python exercise to be solved in JupyterLab from Anaconda. The template file with the data set cannot be attached here, but give it a shot at what you think would work based on the instructions.
# KNN Exercise

The aim of this exercise is to implement a simple kNN algorithm and apply it to the Iris data set. We first need to set up all packages and import the data:

```python
import sklearn.datasets as datasets
import numpy as np

iris = datasets.load_iris()
N, D = iris.data.shape
```

## 1. Implement nearest neighbours

The first part is to develop the algorithm. We do this in a few steps and then put the result into a function so we can apply it regularly. We first define the training data and the test data point that we use:

```python
# Let the first sample be the test point
x = iris.data[0, :]
ttest = iris.target[0]
# And the rest is the training data
X = iris.data[1:N, :]
t = iris.target[1:N]
Ntr, Dtr = X.shape
```

Now the vector x is the test point and the matrix X is the training set (data matrix). The true label of the test point (which is not known when the algorithm is in use) is ttest, and the labels of the training set are in t.

### 1.1 Calculate distance

The first part of the algorithm is to calculate the distance from the test point to all the points in the training set.

```python
[ ]: # calculate distances
```

### 1.2 Find the K nearest neighbours

Now that all the distances are known, it is possible to identify the K nearest neighbours. You can either implement this yourself or use utilities from Numpy. Create a K-dimensional vector (you can call it knearest) that contains the indices of the K nearest neighbours.

```python
[ ]: # find knearest
```

### 1.3 Find out the labels and vote

Now that the K nearest neighbours have been identified, we need to figure out their labels and find which label has the majority.

```python
[ ]: # calculate klabels, find which label has majority
```

### 1.4 Create the myKNN function

Now put this all together in a function and apply it:

```python
[ ]: def myKNN(x, X, t, K):
         # x = iris.data[0, :], X = iris.data[1:N, :], t = iris.target[1:N] have already been defined
         ...

     K = 5
     guess = myKNN(x, X, t, K)
     print(guess)
```

Is your guess right? How do you know?
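The empty cells 1.1–1.3 could be filled in along these lines. This is a minimal sketch that re-creates the setup cells so it runs standalone; the choices of `np.argsort` for step 1.2 and `np.bincount` for the majority vote are mine, not prescribed by the exercise:

```python
import sklearn.datasets as datasets
import numpy as np

# Setup, as in the exercise's first two cells
iris = datasets.load_iris()
N, D = iris.data.shape
x = iris.data[0, :]          # test point
ttest = iris.target[0]       # its true label (unknown to the algorithm)
X = iris.data[1:N, :]        # training data
t = iris.target[1:N]         # training labels

# 1.1 calculate distances: Euclidean distance from x to every training point
d = np.sqrt(np.sum((X - x) ** 2, axis=1))

# 1.2 find knearest: indices of the K smallest distances
K = 5
knearest = np.argsort(d)[:K]

# 1.3 calculate klabels, then vote: the most frequent label wins
klabels = t[knearest]
guess = np.bincount(klabels).argmax()
print(guess)
```

Since sample 0 is an Iris setosa surrounded by other setosa samples, this should print its class index and agree with ttest.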
Implement the kNN algorithm using the following steps:

**1.1 Calculate the distance**

Implement a distance calculation so that we get the distance from the test point to all the points in the training set. Use the Euclidean distance:

$$d_n = \sqrt{\sum_{i=1}^{D} \left(x_i - z^{\text{train}}_{n,i}\right)^2}$$

where $z^{\text{train}}_n$ is the n-th vector in the training set. The outcome of this step is a vector containing all $d_n$.

**1.2 Identify the K nearest neighbours**

Now that all the distances are known, it is possible to identify the nearest neighbours. You can either implement this yourself or use utilities from Numpy. Create a K-dimensional vector (you can call it knearest) that contains the indices of the K nearest neighbours.

**1.3 Find the labels and vote**

Use the indices of the nearest neighbours to find their labels (in the target vector). Let's call this vector klabels. Then find which label has the majority.

**1.4 Create the myKNN function**

These steps need to be put together in a function. Call it myKNN and define it using the following (put your code where the `...` is):

```python
def myKNN(x, X, t, K):
    ...
```

The function should take as inputs:

1. A test point x,
2. The training set data matrix X,
3. The training set target values (labels) t,
4. The number of nearest neighbours K.

It should return a single value, the index of the class which is the algorithm's guess. You should therefore be able to call your function (and print the answer) with:

```python
K = 5
guess = myKNN(x, X, t, K)
print(guess)
```

Is your guess right? How do you know?
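Putting the steps above together, cell 1.4 could look like the following sketch. The setup lines re-create the data so the block runs on its own; `np.argsort` and `np.bincount` are my implementation choices for the neighbour search and the vote:

```python
import sklearn.datasets as datasets
import numpy as np

def myKNN(x, X, t, K):
    """Return the majority class label among the K nearest neighbours of x."""
    d = np.sqrt(np.sum((X - x) ** 2, axis=1))  # 1.1 distances to all training points
    knearest = np.argsort(d)[:K]               # 1.2 indices of the K nearest
    klabels = t[knearest]                      # 1.3 their labels
    return np.bincount(klabels).argmax()       #     majority vote

# Apply it as in the exercise
iris = datasets.load_iris()
N, D = iris.data.shape
x, ttest = iris.data[0, :], iris.target[0]
X, t = iris.data[1:N, :], iris.target[1:N]

K = 5
guess = myKNN(x, X, t, K)
print(guess)
```

You know the guess is right by comparing it with ttest, the true label of the test point, which here the algorithm never saw.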