
# Kernels - Additional Intuition

If you're here, you probably checked out a few blog posts or Wikipedia to understand kernels. So these are just a few additional thoughts. First: why even do this 'kernel trick'?

The application we will be looking at in this post is clustering a set of points. For this, some kind of distance measure between the points is essential; in fact, that is the main ingredient a clustering algorithm needs.

Consider a dataset of 4-dimensional points. Pick a point $(x_1, x_2, x_3, x_4)$. For the sake of this example, suppose you find that to separate the points into multiple clusters you have to map them into a 10-dimensional space, in which the dot product between two mapped points $x$ and $y$ is given by the following formula:

$$\left(\sum_{i=1}^{4} x_i y_i\right)^2$$

Fully expanded, the dot product of two mapped points consists of the following ten terms, i.e. each 4-dimensional point has effectively become a 10-dimensional one:

$$\{x_1^2 y_1^2,\; 2 x_1 x_2 y_1 y_2,\; 2 x_1 x_3 y_1 y_3,\; 2 x_1 x_4 y_1 y_4,\; x_2^2 y_2^2,\; x_3^2 y_3^2,\; x_4^2 y_4^2,\; 2 x_2 x_3 y_2 y_3,\; 2 x_2 x_4 y_2 y_4,\; 2 x_3 x_4 y_3 y_4\}$$
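To make the mapping concrete, here is a minimal NumPy sketch of the explicit feature map. The $\sqrt{2}$ factors on the cross terms are an assumption on my part, chosen so that the dot product of two mapped points reproduces exactly the ten terms above; the example points are made up.

```python
import numpy as np

def phi(x):
    """Map a 4-dimensional point to the explicit 10-dimensional feature space
    of the degree-2 polynomial kernel: squared terms plus sqrt(2)-scaled cross terms."""
    x1, x2, x3, x4 = x
    return np.array([
        x1 * x1, x2 * x2, x3 * x3, x4 * x4,          # 4 squared terms
        np.sqrt(2) * x1 * x2, np.sqrt(2) * x1 * x3,  # 6 cross terms
        np.sqrt(2) * x1 * x4, np.sqrt(2) * x2 * x3,
        np.sqrt(2) * x2 * x4, np.sqrt(2) * x3 * x4,
    ])

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.5, -1.0, 2.0, 1.5])
print(phi(x))           # the 10-dimensional representation of x
print(phi(x) @ phi(y))  # dot product in the 10-dimensional space -> 110.25
```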

You could simply convert all points into this representation and then apply a clustering algorithm, which will use some distance measure on those points such as the Euclidean distance. But look at all these calculations: for each point you have to compute 10 values. Now imagine you did not have to convert the points at all. Instead, you use the polynomial kernel:

$$\left(\sum_{i=1}^{4} x_i y_i\right)^2 = (x^T y)^2$$
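As a sketch of what that buys you: the kernel is just one dot product in the original 4-dimensional space, followed by a square. For the same example points as above it returns the same number as `phi(x) @ phi(y)` from the previous sketch, without ever forming the 10-dimensional vectors.

```python
import numpy as np

def poly_kernel(x, y):
    """Degree-2 polynomial kernel: one dot product in the original 4-d space, squared."""
    return (x @ y) ** 2

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.5, -1.0, 2.0, 1.5])

# Same value as the dot product of the explicit 10-dimensional vectors above.
print(poly_kernel(x, y))  # 110.25
```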

The crucial point here is that the kernel gives us a kind of similarity measure (due to the dot product), which is exactly what we want. Since that is essentially the only thing we need for a successful clustering, using a kernel works here. Of course, if you needed the higher-dimensional representation itself for something more complicated, a kernel would no longer suffice. The second crucial point is that evaluating the kernel is much faster. You do not have to first convert all the points to 10 dimensions and then apply some distance measure; you get the same result with a single dot product, i.e. one summation loop. To be precise, you iterate $N = 4$ times with the kernel, but $N \cdot 2.5 = 10$ times with the naive way of converting the points to the higher dimension.
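Tying this back to clustering: since clustering only needs pairwise distances, the squared Euclidean distance in the 10-dimensional space can itself be expressed through the kernel alone, via $\|\varphi(x) - \varphi(y)\|^2 = k(x,x) - 2k(x,y) + k(y,y)$. A minimal sketch, assuming the same degree-2 polynomial kernel and the same made-up example points:

```python
import numpy as np

def poly_kernel(x, y):
    return (x @ y) ** 2

def kernel_sq_distance(x, y):
    """Squared Euclidean distance between phi(x) and phi(y) in the 10-d feature
    space, computed without ever leaving the original 4-d space:
    ||phi(x) - phi(y)||^2 = k(x,x) - 2 k(x,y) + k(y,y)."""
    return poly_kernel(x, x) - 2 * poly_kernel(x, y) + poly_kernel(y, y)

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.5, -1.0, 2.0, 1.5])
print(kernel_sq_distance(x, y))  # plug this into any distance-based clustering
```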
