Kernels and Deep Learning

The following comment, left on one of Yannic Kilcher's videos, is informative:

The one thing that irks me is kernels keep being referred to as “mapping to infinite dimensional space” in these vids. While true, it’s kind of unhelpful. It’s probably more helpful to think of it as mapping to a (potentially infinite) space where the (potentially infinite) bases aren’t restricted to be vectors. The whole trick about the Hilbert space is it’s just a space with a well-defined inner product, and that inner product can be between functions etc. You could have a function mapping to a space where the bases are the sin and cosine functions, for example, and you use that space to take the inner product between different sine/cosine wave combinations. Typically the kind of space you want to map to will express something about the problem you’re trying to solve.

I bring this up because I’ve seen kernels mentioned twice now, and both times the explanations are kind of lacking. Not Yannic’s fault, I was lucky enough to spend a bunch of time around kernel gurus that cleared up a bunch of misconceptions for me…before that, I was making similar mistakes. Content otherwise very solid, as usual!
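
To make the comment's sine/cosine example concrete: random Fourier features (Rahimi & Recht, 2007) build an explicit, finite cosine feature map whose ordinary dot product approximates the RBF kernel's inner product. The sketch below is my own illustration, not something from the videos; the function names and the bandwidth sigma=1.0 are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(x, y, sigma=1.0):
    """Exact RBF kernel: k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma**2))

def random_fourier_features(X, n_features=2000, sigma=1.0):
    """Map rows of X (n_samples, d) into a cosine feature space whose
    plain dot product approximates the RBF kernel (Rahimi & Recht, 2007).
    The bandwidth sigma is a hypothetical choice for illustration."""
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / sigma, size=(d, n_features))  # random frequencies
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)        # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

x = rng.normal(size=3)
y = rng.normal(size=3)
Z = random_fourier_features(np.stack([x, y]))
print("exact kernel value:       ", rbf_kernel(x, y))
print("feature-space dot product:", Z[0] @ Z[1])
```

Increasing n_features tightens the agreement between the two printed numbers, which is the comment's point in miniature: a kernel is just an inner product in some feature space, and the interesting question is what that space expresses about your problem, not whether it happens to be infinite-dimensional.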

The comment's author, Alex Stenlake, also gives an interesting talk in the following Machine Learning Street Talk episode: