
Hybrid neural network training


I am not a machine learning expert by any means, but I found the results presented in Geoff Hinton's Google Tech Talk quite impressive and convincing.

He demonstrates a specific kind of network that can be trained in a clever way: mostly unsupervised, with unlabeled data, and only partially supervised, with relatively little labeled data. He presents results on digit recognition, document classification, and similar-document search.

The first interesting idea is that this kind of neural network can not only recognize images (or other inputs), but also generate them.

The second is that if you can recognize and regenerate a given input, then you can compare the original input with the regenerated one and use that comparison as a training signal.
This makes it possible to train the net to model the data by feeding it unlabeled data (which is far more plentiful). The point is that training a complex, deep network would require an unrealistic amount of labeled data, so discovering and using smarter training algorithms is the only way to go.
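To make this concrete, here is a minimal sketch of reconstruction-driven training: a tied-weight, one-hidden-layer autoencoder whose only training signal is how well it regenerates its own input. Everything here (the numpy implementation, the synthetic data, the sizes) is an illustrative assumption; Hinton's talk actually uses stacked restricted Boltzmann machines trained with contrastive divergence, but the recognize-then-regenerate loop is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Unlabeled "images": noisy variants of random 64-pixel templates,
# so there is real structure for the network to discover. No labels
# are used anywhere in this loop.
templates = (rng.random((4, 64)) > 0.5).astype(float)
X = np.abs(templates[rng.integers(0, 4, 500)] - (rng.random((500, 64)) < 0.1))

n_visible, n_hidden = 64, 16
W = rng.normal(0.0, 0.1, (n_visible, n_hidden))  # tied encode/decode weights
lr = 0.5

for epoch in range(100):
    H = sigmoid(X @ W)          # recognize: input -> hidden code
    X_hat = sigmoid(H @ W.T)    # generate: hidden code -> reconstruction
    delta = (X_hat - X) * X_hat * (1 - X_hat)  # compare with the original
    # Gradient of the squared reconstruction error through the decoder
    # path only (a common simplification for a short sketch).
    W -= lr * (delta.T @ H) / len(X)

print("mean squared reconstruction error:", np.mean((X_hat - X) ** 2))
```

The printed error falls as training proceeds, which is the whole point: the comparison between input and reconstruction is itself the supervision.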

The third idea is to combine this mostly unsupervised training with a final supervised pass on a small(er) set of labeled data.
This allows the network to associate the proper labels with the natural patterns it discovered in the unlabeled data.
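Here is a sketch of that hybrid recipe, under the same illustrative assumptions as above: pretrain on plenty of unlabeled data, then fine-tune with only a handful of labels. For brevity only a small logistic "head" is trained in the supervised phase; in the talk, Hinton fine-tunes the whole network with backpropagation.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Two synthetic classes: noisy variants of two 64-pixel templates.
templates = (rng.random((2, 64)) > 0.5).astype(float)

def sample(n):
    y = rng.integers(0, 2, n)
    X = np.abs(templates[y] - (rng.random((n, 64)) < 0.1))  # flip ~10% of bits
    return X, y

X_unlabeled, _ = sample(2000)       # plentiful unlabeled data
X_labeled, y_labeled = sample(20)   # scarce labeled data

# Phase 1: unsupervised pretraining (same autoencoder as the sketch above).
W = rng.normal(0.0, 0.1, (64, 16))
for _ in range(100):
    H = sigmoid(X_unlabeled @ W)
    X_hat = sigmoid(H @ W.T)
    delta = (X_hat - X_unlabeled) * X_hat * (1 - X_hat)
    W -= 0.5 * (delta.T @ H) / len(X_unlabeled)

# Phase 2: supervised fine-tuning of a logistic head on the learned code,
# using only the 20 labeled examples.
w, b = np.zeros(16), 0.0
H_lab = sigmoid(X_labeled @ W)
for _ in range(500):
    p = sigmoid(H_lab @ w + b)   # predicted probability of class 1
    g = p - y_labeled            # gradient of the logistic loss
    w -= 0.5 * (H_lab.T @ g) / len(g)
    b -= 0.5 * g.mean()

X_test, y_test = sample(500)
pred = sigmoid(sigmoid(X_test @ W) @ w + b) > 0.5
print("test accuracy:", (pred == y_test).mean())
```

The labeled set only has to name the patterns that the unsupervised phase already discovered, which is why so few labels suffice.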

What is most exciting is how Hinton ties the neural network training problem back to the question of how our brains learn.
The connections that compose the brain are too complex to be encoded in DNA, and there is way too little "labeled data" to train such a massive network. That must mean that the brain has evolved a really good learning algorithm which is able to leverage the huge amounts of (multi-)sensory input that it has available.

It seems that the next logical step is to give the machine some way of interacting with the world: just observing is not enough. Humans validate and strengthen their interpretation of the world through constant experimentation. I see an object, interpret its structure in three dimensions, then confirm that belief by moving around the object, or maybe by touching it.

More details are available on Geoff Hinton's homepage.

Update (2008/01/24): Coincidentally, I stumbled on the blog of Greg Linden (who just joined Live Labs). While exploring his posts, I noticed he also recommends Hinton's talk.

Update (2008/04/25): I am halfway through Dan Gilbert's book, Stumbling on Happiness, and was struck by some similarities with Geoff Hinton's approach.
Dan Gilbert observes that prediction and imagination are unique abilities of the human brain, and that imagination works by feeding memories into the perception areas of the brain to simulate experiences.
In some limited way, it seems that Geoff Hinton's neural networks emulate this behavior, using current knowledge and fragments of perception to hallucinate or imagine things.

As a side note: the book goes on to explain many limitations and tricks that accompany this special skill, and illustrates how they often mislead our efforts in pursuing happiness. So far, this is a great book (instructive, fun and challenging). Check out Dan's excellent TED talk (video) if you are curious.
