
Well, there is more.

E.g., abbreviating deep belief nets as DBM, which is the commonly used acronym for deep Boltzmann machines. These are related but very different models. Calling an RBM an encoder is not all that far-fetched, but there are many differences between autoencoders and RBMs. He eventually claims an RBM minimises reconstruction error, which is just plain wrong and shows that this guy has absolutely no clue what he is writing about.



'Technically' this is correct--the CD algorithm for training an RBM is not directly minimizing reconstruction error; but that's not the point.

It is known that when training an RBM, the reconstruction error generally decreases, but not monotonically; in fact it fluctuates. In the words of Hinton, 'trust it but don't use it'.

http://www.cs.toronto.edu/~hinton/absps/guideTR.pdf (which is cited in the post as well)

So in a global sense, yes, I would say that the RBM does eventually minimize the reconstruction error even though it fluctuates.
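To make the distinction concrete, here is a minimal numpy sketch (my own toy example, not from the post) of CD-1 training of a tiny binary RBM. Note that the weight update uses the contrastive divergence statistics, not the gradient of reconstruction error; the error is only computed afterwards as a monitoring quantity, and it tends to drift down while fluctuating. All sizes and hyperparameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny binary RBM: 6 visible, 4 hidden units (sizes are arbitrary).
n_vis, n_hid = 6, 4
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_vis = np.zeros(n_vis)
b_hid = np.zeros(n_hid)

# Toy data: noisy copies of two binary patterns.
patterns = np.array([[1, 1, 1, 0, 0, 0],
                     [0, 0, 0, 1, 1, 1]], dtype=float)
data = patterns[rng.integers(0, 2, size=50)]

lr = 0.1
errors = []
for epoch in range(100):
    v0 = data
    # Positive phase: hidden probabilities and a binary sample.
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one step of Gibbs sampling (CD-1).
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    p_h1 = sigmoid(p_v1 @ W + b_hid)
    # CD-1 update: approximates the log-likelihood gradient,
    # NOT the gradient of reconstruction error.
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(data)
    b_vis += lr * (v0 - p_v1).mean(axis=0)
    b_hid += lr * (p_h0 - p_h1).mean(axis=0)
    # Reconstruction error: monitored as a sanity check only.
    errors.append(np.mean((v0 - p_v1) ** 2))

print(f"first: {errors[0]:.4f}  last: {errors[-1]:.4f}")
```

On this toy data the final error is well below the initial one, even though the per-epoch trace is not monotone--which is exactly the behaviour the Hinton guide describes.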

I can even offer a conjecture on why the error fluctuates: in a discrete RG flow map, there could be finite-size effects that would give log-periodic fluctuations. This is a stretch--but it is something that could be tested.

I explain this idea here http://charlesmartin14.wordpress.com/2015/01/16/the-bitcoin-...

As to stacking the RBMs to form a DBN--yeah that's the point. "Hinton showed that RBMs can be stacked and trained in a greedy manner to form so-called Deep Belief Networks (DBN)" http://deeplearning.net/tutorial/DBN.html
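The greedy layer-wise scheme from that tutorial can be sketched in a few lines (again a toy numpy illustration of mine, reusing a CD-1 trainer; layer sizes are arbitrary): each RBM is trained on the hidden activations of the one below it, and the trained weights form the layers of the DBN.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hid, epochs=50, lr=0.1):
    """Train one binary RBM with CD-1; return weights and hidden bias."""
    n_vis = data.shape[1]
    W = 0.01 * rng.standard_normal((n_vis, n_hid))
    b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)
    for _ in range(epochs):
        p_h0 = sigmoid(data @ W + b_hid)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        p_v1 = sigmoid(h0 @ W.T + b_vis)
        p_h1 = sigmoid(p_v1 @ W + b_hid)
        W += lr * (data.T @ p_h0 - p_v1.T @ p_h1) / len(data)
        b_vis += lr * (data - p_v1).mean(axis=0)
        b_hid += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_hid

# Toy binary data.
data = rng.integers(0, 2, size=(40, 8)).astype(float)

# Greedy layer-wise stacking: each RBM is trained on the hidden
# activations (here, the hidden probabilities) of the one below.
layer_sizes = [6, 4, 2]
stack, x = [], data
for n_hid in layer_sizes:
    W, b_hid = train_rbm(x, n_hid)
    stack.append((W, b_hid))
    x = sigmoid(x @ W + b_hid)  # becomes the input for the next layer

print([W.shape for W, _ in stack])  # [(8, 6), (6, 4), (4, 2)]
```

The resulting stack of weight matrices is exactly the "stacked RBMs" initialization of a DBN; a real implementation would then fine-tune the whole network.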



