"E.g. abbreviating deep belief nets as DBM, which is the commonly used acronym for deep Boltzmann machines. These are similar, but very different. Calling an RBM an encoder is not far-fetched, but there are many differences between autoencoders and RBMs. He eventually claims an RBM minimises reconstruction error, which is just plain wrong and shows that this guy has absolutely no clue what he is writing about."
'Technically' this is correct: the RBM's contrastive divergence (CD) algorithm does not minimize the reconstruction error directly. But that is not the point.
It is known that when training an RBM, the reconstruction error decreases, but not monotonically; in fact it fluctuates. In the words of Hinton, 'use it but don't trust it'.
So in a global sense, yes, I would say that the RBM does eventually minimize the reconstruction error even though it fluctuates.
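To make that concrete, here is a minimal NumPy sketch (my own toy example, not code from the post): a CD-1 training loop on random binary data that logs the reconstruction error each epoch. The weight update is built from the CD statistics, not from any gradient of the reconstruction error, yet the logged error still tends to drift downward while fluctuating.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary data: 200 samples, 20 visible units
X = (rng.random((200, 20)) < 0.3).astype(float)

n_vis, n_hid = 20, 10
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_vis = np.zeros(n_vis)
b_hid = np.zeros(n_hid)
lr = 0.05

for epoch in range(20):
    # Positive phase: hidden probabilities and samples given the data
    p_h = sigmoid(X @ W + b_hid)
    h = (rng.random(p_h.shape) < p_h).astype(float)

    # Negative phase (one Gibbs step = CD-1): reconstruct visibles, then hiddens
    p_v = sigmoid(h @ W.T + b_vis)
    v = (rng.random(p_v.shape) < p_v).astype(float)
    p_h_recon = sigmoid(v @ W + b_hid)

    # CD-1 update: difference of data and reconstruction statistics,
    # NOT the gradient of the reconstruction error logged below
    W += lr * (X.T @ p_h - v.T @ p_h_recon) / len(X)
    b_vis += lr * (X - v).mean(axis=0)
    b_hid += lr * (p_h - p_h_recon).mean(axis=0)

    # Reconstruction error, logged only as a (noisy) progress indicator
    recon_err = np.mean((X - p_v) ** 2)
    print(f"epoch {epoch:2d}  reconstruction error {recon_err:.4f}")
```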
I can even offer a conjecture here on why the error fluctuates: in a discrete RG flow map, there could be finite-size effects that give rise to log-periodic oscillations. This is a stretch, but it is something that could be tested.
As to stacking the RBMs to form a DBN: yeah, that's the point.
"Hinton showed that RBMs can be stacked and trained in a greedy manner to form so-called Deep Belief Networks (DBN)"
http://deeplearning.net/tutorial/DBN.html
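And here is a rough sketch of that greedy layer-wise idea: train one RBM, freeze it, and feed its hidden activations as "data" to the next RBM. The `train_rbm` helper is assumed to be something like the CD-1 loop above, returning the learned weights and hidden biases; this is not the Theano code from the linked tutorial.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_dbn(X, layer_sizes, train_rbm):
    """Greedily train a stack of RBMs; returns a list of (W, b_hid) per layer.

    `train_rbm(data, n_hid)` is an assumed helper that trains a single RBM
    (e.g. with CD-1 as above) and returns its weights and hidden biases.
    """
    layers = []
    data = X
    for n_hid in layer_sizes:
        W, b_hid = train_rbm(data, n_hid)   # train this layer's RBM on the current representation
        layers.append((W, b_hid))
        data = sigmoid(data @ W + b_hid)    # hidden probabilities become the next layer's "data"
    return layers
```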