
I don't want to knock the article, since this kind of exploration is fun, and well-written blogs are always a joy to read.

But knowing some statistics, the entirety of Snakes and Ladders is an absorbing Markov chain [1] and can be analyzed very quickly as such, without having to resort to sampling.

Random sampling is easy, but take a step back and the entire state space is just an integer in [1, 100]. (Actually there are fewer than 100 states, because the bottom of a ladder or the top of a snake is never a resting state.)

The state transitions are very easy to model: probability 1/6 to each of the six next states (sometimes fewer distinct states, in which case the probabilities simply add).

Having constructed our Markov chain, we can instantly and exactly compute the expected time-to-victory from each square.
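A minimal sketch of that computation, using NumPy and a sample board (the `JUMPS` table below is illustrative, not any particular published board; it assumes the exact-landing rule, where an overshooting roll leaves you in place). The expected number of moves from each square is the row sum of the fundamental matrix (I - Q)^-1, where Q is the transient part of the transition matrix:

```python
import numpy as np

# Illustrative board: maps ladder bottoms and snake heads to destinations.
JUMPS = {1: 38, 4: 14, 9: 31, 16: 6, 28: 84, 36: 44,
         47: 26, 49: 11, 51: 67, 56: 53, 62: 19, 64: 60,
         71: 91, 80: 100, 87: 24, 93: 73, 95: 75, 98: 78}

N = 101  # states 0..100; square 100 is the absorbing "win" state
P = np.zeros((N, N))
for s in range(100):
    for roll in range(1, 7):
        t = s + roll
        if t > 100:
            t = s            # overshoot: stay put (exact-landing rule)
        t = JUMPS.get(t, t)  # slide down snakes / climb ladders
        P[s, t] += 1 / 6     # equal die faces may merge into one state
P[100, 100] = 1.0

# Transient block Q, fundamental matrix (I - Q)^-1, expected absorption times.
Q = P[:100, :100]
fundamental = np.linalg.inv(np.eye(100) - Q)
expected_moves = fundamental @ np.ones(100)
print(f"expected moves from start: {expected_moves[0]:.2f}")
```

The same row sums give the expected time-to-victory from every square at once, which is exactly what sampling only approximates.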

[1] http://en.wikipedia.org/wiki/Absorbing_Markov_chain




Interesting that these two articles use different rule sets. The first reckons rolling anything above 100 is a win, whereas the second requires an exact landing!

(Neither plays the "bounce-back" rule always demanded by my friend's little sister!)


I also wrote a little blog post about that very thing back in 2011. In addition to using a Markov chain approach, I also took a look at it from an information-entropy perspective. And the code is in R, to boot! http://bayesianbiologist.com/2011/12/31/uncertainty-in-marko...


He mentioned in the article that you can use Markov chains.





