
The author mentioned 10^20 combinations taking millions of years, but a modern server GPU can deliver about 10^15 operations per second (roughly a petaflop) in FP16 mode. Keep in mind that 65K divisions of a box one meter per side is about 15 micrometers! This means that, roughly speaking, it would be possible to brute-force through all possibilities in about 10^5 seconds, which is just over a day. It helps that this type of algorithm is almost all computation with very little data transfer to and from main memory, and is "embarrassingly parallel".
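The back-of-envelope arithmetic can be sketched in a few lines. All the numbers here are assumptions from the comment above (one op per candidate, a flat petaflop of FP16 throughput), not benchmarks:

```python
combinations = 1e20        # search space size claimed in the article
ops_per_candidate = 1      # assume one fused op per candidate (optimistic)
gpu_throughput = 1e15      # FP16 ops/s, roughly one petaflop

seconds = combinations * ops_per_candidate / gpu_throughput
print(f"{seconds:.0f} s ≈ {seconds / 3600:.0f} h ≈ {seconds / 86400:.1f} days")
```

In practice scoring one candidate packing costs far more than one op, so this is a lower bound; the point is that the exponent lands near "a day", not "millions of years".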

Some light optimisation, such as exploiting symmetries to reduce work, combined with multiple GPUs in parallel, could bring this down to hours or even minutes.

It would be a fun thing to cloud-host, spinning up spot-priced GPUs on demand.

Similar brute-force tricks can be applied to other NP-hard problems, such as optimising electronic circuit board wiring, factory floor planning, and so on.

The ridiculous amount of compute we have available to us in a GPU reminds me of this quote from Stargate Atlantis:

JEANNIE: The energy you'd need would be enormous to the point of absurd.

McKAY: Absurd we can do. We have something called a Zero Point Module which essentially does what we're attempting on a smaller scale -- extract energy from subspace time.



The program taking millions of years is hyperbole, but spinning up a GPU cluster for this can't possibly be an effective use of time.


That's your intuitive take, because it feels wasteful somehow, but it boils down to simple dollar terms: if a few hundred dollars' worth of compute can make a box smaller, that might save the company hundreds of thousands of dollars over the years.


That seems very unlikely, considering that different people would have different things they want packed.



