Hacker News | atmosx's favorites

There is an excellent tutorial available on YouTube:

Part 1: http://www.youtube.com/watch?v=d-abs0s8uis
Part 2: http://www.youtube.com/watch?v=ZuiSbMS0_5g


I remember reading, around 2002-2004, about a sysadmin managing a very large "supernode" p2p server who fine-tuned its Linux kernel and recompiled an optimized build of the p2p app (allocating the smallest possible data structures per client) to support up to one million concurrent TCP connections. It wasn't a test system; it was a production server that routinely reached that many connections at its daily peak.

If it was possible in 2002-2004, I am not impressed that it is still possible in 2011.

One of the optimizations was to shrink the per-connection TCP buffers (net.ipv4.tcp_{mem,rmem,wmem}) so that each client allocated only one physical memory page (4 kB), meaning one million concurrent TCP connections needed only about 4 GB of RAM. His machine had barely more than 4 GB (6 or 8? can't remember), which was a lot of RAM at the time.
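A hedged sketch of what that tuning might look like (the exact values here are my illustration, not the original admin's settings): tcp_rmem and tcp_wmem take per-socket "min default max" triples in bytes, while tcp_mem is a system-wide budget measured in 4 kB pages.

```shell
# Clamp per-socket receive/send buffers to a single 4 kB page
# ("min default max", in bytes). Illustrative values only.
sysctl -w net.ipv4.tcp_rmem="4096 4096 4096"
sysctl -w net.ipv4.tcp_wmem="4096 4096 4096"

# System-wide TCP memory budget ("low pressure max", in 4 kB pages):
# allow up to ~1M pages, i.e. roughly 4 GB across all sockets.
sysctl -w net.ipv4.tcp_mem="262144 524288 1048576"
```

These need root, and buffers this small would hurt throughput on fast links; one would also have to raise the file-descriptor limits (fs.file-max, ulimit -n), since each connection consumes a descriptor.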

I cannot find a link to the story, though...

