Hacker News

And praying that the data in slow memory is magically already in the fast memory.

I think ultimately what I'm asking for means a much, much tighter bound on worst-case performance, at the cost of best-case performance, which most of us never see anyway. For certain workloads, that could end up being a net positive. And there's probably some way to expose CPU metadata that gives some of that difference back.



I think it’s worthwhile to distinguish between throughput and latency in these sorts of discussions, rather than just talking about "performance": scratchpads are usually better for latency (even best-case latency), while caches are usually better for throughput. Though of course, as in any discussion of computer performance, caveats abound.
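The latency point can be made concrete with a toy model. This is a sketch, not a model of any real hardware: all cycle counts and the 4-line direct-mapped cache are made-up illustrative parameters. It shows why a scratchpad gives a tight worst-case bound (every access costs the same once data is explicitly placed there), while a cache's per-access cost swings between hit and miss depending on the access pattern.

```python
# Toy model contrasting cache vs. scratchpad access latency.
# All cycle counts and sizes are hypothetical illustrative numbers.

CACHE_LINES = 4          # direct-mapped cache with 4 lines (hypothetical)
HIT_CYCLES = 2           # cost of a cache hit
MISS_CYCLES = 50         # cost of fetching from slow memory on a miss
SCRATCHPAD_CYCLES = 3    # fixed cost once data is explicitly placed there

def cache_access_cost(addresses):
    """Per-access cost for a direct-mapped cache (full address as tag)."""
    lines = [None] * CACHE_LINES
    costs = []
    for addr in addresses:
        idx = addr % CACHE_LINES
        if lines[idx] == addr:
            costs.append(HIT_CYCLES)      # hit: line already holds this address
        else:
            lines[idx] = addr             # miss: evict and refill the line
            costs.append(MISS_CYCLES)
    return costs

def scratchpad_access_cost(addresses):
    """Every access costs the same: the scratchpad is explicitly managed."""
    return [SCRATCHPAD_CYCLES] * len(addresses)

# Addresses 0 and 4 collide on the same cache line, so alternating
# between them makes every access a miss (a pathological pattern):
pattern = [0, 4] * 8
cache_costs = cache_access_cost(pattern)
spm_costs = scratchpad_access_cost(pattern)

print("cache      worst/avg:", max(cache_costs), sum(cache_costs) / len(cache_costs))
print("scratchpad worst/avg:", max(spm_costs), sum(spm_costs) / len(spm_costs))
```

On the colliding pattern the cache pays the miss cost on every access, while a friendly pattern (repeated hits to one address) does far better; the scratchpad is slower than the best cache case but its worst case is bounded and pattern-independent, which is the trade-off being discussed.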




