
Is it the fork/exec overhead or the overhead of re-parsing the DWARF data that is responsible for the slowdown? Process spawn thrash is obviously bad, but I'm curious how much it contributes here. Forks are pretty cheap these days, as I understand it, and an exec may also be pretty cheap since the program image is already going to be sitting in the buffer cache.
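
A rough micro-benchmark makes this easy to sanity-check. A minimal Python sketch (Unix-only; /bin/true and the iteration count are arbitrary placeholders, not anything from the linked bug):

    import os
    import time

    # time N fork+exec round trips of a trivial binary whose image
    # should already be hot in the buffer cache after the first run
    N = 500
    start = time.perf_counter()
    for _ in range(N):
        pid = os.fork()
        if pid == 0:
            os.execv("/bin/true", ["/bin/true"])  # child replaces itself
        os.waitpid(pid, 0)                        # parent waits for the child
    elapsed = time.perf_counter() - start
    print(f"{N} fork+exec cycles: {elapsed:.3f}s "
          f"({elapsed / N * 1e3:.2f} ms each)")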

Reading/parsing DWARF data, on the other hand, is likely to be slow. I'm not totally sure, but maybe the I/O could be sped up by mmap'ing the DWARF data, and maybe part of the parsed result could be cached?
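
For the I/O half, something like this is what I have in mind; a minimal Python sketch, where ./a.out is a placeholder for a binary with debug info:

    import mmap

    # map the binary read-only: pages are faulted in lazily on access
    # instead of being read() into a heap buffer up front
    with open("./a.out", "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as buf:
            # a DWARF parser could slice section bytes straight out of
            # the mapping without copying, e.g. checking the ELF magic:
            assert buf[:4] == b"\x7fELF"

The parsing half could then be cached separately, e.g. by keying the parsed representation on the file's path and mtime.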

Fun story: I once solved a similar performance regression in a machine-learning context, where the calling code would serialize an entire model and pass it to the inference code, which would then deserialize it all, make a single inference, and tear everything down. If online/streaming behavior is not required, a huge speedup can come from simply batching the work.
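
The before/after shape was roughly this; a toy Python sketch where the Model class and the pickle round trip stand in for the real (much heavier) serialization:

    import pickle
    import time

    class Model:
        # stand-in model; real deserialization would be far costlier
        def predict(self, xs):
            return [2 * x for x in xs]

    blob = pickle.dumps(Model())

    def infer_one(x):
        model = pickle.loads(blob)   # setup cost paid once per input
        return model.predict([x])[0]

    def infer_batch(xs):
        model = pickle.loads(blob)   # setup cost paid once per batch
        return model.predict(xs)

    xs = list(range(100_000))
    t0 = time.perf_counter()
    one_by_one = [infer_one(x) for x in xs]
    t1 = time.perf_counter()
    batched = infer_batch(xs)
    t2 = time.perf_counter()
    assert one_by_one == batched
    print(f"per-call: {t1 - t0:.3f}s  batched: {t2 - t1:.3f}s")

The setup cost is amortized across the whole batch instead of being paid per input, which is the same fix the patch applies to the DWARF-parsing case.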



Parsing the DWARF data has orders of magnitude more overhead than fork/exec. (Source: I'm the one who opened the linked Debian bug.)


I suspect it's both. But I didn't measure, because the patch does away with both sources of overhead at the same time :-)




