I’ll be honest, I kinda don’t get flame graphs. I mean, I understand what they are. I have just always strictly preferred a proper timeline view, à la Superluminal or Tracy.
Using 20ms chunks for a game is fine but also super weird. Ain’t no game using 20ms frames! So if you were using this for real you’d get all kinds of oddities. Just give me a timeline and call it a day plz.
The origin problem for flame graphs was MySQL server performance involving dozens of threads. As a timeline view you need dozens of timelines, one per thread, since if you render them on a single timeline (I know this is probably obvious) you get samples from different threads interleaved from one moment to the next, turning the visualization into hair. Flame graphs scale forever and always show the aggregate: any number of threads, servers, microservices, etc.
I think a great UI should do both: have a toggle for switching between flame graphs (the summary) and timelines (aka "flame charts") for analyzing time-based patterns. I've encouraged this before, and some profilers now provide that toggle, like Firefox's profiler (its Flame Graph view for the summary and its Stack Chart view for the timeline).
As for 20ms: yes, we do want to take it down. An HN comment from years ago, when I first published FlameScope, suggested putting a game frame on the y-axis instead of 1 second, so that each column shows the rendering of one game frame and you can see time-offset patterns across frames (better than a time-series timeline). We started work on it and I was hoping to include it in this post. Maybe the next one.
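A minimal sketch of that frame-aligned binning, assuming you have sorted sample timestamps and per-frame start timestamps (all names here are hypothetical):

    import bisect

    def frame_heatmap(sample_ts, frame_starts, rows=50):
        # Bin samples into (frame column, within-frame row) cells, so each
        # column is one game frame rather than one fixed second.
        counts = [[0] * rows for _ in frame_starts]
        for t in sample_ts:
            f = bisect.bisect_right(frame_starts, t) - 1
            if f < 0 or f + 1 >= len(frame_starts):
                continue  # before the first frame, or in the open-ended last one
            duration = frame_starts[f + 1] - frame_starts[f]
            offset = (t - frame_starts[f]) / duration  # 0..1 within the frame
            counts[f][min(int(offset * rows), rows - 1)] += 1
        return counts

Variable-length frames just stretch to fit their column, which is exactly what makes time-offset patterns line up across frames.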
I’ve never actually seen a profiler that shows quite what I want. I have lots of subsystems running at different rates. Gameplay at 30Hz, visual render at 90Hz, physics at 200Hz, audio at some rate, network, some device, etc.
So what I want is the ability to view each subsystem in a manner that lets me see when it didn’t hit its update rate. I have many many different frame rates I care about hitting.
Of course things get even more complex when you have all the work broadly distributed across a job system…
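The deadline-miss report I’m imagining is simple enough to sketch, assuming each subsystem logs a timestamp per tick (the subsystem names, rates, and 5% jitter allowance here are all made up):

    TARGET_HZ = {"gameplay": 30, "render": 90, "physics": 200}

    def missed_ticks(tick_ts, hz, slack=1.05):
        # A tick is "missed" if the gap since the previous tick exceeded
        # the period budget (with a little slack for scheduler jitter).
        budget = slack / hz
        gaps = [b - a for a, b in zip(tick_ts, tick_ts[1:])]
        return [(i + 1, g) for i, g in enumerate(gaps) if g > budget]

    def report(ticks_by_subsystem):
        for name, ts in ticks_by_subsystem.items():
            misses = missed_ticks(ts, TARGET_HZ[name])
            print(f"{name}: {len(misses)} missed ticks out of {len(ts)}")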
The difference between a flame graph and a trace visualization is that a flame graph is an aggregate/summary visualization. It helps you visualize the total runtime attributed to each function.
It is like the difference between seeing the mean of a distribution and seeing a plot of every datapoint in the distribution. They are useful for different purposes.
An example of how you might use it in conjunction with a trace visualizer is that you would select a time span in a trace and generate a flame graph for the selection. This would show you which functions and call stacks were responsible for most of the execution time in the selection. You would then use that to find one of those call stacks in the trace and examine how it executes, to see if it makes sense.
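A minimal sketch of that aggregation step, assuming the trace gives you (timestamp, stack) sample pairs; the output is the folded-stack format that flamegraph.pl consumes:

    from collections import Counter

    def flamegraph_input(samples, t0, t1):
        # Collapse stack samples in [t0, t1) into folded-stack counts.
        # Each stack is a list of frames, outermost caller first.
        counts = Counter(
            ";".join(stack) for ts, stack in samples if t0 <= ts < t1
        )
        return [f"{stack} {n}" for stack, n in counts.most_common()]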
I could just regenerate these heat maps with 60 rows instead of 50. I'm limited by the sampling rate that was captured in the profile data file. To provide even more resolution (so you had many samples within a game frame) I'd need to re-profile the target with a higher frequency.
When Martin, my colleague at Netflix at the time, built a d3 version of FlameScope, he put a row selector in the UI: https://github.com/Netflix/flamescope
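To make the resolution limit concrete: assuming a 99 Hz capture (a common CPU sampling rate), a 50-row subsecond heat map only averages about two samples per 20ms cell, so adding rows without re-profiling just spreads the same samples thinner:

    def samples_per_cell(sample_hz, column_seconds=1.0, rows=50):
        # Expected number of samples landing in one heat-map cell.
        return sample_hz * column_seconds / rows

    print(samples_per_cell(99, rows=50))  # ~2.0 samples per 20ms cell
    print(samples_per_cell(99, rows=60))  # ~1.65: finer rows, sparser cells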
> Correct. Which means that every 20ms pixel slices two or three frames. Which is a really really bad way to profile!
If 20 ms is a reasonable frame time for a modern game, why is it an unreasonable thing to profile?
I understand other, shorter, frame times may be interesting to profile too. My point is that if you want to understand a reasonable or realistic workload, then it should also be reasonable to profile that workload.
The issue isn’t that 20ms is an unreasonable slice size. The issue is you can’t perform an arbitrary slice.
Imagine a game that runs at 50Hz, i.e. a 20ms frame. Unusual, but let’s go with it because the exact value doesn’t matter. Ideally each update takes AT MOST 20ms, otherwise we miss a frame. Which means most frames actually take maybe 15ms, to leave headroom, and some may take only 5ms. If you drew this on a timeline there would be obvious sleeps waiting for the next frame to kick off.
If you take an arbitrary sequence of 20ms slices you’re not going to capture individual frames. You’re going to straddle frames. Which is really bad and means each pixel is measuring a totally different body of work.
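You can make the straddling concrete with a trivial simulation; a sketch with made-up frame lengths (the same thing happens with fixed-length frames and bins starting at an arbitrary phase):

    def bins_straddled(frame_lengths_ms, bin_ms=20):
        # Count fixed-width bins that contain work from more than one frame.
        edges, t = [], 0.0
        for d in frame_lengths_ms:
            t += d
            edges.append(t)
        total = int(t // bin_ms)
        straddled = sum(
            1 for b in range(total)
            if any(b * bin_ms < e < (b + 1) * bin_ms for e in edges)
        )
        return straddled, total

    print(bins_straddled([15, 5, 18, 2, 16, 4] * 10))  # (30, 30): every bin mixes frames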
It sounds like your problem might be not with the visualization itself, but with the underlying idea of a sampling profiler as opposed to tracing every single call from every single frame.
No. Sampling profilers are great. Most powerful is of course a mix of sampling and instrumentation. But nothing beats the feeling of a sampling profiler fixing big issues in under 5 minutes.
Flamegraphs are a nice tool to have in the bag I suppose. But they’re more tertiary than primary or even secondary to me.
The game model might involve 20ms time slices. The frame rate is simply the best available visualisation of the "action" that the machine can manage.
So, you have your game model, input and output. Output needs to be good enough to convince you that you are in control and immersive enough to keep you engaged and input needs to be responsive enough to feel that you are in control. The model needs to keep track of and co-ordinate everything.
I'm old enough to still own a Commodore 64 and before that I played games and wrote some shit ones on ZX 80, 81 and Speccies. I typed in a lot of DATA statements back in the day (40 odd years ago)!
When you pare back a game to the bare basics - run it on a box with KB to deal with instead of GB - you quite quickly get to understand constraints.
Things are now way more complicated. You have to decide whether to use the CPU or the GPU for each task.
Flame graphs are definitely less sophisticated than Superluminal/Tracy/etc, but that's a part of the attraction - you can visualize the output of many profiling tools as a flame graph without prior setup. I also think it's a pretty good UX for the "which function is the performance bottleneck" game.