"Oboe.js is marginally slower for messages that load very quickly but for most real-world cases reacting to i/o sooner beats fussing about CPU usage."
How true is this? Most AJAX messages are probably fairly small (a few KB?), so the time between the first and last byte is likely to be short. But if you're receiving many messages, could the CPU slowdown, especially on mobile, be a lot worse?
For small messages it will be slightly slower in CPU time because the parsing happens in JavaScript rather than in the browser's native JSON parser. But for small messages the time taken with a modern JS engine should be less than one monitor refresh.
For small messages on fast networks there is very little time to be saved, so for this worst case the aim is simply no significant performance impact rather than an improvement. The biggest wins come where the messages are very large, the network is slow, or the server can write out a stream.
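The difference can be sketched in plain JavaScript without Oboe itself (the chunk contents and function names below are made up for illustration): a whole-body approach cannot produce anything until the last byte arrives, while a streaming approach reacts to each chunk as it comes in.

```javascript
// Simulated network chunks of a JSON array arriving over a slow connection.
const chunks = ['[{"id":1}', ',{"id":2}', ',{"id":3}]'];

// Whole-body approach: nothing is usable until every chunk has arrived.
function wholeBody(chunks) {
  const body = chunks.join('');
  return JSON.parse(body); // first result available only now
}

// Streaming approach: emit each complete object as soon as its closing
// brace arrives, without waiting for the end of the array.
function* streaming(chunks) {
  let buffer = '';
  for (const chunk of chunks) {
    buffer += chunk;
    let match;
    // Naive record splitter -- a real streaming parser (Oboe/Clarinet)
    // tokenises properly; this only handles flat objects.
    while ((match = buffer.match(/\{[^{}]*\}/))) {
      yield JSON.parse(match[0]);
      buffer = buffer.slice(match.index + match[0].length);
    }
  }
}

const seen = [...streaming(chunks)].map(o => o.id);
console.log(seen); // [ 1, 2, 3 ] -- each id was available as its chunk arrived
```

On a fast network the two finish at almost the same moment; on a slow one the streaming version has already handed the caller the first records long before `wholeBody` can even start parsing.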
The XHR2 spec specifies "While the download is progressing, queue a task to fire a progress event named progress about every 50ms or for every byte received, whichever is least frequent"
Because of "whichever is least frequent", for messages that take less than 50ms from first byte to last, Oboe.js gets only one callback from the browser and notifies all of its callbacks from inside the same frame of JavaScript execution. This is exactly what we want, since it avoids the browser rendering between callbacks, which would be very bad for overall download time.
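The spec's "whichever is least frequent" rule can be sketched as follows (the function name and the byte-arrival timestamps are made up; real browsers schedule this internally):

```javascript
// Sketch of the XHR2 progress-event throttling quoted above: fire a
// progress event about every 50ms OR for every byte received,
// whichever is LESS frequent. Input is millisecond timestamps at
// which bytes arrived; output is when progress events would fire.
function progressEventTimes(byteArrivalsMs, intervalMs = 50) {
  const events = [];
  let lastFired = -Infinity;
  for (const t of byteArrivalsMs) {
    // A byte alone is not enough; at least intervalMs must also have
    // passed since the last event (the less frequent of the two).
    if (t - lastFired >= intervalMs) {
      events.push(t);
      lastFired = t;
    }
  }
  return events;
}

// A message whose bytes all arrive within 50ms yields a single event,
// so Oboe fires all its callbacks in one frame of execution:
console.log(progressEventTimes([0, 10, 20, 45])); // [ 0 ]

// A slow stream yields roughly one event per 50ms window:
console.log(progressEventTimes([0, 30, 60, 120])); // [ 0, 60, 120 ]
```

The first case is the small-message scenario from the question: one browser callback, no rendering in between, so no extra cost from being notified "early".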
I'm in two minds about whether Oboe should be dropped in to replace all JSON AJAX calls, on the grounds that everything will be accessed over a slow network one day, or whether it should be used only for very large, streamed REST responses.