Another mistake here is ignoring the classic rule: "if you want to measure x, don't also measure y".
If the OP wanted to measure a function with or without `return`, even without realizing how futile that is, they still should not have included things like jQuery in the benchmark.
Well, before the GP post, my takeaway was that it's better to use `return` when using jQuery. The justification would have been that some of jQuery's magic, which is its whole purpose, was slowing things down.
And the web being as it is, testing with jQuery might even be more useful than testing without it.
That is a common fallacy and a benchmarking anti-pattern, and it really enrages me that some people look at the completely incorrect results and justify them with "I am going to be using this with jQuery, so why shouldn't I add some jQuery method calls?" And you are denying this even though the "fixed" jsPerf shows different results, e.g. on Chrome 29.
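To make the anti-pattern concrete, here is a minimal sketch of what an isolated benchmark looks like: the hot loop contains only the thing being compared (a function with and without an explicit `return`), with no jQuery or DOM calls to swamp the measurement. This assumes a Node.js environment; the function and variable names are illustrative, not from the original jsPerf test.

```javascript
// The two variants under test: identical work, with and without `return`.
function withReturn(x) { return x + 1; }
function withoutReturn(x) { x + 1; }

// Time a function in isolation: nothing else runs inside the loop,
// so the measurement reflects only the variant being compared.
function bench(fn, iterations) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn(i);
  const end = process.hrtime.bigint();
  return Number(end - start) / 1e6; // elapsed time in milliseconds
}

const N = 1e7;
console.log('with return:   ', bench(withReturn, N).toFixed(2), 'ms');
console.log('without return:', bench(withoutReturn, N).toFixed(2), 'ms');
```

Adding a `$(...)` call inside that loop would make each iteration dominated by jQuery's work, so any real difference between the two variants disappears into the noise, which is exactly the contamination being complained about.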
What I meant was that if I'm going to run a website with jQuery calls, I should absolutely test with jQuery calls. It makes no sense to claim that one should, say, always use short variable names if one then ships with Google Closure.