A small repo with the files would be useful for other people to run the script.
I made one here https://github.com/AntouanK/node-modules-benchmark
(I cut down the huge number of iterations; I didn't want to wait more than a minute :D)
A way to collect and share the results would also help you compare differences between machines and platforms.
Am I incorrect in my usual use of "x" when conveying relative performance? 120.8498 vs 120.4320 is reported as "1.00 x slower". Is that the right terminology for results that are on par with each other? These results are (relatively) unchanged. Shouldn't it be 0.003x slower, with the formula being (Old / New) - 1?
From the page:
```
Result Node.js v0.12.0
Called v2, util.yo("man") 100.000.000.000 times in 120.849870219 sec (1.00 x slower than v3)
Called v3, util.yo("man") 100.000.000.000 times in 120.432087601 sec
```
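A minimal sketch of the two conventions being discussed, using the timings from the results above. The variable names are mine, not from the benchmark script:

```javascript
// Timings (seconds) taken from the results quoted above.
const oldTime = 120.849870219; // v2
const newTime = 120.432087601; // v3

// "N x slower" as a plain ratio of the two timings:
const ratio = oldTime / newTime;
console.log(ratio.toFixed(4)); // rounds to ~1.0035

// Relative slowdown via the suggested (Old / New) - 1 formula:
const relative = oldTime / newTime - 1;
console.log(relative.toFixed(4)); // rounds to ~0.0035, i.e. "0.003x slower"
```

So both comments agree on the arithmetic; the disagreement is only about whether the ratio (1.00x) or the ratio minus one (0.003x) is the right thing to print.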
Calling prototype functions directly from the constructor function, instead of on the objects, doesn't make much sense, so I'm not sure how to interpret the results.
I started using Node for some things where PHP feels inadequate (async network requests), and overall I'm happy with the speed, but after years of PHP it's been rough figuring out the right way to do objects. I don't handle nearly this many calls, but I find it useful to internalize micro-optimizations like this to make my code a speed demon.
Ohhh, I was confused because I know Node caches module objects, but I see the code in the V1 case is creating a new function each time it's used. Seems like it's not really a module issue per se though, just a general JS best practice.
Yes, version 1 uses closures, which means a new function object is created every time the function is called. I've seen this pattern in many places and have even used it myself :)
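To make the difference concrete, here's a sketch of the two patterns, assuming illustrative names (this is not the actual code from the repo):

```javascript
// v1-style: a factory that allocates a fresh closure on every call.
function makeGreeterV1(name) {
  return {
    yo: function () {       // new function object created per instance
      return 'yo ' + name;
    }
  };
}

// v3-style: one shared method on the prototype, created once.
function GreeterV3(name) {
  this.name = name;
}
GreeterV3.prototype.yo = function () {
  return 'yo ' + this.name;
};

const a = makeGreeterV1('man');
const b = makeGreeterV1('man');
console.log(a.yo === b.yo); // false: two distinct function objects

const c = new GreeterV3('man');
const d = new GreeterV3('man');
console.log(c.yo === d.yo); // true: both share the prototype method
```

Node's module cache doesn't help here: caching keeps you from re-evaluating the module file, but it can't stop a factory function inside that module from allocating new closures each time it runs.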