One RCU gives you one strongly consistent read of up to 4 KB _per second_. So 8,000 reads per second require 8,000 RCUs, which at list pricing comes to about $1 per hour. And that's assuming the reads actually need to be strongly consistent (eventually consistent reads cost half as much) and no discounts are applied.
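For concreteness, the arithmetic can be sketched like this (the per-RCU-hour price is an assumed round figure chosen to match the $1/hour claim, not a quoted rate — check current pricing):

```javascript
// Back-of-the-envelope RCU cost check. Price is in micro-dollars to keep
// the arithmetic exact; $0.000125/RCU-hour is an assumption, not a quote.
const priceMicroUsdPerRcuHour = 125; // i.e. $0.000125 (assumed)

// 1 RCU = one strongly consistent 4 KB read per second,
// so 8000 reads/sec need 8000 RCUs.
const strongRcus = 8000;
const strongCostPerHour = (strongRcus * priceMicroUsdPerRcuHour) / 1e6; // $1.00

// An eventually consistent read consumes half an RCU,
// so the same load needs half the capacity.
const eventualRcus = strongRcus / 2;
const eventualCostPerHour = (eventualRcus * priceMicroUsdPerRcuHour) / 1e6; // $0.50
```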
My initial plan was to implement the Node features using Foundation.framework only. That could turn out to require more boilerplate code than I intended to write, and if it does, I'm now pretty sure I'd go with libuv.
I'm not quite sure about using libuv from the start, though. My considerations were that a) this isn't targeted at server applications, so very high performance isn't strictly necessary, and b) libuv is a cross-platform library while I'm targeting Darwin only, so it might be a bit of overkill.
Do you have experience with libuv? If so, do you have experience regarding footprint / conciseness? Can you confirm/deny my presumptions?
Even the non-server parts of Node are, at a low level, mostly implemented with libuv. And obviously, the APIs in libuv map very closely to Node's, so there's less of an impedance mismatch than you'd get with the Foundation framework.
I've written quite a bit in both Node and in libuv directly, as well as against the Foundation APIs (although less of the latter these days, because libuv's APIs are just really, really well done). IMO, you'll write less code, with far fewer errors, if you decide to build on top of libuv, and you won't lose anything that Foundation gives you (they're both wrappers around the underlying Unix APIs anyway...).
I think you've got a very good point there regarding the impedance mismatch between the native and JS APIs.
Really enjoying the uvbook; it sold me on the libuv API. After skimming through the source and being happy to find that libuv is indeed just a very thin wrapper around POSIX, I've integrated libuv into my project and will base all bindings on it.
I did some evaluation a while ago on bundling a JavaScript engine with mobile apps on iOS and concluded that JSC was far easier to compile and integrate. This is in fact what, e.g., Appcelerator Titanium currently does.
However, bundling an engine results in huge binaries, which is not great on mobile platforms.
What got me into hacking on the prototype was the release of JSC as a public system framework in iOS 7, which eliminates the need to bundle JSC separately.
So it's not really a platform preference; I'm just trying to work with what's currently there...
JavaScriptCore is a cross-platform JavaScript engine in its own right, used by default in WebKit browsers (Safari, Mobile Safari, Konqueror, etc.).
The new thing in iOS 7 is that this framework is exposed as a public API you can code against, so you don't have to spin up a heavyweight WebView just to execute JavaScript.
This is very rough and no I/O is supported yet. However, I think it serves well as a proof of concept that a lightweight Node-compatible interface can be built for mobile apps using JavaScript.
Typed arrays do deliver a lot of speedup, and in fact the above example is slightly faster in Chrome than in Firefox Nightly for me. However, if your algorithm crunches a lot of data in a low-level way, there is definitely even more performance to be had by using asm.js.
See my link in the comment below for benchmarks of a SHA-1 implementation that show a good speedup in Chrome from typed arrays, but an even greater one in OdinMonkey.