Looking at that project page, is there a reason you went for an interpreter rather than compiling directly to JVM byte code? It's obviously a little harder if you're targeting JVMs <v7, since you don't have invokedynamic to play with, but most of that work is common to an interpreter and a compiler anyway. On the project I'm working on we haven't even implemented an interpreter: the REPL simply compiles to class files, which are then loaded to evaluate them.
There are a few features of the R language that make direct translation into byte code daunting for a muggle like myself:
1. Computing-on-the-language: R code expects to be able to access and modify its own AST and evaluation frame, and those of its caller and other closures.
2. Impure call-by-need argument-passing semantics.
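To make (2) concrete, here is a minimal Java sketch of a memoising promise; the class name and shape are my own illustration, not the project's actual representation:

```java
import java.util.function.Supplier;

// A call-by-need promise: the expression runs at most once, on first force,
// and the result is cached thereafter. Because R's thunks can have side
// effects, the *order* in which promises are forced is observable, which is
// what makes naive strictness optimisations unsafe.
final class Promise<T> {
    private Supplier<T> thunk;  // null once forced
    private T value;

    Promise(Supplier<T> thunk) { this.thunk = thunk; }

    T force() {
        if (thunk != null) {        // not yet evaluated
            value = thunk.get();    // run the (possibly impure) expression
            thunk = null;           // memoise: drop the thunk, keep the value
        }
        return value;
    }
}
```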
The compiler that's in the trunk is experimental but evolving fast. I think the next step will probably be to start compiling simple but performance-critical basic blocks to byte code at runtime, and then slowly expand the scope of the language that can be handled from there... (Expert advice welcome!!)
Ah right, those are going to make things fun. :-) I think in your position I'd write a compiler, but keep the AST associated with the byte code and back off to an interpreter if the AST is modified, maybe recompile after enough calls without further change. The call-by-need arguments don't sound too bad, but could take some thought on the memoisation strategy, I think I'd do it at the caller and pass those structures through, but I'd want to think about it.
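The invalidate-then-recompile policy above can be sketched in a few lines; the class, method names, and threshold here are hypothetical, not from either project:

```java
// Sketch: a closure starts out compiled; computing-on-the-language mutations
// invalidate the byte code and fall back to the interpreter, and after enough
// calls without further change we recompile.
final class HotClosure {
    static final int THRESHOLD = 3;   // assumed tuning knob, not a real value
    private boolean compiled = true;  // start on the compiled path
    private int stableCalls = 0;

    // Called when the closure's AST is mutated: discard the byte code.
    void astModified() {
        compiled = false;
        stableCalls = 0;
    }

    // Returns which path served this call; recompiles once the AST has been
    // stable for THRESHOLD interpreted calls.
    String call() {
        if (compiled) return "compiled";
        if (++stableCalls >= THRESHOLD) compiled = true;
        return "interpreted";
    }
}
```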
I'd hardly count myself as an expert, but I think the best win we've had is in thinking carefully about callsite caching strategies and having a eureka moment about just how insanely powerful MethodHandles.exactInvoker can be.
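For anyone who hasn't met it: MethodHandles.exactInvoker hands back a handle that invokes any other handle of a given exact type, so a single invoker can serve every callsite of that shape. A minimal demonstration (class and method names are mine):

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class InvokerDemo {
    static int addOne(int x) { return x + 1; }
    static int timesTwo(int x) { return x * 2; }

    // One exactInvoker per call shape: its type is (MethodHandle, int) -> int,
    // and it simply invokeExact()s whatever target handle it is handed. A
    // callsite cache can therefore swap targets without generating new code.
    static int callThrough(MethodHandle target, int x) throws Throwable {
        MethodHandle invoker =
            MethodHandles.exactInvoker(MethodType.methodType(int.class, int.class));
        return (int) invoker.invokeExact(target, x);
    }

    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        MethodType shape = MethodType.methodType(int.class, int.class);
        MethodHandle f = lookup.findStatic(InvokerDemo.class, "addOne", shape);
        MethodHandle g = lookup.findStatic(InvokerDemo.class, "timesTwo", shape);
        System.out.println(callThrough(f, 20)); // 21
        System.out.println(callThrough(g, 20)); // 40
    }
}
```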