Yes, there's a little research on it out there, but the industry side seems even more apathetic about it.
It would be very interesting to hear what happened to these LLVM research directions. I guess the default answer is "nobody wanted to fund it after the publications got written"...
It might just be timing. 2004 was when everyone was switching from Java and C++ to Perl, PHP, Python, and Ruby. It's hard to make a case for a compiler optimization that might save 10x on a couple percent of workloads when everybody (a) is focused on the workloads it's not useful for and (b) is using languages that give up 30x performance from the start.
Things may be different now that Moore's Law no longer holds and many programs are cache-bound, but any current research also needs to take into account GPUs, distributed computing, and deep learning. There might be room for new research, but the research would likely be as much about auto-vectorization and minimizing trips between memory spaces as about local optimizations like AoS -> SoA conversion.
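To make the AoS -> SoA conversion concrete, here's a minimal C sketch (the struct names and field choices are hypothetical, just for illustration). With array-of-structs, a loop that reads only one field strides over the unused fields and wastes most of each cache line; with struct-of-arrays, the same loop is unit-stride, which is both cache-friendly and trivial for the compiler to auto-vectorize:

```c
#include <assert.h>
#include <stddef.h>

#define N 8

/* Array-of-Structs: each particle's fields are interleaved in memory. */
struct ParticleAoS { float x, y, z, mass; };

/* Struct-of-Arrays: each field is contiguous in its own array. */
struct ParticlesSoA { float x[N], y[N], z[N], mass[N]; };

/* Reads x with a stride of sizeof(struct ParticleAoS) = 16 bytes,
   so 3/4 of every cache line fetched is wasted. */
float sum_x_aos(const struct ParticleAoS *p, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += p[i].x;
    return s;
}

/* Reads x with a unit stride of 4 bytes: full cache-line utilization
   and a straightforward target for auto-vectorization. */
float sum_x_soa(const struct ParticlesSoA *p, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += p->x[i];
    return s;
}
```

The two loops compute the same result; only the memory layout differs, which is why the transformation is attractive as an automatic compiler optimization (and why it's hard: the compiler must prove no other code depends on the original layout).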