I think this is the part where it gets interesting. Once you have enough parallel general-purpose compute horsepower to run a physically-based renderer (e.g. Pixar's RenderMan) at frame rates beyond 30 frames per second, you enter a realm of arbitrarily complex scenes within real-time applications.
How far off are we from this possibility, assuming someone sat down and optimized existing renderers for this use case?
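One way to gauge the gap is a back-of-envelope ray budget. The numbers below are illustrative assumptions (resolution, samples per pixel, average path length), not measurements of any particular renderer:

```python
# Back-of-envelope: rays/second needed for real-time path tracing.
# All parameter values are illustrative assumptions.
width, height = 1920, 1080   # 1080p target resolution (assumed)
fps = 30                     # the real-time threshold from the comment
spp = 64                     # samples per pixel for a low-noise image (assumed)
avg_path_length = 4          # average ray bounces per sample path (assumed)

rays_per_frame = width * height * spp * avg_path_length
rays_per_second = rays_per_frame * fps
print(f"{rays_per_second / 1e9:.1f} billion rays/second")
# → 15.9 billion rays/second
```

Even with generous assumptions, the budget lands in the tens of billions of rays per second, which gives a rough sense of how much headroom hardware still needs before film-quality sample counts run in real time.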