I'm a VFX guy and used to work at ILM (Lucasfilm). This title is incredibly misleading. Tech to "previsualize" objects "in camera" has been around for a long time and is in pretty wide use. Avatar used it very heavily, for example.
It's important to realize that what happens in post is a whole lot more than making a 3D object and placing it in the scene. We do a lot of integration work in 2D by hand to match things like color, edges, etc. There's a massive amount of simulation work on water, fire, etc. Sometimes people straight-up paint on film frames in things like Photoshop.
The big win with real-time visualization is the creative control for the director and director of photography, who are generally far removed from the final product, which can cause expensive second-guessing all around.
Lastly, it's a bit condescending to artists working on games to suggest that the painstaking work that goes into making/optimizing/QCing interactive content is something that can just happen on the fly. Sure, game engines and hardware are pretty great, but games themselves are more and more realistic because very specialized people are working very hard to make them that way.
I'm one of the engineers working on this, I even make a semi-appearance in the video. I'm not really going to say much about it because it's a little annoying that it leaked out, and by such a crappy recording too, but -
a) I think the title here is pretty good, better than the hyperbolic one the Inquirer used.
b) Your middle two paragraphs are spot on
c) Given the video, it's understandable people are focused on the "actor as a virtual character" previsualization part ('but Avatar did this two years ago!', 'our game engine does that!', yadda yadda). That's only a small part of it, and honestly one of the less interesting ones.
I don't see what you think is condescending though. There's not a lot of difference today in the skill-set of a CG artist and a games artist. Generally they're just working to different budgets. (I say that as someone who spent 15 years working on console games).
The painstaking days of game artists building models that use fewer than 100 verts and hand-painting 256x256 textures are gone. Now, both your CG artist and your game artist build super-high-resolution models, probably starting with something like ZBrush, then decimate down to whatever they need, with the high-res asset used to generate normal or displacement maps.
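To make that bake step concrete: the detail captured from the high-res sculpt gets stored per-texel as an encoded normal, which the engine then uses to light the decimated mesh. A toy sketch of the standard RGB encoding (not any particular tool's pipeline; `encode_normal` is just an illustrative name):

```python
import numpy as np

def encode_normal(n):
    # Remap a unit normal's components from [-1, 1] into [0, 255] RGB,
    # the usual normal-map texture encoding.
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return np.round((n * 0.5 + 0.5) * 255).astype(int)

# An undisturbed surface normal (0, 0, 1) encodes to the familiar
# "flat blue" (128, 128, 255) that covers most of a normal map.
```

At render time the engine reverses this remap per pixel and shades the low-poly mesh as if it still had the sculpted detail.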
Optimus Prime in the original Transformers movie used ~20x more polygons than a PS4/Xbone game would today for a similar character. That's pretty amazing when you consider that the GPUs in those machines are already handily outmatched by PCs, and that Nvidia/AMD bring further performance gains with every hardware cycle.
Hey, thanks for the reply! It's definitely really impressive tech, and I (like everyone else) am hugely excited about the possibilities.
My comment was definitely more about the media coverage than anything else. I've been a lighter/comper/pipeline dev, mostly in features, and have never worked in games, so I can't speak with any real authority on the subject. I really just wanted to convey that mainstream coverage of VFX (and games) tech seems to gloss over the human element.
Games and film vfx are two different industries with two similar but different goals:
- how high-quality an image can I produce in ~1/30s?
- how high-quality an image can I produce in several hours?
The interesting thing is how different the solutions often end up being based on these constraints. Also, the continual migration of techniques from film to games.
Including an accurate lighting model? Or do you think that there's a significant inflection point before that when we're still attempting to replicate a final render using rasterizers?
(My current bee-in-bonnet, besides integrating HMDs and mocap, is realtime-ish path tracing - but that's still a while away.)
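For anyone unfamiliar with why that's hard: the core of a path tracer is Monte Carlo integration of incoming light over the hemisphere above each surface point, and the cost is in the sample count. A toy sketch of that estimator for the simplest possible case (a uniformly lit diffuse surface, where the integral of cosθ over the hemisphere is exactly π; `estimate_cosine_integral` is just an illustrative name):

```python
import math
import random

def estimate_cosine_integral(samples=100_000, seed=1):
    # Monte Carlo estimate of the integral of cos(theta) over the
    # hemisphere (exact value: pi). With uniform hemisphere sampling,
    # pdf = 1 / (2*pi) and cos(theta) is uniformly distributed in [0, 1].
    rng = random.Random(seed)
    pdf = 1.0 / (2.0 * math.pi)
    total = 0.0
    for _ in range(samples):
        cos_theta = rng.random()   # cos(theta) of a uniformly sampled direction
        total += cos_theta / pdf   # importance-weight the sample
    return total / samples         # converges to pi as samples grow
```

A real path tracer does this recursively per bounce with actual BRDFs and visibility rays, and the noise only falls off as 1/√N, which is why realtime budgets force aggressive sample counts and denoising tricks.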