Introducing the IR & Inciprocal Digital Double Pipeline (Part 2)
This blog post is a follow-up to Part 1 here, about IR's new scanning pipeline.
We've been working closely with Inciprocal for the last few years, designing and specifying a capture workflow built around our AeonX: S0L scanning rig that fits within the Inciprocal software pipeline.
Our plan is to try to publish a blog post at least once a month!
Today’s inspiration:
[soundcloud url="https://api.soundcloud.com/tracks/1083639427" params="" width="100%" height="166" iframe="true" /]
The “Infinite” Head Scan (2010)
Back in 2010 IR released the Infinite head scan, created using much more basic scanning processes and a lot of hand sculpting. The scan was downloaded by thousands of artists, games companies and research institutes.
cgchannel.com/2010/09/infinite-realities-releases-free-photorealistic-head-model
github.com/keijiro/InfiniteScan
zbrushcentral.com/t/infinite-3d-head-scan-and-sculpt/295587
docs.unrealengine.com/4.27/en-US/RenderingAndGraphics/Materials/LightingModels/SubSurfaceProfile/
It was even at one point integrated into Unity and Unreal! We’ve since learned a great deal and our scanning system has had a complete overhaul. We’ve finally had a chance to document this process.
Our R&D Results for InfiniteV2
You can see an early example of our new R&D digital double research. It's still a WIP and we are learning as we go. We utilized IR's captured data combined with the Inciprocal full Disney PBR texture stack. We took these processed results and added some extra modifications, like extra sculpting and texture work. We then re-built our LED rig virtually and visualized the results inside Marmoset Toolbag 4's raytraced virtual environment.
The data captured with IR's AeonX: S0L scanning system is processed through the Inciprocal pipeline. Once completed, some further artistic enhancements are added, such as texture tweaks, eye models, teeth and hair details.
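To give a rough idea of what a "full Disney PBR texture stack" means in practice, here is a minimal sketch of the per-subject maps involved. The map names, resolutions and file names below are illustrative placeholders, not the exact Inciprocal output.

```python
# Illustrative only: a rough sketch of a per-subject Disney PBR texture stack.
# The exact map names, count and resolutions in the real pipeline may differ.
DISNEY_PBR_STACK = {
    "albedo":       "subject_albedo_8k.exr",        # diffuse colour, no baked lighting
    "specular":     "subject_specular_8k.exr",      # dielectric specular intensity
    "roughness":    "subject_roughness_8k.exr",     # micro-surface roughness
    "normal":       "subject_normal_8k.exr",        # tangent-space surface normals
    "displacement": "subject_displacement_8k.exr",  # pore-level height detail
}

for channel, filename in DISNEY_PBR_STACK.items():
    print(f"{channel:>12}: {filename}")
```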
For accurate validation we captured a mirror ball set, grey ball set and colour calibration chart inside our capture rig and replicated them digitally.
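As a toy illustration of what that validation could look like numerically, the snippet below compares average RGB values sampled from the photographed colour chart against the same patches in the render. The patch values and the tolerance are placeholders, not our measured data.

```python
# Toy validation check: compare photographed colour-chart patches against the
# matching rendered patches. All numbers below are placeholder values.
import numpy as np

captured = np.array([[0.40, 0.22, 0.18],   # patch 1 (e.g. dark skin tone)
                     [0.60, 0.48, 0.40]])  # patch 2 (e.g. light skin tone)
rendered = np.array([[0.41, 0.23, 0.17],
                     [0.58, 0.49, 0.41]])

per_patch_error = np.abs(captured - rendered).mean(axis=1)
print("mean absolute error per patch:", per_patch_error)
print("within tolerance:", bool((per_patch_error < 0.02).all()))
```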
Real-World & Digital Comparisons
Split images: digital double on the right, real-world subject on the left.
Raytraced Eye Caustics
The new version of Toolbag 4 now features accurate ray traced hard caustics. Instead of using the cornea height-bump method and specular trick, we built a more realistic eye model featuring a simulated liquid to mimic the aqueous humour inside the anterior chamber. This better simulates light transport inside the iris and creates some really interesting light scattering and refractive caustic effects.
This technique also gets rid of the nasty side effects of the cornea/iris bump shadow that you will see under bright direct sunlight with the height-bump technique. It also helps bounce light around inside the iris when your digital double is lit from a glancing angle. The only drawback is that you need a lot of samples to get clean results.
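For a sense of why the liquid volume matters, here is a minimal Snell's-law sketch of a light ray crossing the air/cornea and cornea/aqueous-humour interfaces. The refractive indices are standard textbook values, and the interfaces are simplified to flat planes, so this only illustrates the bending of light, not our actual eye model.

```python
import numpy as np

def refract(d, n, eta1, eta2):
    """Refract unit direction d across a surface with outward unit normal n,
    going from a medium of index eta1 into index eta2.
    Returns the refracted direction, or None on total internal reflection."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    eta = eta1 / eta2
    cos_i = -np.dot(n, d)
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# Standard refractive indices: air, cornea, aqueous humour.
N_AIR, N_CORNEA, N_AQUEOUS = 1.0, 1.376, 1.336

ray = np.array([0.3, 0.0, -1.0])     # incoming light direction
normal = np.array([0.0, 0.0, 1.0])   # outward surface normal (flat approximation)

in_cornea = refract(ray, normal, N_AIR, N_CORNEA)             # strong bend toward the normal
in_aqueous = refract(in_cornea, normal, N_CORNEA, N_AQUEOUS)  # slight bend at the second interface
print(in_cornea, in_aqueous)
```

Because the aqueous humour's index (about 1.336) is close to the cornea's (about 1.376), most of the bending happens at the air/cornea boundary, and in a curved, ray traced eye that focusing is what concentrates light into caustic patterns on the iris.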
Virtual Lighting
Once the scan, textures and materials have been validated we can then begin to render the subjects in other lighting conditions!
SteadyCam Renders
Detailed Virtual Camera Validation
We used CamtrackAR on an iPhone to capture some quick virtual camera tracking data from a 3D print of the subject, converted the track in Blender, then rendered in Toolbag 4!
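For anyone curious about the Blender step, here is a minimal sketch of exporting a tracked camera for use in another renderer. The object name and output path are placeholders, and this is only one way the conversion could be done, not necessarily our exact setup.

```python
# Minimal Blender (bpy) sketch: select the camera created from the AR track and
# export it as an FBX with baked animation for use elsewhere.
# "CamtrackAR_Camera" and the output path are hypothetical placeholders.
import bpy

cam = bpy.data.objects["CamtrackAR_Camera"]   # camera created from the imported track
bpy.ops.object.select_all(action='DESELECT')
cam.select_set(True)
bpy.context.view_layer.objects.active = cam

bpy.ops.export_scene.fbx(
    filepath="/tmp/tracked_camera.fbx",
    use_selection=True,          # export only the selected camera
    object_types={'CAMERA'},
    bake_anim=True,              # bake the per-frame camera animation
)
```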
Another detailed set of steadycam videos, this time in slow motion. The digital double still has further work to be done, but time was running out this month to get something published for another blog post!
Detailed Virtual Camera Validation (Slow-motion)
FACS Expressions
Our next step is to develop a new, optimized capture pipeline that lets us capture a full Disney PBR stack of expressions to process with Inciprocal. This will give us detailed wrinkle information at the pore level for the diffuse, specular and roughness properties of a subject. This is still a WIP.
We can then combine this data with our own motion scanned data.
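As a hypothetical sketch of how that expression data might be used downstream, per-expression wrinkle deltas could be blended over the neutral maps, weighted by FACS-style activations driven by the motion data. The map sizes, expression names and weights below are made up purely for illustration.

```python
# Hypothetical sketch: blend per-expression wrinkle deltas over a neutral map,
# weighted by the current activation of each FACS-style expression.
import numpy as np

H, W = 4, 4  # stand-in for a full-resolution texture
neutral_roughness = np.full((H, W), 0.55)

expression_deltas = {
    "brow_raiser":       np.random.uniform(-0.05, 0.05, (H, W)),
    "lip_corner_puller": np.random.uniform(-0.05, 0.05, (H, W)),
}
activations = {"brow_raiser": 0.8, "lip_corner_puller": 0.2}  # placeholder weights

blended = neutral_roughness.copy()
for name, delta in expression_deltas.items():
    blended += activations[name] * delta

blended = np.clip(blended, 0.0, 1.0)
print(blended)
```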
“But can you make it move?”
Our next objective is to bring life to IR's scans. We've already started running experiments using our motion scanning pipeline, which was the driving force behind the recent Unity Enemies demo and the ZIVA ZRT product launch. You can see an early example below. See if you can figure out what the dialogue is from!
We’re really excited about the potential of our new AeonX: S0L system and the Inciprocal pipeline. We’re looking forward to working with clients on new and exciting projects and pushing the boundaries of what’s possible with 3D scanning and facial capture technology.
What’s coming up in Part 3?
Look out for Part 3 of our Digital Double Pipeline.
Next time we'll be talking about scanning in "colour"…
Thanks for reading!