Over the last two years we’ve been busy drilling down on our new capture technique for scanning faces and busts, in the hope of improving on the standard photogrammetry scanning process. It was inspired by the incredible research of Paul Debevec at USC ICT, and by Dr. Abhishek Dutta and William Smith’s work on the photometric scanning process.
Our solution adds a few extra features to the process. We use our own custom software application to generate normal maps, displacement data, separated specular data and multi-light reference information. We are also able to capture the photometric normals in 360 degrees in a single session, with no rotational stitching required.
This is combined with ida-tronic’s incredible DS1 lighting system.
IR’s scanning system can capture high-resolution reference data for use in game and visual effects pipelines. The data produced is perfect for validating real-time and offline rendering, by comparing results in the viewport directly against measured scan data. We can capture synchronized RAW (.exr) multi-light data, non-polarized and cross-polarized, from up to 50 angles of a subject very quickly.
We are able to reliably synchronize over 50 DSLRs capturing RAW data across multiple PCs using USB 3.1.
The system uses a custom-built mixed spherical gradient illumination lighting solution as well as separate flash heads. We can capture all the necessary lighting directions, cross-polarized and non-polarized, as well as hot flash shots for as many directions as required.
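To make the spherical gradient illumination idea concrete, here is a minimal numpy sketch of how the per-LED drive levels for one gradient pattern can be derived. The function name and LED layout are illustrative assumptions, not the DS1’s actual control code; the pattern itself follows the standard formulation, where each LED’s brightness ramps linearly with its position along the chosen axis.

```python
import numpy as np

def gradient_pattern_intensities(led_directions, axis):
    """Per-LED intensities for one spherical-gradient illumination
    pattern: an LED at unit direction L is driven at (1 + L.axis) / 2,
    producing a linear ramp of light across the sphere along `axis`.
    (The constant pattern is simply all LEDs at full power.)"""
    led_directions = np.asarray(led_directions, dtype=float)
    axis = np.asarray(axis, dtype=float)
    return 0.5 * (1.0 + led_directions @ axis)

# Two LEDs on opposite sides of the sphere, gradient along x:
# the +x LED is fully on, the -x LED is fully off.
print(gradient_pattern_intensities([[1, 0, 0], [-1, 0, 0]], [1, 0, 0]))  # [1. 0.]
```

Capturing one such gradient per axis, plus a fully lit shot, gives the inputs needed for the normal computation described later in the post.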
Cross and non-polarized data
Hot flash shots are purely for visual EXR reference during the character creation pipeline, whether that’s in Maya, Max, V-Ray, Arnold or in real-time applications like Unity or UE4. Being able to capture lots of skin reference data during the scanning process (not before or after) is essential for a 1:1 visual match when creating a digital double. It also allows look-dev artists to analyze how skin reacts to light under varying conditions. Perfect for PBR shader testing.
This takes standard photogrammetry and boosts it to the next level. Standard photogrammetry peaked around 2013 and hasn’t evolved much since; acquiring extra lighting information is an essential step forward. So much reference material!
The first capture set we acquire is cross-polarized, which lets us see the skin without any reflections or specular highlights, giving very rich, saturated colour and a detailed texture output. Perfect for the base scan. We also capture standard non-polarized shots and separate the two to generate a pure specular reference set of data, ideal for checking where the skin is more reflective or oily.
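Under the usual polarization-difference assumption, the specular separation step can be sketched in a few lines of numpy. The function name and array values below are illustrative, not the studio’s actual tooling; the idea is simply that the cross-polarized exposure rejects the first-surface reflection, so subtracting it from the aligned non-polarized exposure isolates the specular component.

```python
import numpy as np

def separate_specular(non_polarized, cross_polarized):
    """Approximate a specular-only reference as the difference between
    a non-polarized and a cross-polarized exposure of the same view.
    Both inputs are float arrays in linear light, aligned per-pixel."""
    # Cross polarization blocks the first-surface (specular) reflection,
    # so what remains after subtraction is that reflection alone.
    return np.clip(non_polarized - cross_polarized, 0.0, None)

# Toy example: a "shiny" pixel keeps energy after subtraction,
# a purely diffuse pixel drops to zero.
full  = np.array([[0.9, 0.4]])
cross = np.array([[0.5, 0.4]])
spec = separate_specular(full, cross)
```

In practice this only works if the two exposures are taken in linear light and with the subject perfectly still between shots, which is exactly why capturing everything in one fast synchronized session matters.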
Example chosen texture projection directions (12-16 recommended, the rest are redundant)
5DSR detail shots
Using cross-polarized data removes the option of relying on highpass embossing to transfer colour detail onto a scan. If really necessary, that is a process that should be done externally by an artist, not during the scan pre-process pipeline. A highpass effect also simply cannot work on cross-polarized data, because it contains none of the upper-layer specular highlights that highpass embossing accentuates and requires. This applies to all reflective surfaces, not just skin.
Highpass embossing, whilst useful, is a hack and introduces inaccurate bump information: dark features like eyebrows or moles cause incorrect negative bump, and any specular information on the skin creates incorrect positive bump artifacts. You can of course hide this artistically with sculpting and by negatively morphing the details in ZBrush using layers.
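The failure mode described above is easy to demonstrate. The sketch below (a hypothetical 1-D version of the hack, not any production tool) embosses bump as luminance minus a local average: a dark mole on a flat surface produces negative bump and a specular highlight produces positive bump, even though the real geometry is flat.

```python
import numpy as np

def highpass_emboss(luminance, kernel=5):
    """Naive highpass 'bump' used by the embossing hack: luminance
    minus a local box-blurred average. The output tracks brightness,
    not geometry, which is exactly the problem."""
    pad = kernel // 2
    padded = np.pad(luminance, pad, mode='edge')
    blurred = np.convolve(padded, np.ones(kernel) / kernel, mode='valid')
    return luminance - blurred

# A flat scan line with a dark mole (0.2) and a specular highlight (1.0):
scan_line = np.array([0.5, 0.5, 0.2, 0.5, 1.0, 0.5, 0.5])
bump = highpass_emboss(scan_line)
# The mole dips negative and the highlight spikes positive even though
# the true surface is perfectly flat.
```

Photometric normals avoid this entirely, because they measure the surface orientation directly instead of inferring it from brightness.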
Similar to the innovative work by texturing-xyz using photometric capture, we can go a step further and compute the surface normals of the skin more accurately, using calculations processed from the different lighting directions. The difference here is that we do this using the same data that was used for the scan, giving a 1:1 match of texture to scan information. We don’t need to wrap the data by hand in Mari or Mudbox, as it’s just a texture baking process done in our Baker tool.
This gives users more detailed information about the topmost surface layer of skin. We’re able to process specular-separated normals from both cross-polarized and non-polarized gradient illumination shots, but found that processing just the non-polarized gradient captures reduces capture time (less subject movement) and produces data that is just as accurate.
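The core of the normal computation can be sketched as follows, after the spherical gradient illumination method of Ma et al. (2007). This is a minimal illustrative version, not the code inside Norman: it assumes one shot per gradient axis plus a fully lit shot, all in linear light and pixel-aligned.

```python
import numpy as np

def gradient_illumination_normals(gx, gy, gz, full):
    """Per-pixel surface normals from spherical gradient illumination.
    gx/gy/gz are shots lit by linear gradients along each axis, `full`
    is the constant fully-lit shot; all float arrays of shape (H, W).
    Returns unit-length normals of shape (H, W, 3)."""
    eps = 1e-6
    # The ratio of each gradient image to the fully-lit image recovers
    # the normal component along that axis, remapped from [0, 1].
    n = np.stack([gx, gy, gz], axis=-1) / np.maximum(full, eps)[..., None] - 0.5
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.maximum(norm, eps)

# A single pixel whose surface faces straight toward +z:
gx, gy, gz = np.array([[0.5]]), np.array([[0.5]]), np.array([[1.0]])
full = np.array([[1.0]])
n = gradient_illumination_normals(gx, gy, gz, full)  # ~[[[0. 0. 1.]]]
```

Because each pixel is solved independently from its own lighting ratios, the result lines up 1:1 with the photography used for the scan, which is what makes the texture baking step a straight projection rather than a hand-wrapping job.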
Early Iteration of Norman in Action (IR’s custom photometric normals application)
Photometric output: object-space, hacked tangent-space, and z-depth-only height
Surface normals are captured and calculated. We can then produce a height map and use it to displace the scan geometry. Notice the more accurate bump information recovered from the normals.
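One standard way to turn a normal map into a height map for displacement is frequency-domain integration (Frankot-Chellappa). The sketch below is an illustrative stand-in for this step, not the studio’s own implementation; it converts normals to gradients and solves for the least-squares integrable surface in Fourier space.

```python
import numpy as np

def height_from_normals(normals):
    """Integrate a unit-length normal map (H, W, 3), z toward the
    camera, into a relative height field via the Frankot-Chellappa
    frequency-domain method."""
    nz = np.maximum(normals[..., 2], 1e-6)
    p = -normals[..., 0] / nz                # dz/dx from the normal
    q = -normals[..., 1] / nz                # dz/dy from the normal
    h, w = p.shape
    u, v = np.meshgrid(np.fft.fftfreq(w) * 2 * np.pi,
                       np.fft.fftfreq(h) * 2 * np.pi)
    denom = u**2 + v**2
    denom[0, 0] = 1.0                        # avoid divide-by-zero at DC
    # Least-squares surface whose gradients best match (p, q).
    Z = (-1j * u * np.fft.fft2(p) - 1j * v * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                            # absolute height is unconstrained
    return np.real(np.fft.ifft2(Z))

# Sanity check: normals that all face the camera integrate to a flat surface.
flat = np.zeros((8, 8, 3))
flat[..., 2] = 1.0
height = height_from_normals(flat)
```

The recovered height is relative (the mean is arbitrary), which is fine for displacement since only the local detail is embossed onto the base scan.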
We then use a second application, Steerer, to steer the normals. We can use these different z-depth directions to produce a 360-degree height map to emboss onto the scan in place of a highpass map.
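One plausible way to combine per-direction z-depth heights into a single 360-degree map is to weight each directional height map per-pixel by how directly the surface faces that steering direction, so every region of the head is covered by the directions that see it best. The sketch below is an assumed, simplified stand-in for this blending step, not Steerer itself.

```python
import numpy as np

def blend_directional_heights(heights, directions, normals):
    """Blend per-direction height maps (list of (H, W) arrays) into one
    map, weighting each by the per-pixel facing ratio between the
    surface normal and that steering direction (clamped at zero so
    back-facing directions contribute nothing)."""
    weights = []
    for d in directions:
        w = np.tensordot(normals, np.asarray(d, dtype=float), axes=([-1], [0]))
        weights.append(np.clip(w, 0.0, None))
    weights = np.stack(weights)                      # (n_dirs, H, W)
    total = np.maximum(weights.sum(axis=0), 1e-6)
    return (np.stack(heights) * weights).sum(axis=0) / total

# Surface facing +z: only the +z-steered height map contributes.
normals = np.zeros((4, 4, 3))
normals[..., 2] = 1.0
blended = blend_directional_heights(
    [np.ones((4, 4)), np.zeros((4, 4))],             # heights for +z, +x
    [[0, 0, 1], [1, 0, 0]],                          # steering directions
    normals)
```

The appeal of this kind of blend is that the seams between directions fade smoothly, since weights fall off continuously as the surface turns away from each direction.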
Z-only height and 360-degree generated height, plus highpass compared to true displacement
We can also use the same process to produce custom micro-detail normals.
Testing optical flow if clients use slower DSLR’s
Steerer In Action (purely for debugging)
That wraps up this blog post. I hope some of the research here offers insight and inspires others. We’ve been working on solving this problem since 2009, and it’s been a complex process to direct and organize without a formal academic background or access to research grants. Just this last phase of R&D took many hundreds of hours to figure out the syncing pipeline between the cameras and the DS1. The software processing has been just as complex, because we had to develop our own custom solution, but all the pieces are in place now.
We can process and deliver easy-to-manage data sets that any studio can integrate into their pipeline.
After working with multi-light data we really can’t look back, and we plan to keep improving this capture solution going forwards: bigger, better, faster, stronger.
I hope to post more soon about the final results of this data when used in Maya and Arnold, and in UE4 and Unity for VR rendering.