
Over the last two years we’ve been busy refining our new capture technique for scanning faces and busts, in the hope of improving on the standard photogrammetry scanning process. The work is inspired by the incredible photometric scanning research of Paul Debevec at USC ICT, and of Dr. Abhishek Dutta and William Smith.

Our solution adds a few extra features to the process. We use our own custom software to generate normal maps, displacement data, specular-separated maps and multi-light reference information. We can also capture photometric normals in 360 degrees in a single session, with no rotational stitching required.

Today’s inspiration:
This is combined with ida-tronic’s incredible DS1 lighting system.


IR’s scanning system captures high-resolution reference data for use in game and visual effects pipelines. The data is perfect for real-time and offline rendering, since results in the viewport can be compared directly to measured scan data. We can quickly capture synchronized RAW (.exr) multi-light data, both non-polarized and cross-polarized, from up to 50 angles around a subject.

We can reliably synchronize over 50 DSLRs capturing RAW data across multiple PCs over USB 3.1.


The system uses a custom-built mixed spherical gradient illumination lighting solution as well as separate flash heads. We can capture all the necessary lighting directions, cross- and non-polarized, as well as hot flash shots for as many directions as required.
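For readers curious how spherical gradient illumination yields normals: in the published formulation (Ma et al. 2007, from Debevec’s group; our own pipeline is custom, so treat this only as a sketch of the underlying idea), each axis-aligned intensity gradient encodes one component of the surface normal as a ratio against the fully lit image. The function name and inputs below are hypothetical, assuming linear-light float captures `ix`, `iy`, `iz` (gradient-lit along x, y, z) and `full` (all lights on):

```python
import numpy as np

def gradient_illumination_normals(ix, iy, iz, full):
    """Recover per-pixel surface normals from spherical gradient illumination.
    Under a linear intensity gradient along one axis, the ratio of the
    gradient-lit image to the fully lit image encodes that axis component
    of the normal, remapped from [0, 1] to [-1, 1]."""
    f = np.maximum(full, 1e-8)                # avoid division by zero
    n = np.stack([2.0 * ix / f - 1.0,
                  2.0 * iy / f - 1.0,
                  2.0 * iz / f - 1.0], axis=-1)
    # renormalize to unit length per pixel
    n /= np.maximum(np.linalg.norm(n, axis=-1, keepdims=True), 1e-8)
    return n
```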

Cross and non polarized data


Hot flash shots are purely for visual EXR reference during the character creation pipeline, whether that’s in Maya, Max, V-Ray or Arnold, or in real-time applications like Unity or UE4. Being able to capture lots of skin reference data during the scanning process (not before or after) is essential for a 1:1 visual match when creating a digital double. It also allows look-dev artists to analyze how skin reacts to light under varying conditions, which is perfect for PBR shader testing.

This takes standard photogrammetry and boosts it to the next level. Standard photogrammetry peaked in 2013 and hasn’t evolved much since; acquiring extra lighting information is an essential step forward. So much reference material!


The first capture set we acquire is cross-polarized, which lets us see the skin without any reflections or specular highlights, giving very rich, saturated colour and a detailed texture output: perfect for the base scan. We can also capture standard non-polarized shots and subtract the two to generate a pure specular reference set of data, ideal for checking where the skin is more reflective or oily.
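At its simplest, the specular separation described above is a per-pixel subtraction. A minimal sketch (the function name is ours, and real pipelines also need careful alignment, exposure matching and polarizer calibration), assuming both captures are linear-light float arrays:

```python
import numpy as np

def separate_specular(non_polarized, cross_polarized):
    """Cross-polarized capture blocks specular reflection, leaving diffuse
    only. Subtracting it from the non-polarized capture therefore leaves
    an (approximate) specular-only image. Inputs must be linear-light
    float arrays at matched exposure."""
    return np.clip(non_polarized - cross_polarized, 0.0, None)
```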


Example chosen texture projection directions (12-16 recommended, the rest are redundant)


5DSR detail shots

Using cross-polarized data removes the ability to rely on highpass-embossing the colour data onto a scan: the cross-polarized capture contains none of the upper-layer specular highlights that highpass embossing accentuates and requires. If really necessary, embossing is a process that should be done externally by an artist, not during the scan pre-processing pipeline. This applies to all reflective surfaces, not just skin.


Highpass embossing, whilst useful, is a hack that introduces bump inaccuracies: dark features like eyebrows or moles cause incorrect negative bump, and any specular information on the skin creates incorrect positive bump artifacts. You can of course hide this artistically with sculpting, or by negatively morphing the details in ZBrush using layers.
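To make the hack concrete, here is a minimal sketch of highpass embossing, using a box blur as the low-pass (a Gaussian is more typical; the function name is ours). A bright specular dot in the luminance survives the subtraction as spurious positive bump, which is exactly the artifact described above:

```python
import numpy as np

def highpass_emboss(lum, radius=2):
    """The 'highpass emboss' hack: low-pass the luminance, subtract, and
    treat the residual as bump. Bright specular highlights come out as
    spurious positive bump; dark moles/eyebrows as spurious negative bump."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)    # separable box kernel
    pad = np.pad(lum, radius, mode='edge')
    # blur rows, then columns ('valid' mode trims back to the input size)
    low = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, pad)
    low = np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, low)
    return lum - low
```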

Similar to the innovative photometric capture work by texturing-xyz, we go the extra step and compute the surface normals of the skin more accurately, using calculations processed from the different lighting directions. The difference here is that we do this using the same data that was used for the scan, giving a 1:1 match of texture to scan information. We don’t need to wrap the data by hand in Mari or Mudbox, as it’s just a texture-baking process done in our Baker tool.

This gives users more detailed information about the topmost surface layer of the skin. We’re able to process specular-separated normals from both the cross-polarized GI and non-polarized GI captures, but found that processing just the non-GI capture shots reduces capture time (less subject movement) and produces data that is just as accurate.
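The per-pixel normal computation from multiple known lighting directions can be sketched as classic Lambertian photometric stereo. This is a textbook simplification, not what our Norman application actually runs (which also handles specular separation and polarization), and the function name is ours:

```python
import numpy as np

def photometric_normals(images, light_dirs):
    """Classic Lambertian photometric stereo: each pixel's intensity under
    light L is albedo * dot(L, n). With k >= 3 known light directions we
    solve the per-pixel least-squares system for albedo * n.
    images: (k, h, w) grayscale captures; light_dirs: (k, 3) unit vectors."""
    k, h, w = images.shape
    I = images.reshape(k, -1)                               # (k, npix)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)      # (3, npix) = albedo * n
    albedo = np.linalg.norm(G, axis=0)
    n = G / np.maximum(albedo, 1e-8)                        # unit normals
    return n.T.reshape(h, w, 3), albedo.reshape(h, w)
```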

Early Iteration of Norman in Action (IR’s custom photometric normals application)

Photometric output: object-space normals, hacked tangent-space normals and z-depth-only height


Surface normals are captured and calculated. We can then produce a height map to use to displace the scan geometry. Notice the more accurate bump information recovered from the normals.
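One standard way to turn a normal map into a displaceable height map is FFT-based integration of the surface gradients (Frankot–Chellappa). Our Baker and Steerer tools are custom, so treat this as an illustrative sketch with a function name of our choosing:

```python
import numpy as np

def normals_to_height(normals):
    """Frankot-Chellappa integration: recover a height map (up to a
    constant offset) from a unit normal map of shape (h, w, 3)."""
    nx, ny = normals[..., 0], normals[..., 1]
    nz = np.maximum(normals[..., 2], 1e-8)
    p, q = -nx / nz, -ny / nz                 # surface gradients dz/dx, dz/dy
    h, w = p.shape
    wy = 2 * np.pi * np.fft.fftfreq(h).reshape(-1, 1)
    wx = 2 * np.pi * np.fft.fftfreq(w).reshape(1, -1)
    denom = wx**2 + wy**2
    denom[0, 0] = 1.0                         # avoid divide-by-zero at DC
    # least-squares surface whose spectral derivatives match P and Q
    Z = (-1j * wx * np.fft.fft2(p) - 1j * wy * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                             # mean height is unrecoverable
    return np.real(np.fft.ifft2(Z))
```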

We then use a second application, Steerer, to steer the normals. Using these different z-depth directions we can produce a 360-degree height map to emboss onto the scan in place of a highpass map.

Z-only height and 360-degree generated height, plus highpass compared to true displacement


We can also use the same process to produce custom micro-detail normals.


(further inspired by http://gl.ict.usc.edu/Research/Microgeometry/ & http://gl.ict.usc.edu/Research/SkinStretch/)

Testing optical flow for clients using slower DSLRs

Steerer In Action (purely for debugging)

Early R&D Displacement Tests

That wraps up this blog post. I hope some of the research here offers some insight and inspires others. We’ve been working on solving this problem since 2009, and it’s been a complex process to direct and organize without a proper academic background or access to research grants. This last phase of R&D alone took many hundreds of hours just to figure out the syncing pipeline between the cameras and the DS1. The software processing has been just as complex, because we had to develop our own custom solution, but all the pieces are in place now.

We can process and deliver easy-to-manage data sets that any studio can integrate into their pipeline.

After working with multi-light data we really cannot look back and plan to further improve this capture solution moving forwards to be bigger, better, faster, stronger.

I hope to post some more soon about the final results of this data when used in Maya and Arnold and also when used in UE4 and Unity for VR rendering.

Infinite

33 Comments
  • Milos Lukac

    Lee, congrats to new scanner and fantastic results that we can see here...

    March 29, 2016
    • Infinite

      Thanks. Interested to talk more about the highpass emboss claim with RC. Interested to see your thoughts on the images I've sent over.

      March 29, 2016
  • Brett Sinclair

    Brilliant

    March 29, 2016
    • Infinite

      Thanks Brett!

      March 29, 2016
  • Jeremy Celeste

    Great work Lee, looking forward to seeing more ! Congrats !

    March 29, 2016
    • Infinite

      Thanks Jeremy. Blown away with what you are up to, your work is a very high benchmark to reach.

      March 29, 2016
  • Thomas Mansencal

    Congratulations Lee! Exciting stuff, great to see you improving your rig.

    March 30, 2016
    • Infinite

      Thanks Thomas.

      March 30, 2016
  • Glen Harris

    Awesome to see new posts Lee !
    Inspiration ;)

    March 30, 2016
    • Infinite

      Inspiration indeed. It's what drives us, right ;) There was no 360-degree normal map solution on the market (no one had solved it) that could also capture multi-light! Someone had to step up. I hope that inspires :)

      March 30, 2016
  • Walter

    Glad you're back. Your work is really inspiring!

    March 30, 2016
    • Infinite

      Thanks.

      March 30, 2016
  • Daniel Hennies

    Would love to test some sample asset acquired by this technique! Looks very promising.
    cheers

    Daniel

    March 30, 2016
    • Infinite

      Thanks we will share some data results soon.

      March 30, 2016
  • Salim Ljabli

    Looks really great Lee, haven't seen you post much over at CGfeedback anymore, but glad to see you keep rocking it.
    keep it up .

    March 30, 2016
    • Infinite

      Thank you!

      March 30, 2016
  • Magnus

    Great stuff as always! Thank you so much for sharing things so freely.

    Do you get pretty much complete coverage (for the normal map) with those lighting directions or is there any occlusion?

    Best, Magnus.

    March 31, 2016
    • Infinite

      Yes complete coverage. Between 16-20 cameras is just enough for full texture coverage. Obviously you need more for the scan shot, which is what we do.

      March 31, 2016
  • Luc Bégin

    Really good Lee! Hope all is good ,long time :)

    April 7, 2016
    • Infinite

      Thanks Luc. I hope you are doing well!

      April 7, 2016
  • James Finnerty

    Crikey that capture detail is immense :o Excellent texture work

    April 15, 2016
  • POLevesque

    Well, this is pure awesomeness, thanks for sharing! Do we have any idea, scientifically, why the reflection of the skin comes out blue? When you recreate the model in a 3D software, would you add some blue in the refl/spec color?

    June 9, 2016
    • Infinite

      It could be due to the polarizing filters, but we've seen the same effect with other similar technology like ICT's research. Without the blue tint, the skin looks very dull and yellow.

      June 21, 2016
  • Derek

    Lee, I notice that the specular pics made by subtracting the non-polarized photos from cross-polarized ones have blue spec rather than white spec. Could you comment on why this is? Is it possible to get the expected white spec for a dielectric using cross pol techniques?

    June 13, 2016
    • Infinite

      Derek, as we've spoken by email and shared notes, I think we share similar ideas on this.

      June 21, 2016
  • Craig Gilchrist

    Incredible work. It's nice to see UK businesses smashing boundaries.

    June 19, 2016
    • Infinite

      Thanks!

      June 21, 2016
  • Ken Finlayson

    Awesome and inspiring work.

    July 11, 2016
  • tischbein 3

    Interesting read
    Have you ever thought about releasing those front/side views as modeling reference on your store? (Similar to what 3d.sk is offering... which I would like to use, although its subscription-based system always turns me off.)
    Might make a few bucks...

    July 18, 2016
    • Infinite

      Yes :) We have a VERY big update coming soon for triplegangers that you might find useful. We're also working on some new related tech for scanning.

      July 18, 2016
      • tischbein 3

        sounds great.

        July 19, 2016
  • Felix

    Thank you for sharing your inventions on this blog, it's always a great read!

    August 10, 2016
  • arnaud

    Hello, and thank you for sharing.
    Really nice utilisation of the polarizer filter. My 2 cents of ideas:
    grease is more sensitive to the light, so one stop down for those shots might be tested;
    then maybe place the two shots in difference mode to get the specs only, treat it as a mask, then adjust levels (Photoshop).
    Another idea: I don't know, but if grease contains more blue than the skin, then putting a filter on the non-polarized lens might bring the specs up a bit.
    Also, displacements make me think of what ultraviolet or infrared cameras deliver; something might happen in those colour waves.

    cheers,

    August 18, 2016