
2017 is here! We would like to wish you all a Happy New Year. We've been super busy this last year, helping other studios build scanning systems as well as starting to focus on video capture and finalizing our photometric scanning pipeline.

All in the continued effort to perfect virtual digital humans and, perhaps one day, analog versions as well.

This is a follow-on post from our previous research, Next Generation Photometric Scanning, which was inspired by the incredible research of Paul Debevec (and team) at USC ICT, and of Dr. Abhishek Dutta and William Smith, on the photometric scanning process.


Today’s inspiration:


Being self-funded, our research takes time; it's been a long journey so far. We are just about staying afloat, paying HMRC and our taxes, but damn is it expensive, even running a small indie studio. What we would give for research grants and funding…

IR has since upgraded the Idatronic DS1 capture rig from last year; the previous rig was too small, cumbersome and slow. The new rig has a larger scanning volume, is faster and can transmit much more data. It's also a perfect setup for continuous-lighting tests for video acquisition and 4D capture.

Over the last few months we have been able to spend time analyzing lots of data and perfecting the output process to be more precise, as well as streamlining the pipeline to get consistent, pore-level results across different expressions and timelines. Unfortunately that means scanning my head, again, and again, and again, ad infinitum.

It has been nearly 7 years since IR released the Infinite head scan, which has been downloaded thousands of times, used by many different research institutes and integrated into various rendering applications such as the Unreal Engine.

The 2010 Infinite head scan was captured using only 4 cameras from a Dimensional Imaging system, and was mostly based on artistic sculpting.

For the new tests and scan upgrade we now use over 60 DSLRs, with custom triggering and power components (from IDATronic), that can capture over 15 different lighting conditions in under 1.5 seconds. We primarily use USB 3.1, with three capture PCs set up on a 30GB network (cheers HP!).

A single scan weighs in at 5GB just for the raw .CR2 files. Everything is processed in a linear pipeline, as we discussed on the Triplegangers website recently.
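
For anyone unfamiliar with the term, "linear" here means working in linear light rather than display-referred sRGB before any processing or baking. As a rough illustration only (this is the standard sRGB transfer function, not our specific pipeline code):

```python
import numpy as np

def srgb_to_linear(srgb):
    """Convert display-referred sRGB values in [0, 1] to linear light
    using the standard IEC 61966-2-1 transfer function."""
    srgb = np.asarray(srgb, dtype=np.float64)
    return np.where(srgb <= 0.04045,
                    srgb / 12.92,
                    ((srgb + 0.055) / 1.055) ** 2.4)
```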

 

It’s about time for an overhaul.

Using these new photometric scanning methods, we are now able to acquire much more detailed surface information, as well as a ton of important multi-light reference imagery, which is essential for testing virtual skin shaders to fine-tune them and get a one-to-one visual match. What makes our system different from others is that we scan in 360 degrees, using no structured light in the process, and utilize our own custom software approach to steer the acquired normal-map data to produce high-fidelity displacement information.

 

(Combined displacement layers, colours exaggerated)

We can also acquire skin reflectance information, which is an incredibly important texture input for controlling skin specularity. We are able to generate displacement information from the normals we acquire, and as a bonus we get a cavity map for free.
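
To illustrate the "cavity map for free" point: one common approximation (not necessarily the exact method used here) derives cavity shading straight from the tangent-space normal map by taking the divergence of its x/y components, darkening concave pores and creases and brightening convex ridges.

```python
import numpy as np

def cavity_from_normals(normal_map, strength=2.0):
    """Approximate a cavity map from a tangent-space normal map
    (H x W x 3, components in [-1, 1]). 'strength' is an illustrative
    tuning parameter, not a value from our pipeline."""
    nx, ny = normal_map[..., 0], normal_map[..., 1]
    dnx_dx = np.gradient(nx, axis=1)   # d(nx)/dx
    dny_dy = np.gradient(ny, axis=0)   # d(ny)/dy
    curvature = dnx_dx + dny_dy        # ~ negative Laplacian of the height field
    # Remap around mid-grey so the result can be multiplied over the albedo
    return np.clip(0.5 + strength * curvature, 0.0, 1.0)
```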

Another important aspect is that we can capture cross-polarized data, without any specularity baked into the skin. We can see through the oily layer of the skin to the base layer, something that cannot be faked with level-adjustment tricks on a non-polarized image. It just does not produce the same quality.
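
To make the polarization point concrete: the cross-polarized capture contains only the sub-surface (diffuse) response, so subtracting it from a parallel- or un-polarized capture of the same instant isolates the specular layer, which is something a levels adjustment on a single image cannot recover. A minimal sketch, assuming aligned, linear and equally exposed images:

```python
import numpy as np

def separate_specular(parallel_img, cross_img):
    """Split skin response into diffuse and specular layers from a polarized
    capture pair. Assumes both images are linear, aligned and exposure-matched;
    in practice a brightness/white-balance match between captures comes first.
    cross_img    : cross-polarized capture (specular reflection blocked)
    parallel_img : parallel- or un-polarized capture (specular included)"""
    diffuse = cross_img
    specular = np.clip(parallel_img - cross_img, 0.0, None)  # oily-layer highlights
    return diffuse, specular
```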

Using Agisoft PhotoScan's new mosaic texturing algorithm we can now achieve much higher-fidelity texture mapping, with no hard edges or seams. We first build the photogrammetry scan as usual, clean the raw scan, retopologize, create UVs, bake the lighting directions, generate the normals and displace. Our displacement information is also coupled to the scan, as it's captured at the same time as the base scan data, so there is no need to paint the displacement information by hand or stitch it in Mari. The displacement data can also be cross-baked across varying scans, as long as they share the same base topology.

To generate this displacement data we use our own software process to steer the normals to match the calibrated camera positions.
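
Our exact solver is our own, but for reference, a classic way to integrate a normal map into a displacement (height) field is frequency-domain integration (Frankot-Chellappa). A minimal sketch of that general technique, not our implementation:

```python
import numpy as np

def normals_to_displacement(normal_map):
    """Integrate a tangent-space normal map (H x W x 3, components in [-1, 1])
    into a zero-mean height/displacement field via Frankot-Chellappa."""
    nx, ny = normal_map[..., 0], normal_map[..., 1]
    nz = np.clip(normal_map[..., 2], 1e-6, None)   # avoid division by zero
    p = -nx / nz                                   # surface gradient dz/dx
    q = -ny / nz                                   # surface gradient dz/dy

    h, w = p.shape
    u, v = np.meshgrid(np.fft.fftfreq(w) * 2 * np.pi,
                       np.fft.fftfreq(h) * 2 * np.pi)

    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = u ** 2 + v ** 2
    denom[0, 0] = 1.0                              # dodge divide-by-zero at DC
    Z = (-1j * u * P - 1j * v * Q) / denom
    Z[0, 0] = 0.0                                  # zero-mean displacement
    return np.real(np.fft.ifft2(Z))
```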

We use very fast DSLRs (soon machine vision cameras) to capture additional multi-lighting reference data that we can also cross-bake onto the base scan, as it was captured at the same time. This is a bonus for VFX or game pipelines when trying to match lighting or make accurate skin comparisons under varying conditions. These act purely as a guide.

As this data is baked from each camera angle onto the base scan, most of the specularity is compounded together. The photo reference data will still contain the specularity and Fresnel glare, as seen earlier in this post, which is useful during the skin shader and rendering setup.

 

The next stage would be to take the data into Maya (Arnold) or Modo to set up your rendering pipeline… however, just before we got to this stage, Marmoset Toolbag 3.0 was released!

*As a side note, we plan to render this data in Maya with Arnold, as currently real-time shaders cannot replace the quality of Arnold's skin shader, specifically Anders Langlands' alShaders. We just need to find the time to re-learn Maya, as it's been a while! Perhaps others out there would like to try the data and do some test renders?

Toolbag 3.0 now has a bunch of UI upgrades and also integrates voxel GI, which has been the missing component for real-time rendering of characters for a while. It stops diffuse and specular light bleed from IBL, and creates some stunning, subtle bounced light in areas like the ears or eye sockets. It's also super fast, with instant feedback; no waiting minutes or hours for each frame to render. Toolbag 3 will even take a 30-million-polygon mesh.

Warning! If you step into Toolbag 3.0.. you will never leave.

After some requests here are some full size screen grabs from Toolbag 3.0:

That’s it for this blog post.

The plan is to share the upgraded Infinite V2 head scan, possibly with the Marmoset scene files (including ear fluff!), if there is enough interest. We also plan to start scanning many new models soon to share on Triplegangers, including their base scans with all the multi-lighting reference data required to match them in offline rendering or real time. VR? 🙂

In the next post we will share example 360-degree pano images taken in the rig, as well as discuss capturing and processing expressions with a multi-light setup.

As a side note, I started to lose track, when compiling these images, of what was real and what was not…

Infinite

6 Comments
  • BP

    Thanks so much for sharing your work and journey! You are an inspiration!

    January 4, 2017
    • Infinite

      Thank you!

      January 5, 2017
  • Tom

    Awesome work again infinite! Keep up the posts!

    January 4, 2017
    • Infinite

      Thanks Tom

      January 5, 2017
  • william

    Wow! Really nice work Infinite! I am currently studying photorealism for characters in real time and I just found your website, what a chance! :) It would be super nice if you could share the Marmoset scene (and also the Unreal project you did for the real-time skin shader). Also, I would like to know: I saw you put a map for the specular, but as far as I know, for skin, the specular does not really change, so it's better to just use a value around 0.35 (from the UE4 documentation). Am I wrong, or is there another reason? Again, awesome result! :)

    January 12, 2017
  • Visitor

    Sounds great! I am sure there is plenty of interest in the new data set. Keep up the good work!

    January 13, 2017