I want to set a challenge for anyone in the CGI art and research field who might be interested in demonstrating how to deform a mesh using a Camera Space Normal Map. Is anyone out there up for it?

A $500 reward is up for grabs for the first person who can effectively demonstrate this process using the free data supplied below.

Using a Photometric Stereo process, I was able to generate Camera Space Normal Maps intended to deform a medium-resolution mesh captured with a DI3D system. The problem is that I don't have access to code that can apply them. I know such code exists, but nothing is commercially or publicly available.

I am now looking for help with using the Camera Space Normal Maps to actually deform the vertices of the medium-resolution mesh into a high-resolution mesh that includes pore and wrinkle detail, using the vectors stored in the normal map. Each texel's colour encodes a surface normal that the deformed mesh should follow.
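For anyone attempting this, decoding the map is the easy half. A minimal sketch in Python/NumPy (my choice of language, not part of the challenge data), assuming the usual 8-bit x=R, y=G, z=B encoding:

    import numpy as np

    def decode_normal_map(rgb8):
        # Map 8-bit channels from [0, 255] to [-1, 1]; x=R, y=G, z=B.
        n = rgb8.astype(np.float32) / 255.0 * 2.0 - 1.0
        # Re-normalize: filtering and compression leave texels slightly
        # off unit length.
        length = np.linalg.norm(n, axis=-1, keepdims=True)
        return n / np.maximum(length, 1e-8)

The hard half, recovering vertex positions that agree with those directions, is what the reward is for.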

Camera Space Normal Map Textures and OBJ files.



https://www.ir-ltd.net/uploads/Lee-Normals.rar (93 MB)

Is there anyone capable of, or interested in, this challenge?

11 Comments
  • MT

    Easy!

    Screenshots and models:
    https://vcg.isti.cnr.it/~tarini/files/normalmap2geometry_test/

    Additional info:
    1- refined the original mesh with MeshLab [*] (x4 faces, x4 verts)
    to have enough spatial resolution
    2- ad-hoc program in C++ -- uses VCG library [**]:
    it recovers the 3D shape so that it follows the given normals
    (<== that's the point), applies a bit of smoothing, reiterates,
    then exports (a sketch of this step follows the list)
    3- imported in MeshLab [*], simplified, exported
    (one quarter of the faces and verts -- back to the original poly
    count, but... adaptive simplification pays)

    [*] MeshLab: https://meshlab.sourceforge.net/
    [**] VCG library: https://vcg.sourceforge.net/
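    Since my C++/VCG program isn't posted, here is a minimal sketch of the kind of iteration step 2 performs (Python/NumPy; the function and parameter names are mine): slide each vertex along its target normal until the edges around it become perpendicular to that normal, blend in a bit of smoothing, and repeat.

        import numpy as np

        def deform_to_normals(verts, neighbors, target_normals,
                              iters=50, step=0.5, smooth=0.1):
            # verts: (V, 3) positions; neighbors: list of V index lists;
            # target_normals: (V, 3) unit normals sampled from the map.
            v = verts.copy()
            for _ in range(iters):
                new_v = v.copy()
                for i, nbrs in enumerate(neighbors):
                    if not nbrs:
                        continue
                    n = target_normals[i]
                    centroid = v[nbrs].mean(axis=0)
                    # Move along n so the one-ring centroid lies in the
                    # tangent plane defined by the target normal...
                    new_v[i] = v[i] + step * np.dot(n, centroid - v[i]) * n
                    # ...then apply a bit of smoothing (the smoothing pass).
                    new_v[i] = (1.0 - smooth) * new_v[i] + smooth * centroid
                v = new_v
            return v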

    May 20, 2011
    • Infinite

      MT, it certainly looks good but seems a little off. There are some odd artefacts present that aren't there in the NM. I would be interested in seeing the mesh directly to inspect it further.

      May 20, 2011
      • MT

        Sure, it is a little rushed, just a proof of concept.

        I think the main problem is that the normal map is modulated with an albedo map.
        I re-normalize the normals in the map, but since re-normalizing can't fully undo that modulation, the results I get are a little inconsistent.
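        A toy illustration of why that matters (my own numbers, Python/NumPy), assuming the modulation is per-channel, i.e. a coloured albedo: scaling each channel differently changes the vector's *direction*, so re-normalizing cannot recover the true normal.

            import numpy as np

            n = np.array([0.0, 0.6, 0.8])        # true unit normal
            albedo = np.array([0.9, 0.5, 0.7])   # made-up non-grey albedo
            m = albedo * n                        # what lands in the map
            recovered = m / np.linalg.norm(m)     # after re-normalization
            print(recovered)                      # ~[0.00, 0.47, 0.88] != n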

        To understand how much the original normalmap is off, consider that the two upper images in my link should look basically the same, barring the hi-frequency details -- in other words, the upper left (naked geometry) should look like a smoothed version of the upper right (bumpmapped). Instead, the two look totally different from each other, because of all the inconsistencies of the normalmap.

        If they did look more or less the same (again, barring all the wrinkles etc), then, I assume, the two bottom images (naked synthesized geometry) would look pretty much like the top right one.


        May 20, 2011
        • MT

          PS: the meshes can be downloaded from the page above

          May 20, 2011
          • Infinite

            That's ace, thanks, they look good. Weirdly, recovered_smooth looks better than the more detailed version.

            I still need to verify the output quality from another person who submitted their idea; I should know by Monday which data is best. This looks like a pretty rock-solid method. Sadly, I still don't quite understand how I can replicate it.

            May 21, 2011
        • Infinite

          I'm not sure what you mean by bump map, as that isn't a bump map. It's just a greyscale version of the albedo -- a simulated version. I used the same approach as those USC ICT papers, i.e. x=r, y=g, z=b. The maps are a little misaligned, but not a great deal.

          Your final version looks like the sort of result I would get from applying a High Pass filter effect in Photoshop.

          In theory there should be enough information in that NM to be able to displace the model in the right directions.

          May 20, 2011
          • MT

            Sorry, I'm using BumpMap as a synonym of NormalMap. A texture encoding a normal in each texel.

            Sure, the model IS displaced in the right direction, and by the right amount (barring the problem with the NormalMaps). A proof of this is as follows: re-compute the normals from the 3D geometry alone -- for example, just flat shading, one normal per face.
            You'll see your wrinkles. So the wrinkles *are* in the geometry (that's how my screenshots above are done, BTW).
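            To make that check concrete, a tiny sketch of the geometry-only normals I mean (Python/NumPy, my naming), one flat-shaded normal per triangle:

                import numpy as np

                def face_normals(verts, faces):
                    # One normal per triangle from the cross product of two
                    # edges -- flat shading, no normal map involved.
                    a = verts[faces[:, 0]]
                    b = verts[faces[:, 1]]
                    c = verts[faces[:, 2]]
                    n = np.cross(b - a, c - a)
                    length = np.linalg.norm(n, axis=1, keepdims=True)
                    return n / np.maximum(length, 1e-8)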

            Sure, they are small wrinkles. That's to be expected. Wrinkle depressions can in fact be very shallow and yet be very visible in the shading.

            May 20, 2011
  • Infinite

    Is there a way to boost the displacement more?

    May 21, 2011
    • MT

      The displacements can be made to be exactly as deep as they should be in order to "explain" what we see in the input NormalMap. I think they are already pretty close to that... maybe not exactly there, but real close. (the mesh could be subdivided more, the process could be iterated more, the normal map cleaned of its inconsistencies... but as I said I doubt we would see much of a difference).

      After that point, the mesh geometry would be coherent with the NormalMap, and independent of it. Its detail could be enhanced just like any other geometry (independently of its origin). You know, enhancing high-frequency details with basic signal processing. Nothing to do with the NormalMap anymore, IMO.
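      A sketch of the sort of enhancement I mean (Python/NumPy; the function name and gain are mine) -- an unsharp mask applied to vertex positions:

          import numpy as np

          def boost_detail(verts, neighbors, gain=2.0):
              # Split positions into a smoothed base plus a high-frequency
              # residual, then amplify the residual.  gain > 1 deepens the
              # pores and wrinkles; the normal map plays no part here.
              smoothed = np.array([verts[nbrs].mean(axis=0) if nbrs else verts[i]
                                   for i, nbrs in enumerate(neighbors)])
              return smoothed + gain * (verts - smoothed)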

      May 21, 2011
  • didier muanza

    Hey,
    I'm currently working on kind of the same application, extracting camera-space normals to enhance a model.
    I'm working on a shader which can handle photometric normals. I'll show some results shortly. So far, MT is taking more or less the correct approach.
    I'll keep you guys posted on my results. Not interested in the money, just sharing research to achieve amazingly believable digidoubles.

    Didier

    October 10, 2011
    • Infinite

      Hi Didier, looking forward to your results. We found a way to approximate the Normal Map information using a Cinema 4D method.

      Ideally you need to be able to calibrate everything -- lighting positions, camera position, and subject position -- to get truly accurate World Space Normal Maps. I hope that helps. Lee
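      For what it's worth, the transform that calibration buys you is just a rotation -- a sketch (Python/NumPy; R_cam_to_world stands in for whatever your camera calibration gives you):

          import numpy as np

          def camera_to_world_normals(normals_cam, R_cam_to_world):
              # Normals rotate by the calibrated 3x3 camera rotation alone;
              # no translation term is involved.
              return normals_cam @ R_cam_to_world.T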

      October 10, 2011