Another quick blog post to show off some scans IR did recently, testing out new scanning techniques, new lights, new reference cameras, and noise-projection experiments on face capture.

Life Model

Usually a client will ring up with a request about scanning: full-body, faces, or both. We discuss the techniques and options available, then ultimately pricing. IR works with a variety of model agencies that can supply some outstanding life-model talent, ranging from dancers, acrobats, kung-fu experts, stunt doubles, and actors to catwalk models; you name it. Some of the best and brightest talent in the industry. Alternatively, a client will cast through their own agencies and bring the talent with them to direct and shoot.

IR is currently situated 2-3 hours outside of London by car, or by train from Liverpool Street station. A single scanning session can take anywhere from 15 minutes to a few hours. IR can capture hundreds of expression scans in under an hour, dozens of T/A-pose full-body costume changes an hour, or hundreds of dynamic poses an hour.

The system is now designed to be 99.999% reliable: NO dropped frames, NO black images, and super fast. It's now very easy to capture high-resolution, super-detailed scans of humans in a split second. Processing takes time, but most projects can be delivered in around 24-48 hours thanks to IR's custom-built render/processing farm, built specifically for photo reconstruction.

Example of a typical FACS Scanning session:

[Images: Lauren FACS expression scans 01-04]

(visualized in real-time with Marmoset Toolbag)

You might recognize the model. Lauren is a fantastic dancer and actress. Lauren was booked by IR for an in-house VR R&D project, with model release forms signed as standard.

Lauren and I scanned over 75 expressions in around 30 minutes using a new structured-lighting technique. Lauren has extremely smooth, pale skin, which is normally VERY hard to scan.

[Images: face scans 01-02]

Raw scans straight from Agisoft: a better inner-ear scan and sharper details overall; notice the teeth and eyes. The trick is syncing the noise and colour captures quickly, otherwise you get a mismatch, as humans tend to move a great distance even in a split second! Very difficult to capture.

Notice the lack of specular sheen on the skin. Agisoft is able to counteract Fresnel specular sheen quite well when capturing from multiple angles and combining the images.

I’m still undecided on whether to use linear-polarizing techniques: 1) another layer of glass means diffraction and blur, and 2) a loss of nearly 3 stops of available light, because you also have to linearly polarize your light sources.
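To put that light penalty in numbers, here is a rough sketch. The ~3-stop figure is from the post; the base ISO and the compensation strategy are illustrative only, not IR's actual settings:

```python
# Back-of-the-envelope check on the cross-polarization light penalty:
# each photographic stop halves the light reaching the sensor.

def stops_to_light_factor(stops):
    """Fraction of light remaining after losing `stops` stops."""
    return 1.0 / (2 ** stops)

# Polarizing both the lens and the light sources costs roughly 3 stops:
remaining = stops_to_light_factor(3)
print(remaining)  # 0.125, i.e. only 1/8th of the light gets through

# To hold the same shutter speed and aperture, sensitivity must rise
# by the same factor, e.g. from an illustrative base of ISO 100:
base_iso = 100
compensated_iso = base_iso * (2 ** 3)
print(compensated_iso)  # 800
```

Three stops is an 8x cut in light, which is why the flash power or ISO budget matters so much before committing to polarizers.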

Much more testing to be done.

Syncing / Aligning FACS Expressions

(more to come on that soon)


This .gif features another agency model, Toby.

Example A-Pose Fullbody Scan:


Split-second capture. 115x DSLR images downloaded and used to build the 3D scan. This example has been cleaned, UV-unwrapped, and is ready for animation: 10K textures built from gigapixels of colour information. The beauty of photogrammetry scanning is not just the high-resolution scan output and textures, but the near-360-degree reference imagery, as well as 8x 36MP full-body shots.
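The "gigapixels" claim is easy to sanity-check from the numbers already in the post (115 cameras, 36MP sensors):

```python
# Rough arithmetic behind "gigapixels of colour information":
# every capture fires 115 cameras at 36 megapixels each.

cameras = 115
megapixels_per_camera = 36

total_megapixels = cameras * megapixels_per_camera
total_gigapixels = total_megapixels / 1000.0

print(total_megapixels)   # 4140 MP
print(total_gigapixels)   # ~4.14 gigapixels per split-second capture
```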

Some Extra Dynamic Dance Scans:






After working with some recent fashion designers and mannequin manufacturers, IR has added 8x Nikon D800 reference cameras, so clients can evaluate each pose quickly and decide upon the best choice from each scan set.

Example Reference Shot 5x D800 Setup (now 8x)





One restriction at the studio is space. Because the 36MP DSLRs are full-frame, I can compensate by using 35mm lenses; the issue is that this introduces distortion into the images. Ideally these shots should have as little perspective and lens distortion as possible. An upcoming new build will have more space, so I should be able to use 50mm prime lenses to help with distortion.

Having reference images taken during scanning from multiple angles is essential for some clients and art directors: they can make quick, informed decisions about pose and form, and choose what gets processed later on.

[Images: reference scan sets 01-04]

The size of the capture volume and the number of cameras allow a deeper depth range to be scanned, for more dynamic and cleaner shots. Note the point count: anything around 250,000 for a single-shot system is normally good news. Failed alignments are very rare; at most, cameras that don't see anything won't be included, if the subject is out of shot.

DOF (depth of field) is always the enemy! curse its creation!

Scanning a large depth volume over a short distance range restricts the f-stop to around f/12-f/14. Anything higher and diffraction sets in; anything lower and DOF blur creeps through the capture volume.
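The trade-off can be sketched with the standard thin-lens depth-of-field formulas. The focal length, focus distance, and circle of confusion below are illustrative assumptions (a 35mm lens focused at 2m on full-frame), not IR's actual numbers:

```python
# Standard DOF formulas: how the zone of acceptable sharpness
# grows with f-number, until diffraction becomes the limit.

def dof_mm(focal_mm, f_number, focus_mm, coc_mm=0.03):
    """Return (near, far) limits of acceptable sharpness in mm.

    coc_mm ~0.03 mm is the usual full-frame circle of confusion.
    """
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = focus_mm * (hyperfocal - focal_mm) / (hyperfocal + focus_mm - 2 * focal_mm)
    if focus_mm >= hyperfocal:
        return near, float("inf")  # everything beyond `near` is sharp
    far = focus_mm * (hyperfocal - focal_mm) / (hyperfocal - focus_mm)
    return near, far

# 35mm lens focused at 2m on a full-frame body:
for f_number in (8, 12, 14):
    near, far = dof_mm(35, f_number, 2000)
    print(f"f/{f_number}: sharp from {near / 1000:.2f}m to {far / 1000:.2f}m "
          f"({(far - near) / 1000:.2f}m deep)")
```

With these assumptions the sharp zone roughly doubles between f/8 and f/12-f/14, which is why the capture volume forces the aperture into that narrow band.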

Mannequin manufacturers and fashion designers are EXTREMELY focused on what they want, posing millimetre by millimetre in some cases. Some clients may want hundreds of takes of just one pose, and they may require dozens of poses during the booking. This can really push the capture system to its limits (hundreds and hundreds of captures x 115 images each).

This is why it’s ESSENTIAL to have a really reliable scanning setup, with robust storage and redundancies.

[Images: network setup 01-02]

4x PCs running up to 30x DSLRs each, all controlled from 1x master PC. That's a lot of DSLRs that have to play nicely together. Plus another separate PC and triggering system running the 8x reference cameras using Breeze's Nikon software.
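The master/capture-PC fan-out can be sketched as a toy simulation. To be clear: this is not IR's software, and the real rig syncs the cameras with hardware triggers rather than TCP; the sketch only illustrates the "one master commands many capture PCs and collects acknowledgements" coordination pattern, with the capture PCs simulated as local threads:

```python
import socket
import threading

# Toy sketch of a master PC fanning a trigger out to capture PCs
# over TCP and counting ACKs. Real camera sync is done in hardware.

def capture_pc(server_sock):
    """Simulates one capture PC waiting for the master's command."""
    conn, _ = server_sock.accept()
    if conn.recv(16) == b"TRIGGER":
        # ...this is where the attached DSLRs would fire...
        conn.sendall(b"ACK")
    conn.close()
    server_sock.close()

def master_trigger(ports):
    """Master PC: command every capture PC, count the ACKs."""
    acks = 0
    for port in ports:
        with socket.create_connection(("127.0.0.1", port), timeout=5) as c:
            c.sendall(b"TRIGGER")
            if c.recv(16) == b"ACK":
                acks += 1
    return acks

# Stand up 4 simulated capture PCs on OS-assigned ports.
servers = []
for _ in range(4):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    srv.listen(1)
    servers.append(srv)
    threading.Thread(target=capture_pc, args=(srv,), daemon=True).start()

ports = [s.getsockname()[1] for s in servers]
acks = master_trigger(ports)
print(acks)  # 4 when every capture PC acknowledged
```

The point of collecting ACKs is the reliability theme below: the master knows immediately when a capture PC failed to fire, rather than discovering missing images at processing time.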

I envy “photographers” they don’t realize how easy they have it!! 🙂

The network infrastructure and reliability of the system has to be running at 100%. You DO NOT want the system to break half way through a capture session, clients don’t like this.

This used to happen (to me) a lot with the old 14x laptop system, but not any more. The gremlins have been exorcised!

..but there is always that 1% error margin: broken/popped power adapters, software malfunctions, a clog in the USB pipes! etc. ..not due to system design but (depending on your ‘preference’) an act of ‘one’ of the god(s) 🙂

IR Freezing Time


Split-second dynamic scan, using a system built for a client in the EU: a 115x DSLR setup built in 3 days. Scan features Alexander Tomchuk!

An IR dynamic scan, displayed Pepper’s Ghost style in Hong Kong!

There you have it, a brief rundown of scanning at IR. I hope to post a video soon, shot on a GoPro 3, showing a system power-up, so you can see how it all runs and works.

I hope this Blog post might be of some use (duck and dive around the grammatical errors!).

Bon voyage!


  • Creative Character

    Lee, you're the man. Such an inspiration in the craft, and in your outlook on creative collaboration.

    August 20, 2013
    • Infinite

      Thanks! :)

      August 20, 2013
  • Niceview

    Great post!

    When I first saw your site, I was very surprised by your outstanding 3D scan results. I think your method is superior to any other commercial 3D scanning equipment.
    I'd like to do 3D scans of faces. I've read all the posts on your blog, and almost every post you wrote on the Agisoft forum. I really agree with you that Agisoft PhotoScan is amazing software!!
    I'm trying to take pilot photos with a Canon 500D with a 60mm lens. (I think a 50mm f/1.8 is better, but it's all I have.) I think 18-20 cameras are suitable for medical facial photography for my purpose.
    Of course I know that the positions and settings of the cameras and lights are important. But I really want to know your preferences for PhotoScan. I wonder what options you use for your outstanding scans, such as maximum points per photo for alignment, depth filtering mode, face count, model reconstruction and texturing parameters, etc... (I couldn't believe that only 5 photos were used for this. - )
    Can you share the information shown by the "Show info..." menu when right-clicking on a chunk?
    And do you use Agisoft Lens, coded targets, or scale markers to calibrate?

    Thank you, and I really appreciate you sharing your ideas and passion for 3D scanning.

    August 21, 2013
    • Infinite

      Thanks. I hope to answer some of these questions with an online video soon. I'm not sure when this will be up, but sometime soon. Generally I use most of the default settings in Agisoft. Keep checking back for tutorials and updates.

      August 21, 2013
  • Rob

    Do you get BSDF data from your scans? You mentioned linear-polarizing techniques; besides removing some of the Fresnel from the captures and building a nice albedo map, they could also be useful for deriving shader data. You already get accurate normals relative to the camera(s). If you have multiple lights, know their locations, and can trigger them individually, you can normalize the data by dividing out the albedo. For a BSDF table you basically just need the values given the light vector and the surface normal relative to the camera.

    I played around with doing something similar, but I don't have anywhere near the means that you have (I tried to do it manually with one flash that I rotated around, which caused lots of movement errors, and I basically made a sculpt to try and guesstimate the normals... so practically it didn't really work out, haha).

    September 20, 2013
    • Infinite

      Thanks, yes, I have experimented with that kind of capture. It's more hassle than it's worth, to be honest. You can get just as good data by creating the high-frequency bumps synthetically from the colour information.

      September 20, 2013
  • jim

    For the camera connection to the PC, is the speed determined by the southbridge, and does that limit the number of cameras? Could you share your basic PC config? Thanks.

    November 26, 2013
    • Infinite

      There is a driver issue with USB3. Someone needs to address this, but I have no idea who we could contact, so for now USB2 is the best option. There are ways to connect more than 30 cameras per PC.

      November 26, 2013
  • ultimato

    Will you be releasing more Oculus Rift demos? I have found the ones you have released incredibly interesting; they are the most amazing things released for the Rift to date.

    November 26, 2013
    • Infinite

      You know it! Some big things are coming.

      November 26, 2013
  • ultimato

    And of course, if it becomes possible to get 4D scans in the future, that would be insane to see. Very much looking forward to what you guys can make :D

    November 26, 2013
    • Infinite

      I agree, it could be very cool indeed :) thanks for your interest and posts.

      November 26, 2013
  • michael_dejangest@hotmail.com

    Great post, I love your work!

    Does the rendering time depend on the number of cameras, the number of megapixels, and the computer?
    Is there a way in Agisoft to set the quality of the render (to make the render process go faster)?
    Can the render process be done in an hour?

    Sorry for my English, I'm from Holland.

    January 31, 2014
    • Infinite

      Processing time depends on many factors: the number of cameras, megapixels, subject size, and the PC you use to process. Times can vary from 30 minutes to 3 hours. I recommend reading over the Agisoft forums for more insight.

      February 4, 2014
  • dizzykar

    Awesome blog!
    Can you give me some advice? We scan people and print them with a ProJet 660pro. Currently we use an Artec Eva, but we're not satisfied with the model quality, and sometimes the Eva misses details. Now we are looking at photogrammetry, which looks much more powerful. How many cameras do you think we need at minimum? We are thinking 16 for the first steps.
    I've read all the posts on your blog; you also used SkannerKiller before. We are looking at SK too, because they say 12 or 24 DSLRs is OK. I'm now corresponding with Helmut, the developer of SK. Why did you move away from that software?
    Maybe this technology will be enough for printing? The print resolution is not that high, since the figures are at 1:18 scale.
    Thank you.

    July 16, 2014
    • Infinite

      I recommend Agisoft Photoscan over any other software; it's far superior. 12-24 DSLRs is not enough for full body. Occlusion is the issue: you need to capture from many viewpoints to solve it. I recommend a minimum of 80 cameras.

      July 16, 2014
  • Kaleb Wyman

    Great work, Lee!

    Has anyone in the photogrammetry field tried capturing a data set using ultraviolet photography? Considering how Agisoft PhotoScan works, I imagine it would really help to capture 3D data from smooth, pale, and "flawless" skin... perhaps even teeth. This video on YouTube about how the sun sees you was inspiring:

    UV photography reveals sun-damaged skin

    All of those dots revealed on the skin's surface look perfect for the software to generate a 3D point cloud consistently! Perhaps one day there will be an inexpensive camera that captures both a UV photo and a RAW photo in one shot... it would be ideal to use the UV photo for 3D point-cloud generation, and the RAW for the beautiful texture map.

    November 22, 2014
    • Infinite

      Thanks, yes, I investigated this a few years back. The problem is you need very bright light for photogrammetry, and UV is very dangerous for the skin and especially the eyes.

      November 24, 2014
  • pablo

    AGAIN NO sensor detected with this demo, is it compatible with the 0.4.4?

    January 8, 2015
    • Infinite

      AGAIN this is not a DK2 demo. It was made for DK1.

      January 8, 2015
      • pablo

        OK! Sorry then. The AGAIN in big letters was because caps lock was on and I was too lazy atm : ))))))))))

        January 9, 2015
    • Infinite

      It's not a DK2 demo...

      February 8, 2015
  • Jon

    How much success will I have if I stand a person still in an A-pose and rotate 4 photographers around that person on a camera dolly, each at a different height but all with high-res cameras and with the subject properly lit? You seem to have been through it all. Have you ever tried this?

    June 8, 2016
    • Infinite

      Yes, and I don't recommend it. Useful for testing but not production. You need fixed cameras to freeze time.

      June 21, 2016
  • Angie McCormick

    How do I sync 80 cameras?

    September 17, 2016
    • Infinite

      You need opto-isolated trigger hubs. Esper Design used to sell them. The Agisoft forums might be of help to you, lots to learn there.

      September 17, 2016
  • Angie McCormick

    What are the least expensive cameras I can use to create a full-body and face scan?

    September 17, 2016