Another quick post to complement the previous one

This is just a quick post to follow up on my previous claim that the Capturing Reality software does internal high-pass processing. Whilst this is debatable, there is definitely some extra internal image processing happening that gets applied to the mesh data. I realize I might have upset a few die-hard fans of the software, so I would like to address my concerns about it as it currently stands. It is no doubt an incredible piece of software, so this was not intended as a bash.

IR had access to the software back in early 2014 and we did a great deal of testing with it.

This appears to be a process that Capturing Reality applies automatically in the software to give the appearance of higher fidelity.

See example point of reference:

Capturing Reality and Photoscan output


This resembles a high-pass or image-filter process applied directly to the mesh, although yes, that is debatable. Either that, or it is missing data from the dark areas of the tattoo structure, or some other artefact that Photoscan does not seem to struggle with.

We’ve also had many alignment issues, missing cameras, an export/import bug with orientation problems, plus general surface artifacts as seen above. We just don’t have the time to put into bug-testing it. However, Capturing Reality is undeniably fast, and the interface is stunning.

IR spent a great deal of time and money helping to improve Agisoft Photoscan in 2011 (multiple license purchases). We also helped convince many studios in the industry to use the software. You can see my direct posts, requests and ideas all over the Agisoft forum, most of which were integrated. Many of these features made their way into Photoscan through months of R&D, testing, bug fixing and so on.

Many people in the industry benefited from that effort ($), with most not contributing back to the software, just using it. So I didn’t plan to do it again with Capturing Reality. I’m more than happy to wait for others to put in their effort this time. IR would be glad to put it into production when it’s really ready.

We’re happy to use anything that is stable and includes features we require.

Currently IR is focused purely on R&D and building scanning solutions. We always offer clients as many software options as possible: some choose Capturing Reality, some Photoscan, and most use both together.

At the moment there still seem to be some bugs with texturing in CR: it introduces a slight grey cast to the texture output, in addition to the other issues mentioned. It still has some way to go but seems close to production-ready. There are also no mesh-viewing abilities in the viewport, no direct texture-viewing abilities, and I’m still not convinced by the pricing structure. But damn is it fast! Big thanks to Milos for the feedback, ideas and support you give our clients. It’s invaluable.

Many of our clients use Capturing Reality, and we have seen some great results with it. It is being used in feature-film projects; we have seen it ourselves.

As a side note, we are also well aware of the Facebook 3D scanning group. I’m not a fan of Facebook for various privacy reasons; LinkedIn is ideal for networking, and that’s all I personally require. Regarding the 3D scanning Facebook group, I don’t like the idea of having to be approved by a moderator to join another company’s marketing page. This is why IR is absent from the group, but we have many friends at various companies who keep us up to date with the “gossip”.

We are software agnostic and happy to use whatever works, ideally bug-free. Use what feels right 🙂





  • Martin

    Very nice text, thanks for the ads :) but we do not do any high-pass. The effect you can see is caused by a small misalignment and high detail. You should check camera connectivity using the inspection tool, or check match coverage, and adjust camera placement to fix it. Also, you should try the latest version; we have changed the alignment pipeline twice since 2014 and will release a new generation soon. You should try it now ;)

    To explain the effect which you have experienced: cameras with a slightly shifted registration (even 0.5-1 pixel) create a similar effect. It is like differencing a blurred and a sharp image in Photoshop; it will raise edges. Unlike us, PhotoScan does surface fitting: it fits a low-resolution polynomial surface to a point cloud. The amount of artifacts depends on the degree, or how many triangles you allow for fitting (sampling density). With a small triangle count you simply jump over the edge. If you changed the tris-count to 70M then I'm fairly certain you would see this effect too.
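Martin's Photoshop analogy can be sketched in a few lines: shift a signal by half a sample (linear interpolation standing in for sub-pixel camera misregistration) and difference it against the original. Flat regions cancel to zero and only the edges survive, which is exactly the raised-edge look of a high-pass filter. This is a hypothetical toy example, not either package's actual pipeline:

```python
def shift_half_sample(signal):
    """Shift a 1-D signal by 0.5 samples using linear interpolation,
    mimicking a sub-pixel registration error between two cameras."""
    return [0.5 * (signal[i] + signal[i - 1]) if i > 0 else signal[0]
            for i in range(len(signal))]

def difference(a, b):
    """Per-sample difference of two equal-length signals."""
    return [x - y for x, y in zip(a, b)]

# A step edge, e.g. the border of a dark tattoo on lighter skin.
edge = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]

shifted = shift_half_sample(edge)      # [0.0, 0.0, 0.0, 0.5, 1.0, 1.0]
residual = difference(edge, shifted)

# Flat regions cancel; only a spike at the edge remains, just as a
# high-pass or unsharp-mask filter would raise it.
print(residual)                        # [0.0, 0.0, 0.0, 0.5, 0.0, 0.0]
```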

    April 4, 2016
    • Infinite

      Well that put me in my place :) Thanks for the reply and clarification. I will amend my previous post to reflect your feedback. I did wonder what the visible issues were. I have tried the software since but do battle quite often with alignment, where Photoscan seems to succeed more often than not. Have the export and import rotation bugs been fixed that some users have reported? What texture process do you use compared to Photoscan? Is your process more precise than Mosaic?

      April 4, 2016
      • Martin

        I would say yes to all :) But you should try it yourself.
        You can export/import meshes without problems. You can do it via the command line as well. You can use structured light to generate meshes and a separate set of images to texture them. We have completely changed the texturing algorithms, with better and roughly 20x faster algorithms. We made texturing out-of-core, so you will never run out of memory for any detail you wish. Mosaic means cutting pieces of images to create a texture. We do that multi-band, plus some magic. You can use your own UVs and just calculate textures, or you can use the improved built-in UV generator too, which was updated this year to improve texture utilization.
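Martin's "multi-band + some magic" refers to multi-band blending: split each source image into frequency bands, cross-fade the low band (where exposure and colour differences live) smoothly, and take the detail band from a single source so seams are hidden without blurring fine detail. Here is a toy 1-D sketch assuming a simple two-band split via box blur; the function names and band split are illustrative assumptions, not RealityCapture's actual algorithm:

```python
def box_blur(sig, r=1):
    """Low-frequency band: simple box blur with radius r."""
    n = len(sig)
    out = []
    for i in range(n):
        window = sig[max(0, i - r):min(n, i + r + 1)]
        out.append(sum(window) / len(window))
    return out

def two_band_blend(a, b, weight_a):
    """Blend two overlapping 'scans': cross-fade the low band smoothly,
    but take the high band (detail) from one source only."""
    low_a, low_b = box_blur(a), box_blur(b)
    high_a = [x - l for x, l in zip(a, low_a)]
    high_b = [x - l for x, l in zip(b, low_b)]
    blended = []
    for i, w in enumerate(weight_a):
        low = w * low_a[i] + (1 - w) * low_b[i]      # smooth cross-fade
        high = high_a[i] if w >= 0.5 else high_b[i]  # winner-takes-all detail
        blended.append(low + high)
    return blended

# Two captures of the same surface with different exposure offsets,
# and weights fading from the first capture to the second.
scan_a = [1.0, 1.0, 5.0, 5.0, 1.0, 1.0]
scan_b = [1.4, 1.4, 5.4, 5.4, 1.4, 1.4]
weights = [1.0, 1.0, 0.8, 0.2, 0.0, 0.0]

print(two_band_blend(scan_a, scan_b, weights))
```

The design point: a naive per-pixel cross-fade would blur the detail wherever the weights transition, while a hard cut would leave a visible exposure seam; splitting by frequency gets the best of both.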

        Alignment is the tricky part of it. Photoscan might be able to register images which are further apart from each other. On the other hand, it tends to register images with 25 points and very bad conditioning. If you do a 100-photo dataset, then you can remove such bad guys by hand. If you do a 1K dataset, then you spend more time on cleaning than you would like. We are more conservative there. It is questionable whether you would get valuable info from a photo if there was not enough information to register it. So we rather throw out the image, to tell the user it is a weak part of the chain. Shooting images is typically faster than doing any manual post. So why not add more images? This is our approach. Also, not every image needs to be 20mpx; you can mix 5mpx and 20mpx, why not?

        But that is my word :) you should download it and try.

        April 4, 2016
        • Infinite

          Great advice, thank you Martin. I'm interested to check out the new texture process. I will investigate more when I have some spare time.

          April 4, 2016