Here is a blog post outlining how to correctly deliver raw scans from Agisoft: how to align, scale and orient them properly in 3D space.

You must fix the source of the content.

It might seem like a quick fix to scale and orient scans after the fact. Dirty and cheap, I say. You must fix the scans at the source of the process. Agisoft Photoscan (if that's what you use) processes scans arbitrarily in world space, at a random scale and position. It's not noticeable at first, but it's a pain in the arse during production. There are no built-in tools (yet) to solve this problem, but you can solve it using the Region box and Python scripts. I have the scripts for this; email me if you want them.

First Problem. Scale.

You will need Agisoft Photoscan Pro and I urge any serious developers who make a living from Agisoft Photoscan to purchase a Pro license. It helps Agisoft LLC and it helps the community.

Here is what we see after a 109-camera DSLR single-shot scan that has been processed (on High) and exported to 3DSMax:



Poor scanned girl is the size of a thimble and lying on her side! "Not good!" says Client X.

As we can see in Agisoft Photoscan, the orientation is wildly off in some random direction relative to the x, y, z axes, and its position could be anywhere relative to the 0,0,0 center of world coordinates.


See the culprit! Bottom right, it tells us its drunken state!


So let’s fix that.

You go into each photograph that has your scale calibration visible and place two markers, one at 0 cm and the other at 1 cm. You could use pencil or pen marks, two dots that have been measured. Whatever. Just some reference points we know are at the correct scale in the scene, visible from more than three camera positions, so we can refine the marker positions.
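To see what those two markers buy us, here is a rough sketch of the underlying maths in plain Python (not the Photoscan API; the function and variable names are mine, for illustration only). Photoscan reconstructs the markers in arbitrary chunk units; the ratio of the real-world distance to the reconstructed distance is the factor the whole scene needs.

```python
import math

def scale_factor(p0, p1, true_distance):
    """Ratio of the real-world marker separation to the separation
    reconstructed in arbitrary chunk units."""
    measured = math.dist(p0, p1)  # straight-line distance between the two markers
    return true_distance / measured

# e.g. markers reconstructed 0.4213 units apart, but really 1 cm (0.01 m) apart
factor = scale_factor((0.0, 0.0, 0.0), (0.4213, 0.0, 0.0), 0.01)
```

Multiply every coordinate in the chunk by that factor and your thimble-sized girl comes out life-size.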





Next up, let's measure the varmints.


Drag from the 0 marker to the 1 marker to define the red scale line.


Next up, hard-set the scale to 1.0 = 1 cm!
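Fixing the scale bar effectively multiplies the whole chunk by that one factor. As a minimal illustration (plain Python, not what Photoscan runs internally), a uniform scale about the origin is just:

```python
def apply_scale(points, factor):
    """Uniformly scale every 3D point about the world origin, which is
    in effect what locking the scale bar does to the whole chunk."""
    return [tuple(coord * factor for coord in point) for point in points]
```

Note that a uniform scale about the origin fixes size only; orientation and position still need the Region box step below.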


Now let's update that MOTHER!


Correct scale accomplished!


Still with me? Good.

The other alternative, or as I like to call it, "chicken before the egg :|": you could append an already scaled and aligned file. But do you have one? This is why it is VERY important to ALWAYS use the same naming convention for your single-shot camera array. But that's for another post, "How to correctly use XX control Software for Remote Multi-tethering", which I will post later.


Appending the egg!.. or.. the chicken?


Another girl scan! (yeah, it’s a hard life)

So we can use this appended chunk to align all cameras by camera. Align by Camera is a very cool feature (one of many that IR got introduced into Agisoft Photoscan through shared beta testing! No NDA b&^^%^&s). As I mentioned, it will only work if your cameras are always named the same. Organization is key. Naming conventions like IMG0007.jpg are no good.
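That naming requirement is easy to sanity-check before you even open Photoscan. A small sketch (my own helper, not a Photoscan call): aligning by camera can only pair up cameras whose labels exist in both chunks, so generic auto-numbered names that restart every session give you nothing to pair.

```python
def matchable_cameras(labels_a, labels_b):
    """Camera labels present in both chunks -- the only pairs an
    align-by-camera step could possibly use."""
    return sorted(set(labels_a) & set(labels_b))

# Per-position names that stay fixed from session to session match up:
session_1 = ["camA_01.jpg", "camA_02.jpg", "camB_01.jpg"]
session_2 = ["camA_01.jpg", "camB_01.jpg", "camC_01.jpg"]
shared = matchable_cameras(session_1, session_2)  # camA_01 and camB_01 pair up
```

With IMG0007.jpg-style names, the intersection is either empty or, worse, pairs up the wrong physical cameras.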

So we can append and align, if we already have a correctly scaled and oriented previous file. Or, for every session or client, we can make damn sure each master scan for the whole set is correctly aligned, scaled, oriented and positioned in world space.

Ready for export.

You only ever have to do this once per scan session, because you can align the others by that chunk using Align by Camera (if you take hundreds of scans), as long as you don't drastically move cameras or rename them mid-session (are you CRAZY?!). This is also where a tethered camera system is very important: none of the wireless nonsense. Going wireless looks all pretty, flowery and pink, but it's not practical in reality. You must not touch cameras during a session!! Let alone all the dropouts and issues associated with wireless. Oh, and changing batteries every five minutes!!

Back to the previous process.

SO... after we have scaled the scene, if we can't append the egg/chicken, we can use the Region bounding box and a Python script to do the hard work (props to Alexey and Dmitry for this method).



It helps to toggle View: Perspective/Orthographic and turn cameras on/off whilst adjusting your Region box. The goal is to get it the right way around: red panel on top, dash along the +x axis. This gives us the correct x, y, z orientation. The Region box center is where the world center 0,0,0 will be.
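In essence, what the script does with that box is re-express every point relative to it: the box center becomes 0,0,0 and the box axes become the world x, y, z. Here is a minimal pure-Python sketch of that transform (the real script drives Photoscan's chunk transform instead; names and conventions here are my own assumptions): with rotation rows being the box's axis directions in chunk space, the new coordinates of a point p are row_i · (p - center).

```python
def region_to_world(point, center, rot):
    """Express a chunk-space point in the frame defined by the Region box.
    `rot` is a 3x3 matrix whose rows are the box's x, y, z axis directions
    in chunk space; `center` is the box center.  The box center maps to
    the world origin and the box axes map to the world axes."""
    d = [point[i] - center[i] for i in range(3)]  # offset from the box center
    # dot each rotation row with the offset (multiplying by the inverse rotation)
    return tuple(sum(rot[i][k] * d[k] for k in range(3)) for i in range(3))
```

Applied to every model vertex (or, in Photoscan, folded into the chunk transform once), this is what stands the scan upright at the world origin.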

I also find it's wise to duplicate your chunk, decimate it, and then use your script to orient and position that first, keeping the HIGH scans as backup.

Decimate down (we don't need textures for this purpose).


Happy with the Region box position? Then we run the "magic script".


BOOM! BOOM! double BOOM! all done.

Well, you could tweak some more. Let's test the scale, orientation and position in a third-party application again.


Not bad, but her center point is slightly off. Let's correct that. Back to Agisoft Photoscan once more...



Run the script again. That should do it!



Now if you wanted you could use this chunk to align all other chunks by camera positions! 🙂


Now let's check the final results in 3DSMax (or whatever your 3D artistry desires).


1.695m of pure 3D athletic goodness..

That's all you need to know for now. This is the hard-coded fix.

Hopefully in the future this will be an obsolete post and Agisoft will have a fix, or a natural workflow, for correctly scaling a scene. I did post in their forums about this, but because of the lack of developer beta-testing support it can be hard to get new features integrated without the backing of user numbers.

Join the cause!

If you use Agisoft Photoscan, please contribute in the forums. You know who you are, especially the larger studios.

There are amazing benefits to properly beta testing the software, and to helping fund the developers by buying the Pro license. Support them... support us all...

For example, IR was able to use the brand-new Mosaic texture algorithm back in June-July 2012 for some very high-profile projects, while other companies were still stuck on the Average (god-awful blurry!) texture method until the fix was released in February 2013.

The more you contribute, the more early access you can get.

IR has helped Agisoft Photoscan improve, helping to beta test and suggest features such as:

Difference Masking

4D Support on timeline

Mosaic Texture Method: super-sharp, highly detailed, blur-free, seam-free textures (this took months of testing and uploading GBs of data)

Strip Depth Maps, Strip Textures, Strip Models, Strip Masks command

New Spherical Texture Mapping Method for 360 faces, distortion free.

UV Viewer.

.FBX and .DAE exporter refinement for use in applications like Mudbox

A lot of Batch commands that didn't exist before.

Alignment by marker and Camera

A multitude of Python scripts

But to do this, to make this software grow and evolve, you have to put in the time and effort: write out detailed ideas, posts, emails and blog posts, and share images and data freely with the developers.

If any of these features have helped you, if they help your business, please come over to the Agisoft forums and contribute. Help the community; help others who make a living from scanning to improve the software. We are contributing to the same cause: better, more lifelike digital humans.

I hope this Blog post might be of some use (duck and dive around the grammatical errors!).

Thanks. Lee

  • Tao


    Very noble of you to be doing this for the Agi team. I will join your cause once I have this whole thing up and running.

    April 3, 2013
    • Infinite

      Thanks Tao. I think all of us users together can make it a better application, with just simple feedback and ideas :) plus sharing of results, images, etc. The Agisoft forum is a great place.

      April 3, 2013
  • Markus

    Dear Lee,

    thanks for that nice article of yours. It helped me a lot to get my scaling right. And you are right, Agisoft PS is pretty cool! It does what you want.

    I would be interested in the script, do you share it?

    Have a nice day,


    August 22, 2013
    • Infinite

      Thanks Markus! Please send me a quick email via the comments form and I will forward on the script.

      August 22, 2013
  • Ike

    Hi Lee,

    I recently joined the Agisoft forums and I'm doing my first 3D scans. Results are very basic for now, but they keep improving. I'll do my best to share my knowledge of photography to help others get better scans. You've been very helpful providing your workflow and all those important tips. Agisoft documentation on certain topics is quite sparse (nonexistent?), so your tutorials and tips are incredibly helpful.

    It's very encouraging to see the evolution of your work, starting with very simple setups and building up step by step.

    On another topic, I'd like to ask you about your retopologizing methods. I saw a very good technique on the cgfeedback forums where you used a Shape Transfer tool from DI3D, and I'm wondering how it evolved from there... It'll be better to send you an email regarding that, I guess... Anyway, thanks for your excellent work.

    October 11, 2013
    • Infinite

      Hi Ike, thank you for your post. I don't use shape transfer anymore; I find the built-in tools easier, or hand-matching custom meshes to be faster. Thanks for your kind comments, Lee

      October 18, 2013
    • Ike

      Hi again Lee,

      This is the post I was referring to.

      Could you elaborate a bit on how you do it now? Is it in Photoscan or Zbrush? I haven't been able to find anything similar, but I'm quite a newbie on both programs. Looks like a great feature to have at hand. Up to now I've wrapped my generic mesh onto my Zremeshed scan using a few tools in 3Dsmax, and then used Xnormals to get the textures. It's not straightforward at all, but up to now it's the only workflow I've found to work.

      Unfortunately I don't think it'll work in order to get expression morphs; that's why I was interested in the landmark matching feature in DI3Dview. It'd be very helpful if you could shed some light on this matter. Thanks again.

      November 7, 2013
      • Infinite

        That link comes up invalid. You can use ZBrush to match a base cage to each expression scan. You need some background in art and sculpting.

        November 7, 2013
  • Ike

    I'll have to do some more research on ZBrush then... Matchmaker perhaps? I come from the artistic field actually; that's why I keep running into brick walls every time. Thanks again.

    November 7, 2013
  • Ike

    Maybe this link works...

    November 7, 2013
    • Infinite


      That post was from 2010 when I had a 4 camera system. I don't use that method any more. Matching by hand in ZBrush is the best method.

      November 7, 2013
  • Andrea

    Dear Lee,

    thanks for that interesting article of yours. Is the script still available?
    Regards, Andrea

    November 24, 2013
    • Infinite

      Please check the latest blog post for November.

      November 26, 2013
  • Renata

    Hey there. May I have a copy of this script as well? I work with terrain modelling at uni, and I happen to have some problems with the alignment. I thought maybe seeing the script might help me get an idea of how I can fix this. Or perhaps, if the script works for terrains as well as people, it might solve my predicament.

    November 25, 2013
    • Infinite

      Please check the latest blog post for November.

      November 26, 2013
  • Renata

    Hi Lee,

    Thanks for all the work mate, and happy birthday!


    November 26, 2013
    • Infinite


      November 26, 2013
  • David Wortley

    It would be nice if you could just select a camera and tell Agisoft that this camera was perfectly straight and level (which would rely on your rig being set so), then at least the scans would always come in at the same orientation.

    June 13, 2014
    • Infinite

      Agreed, this is a good idea.

      July 16, 2014
  • Khang

    Hi Lee,

    I sent an email to ask for the python script some time ago, but I'm not sure if it reached you? I sent the message via your contact form.

    Thank you.

    July 18, 2014
    • Infinite

      Khang, thanks I received it. I will make a public post soon sharing the script.

      July 23, 2014
  • Saurav

    What file format did you use to export the files into 3Ds Max? I tried the .3ds extension for export, but it does not export the texture with it.

    Thanks in advance!

    August 23, 2014
    • Infinite


      September 8, 2014
  • David Sanchez

    I would love to get that python script if you don't mind sharing. Awesome blog btw.


    January 20, 2015
    • Infinite

      Send me an email!

      February 8, 2015
  • Ken

    Hey Lee,

    This is awesome! I work at The CaptureLab at EAC and we've been looking for a way to consistently scale and align our meshes. This would help us out tons! Going to email you to get that script.


    June 2, 2015
    • Infinite

      Thanks and no problem.

      March 29, 2016
  • Mike S

    If your python script is still the best/only/recommended way of getting aligned and scaled Photoscan outputs, then I'd really appreciate getting it. Thanks.

    September 11, 2015
    • Infinite

      March 29, 2016
  • Ron

    What email do I send to get the python script?

    November 6, 2015
    • Infinite

      March 29, 2016
  • LiChao

    Any update for 1.2.0? The script does not work with 1.2.0; the coordinates did not change.

    November 21, 2015
    • Infinite

      March 29, 2016
  • WMR

    Found your blog useful. Could you mail me the Python scripts, please?

    December 5, 2015
    • Infinite

      You can find them here

      March 29, 2016
  • Thokozani

    Hi Lee,

    I am a university student using Agisoft to calculate the area of my 3D model. Thank you very much for this blog, it is very helpful.

    I want to know whether I can use Agisoft (Pro) to calculate specific area measurements of my 3D model. I want to calculate the surface area of my 3D model at specific regions: instead of getting the area measurement of the whole model, get area measurements at specific regions of it. Do you know of a way I can do this using Agisoft?

    Using this blog I have managed to scale my 3D model and I am very grateful, now I need to know how to get the software to calculate the area of my 3D model at specific regions of the model.

    The second part of my question is tricky, but you might be able to help. I am going to generate 5100 3D models using the Agisoft software and will need to repeat these measurements for all of them. Is there a way I can calculate the area using Agisoft for one or two 3D scans, and then instruct the software to run the same measurements for the remaining 5000-or-so models?

    Patiently waiting to hear from you,
    All the way from South Africa, Africa

    December 7, 2015
    • Infinite

      Hi, excuse the late reply. Thanks for your message. Sorry, this is a bit outside my remit.

      March 29, 2016
  • Zach

    Great articles. I'm having trouble with my model positioning. Would you be able to send me the script, or refer me to where I can find the Python scripting to assist me?

    April 13, 2017
    • Infinite

      You can try here

      May 27, 2017