Category: ODM

ODM 2.0 released! See what’s new.


Today we’re excited to announce a new major release of ODM! What have we been working on? Well, a lot of things. The two most important ones are the ones we hope you won’t notice, because they don’t affect functionality, but they have been part of necessary “infrastructure” updates to make sure that ODM continues to work in the future. Namely:

  • We’ve upgraded the codebase from Python 2 to Python 3. Python 2 has been deprecated and will not receive updates past 2020. With Python 3 support, ODM can continue moving forward with the rest of the Python ecosystem.
  • We’ve upgraded the base OS target from Ubuntu Linux 16.04 to 18.04. 18.04 will continue to receive extended security maintenance updates until 2028 and compatibility with 18.04 has been frequently requested from our community.

If you’re using docker, these changes are (should be) transparent. If you’re running ODM natively, you will need to upgrade your base system to 18.04 before updating ODM (or continue using a 1.x release of ODM until you decide to upgrade).

Aside from these important under-the-hood upgrades, we couldn’t make a release as important as 2.0 without adding something new and shiny!

  • Image Geolocation Files allow you to specify/override the GPS location of your images without having to fiddle with tools such as exiftool. This is different from using Ground Control Point files: with geolocation files you simply specify the location of the camera centers instead of the location of points on the ground.
  • Image Masks allow you to mask out areas that you don’t want to include in your reconstruction. This is useful in many scenarios. For example, during cell tower inspections, photos typically include parts of the sky, which end up creating strange artifacts and negatively affect the reconstruction. By using masks, one can delineate areas to exclude from a reconstruction, thus obtaining a clean and more accurate result.
  • A new option, --feature-quality, automatically adjusts the image sizes for feature extraction based on predetermined ratios instead of relying on user input or making assumptions about the image size.
  • Static tiles that were previously computed in NodeODM for use in viewers such as Leaflet or OpenLayers can now be generated via the --tiles option directly in ODM.
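
As a sketch of what an image geolocation file might look like — the column layout, image names and coordinates below are illustrative assumptions, so check the ODM documentation for the authoritative format:

```shell
# Create a hypothetical image geolocation file: the first line names the
# projection, then one line per image with its camera-center coordinates.
cat > geo.txt <<'EOF'
EPSG:4326
DJI_0001.JPG -81.689 41.499 251.0
DJI_0002.JPG -81.690 41.500 251.3
EOF
head -1 geo.txt
```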

Aside from the speed improvements from updated Python, PDAL, numpy and many other libraries, we’ve specifically improved memory usage in split-merge to handle even larger DEMs, improved DEM compression, and improved speed/network stability in ClusterODM/NodeODM.

We’ve fixed numerous bugs and increased overall stability. See the related PRs for the geeky details (#1156 and #124). We also cleaned the ODM repository of old, large binary objects that were inflating the git repository. A git clone previously took 1.1GB of downloads; now it takes only 12MB. If you forked ODM, read this announcement as it affects you.

Give ODM 2.0 a try! If you find any issues, please report them on GitHub.

360 Cameras


For some time, OpenSfM, the photogrammetry library maintained by Mapillary that underpins OpenDroneMap, has had support for 360 cameras. We are working on a project at the moment with some great engineering students from Case Western Reserve University on building a next generation 360 camera for photogrammetry, but while that project wraps up, I wanted to test what can be done with a commodity unit.

So, with hunker-down-in-place orders du jour, I opted to do my initial tests in-between my house and the neighbors:

I know I probably don’t need my mask there, but I have nasty tree allergies, so I am taking advantage of the normalization of mask wearing to keep my lungs healthier than they are most Spring seasons.

First, the why-what?!

In the increasingly tenuously named OpenDroneMap project, we have seen some interesting alternatives to drones in use — general photogrammetry that I owe more blog posts on (in the meantime, you can sneak-peek them at — I have just been too busy to reblog them yet). From tiny pits and seeds of hard-to-identify plants to animal skulls, there are some interesting non-drone use cases for good photogrammetry.

Are drones enough?

Drone mapping is a really exciting and useful innovation: it allows for mapping large areas with low capital investment, offers an opportunity to leverage local talent, can often capture at a faster cadence and higher resolution, and has a small fossil fuel footprint compared with manned aircraft. But the detail available is not always the detail needed. Consider dense urban locales, especially ones that are also thickly vegetated: drone mapping may not always be enough for capturing the bottom-of-the-urban-canyon elevations needed for certain detailed hydrological analyses.

360 → 3D?

With a 360 camera and enough walking, can we create a synoptic understanding of our world that augments what we are doing now with drones? Tests from my driveway are very promising.

Introducing the UAV Mapping Arena


With the help of several members of the community, today we have launched a simple tool to visually compare outputs processed with several drone mapping packages (including ODM, of course):

We are pleased with how far ODM has come over the years and hopefully this tool will continue to help track our progress, at least as a measure of “visual appeal” (up next, accuracy comparisons?).

Drone mapping for the rest of us


One of our community members, Zach Ryall, has covered OpenDroneMap in a recent article for AOPA (Aircraft Owners and Pilots Association).

Maps and mosaics are among the most powerful products a drone camera can produce, but producers of polished, user-friendly software made for professionals aren’t giving freebies anymore. Open-source software offers hobbyists and soloists an affordable alternative.

You can read the full article here:

A small correction on the attribution of credits in the article: OpenDroneMap was founded by Stephen Mather and not me (I’d like to set this record straight). If it weren’t for Steve’s commitment and vision for the project, we wouldn’t be reading this post (or this article, or anything OpenDroneMap related, for that matter).

Stone Town Digital Surface Model


Reposted from

Thanks to the tireless work of the folks behind the Zanzibar Mapping Initiative, I have been exploring the latest settings in OpenDroneMap for processing data over Stone Town. I managed to get some nice looking orthos from the dataset:

But, excitingly, I was able to extract some nice looking surface models from the dataset too. This required using the Brown-Conrady model that recently got added to OpenSfM:

This post is a small homage to the late His Majesty Sultan Qaboos. Given the strong affinity and shared history between Zanzibar and Oman, it seems fitting to post these.

Checking a running process in WebODM


Reposted from

Edit: the dreaded “Reconstructing all views” message has been replaced with a progress monitor! Still, knowing how to dig into the back end and explore the machine doing the work is always a helpful skill to have…

Learning objectives:

  • We’ll learn how to check how far along a process is when it is calculating depthmaps and giving no feedback
  • Along the way, we’ll also learn how to look at the Docker containers running for WebODM,
  • log in to said containers, and
  • inspect the status of processing data.

Let’s go!

So, you threw a really big dataset at WebODM, and now you are waiting. It’s been hours, maybe days, and it’s stuck on the dreaded “Reconstructing all views”:

Did you make the depthmap resolution really high, because you wanted really detailed data? Did you make it too high? How long is this going to take?

I have had this dilemma too. Sometimes I just get disgusted with myself and my predilection for choosing ridiculously computationally expensive settings, kill the process, turn the settings down a bit, and restart. But this can waste hours or days of processing, which feels wrong.

The alternative

We could do the alternative. We could poke under the hood of WebODM and see how it’s progressing. For this project that’s been running for 146+ hours, this is what I decided to do.

The depthmaps are being computed in MVE, which can give us really detailed information about progress, but unfortunately it makes the logs a real mess, so we have it logging nothing. Let’s see how we can get around this and check on our status.

Finding the docker instance and logging in:

First, we log into the machine where WebODM is running. We need a list of the running Docker containers, as we need access to the correct one to look at how things are progressing.

docker ps

The result should give us something like this:

CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS                                         NAMES
4b0659fe6761        opendronemap/webodm_webapp   "/bin/bash -c 'chmod…"   38 hours ago        Up 38 hours>8000/tcp,>8080/tcp   webapp
0e26ebf918f2        opendronemap/webodm_webapp   "/bin/bash -c '/webo…"   38 hours ago        Up 38 hours                                                       worker
1954c5136d44        redis                        "docker-entrypoint.s…"   38 hours ago        Up 38 hours         6379/tcp                                      broker
bdc69502ca50        opendronemap/webodm_db       "docker-entrypoint.s…"   38 hours ago        Up 38 hours>5432/tcp                       db
81f401a0e138        opendronemap/nodeodm         "/usr/bin/nodejs /va…"   38 hours ago        Up 38 hours>3000/tcp                        webodm_node-odm_1

We want to access the webodm_node-odm_1 node, in most cases. To do this we use docker exec as follows:

docker exec -it webodm_node-odm_1 bash

Woah! We are now inside the node!

Checking the available data directories:

Typically, if we only have one process running, there will only be one dataset in the /var/www/data directory:

cd /var/www/data/99002823-c48b-4af5-af1b-c0fef2ed8b56/

Checking our depthmap data from MVE:

For depthmaps nearly complete in MVE, there will be a file called depth-L1.mvei. We need to find out how many of these there are compared with the number of images that we need depthmaps for. We’ll use a combination of the find command and wc (for word count):

find . -name depth-L1.mvei | wc -l

In my case, I have 2,485 depthmaps done, or roughly two-thirds of my images processed. Looks like I am 6 days into a 9-day process before we get done with the MVE step.
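
To turn that count into a rough progress figure, you can do the arithmetic in the same shell. A sketch — the total of 3,730 images below is a made-up figure for illustration; substitute your own dataset’s image count:

```shell
# Count finished depthmaps and estimate percent complete
done_count=$(find . -name depth-L1.mvei | wc -l)
total=3730   # replace with your dataset's image count
echo "$done_count of $total depthmaps done ($((100 * done_count / total))%)"
```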

I guess I will wait until Monday to check again…

Reconstructing cliffs in OpenDroneMap, or how to beat LiDAR at its own game (part 2)


Reposted from

In the beginning

In a previous blog post, we explored how we can quite effectively derive terrain models using drones over deciduous, winter scenes. We ran into some limitations in the quality of the terrain model: the challenge was removing the unwanted features (things like tree trunks) while retaining wanted features (large rock features).

I concluded the post thusly:

For our use case, however, we can use the best parameters for this area, take a high touch approach, and create a really nice map of a special area in our parks for very low cost. High touch/low cost. I can’t think of a sweeter spot to reach.

Good parameters for better filtering

In the end, the trick was to extract as good a depthmap as possible (depthmap-resolution: 1280 in my case), set the point cloud filtering (Simple Morphological Filter, or SMRF) parameters smrf-window and smrf-threshold to 3 meters to only filter things like tree trunks, and set ignore-gsd: true to ensure we are keeping the highest quality data all the way through the toolchain.

Full list of processing settings:

smrf-window: 3, mesh-octree-depth: 11, orthophoto-resolution: 1.8, dtm: true, dem-resolution: 7, ignore-gsd: true, dsm: true, max-concurrency: 8, camera-lens: brown, depthmap-resolution: 1280, smrf-threshold: 3, rerun-from: dataset
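
For reference, the same settings could be passed to ODM on the command line — a sketch, with the dataset path and project name as placeholders (shown as a string here so nothing actually runs):

```shell
# Build the equivalent ODM invocation ("/my/datasets" and "myproject"
# are placeholders for your own paths)
cmd='docker run -ti --rm -v /my/datasets:/datasets opendronemap/odm \
  --project-path /datasets myproject \
  --dsm --dtm --ignore-gsd --camera-lens brown \
  --depthmap-resolution 1280 --smrf-window 3 --smrf-threshold 3 \
  --mesh-octree-depth 11 --dem-resolution 7 \
  --orthophoto-resolution 1.8 --max-concurrency 8 \
  --rerun-from dataset'
echo "$cmd"
```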


How well do the new settings work? Here’s the old vs. the new, including contours:

Comparison of old and new settings showing much smoother terrain model and contours

This is a much less noisy result. Unfortunately, I ran it at the wrong resolution, so I am rerunning at full resolution now and hope to see something similar.

ODM 0.9.8 Adds Multispectral, 16bit TIFFs Support and Moar!


WebODM introduced support for plant health algorithms about a month ago. It was no secret that we concurrently started work to support TIFF file inputs and multispectral cameras, both features that have been highly requested.

Today we are excited to announce the release of ODM 0.9.8!

TIFF Support

Up until now, ODM was able to process only JPG files. With this release we have added support for processing TIFF files, both 8-bit and 16-bit. TIFF is a popular format, especially with multispectral and thermal cameras.

Mapir Camera 16bit TIFFs processed in ODM

Multispectral Support (Experimental)

We have added the ability to process multispectral images (for example, those captured with a Parrot Sequoia, MicaSense Altum or RedEdge). We have obtained some promising preliminary results. When provided with N camera bands, ODM will generate an N-band orthophoto (at the proper bit depth, up to 16-bit).

We have not yet added support for spectral calibration targets, which we plan to add in the near future. The task of identifying different bands is different for each camera vendor, so we’ll need to add support for more cameras with time. We hope the community will start to process some datasets and help us improve multispectral support (share your datasets?).

MicaSense RedEdge 5 Bands processed in ODM

We recommend passing the --texturing-skip-global-seam-leveling option when processing thermal/multispectral datasets. Global seam leveling attempts to normalize the value distribution across orthophotos, which works well for making pretty RGB images, but will affect measured values in thermal/multispectral settings.

Split-merge Improvements

We rewrote from scratch the orthophoto cutline blending algorithm used to merge orthophotos, a bottleneck that was causing the last stage of the pipeline to take longer than necessary. The new algorithm is faster and much more memory efficient. We also sped up merging point clouds from submodels by a factor of 30, as well as drastically reducing memory usage.

OpenSfM/MVE Updates

We brought in the latest version of OpenSfM with this update, which delivers up to 1.6x faster image matching than before.

MVE has also been (finally) modified to report progress on the status of computations. You’ll finally know if the program is “stuck” or not.

Improved Brown-Conrady Camera Model

We made modifications to the camera model used to compute camera poses and points. Up until now, the default in ODM has been a simplified perspective camera model. The community has been testing the Brown-Conrady model for a while, with great results. The original Brown-Conrady model, however, uses two parameters for specifying focal length, which are unfortunately not accurately supported by the texturing and dense reconstruction stages of the pipeline (a single focal length is used by those stages). We’ve approximated the Brown-Conrady computation by averaging the two focal lengths, but could we do better? Yes!

We modified the Brown-Conrady model to use a single focal length, bringing the model to full support across all stages of the pipeline, and set it as the default camera model. We expect this will improve the quality of results for all outputs. Preliminary tests confirm this.

Credits: Klas Karlsson

Easier To Contribute to ODM

If you are on Linux, it’s now easier than ever to make a change to ODM. A new script sets up a development environment for you inside a docker container.

And One More Thing: NodeODM Changes

NodeODM now exposes a task list API endpoint. This is a feature that has been requested a lot and allows people to view the tasks running on a particular node. If you send a task to NodeODM via WebODM (or CloudODM, PyODM, or any other client), you can open the NodeODM interface to monitor and manage the task. This is also implemented in ClusterODM.

We have also replaced the .zip compression method in NodeODM with one that is faster and more memory efficient.

What do you think of the new changes? Try them out and report any bugs.

Reconstructing cliffs in OpenDroneMap, or how to beat LiDAR at its own game


From the top of Whipps Ledges at Hinckley Reservation on November 16, 2016 (Kyle Lanzer/Cleveland Metroparks)

Reposted from

LiDAR and photogrammetric point clouds

If we want to understand terrain, we have a pricey solution and an inexpensive solution. For a pricey and well-loved solution, LiDAR is the tool of choice. It is synoptic, active (and therefore usable day or night), increasingly affordable (but still quite expensive), and works around even thick and tall evergreen vegetation (check out Oregon’s LiDAR specifications as compared with US federal ones, and you’ll understand that sometimes you have to turn the LiDAR all the way up to 11 to see through vegetation).

For a comparably affordable solution, photogrammetrically derived point clouds and the resultant elevation models like the ones we get from OpenDroneMap are sometimes an acceptable compromise. Yes, they don’t work well around vegetation in thickets, forests, and other continuous vegetation covers, but with a few-hundred-dollar drone, a decent camera, and a bit of field time, you can quickly collect some pretty cool datasets.

As it turns out, sometimes we can collect really great elevation datasets derived from photogrammetry under just the right conditions. More about that in a moment: first let’s talk a little about the locale:

Sharon Conglomerate and Whipps Ledges, Hinckley Reservation

One of my favorite rock formations in Northeast Ohio is Sharon Conglomerate. A mix of sandstone and proper conglomerate, Sharon is a stone in NEO that provides wonderful plant and animal habitats, and not coincidentally provides a source of coldwater springs, streams, and cool wetland habitats across the region. A quick but good overview of the geology of this formation can be found here:

Mapping conglomerate

One of the conglomerate outcrops in Cleveland Metroparks is Whipps Ledges in Hinckley Reservation. It’s a favorite NEO climbing location, great habitat, and a beautiful place to explore. We wanted to map it with a little more fidelity, so we did a flight in August hoping to see and map the rock formations in their glorious detail:

Overall orthophoto of Whipps Ledges from August 2019
Digital surface model of the forest over
Inset image of Whipps Ledges from August 2019
Inset digital surface model of the forest over

Unfortunately, as my geology friends and colleagues like to joke, to map out the conglomerate, we need to “scrape away the pesky green vegetation stuff first”. We don’t want to do this, of course — this is a cool ecological place because it’s a cool geological place! It just happens to be a very well vegetated rocky outcrop. The maple, beech, oak and other trees there take full advantage of the lovely water source the conglomerate provides, so we can’t even glean the benefits of mapping over sparse and lean xeric oak communities: this is a lush and verdant locale.

So yesterday, we flew Whipps Ledges again, but this time the leaves were off the trees. Even with leafless trees, it can still be a challenge to get a good sense of the shape of the landform: forest floors do not provide good contrast with the trees above them, and it can be difficult to get good reconstructions of the terrain.

But yesterday, we were lucky: there was a thin layer of snow everywhere providing the needed contrast without being too thick to distort the height of the forest floor too much; shadows from the low sun created great textures on the featureless snow that could be used in matching.

Image above the snowy forest on Whipps Ledges

The good, the bad, and the spectacular

The bad…

So, how are the results? Let’s start with the bad. The orthophoto is a mess. There’s actually probably very little technically wrong with the orthophoto: the stitching is good, the continuity is excellent, the variation between scenes non-existent, the visual distortions minimal. But it’s a bad orthophoto in that the high contrast between the trees and the snow, compounded with the shadows from the low, nearly cloudless sky, results in a noisy, difficult-to-read orthophoto. Bad data for an orthophoto in; bad orthophoto out.

Orthophoto from December 21 flight

The good

The orthophoto wasn’t our priority for these flights, however. We were aiming for good elevation models. How is our Digital Terrain Model (DTM)? It’s pretty good.

Photogrammetrically derived digital terrain model from drone imagery

The DTM looks good on its own, and even compares quite favorably with an (admittedly dated, 2006) LiDAR dataset. It is crisp, shows the cliff features better than the LiDAR dataset, and represents the landform accurately:

Comparison of crisp and cliff-like OpenDroneMap digital terrain model and the blurry LiDAR dtm.

The spectacular

So, if the ortho is bad and the DTM is good, what is great? The DSM is quite nice:

Overview of digital Surface Model from December 21 flight

The DSM looks great. We get all the detail over the area of interest; each cliff face and boulder shows up clearly in the escarpment.

Constraining the elevation range to just those elevations around the conglomerate outcrop.
Constraining the elevation range to just those elevations around the conglomerate outcrop, inset 1
Constraining the elevation range to just those elevations around the conglomerate outcrop, inset 2

Improvements in the next iteration

The digital surface model is really quite wonderful. In it we can see many of the major features of the formation, including named features like The Island, a clear delineation of the Main Wall and other features that don’t show in the existing terrain models.

Due to untuned filtering parameters, we filter out more of the features than we’d like in the terrain model itself. It would be nice to keep The Island and other smaller rocks that have separated from the primary escarpment. I expect that when we choose better parameters for deriving the terrain model from the surface model points, we can strike a good balance and get an even better terrain model.

Animation comparing digital surface model and digital terrain model showing the loss of certain core features to Whipps Ledges due to untuned filtering parameters in the creation of the terrain model.

Beating LiDAR at its own game

It is probably not fair to say we beat LiDAR at its own game. The LiDAR dataset we have to compare to is 13 years old, and a lot has improved in the intervening years. That said, with a $900 drone, free software, 35 minutes of flying, and two batteries, we reconstructed a better terrain model for this area than the professional version of 2006.

And we have control over all the final products. LiDAR filtering tends to remove features like this regardless of point density, because The Island and similar formations are difficult to distinguish in an automated fashion from buildings. Tune the model for one, and you remove the other.

For our use case, however, we can use the best parameters for this area, take a high touch approach, and create a really nice map of a special area in our parks for very low cost. High touch/low cost. I can’t think of a sweeter spot to reach.

Choosing good OpenDroneMap parameters



I had an interesting question recently at a workshop: “What parameters do you use for OpenDroneMap?” Now, OpenDroneMap has a lot of configurability, lots of different parameters, and it can be difficult to sift through to find the right parameters for your dataset and use case. That said, the defaults tend to work pretty well for many projects, so I suspect (and hope) there are a lot of users who never have to worry much about these.

The easiest way to proceed is to use some of the pre-built defaults in WebODM. These drop-downs let you take advantage of a few different settings combined and abstracted away for convenience, whether settings for processing Multispectral data, doing a Fast Orthophoto, flying over Buildings or Forest, etc.

You can also save your own custom settings. You will see at the bottom of this list “Steve’s Default”. This has a lot of the settings I commonly tweak from defaults.

Back to the question at hand: what parameters do I change and why? I’ll talk about 7 parameters that I regularly or occasionally change.

The Parameters

Model Detail

Occasionally we require a little more detail (sometimes we also want less!) in our 3D models from OpenDroneMap. Mesh octree depth is one of the parameters that helps control this. A higher number gives us higher detail. But, there are limits to what makes sense to set for this. I usually don’t go any higher than 11 or maybe 12.

Sylvain Lefebvre - PhD thesis

Elevation Models


Often with a dataset, I want to calculate a terrain model (DTM) or surface model (DSM) or both as part of the products. To ensure these are calculated, we set the DTM and DSM flags. The larger category for DTM and DSM is Digital Elevation Model, or DEM. All flags that affect settings for both DTM and DSM are named accordingly.

Ignore GSD

OpenDroneMap often does a good job of guessing what resolution our orthophoto and DEMs should be. But it can be useful to specify the resolution ourselves and override those calculations when they aren’t correct. ignore-gsd is useful for this.

DEM Resolution

DEM Resolution applies to both DTMs and DSMs. A useful rule of thumb for this setting is 1/4th the orthophoto resolution. So, if you flew at a height that gives you 1cm-resolution ortho imagery, your dem-resolution should probably be 4cm.
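
That rule of thumb is simple enough to compute (a sketch; 1/4 the resolution means a cell size 4× larger, values in centimeters):

```shell
# dem-resolution rule of thumb: 1/4 the ortho resolution = 4x the cell size
ortho_cm=1
dem_cm=$((ortho_cm * 4))
echo "ortho: ${ortho_cm}cm -> dem-resolution: ${dem_cm}cm"
```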


Depthmap resolution

A related concept is depthmap resolution. Depthmaps can be thought of as little elevation models computed from the perspective of each image pair. The resolution here is set in image space, not geographic coordinates. For Bayer-pattern cameras (most cameras), aim for no more than 1/2 the linear resolution of the imagery. So if your images are 6000×4000 pixels, you don’t want a depthmap value greater than 3000.

That said, 1/4 is usually a better, less noisy value, and depthmap calculations can be very computationally expensive. I rarely set this above 1024 pixels.
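
The guidance above can be summarized numerically (a sketch, using the 6000×4000 example):

```shell
# Depthmap resolution bounds for a 6000x4000 image
width=6000
max_dm=$((width / 2))      # hard ceiling: 1/2 linear resolution -> 3000
better_dm=$((width / 4))   # usually better and less noisy -> 1500
echo "ceiling: $max_dm, suggested: $better_dm"
```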

Camera Lens Type

I saved the best for last here. So, if you’ve made it this far in the blog post, this is the most important tip. In 2019, OpenSfM, our underlying Structure from Motion library, introduced the Brown-Conrady camera model as an option. The default camera-lens is auto, which usually results in the use of a perspective camera model, but Brown-Conrady is much better. Set your camera-lens to brown and you will get much better results for most datasets. If it throws an error (which does happen with some images), just switch it back to auto and rerun. Brown will become the default in the near future.