Posts by: Stephen Mather

Flattening the world: faster processing with planar reconstruction

 

A few weeks back, I had the pleasure of visiting with Piero Toffanin in Florida and spent 4 days eating, talking, and drinking espresso. OpenDroneMap was much of the conversation, but we wandered a lot conversationally, caught up, and had a great time. We even found some time to fly drones. (Random sidebar: Piero’s better half, Danielle, is a delight too.) At the time, Piero was working on an interesting problem: planar reconstruction. Planar reconstruction is an interesting alternative to what we typically do in OpenDroneMap. If you’ve been using OpenDroneMap for even a short time, you’ll notice it does a lot of work: it’s a full photogrammetric workflow, so it creates a detailed point cloud and mesh, and elevation models too, if you ask for them. The detailed point cloud and mesh aren’t really optional.

By contrast, planar reconstruction performs the minimum work needed to integrate a consistent-enough solution (for flattish things, anyway).

Every few weeks or months, someone shows up on the forums begging for something faster and less resource-hungry for their simpler use case, or else compares the full work OpenDroneMap completes to the much faster but less complete workflows of a particular proprietary tool (you know the one… it rhymes with “fix a tree, shield”). If you are one of those users (or have just begged silently in the comfort of your own head), well, this is your day: we now have an implementation of planar reconstruction. When I was visiting, Piero was puzzling over how best to implement planar reconstruction in a way that is highly parallel but also memory efficient. He must have gotten there, because a few days ago he closed pull request 1452.

So how does it work and how does it look? Initial tests indicate that it is much faster: depending on your machine and dataset, you might see processing complete 4-6 times faster. And the results for flat areas are quite suitable. I’m even pleased with trees (from an appropriately high altitude, anyway).


There will be artifacts. That is the trade-off. But if it’s an acceptable trade-off, if you need a good-enough solution fast, this might be the setting for you.
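For the curious, here is a minimal sketch of what invoking it from the command line might look like. I am assuming the feature is exposed through the sfm-algorithm option; check the documentation for the exact flag name in your release:

# a sketch, assuming planar reconstruction is selected via --sfm-algorithm
docker run -ti --rm -v /my/project:/datasets/code opendronemap/odm \
    --project-path /datasets --sfm-algorithm planar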

OpenDroneMap 2.1.0 — the biggest update yet?

 

That title might be clickbait, but it is meant as a nod to the feature parity, long in the making, between OpenDroneMap and more mature projects, whether the venerable, free and open source MICMAC or its derivatives in the form of the various closed-source offerings in the photogrammetry industry.

So, is 2.1.0 the biggest update yet? I would say it’s the update that suddenly brings us to feature parity in almost all categories. And it all starts with improved structure from motion (updates to OpenSfM), continues through improved point clouds (replacement of MVE with OpenMVS), and finishes with improved (waves at) everything.

Speed and Structure from Motion

With the update to the latest OpenSfM, we see a 25-45% speed-up in portions of OpenSfM processing. This is very exciting and due to some low-level changes, specifically: “reprojection of derivatives … are now computed analytically instead of autodiff-ed”. There are other, subtler improvements to OpenSfM, but that is the feature most noticeable to end users.

Speed and dense point cloud production

The MVE library has been swapped out for OpenMVS. On the surface, this slows things down: for the same resolution depthmaps, OpenMVS runs at half the speed of MVE. But the speed comparison is deceptive: the quality improvement from changing libraries is so large that we can easily run at lower depthmap resolutions and still get better results, an effective speed increase with improved quality.

Quality improvements

The biggest noticeable change will be in the quality of point cloud outputs and all subsequent products, from orthophotos to digital surface models, and digital terrain models.

Improved point cloud over Stone Town, Zanzibar, Zanzibar Mapping Initiative

Because the point clouds affect quality throughout the rest of the toolchain, we see substantially improved digital surface models:

Higher fidelity in point clouds results in better automated point cloud filtering:

And so on. Updates are committed and out in the wild. It’s time to update your OpenDroneMap instance and have fun with the improved quality and speed.

How much better?

So, how much better are these point clouds? How do they compare to the rest of the industry? You don’t have to take our word for it: check out the leaderboard at tanksandtemples.org.

You’ll find that OpenMVS leads amongst FOSS solutions. Look closely enough and you might find a closed-source drone image processing favorite trailing OpenMVS by a large margin…

(Footnote: these leaderboard comparisons don’t test the combination of OpenSfM and OpenMVS, but they are a decent proxy for a full comparison.)

Is this really bigger than that closed-source product?

Probably. For more than a year, OpenDroneMap has had the benefit of being one of the most scalable solutions, both from a technology perspective, with an industry-leading process for splitting and recombining datasets, and from the licensing “technology” perspective, with licensing that costs the same regardless of how many machines it is deployed to.
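As a hedged illustration of that split-and-recombine process, the flags involved look roughly like this (the values are illustrative; see the split-merge documentation for the full set of options):

# a sketch: break a large dataset into submodels of roughly 400 images,
# with 150 meters of overlap between neighboring submodels
docker run -ti --rm -v /my/project:/datasets/code opendronemap/odm \
    --project-path /datasets --split 400 --split-overlap 150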

In 2019, we saw OpenDroneMap process datasets in excess of 30,000 images into a spatially harmonized dataset, something at the edge of the possible at the time. In 2020, we have seen 90,000 image datasets successfully processed in the same way. It might be the largest spatially harmonized dataset yet processed, though we are happy to hear of similar projects in case we are wrong!

Is this really better than that closed-source product?

The recent (2-day-old) quality improvements don’t come from OpenMVS alone. OpenSfM does an industry-leading job of balancing the accuracy of GPS information with the quality of camera calibration. This means that for massive datasets where you need good 3D data, OpenDroneMap can deliver a good product for things like flood modeling, terrain change detection, and other needs that are sensitive to three-dimensional fidelity.

Time to try, try again, or upgrade

It is probably time for you to upgrade. If you haven’t tried OpenDroneMap before, or if it has been a while, it is time to try it with your dataset. In the meantime, enjoy these screenshots from data produced by Federico Debetto, KMedia SAS, which, just like the Zanzibar Mapping Initiative data above, is from Unguja Island, Zanzibar:

Props as always to the teams involved, including contributors to OpenSfM (thanks Mapillary!), contributors to OpenMVS (whoa, 3Dnovator), and especially Piero Toffanin of UAV4Geo. Special mention to Daniel Llewellyn for his packaging work. More on that soon for you Windows and Snap folx…

ODM360

 

“We’re creating the most sustainable drone mapping software with the friendliest community on earth.”

OpenDroneMap has long been all about processing data from aerial assets: drones, balloons, kites, and even planes and satellites have come up now and again. In a recent post, I posited the question: “Are drones enough?” Like a good rhetorical question, the answer is meant to be clear. In many cases, where vegetation is heavy, or buildings are close together and tall (or both!), a drone cannot adequately capture data at the ground in enough detail.

Enter 360 cameras as a method for rapidly collecting photogrammetric data from the ground. There are plenty of cool 360 cameras these days, and in testing, we can get some nice photogrammetric outputs from them. But these cameras tend to be either “affordable” but very low resolution, or unaffordable and often also unfit for purpose, or some other unsustainable combination. The real insult is that most of the best solutions out there are built around a bunch of cheap commodity hardware that has been nicely packaged into a very expensive proprietary product.

A new hardware project

Case Western Reserve University has a really cool project class called EECS 398: Engineering Projects. It’s a boring name for a great class. In it, students get together in small groups and work on interesting, real engineering problems. We had the pleasure this term of working with Gabriel Maguire, Kimberly Meifert, and Rachel Volk, three electrical engineering students with a bright future:

Gabriel Maguire, Kimberly Meifert, and Rachel Volk (left to right)
The whole team with India Johnson on the far left and me in the middle. For the record, these are all pre-COVID images. We did a lot of video calls for months after this.

The idea was to put together a 360 camera from a pile of Sony α6000s, with a Raspberry Pi controller and RTK GPS, and feed those data in an automated way into OpenDroneMap.

The idea has some legs. The final prototype triggered 5 cameras at a time and stored the images to disk. We still have some work to do on synchronization of firing and on low-cost RTK GPS.

Ivan Gayton and Hessel Winsemius have jumped in on the fun, and development is continuing to take place here: https://github.com/OpenDroneMap/odm360

Postscript…

“But Steve: 5 Sony α6000s plus their nice lenses isn’t really an affordable 360 camera solution… I mean, $5,000 is less than the $20-30k one might spend on a platform with similar specs, but come on!”

Ok, doubting voice in my head, this isn’t affordable; it’s merely more affordable. At its core is gphoto2, which gives us the ability to plug in 2,500 different possible cameras, so that helps with affordability. But maybe, doubting voice, you are interested not just in cost but in software freedom. Should we be doing this with fully open hardware, including the cameras? To this end, we are building out a version with the new High Quality Raspberry Pi camera. At $75 a camera and lens, or maybe more like $100 once we include a Pi Zero and other parts for control, we get a 72MP 360 camera for 3D reconstruction in the ~$1,000 range. And we can get that number lower by using fewer cameras and wider-angle lenses, as a trade-off with quality.
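To make the gphoto2 point concrete, here is a rough sketch of detecting and firing whatever cameras are attached. Backgrounding each capture gets the triggers to land close together, but this is emphatically not the tight synchronization the project ultimately needs:

# list attached cameras and their USB ports
gphoto2 --auto-detect
# fire every detected camera, backgrounded so the triggers land close together
for port in $(gphoto2 --auto-detect | grep usb: | awk '{print $NF}'); do
    gphoto2 --port "$port" --capture-image-and-download &
done
wait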

I’m no videographer, but here’s a quick intro on the objectives of the project:

Photo credits: Guojun Yang / CC BY-SA (https://creativecommons.org/licenses/…); Ron Van Oers / CC BY-SA 3.0-IGO (https://creativecommons.org/licenses/…)

360 Cameras

 

For some time, OpenSfM, the photogrammetry library maintained by Mapillary that underpins OpenDroneMap, has had support for 360 cameras. We are working on a project at the moment with some great engineering students from Case Western Reserve University on building a next generation 360 camera for photogrammetry, but while that project wraps up, I wanted to test what can be done with a commodity unit.

So, with hunker-down-in-place orders being the order du jour, I opted to do my initial tests between my house and the neighbors’:

I know I probably don’t need my mask there, but I have nasty tree allergies, so I am taking advantage of the normalization of mask wearing to keep my lungs healthier than they are most spring seasons.

First, the why-what?!

In the increasingly tenuously named OpenDroneMap project, we have seen some interesting alternatives to drones in use: general photogrammetry that I owe more blog posts on (in the meantime, you can sneak a peek at https://smathermather.com; I have just been too busy to reblog them yet). From tiny pits and seeds of hard-to-identify plants to animal skulls, there are some interesting non-drone use cases for good photogrammetry.

Are drones enough?

Drone mapping is a really exciting and useful innovation: it allows for mapping large areas with low capital investment, offers an opportunity to leverage local talent, can often capture with a faster cadence and higher resolution, and has a small fossil-fuel footprint compared with manned aircraft. But the detail available is not always the detail needed. Consider dense urban locales, especially ones that are also thickly vegetated: drone mapping may not be enough for capturing the bottom-of-the-urban-canyon elevations needed for certain detailed hydrological analyses.

360 → 3D?

With a 360 camera and enough walking, can we create a synoptic understanding of our world that augments what we are doing now with drones? Tests from my driveway are very promising.

WebODM and the OpenDroneMap Ecosystem

 

With the release of a native installer of WebODM for Windows, we have been getting some questions about what this means. Per Piero’s blog:

Ok, admittedly this requires that you have a processing node running NodeODM somewhere else, as we still haven’t managed to compile/run the full processing pipeline natively on Windows, this is just the user interface, but it’s a really important step toward full native support for Windows (which we have long-term plans for).

https://www.opendronemap.org/2020/03/webodm-now-available-as-a-native-windows-application/

The underlying photogrammetry tool, OpenSfM, is not (yet) able to be compiled on Windows due to dependency issues. Mapillary is doing a major rebuild and removing dependencies, so it may get easier in the future to run the full OpenDroneMap stack on Windows. What do we mean by the full OpenDroneMap stack? Something like this:

WebODM itself provides authentication and authorization, visualization, and post-processing. It is the graphical user interface to OpenDroneMap.

Underneath WebODM runs NodeODM. NodeODM is a tool that exposes OpenDroneMap processing through a web interface and application programming interface, but it doesn’t provide all the user-facing parts that WebODM does.

And ODM itself is the piece that does the processing of the data. If you use the command line version of OpenDroneMap, this is probably the tool you are using.
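To make the NodeODM layer a little more concrete, here is a hedged sketch of talking to a node directly with curl. This assumes a node listening on the default port 3000; consult the NodeODM API documentation for the full set of endpoints:

# ask the node about itself and the processing options it accepts
curl http://localhost:3000/info
curl http://localhost:3000/options
# submit a couple of images as a new processing task
curl -F images=@IMG_0001.JPG -F images=@IMG_0002.JPG http://localhost:3000/task/new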

So with the new Windows installer, you get the top of that stack. You get that great interface. You can then connect it to your own autoscaling ClusterODM or NodeODM instance, running locally or in the cloud, or connect it to the WebODM Lightning network, which is a NodeODM setup maintained by Piero’s company UAV4GEO.

Need help setting up a ClusterODM instance? Reach out here:

https://community.opendronemap.org/c/help/clusterodm/

It’s really satisfying to set up your own autoscaling cluster, if you are into that sort of thing. If not, you can set up a local or cloud NodeODM instance. That is as simple as running a docker command. Or, the simplest solution of all is to sign up with WebODM.net. One of the great wonders of free and open source software is that we have lots of choices.
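That docker command, for reference, using the same opendronemap/nodeodm image that appears in the docker ps listing further down this page:

# start a local NodeODM instance, exposed on the default port 3000
docker run -ti -p 3000:3000 opendronemap/nodeodm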

Reminder: if you use ClusterODM, it has a different license than the rest of the OpenDroneMap ecosystem. It is licensed under the GNU Affero General Public License v3.0, so any changes you make to it have to be shared with your users under the same license.

Stone Town Digital Surface Model

 

Reposted from smathermather.com



Thanks to the tireless work of the folks behind the Zanzibar Mapping Initiative, I have been exploring the latest settings in OpenDroneMap for processing data over Stone Town. I managed to get some nice-looking orthos from the dataset:

But, excitingly, I was also able to extract some nice-looking surface models from the dataset. This required using the Brown-Conrady camera model that was recently added to OpenSfM:

This post is a small homage to the late His Majesty Sultan Qaboos. Given the strong affinity and shared history between Zanzibar and Oman, it seems fitting to post these.



Checking a running process in WebODM

 

Reposted from smathermather.com

Edit: the dreaded “Reconstructing all views” message has been replaced with a progress monitor! But knowing how to dig into the back end and explore the machine that does the work is always a helpful skill to have…

Learning objectives:

  • We’ll learn how to check how far along a process is when it is calculating depthmaps and giving no feedback
  • Along the way, we’ll also learn how to list the docker containers running for WebODM and
  • log in to said containers in order to
  • inspect the status of processing data.

Let’s go!

So, you threw a really big dataset at WebODM, and now you are waiting. It’s been hours, maybe days, and it’s stuck on the dreaded “Reconstructing all views”:

Did you make the depthmap resolution really high, because you wanted really detailed data? Did you make it too high? How long is this going to take?

I have had this dilemma too. Sometimes I just get disgusted with myself and my predilection for choosing ridiculously computationally expensive settings, kill the process, turn the settings down a bit, and restart. But this can waste hours or days of processing, which feels wrong.

The alternative

We could do the alternative. We could poke under the hood of WebODM and see how it’s progressing. For this project, which has been running for 146+ hours, that is what I decided to do.

The depthmaps are being computed in MVE, which can give us really detailed information about progress. Unfortunately, that makes the logs a real mess, so we have it log nothing. Let’s see how we can get around this and check on our status.

Finding the docker instance and logging in:

First, we log into the machine where WebODM is running. We need a list of the running docker containers, so we can access the correct one to see how things are progressing.

docker ps

The result should give us something like this:

CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS                                         NAMES
4b0659fe6761        opendronemap/webodm_webapp   "/bin/bash -c 'chmod…"   38 hours ago        Up 38 hours         0.0.0.0:443->8000/tcp, 0.0.0.0:80->8080/tcp   webapp
0e26ebf918f2        opendronemap/webodm_webapp   "/bin/bash -c '/webo…"   38 hours ago        Up 38 hours                                                       worker
1954c5136d44        redis                        "docker-entrypoint.s…"   38 hours ago        Up 38 hours         6379/tcp                                      broker
bdc69502ca50        opendronemap/webodm_db       "docker-entrypoint.s…"   38 hours ago        Up 38 hours         0.0.0.0:32769->5432/tcp                       db
81f401a0e138        opendronemap/nodeodm         "/usr/bin/nodejs /va…"   38 hours ago        Up 38 hours         0.0.0.0:3000->3000/tcp                        webodm_node-odm_1

We want to access the webodm_node-odm_1 node, in most cases. To do this we use docker exec as follows:

docker exec -it webodm_node-odm_1 bash
root@81f401a0e138:/var/www#

Woah! We are now inside the node!

Checking the available data directories:

Typically, if we only have one process running, there will be only one dataset in the /var/www/data directory:

cd /var/www/data/99002823-c48b-4af5-af1b-c0fef2ed8b56/

Checking our depthmap data from MVE:

For depthmaps that are nearly complete, MVE writes a file called depth-L1.mvei. We need to find out how many of these exist as compared with the number of images that need depthmaps. We’ll use a combination of the find command and wc (or word count):

find . -name depth-L1.mvei | wc -l
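To get the other half of that comparison, you can count the input images too. I am assuming here that the originals sit in an images directory under the project folder, which is the standard ODM layout (it may vary by version):

find ./images -type f | wc -l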

In my case, I have 2,485 depthmaps, or roughly 2/3 of my images processed. Looks like I am 6 days into a 9-day process before we get done with the MVE step.

I guess I will wait until Monday to check again…

Reconstructing cliffs in OpenDroneMap, or how to beat LiDAR at its own game (part 2)

 

(Reposted from smathermather.com)

In the beginning

In a previous blog post, we explored how we can quite effectively derive terrain models using drones over deciduous, winter scenes. We ran into some limitations in the quality of the terrain model: the challenge was removing the unwanted features (things like tree trunks) while retaining wanted features (large rock features).

I concluded the post thusly:

For our use case, however, we can use the best parameters for this area, take a high touch approach, and create a really nice map of a special area in our parks for very low cost. High touch/low cost. I can’t think of a sweeter spot to reach.

Good parameters for better filtering

In the end, the trick was to extract as good a depthmap as possible (depthmap-resolution: 1280 in my case), set the point cloud filtering (Simple Morphological Filter, or SMRF) smrf-window and smrf-threshold to 3 meters so we only filter things like tree trunks, and set ignore-gsd: true to ensure we keep the highest quality data all the way through the toolchain.

Full list of processing settings:

smrf-window: 3, mesh-octree-depth: 11, orthophoto-resolution: 1.8, dtm: true, dem-resolution: 7, ignore-gsd: true, dsm: true, max-concurrency: 8, camera-lens: brown, depthmap-resolution: 1280, smrf-threshold: 3, rerun-from: dataset
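For command-line users, the same settings translate roughly into ODM flags like this (a sketch assuming the standard docker invocation; boolean settings become bare flags):

docker run -ti --rm -v /my/project:/datasets/code opendronemap/odm \
    --project-path /datasets \
    --depthmap-resolution 1280 --camera-lens brown \
    --smrf-window 3 --smrf-threshold 3 \
    --mesh-octree-depth 11 --orthophoto-resolution 1.8 \
    --dtm --dsm --dem-resolution 7 --ignore-gsd \
    --max-concurrency 8 --rerun-from dataset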

Results

How well do the new settings work? Here’s the old vs. the new, including contours:

Comparison of old and new settings showing much smoother terrain model and contours

This is a much less noisy result. Unfortunately, I ran it at the wrong resolution, so I am rerunning at full resolution now and hope to see something similar.



Reconstructing cliffs in OpenDroneMap, or how to beat LiDAR at its own game

 

From the top of Whipps Ledges at Hinckley Reservation on November 16, 2016 (Kyle Lanzer/Cleveland Metroparks)

Reposted from smathermather.com

LiDAR and photogrammetric point clouds

If we want to understand terrain, we have a pricey solution and an inexpensive one. For a pricey and well-loved solution, LiDAR is the tool of choice. It is synoptic, active (and therefore usable day or night), increasingly affordable (but still quite expensive), and works around even thick and tall evergreen vegetation (compare Oregon’s LiDAR specifications with the US federal ones, and you’ll understand that sometimes you have to turn the LiDAR all the way up to 11 to see through vegetation).

For a comparatively affordable solution, photogrammetrically derived point clouds and the resultant elevation models, like the ones we get from OpenDroneMap, are sometimes an acceptable compromise. Yes, they don’t work well around vegetation in thickets, forests, and other continuous vegetation covers, but with a few-hundred-dollar drone, a decent camera, and a bit of field time, you can quickly collect some pretty cool datasets.

As it turns out, sometimes we can collect really great elevation datasets derived from photogrammetry under just the right conditions. More about that in a moment: first let’s talk a little about the locale:

Sharon Conglomerate and Whipps Ledges, Hinckley Reservation

One of my favorite rock formations in Northeast Ohio is Sharon Conglomerate. A mix of sandstone and proper conglomerate, Sharon is a stone in NEO that provides wonderful plant and animal habitat and, not coincidentally, is a source of coldwater springs, streams, and cool wetland habitats across the region. A quick but good overview of the geology of this formation can be found here:

Mapping conglomerate

One of the conglomerate outcrops in Cleveland Metroparks is Whipps Ledges in Hinckley Reservation. It’s a favorite NEO climbing location, great habitat, and a beautiful place to explore. We wanted to map it with a little more fidelity, so we did a flight in August hoping to see and map the rock formations in their glorious detail:

Overall orthophoto of Whipps Ledges from August 2019
Digital surface model of the forest over Whipps Ledges from August 2019
Inset image of Whipps Ledges from August 2019
Inset digital surface model of the forest over Whipps Ledges from August 2019

Unfortunately, as my geology friends and colleagues like to joke, to map out the conglomerate, we need to “scrape away the pesky green vegetation stuff first”. We don’t want to do this, of course — this is a cool ecological place because it’s a cool geological place! It just happens to be a very well vegetated rocky outcrop. The maple, beech, oak and other trees there take full advantage of the lovely water source the conglomerate provides, so we can’t even glean the benefits of mapping over sparse and lean xeric oak communities: this is a lush and verdant locale.

So yesterday, we flew Whipps Ledges again, but this time the leaves were off the trees. Even with leafless trees, it can still be a challenge to get a good sense of the shape of the landform: forest floors do not provide good contrast with the trees above them, and it can be difficult to get good reconstructions of the terrain.

But yesterday, we were lucky: a thin layer of snow everywhere provided the needed contrast without being thick enough to distort the height of the forest floor too much, and shadows from the low sun created great textures on the otherwise featureless snow that could be used in matching.

Image above the snowy forest on Whipps Ledges

The good, the bad, and the spectacular

The bad…

So, how are the results? Let’s start with the bad. The orthophoto is a mess. There is actually probably very little technically wrong with it: the stitching is good, the continuity is excellent, the variation between scenes non-existent, the visual distortions minimal. But it’s a bad orthophoto in that the high contrast between the trees and the snow, compounded with the shadows from the low, nearly cloudless sky, results in a noisy, difficult-to-read image. Bad data for an orthophoto in; bad orthophoto out.

Orthophoto from December 21 flight

The good

The orthophoto wasn’t our priority for these flights, however. We were aiming for good elevation models. How is our Digital Terrain Model (DTM)? It’s pretty good.

Photogrammetrically derived digital terrain model from drone imagery

The DTM looks good on its own, and even compares quite favorably with an (admittedly dated, 2006) LiDAR dataset. It is crisp, shows the cliff features better than the LiDAR dataset, and represents the landform accurately:

Comparison of the crisp and cliff-like OpenDroneMap digital terrain model and the blurry LiDAR DTM.

The spectacular

So, if the ortho is bad and the DTM is good, what is great? The DSM is quite nice:

Overview of digital Surface Model from December 21 flight

The DSM looks great. We get all the detail over the area of interest; each cliff face and boulder shows up clearly in the escarpment.

Constraining the elevation range to just those elevations around the conglomerate outcrop.
Constraining the elevation range to just those elevations around the conglomerate outcrop, inset 1
Constraining the elevation range to just those elevations around the conglomerate outcrop, inset 2

Improvements in the next iteration

The digital surface model is really quite wonderful. In it we can see many of the major features of the formation, including named features like The Island, a clear delineation of the Main Wall, and other features that don’t show in the existing terrain models.

Due to untuned filtering parameters, we filter out more of the features than we’d like in the terrain model itself. It would be nice to keep The Island and other smaller rocks that have separated from the primary escarpment. I expect that when we choose better parameters for deriving the terrain model from the surface model points, we can strike a good balance and get an even better terrain model.

Animation comparing digital surface model and digital terrain model showing the loss of certain core features to Whipps Ledges due to untuned filtering parameters in the creation of the terrain model.

Beating LiDAR at its own game

It is probably not fair to say we beat LiDAR at its own game. The LiDAR dataset we have to compare against is 13 years old, and a lot has improved in the intervening years. That said, with a $900 drone, free software, 35 minutes of flying, and two batteries, we reconstructed a better terrain model for this area than the professionally produced 2006 version.

And we have control over all the final products. LiDAR filtering tends to remove features like this regardless of point density, because The Island and similar formations are difficult to distinguish in an automated fashion from buildings. Tune the model for one, and you remove the other.

For our use case, however, we can use the best parameters for this area, take a high touch approach, and create a really nice map of a special area in our parks for very low cost. High touch/low cost. I can’t think of a sweeter spot to reach.

Choosing good OpenDroneMap parameters

 

Introduction

I had an interesting question recently at a workshop: “What parameters do you use for OpenDroneMap?” Now, OpenDroneMap has a lot of configurability, lots of different parameters, and it can be difficult to sift through to find the right parameters for your dataset and use case. That said, the defaults tend to work pretty well for many projects, so I suspect (and hope) there are a lot of users who never have to worry much about these.

The easiest way to proceed is to use some of the pre-built defaults in WebODM. These drop-downs let you take advantage of a few different settings combined and abstracted away for convenience, whether settings for processing Multispectral data, doing a Fast Orthophoto, flying over Buildings or Forest, etc.

You can also save your own custom settings. You will see at the bottom of this list “Steve’s Default”. This has a lot of the settings I commonly tweak from defaults.

Back to the question at hand: what parameters do I change and why? I’ll talk about 7 parameters that I regularly or occasionally change.

The Parameters

Model Detail

Occasionally we require a little more detail (sometimes we also want less!) in our 3D models from OpenDroneMap. Mesh octree depth is one of the parameters that helps control this. A higher number gives us higher detail. But, there are limits to what makes sense to set for this. I usually don’t go any higher than 11 or maybe 12.

Sylvain Lefebvre - PhD thesis

Elevation Models

DTM/DSM

Often with a dataset, I want to calculate a terrain model (DTM) or surface model (DSM) or both as part of the products. To ensure these calculate, we set the DTM and DSM flags. The larger category for DTM and DSM is Digital Elevation Model, or DEM. All flags that affect settings for both DTM and DSM are named accordingly.

Ignore GSD

OpenDroneMap often does a good job guessing what resolution our orthophoto and DEMs should be. But it can be useful to specify this and override the calculations if they aren’t correct. ignore-gsd is useful for this.

DEM Resolution

DEM Resolution applies to both DTMs and DSMs. A useful rule of thumb for this setting is 1/4th the orthophoto resolution. So, if you flew the orthophoto at a height that gives you 1cm-resolution ortho imagery, your dem-resolution should probably be 4cm.

Depthmaps

Depthmap resolution

A related concept is depthmap resolution. Depthmaps can be thought of as little elevation models from the perspective of each image pair. The resolution here is set in image space, not geographic coordinates. For Bayer-style cameras (most cameras), aim for no more than 1/2 the linear resolution of the images. So if your images are 6000×4000 pixels, you don’t want a depthmap value greater than 3000.

That said, 1/4 is usually a better, less noisy value, and depthmap calculations can be very computationally expensive. I rarely set this above 1024 pixels.

Camera Lens Type

I saved the best for last: if you’ve made it this far in the blog post, this is the most important tip. In 2019, OpenSfM, our underlying structure from motion library, introduced the Brown-Conrady camera model as an option. The default camera type is auto, which usually results in the use of a perspective camera model, but Brown-Conrady is much better. Set your camera-lens to brown, and you will get much better results for most datasets. If it throws an error (which does happen with some images), just switch back to auto and rerun. Brown will be the default in the near future.
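As a sketch of that try-then-fall-back advice on the command line (same hypothetical project paths as in the examples above):

# try the Brown-Conrady camera model first
docker run -ti --rm -v /my/project:/datasets/code opendronemap/odm \
    --project-path /datasets --camera-lens brown
# if the reconstruction errors out, fall back to auto and rerun
docker run -ti --rm -v /my/project:/datasets/code opendronemap/odm \
    --project-path /datasets --camera-lens auto --rerun-from dataset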