Ok, so far what is available is broken code, but that will be fixed. In the meantime, you can check it out in this pull request. It calculates the full depthmaps but does not yet write the PLY file needed for subsequent steps.
The trick to improving DEMs was already in our build process: for elevation models, MVE’s dmrecon utility is a slower but otherwise better option than smvs, the utility we are currently using. It provides more detail, is less “melty” (as one person described smvs to me), and overall gives us much better results. One theoretical disadvantage is that smvs can find results where there aren’t features, and thus does better gap filling. For drone mapping use cases, however, this remains a predominantly theoretical rather than practical limitation.
DEM Improvements continued, this time with an animated gif, a little hillshading, and a modicum of intelligent smoothing
We have some cool new-to-us approaches in the works for digital elevation models. Here’s a quick teaser and comparison old to new. This is a digital surface model over some buildings, fences, and trees in Dar es Salaam:
Blurry DSM image — Old approach
Nice DSM image — New approach
Water running downhill is a challenge for elevation models derived from drone imagery, for a variety of reasons, some fixable, some unavoidable. The unavoidable ones include the limitations of digital terrain models derived from photogrammetric point clouds, which don’t penetrate vegetation the way LiDAR does. The fixable ones we seek to address in OpenDroneMap.
Animation of flow accumulation on poorly merged terrain model, credit Petrasova et al, 2017
The fixable problems include poorly merged and misaligned elevation models. Dr. Anna Petrasova’s GRASS utility r.patch.smooth is one solution to this problem when merging synoptic aerial lidar-derived elevation models with patchy updates from UAVs.
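To make the smoothing idea concrete, here is a minimal NumPy sketch: a linear cross-fade across the overlap between two elevation strips, so the merged model has no abrupt step at the seam. This is only an illustration of the concept; r.patch.smooth itself uses more sophisticated distance-based weighting and overlap handling, and all names below are made up for the example.

```python
import numpy as np

def blend_overlap(left, right, overlap):
    """Merge two DEM strips that share `overlap` columns.

    Within the overlap, elevations are linearly cross-faded from the
    left model to the right one, removing the seam between them.
    """
    h, wl = left.shape
    _, wr = right.shape
    out = np.empty((h, wl + wr - overlap))
    out[:, :wl - overlap] = left[:, :wl - overlap]
    out[:, wl:] = right[:, overlap:]
    # Linear weights ramping 0 -> 1 across the overlap columns.
    w = np.linspace(0.0, 1.0, overlap)
    out[:, wl - overlap:wl] = (1 - w) * left[:, wl - overlap:] + w * right[:, :overlap]
    return out
```

With a flat left strip at 0 m and a flat right strip at 4 m, the merged result steps gradually through intermediate elevations instead of jumping at the join.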
Another fix for problematic datasets is built into the nascent split-merge approach for OpenDroneMap. In short, that approach takes advantage of OpenSfM’s extremely accurate and efficient hybrid structure from motion (SfM) approach (a hybrid of incremental and global SfM) to find all the camera positions for the data, and then split the data into chunks to process in sequence or in parallel (well, ok, the tooling isn’t there for the parallel part yet…). These chunks stay aligned through the process, thus allowing us to put them back together at the end.
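As a toy illustration of the splitting idea (the real split-merge tooling builds on OpenSfM; the function and names below are hypothetical), the camera positions recovered by SfM can be binned into ground cells, and each cell processed independently while keeping the shared coordinate frame that lets the pieces be merged later:

```python
import math

def split_into_submodels(shots, cell_size=100.0):
    """Group reconstructed camera positions into square ground cells.

    `shots` maps image name -> (x, y) ground position from the SfM step.
    Because every submodel keeps the coordinates SfM assigned, the
    chunks stay aligned relative to each other and can be merged
    after independent (sequential or parallel) processing.
    """
    submodels = {}
    for name, (x, y) in shots.items():
        key = (math.floor(x / cell_size), math.floor(y / cell_size))
        submodels.setdefault(key, []).append(name)
    return submodels
```

Images whose cameras fall in the same cell end up in the same chunk; neighboring chunks can later be blended along their shared edges.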
For a while now, we’ve been in an awkward position with respect to the nickname “split-merge”: it lacks a lot of merge features. You can merge the orthophotos with it, but not the remaining datasets. While this will be true for a while longer, we have been experimenting a bit with merging these split datasets (thanks to help from Dr. Petrasova). Here, for your viewing pleasure, is the first merged elevation model from the split-merge approach:
Digital surface model of Dar es Salaam, merged from 4 separately processed split-merge submodels.
More to come!
After much anticipation, we are proud to announce the 0.4 release of OpenDroneMap. We have given a preview of what was coming in a previous blog post and in this post we want to expand on what you can get today from the program.
Attention! We have normalized --orthophoto-resolution and --dem-resolution to use the de facto industry-standard unit of cm / pixel. This is a breaking change, so please update your scripts if the output resolutions are lower than what you would expect.
Much Denser Point Clouds
With the proper settings (just increase --depthmap-resolution) we can now easily achieve 20x point density, much better point coverage, and better memory usage. This is thanks to the fantastic work of Dakota, which brought in Shading-Aware Multi-View Stereo as a replacement for PMVS.
We have made two major improvements to increase the quality of our orthophotos: a new 2.5D meshing approach, which is both faster and yields improved building outlines, and a modification of the texturing program to adjust the priority of nadir shots (which users can control via --texturing-nadir-weight). With these two modifications in place, we are seeing visual improvements across a wide range of datasets.
The new version includes an improved screened Poisson reconstruction algorithm and an additional cleanup step, which means better 3D meshes, fewer artifacts, and better memory usage.
Faster DSM/DTM Generation
Much denser point clouds meant that existing performance problems in DEM generation became really apparent and turned into a bottleneck. This is why we worked to both parallelize and speed up key areas of the DEM pipeline, which now yields results orders of magnitude faster, especially on multi-core machines.
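A sketch of the tile-based parallelism idea (not ODM’s actual implementation; all names are illustrative): each tile of the point cloud is gridded independently, here keeping the maximum elevation per cell as a toy DSM, and the tiles are handled concurrently before being collected in order.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def rasterize_tile(args):
    """Grid one tile of a point cloud: keep the highest elevation
    per cell, i.e. a toy digital surface model."""
    points, x0, y0, size, res = args
    grid = np.full((size, size), np.nan)
    for x, y, z in points:
        i, j = int((y - y0) / res), int((x - x0) / res)
        if 0 <= i < size and 0 <= j < size:
            if np.isnan(grid[i, j]) or z > grid[i, j]:
                grid[i, j] = z
    return grid

def parallel_dsm(tiles):
    """Rasterize independent tiles concurrently, collecting results
    in order. (Threads keep the sketch self-contained; the real
    pipeline spreads work across processes and cores.)"""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(rasterize_tile, tiles))
```

Because tiles share no state, this kind of work divides cleanly across cores, which is where the multi-core speedups come from.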
As noted above, we have normalized --orthophoto-resolution and --dem-resolution to use the de facto industry-standard unit of cm / pixel. Previous units were pixel / meter and meter / pixel. OpenDroneMap finally joins the rest of the industry in terms of standard units.
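For script authors updating old values, the conversion is simple arithmetic. A small helper, assuming (per the order listed above) that --orthophoto-resolution previously used pixel / meter and --dem-resolution used meter / pixel:

```python
def px_per_meter_to_cm_per_px(px_per_m):
    """Convert the old pixel / meter value to the new cm / pixel unit."""
    return 100.0 / px_per_m

def m_per_px_to_cm_per_px(m_per_px):
    """Convert the old meter / pixel value to the new cm / pixel unit."""
    return m_per_px * 100.0
```

For example, an old orthophoto resolution of 20 px/m and an old DEM resolution of 0.05 m/px both correspond to 5 cm/pixel under the new convention.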
Ground Sampling Distance
With this optimization, texturing is faster, memory usage is lower, and if you try to process at a resolution higher than what your images allow, OpenDroneMap will automatically choose the highest resolution for you. No more guesswork. If you want the highest resolution possible for your dataset, simply set --orthophoto-resolution to 0.0001 and OpenDroneMap will do the rest.
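For reference, the standard photogrammetric formula for ground sampling distance, together with a helper illustrating the capping behavior described above (whether ODM computes this exactly the same way internally is an implementation detail, and the names here are illustrative):

```python
def ground_sampling_distance(sensor_width_mm, focal_length_mm,
                             flight_height_m, image_width_px):
    """Ground sampling distance in cm / pixel.

    GSD = (sensor width * flight height * 100) / (focal length * image width)
    """
    return (sensor_width_mm * flight_height_m * 100.0) / (
        focal_length_mm * image_width_px)

def effective_resolution(requested_cm_px, gsd_cm_px):
    """Cap a requested output resolution at what the imagery supports:
    requesting better (smaller) than the GSD just returns the GSD."""
    return max(requested_cm_px, gsd_cm_px)
```

For instance, a 13.2 mm sensor with an 8.8 mm lens and 5472-pixel-wide images, flown at 100 m, has a GSD of roughly 2.7 cm/pixel; requesting 0.0001 cm/pixel would simply yield that value.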
Please update, test and report any issues you find directly on GitHub.
Now that OpenDroneMap has a new website, we have a blog to go with it. For the first post, we will repost something from https://smathermather.com on the latest upcoming changes to OpenDroneMap:
One of the greatest challenges with OpenDroneMap (ODM) is getting great results out of sparse data. I used to describe this as getting good data out of mediocre inputs, but this isn’t a fair descriptor, and here I’ll make a public apology: just because I have the time to fly with lots of overlap most of the time doesn’t mean it should be a requirement for everyone else. Ok. Now that that is off my chest, let’s talk about upcoming improvements which address this and more.
What does “Better everything” mean?
In the title, I say “Better everything”. The secret to better ODM data (and Piero Toffanin can take nearly all the credit for these improvements and discoveries of how) is to improve nearly every step in the pipeline. Let’s talk in general terms about what that means:
We can roughly break the pipeline into the following steps:
1. Extract features
2. Match features
3. Structure from motion
4. Dense matching / multi-view stereo
5. Meshing
6. Texturing
7. Final products: orthophoto and digital surface model
There are other parts and pieces here — spatial referencing, undistorting of imagery, etc., but this is a good enough outline for discussion purposes.
As it happens, our underlying process is pretty good for steps 1 through 3. OpenSfM is our library of choice, and it handles the SfM functions quite favorably. At some point I would like to compare the positions of a really good RTK dataset to what we get from SfM, and maybe even do some testing with a synthetic dataset for a robust test of our SfM quality, but on the whole we don’t have notable deficiencies in this part of the tool.
Focusing on the parts to improve:
This just leaves us the other half… steps 4 through 7. The first step that is important here is dense matching. After a considerable amount of parameter testing and tuning, we concluded that OpenSfM’s dense matching isn’t up to snuff yet. This isn’t necessarily a critique: it’s one of the younger parts of the toolchain, and more mature FOSS solutions are already out there.
So, we look to improve our dense matching / multi-view stereo. We are getting some favorable results by changing the underlying multi-view stereo library. These results compare favorably to Agisoft and Pix4D, two closed-source industry standards for UAV photogrammetry:
Meshing and Texturing:
Now, for steps 5 and 6, we need an improved meshing procedure and improved texturing in order to get great results in our final output. Here, we can evaluate with orthophotos as a final product: