Ok, so far what is available is broken code, but that will be fixed; you can check it out in the meantime in this pull request. It calculates the full depth maps, but does not yet write the PLY file needed for subsequent steps.
The trick to improve DEMs was already in our build process: for elevation models, MVE’s dmrecon utility is a slower but otherwise better option than smvs, the utility we are currently using. It provides more detail, is less “melty” as one person described smvs to me, and overall gives us much better results. A theoretical disadvantage of dmrecon is that smvs can find results where there aren’t features, and thus does better gap filling. For drone mapping use cases, however, this remains a predominantly theoretical rather than practical limitation.
DEM Improvements continued, this time with an animated gif, a little hillshading, and a modicum of intelligent smoothing
We have some cool new-to-us approaches in the works for digital elevation models. Here’s a quick teaser and comparison old to new. This is a digital surface model over some buildings, fences, and trees in Dar es Salaam:
Blurry DSM image — Old approach
Nice DSM image — New approach
Water running downhill is a challenge for elevation models derived from drone imagery. This is for a variety of reasons, some fixable, some not. The unavoidable ones include the challenges of digital terrain models derived from photogrammetric point clouds, which don’t penetrate vegetation the way LiDAR does. The fixable ones we seek to address in OpenDroneMap.
Animation of flow accumulation on poorly merged terrain model, credit Petrasova et al, 2017
The fixable problems include poorly merged and misaligned elevation models. Dr. Anna Petrasova’s GRASS utility r.patch.smooth is one solution to this problem when merging synoptic aerial LiDAR-derived elevation models with patchy updates from UAVs.
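The core idea behind this kind of seam smoothing can be sketched in a few lines of numpy: cross-fade the high-resolution patch into the base DEM as a function of distance from the patch edge, so derived products like flow accumulation don’t see a sharp step at the seam. This is only an illustration of the distance-weighted blending concept, not r.patch.smooth’s actual algorithm, and the function and parameter names here are hypothetical:

```python
import numpy as np

def blend_dems(dem_a, dem_b, smooth_dist):
    """Blend dem_a (e.g. a UAV patch, NaN outside its footprint) into
    dem_b (e.g. a synoptic aerial LiDAR DEM of the same grid).

    Within `smooth_dist` cells of the patch edge the two surfaces are
    linearly cross-faded; deeper inside, dem_a is used as-is. A toy
    sketch of the distance-weighted idea, not r.patch.smooth itself.
    """
    valid = ~np.isnan(dem_a)
    # Distance (in cells, capped at smooth_dist) from each valid cell
    # to the patch boundary, via repeated 4-neighbour erosion.
    dist = np.zeros(dem_b.shape)
    shrunk = valid.copy()
    for d in range(1, int(smooth_dist) + 1):
        core = shrunk.copy()
        core[1:, :] &= shrunk[:-1, :]
        core[:-1, :] &= shrunk[1:, :]
        core[:, 1:] &= shrunk[:, :-1]
        core[:, :-1] &= shrunk[:, 1:]
        dist[core] = d
        shrunk = core
    weight = np.clip(dist / smooth_dist, 0.0, 1.0)  # 0 at edge, 1 inside
    out = dem_b.copy()
    out[valid] = (weight[valid] * dem_a[valid]
                  + (1.0 - weight[valid]) * dem_b[valid])
    return out
```

The real addon is considerably more sophisticated (it can, for example, adapt the blend width to the local difference between the surfaces), but the cross-fade above is the essence of why the seam disappears.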
Another fix for problematic datasets is built into the nascent split-merge approach for OpenDroneMap. In short, that approach takes advantage of OpenSfM’s extremely accurate and efficient hybrid structure from motion (SfM) approach (a hybrid of incremental and global SfM) to find all the camera positions for the data, and then splits the data into chunks to process in sequence or in parallel (well, ok, the tooling isn’t there for the parallel part yet…). These chunks stay aligned through the process, thus allowing us to put them back together at the end.
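The bookkeeping that makes this work can be illustrated with a toy example: as long as every chunk carries its global offset (in ODM, the shared camera positions from OpenSfM play this role), tiles processed independently recombine without misalignment. This sketch operates on a raster rather than on image sets, is not ODM’s actual implementation, and assumes dimensions evenly divisible by the tile size:

```python
import numpy as np

def split(dem, tile, overlap):
    """Yield chunks of a raster, each with its global (row, col) offset.

    Each chunk includes `overlap` extra cells on every side so that
    per-chunk processing has context, but the core region is what gets
    written back on merge. Toy sketch only; assumes dem dimensions are
    multiples of `tile`.
    """
    h, w = dem.shape
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            r0, c0 = max(r - overlap, 0), max(c - overlap, 0)
            r1, c1 = min(r + tile + overlap, h), min(c + tile + overlap, w)
            yield (r, c, r0, c0, dem[r0:r1, c0:c1])

def merge(chunks, shape, tile):
    """Reassemble chunks; alignment is free because offsets are global."""
    out = np.full(shape, np.nan)
    for r, c, r0, c0, t in chunks:
        # Write back only the core (non-overlap) region of each tile.
        out[r:r + tile, c:c + tile] = t[r - r0:r - r0 + tile,
                                        c - c0:c - c0 + tile]
    return out
```

Because every tile is addressed in the same global frame, a round trip of split-then-merge reproduces the original exactly, which is the property that lets independently processed submodels snap back together.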
For a while now, the nickname split-merge has put us in an awkward position: the tool lacks most of its merge features. You can merge the orthophotos with it, but not the remaining datasets. And while this will be true for a while longer, we have been experimenting a bit with merging these split datasets (thanks to help from Dr. Petrasova). Here, for your viewing pleasure, is the first merged elevation model from the split-merge approach:
Digital surface model of Dar es Salaam, merged from 4 separately processed split-merge submodules.
More to come!
Now that OpenDroneMap has a new website, we have a blog to go with it. For the first post, we will repost something from https://smathermather.com on the latest upcoming changes to OpenDroneMap:
One of the greatest challenges with OpenDroneMap (ODM) is getting great results out of sparse data. I used to describe this as getting good data out of mediocre inputs, but this isn’t a fair descriptor, and here I’ll make a public apology: just because I have the time to fly with lots of overlap most of the time doesn’t mean it should be a requirement for everyone else. Ok. Now that that is off my chest, let’s talk about upcoming improvements which address this and more.
What does “Better everything” mean?
In the title, I say “Better everything”. The secret to better ODM data (and Piero Toffanin can take nearly all the credit for these improvements and for discovering how to make them) is to improve nearly every step in the pipeline. Let’s talk in general terms about what that means:
We can roughly break the pipeline into the following steps:
- Extract features
- Match features
- Structure from motion
- Dense matching / Multi-view stereo
- Meshing
- Texturing
- Final products (orthophoto, digital surface model)
There are other parts and pieces here — spatial referencing, undistorting of imagery, etc., but this is a good enough outline for discussion purposes.
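The data flow through those steps can be sketched as a toy skeleton, with each stage as a placeholder function. These names are illustrative only, not OpenDroneMap’s actual API; the point is simply how each stage feeds the next:

```python
# Toy skeleton of the seven pipeline stages outlined above. Each stage
# just tags its inputs so the data flow is visible; real stages do the
# heavy lifting described in the text.

def stage(name):
    def run(*inputs):
        return {"stage": name, "inputs": inputs}
    return run

extract_features = stage("extract_features")    # step 1
match_features = stage("match_features")        # step 2
structure_from_motion = stage("sfm")            # step 3
dense_matching = stage("dense_matching")        # step 4
build_mesh = stage("meshing")                   # step 5
texture_mesh = stage("texturing")               # step 6
render_products = stage("final_products")       # step 7: ortho + DSM

def run_pipeline(images):
    features = [extract_features(img) for img in images]
    matches = match_features(features)
    reconstruction = structure_from_motion(images, matches)
    point_cloud = dense_matching(reconstruction)
    mesh = build_mesh(point_cloud)
    textured = texture_mesh(mesh, images)
    # The DSM comes from the point cloud, the orthophoto from the
    # textured model; both are bundled as the final products here.
    return render_products(textured, point_cloud)
```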
As it happens, our underlying process is pretty good for steps 1 through 3. OpenSfM is our library of choice, and it handles the SfM functions quite well. At some point I would like to compare the positions from a really good RTK dataset to what we get from SfM, and maybe even do some testing with a synthetic dataset for a robust test of our SfM quality, but on the whole we don’t have notable deficiencies in this part of the tool.
Focusing on the parts to improve:
This just leaves us the other half… steps 4 through 7. The first step that is important here is dense matching. After a considerable amount of parameter testing, tuning, etc., we concluded that OpenSfM’s dense matching isn’t up to snuff yet. This isn’t necessarily a critique: it’s one of the younger parts of the toolchain, and more mature FOSS solutions are out there already.
So, we look to improve our dense matching / multi-view stereo. We are getting some favorable results by changing the underlying multi-view stereo library. These results compare favorably to Agisoft and Pix4D, two closed-source industry standards for UAV photogrammetry:
Meshing and Texturing:
Now, for steps 5 and 6, we need an improved meshing procedure and improved texturing in order to get great results in our final output. Here, we can evaluate with orthophotos as a final product: