We strive to make ODM as fast as possible. GPUs are very good at performing parallel calculations on blocks of data. While they can’t be employed at every stage of the pipeline, they can help in several areas.
GPU-based feature extraction has been in ODM for a while (if you ventured to use the opendronemap/odm:gpu docker image) and has given some modest speed improvements when run on consumer hardware compared to its CPU implementation. But perhaps the most useful area for leveraging the GPU in a photogrammetry pipeline is point cloud densification.
WebODM Native, which already contains the latest ODM binaries (Windows)
Early benchmarks point to a ~10x speed improvement in point cloud densification times! Considering that point cloud densification is (was?) the longest step in the processing pipeline, we’re talking about a significant improvement.
You will need an NVIDIA card to make use of the new GPU-based point cloud densification code.
Check it out and let us know in the community forum if you notice improvements!
This is a guest post authored by Zoltán Siki, the creator of the awesome Find-GCP project.
Have you ever tried to create the gcp_list.txt file for a project with hundreds of images? Finding each Ground Control Point (GCP) in 6-10 images is a time-consuming and boring task, and the chance of misidentification is high.
Some pieces of commercial software have an automatic GCP recognition system based on image processing algorithms, but no similar solution can be found in the open-source UAV image processing world.
I came across ArUco codes some years ago and used these black-and-white square markers for indoor navigation and automatic movement detection. When I started to use ODM I realized the same technology could be used to find the image coordinates of GCPs. There is a contrib package in the open-source OpenCV library to detect marker positions and poses in images. The detection can be automated by using a different ArUco marker for each GCP.
At the Geo4All lab of the Department of Geodesy and Surveying at the Budapest University of Technology and Economics, a small project was started to implement a solution in Python. The code and some instructions are available on GitHub. Its usage is straightforward:
Install Python3, OpenCV and OpenCV contrib on your machine
Download the source code from GitHub (zip or clone)
Generate ArUco markers (aruco_make.py or dict_gen_3x3.py) and print them in a suitable size
Put the markers on the field before making UAV images and measure coordinates with the necessary accuracy
Make the flight
Collect the GCP’s 3D coordinates in a TXT file
Run the gcp_find.py script to detect the markers in the images and generate gcp_list.txt
Add the generated gcp_list.txt to your ODM project and start ODM processing
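To give a flavor of the detection step above: OpenCV's ArUco detector returns the four corner pixel coordinates of each marker it finds, and a natural choice for the GCP's image coordinate is the marker's centroid. The sketch below shows only that last computation; taking the centroid as the reference point is an illustrative assumption, and Find-GCP's exact logic may differ.

```python
def marker_center(corners):
    """Average the four (x, y) pixel corners of one detected marker."""
    xs = [p[0] for p in corners]
    ys = [p[1] for p in corners]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Example: a marker whose corners were detected at these pixel coordinates
print(marker_center([(100, 200), (140, 200), (140, 240), (100, 240)]))  # (120.0, 220.0)
```

The resulting image coordinate, paired with the marker ID and the surveyed 3D coordinate, is exactly what a gcp_list.txt row needs.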
Currently the project offers command-line tools plus a simple visual check. A GUI to edit and review GCP matching results, possibly integrated into WebODM, would be a useful addition. More details are available on the GitHub page of the project and in an article in the Baltic Journal of Modern Computing.
Contributions to this project are welcome: usage experiences, case studies, bug reports and fixes, enhancements…
Whenever ghosting occurs, the problem is related to band alignment. In multispectral datasets, sensors such as the Sentera 6x take multiple pictures at once using different cameras, each capable of capturing a different part of the light spectrum. This enables us to have images for various bands such as red, green, blue and near-infrared. The images, however, are not perfectly aligned, due to the relative positions of the cameras and slightly different lens distortions.
ODM now includes state-of-the-art automatic band alignment using a hybrid approach that combines feature-based alignment with Enhanced Correlation Coefficient (ECC) maximization. The alignment parameters are estimated from a sample computed on a subset of the input images and then applied to the entire reconstruction.
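ODM's real implementation relies on feature matching and ECC maximization in OpenCV; as a deliberately simplified, hypothetical illustration of the "maximize correlation" idea in one dimension, we can slide one band against another and keep the shift that correlates best:

```python
def shifted_corr(ref, band, s):
    """Correlate ref[i] against band[i + s] over the valid overlap."""
    total = 0
    for i in range(len(ref)):
        j = i + s
        if 0 <= j < len(band):
            total += ref[i] * band[j]
    return total

def best_shift(ref, band, max_shift=3):
    """Return the integer shift of `band` that best matches `ref`."""
    return max(range(-max_shift, max_shift + 1),
               key=lambda s: shifted_corr(ref, band, s))

# In this toy example the second "band" is the first one shifted by a pixel.
print(best_shift([0, 0, 5, 9, 5, 0, 0], [0, 5, 9, 5, 0, 0, 0]))  # -1
```

The real problem is of course 2D, sub-pixel, and includes lens distortion, which is why a simple shift search is not enough in practice, but the principle of searching for the transform that maximizes correlation is the same.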
After alignment, things start to look much better!
After aligning the bands, we simply make sure that our orthophoto / texturing generation process uses the sections of one band consistently with all the others, and the ghosts are officially… busted!
If you’ve used OpenDroneMap in the past to process multispectral datasets and you’ve been less-than-enthusiastic about the results… it’s time to give it another shot!
Chapter 2. Speed
We’ve also gotten a lot faster at processing these datasets, courtesy of the awesome team at Mapillary, which introduced more efficient multi-threading and a change in how we process multispectral datasets: they now require ~1/3 of the time compared to before. That’s a lot faster!
ODM is now also capable of generating Entwine Point Cloud (EPT) point clouds more quickly and with less memory usage, thanks to the awesome team at Hobu and their new Untwine software.
Chapter 3. AGPL License Switch
Starting from ODM 2.3.0, NodeODM 2.1.0 and WebODM 1.6.0, all three projects join the family of free software projects (which already included ClusterODM) that use the AGPL license.
This move is designed to continue guaranteeing the software freedom of our users in a world of ever-increasing cloud services. Despite some questionable claims to the contrary by certain large organizations, AGPL is an excellent license for an open source project such as OpenDroneMap (as this humorous blog post by Drew DeVault explains). This choice will continue to help OpenDroneMap be as open and inclusive as it possibly can.
Today we’re excited to announce a new major release of ODM! What have we been working on? Well a lot of things. The two most important ones are the ones we hope you won’t notice, because they don’t affect functionality, but have been part of necessary “infrastructure” updates to make sure that ODM continues to work in the future. Namely:
We’ve upgraded the codebase from Python 2 to Python 3. Python 2 has been deprecated and will not receive updates past 2020. With Python 3 support, ODM can continue moving forward with the rest of the Python ecosystem.
We’ve upgraded the base OS target from Ubuntu Linux 16.04 to 18.04. 18.04 will continue to receive extended security maintenance updates until 2028, and compatibility with 18.04 has been frequently requested by our community.
If you’re using docker, these changes are (should be) transparent. If you’re running ODM natively, you will need to upgrade your base system to 18.04 before updating ODM (or continue using a 1.x release of ODM until you decide to upgrade).
Aside from these important under-the-hood upgrades, we couldn’t make an important release such as a 2.0 release without adding something new and shiny!
Image Geolocation Files allow you to specify or override the GPS location of your images without having to fiddle with tools such as exiftool. This is different from using Ground Control Point files: with these you simply specify the location of the camera centers instead of the location of points on the ground.
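For reference, an image geolocation file is a small text file: the first line declares the coordinate system, and each subsequent line maps an image to coordinates. The filenames and coordinates below are made up, and the format shown is a minimal sketch; see the ODM documentation for the full set of optional columns (such as angles and accuracies):

```text
EPSG:4326
DJI_0001.JPG -91.9942096 46.8425252 198.6
DJI_0002.JPG -91.9938293 46.8424584 198.6
```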
Image Masks allow you to mask out areas that you don’t want to include in your reconstruction. This is useful in many scenarios. For example, during cell tower inspections, photos typically include parts of the sky, which end up creating strange artifacts and negatively affect the reconstruction. By using masks, one can delineate areas to exclude from a reconstruction, thus obtaining a clean and more accurate result.
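Masks are companion images whose black pixels mark areas to exclude. Assuming the naming convention of a `_mask` suffix (an assumption here; check the ODM documentation for the convention your version expects), a project's images folder might look like:

```text
images/
  tower_001.JPG        original photo
  tower_001_mask.JPG   mask: black pixels are excluded from the reconstruction
```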
A new option, --feature-quality, automatically adjusts the image sizes for feature extraction based on predetermined ratios instead of relying on user input or making assumptions about the image size.
Static tiles that were previously computed in NodeODM for use in viewers such as Leaflet or OpenLayers can now be generated via the --tiles option directly in ODM.
Aside from the speed improvements of having updated Python, PDAL, numpy and many other libraries, we’ve specifically reduced memory usage in split-merge to handle even larger DEMs, improved DEM compression and improved speed and network stability in ClusterODM/NodeODM.
We’ve fixed numerous bugs and increased overall stability. See the related PRs for geeky details (#1156 and #124). We also cleaned up the ODM repository from old, large binary objects that were inflating the git repository. A git clone previously took 1.1GB of downloads, now it takes only 12MB. If you forked ODM, read this announcement as it affects you.
Give ODM 2.0 a try! If you find any issues, please report them on GitHub.
For some time, OpenSfM, the photogrammetry library maintained by Mapillary that underpins OpenDroneMap, has had support for 360 cameras. We are working on a project at the moment with some great engineering students from Case Western Reserve University on building a next generation 360 camera for photogrammetry, but while that project wraps up, I wanted to test what can be done with a commodity unit.
So, with hunker-down-in-place orders du jour, I opted to do my initial tests in between my house and the neighbors’:
I know I probably don’t need my mask there, but I have nasty tree allergies, so I am taking advantage of the normalization of mask wearing to keep my lungs healthier than they are most Spring seasons.
First, the why-what?!
In the increasingly tenuously named OpenDroneMap project, we have seen some interesting alternatives to drones in use — general photogrammetry that I owe more blog posts on (in the meantime, you can sneak a peek at https://smathermather.com — I have just been too busy to reblog them yet). From tiny pits and seeds of hard-to-identify plants to animal skulls, there are some interesting non-drone use cases for good photogrammetry.
Are drones enough?
Drone mapping is a really exciting and useful innovation: it allows for mapping large areas with low capital investment, offers an opportunity to leverage local talent, can often capture at a faster cadence and higher resolution, and has a small fossil fuel footprint compared with using manned aircraft. But the detail available is not always the detail needed. Consider dense urban locales, especially ones that are also thickly vegetated: drone mapping may not always be enough for capturing the bottom-of-the-urban-canyon elevations needed for certain detailed hydrological analyses.
360 → 3D?
With a 360 camera and enough walking, can we create a synoptic understanding of our world that augments what we are doing now with drones? Tests from my driveway are very promising.
One of our community members, Zach Ryall, has covered OpenDroneMap in a recent article for AOPA (Aircraft Owners and Pilots Association).
Maps and mosaics are among the most powerful products a drone camera can produce, but producers of polished, user-friendly software made for professionals aren’t giving freebies anymore. Open-source software offers hobbyists and soloists an affordable alternative.
Small correction on the attribution of credits in the article: OpenDroneMap was founded by Stephen Mather, not me (I’d like to set this record straight). If it wasn’t for Steve’s commitment and vision for the project, we wouldn’t be reading this post (or that article, or anything OpenDroneMap related, for that matter).
Thanks to the tireless work of the folks behind the Zanzibar Mapping Initiative, I have been exploring the latest settings in OpenDroneMap for processing data over Stone Town. I managed to get some nice looking orthos from the dataset:
But, excitingly, I was able to extract some nice looking surface models from the dataset too. This required using the Brown-Conrady model that recently got added to OpenSfM:
This post is a small homage to the late His Majesty Sultan Qaboos. Given the strong affinity and shared history between Zanzibar and Oman, it seems fitting to post these.
Edit: the dreaded “Reconstructing all views” message has been replaced with a progress monitor! But knowing how to dig into the back end and explore the machine that does the work is always a helpful skill to have…
We’ll learn how to check how far along a process is when it is calculating depthmaps and giving no feedback. Along the way, we’ll also learn how to list the docker containers running for WebODM and log into them in order to inspect the status of processing data.
So, you threw a really big dataset at WebODM, and now you are waiting. It’s been hours, maybe days, and it’s stuck on the dreaded “Reconstructing all views”:
Did you make the depthmap resolution really high, because you wanted really detailed data? Did you make it too high? How long is this going to take?
I have had this dilemma too. Sometimes I just get disgusted with myself and my predilection for choosing ridiculously computationally expensive settings, kill the process, turn the settings down a bit, and restart. But this can waste hours or days of processing, which feels wrong.
We could do the alternative. We could poke under the hood of WebODM and see how it’s progressing. For this project that’s been running for 146+ hours, this is what I decided to do.
The depthmaps are being computed in MVE, which can give us really detailed information about progress, but unfortunately it makes the logs a real mess, so we have it logging nothing. Let’s see how we can get around this and check in on our status.
Finding the docker instance and logging in:
First, we log into the machine where WebODM is running. We need a list of the running docker containers, as we need access to the correct container to look at how things are progressing.
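Listing the containers is one command, run on the host where WebODM is installed:

```shell
docker ps
```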
The result should give us something like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4b0659fe6761 opendronemap/webodm_webapp "/bin/bash -c 'chmod…" 38 hours ago Up 38 hours 0.0.0.0:443->8000/tcp, 0.0.0.0:80->8080/tcp webapp
0e26ebf918f2 opendronemap/webodm_webapp "/bin/bash -c '/webo…" 38 hours ago Up 38 hours worker
1954c5136d44 redis "docker-entrypoint.s…" 38 hours ago Up 38 hours 6379/tcp broker
bdc69502ca50 opendronemap/webodm_db "docker-entrypoint.s…" 38 hours ago Up 38 hours 0.0.0.0:32769->5432/tcp db
81f401a0e138 opendronemap/nodeodm "/usr/bin/nodejs /va…" 38 hours ago Up 38 hours 0.0.0.0:3000->3000/tcp webodm_node-odm_1
In most cases, we want to access the webodm_node-odm_1 container. To do this we use docker exec as follows:
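A minimal way in, taking the container name from the NAMES column of the listing above (whether the image ships bash or only sh may vary, so treat this as a sketch):

```shell
# Open an interactive shell inside the processing node's container
docker exec -it webodm_node-odm_1 bash

# Then, inside the container, look at the datasets being processed
cd /var/www/data
ls
```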
Typically, if we only have one process running, there will be only one dataset in the /var/www/data directory.
Checking our depthmap data from MVE:
For depthmaps nearly complete in MVE, there will be a file called depth-L1.mvei. We need to find out how many of these there are compared with the number of images that we need depthmaps for. We’ll use a combination of the find command and wc (word count):
find . -name depth-L1.mvei | wc -l
In my case, the count is 2,485, or roughly two-thirds of my images processed. Looks like I am 6 days into a 9-day process before we get done with the MVE step.
I guess I will wait until Monday to check again… .
In a previous blog post, we explored how we can quite effectively derive terrain models using drones over deciduous, winter scenes. We ran into some limitations in the quality of the terrain model: the challenge was removing the unwanted features (things like tree trunks) while retaining wanted features (large rock features).
I concluded the post thusly:
For our use case, however, we can use the best parameters for this area, take a high touch approach, and create a really nice map of a special area in our parks for very low cost. High touch/low cost. I can’t think of a sweeter spot to reach.
Good parameters for better filtering
In the end, the trick was to extract as good a depthmap as possible (depthmap-resolution: 1280 in my case), set the point cloud filtering parameters for the Simple Morphological Filter (SMRF), smrf-window and smrf-threshold, to 3 meters so that only things like tree trunks get filtered, and set ignore-gsd: true to ensure we keep the highest quality data all the way through the toolchain.
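Put together, a docker invocation with these settings might look like the following. The paths and project name are placeholders, and the exact flag spellings should be checked against your ODM version's option list; this is a sketch of the settings described above, not a copy of my command line:

```shell
docker run -ti --rm -v /my/datasets:/datasets opendronemap/odm \
  --project-path /datasets project \
  --dtm \
  --depthmap-resolution 1280 \
  --smrf-window 3 \
  --smrf-threshold 3 \
  --ignore-gsd
```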