ODM 3.0, the leading open-source drone mapping engine, has been released!
This release has focused on improving output quality, removing some legacy parameters, and fixing long-standing issues with the software. You can check the changelog to see what has been removed. If your scripts use legacy parameters, they will be ignored and you will get a warning. This follows our philosophy of “not breaking stuff”.
It can be difficult to convey in words what I mean by “improving output quality”. So I’m not going to, and instead, I will direct you to our UAVArena page.
If you want to get nostalgic, compare the ODM 3.0.0 engine outputs with those of ODM 0.9.9. How far we’ve come!
If you want to get excited, compare the results to those of alternative proprietary software, or against a previous 2.x release.
ODM 3.0.0 is available today as a standalone application and through the WebODM GUI, which now also ships with the latest version of ODM.
A few weeks back, I had the pleasure of visiting Piero Toffanin in Florida: 4 days of eating, talking, and drinking espresso. OpenDroneMap was much of the conversation, but we wandered a lot conversationally, caught up a lot, and had a great time. We even found some time to fly drones. (Random sidebar: Piero’s better half, Danielle, is a delight too.) At the time, Piero was working on an interesting problem: planar reconstruction. Planar reconstruction is an interesting alternative to what we typically do in OpenDroneMap. If you’ve been using OpenDroneMap for even a short time, you’ll notice it does a lot of work: it’s a full photogrammetric workflow, so it creates a detailed point cloud, a mesh, and elevation models too, if you ask for them. The detailed point cloud and mesh aren’t really optional.
By contrast, planar reconstruction performs the minimum work needed to produce a consistent-enough solution (for flat-ish scenes, anyway).
Every few weeks or months, someone shows up on the forums begging for something faster and less resource hungry for their simpler use case, or else they compare the full work OpenDroneMap completes to much faster, but less complete, workflows from a particular proprietary tool (you know the one… it rhymes with “fix a tree, shield”). If you are one of those users (or just begged silently in the comfort of your own head), well, this is your day: we now have an implementation of planar reconstruction. When I was visiting, Piero was puzzling over how best to implement planar reconstruction in a way that is highly parallel but also memory efficient. He must have gotten there, because a few days ago, he closed pull request 1452.
So how does it work and how does it look? Initial tests indicate that it is much faster. Depending on your machine and dataset, you might see processing complete 4-6 times faster. And the results for flat areas are quite suitable. I’m even pleased with trees (from an appropriately high altitude, anyway).
There will be artifacts. That is the trade-off. But if it’s an acceptable one, if you need a good-enough solution fast, this might be the setting for you.
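To see why planar reconstruction can skip so much work, here is a small sketch of the underlying geometry (our illustration, not ODM’s actual code): if the scene is assumed to lie on a plane, each image maps onto the ground with a single 3x3 homography, so no dense point cloud or mesh is needed.

```python
# Sketch of the geometry behind planar reconstruction (illustrative only):
# assume the scene is the plane z = 0, so each image maps onto the ground
# with one homography instead of requiring dense 3D reconstruction.
import numpy as np

def plane_homography(K, R, t):
    """Homography from the z = 0 world plane into the image: H = K [r1 r2 t]."""
    return K @ np.column_stack((R[:, 0], R[:, 1], t))

# A nadir camera 100 m above the origin, looking straight down.
K = np.array([[1000.0, 0.0, 640.0],   # focal length and principal point
              [0.0, 1000.0, 480.0],
              [0.0, 0.0, 1.0]])
R = np.array([[1.0, 0.0, 0.0],        # 180° rotation about x: camera looks down
              [0.0, -1.0, 0.0],
              [0.0, 0.0, -1.0]])
camera_center = np.array([0.0, 0.0, 100.0])
t = -R @ camera_center                # world-to-camera translation

H = plane_homography(K, R, t)

# The ground point directly beneath the camera lands on the principal point.
p = H @ np.array([0.0, 0.0, 1.0])     # homogeneous coordinate of ground (0, 0)
p = p / p[2]
print(p[:2])  # ≈ [640, 480]
```

Artifacts appear exactly where the flat-world assumption breaks: anything with real height (trees, buildings) gets smeared onto the plane.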
Find-GCP is a program that detects ArUco markers in digital photos and can create OpenDroneMap ground control point files automatically. The project has been utilized by members of our community for a long time and has provided a means to automate ground control point detection workflows using commodity hardware. Kudos to the GeoForAll Lab at the Department of Geodesy and Surveying of the Budapest University of Technology and Economics for their efforts in making this software, and in particular to its lead maintainer, Zoltan Siki.
We’re pleased to announce that as of today Find-GCP has joined the OpenDroneMap family as a member project! We will do our best to support its adoption and further its development / integration with the wider ecosystem.
OpenDroneMap is now compatible with the Apple M1 (no, not with this arm). The community came together to fund this feat. Thanks to everyone who contributed!
What was challenging about this?
Apple M1 speaks a different language than Intel chips (the mainstream chips that most people’s computers have). This language is a dialect of ARM. Apple has added a piece of software to guarantee some compatibility with applications not specifically built for its new chip, but the compatibility layer fails with software such as ODM that has many optimizations and low-level functions.
The steps to a port are always more or less:
1. Try to build the software on the new architecture
2. Get showered with errors
3. Fix a few errors
4. Repeat from step 1 until success
5. Re-test on every other architecture (in case you’ve broken something in the process)
6. Automate the build process
Some really strange bugs came up during this process (like this one or this one). We upgraded the base layer of the software to the cutting-edge Ubuntu 21.04, since certain packages we depend on aren’t available for ARM on older releases. This broke our deployment to Snap, so we fixed that with some conditional build logic. In the process, our Windows build broke. We finally got everything ready for building and deploying, but the build times now exceeded 6 hours (up from 2+), past GitHub’s limit. So we set up our own GitHub builder node, which is faster and enjoys higher time limits.
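The conditional build logic mentioned above boils down to branching on the target architecture. A hypothetical sketch (the image names and mapping here are illustrative, not the project’s actual build files):

```python
# Hypothetical sketch of per-architecture conditional build logic:
# choose a base image depending on the machine being built for.
# The image choices below are illustrative assumptions, not ODM's files.

def pick_base_image(machine: str) -> str:
    """Map an OS-reported machine string to a base-image choice."""
    mapping = {
        "x86_64": "ubuntu:20.04",   # amd64: all dependencies are packaged
        "amd64": "ubuntu:20.04",
        "aarch64": "ubuntu:21.04",  # ARM: some dependencies need newer Ubuntu
        "arm64": "ubuntu:21.04",
        "armv7l": "ubuntu:21.04",
    }
    if machine not in mapping:
        raise ValueError(f"unsupported architecture: {machine}")
    return mapping[machine]

print(pick_base_image("aarch64"))  # → ubuntu:21.04
```

In a real pipeline the same branch point also decides which deployment targets (Snap, Windows, Docker Hub) get built for that architecture.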
A challenge was getting PostgreSQL + PostGIS running. To guarantee compatibility with previous releases and a smooth upgrade experience for long-time users, WebODM’s docker images need PostgreSQL 9.5, which is no longer included in newer Ubuntu releases. And since we plan a future database migration/update that will require multiple Postgres installations in the same database image, we opted to build both packages from source.
After several days (and lots of coffee), everything came together!
So there you have it! From the command line:
git clone https://github.com/OpenDroneMap/WebODM
cd WebODM
./webodm.sh start

will get you up and running on the Apple M1 (and on other ARMv7 devices such as newer Raspberry Pis too)!
What’s next? Maybe a sans-Docker port to macOS? We’ll see. Stay tuned.
I’m proud to announce that as of last week, our esteemed community member, long-time contributor, and all-around champion of open source Saijin_Naib has become the first non-developer paid contributor to be sponsored by UAV4GEO.
He’s no stranger to OpenDroneMap. He has been involved with the project for a long time and has shown incredible enthusiasm over the years while helping people, rallying fundraising rounds, and advocating for a world with more free and open source software. UAV4GEO is delighted to support his efforts to continue doing what he has been doing all along: helping people with OpenDroneMap. He will also spend time expanding and improving the documentation across all projects and providing hands-on learning experiences to the community. We look forward to seeing what he brings next.
I think every open source project has to choose between two priorities: code or people. If you want to invest in building features, you invest in software developers. If you want to build a community, you invest in the community. With finite resources, you cannot prioritize both. I think a strong community, with its diverse set of experiences and ideas, will lead to better code over the long run. And for communities to thrive, they need people like Saijin_Naib who will jump in to lend a hand when things get confusing or when somebody needs guidance.
So, congratulations Saijin_Naib on your new position of IT Specialist and Community Manager! See you on the forum.
That title might be clickbait, but it is meant as a nod to the feature parity, long in the making, between OpenDroneMap and more mature projects, whether the venerable free and open source MICMAC or its derivatives in the form of the various closed-source offerings in the photogrammetry industry.
So, is 2.1.0 the biggest update yet? I would say it’s the update that suddenly brings us to feature parity in almost all categories. And it all starts with improved structure from motion (updates to OpenSfM), continues through improved point clouds (replacement of MVE with OpenMVS), and finishes with improved (waves at) everything.
Speed and Structure from Motion
With the update to the latest OpenSfM, we see a 25-45% speed-up in portions of OpenSfM processing. This is very exciting and is due to some low-level changes, specifically: “reprojection of derivatives … are now computed analytically instead of autodiff-ed”. There are other, subtler improvements to OpenSfM, but that is the feature most noticeable to end users.
Speed and dense point cloud production
The MVE library has been swapped out for OpenMVS. On the surface, this slows things down: for the same resolution depthmaps, OpenMVS runs at half the speed of MVE. But the speed comparison is deceptive: the quality improvement from changing libraries is so large that we can run at lower depthmap resolutions and still get better results, an effective speed increase with improved quality.
The biggest noticeable change will be in the quality of point cloud outputs and all subsequent products, from orthophotos to digital surface models, and digital terrain models.
Because the point clouds affect quality throughout the rest of the toolchain, we see substantially improved digital surface models:
Higher fidelity in point clouds results in better automated point cloud filtering:
And so on. Updates are committed and out in the wild. It’s time to update your OpenDroneMap instance and have fun with the improved quality and speed.
How much better?
So, how much better are these point clouds? How do they compare to the rest of the industry? You don’t have to take our word for it, check out the leaderboard at tanksandtemples.org.
You’ll find that OpenMVS leads amongst FOSS solutions. Look closely enough, and you might find a closed-source drone image processing favorite trailing OpenMVS by a large margin…
(Footnote: these leaderboard comparisons don’t compare the combination of OpenSfM and OpenMVS, but they are a decent proxy for a full comparison.)
Is this really bigger than that closed-source product?
Probably. For more than a year, OpenDroneMap has had the benefit of being one of the most scalable solutions, both from a technology perspective, with an industry-leading process for splitting and recombining datasets, and from the licensing “technology”, with licensing that costs the same regardless of how many machines it is deployed to.
In 2019, we saw OpenDroneMap process datasets in excess of 30,000 images into a spatially harmonized dataset, something at the edge of the possible at the time. In 2020, we have seen 90,000 image datasets successfully processed in the same way. It might be the largest spatially harmonized dataset yet processed, though we are happy to hear of similar projects in case we are wrong!
Is this really better than that closed-source product?
The recent (2-day-old) quality improvements don’t just come from OpenMVS. OpenSfM does an industry-leading job of balancing the accuracy of GPS information with the quality of camera calibration. This means that for massive datasets where you need good 3D data, OpenDroneMap can deliver a good product for things like flood modeling, terrain change detection, and other data needs that are sensitive to three-dimensional fidelity.
Time to try, try again, or upgrade
It is probably time for you to upgrade. If you haven’t tried OpenDroneMap before, or if it has been a while, then it is time to try it with your dataset. In the meantime, enjoy these screenshots from data produced by Federico Debetto, KMedia SAS, which, just like the Zanzibar Mapping Initiative data above, is from Unguja Island, Zanzibar:
Props as always to the teams involved, including contributors to OpenSfM (thanks Mapillary!), contributors to OpenMVS (whoa, 3Dnovator), and especially Piero Toffanin of UAV4Geo. Special mention to Daniel Llewellyn for his packaging work. More on his work soon for you Windows and Snap folx…
Today we’d like to share an impressive open source UAV hardware and software project, made possible thanks to the efforts of Ryan Brazeal and supporters.
When we think of open source UAV hardware projects, we usually think of small, classroom-friendly, tiny drones. Well, this one is not a toy, and it will give the most sophisticated and professional rigs a run for their money.
OpenMMS “presents a single source of information that lists every nut, bolt, OEM sensor, and manufacturing step required to build a modern, tightly-integrated MMS [Mobile Mapping System] sensor. The project also includes a robust suite of novel open-source software applications that rigorously process the collected data and generate industry-standard geospatial deliverables”.
Now for the kicker: “The complete OpenMMS solution provides many of the same features and achieves the same (or better) accuracy specifications as comparable commercial systems. The hardware combines a lidar sensor, global navigation satellite system sensors, an inertial sensor, cameras, and computers to create a tightly-integrated digital mapping tool. The software is easy to use, fast, supports multiple OSes, and, when possible, leverages multi-core (CPU) and graphics card (GPU) processing.”
While OpenMMS is an independent project, here at OpenDroneMap we’re fully committed to supporting other open source projects in UAV mapping. We have opened a community channel on our forum for OpenMMS that will provide a venue for users to talk and share ideas about the project.
We look forward to seeing what the OpenMMS team brings next.
I have a keen interest in finding solutions to the open-source funding problem. In particular, how do you give more voice and decision-making power to the community? How can a community better steer the direction of the project? What features are worth implementing right away? What has value? What is noise?
When I stumbled on the idea of quadratic payments I knew that something interesting was up. The core ideas that came out of the quadratic payments paper as applied to open-source are that:
It’s not economically advantageous for people to directly fund open-source software. A system could make it advantageous by combining each pledge with additional funds from a sponsor pool. The additional funds optimally compensate for the value a pledge brings to other users. Example: if Bob pledges $25 for something that is also worth $25 to Alice, the pool matches the benefit Alice receives, making Bob’s pledge worth $50. More pledges lead to more overall value for everyone involved, making it economically advantageous for both Bob and Alice to pledge.
Pledging money toward the implementation of a feature/bug fix is not just a means of payment, it’s also a vote of preference. It helps answer the question “what’s important to me as a user?”.
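The matching rule behind these ideas can be computed in a few lines. This is the standard formula from the quadratic payments literature (not necessarily the exact rules our funding round used): a feature’s target funding is the square of the sum of the square roots of its pledges, and the sponsor pool tops up the difference.

```python
# Minimal sketch of the quadratic funding match (standard formula from the
# quadratic payments literature; our funding round's exact rules may differ).
from math import sqrt

def quadratic_match(pledges):
    """pledges: individual dollar pledges toward one feature.
    Returns the top-up drawn from the sponsor pool."""
    raised = sum(pledges)
    target = sum(sqrt(p) for p in pledges) ** 2
    return target - raised

# One $25 pledge attracts no match; two $25 pledges attract a $50 match,
# mirroring the Bob-and-Alice example: each $25 pledge becomes worth $50.
print(quadratic_match([25]))      # → 0.0
print(quadratic_match([25, 25]))  # → 50.0
```

Note how the match grows with the number of distinct pledgers, not just the total raised, which is what makes it a voting mechanism as much as a payment one.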
Where does the “sponsor pool” money come from? It’s from one of the organizations that currently develops and benefits commercially from OpenDroneMap software. While ~90% of the funds come directly from this sponsor pool, the remaining 10% is community driven.
If you think this is not a lot, you might be missing the most important value of the funding process: the voting aspect of it.
As maintainers of FOSS we receive hundreds of feature requests. Time is limited, however, so how do you prioritize? Thumbs-up on GitHub / forum polling is a “one-person one-vote” system, which is not ideal. If you let a single organization sponsor the development of a feature, you get a “one-dollar one-vote” system, which gives too much power to that organization.
Quadratic voting is a fairer system. After the funding period has ended, we know what the community values and we know what to prioritize.
Oh, and we raised some funds in the process, too! Win-win?
We hope this model can be studied, improved and replicated in other FOSS communities. It could be the start of a new era of sustainable funding.
“We’re creating the most sustainable drone mapping software with the friendliest community on earth.”
OpenDroneMap has long been all about processing data from aerial assets, whether drones, balloons, or kites; even planes and satellites have come up now and again. In a recent post, I posited the question: “Are drones enough?” Like a good rhetorical question, the answer is meant to be clear. In many cases, where vegetation is heavy, or buildings are close together and tall (or both!), a drone cannot adequately capture data at the ground in enough detail.
Enter 360 cameras as a method for rapidly collecting photogrammetric data from the ground. There are plenty of cool 360 cameras these days, and in testing, we can get some nice photogrammetric outputs from them. But an issue with these cameras is that they are either “affordable” but super low resolution, unaffordable but often also unfit for purpose, or some other combination of unsustainable. The real insult is that most of the best solutions out there are built around a bunch of cheap commodity hardware that’s been nicely packaged into a very expensive proprietary product.
A new hardware project
Case Western Reserve University has a really cool project class called EECS 398: Engineering Projects. It’s a boring name for a great class. In it, students get together in small groups and work on interesting, real engineering problems. This term we had the pleasure of working with Gabriel Maguire, Kimberly Meifert, and Rachel Volk, three electrical engineering students with bright futures:
The idea was to put together a 360 camera from a pile of Sony α6000s with a Raspberry Pi controller and RTK GPS, and feed those data in an automated way into OpenDroneMap.
The idea has some legs. The final prototype triggered 5 cameras at a time and stored the images to disk. We still have some work to do on synchronization of firing and on low-cost RTK GPS.
“But Steve: 5 Sony α6000s plus their nice lenses isn’t really an affordable 360 camera solution… I mean, $5,000 is less than the $20-30k one might spend on a platform with similar specs, but come on!”
Ok, doubting voice in my head, this isn’t affordable; it’s merely more affordable. At its core is gphoto2, which gives us the ability to plug in 2500 different possible cameras, so that helps with affordability. But maybe, doubting voice, you are interested not just in cost but in software freedom. Should we be doing this with fully open hardware, including the cameras? To this end, we are building out a version with the new High Quality Raspberry Pi camera. At $75 per camera and lens, or more like $100 once we include a Pi Zero and other parts for control, we get a 72MP 360 camera for 3D reconstruction in the ~$1000 range. And we can get that number lower by using fewer cameras and wider-angle lenses, as a trade-off with quality.
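The cost estimate above checks out with some back-of-the-envelope arithmetic. The per-unit and miscellaneous figures below are our assumptions for illustration, not a real bill of materials:

```python
# Back-of-the-envelope check of the ~$1000 cost claim. Per-unit and misc
# figures are assumptions for illustration, not an actual parts list.
import math

MP_PER_CAMERA = 12.3  # Raspberry Pi HQ camera sensor resolution (megapixels)
TARGET_MP = 72

cameras = math.ceil(TARGET_MP / MP_PER_CAMERA)  # cameras needed for ~72 MP
per_unit = 100   # HQ camera + lens + Pi Zero + control parts (assumed)
misc = 300       # rough allowance for GPS, rig, and power (assumed)
total = cameras * per_unit + misc

print(cameras, total)  # 6 cameras, $900 total: in the ~$1000 range
```

Using fewer cameras with wider-angle lenses shrinks both `cameras` and the total, at the cost of per-pixel ground resolution.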
I’m no videographer, but here’s a quick intro on the objectives of the project:
A couple of weeks ago, I had the pleasure of attending the sibling conferences Understanding Risk West and Central Africa and State of the Map Africa in Côte d’Ivoire (Ivory Coast). They included a nice mix of formal discussions of how to reduce risk in light of climate change, as well as discussions of all aspects of OpenStreetMap efforts.
The conferences were hosted in two locales: one in Abidjan and the other in Grand Bassam, with a day of overlap of the two conferences in Abidjan.
Cristiano Giovando of Humanitarian OpenStreetMap Team and World Bank Global Facility for Disaster Reduction and Recovery, Olaf Veerman of Development Seed, and I (Stephen Mather, Cleveland Metroparks) led a workshop on drone mapping for resilience, focused on a workflow that went from flying, to processing, to uploading to OpenAerialMap, to digitizing in OpenStreetMap. It went seamlessly, and some of my jokes even translated well into French for the Francophones in the workshop.
Plenty of dancing occurred at the end of the days of hard work, and throughout the rest of the conferences I led drone demos and discussed approaches to drone mapping at breaks.