I’m proud to announce that as of last week, our esteemed community member, long-time contributor and all-around champion of open source Saijin_Naib has become the first non-developer paid contributor to be sponsored by UAV4GEO.
He’s no stranger to OpenDroneMap. He has been involved with the project for a long time and has demonstrated incredible enthusiasm across these years while helping people, rallying for fundraising rounds and advocating for a world with more free and open source software. UAV4GEO is delighted to be able to support his efforts to continue doing what he has been doing all along: helping people with OpenDroneMap. He will also spend time expanding and improving the documentation across all projects and providing hands-on learning experiences to the community. We look forward to seeing what he brings next.
I think every open source project has to choose between two priorities: code or people. If you want to invest in building features, you invest in software developers. If you want to build a community, you invest in the community. With finite resources, you cannot prioritize both. I think a strong community, with its diverse set of experiences and ideas, will over the long run lead to better code. And for communities to thrive, they need people like Saijin_Naib who will jump in to lend a hand when things get confusing or when somebody needs some guidance.
So, congratulations Saijin_Naib on your new position as IT Specialist and Community Manager! See you on the forum.
That title might be clickbait, but it is meant as a nod to the feature parity, long in the making, between OpenDroneMap and more mature projects, whether the venerable, free and open source MICMAC or its derivatives in the form of the various closed-source offerings in the photogrammetry industry.
So, is 2.1.0 the biggest update yet? I would say it’s the update that suddenly brings us to feature parity in almost all categories. And it all starts with improved structure from motion (updates to OpenSfM), continues through improved point clouds (replacement of MVE with OpenMVS), and finishes with improved (waves at) everything.
Speed and Structure from Motion
With the update to the latest OpenSfM, we see a 25-45% speed-up in portions of OpenSfM processing. This is very exciting and is due to some low-level changes, specifically: “reprojection of derivatives … are now computed analytically instead of autodiff-ed”. There are other, subtler improvements to OpenSfM, but that is the feature most noticeable to end users.
Speed and dense point cloud production
The MVE library has been swapped out for OpenMVS. On the surface, this slows things down: at the same depthmap resolution, OpenMVS runs at half the speed of MVE. But the speed comparison is deceptive: the quality improvement from changing libraries is so large that we can easily run at lower depthmap resolutions and still get better results, yielding an effective speed increase with improved quality.
The biggest noticeable change will be in the quality of point cloud outputs and all subsequent products, from orthophotos to digital surface models, and digital terrain models.
Because the point clouds affect the quality throughout the rest of the toolchain, we see substantially improved digital surface models:
Higher fidelity in point clouds results in better automated point cloud filtering:
And so on. Updates are committed and out in the wild. It’s time to update your OpenDroneMap instance and have fun with the improved quality and speed.
How much better?
So, how much better are these point clouds? How do they compare to the rest of the industry? You don’t have to take our word for it: check out the leaderboard at tanksandtemples.org.
You’ll find that OpenMVS leads amongst FOSS solutions. Look closely enough and you might find a closed-source drone image processing favorite trailing OpenMVS by a large margin… .
(Footnote: these leaderboard comparisons don’t evaluate the combination of OpenSfM and OpenMVS, but they are a decent proxy for a full comparison.)
Is this really bigger than that closed-source product?
Probably. For more than a year, OpenDroneMap has had the benefit of being one of the most scalable solutions, both from a technology perspective, with an industry-leading process for splitting and recombining datasets, and from a licensing perspective, with licensing that costs the same amount regardless of how many machines it is deployed to.
In 2019, we saw OpenDroneMap process datasets in excess of 30,000 images into a spatially harmonized dataset, something at the edge of the possible at the time. In 2020, we have seen 90,000 image datasets successfully processed in the same way. It might be the largest spatially harmonized dataset yet processed, though we are happy to hear of similar projects in case we are wrong!
Is this really better than that closed-source product?
The recent (two-day-old) quality improvements don’t come from OpenMVS alone. OpenSfM does an industry-leading job of balancing the accuracy of GPS information with the quality of camera calibration. This means that for massive datasets where you need good 3D data, OpenDroneMap can deliver a good product for things like flood modeling, terrain change detection, and other data needs that are sensitive to three-dimensional fidelity.
Time to try, try again, or upgrade
It is probably time for you to upgrade. If you haven’t tried OpenDroneMap before, or if it has been a while, then it is time to try it with your dataset. In the meantime, enjoy these screenshots from data produced by Federico Debetto, KMedia SAS, which, just like the Zanzibar Mapping Initiative data above, is from Unguja Island, Zanzibar:
Props as always to the teams involved, including contributors to OpenSfM (thanks Mapillary!), contributors to OpenMVS (Whoa, 3Dnovator), and especially Piero Toffanin of UAV4Geo. Special mention to Daniel Llewellyn for his packaging work. More on his work soon for you Windows and Snap folx… .
Today we’d like to share an impressive open source UAV hardware and software project, made possible by the efforts of Ryan Brazeal and supporters.
When we think of open source UAV hardware projects, we usually think of small, classroom-friendly, tiny drones. Well, this one is not a toy, and it will give the most sophisticated and professional rigs a run for their money.
OpenMMS “presents a single source of information that lists every nut, bolt, OEM sensor, and manufacturing step required to build a modern, tightly-integrated MMS [Mobile Mapping System] sensor. The project also includes a robust suite of novel open-source software applications that rigorously process the collected data and generate industry-standard geospatial deliverables”.
Now for the kicker: “The complete OpenMMS solution provides many of the same features and achieves the same (or better) accuracy specifications as comparable commercial systems. The hardware combines a lidar sensor, global navigation satellite system sensors, an inertial sensor, cameras, and computers, to create a tightly-integrated digital mapping tool. The software is easy to use, fast, supports multiple OS, and, when possible, leverages multi-core (CPU) and graphics card (GPU) processing.”
While OpenMMS is an independent project, here at OpenDroneMap we’re fully committed to supporting other open source projects in UAV mapping. We have opened a community channel on our forum for OpenMMS that will provide a venue for users to talk and share ideas about the project.
We look forward to seeing what the OpenMMS team brings next.
I have a keen interest in finding solutions to the open-source funding problem. In particular, how do you give more voice and decision-making power to the community? How can a community better steer the direction of the project? What features are worth implementing right away? What has value? What is noise?
When I stumbled on the idea of quadratic payments I knew that something interesting was up. The core ideas that came out of the quadratic payments paper as applied to open-source are that:
It’s not economically advantageous for people to directly fund open-source software. A system could make it advantageous by combining each pledge with additional funds from a sponsor pool. The additional funds optimally compensate for the value a pledge brings to other users. Example: if Bob makes a contribution ($25) for something that could be valuable to Alice, Bob needs to be compensated for the additional benefit that Alice gets ($25), so Bob’s pledge needs to be worth $50. More pledges lead to more overall value for everyone involved, making it economically advantageous for both Bob and Alice to pledge.
Pledging money toward the implementation of a feature/bug is not just a means of payment, it’s also a vote of preference. It helps answer the question of “what’s important to me as a user?”.
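The matching idea above can be sketched with the standard quadratic funding formula from the quadratic payments paper: the matched total for a project is the square of the sum of the square roots of the individual pledges. This is an illustrative sketch of the math, not the actual funding-round code, and the dollar figures are just the Bob/Alice example:

```python
from math import sqrt

def quadratic_match(pledges):
    """Return (raw_total, matched_total, subsidy) for a list of pledges.

    Under quadratic funding, the matched total is (sum of sqrt(pledge))^2;
    the subsidy (matched - raw) is drawn from the sponsor pool.
    """
    raw = sum(pledges)
    matched = sum(sqrt(p) for p in pledges) ** 2
    return raw, matched, matched - raw

# One pledger acting alone gets no subsidy...
print(quadratic_match([25]))       # (25, 25.0, 0.0)
# ...but when Bob and Alice both pledge $25, the pool adds $50,
# so each $25 pledge is effectively "worth $50".
print(quadratic_match([25, 25]))   # (50, 100.0, 50.0)
```

Note how the subsidy grows with the number of distinct pledgers rather than with the size of any single pledge, which is exactly what makes broad participation economically advantageous.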
Where does the “sponsor pool” money come from? It’s from one of the organizations that currently develops and benefits commercially from OpenDroneMap software. While ~90% of the funds come directly from this sponsor pool, the remaining 10% is community driven.
If you think this is not a lot, you might be missing the most important value of the funding process: the voting aspect of it.
As maintainers of FOSS we receive hundreds of feature requests. Time is limited, however, so how do you prioritize? Thumbs-up on GitHub or forum polling is a “one-person, one-vote” system, which is not ideal. If you let a single organization sponsor the development of a feature, you get a “one-dollar, one-vote” system, which gives too much power to that single organization.
Quadratic voting is a fairer system. After the funding period has ended, we know what the community values and we know what to prioritize.
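To sketch why quadratic voting sits between those two extremes: each voter’s influence grows only with the square root of what they spend, so casting n votes on one item costs n² credits. The budget numbers below are illustrative, not from our actual funding round:

```python
def vote_cost(n_votes):
    # Under quadratic voting, n votes on a single item cost n^2 credits.
    return n_votes ** 2

def max_votes(budget):
    # The most votes one actor can buy on a single item with a given budget.
    return int(budget ** 0.5)

# One-dollar-one-vote: a $100 sponsor gets 100x the say of a $1 user.
# Quadratic voting: 100 credits buy only 10 votes on one item, while
# ten users spending 1 credit each cast 10 votes between them.
print(vote_cost(10))    # 100
print(max_votes(100))   # 10
```

Dominating a poll therefore gets expensive fast, while many small pledgers retain a meaningful collective voice.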
Oh, and we raised some funds in the process, too! Win-win?
We hope this model can be studied, improved and replicated in other FOSS communities. It could be the start of a new era of sustainable funding.
“We’re creating the most sustainable drone mapping software with the friendliest community on earth.”
OpenDroneMap has long been all about processing data from aerial assets, whether drones, balloons, kites, even planes and satellites have come up now and again. In a recent post, I posited the question: “Are drones enough?” Like a good rhetorical question, the answer is meant to be clear. In many cases, where vegetation is heavy, or buildings are close together and tall (or both!), a drone cannot adequately capture data at the ground in enough detail.
Enter 360 cameras as a method for rapidly collecting photogrammetric data from the ground. There are plenty of cool 360 cameras these days, and in testing, we can get some nice photogrammetric outputs from these cameras. But these cameras tend to be either “affordable” but very low resolution; unaffordable, and often unfit for purpose anyway; or some other unsustainable combination. The real insult is that most of the best solutions out there are built around a bunch of cheap commodity hardware that’s been nicely packaged into a very expensive proprietary package.
A new hardware project
Case Western Reserve University has a really cool project class called EECS 398: Engineering Projects. It’s a boring name for a great class. In it, students get together in small groups and work on interesting, real engineering problems. We had the pleasure this term of working on this project with Gabriel Maguire, Kimberly Meifert, and Rachel Volk, three electrical engineering students with bright futures:
The idea was to put together a 360 camera from a pile of Sony α6000s with a Raspberry Pi controller and RTK GPS, and feed those data in an automated way into OpenDroneMap.
The idea has some legs. The final prototype triggered 5 cameras at a time and stored the images to disk. We still have some work to do on synchronization of firing and on low-cost RTK GPS.
“But Steve: 5 Sony α6000s plus their nice lenses isn’t really an affordable 360 camera solution… . I mean $5000 is less than the $20-30k one might spend on a platform with similar specs, but come on!”
Ok, doubting voice in my head, this isn’t affordable; it’s merely more affordable. At its core is gphoto2, which gives us the ability to plug in 2500 different possible cameras, so that helps with affordability. But maybe, doubting voice, you are interested not just in cost but in software freedom. Should we be doing this with fully open hardware, including the cameras? To this end, we are building out a version with the new High Quality Raspberry Pi camera. At $75 per camera and lens, or more like $100 once we include a Pi Zero and other parts for control, we get a 72MP 360 camera for 3D reconstruction in the ~$1000 range. And we can get that number lower by using fewer cameras and wider-angle lenses, as a tradeoff with quality.
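As an illustration of why gphoto2 makes multi-camera triggering scriptable, here is a minimal controller sketch that fires one capture command per camera. The USB port addresses and filename pattern are hypothetical, and this is not the project’s actual controller code, just a sketch of the approach:

```python
import subprocess

# Hypothetical USB addresses for five cameras, as reported by
# `gphoto2 --auto-detect` (your addresses will differ).
CAMERA_PORTS = ["usb:001,004", "usb:001,005", "usb:001,006",
                "usb:001,007", "usb:001,008"]

def capture_cmd(port, index):
    """Build the gphoto2 command that triggers and downloads one frame."""
    return ["gphoto2", "--port", port,
            "--capture-image-and-download",
            "--filename", f"cam{index}_%Y%m%d%H%M%S.jpg"]

def trigger_all(ports=CAMERA_PORTS):
    """Fire all cameras as close to simultaneously as the USB bus allows."""
    procs = [subprocess.Popen(capture_cmd(p, i)) for i, p in enumerate(ports)]
    return [p.wait() for p in procs]
```

Launching the processes in parallel and then waiting keeps the triggers close together in time, but as noted above, true synchronization of firing still needs work at the hardware level.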
I’m no videographer, but here’s a quick intro on the objectives of the project:
I had the pleasure of attending the sibling conferences Understanding Risk West and Central Africa and State of the Map Africa in Côte d’Ivoire (Ivory Coast) a few weeks ago. They included a nice mix of formal discussions of how to reduce risk in light of climate change, as well as discussions of all aspects of OpenStreetMap efforts.
The conferences were hosted in two locales, one in Abidjan and the other in Grand Bassam, with a day of overlap between the two conferences in Abidjan.
Cristiano Giovando of Humanitarian OpenStreetMap Team and World Bank Global Facility for Disaster Reduction and Recovery, Olaf Veerman of Development Seed, and I (Stephen Mather, Cleveland Metroparks) led a workshop on drone mapping for resilience, focused on a workflow that went from flying, to processing, to uploading to OpenAerialMap, to digitizing in OpenStreetMap. It went seamlessly, and some of my jokes even translated well into French for the Francophones in the workshop.
Plenty of dancing occurred at the end of the days of hard work, and throughout the rest of the conferences I led drone demos and discussed approaches to drone mapping at breaks.