So you want drones in your cities, doing, well, whatever the heck you want them to do (take-out, packages, snooping …). The trick, of course, is to have the drones do their thing, without running into lamp-posts, cranes, people, wires, and so forth.
The even trickier bit is that a lot of these — yes, even lamp-posts — have the habit of suddenly showing up one day, making for a constantly changing topography.
Trickier still: how do you get all this 3-D info in the first place? I mean, you can’t just have a plane flying around doing the “Google StreetView” thing, or even a drone being flown around manually.
Loquercio et al. have come up with a remarkably nifty solution — which they call DroNet (•). It’s a fast (and tiny!) 8-layer residual network, which outputs a steering angle and a collision probability. (Think “if you fly straight ahead, you will crash”.)
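To make the two outputs concrete, here’s a minimal sketch of how a steering angle and a collision probability could be turned into smoothed flight commands — the paper modulates forward velocity by the collision probability and low-pass filters the commands, but the constants `V_MAX` and `ALPHA` below are illustrative assumptions, not values from the paper:

```python
# Hypothetical constants -- chosen for illustration only.
V_MAX = 1.0   # max forward velocity (m/s), assumed
ALPHA = 0.7   # low-pass smoothing factor, assumed

def control_step(prev_v, prev_steer, p_collision, steer_pred):
    """Map the network's two outputs to smoothed velocity/steering commands."""
    # Slow down as the predicted collision probability rises:
    # p_collision = 1 ("you will crash") drives the target velocity to zero.
    target_v = (1.0 - p_collision) * V_MAX
    v = (1.0 - ALPHA) * prev_v + ALPHA * target_v
    # Low-pass filter the steering angle to avoid jerky turns.
    steer = (1.0 - ALPHA) * prev_steer + ALPHA * steer_pred
    return v, steer

# Example: an obstacle ahead (p_collision = 0.9) makes the drone brake
# while it eases into the predicted turn.
v, s = control_step(prev_v=1.0, prev_steer=0.0, p_collision=0.9, steer_pred=0.2)
```

The low-pass filtering is the interesting design choice here: a raw per-frame network output would make the drone twitch, so blending each new command with the previous one trades a bit of reaction time for stable flight.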
They’ve also figured out how to collect the relevant data from cars and bikes, and it turns out that the data is good enough to even work indoors!
About the only thing missing here is that their model doesn’t take advantage of drone “agility”, but that seems to be something that can be readily plugged in.
Good stuff indeed!
(•) DroNet: Learning to Fly by Driving : http://rpg.ifi.uzh.ch/docs/RAL18_Loquercio.pdf