https://medium.com/planet-stories/a-beginners-guide-to-calcu...
Might be good info to plan safer routes ahead of time too
No, they don't. For airliners it doesn't really matter: the only place they can set down safely is an airport, and those are already listed in their systems and flight plan (as alternates).
For the smaller stuff it depends on the pilot; a common glass cockpit like the Garmin G1000 doesn't have the sensors to actually make that determination.
Could an automated system make a better determination than a skilled pilot? And is the scenario frequent enough to warrant the big cost of cameras etc. (keeping in mind they must be stabilized and have a huge aperture to function at night)? I doubt it.
The "miracle on the Hudson" was not called a miracle for nothing. Usually it ends like a few months ago at Washington Reagan.
And a freeway is never a safe place to land an airliner, of course. The traffic alone makes it so, and even if there's very little, there are lampposts, barriers, etc. If an airline pilot ever steers toward one, they're really going for the least terrible option. Small planes fare better, of course, but again, they won't have such tech for decades.
It works even better with high-resolution synthetic aperture radar, as you can measure the tank height displacement directly: https://www.iceye.com/blog/daily-analysis-and-forecast-of-gl...
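The arithmetic from roof height to stored volume is pleasantly simple for a floating-roof tank, since the roof rides directly on the product. A back-of-the-envelope sketch in Python (tank dimensions made up for illustration):

    import math

    def estimated_fill_m3(diameter_m, roof_height_m, roof_offset_m=0.0):
        # The roof floats on the oil, so its measured height above the
        # tank base (minus any fixed structural offset) tracks the fill
        # level directly: volume = pi * r^2 * fill height.
        radius = diameter_m / 2.0
        fill_height = max(roof_height_m - roof_offset_m, 0.0)
        return math.pi * radius ** 2 * fill_height

    # Example: a 60 m diameter tank with the roof 12 m above the base.
    print(f"{estimated_fill_m3(60.0, 12.0):,.0f} m^3")  # ~33,929 m^3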
ESA provides worldwide 20m x 5m radar imagery from Sentinel-1 free online. Revisit in the mid-latitudes is generally a few times per week, with an exact repeat cycle every 12 days. Once Sentinel-1C is fully operational, it'll be half that.
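If you want to poke at that archive programmatically, the asf_search Python client is one convenient way in. A minimal sketch, with a made-up footprint and date range (double-check the parameter names against the asf_search docs):

    # pip install asf_search
    import asf_search as asf

    # Hypothetical area of interest as a WKT polygon, plus a date range.
    aoi = "POLYGON((5.5 52.0, 6.5 52.0, 6.5 52.7, 5.5 52.7, 5.5 52.0))"

    results = asf.search(
        platform=asf.PLATFORM.SENTINEL1,
        processingLevel=asf.PRODUCT_TYPE.GRD_HD,  # ground-range detected
        intersectsWith=aoi,
        start="2024-01-01",
        end="2024-01-31",
    )

    # One result per acquisition; the gaps between startTime values show
    # the actual revisit rate over this footprint.
    for product in results:
        print(product.properties["startTime"], product.properties["fileName"])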
There was also some similar evidence regarding the Three Gorges Dam and how it's not doing so great, i.e. estimating the height of the surrounding area over time to indicate problematic movement, or something like that.
If you can figure out fairly close-to-the-ground elevations, you can model a strike zone quite well.
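The core of that kind of modeling is a line-of-sight test over a digital elevation model. A toy sketch, with a made-up grid:

    import numpy as np

    def visible(dem, observer, target, observer_height=2.0):
        # Walk the straight line between the two cells and test whether
        # any intermediate terrain pokes above the sight line.
        (r0, c0), (r1, c1) = observer, target
        steps = max(abs(r1 - r0), abs(c1 - c0))
        eye = dem[r0, c0] + observer_height
        tgt = dem[r1, c1]
        for i in range(1, steps):
            t = i / steps
            r, c = round(r0 + t * (r1 - r0)), round(c0 + t * (c1 - c0))
            if dem[r, c] > eye + t * (tgt - eye):  # terrain above the ray
                return False
        return True

    # Toy DEM: a 120 m ridge between observer and target blocks the view.
    dem = np.zeros((100, 100))
    dem[50, :] = 120.0
    print(visible(dem, (10, 10), (90, 90)))  # False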
Good for special operations raids.
But those folks might also have access to specialized NRO satellites that can give you the data without the inference.
In my case I just used it as a vehicle for learning about neural networks. I couldn't really think of a compelling use case. I wonder if the author of this article or the authors of the model have found one.
Certainly it will find a niche use, but at the time the headlines in robotics papers were all about replacing traditional depth/range sensing with it, which doesn't seem plausible.
Nunif has tools to convert images/videos, or even to turn your desktop into a stereoscopic image and live-stream it to your VR headset over WiFi[1], and there are workflow nodes for ComfyUI[2].
I tried the former and it reached conversion speeds of around 10 FPS for full-HD content on consumer hardware, so it's definitely usable (a rough sketch of the underlying idea follows the links below). Still, I don't really see the point outside of adding a gimmick to vacation photos or pornography. I don't think anyone would want to convert and consume a non-VR Hollywood movie this way, but feel free to correct me on that.
[1] https://github.com/nagadomi/nunif
[2] https://github.com/kijai/ComfyUI-DepthAnythingV2 + https://github.com/MrSamSeen/ComfyUI_SSStereoscope
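For anyone curious what these converters are doing conceptually (this is not nunif's actual code): estimate depth, then synthesize a stereo pair by shifting pixels horizontally in proportion to nearness, and inpaint the holes. A naive numpy sketch, assuming an inverse-depth map where larger values mean nearer (as monocular depth models typically output):

    import numpy as np

    def depth_to_stereo(rgb, depth, max_disparity_px=16):
        # Naive depth-image-based rendering: shift each pixel left/right
        # by half its disparity. Real converters also inpaint the
        # disocclusion holes this leaves behind (kept black here).
        h, w, _ = rgb.shape
        d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-6)
        disparity = (d * max_disparity_px).astype(int)
        left, right = np.zeros_like(rgb), np.zeros_like(rgb)
        cols = np.arange(w)
        for y in range(h):
            lx = np.clip(cols + disparity[y] // 2, 0, w - 1)
            rx = np.clip(cols - disparity[y] // 2, 0, w - 1)
            left[y, lx] = rgb[y]
            right[y, rx] = rgb[y]
        return left, right

    # Toy usage with random data standing in for a frame + depth estimate.
    rgb = np.random.randint(0, 255, (270, 480, 3), dtype=np.uint8)
    depth = np.random.rand(270, 480).astype(np.float32)
    left, right = depth_to_stereo(rgb, depth)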
Even for that I don't know why you'd want it artificial. Porn is all about being as real as possible.