A lot of the claimed search-and-rescue (S&R) applications are really a thin cover for military gear.
I remember trying to mentor a hardware startup at an event, and we just couldn't find a common language with the woman who was pitching it. I just couldn't see how the use cases would justify the market.
Only later did I figure out that the use cases were absurd, and that all of them were really a cover for military applications.
Finding survivors in a forest = finding partisans is the most common one.
Rescue victims are just the co-operating category of hidden people.
https://jumpshare.com/s/cBhNffK6aWZX5SwYP0YN
I asked it to instead generate a comic in text form and got the following:
Title: "The XKCD Paradox"
Panel 1: A stick figure is typing on a computer. Above them is a thought bubble: "I bet there's an XKCD about how there's always a relevant XKCD."
Panel 2: The screen shows a Google search result. The stick figure exclaims, "Huh, there isn't one. That's weird."
Panel 3: We see the inside of a Google data center. Servers are labeled "XKCD Generator AI." An alarm is flashing with the text "Paradox detected!"
Panel 4: A stick figure resembling Randall Munroe is shown frantically drawing at a desk. A speech bubble reads, "Must... draw... comic... about... always... having... relevant... comic..."
Panel 5: Back to the original stick figure, now looking at a new XKCD comic on their screen. The comic is titled "Meta-Relevance" and shows a recursive image of stick figures looking at computers.
Panel 6: A wide shot shows both the stick figure and the Randall Munroe character simultaneously saying, "Whew, balance restored."
Footnote at the bottom of the comic: "Plot twist: This comic was generated by an AI trying to convince you it wasn't generated by an AI."
This version plays with the meta-concept of XKCD's relevance, the idea of paradoxes in AI, and adds a layer of self-referential humor typical of XKCD. It also touches on the theme of AI-generated content trying to prove it's not AI-generated, adding an extra layer of cleverness to the concept.
Use drones with IR cameras:
* Find deer after they're shot. Right now you need to hire a bloodhound and it takes hours
* Do wildlife surveys for conservation and management departments
* Pest management for farmers
https://www.skysenderos.com/blogs/thermal-drone-deer-recover...
Thermal is really good in the UK because the ground is rarely that warm. You can easily see a person hiding among rocks, for example. It's basically Cops. In equatorial Africa you can only use it at dawn/dusk (or overnight) because the ground is so hot.
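To make that concrete, here's a minimal sketch of the kind of hot-spot flagging this relies on, assuming an 8-bit grayscale thermal frame; the file name, the +25 offset, and the 50 px minimum blob size are all made-up illustrative values, not anything tuned:

    import cv2
    import numpy as np

    # Minimal sketch: flag pixels notably warmer than the scene background.
    # Assumes an 8-bit grayscale thermal frame; all thresholds are illustrative.
    frame = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)

    background = np.median(frame)  # cool ground dominates the frame (the UK case)
    _, hot = cv2.threshold(frame, int(background) + 25, 255, cv2.THRESH_BINARY)

    # Keep only blobs big enough to plausibly be a person or deer, not sensor noise.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(hot, connectivity=8)
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] > 50:
            x, y, w, h = stats[i, :4]
            print(f"warm blob at ({x},{y}), {w}x{h} px")

The median-as-background trick is also why the equatorial case fails: once the ground is near body temperature, the margin above background disappears and everything washes out.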
I'm picturing bumper-to-bumper traffic on a highway with a cloud of drones overhead. Each person in their individual car using their individual drone to all report back the same thing: that everything is moving slowly because there are just too many cars on the road right now. With luck, the drones only crash into each other every once in a while, just like the cars below.
They tried this exact thing with the Kentucky freeway shooter, using both a helicopter-based FLIR system and IR-camera-equipped drones, and failed. Eventually the dude's body was found by a group of... as far as I can tell, wilderness YouTubers working with a police search party.
Even the dogs didn't find him.
Pest management is a heck of a good idea. The province of Alberta is officially rat free - if Alberta doesn’t have something like this I bet they would be interested. Especially if it could do double duty for wildlife surveys.
The last DARPA grand challenge (Subterranean) had automated drone networks that could find and identify humans in caves and tunnels. They were at least up front about the military challenges in these environments (https://www.darpa.mil/program/darpa-subterranean-challenge), but the nod at civilian first responders doesn't seem fair. Honestly, are cave-ins such a big civilian problem that we need to prioritize them as a talking point at all levels?
Considering (1) the number of people who are employed in mining occupations, and (2) the frequency of serious accidents in mines: yes. Particularly in developed countries, societies expect that great lengths will be taken to rescue or recover the victims, and mine rescue is incredibly dangerous work.
(1) BLS says ~200K in the US in 2024, although only a minority of them work underground.
(2) BLS says "underground mining machine operators" is the 9th deadliest job in the US, and that is with a large and well-equipped mine rescue system (MSHA says 250 teams across the country).
Roofers, fishing and hunting workers, construction "helpers", etc?
There's a case to be made that some of them are, I guess.
Pragmatically speaking, when someone falls off a roof or a tree, it doesn't turn into a highly public, high-risk, government-responsibility rescue mission. When someone gets trapped in a mine, it does.
(If you fall out of a tree while logging in Alaska, there is a good chance a USCG helicopter crew comes to your aid, but that is more of a "five minutes on the local news" story than a "nightly news host reporting live on location" event.)
According to [1] there were 8 deaths in the underground machine operators category in 2022.
There's a more detailed table at [2] but I don't quite understand how this aligns with the first one (the numbers seem different, but I think the category is "Mining (except oil and gas)").
In any case, the majority of fatalities are from "Transportation incidents" or "Contact with objects and equipment". I think cave-ins would be classed as "Fires and explosions".
[1] https://www.bls.gov/charts/census-of-fatal-occupational-inju...
[2] https://www.bls.gov/iif/fatal-injuries-tables/fatal-occupati...
Everything about the rocketry needed to get to orbit started from warfare purposes, for example. And ARPANET was a foray into how to build a disruption-resistant network for military purposes.
Science and knowledge are a bit of a soup.
Drones are useful. There have been zero useful technologies in war that have ever been successfully banned. (No, not even chemical weapons [1].)
Every weapon that has been banned either brought asymmetric advantage (i.e., disadvantaged the powerful) or has had its ban flouted (e.g., cluster munitions).
[1] https://acoup.blog/2020/03/20/collections-why-dont-we-use-ch...
An oil terminal in Feodosia is still burning after the latest Ukrainian strike.
There were even a few cases of repurposed ultralight aircraft serving as one-way drones for ultra-long-range strikes on the Ukrainian side.
In another region, Israel has to shoot down various terrorist-launched one-way UAVs almost regularly by this point, and uses UAVs heavily itself.
So while the US certainly did pioneer UAV use, it seems to me it is getting eclipsed by other states in this area.
(Certainly not advocating for this, but noting that it's the most likely trigger to get the ball rolling on regulation of drones in military operations, where very little currently exists.)
I joke, but not. I'm a researcher, and AI has been a pretty ambiguous term for years, mostly because intelligence is still not well defined. Unfortunately I think it's become less well defined in the last few years (while prior to that it was getting better defined) via the (Fox) Mulder Effect.
For it to be AI, it needs some sort of ML basis; otherwise it's just fancy "classical" computer vision.
(This is from someone who's been working in the field for far too long and remembers a time before "deep", "ML", and "AI" were part of every paper.)
They are definitely right that our (I am a hiker) gear tends to stand out against nature. Not only is it generally in colors that do not appear in any volume in nature, but almost nothing in the plant and mineral kingdoms is of uniform color. A blob of uniform color is in all probability either a monochromatic animal (the sheep their system detects) or man-made.
What surprises me about this is that it hasn't been tried before.
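A minimal sketch of that uniform-color heuristic (my own toy version, not whatever their system actually does; the tile size and variance threshold are invented for illustration):

    import numpy as np

    def uniform_color_tiles(img: np.ndarray, win: int = 16, max_var: float = 40.0):
        """Flag win x win tiles whose color variance is suspiciously low.

        img is an HxWx3 RGB array. Foliage and rock are almost never uniform
        in color, so a low-variance tile is more likely an animal or gear.
        Tile size and threshold are illustrative values, not tuned ones.
        """
        h, w, _ = img.shape
        flagged = []
        for i in range(h // win):
            for j in range(w // win):
                tile = img[i * win:(i + 1) * win, j * win:(j + 1) * win]
                # Per-channel variance over the tile, averaged across R, G, B.
                var = tile.reshape(-1, 3).astype(np.float64).var(axis=0).mean()
                if var < max_var:
                    flagged.append((i * win, j * win))  # top-left corner of tile
        return flagged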
Using a term like "AI" to describe this is like using the term "food" to describe pickles. Poor analogy, but "AI" is just so vast that most lay readers, or those not familiar with how this phrase is used in regular computer-science discussions, aren't grounded in its consequences.
I feel that we as an industry need to do better, use terms more responsibly, and know our audience. There is a big difference between a clustering algorithm that detects pixels and flags them and a conscious, self-aware system. However, both of those things are "AI", and they have very different consequences.
The following are some aspects of the artificial intelligence problem:
1. Automatic Computers
If a machine can do a job, then an automatic calculator can be programmed to simulate the machine. The speeds and memory capacities of present computers may be insufficient to simulate many of the higher functions of the human brain, but the major obstacle is not lack of machine capacity, but our inability to write programs taking full advantage of what we have.
2. How Can a Computer be Programmed to Use a Language
It may be speculated that a large part of human thought consists of manipulating words according to rules of reasoning and rules of conjecture. From this point of view, forming a generalization consists of admitting a new word and some rules whereby sentences containing it imply and are implied by others. This idea has never been very precisely formulated nor have examples been worked out.
3. Neuron Nets
How can a set of (hypothetical) neurons be arranged so as to form concepts. Considerable theoretical and experimental work has been done on this problem by Uttley, Rashevsky and his group, Farley and Clark, Pitts and McCulloch, Minsky, Rochester and Holland, and others. Partial results have been obtained but the problem needs more theoretical work.
4. Theory of the Size of a Calculation
If we are given a well-defined problem (one for which it is possible to test mechanically whether or not a proposed answer is a valid answer) one way of solving it is to try all possible answers in order. This method is inefficient, and to exclude it one must have some criterion for efficiency of calculation. Some consideration will show that to get a measure of the efficiency of a calculation it is necessary to have on hand a method of measuring the complexity of calculating devices which in turn can be done if one has a theory of the complexity of functions. Some partial results on this problem have been obtained by Shannon, and also by McCarthy.
5. Self-Improvement
Probably a truly intelligent machine will carry out activities which may best be described as self-improvement. Some schemes for doing this have been proposed and are worth further study. It seems likely that this question can be studied abstractly as well.
6. Abstractions
A number of types of "abstraction" can be distinctly defined and several others less distinctly. A direct attempt to classify these and to describe machine methods of forming abstractions from sensory and other data would seem worthwhile.
7. Randomness and Creativity
A fairly attractive and yet clearly incomplete conjecture is that the difference between creative thinking and unimaginative competent thinking lies in the injection of some randomness. The randomness must be guided by intuition to be efficient. In other words, the educated guess or the hunch includes controlled randomness in otherwise orderly thinking.
From: https://web.archive.org/web/20070826230310/http://www-formal...
So, no, the fundamental difference is not that "AI is trained, algorithms are not". Some hand-crafted algorithms fall under the purview of AI research. A modern example is graph-search algorithms like MCTS or A*.
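For the avoidance of doubt, here's roughly what such a hand-crafted algorithm looks like: a toy A* on a grid with a Manhattan-distance heuristic (the grid and heuristic are my own illustrative choices, not anything from the discussion above):

    import heapq
    from itertools import count

    def astar(grid, start, goal):
        """Minimal A* on a 2D grid of 0 (free) / 1 (blocked) cells."""
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
        tie = count()  # tie-breaker so the heap never has to compare nodes
        frontier = [(h(start), next(tie), 0, start)]
        came_from = {start: None}
        g = {start: 0}
        while frontier:
            _, _, cost, cur = heapq.heappop(frontier)
            if cur == goal:  # walk parents back to the start
                path = []
                while cur is not None:
                    path.append(cur)
                    cur = came_from[cur]
                return path[::-1]
            x, y = cur
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                        and grid[nxt[0]][nxt[1]] == 0
                        and cost + 1 < g.get(nxt, float("inf"))):
                    g[nxt] = cost + 1
                    came_from[nxt] = cur
                    heapq.heappush(frontier, (cost + 1 + h(nxt), next(tie), cost + 1, nxt))
        return None  # goal unreachable

    print(astar([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0)))
    # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]

No model, no training data; it's still textbook AI.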
At some point we just have to stop with the breathless hype. I'm sure labelling it as AI gets more clicks and exposure so I know exactly why they do it. Still, it's annoying.
There is no model here, there is no neural net.
I’ve always been fine with calling things AI even though they are all jumbles of stats nonsense that wouldn’t be able to put their own pants on. Does a submarine swim? No, but that’s just the metaphor that the most vocal adherents are wedded to (at the hips). The metaphor doesn’t harm me. And to argue against it is like Chomsky trying to tell programming language designers that programming languages being languages is just a metaphor.
[1] EDIT: In other words it can be on the level of a crow. Or a dog. Just something general. Something that has some animal-like intelligence.
Say that you have a closed system that largely operates without human intervention - for example, the current ad fraud mess where you have bots pretending to be humans that don't actually exist to inflate ad counts, all of which gets ranked higher by the ML ad models because it inflates their engagement numbers, but it's all to sell products that don't really work anyway so that the company can post better revenue numbers to Wall Street and unload the shares on prop trading bots and index funds that are all investing algorithmically anyway. On some level, this is a form of "intelligence" even though it doesn't put pants on. For that matter, many human societies don't put pants on, nor do my not-quite-socialized preschool kids. It's only the weight of our collective upbringing, coupled with a desire to feel intelligent, that leads us to equate putting pants on with intelligence. Plenty of people don't put pants on and consider themselves intelligent as well. And the complexity of what computers actually do do is often well beyond the complexity of what humans do.
I often like to flip the concept of "artificial intelligence" on its head and instead think about "natural stupidity". Sure, the hot AI technologies of the moment are basically just massive matrix computations that statistically predict what's likely to come next given all the training data they've seen before. Humans are also basically just massive neural networks that respond to stimulus and reward given all the training data they've seen before. You can make very useful predictions about, say, what is going to get a human to click on a link or open their wallet using these AI technologies. And since we too are relatively predictable human machines that are focused on material wealth and having enough money to get others to satisfy our emotions, this is a very useful asset to have.
I know what the point is. Of course computer scientists that make AI (whatever that means) want to be known for making Intelligence. And they get cross when the marvel of yesterday becomes a humdrum utility.
As you can see this part cuts both ways:
> > and they get redefined to suit human ego needs
> Say that you have a closed system that largely operates without human intervention [...] On some level, this is a form of "intelligence" even though it doesn't put pants on. [...] Plenty of people don't put pants on and consider themselves intelligent as well. And the complexity of what computers actually do do is often well beyond the complexity of what humans do.
I bet your AI of choice could write a thesis on how putting pants on is a stupid social construct. Yet if it is incapable of doing it, it would just be a bunch of hot air.
> I often like to flip the concept of "artificial intelligence" on its head and instead think about "natural stupidity".
This philosophy tends to go with the territory.
> Sure, the hot AI technologies of the moment are basically just massive matrix computations that statistically predict what's likely to come next given all the training data they've seen before. Humans are also basically just massive neural networks that respond to stimulus and reward given all the training data they've seen before.
“Basically” doing some heavy lifting here.
This is obviously false. We would have gone extinct pretty much immediately if we had to tediously train ourselves from scratch. We have instincts as well.
“But that’s just built-in training.” Okay, now we’re back to it not basically being stimulus-responses to training data they’ve seen before. So what’s the point, when it’s not basically just that?
> You can make very useful predictions about, say, what is going to get a human to click on a link or open their wallet using these AI technologies. And since we too are relatively predictable human machines that are focused on material wealth and having enough money to get others to satisfy our emotions, this is a very useful asset to have.
Yes. Humans have wants and needs and act in ways consistent with cause and effect. E.g. as the clueless “consumer subject” against billions of dollars of marketing money and AI owned by those same marketing departments.
Amazingly: Humans are what you allow them to be.
We could treat all humans according to Skinner Box theory. We could treat them as if Skinner’s stimulus-response theories are correct and only allow them to act inside that framework. That would (again, amazingly) confirm that Skinner was right all along.
Any organism can express itself maximally only in a maximally free setting. A free dog is a dog; a chained human might only be a dog.
The only difference is that humans have words that they can express through their mouthholes about what kind of future they want. If they want to be humans (i.e. human ego needs, sigh) or if they want to be the natural stupidity subjects of the artificial intelligence.
Or they don’t care because they don’t think AI will ever be able to put its pants on.
It is just that the language drifted a little, the way it did with "cyber" meaning something else to post-90s kids. So now AI seems to be mostly associated with LLMs, but not that long ago, AI seemed to include almost any use of an algorithm.
I am not an expert in the field at all. I am just looking at stuff for personal growth.
Machine learning was widely considered a subset of AI until it got a big resurgence almost two decades ago. Now some people use the terms interchangeably.
These silly things are neat. I'm also really interested in the snake/worm robots that dig to find people and inflate to move rubble.
Now I barely use it, which does suggest that building it is what I really wanted to do all along and that kind of validates that robotics is probably a hobby I'd enjoy.
Relevant changelogs are 6.0a and 6.0b. http://mrmap.org.uk/forum/viewforum.php?f=24
CORRECTION: All but one of the DroneAid Symbol Language symbols are drawn within upward-pointing triangles.
Is there a simpler set of QR codes for the ground that could be made with sticks or rocks or things the wind won't bend?
I really hate it when people use very uncommon terms without defining them (or sometimes even people's names).
It's not that I couldn't make a guess based on context, but it's distracting; I feel like my eyes must have skipped over something, and I often keep going back over the text to see what I must have missed.
I imagine this is sometimes caused by sloppy editing, especially when they refer to the last name of a person who has yet to be introduced in the article, but I think sometimes it's a deliberate choice, and I object.
"...a second Munro, as the Scottish mountains above 3,000 feet are known."
The opening paragraph describes him climbing/hiking a mountain in Scotland. "His plan was to climb Creise, a 1,100-meter-high peak overlooking Glen Etive...". Which then leads into him trying to "bag" a second one.
Just a counterpoint that it does not feel like sloppy editing at all. I struggle to see what would be difficult here for native speakers.
Peak bagging is common in that community, but "to bag" something is quite common in native English, or at least enough so that it's in the Oxford dictionary. Hard for me to see a native speaker struggle with this; the connection can be made just from the prior paragraph.
They define what a Munro is in the same sentence. Are you here to just argue? I had to go back and add your post as a quote as I am not sure how someone can miss the literal definition within the sentence. "Munro, as the Scottish mountains above 3,000 feet are known". Is that difficult for you to read and understand?
Having only spent a few days of my life in Scotland I didn't know "Munro" but the article defined it.
"Many figurative senses, such as the verb meaning "to kill game" (1814) and its colloquial extension to "catch, seize, steal" (1818) are from the notion of the game bag (late 15c.) into which the product of the hunt was placed. This also probably explains modern slang in the bag "assured, certain" (1922, American English). To be left holding the bag (and presumably nothing else), "cheated, swindled" is attested by 1793." https://www.etymonline.com/search?q=bag
From Oxford Dictionary of English
verb (bags, bagging, bagged) [with object]
1 put (something) in a bag: customers bagged their own groceries | we bagged up the apples | once you've raked the leaves, bag them up right away so that they don't get wet.
2 succeed in killing or catching (an animal): Mike bagged nineteen cod. • succeed in securing (something): we've bagged three awards for excellence | get there early to bag a seat in the front row.
3 [no object] (of clothes, especially trousers) form loose bulges due to wear: these trousers never bag at the knee.
4 North American English informal fit (a patient) with an oxygen mask or other respiratory aid.
5 (bags or bags I) British English informal a child's expression used to make a claim to something: bags his jacket.
6 North American English informal abandon or give up on: she ought to just bag this marriage and get on with her life.
7 informal, mainly Australian and New Zealand English criticize: the fans should be backing him not bagging him.
No, both the GP and I are referring to number two, gaining something and literally or figuratively putting it in a bag. It applies equally to game and SaaS revenue and everything in between.
You just ripped on an editor for absolutely no reason.