223 points by sohkamyung 1 day ago | 11 comments
0_____0 23 hours ago
It's basically the FREE POINT square on the bingo card at this point. When someone builds a cool robot that they don't know what to do with, it's inevitably for SAR. I've worked on a couple of them myself.
drhagen 22 hours ago
It even has an xkcd: https://xkcd.com/2128/
y-curious 16 hours ago
I'm mind-blown at how relevant that is here
thaumasiotes 14 hours ago
Oddly, the example in the comic is an absolutely awful example of the phenomenon described in the caption. A robot that can induce targeted lightning strikes has obvious military applications. Screw search and rescue.
kolinko 8 hours ago
Or perhaps it’s a meta comic about dual use :)

A lot of the claimed s&r applications are really a thin cover for military gear.

I remember trying to mentor a hardware startup at an event, and we just couldn't find a common language with the girl who was pitching it. I just couldn't see how the use cases would justify the market.

Only later did I figure out that the use cases were absurd, and that all of them were really a cover for military applications.

Finding survivors in a forest = finding partisans is the most common one.

rightbyte 30 minutes ago
The hookshot is also a weapon in OoT.
JumpCrisscross 5 hours ago
> robot that can induce targeted lightning strikes has obvious military applications

Rescue victims are just the co-operating category of hidden people.

ppsreejith 11 hours ago
To be pedantic, the comic doesn't mention _targeted_ lightning strikes.
A4ET8a8uTh0 22 hours ago
At this point, xkcd must be like Simpsons. They already did everything.
amelius 16 hours ago
I'm starting to suspect that google is behind xkcd, i.e. running some generative AI script behind the scenes whenever someone looks for a comic.
KMnO4 11 hours ago
I wonder how hard this would be. As a joke, I put your comment into Claude and asked for an XKCD comic. Surprisingly, it generated a (very bad) SVG image:

https://jumpshare.com/s/cBhNffK6aWZX5SwYP0YN

I asked it to instead generate a comic in text form and got the following:

Title: "The XKCD Paradox"

Panel 1: A stick figure is typing on a computer. Above them is a thought bubble: "I bet there's an XKCD about how there's always a relevant XKCD."

Panel 2: The screen shows a Google search result. The stick figure exclaims, "Huh, there isn't one. That's weird."

Panel 3: We see the inside of a Google data center. Servers are labeled "XKCD Generator AI." An alarm is flashing with the text "Paradox detected!"

Panel 4: A stick figure resembling Randall Munroe is shown frantically drawing at a desk. A speech bubble reads, "Must... draw... comic... about... always... having... relevant... comic..."

Panel 5: Back to the original stick figure, now looking at a new XKCD comic on their screen. The comic is titled "Meta-Relevance" and shows a recursive image of stick figures looking at computers.

Panel 6: A wide shot shows both the stick figure and the Randall Munroe character simultaneously saying, "Whew, balance restored."

Footnote at the bottom of the comic: "Plot twist: This comic was generated by an AI trying to convince you it wasn't generated by an AI."

This version plays with the meta-concept of XKCD's relevance, the idea of paradoxes in AI, and adds a layer of self-referential humor typical of XKCD. It also touches on the theme of AI-generated content trying to prove it's not AI-generated, adding an extra layer of cleverness to the concept.

kenjackson 9 hours ago
Maybe a little on the nose, but way better than I can do.
myst 10 hours ago
The textual one is quite good! I wish Randall would do it.
xarope 11 hours ago
xkcd has targeted humor, which I think GANs are not able to demonstrate (at least for now!)
ImHereToVote 1 hour ago
Just don't fill the SAR drone with explosives.
hiddencost 23 hours ago
Free business ideas, because I want this to exist:

Use drones with IR cameras:

* Find deer after they're shot. Right now you need to hire a bloodhound and it takes hours

* Do wildlife surveys for conservation and management departments

* Pest management for farmers

krisoft 22 hours ago
Surprisingly, your first idea is illegal in some states. For example, it is illegal in Texas.

https://www.skysenderos.com/blogs/thermal-drone-deer-recover...

ivanhoe 5 hours ago
It'd be impossible to limit the use of drones to just recovery; some people would use them for tracking down the animals, which would be a horrible thing to allow.
numpad0 1 hour ago
The problem is that night-vision-y technologies are heavily controlled and price-inflated. Only a few people are willing to buy QCIF 30 Hz cameras at $500 from AliExpress, or from legitimate resellers at $2k.
joshvm 9 hours ago
I've worked on two (and also for SAR). Not many people are still working in that field, and there's very little open data.

Thermal is really good in the UK because the ground is rarely that warm. You can easily see a person hiding among rocks, for example. It's basically Cops. In equatorial Africa you can only use it at dawn/dusk (or overnight) because the ground is so hot.

arkh 22 hours ago
Launch from car while stuck in some random traffic jam: learn the cause of the jam and how long it is.
tashi 17 hours ago
A good idea unless it becomes popular.

I'm picturing bumper-to-bumper traffic on a highway with a cloud of drones overhead. Each person in their individual car using their individual drone to all report back the same thing: that everything is moving slowly because there are just too many cars on the road right now. With luck, the drones only crash into each other every once in a while, just like the cars below.

kjs3 47 minutes ago
There was a recent story about a medevac helicopter not being able to land to help accident victims because someone had parked their drone over the wreck to check it out/get clicks.
khafra 3 hours ago
Launch while parking with an unfamiliar trailer or in an unfamiliar area.
quesera 13 hours ago
Mobile maps apps already do this pretty well.
rnewme 15 hours ago
Just get a CB radio.
Scoundreller 13 hours ago
please, there are children in my car!
sverhagen 9 hours ago
This may not be legal, per the FAA.
sleepybrett 22 hours ago
> * Find deer after they're shot. Right now you need to hire a blood hound and it takes hours

They tried this exact thing with the Kentucky freeway shooter, using both a helicopter-based FLIR system and IR-camera-equipped drones, and failed. Eventually the dude's body was found by a group of ... as far as I can tell, wilderness YouTubers working with a police search party.

Even the dogs didn't find him.

hluska 22 hours ago
If you google “thermal drone for hunting” you will find some YouTube videos about people solving the first problem.

Pest management is a heck of a good idea. The province of Alberta is officially rat-free - if Alberta doesn't have something like this, I bet they would be interested. Especially if it could do double duty for wildlife surveys.

pclmulqdq 2 minutes ago
Stick a .22 on a drone with a thermal camera for pest management and that will be one of the most "American Dynamism" startups ever built.
vzaliva 17 hours ago
Satellite SOS, recently introduced in iPhone and Google Pixel phones, will help a lot with lost-hiker cases. However, drone-based search will still be useful in cases where the hiker broke their phone or is too incapacitated to use it.
isx726552 23 hours ago
Put weapons on it (as already seen in current conflicts) and it becomes a seek-and-assassinate tool. Drones are cheap enough it could even be done en masse. It is a scary future, and it’s not far away at all.
jvanderbot 23 hours ago
S&R has always been a front for weaponized robotics, IMHO.

The last DARPA grand challenge (Subterranean) had automated drone networks that could find and identify humans in caves and tunnels. They were at least up front about the military challenges in these environments (https://www.darpa.mil/program/darpa-subterranean-challenge), but the nod to civilian first responders doesn't seem fair. Honestly, is cave-in such a big civilian problem that we need to prioritize it as a talking point at all levels?

cpgxiii 15 hours ago
> Honestly, is cave-in such a big civilian problem that we need to prioritize it as a talking point at all levels?

Considering (1) the number of people employed in mining occupations and (2) the frequency of serious accidents in mines, yes. Particularly in developed countries, societies expect that great lengths will be taken to rescue or recover the victims, and mine rescue is incredibly dangerous work.

(1) BLS says ~200K in the US in 2024, although only a minority of them work underground.

(2) BLS says "underground mining machine operators" is the 9th deadliest job in the US, and that is with a large and well-equipped mine rescue system (MSRA says 250 teams across the country).

jvanderbot 14 hours ago
Are the other eight professions on the BLS's most-dangerous list being heavily automated and augmented with robots?

Roofers, fishing and hunting workers, construction "helpers", etc?

There's a case to be made that some of them are, I guess.

cpgxiii 13 hours ago
Mining is heavily mechanized and automated already, yet remains inescapably dangerous.

Pragmatically speaking, when someone falls off a roof or a tree, it doesn't turn into a highly public, high-risk, government-responsibility rescue mission. When someone gets trapped in a mine, it does.

(If you fall off a tree logging in Alaska, there is a good chance a USCG helicopter crew comes to your aid, but that is more of a "five minutes in the local news" story than "nightly news host reporting live on location" event.)

nl 10 hours ago
I suspect the vast majority of deaths in underground mining in the US aren't from cave-ins but instead from heavy equipment accidents.

According to [1], there were 8 deaths in the "underground machine operators" category in 2022.

There's a more detailed table at [2] but I don't quite understand how this aligns with the first one (the numbers seem different, but I think the category is "Mining (except oil and gas)").

In any case, the majority of fatalities are from "Transportation incidents" or "Contact with object and equipment". I think cave-ins would be classed as "Fires and explosions".

[1] https://www.bls.gov/charts/census-of-fatal-occupational-inju...

[2] https://www.bls.gov/iif/fatal-injuries-tables/fatal-occupati...

shadowgovt 18 hours ago
It's generally hard to say what's a "front" for what, unless you mean "what can you get someone to grant you research money for when you really expect to parlay the learnings into another topic."

Everything about the rocketry needed to get to orbit started from warfare purposes, for example. And ARPANET was a foray into how to build a disruption-resistant network for military purposes.

Science and knowledge are a bit of a soup.

looofooo0 22 hours ago
Guess what happens in Ukraine.
morkalork 22 hours ago
The future is already here – it's just not very evenly distributed
rvnx 23 hours ago
It exists in Estonia ("thanks" to Googley Eric Schmidt!); it's a company that had the codename White Stork.
ants_everywhere 23 hours ago
It looks like they may have changed their name. White Stork is the name of a charity that provides first aid kits and other aid in Ukraine.

https://whitestork.us/

https://x.com/WilliamMcNulty/status/1798855191858712929

avibhu 23 hours ago
Ironic name.
hiddencost 23 hours ago
National bird of Ukraine
lupusreal 20 hours ago
I read a comment here a while ago about "search and rescue" being a euphemism for military applications, and that's the first thing I thought of when I saw this story.
LorenPechtel 20 hours ago
This system would not work against camouflage.
idunnoman1222 19 hours ago
I mean, it would; you just have to put different optics on it (thermal, near-infrared, and normal) and have three different detection neural nets.
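A minimal sketch of that kind of multi-spectral setup, assuming three hypothetical per-band detectors (run_rgb_detector, run_nir_detector, run_thermal_detector) that do not come from the article or any particular library; it only shows one plausible way to fuse per-band hits:

  # Hypothetical multi-band fusion: one detector per sensor (visible, NIR,
  # thermal); nearby hits from different bands are merged into one candidate.
  from dataclasses import dataclass

  @dataclass
  class Detection:
      x: int        # pixel column of the hit
      y: int        # pixel row of the hit
      score: float  # detector confidence in [0, 1]
      band: str     # which sensor produced it ("rgb", "nir", "thermal")

  def fuse_detections(per_band_hits, merge_radius=20):
      """Greedily merge hits that fall within merge_radius pixels of each
      other; candidates confirmed by more bands are ranked first."""
      candidates = []
      for hits in per_band_hits:
          for det in hits:
              for cand in candidates:
                  if (abs(cand["x"] - det.x) <= merge_radius
                          and abs(cand["y"] - det.y) <= merge_radius):
                      cand["bands"].add(det.band)
                      cand["score"] = max(cand["score"], det.score)
                      break
              else:
                  candidates.append({"x": det.x, "y": det.y,
                                     "score": det.score, "bands": {det.band}})
      return sorted(candidates,
                    key=lambda c: (len(c["bands"]), c["score"]), reverse=True)

  # Usage (the three run_*_detector functions are placeholders):
  # ranked = fuse_detections([run_rgb_detector(img_rgb),
  #                           run_nir_detector(img_nir),
  #                           run_thermal_detector(img_thermal)])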
yieldcrv 21 hours ago
I've been saying that any armistice on drones won't come until the US starts being hit by drone warfare, especially by a foreign militia or nation-state.
JumpCrisscross 5 hours ago
> armistice on drones won't come

Drones are useful. There have been zero useful technologies in war that have ever been successfully banned. (No [1].)

Every weapon that has been banned brought asymmetric advantage, i.e. disadvantaged the powerful, or has had its ban flouted, e.g. cluster munitions.

[1] https://acoup.blog/2020/03/20/collections-why-dont-we-use-ch...

colechristensen 20 hours ago
What do you think UFOs are?
yieldcrv 19 hours ago
they need to cause collateral damage, not busy work for FOIA respondents in Ohio
idunnoman1222 19 hours ago
The US is the biggest user of drones. What are you talking about?
m4rtink 15 hours ago
I would not be so sure - a mind-boggling number of drones and drone types are used in the Ukraine war: small observation drones, supply drones, drop drones, fast one-way FPV kill drones, and almost-regular drone-swarm exchanges with 100+ drones going one way (indigenous Ukrainian drones in one direction, clones of Iranian Shaheds from the Russian side).

An oil terminal in Feodosia is still burning after the latest Ukrainian strike.

There were even a few cases of repurposed ultralight aircraft serving as one-way drones for ultra-long-range strikes on the Ukrainian side.

In another region, Israel has to shoot down various terrorist-launched one-way UAVs almost regularly by this point, and it uses UAVs heavily itself.

So while the US certainly did pioneer UAV use, it seems to be getting eclipsed by other states in this area.

Sevii 11 hours ago
It's not unless you consider Ukraine part of the US. Russia and Ukraine are using 10k+ drones per month each.
hooverd 18 hours ago
US civilians aren't subjected to them though.
shadowgovt 18 hours ago
Precisely. It's about terror. With the US having the political capital it does (among other things, being a member of the UN Security Council), Americans won't push their government to curtail drone use until and unless they're on the receiving end of asymmetric-warfare attacks perpetrated with low-cost disposables carrying lethal payloads.

(Certainly not advocating for this, but noting that it's the most likely trigger to get the ball rolling on regulation of drones in military operation where very little currently exists).

pgraf 22 hours ago
I don't see any hint of AI being used here, but rather a handcrafted computer vision algorithm. Can anyone more involved in the matter elaborate on whether an actual AI model was used?
meigwilym 2 minutes ago
Wired have edited the headline to remove "AI".
yifanl 22 hours ago
We don't have a formal classification of which technologies can be considered "AI", but computer vision would feel like a valid entrant to me.
godelski 18 hours ago
I thought AI meant "ML" + marketing.

I joke, but not really. I'm a researcher, and AI has been a pretty ambiguous term for years, mostly because intelligence is still not well defined. Unfortunately, I think it has become less well defined in the last few years (while prior to that it was getting better defined) via the (Fox) Mulder Effect.

datameta 22 hours ago
Computer vision totally qualifies as AI, since it can grant an agent artificially intelligent behavior.
KaiserPro 4 hours ago
The fuck it does.

For it to be AI, it needs some sort of ML basis. Otherwise it's just fancy "classical" computer vision.

(This is from someone who's been working in the field for far too long, and who remembers a time before "deep", "ML", and "AI" were part of every paper.)

sleepybrett 22 hours ago
Based on what is said in the article, it seems like a VERY simple algorithm. It clusters the pixels in the image by color and reports any small blobs of unusual color. That's not AI by any of the stupid definitions we've come up with recently.
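For anyone curious what that could look like, here is a rough sketch of a rare-colour blob detector in that spirit (my own illustration, not the code described in the article; the bin count and size thresholds are arbitrary assumptions):

  # Quantize colours, count how often each quantized colour appears in the
  # frame, then report small connected blobs made of globally rare colours.
  import numpy as np
  from scipy import ndimage

  def rare_color_blobs(rgb, bins=8, rare_fraction=0.0005, min_px=4, max_px=400):
      """rgb: HxWx3 uint8 image. Returns (row, col) centroids of small blobs
      whose quantized colour covers less than rare_fraction of the frame."""
      h, w, _ = rgb.shape
      # Quantize each channel to `bins` levels, then pack into one code per pixel.
      q = (rgb.astype(np.int32) * bins) // 256
      codes = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
      counts = np.bincount(codes.ravel(), minlength=bins ** 3)
      # A pixel is "unusual" if its colour is rare across the whole frame.
      rare = counts[codes] < rare_fraction * h * w
      # Group adjacent unusual pixels into blobs and keep only the small ones.
      labels, n = ndimage.label(rare)
      hits = []
      for i in range(1, n + 1):
          ys, xs = np.nonzero(labels == i)
          if min_px <= ys.size <= max_px:
              hits.append((float(ys.mean()), float(xs.mean())))
      return hits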
morkalork 22 hours ago
Clustering and outlier detection is not AI?
LorenPechtel 20 hours ago
To me the fundamental difference is that AI is trained; algorithms are not. There's no training here; it's a simple frequency count looking for outliers. While it's an approach a human would take, the human is doing it in a very different fashion. And the human is much more sensitive to form, whereas this is much more sensitive to color.

They are definitely right that our (I am a hiker) gear tends to stand out against nature. Not only is it generally in colors that do not appear in any volume in nature, but almost nothing in the plant and mineral kingdoms is of uniform color. A blob of uniform color is in all probability either a monochromatic animal (the sheep their system detects) or man-made.

What surprises me about this is that it hasn't been tried before.
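A tiny illustration of that "uniform colour blob" heuristic (my own sketch, not the system from the article; the window size and thresholds are made up, and distance from the scene's average colour is a crude stand-in for "rare in the scene"):

  import numpy as np

  def uniform_odd_patches(rgb, win=16, max_std=8.0, min_dist=60.0):
      """rgb: HxWx3 uint8. Returns (row, col) corners of windows that are
      nearly uniform in colour (low per-channel std) yet whose mean colour
      sits far from the global mean colour of the frame."""
      img = rgb.astype(np.float32)
      scene_mean = img.reshape(-1, 3).mean(axis=0)
      hits = []
      for r in range(0, img.shape[0] - win + 1, win):
          for c in range(0, img.shape[1] - win + 1, win):
              patch = img[r:r + win, c:c + win].reshape(-1, 3)
              uniform = patch.std(axis=0).max() < max_std          # monochromatic?
              unusual = np.linalg.norm(patch.mean(axis=0) - scene_mean) > min_dist
              if uniform and unusual:
                  hits.append((r, c))
      return hits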

KolmogorovComp 20 hours ago
You are confusing AI and Machine Learning, the latter being a subset of the former.
kxrm 19 hours ago
This really gets at one of my issues with the term "AI". There is a very scientific, textbook definition of what Artificial Intelligence is; however, the term carries baggage from sci-fi.

Using a term like "AI" to describe this is like using the term "food" to describe pickles. It's a poor analogy, but "AI" is just so vast that most lay readers, or those not familiar with the phrase from regular computer science discussions, aren't grounded in its consequences.

I feel that we as an industry need to do better, use terms more responsibly, and know our audience. There is a big difference between a clustering algorithm that detects and flags pixels and a conscious, self-aware system. However, both of those things are "AI", and they have very different consequences.

YeGoblynQueenne 19 hours ago
This is the list of discussion topics from the Dartmouth Workshop on Artificial Intelligence (1955) where the term was first introduced:

  The following are some aspects of the artificial intelligence problem: 

  1 Automatic Computers

  If a machine can do a job, then an automatic calculator can be programmed to simulate the machine. The speeds and memory capacities of present computers may be insufficient to simulate many of the higher functions of the human brain, but the major obstacle is not lack of machine capacity, but our inability to write programs taking full advantage of what we have.

  2. How Can a Computer be Programmed to Use a Language

  It may be speculated that a large part of human thought consists of manipulating words according to rules of reasoning and rules of conjecture. From this point of view, forming a generalization consists of admitting a new word and some rules whereby sentences containing it imply and are implied by others. This idea has never been very precisely formulated nor have examples been worked out.

  3. Neuron Nets

  How can a set of (hypothetical) neurons be arranged so as to form concepts. Considerable theoretical and experimental work has been done on this problem by Uttley, Rashevsky and his group, Farley and Clark, Pitts and McCulloch, Minsky, Rochester and Holland, and others. Partial results have been obtained but the problem needs more theoretical work.

  4. Theory of the Size of a Calculation

  If we are given a well-defined problem (one for which it is possible to test mechanically whether or not a proposed answer is a valid answer) one way of solving it is to try all possible answers in order. This method is inefficient, and to exclude it one must have some criterion for efficiency of calculation. Some consideration will show that to get a measure of the efficiency of a calculation it is necessary to have on hand a method of measuring the complexity of calculating devices which in turn can be done if one has a theory of the complexity of functions. Some partial results on this problem have been obtained by Shannon, and also by McCarthy.

  5. Self-Improvement

  Probably a truly intelligent machine will carry out activities which may best be described as self-improvement. Some schemes for doing this have been proposed and are worth further study. It seems likely that this question can be studied abstractly as well.

  6. Abstractions

  A number of types of ``abstraction'' can be distinctly defined and several others less distinctly. A direct attempt to classify these and to describe machine methods of forming abstractions from sensory and other data would seem worthwhile.

  7. Randomness and Creativity

  A fairly attractive and yet clearly incomplete conjecture is that the difference between creative thinking and unimaginative competent thinking lies in the injection of a some randomness. The randomness must be guided by intuition to be efficient. In other words, the educated guess or the hunch include controlled randomness in otherwise orderly thinking. 
From:

https://web.archive.org/web/20070826230310/http://www-formal...

So, no, the fundamental difference is not that "AI is trained, algorithms are not". Some hand-crafted algorithms fall under the purview of AI research. Modern examples are graph-search algorithms like MCTS or A*.
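For concreteness, here is a tiny A* on a 4-connected grid, the sort of hand-crafted, untrained search that classic AI textbooks file under AI (illustrative only; the grid format and heuristic are my own choices):

  import heapq

  def astar(grid, start, goal):
      """grid: list of strings, '#' = wall. start/goal: (row, col).
      Returns a shortest path as a list of cells, or None if unreachable."""
      def h(cell):  # Manhattan distance, admissible on a 4-connected grid
          return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

      frontier = [(h(start), 0, start, [start])]   # (f, g, cell, path so far)
      best_g = {start: 0}
      while frontier:
          f, g, cell, path = heapq.heappop(frontier)
          if cell == goal:
              return path
          for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
              r, c = cell[0] + dr, cell[1] + dc
              if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] != '#':
                  ng = g + 1
                  if ng < best_g.get((r, c), float('inf')):
                      best_g[(r, c)] = ng
                      heapq.heappush(frontier,
                                     (ng + h((r, c)), ng, (r, c), path + [(r, c)]))
      return None

  # e.g. astar(["....", ".##.", "...."], (0, 0), (2, 3)) -> a shortest path of cells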

tpxl 7 hours ago
Novel stuff is AI, old stuff is statistics. Decision trees used to be called AI :)
short_sells_poo 21 hours ago
I mean, if something as traditional as simple clustering is AI, then so is linear regression, and Excel sheets have been doing AI/ML for the past two decades.

At some point we just have to stop with the breathless hype. I'm sure labelling it as AI gets more clicks and exposure so I know exactly why they do it. Still, it's annoying.

YeGoblynQueenne 19 hours ago
At least until recently any introductory machine learning course would teach linear regression and clustering, the latter as an example of unsupervised learning.
sleepybrett 10 hours ago
Sure, but as a stepping stone.

There is no model here, there is no neural net.

morkalork 21 hours ago
Yes! AI is any sort of machine intelligence, and it's been around for far more than two decades; the 1980s even had their own "AI winter", after all.
Gud 19 hours ago
There is no intelligence here, only pattern matching.
kvakerok 19 hours ago
The same could be said about many people.
wizzwizz4 21 hours ago
You're only saying this because we're in a hype cycle. Circa 2018, there was no problem at all with calling this AI: in fact, it was normal.
kjkjadksj 21 hours ago
Back then we still called things image classifiers or machine learning, and when you said AI most people probably had an image of Arnold Schwarzenegger or Cortana flash in their mind.
sleepybrett 10 hours ago
it was not.
nostrademons 22 hours ago
keybored 22 hours ago
I thought that AGI covered that. AGI to my mind doesn’t have to surpass human thinking. It just has to be categorically the same as it (it can be less powerful, or more). It has to be general. A chess machine in a box which can’t do anything else is not general.[1]

I’ve always been fine with calling things AI even though they are all jumbles of stats nonsense that wouldn’t be able to put their own pants on. Does a submarine swim? No, but that’s just the metaphor that the most vocal adherents are wedded to (at the hips). The metaphor doesn’t harm me. And to argue against it is like Chomsky trying to tell programming language designers that programming languages being languages is just a metaphor.

[1] EDIT: In other words it can be on the level of a crow. Or a dog. Just something general. Something that has some animalistic-like intelligence.

nostrademons 22 hours ago
I think the point of the Wikipedia article is that human categories are flexible, and they get redefined to suit human ego needs regardless of what's happening in the objective outside world.

Say that you have a closed system that largely operates without human intervention - for example, the current ad fraud mess where you have bots pretending to be humans that don't actually exist to inflate ad counts, all of which gets ranked higher by the ML ad models because it inflates their engagement numbers, but it's all to sell products that don't really work anyway so that the company can post better revenue numbers to Wall Street and unload the shares on prop trading bots and index funds that are all investing algorithmically anyway. On some level, this is a form of "intelligence" even though it doesn't put pants on. For that matter, many human societies don't put pants on, nor do my not-quite-socialized preschool kids. It's only the weight of our collective upbringing, coupled with a desire to feel intelligent, that leads us to equate putting pants on with intelligence. Plenty of people don't put pants on and consider themselves intelligent as well. And the complexity of what computers actually do do is often well beyond the complexity of what humans do.

I often like to flip the concept of "artificial intelligence" on its head and instead think about "natural stupidity". Sure, the hot AI technologies of the moment are basically just massive matrix computations that statistically predict what's likely to come next given all the training data they've seen before. Humans are also basically just massive neural networks that respond to stimulus and reward given all the training data they've seen before. You can make very useful predictions about, say, what is going to get a human to click on a link or open their wallet using these AI technologies. And since we too are relatively predictable human machines that are focused on material wealth and having enough money to get others to satisfy our emotions, this is a very useful asset to have.

keybored 21 hours ago
> I think the point of the Wikipedia article is that human categories are flexible, and they get redefined to suit human ego needs regardless of what's happening in the objective outside world.

I know what the point is. Of course computer scientists that make AI (whatever that means) want to be known for making Intelligence. And they get cross when the marvel of yesterday becomes a humdrum utility.

As you can see this part cuts both ways:

> > and they get redefined to suit human ego needs

> Say that you have a closed system that largely operates without human intervention - for example, the current ad fraud mess where you have bots pretending to be humans that don't actually exist to inflate ad counts, all of which gets ranked higher by the ML ad models because it inflates their engagement numbers, but it's all to sell products that don't really work anyway so that the company can post better revenue numbers to Wall Street and unload the shares on prop trading bots and index funds that are all investing algorithmically anyway. On some level, this is a form of "intelligence" even though it doesn't put pants on. For that matter, many human societies don't put pants on, nor do my not-quite-socialized preschool kids. It's only the weight of our collective upbringing, coupled with a desire to feel intelligent, that leads us to equate putting pants on with intelligence. Plenty of people don't put pants on and consider themselves intelligent as well. And the complexity of what computers actually do do is often well beyond the complexity of what humans do.

I bet your AI of choice could write a thesis on how putting pants on is a stupid social construct. Yet if it is incapable of doing it it would just be a bunch of hot air.

> I often like to flip the concept of "artificial intelligence" on its head and instead think about "natural stupidity".

This philosophy tends to go with the territory.

> Sure, the hot AI technologies of the moment are basically just massive matrix computations that statistically predict what's likely to come next given all the training data they've seen before. Humans are also basically just massive neural networks that respond to stimulus and reward given all the training data they've seen before.

“Basically” doing some heavy lifting here.

This is obviously false. We would have gone extinct pretty much immediately if we had to tediously train ourselves from scratch. We have instincts as well.

“But that’s just built-in training.” Okay, now we’re back to it not basically being stimulus-responses to training data they’ve seen before. So what’s the point? When it’s not basically just that.

> You can make very useful predictions about, say, what is going to get a human to click on a link or open their wallet using these AI technologies. And since we too are relatively predictable human machines that are focused on material wealth and having enough money to get others to satisfy our emotions, this is a very useful asset to have.

Yes. Humans have wants and needs and act in ways consistent with cause and effect. E.g. as the clueless “consumer subject” against billions of dollars of marketing money and AI owned by those same marketing departments.

Amazingly: Humans are what you allow them to be.

We could treat all humans according to Skinner Box theory. We could treat them as if Skinner’s stimulus-response theories are correct and only allow them to act inside that framework. That would (again, amazingly) confirm that Skinner was right all along.

Any organism can express itself maximally only in a maximally free setting. A free dog is a dog; a chained human might only be a dog.

The only difference is that humans have words that they can express through their mouthholes about what kind of future they want. If they want to be humans (i.e. human ego needs, sigh) or if they want to be the natural stupidity subjects of the artificial intelligence.

Or they don’t care because they don’t think AI will ever be able to put its pants on.

mportela 22 hours ago
I had heard that quote many times but never knew it's called the "AI effect". Thanks!
A4ET8a8uTh0 22 hours ago
Maybe? I am currently going through 'Artificial Intelligence: A Modern Approach' by Russell & Norvig, and from a historical perspective alone, it seems vision would qualify.

It is just that the language drifted a little, the way it did with "cyber" meaning something else to post-90s kids. So now AI seems to be mostly associated with LLMs, but not that long ago, AI seemed to cover almost any use of an algorithm.

I am not an expert in the field at all. I am just looking at stuff for personal growth.

RcouF1uZ4gsC 17 hours ago
I think at one time, a mechanical calculator would have been considered AI
IshKebab 18 hours ago
No, even before the current AI era classical computer vision was not considered to be "AI"... because it isn't. That's just a fact.
gamblor956 18 hours ago
Computer vision is a field of AI. But this is just an algorithm without any sort of learning or training process.
sangnoir 18 hours ago
ML =/= AI.

Machine learning was widely considered to be a subset of AI, until it got a big resurgence almost 2 decades ago. Now some people use the terms interchangeably.

pj_mukh 17 minutes ago
Handcrafted CV algorithms and this level of autonomy are textbook AI; they're just not machine learning.
moffkalast 20 hours ago
Deep learning is just a subset of AI, which has officially been a thing since 1956. A chess algorithm is smarter than any human, yet it's just classical search.
ithkuil 19 hours ago
It's just that the "AI" word is no longer taboo
RayVR 13 hours ago
What is AI?
snapcaster 18 hours ago
I'm so tired of this argument. AI is a blurry term as it's used in the world. Who the fuck cares if this is "officially AI" or not? Can we just stop having this discussion?
inexcf 23 hours ago
Everyone is thinking the same thing reading that headline. In a stroke of comedic genius the link still says "bodies" instead of "people".
scintill76 23 hours ago
I'm amazed this problem isn't fixed in every CMS, or at least in publishing teams' processes, by now. (The problem is that the link slug is generated from the first title and doesn't update when the title is updated.)
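A tiny sketch of that failure mode (hypothetical CMS logic, not any particular product; the titles are illustrative): the slug is derived from the title once, at creation, and never recomputed when the title is edited.

  import re

  def slugify(title: str) -> str:
      # Lowercase, replace runs of non-alphanumerics with hyphens.
      return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

  class Article:
      def __init__(self, title: str):
          self.title = title
          self.slug = slugify(title)   # frozen at creation time

      def edit_title(self, new_title: str) -> None:
          self.title = new_title       # slug is NOT regenerated -> stale URL

  a = Article("Drone AI Finds Bodies in the Hills")
  a.edit_title("Drone AI Finds People in the Hills")
  print(a.slug)  # still "drone-ai-finds-bodies-in-the-hills"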
0xffff2 19 hours ago
I'm not a web developer at all, but I thought keeping the URL was intentional for SEO reasons.
Sparkyte 9 hours ago
I have always wanted to get into robotics. Maybe my next evolution in stuff is robotics. I'm starting to get tired of big corporate software engineering.

These silly things are neat. I'm also really interested in the snake/worm robots that dig to find people and inflate to move rubble.

ehnto 6 hours ago
I had a similar thought years ago. Recently I built a CNC machine, and it was a great blend of hands-on, practical building work and software work. The software domain being a new challenge was fun too. It felt great to exercise existing skills in a new domain, to reach a totally novel goal for me.

Now I barely use it, which does suggest that building it is what I really wanted to do all along and that kind of validates that robotics is probably a hobby I'd enjoy.

tay_bridge 13 hours ago
As somebody familiar with the area described in the article, I cannot fathom how someone could be lost, and conscious (the article mentions messages were exchanged), for so long.
modhash 8 hours ago
Does anyone know if there is something open source that is similar to this?
x62Bh7948f 7 hours ago
WALDO: Whereabouts Ascertainment for Low-Lying Detectable Objects

https://news.ycombinator.com/item?id=41723311

idunnoman1222 19 hours ago
It’s a long article and I’m on mobile. Do they link the code or not?
schiffern 19 hours ago
"MRMap is free for use by Mountain Rescue Teams. While being free it is NOT open source." http://www.mrmap.org.uk/index.php/introduction

Relevant changelogs are 6.0a and 6.0b. http://mrmap.org.uk/forum/viewforum.php?f=24

westurner 4 hours ago
{Code-and-Response, Call-for-Code}/DroneAid : "DroneAid: A Symbol Language and ML model for indicating needs to drones, planes" (2010) https://github.com/Code-and-Response/DroneAid .. https://westurner.github.io/hnlog/#story-22707347 :

CORRECTION: All but one of the DroneAid Symbol Language Symbols are drawn within upward pointing triangles.

Is there a simpler set of QR codes for the ground that could be made with sticks or rocks or things the wind won't bend?

catskul2 23 hours ago
> The weather was unusually mild for the season, and Kelly thought he might even have time to “bag” a second Munro,

I really hate when people use very uncommon terms without defining them (or sometimes even people's names).

It's not that I couldn't make a guess based on context, but it's distracting, and I feel like my eyes must have skipped over something and I often keep going back over the text to see what I must have missed reading.

I imagine this is sometimes caused by sloppy editing, especially when they refer to a last name of a person who has yet to be introduced in the article, but I think sometimes it's a deliberate choice and I object.

infecto 22 hours ago
Honest question: what was the most confusing part for you? I am guessing "bag", as that one might be more obscure, but even then I think it's guessable from context, though maybe a struggle for non-native English speakers. Munro seems difficult, but your selective quote makes it worse, imo.

"...a second Munro, as the Scottish mountains above 3,000 feet are known."

The opening paragraph describes him climbing/hiking a mountain in Scotland. "His plan was to climb Creise, a 1,100-meter-high peak overlooking Glen Etive...". Which then leads into him trying to "bag" a second one.

Just a counterpoint that it does not feel like sloppy editing at all. I struggle to see what would be difficult here for native speakers.

more_corn 21 hours ago
[flagged]
infecto 21 hours ago
> Who bags a mountain? A tortured metaphor if I’ve ever heard one. And 90% of English speakers don’t know what a Munro is. I’ve been to Scotland and never heard the word.

Peak bagging is common in that community, but "to bag" something is quite common in native English, or at least common enough that it's in the Oxford dictionary. It's hard for me to see a native speaker struggle with this; the connection can be made just from the prior paragraph.

They define what a Munro is in the same sentence. Are you here to just argue? I had to go back and add your post as a quote as I am not sure how someone can miss the literal definition within the sentence. "Munro, as the Scottish mountains above 3,000 feet are known". Is that difficult for you to read and understand?

LorenPechtel 20 hours ago
I hike but I'm not a peak bagger. But the first time I encountered the term I found it completely obvious what it meant.

Having only spent a few days of my life in Scotland I didn't know "Munro" but the article defined it.

arrowleaf 22 hours ago
Peak bagging is a very common term in the outdoor sports world. This complaint is like a non-tech person reading a Wired article that mentions JSON and complaining that there's no explainer.
infecto 22 hours ago
Which I assume is an extension of the usage in hunting, where to "bag" an animal is to catch or kill it.
closewith 22 hours ago
No, both stem from literally and figuratively putting things in a bag. You can bag anything: a kiss, an award, an item, a person, an accomplishment, etc.
infecto 21 hours ago
Do you have any source for your disagreement? Last time I checked, the phrasing as it applies to a game bag goes quite far back, which would hint at its usage in the later examples that you provided.

"Many figurative senses, such as the verb meaning "to kill game" (1814) and its colloquial extension to "catch, seize, steal" (1818) are from the notion of the game bag (late 15c.) into which the product of the hunt was placed. This also probably explains modern slang in the bag "assured, certain" (1922, American English). To be left holding the bag (and presumably nothing else), "cheated, swindled" is attested by 1793." https://www.etymonline.com/search?q=bag

aspenmayer 20 hours ago
You're both right in a way, in that you're able to reason about the word from usage and context, but it's a separate meaning entirely, #5 below.

From Oxford Dictionary of English

verb (bags, bagging, bagged) [with object]

1 put (something) in a bag: customers bagged their own groceries | we bagged up the apples | once you've raked the leaves, bag them up right away so that they don't get wet.

2 succeed in killing or catching (an animal): Mike bagged nineteen cod. • succeed in securing (something): we've bagged three awards for excellence | get there early to bag a seat in the front row.

3 [no object] (of clothes, especially trousers) form loose bulges due to wear: these trousers never bag at the knee.

4 North American English informal fit (a patient) with an oxygen mask or other respiratory aid.

5 (bags or bags I) British English informal a child's expression used to make a claim to something: bags his jacket.

6 North American English informal abandon or give up on: she ought to just bag this marriage and get on with her life.

7 informal, mainly Australian and New Zealand English criticize: the fans should be backing him not bagging him.

closewith 19 hours ago
Number 5, bagsying, is subtly different. It's a claim to something, like dibs in the US.

No, both the GP and I are referring to number two, gaining something and literally or figuratively putting it in a bag. It applies equally to game and SaaS revenue and everything in between.

aspenmayer 17 hours ago
I listed that definition also, to differentiate it from the one regarding the mountains.
lukeinator42 22 hours ago
It's honestly even closer to a non-tech person complaining about the word upload being used without an explainer.
daemonologist 18 hours ago
The verb to "bag" may be more common in this context, but it's not exclusive to it, nor very uncommon, at least in North America. You might say "they bagged a record in the 4x400m relay" or "we bagged the contract", or another form like "that objective is in the bag." I think it's etymologically derived from hunting (literally putting game in a bag), but at this point it's just a word.
hluska 22 hours ago
They introduce Charlie Kelly in the previous paragraph, explain what a Munro is right after that, and use quotes around "bag". What else could you expect? "Bag" is extremely common in many industries, and they defined both of the other terms.

You just ripped on an editor for absolutely no reason.