Firing programmers for AI is a mistake (defragzone.substack.com)
741 points by frag 29 days ago | 146 comments
dham 29 days ago
There's such a huge disconnect between people reading headlines and developers who are actually trying to use AI day to day in good faith. We know what it is good at and what it's not.

It's incredibly far away from making any significant change in a mature codebase. In fact I've become so bearish on the technology after trying to use it for this that I'm thinking there's going to have to be some other breakthrough, something other than LLMs. It just doesn't feel right around the corner. Now, completing small chunks of mundane code, explaining code, doing very small mundane changes? Very good at those.

csmpltn 29 days ago
I think that LLMs are only going to make people with real tech/programming skills much more in demand, as younger programmers skip straight into prompt engineering and never develop themselves technically beyond the bare minimum needed to glue things together.

The gap between people with deep, hands-on experience who understand how a computer works and prompt engineers will become insanely wide.

Somebody needs to write that operating system the LLM runs on. Or your bank's backend system that securely stores your money. Or the mission critical systems powering this airplane you're flying next week... to pretend like this will all be handled by LLMs is so insanely out of touch with reality.

hombre_fatal 29 days ago
I think we who are already in tech have this gleeful fantasy that new tools impair newcomers in a way that will somehow serve us, the incumbents.

But in reality pretty much anyone who enters software starts off cutting corners just to build things instead of working their way up from nand gates. And then they backfill their knowledge over time.

My first serious foray into software wasn't even Ruby. It was Ruby on Rails. I built some popular services without knowing how anything worked. There was always a gem (lib) for it. And Rails especially insulated the workings of anything.

An S3 avatar upload system was `gem install carrierwave` and then `mount_uploader :avatar, AvatarUploader`. It added an avatar <input type="file"> control to the User form.

But it's not satisfying to stay at that level of ignorance very long, especially once you've built a few things, and you keep learning new things. And you keep wanting to build different things.

Why wouldn't this be the case for people using LLMs like it was for everyone else?

It's like presuming that StackOverflow will keep you as a question-asker your whole life when nobody here would relate to that. You get better, you learn more, and you become the question-answerer. And one day you sheepishly look at your question history in amazement at how far you've come.

lolinder 29 days ago
> Why wouldn't this be the case for people using LLMs like it was for everyone else?

I feel like it's a bit different this time because LLMs aren't just an abstraction.

To make an analogy: Ruby on Rails serves a similar role as highways—it's a quick path to get where you're going, but once you learn the major highways in a metro area you can very easily break out and explore and learn the surface streets.

LLMs are a GPS, not a highway. They tell you what to do and where to go, and if you follow them blindly you will not learn the layout of the city, you'll just learn how to use the GPS. I find myself unable to navigate a city by myself until I consciously force myself off of Google Maps, and I don't find that having used GPS directions gives me a leg up in understanding the city—I'm starting from scratch no matter how many GPS-assisted trips I've taken.

I think the analogy helps both in that the weaknesses in LLM coding are similar and also that it's not the end of the world. I don't need to know how to navigate most cities by memory, so most of the time Google Maps is exactly what I need. But I need to recognize that leaning on it too much for cities that I really do benefit from knowing by heart is a problem, and intentionally force myself to do it the old-fashioned way in those cases.

abeppu 29 days ago
I think also a critical weakness is that LLMs are trained on the code people write ... and our code doesn't annotate what was written by a human and what was suggested by a tool. In your analogy, this would be like if your sat nav system suggests that you turn right where other people have turned right ... because they were directed to turn by their sat nav.
thedanbob 29 days ago
In fact, I'm pretty sure this already happens and the results are exactly what you'd expect. Some of the "alternate routes" Google Maps has suggested for me in the past are almost certainly due to other people making unscheduled detours for gas or whatever, and the algorithm thinks "oh this random loop on a side street is popular, let's suggest it". And then anyone silly enough to follow the suggestion just adds more signal to the noise.
ahi 29 days ago
Google Maps has some strange feedback loops. I frequently drive across the Bay Bridge to Delaware beaches. There are 2 or 3 roughly equal routes with everyone going to the same destination. Google will find a "shorter" route every 5 minutes. Naturally, Maps is smart enough to detect traffic, but not smart enough to equally distribute users to prevent it. It creates a traffic jam on route A, then tells all the users to use route B which causes a jam there, and so on.
zahlman 28 days ago
It hadn't even occurred to me that there are places where enough people are using Google Maps while driving to cause significant impact on traffic patterns. Being car-free (and smartphone-free) really gives a different perspective.
aucisson_masque 28 days ago
Are you Theodore Kaczynski's ghost? :)

Seriously, what job do you do that allows you to not have a smartphone?

lolinder 28 days ago
Not OP: I have a smartphone for my own personal use but don't use it for work at all. If my employer wants me to use specific phone apps they can provide one for me like they do a laptop.
aucisson_masque 27 days ago
Yeah, so you've got a smartphone; the dude was saying he doesn't have a smartphone.

Having no smartphone makes it a real pain in the ass to do most things nowadays; a job is one of the biggest, but it's not the only one. Even to connect to my bank account on my computer I need a phone.

zahlman 27 days ago
Cell phones exist which are not smartphones, and everyone who uses phones for 2FA is happy to send 2FA codes to a "dumb" phone. They only have your phone number, after all.
aucisson_masque 27 days ago
Well, my bank doesn't give me the possibility to use SMS codes. It's obligatory to use the app on a verified phone and enter a PIN.
conaclos 28 days ago
This is also problematic in cases where navigation apps are not updated and drivers start taking routes they are no longer authorized to take.
thaumasiotes 29 days ago
Long before any use of LLMs, OsmAnd would direct you, if you were driving past Palo Alto, to take a congested offramp to the onramp that faced it across the street. There is no earthly reason to do that; just staying on the freeway is faster and safer.

So it's not obvious to me that patently crazy directions must come from watching people's behavior. Something else is going on.

smackeyacky 29 days ago
In Australia the routes seem to be overly influenced by truck drivers, at least out of the cities. Maps will recommend you take some odd town bypass when just going down Main Street is easier.

I imagine what you saw is some other frequent road users making choices that get ranked higher.

therein 29 days ago
> if you were driving past Palo Alto, to take a congested offramp to the onramp that faced it across the street

If you're talking about that left turn into Alma with the long wait instead of going into the Stanford roundabout and then the overpass, it still does that.

HomeDeLaPot 28 days ago
I've seen this type of thing with OsmAnd too. My hypothesis is that someone messed up when drawing the map, and made the offramp an extension of the highway. But I haven't actually verified this.
RicoElectrico 29 days ago
OsmAnd doesn't use traffic data. You can enable traffic map layers by feeding a reverse-engineered URL, though.
thaumasiotes 28 days ago
I'm not talking about use of traffic data. In the abstract, assuming you are the only person in the world who owns a car, that route would be a very bad recommendation. Safety concerns would be lower, but still, there's no reason you'd ever do that.
bobthepanda 28 days ago
Safety concerns would probably actually be higher, since the most dangerous places on roads are areas where traffic crosses and conflicts (the road you cross to get from the offramp to the onramp).
foobarchu 27 days ago
An example I notice frequently on interstate highways that go through large cities is Google Maps suggesting you get off at an exit, take the access road and skip an exit or two, then get back on at the next ramp. It does it especially often during rush hour traffic. Denver is where I've noticed it the most, but it's not limited to that area by any means.
pishpash 29 days ago
That already happens. Maps directs you to odd nonsense detours frequently enough now that you get better results by overriding the machine. It's going the way of web search.
thaumasiotes 29 days ago
> It's going down the way of web search.

This is an interesting idea. There's an obvious force directing search to get worse, which is the adversarial desire of certain people to be found.

But no such force exists for directions. Why would they be getting worse?

fuzzzerd 28 days ago
Probably my cynicism, but it's that the more stores you drive past, the more likely you are to stop off and buy something.
adityamwagh 28 days ago
Exactly! This is an amazing observation and analogy.
nuancebydefault 29 days ago
The problem now is that the LLM GPS will lead you to the wrong place once a day on average, and then you still need to either open the map and study where you are and figure out the route, or refine the destination address and pray it will bring you to the correct place. Such a great analogy!
gausswho 29 days ago
Strangely this reminds me of exactly how you would navigate in parts of India before the Internet became ubiquitous.

The steps were roughly: Ask a passerby how to get where you want to go. They will usually confidently describe the steps, even if they didn't speak your language. Cheerfully thank them and proceed to follow the directions. After a block or two, ask a new passerby. Follow their directions for a while and repeat. Never follow the instructions fully. This triangulation served to naturally filter out faulty guidance and hucksters.

Never thought that would one day remind me of programming.

nuancebydefault 28 days ago
Indeed. My experience in India is that people are friendly and helpful and try to help you in a very convincing way, even when they don't know the answer. Not so far off the LLM user experience.
cozzyd 29 days ago
Try asking your GPS for the Western blue line stop on the Chicago L. (There are two of them and it will randomly pick one)
epcoa 29 days ago
What is meant by “your GPS” here? With Google Maps and Apple Maps it consistently picks the closest one (this being within minutes of both but much closer to one), which seems reasonable. Maybe not ideal, compared to when either of these apps brings up a disambiguation prompt for a supermarket chain or similar, but I'm not witnessing randomness.
nuancebydefault 29 days ago
To be clear, above I was talking about LLMs. Randomness in real GPS usage is something I have never encountered in 15 years or so of using Google Maps. 99 percent of the time it brings/brought me exactly where I want to be, even around road works or traffic jams. It seems some people have totally different experiences, so odd.
cozzyd 28 days ago
Perhaps they have improved their heuristic for this one, though perhaps it was actually Uber/Lyft that randomly picks one when given as a destination...
bgoated01 29 days ago
I'm the kind of guy who decently likes maps, and I pay attention to where I'm going and also to the map before, during, and after using a GPS (Google Maps). I do benefit from Google Maps in learning my way around a place. It depends on how you use it. So if people use LLMs to code without trying to learn from it and just copy and paste, then yeah, they're not going to learn the skills themselves. But if they are paying attention to the answers they are getting from the LLMs, adjusting things themselves, etc. then they should be able to learn from that as well as they can from online code snippets, modulo the (however occasional) bad examples from the LLM.
Terr_ 28 days ago
> I do benefit from Google maps in learning my way around a place.

Tangent: I once got into a discussion with a friend who was surprised I had the map (on a car dashboard display) locked to North-is-up instead of relative to the car's direction of travel.

I agreed that it's less-convenient for relative turn decisions, but rationalized that setting as making it easier to learn the route's correspondence to the map, and where it passed relative to other landmarks beyond visual sight. (The issue of knowing whether the upcoming turn was left-or-right was addressed by the audio guidance portion.)

harpiaharpyja 27 days ago
It's neat to hear that I'm not the only one who does this. It makes a night-and-day difference for me.

When the map is locked north, I'm always aware of my location within the larger area, even when driving somewhere completely new.

Without it, I could never develop any associations between what I'm seeing outside the windshield and a geospatial location unless I was already familiar with the area.

jddj 29 days ago
> LLMs are a GPS, not a highway.

I love these analogies and I think this one is apt.

To adapt another which I saw here to this RoR thread: if you're building furniture, then LLMs are power tools while frameworks are IKEA flatpacks.

bloomingkales 29 days ago
It’s the best analogy of the month. I don’t think cab drivers today are the same as the cab drivers of the past who knew the city by heart.

So, it’s been a privilege, gentlemen, writing apps from scratch with you.

walks off the programming Titanic with a giant violin

rsanek 21 days ago
One small change to using a GPS radically impacts how much you know about the area -- do you use the default, always-forward view, or do you use the slightly-less-usable always-north setting? If you use the latter, you will find that you learn far more about the city layout while still benefitting from the GPS functionality itself.

I think LLMs are similar. Sure, you can vibe code and blindly accept what the LLM gives you. Or, you can engage with it as if pair programming.

JumpCrisscross 29 days ago
> LLMs are a GPS, not a highway. They tell you what to do and where to go

It still gives you code you can inspect. There is no black box. Curious people will continue being curious.

lolinder 28 days ago
The code you can inspect is analogous to directions on a map. Some have noted in this thread that for them directions on a map actually do help them learn the territory. I have found that they absolutely do not help me.

That's not for lack of curiosity, it seems to be something about the way that I'm wired that making decisions about where to navigate helps me to learn in a way that following someone else's decisions does not.

JumpCrisscross 28 days ago
You have to study the map to learn from it. Zoom in and out on surroundings, look up unfamiliar landmarks, et cetera. If you just follow the GPS or copy paste the code no, you won’t learn.
zahlman 28 days ago
The problem is that coders taking this approach are predominantly ones who lack the relevant skill - ones who are taking that approach because they lack that skill.
Ma8ee 28 days ago
The ones that until now copied and pasted everything from Stack Overflow.
startupsfail 29 days ago
For now LLMs in coding are more like an LLM version of a GPS, not the GPS itself.

Like imagine you’d ask turn-by-turn directions from an LLM and then follow these directions ;). That’s how it feels when LLMs are used for coding.

Sometimes amazing, sometimes generated code is a swamp of technical debt. Still, a decade ago it was completely impossible. And the sky is the limit.

askonomm 29 days ago
Difference here being that you actually learned the information about Ruby on Rails, whereas the modern programmer doesn't learn anything. They are but a clipboard-like vessel that passes information from an LLM onto a text editor, rarely ever actually reading and understanding the code. And if something doesn't work, they don't debug the code, they debug the LLM for not getting it right. The actual knowledge here never gets stored in the brain, making any future learning or evolving impossible.

I've had to work with developers that are over dependent on LLM's, one didn't even know how to undo code, they had to ask an LLM to undo. Almost as if the person is a zombie or something. It's scary to witness. And as soon as you ask them to explain their rationale for the solution they came up with - dead silence. They can't, because they never actually _thought_.

Terr_ 28 days ago
> I've had to work with developers that are over dependent on LLM's, one didn't even know how to undo code, they had to ask an LLM to undo.

Some also get into a loop where they ask the LLM to rewrite what they have, and the result ends up changing in a subtle undetected way or loses comments.

Kerrick 28 days ago
Difference here being that you actually learned the information about computers, whereas the modern programmer doesn't learn anything. They are but a typist-like vessel that passes information from an architect onto a text editor, rarely ever actually reading and understanding the compiled instructions. And if something doesn't work, they don't debug the machine code, they complain about the compiler for not getting it right. The actual knowledge here never gets stored in the brain, making any future learning or evolving impossible.

I've had to work with developers that are over dependent on high-level languages. One didn't even know how to trace execution in machine code; they had to ask a debugger. Almost as if the person is a zombie or something. It's scary to witness. And as soon as you ask them to explain their memory segmentation strategy - dead silence. They can't, because they never actually _thought_.

zahlman 28 days ago
No, it really isn't at all comparable like that (and other discussion in the thread makes it clear why). Users of high-level languages clearly still do write code in those languages, that comes out of their own thought rather than e.g. the GoF patterns book. They don't just complain about compilers; they actually do debug the high-level code, based on the compiler's error messages (or, more commonly, runtime results). When people get their code from LLMs, however, you can see very often that they have no idea how to proceed when the code is wrong.

Debugging is a skill anyone can learn, which applies broadly. But some people just don't. People who want correct code to be written for them are fundamentally asking something different than people who want writing correct code to be easier.

floatrock 28 days ago
Abstractions on top of abstractions on top of turtles...

It'll be interesting to see what kinds of new tools come out of this AI boom. I think we're still figuring out what the new abstraction tier is going to be, but I don't think the tools to really work at that tier have been written yet.

thechao 29 days ago
I think you're right; I can see it in the accelerating growth curve of my good Junior devs; I see grandOP's vision in my bad Junior devs. Optimistically, I think this gives more jr devs more runway to advance deeper into more sophisticated tech stacks. I think we're gonna need more SW devs, not fewer, as these tools get better: things that were previously impossible will be possible.
gorjusborg 29 days ago
> I think we're gonna need more SW devs, not fewer

Code is a liability. What we really care about is the outcome, not the code. These AI tools are great at generating code, but are they good at maintaining the generated code? Not from what I've seen.

So there's a good chance we'll see people using tools to generate a ton of instant legacy code (because nobody in house has ever understood it) which, if it hits production, will require skilled people to figure out how to support it.

kmoser 29 days ago
We will see all of it: lots of poor code, lots of neutral code (LLMs cranking out reasonably well written boilerplate), and even some improved code (by devs who use LLMs to ferret out inefficiencies and bugs in their existing, human-written codebase).

This is no different from what we see with any tool or language: the results are highly dependent on the experience and skills of the operator.

gorjusborg 28 days ago
You've missed my core point if you think those aren't different. Before AI there was always someone who understood the code/system.

In a world where people are having machines build the entire system, there is potentially no human that has ever understood it. Now, we are talking about a yet-unseen future; I have yet to see a real-world system that did not have a human driving the design. But maintaining a system that nobody has ever understood could be ultra-hardmode.

kmoser 28 days ago
Humans will always have a hand in the design because they need to explain the real-world constraints to the AI. Sure, the code it produces may be complex, but if the AI is really as smart as you're claiming it will eventually be, then it will also have the ability to explain how the code works in plain English (or your human language of choice). Even today, LLMs are remarkably good at summarizing what code does.

Philosophical question: how is LLM-produced code that nobody has ever understood any different from human-written legacy code that nobody alive today understands?

gorjusborg 27 days ago
> Philosophical question: how is LLM-produced code that nobody has ever understood any different from human-written legacy code that nobody alive today understands?

- There is zero option of paying an obscene amount of money to find the person and make the problem 'go away'

- There is a non-zero possibility that the code is not understandable by any developer you can afford. By this I mean that the system exhibits the desired behavior, but is written in such a way that only someone like Mike Pall* can understand.

* Mike Pall is a robot from the future

duderific 29 days ago
> Code is a liability

Another way I've seen this expressed, which resonates with me, is "All code is technical debt."

intelVISA 28 days ago
As one HNer aptly put it: coding is to SWE as cutting is to surgery.
podunkPDX 28 days ago
These AI tools are also not good at answering PagerDuty to fix a production problem that is a result of the code they imagined.
ivanbalepin 29 days ago
> Code is a liability

This is so true! Actual writing of the code is such a small step in overall running of a typical business/project, and the less of it the better.

zozbot234 29 days ago
> more sophisticated tech stacks

Please don't do this, pick more boring tech stacks https://news.ycombinator.com/item?id=43012862 instead. "Sophisticated" tech stacks are a huge waste, so please save the sophisticated stuff for the 0.1% of the time where you actually need it.

marcosdumay 29 days ago
Sophistication doesn't imply any increase or decrease in "boringness".
zozbot234 29 days ago
The dictionary definition of 'sophisticated' is "changed in a deceptive or misleading way; not genuine or pure; unrefined, adulterated, impure." Pretty much the polar opposite of "boring" in a technology context.
whstl 29 days ago
No, this is not "the" dictionary definition.

This definition is obsolete according to Wiktionary: https://en.wiktionary.org/wiki/sophisticated (Wiktionary is the first result that shows when I type your words)

rcxdude 29 days ago
That is an extremely archaic definition that's pretty far from modern usage, especially in a tech context
Timwi 29 days ago
No clue what dictionary you looked at but this is not at all what dictionaries actually say.
ungreased0675 28 days ago
An edge case in startups is something that provides a competitive advantage. When you run a startup, you have to do something different from the way everyone else does, or you’ll get the same results everyone else does. My theory is that some part of a startup’s operations should be cutting edge. HR processes, programming stack, sales cycle, something.
sophacles 29 days ago
That's great advice when you're building a simple CRUD app - use the paved roads for the 10^9th instance.

It's terrible advice when you're building something that will cause that boring tech to fall over. Or when you've reached the limits of that boring tech and are still growing. Or when the sophisticated tech lowers CPU usage by 1% and saves your company millions of dollars. Or when that sophisticated tech saves your engineers hours and your company 10s of millions. Or just: when the boring tech doesn't actually do the things you need it to do.

zozbot234 29 days ago
"Boring" tech stacks tend to be highly scalable in their own right - certainly more so than the average of trendy newfangled tech. So what's a lot more likely is that the trendy newfangled tech will fail to meet your needs and you'll be moving to some even newer and trendier tech, at surprisingly high cost. The point of picking the "boring" choice is that it keeps you off that treadmill.
sophacles 29 days ago
I'm not disagreeing with anything you said here - reread my comment.

Sometimes you want to use the sophisticated shiny new tech because you actually need it. Here's a recent example from a real situation:

The Linux kernel (a boring tech these days) has a great networking stack. It's choking on packets that need to be forwarded, and you've already tuned all the queues and the CPU affinities and timers and polling. Do you -

a) buy more servers and network gear to spread your packets across more machines? (boring and expensive and introduces new ongoing costs of maintenance, datacenter costs, etc).

b) Write a kernel module to process your packets more efficiently? (a boring, well known solution, introduces engineer costs to make and maintain as well as downtime because the new shiny module is buggy?)

c) Port your whole stack to a different OS (risky, but choosing a different boring stack should suffice... if you're certain that it can handle the load without kernel code changes/modules).

d) Write a whole userspace networking system (trendy and popular - your engineers are excited about this, expensive in eng time, risks lots of bugs that are already solved by the kernel just fine, have to re-invent a lot of stuff that exists elsewhere)

e) Use ebpf to fast path your packets around the kernel processing that you don't need? (trendy and popular - your engineers are excited about this, inexpensive relative to the other choices, introduces some new bugs and stability issues til the kinks are worked out)

We sinned and went with (e). That new-fangled tech met our needs quite well - we still had to buy more gear but far less than projected before we went with (e). We're actually starting to reach limits of ebpf for some of our packet operations too so we've started looking at (d) which has come down in costs and risk as we understand our product and needs better.
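
(To give a concrete sense of (e): a minimal XDP sketch, illustrative only - not our code, and the bounce-IPv4-straight-back-out policy is invented for the example.)

    // Minimal XDP program: short-circuit packets around the kernel stack.
    // A real fast path would parse further and redirect via a device map
    // rather than blindly XDP_TX, but the shape is the same.
    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    SEC("xdp")
    int fastpath(struct xdp_md *ctx)
    {
        void *data     = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;
        struct ethhdr *eth = data;

        // Bounds check: the eBPF verifier rejects the program without it.
        if ((void *)(eth + 1) > data_end)
            return XDP_PASS;

        // Invented policy: send IPv4 frames straight back out the
        // receiving interface, never touching the kernel networking stack.
        if (eth->h_proto == bpf_htons(ETH_P_IP))
            return XDP_TX;

        return XDP_PASS; // everything else takes the normal kernel path
    }

    char _license[] SEC("license") = "GPL";

Attach it with something like `ip link set dev eth0 xdp obj fastpath.o sec xdp`, and matched packets never touch the kernel stack.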

I'm glad we didn't go the boring path - our budget wasn't eaten up with trying to make all that work and we could afford to build features our customers buy instead.

We also use postgres to store a bunch of user data. I'm glad we went the boring path there - it just works and we don't have to think about it, and that lack of attention has afforded us the chance to work on features customers buy instead.

The point isn't "don't choose boring". It's: blindly choosing boring instead of evaluating your actual needs and options from a knowledgeable place is unwise.

zozbot234 29 days ago
None of these seem all that 'trendy' to me. The real trendy approach would be something like leaping directly to a hybrid userspace-kernelspace solution using something like https://github.com/CloudNativeDataPlane/cndp and/or the https://www.kernel.org/doc/html/latest/networking/af_xdp.htm... addressing that the former is built on. Very interesting stuff, don't get me wrong there - but hardly something that can be said to have 'stood the test of time' like most boring tech has. (And I would include things like eBPF in that by now.)
sophacles 29 days ago
I have similar examples from other projects of using io_uring and af_xdp with similar outcomes. In 2020, when the ebpf decision was made, it was pretty new and trendy still too... in a few cases each of these choices required us to wait for deployment until some feature we chose to depend on landed in a mainline kernel. Things move a bit slower that far down the stack, so new doesn't mean "the js framework of the week", but it's still the trendy unproven thing vs the well-known path.

The point is still: evaluate the options for real - using the new thing because it's new and exciting is equally as foolish as using the boring thing because it's well-proven... if those are your main criteria.

itronitron 29 days ago
Today I learned that some tech stacks are sophisticated, I suppose those are for the discerning developer.
thomasfromcdnjs 29 days ago
I agree with this stance. Junior developers are going to learn faster than previous generations, and I'm happy for it.

I know that is confronting for a lot of people, but I think it is better to accept it, and spend time thinking about what your experience is worth. (A lot!)

Glawen 27 days ago
> Junior developers are going to learn faster than previous generations, and I'm happy for it.

How? Students now are handing in LLM homework left and right. They are not nurturing the resolve to learn. We are training a cohort of young people who will give up without trying hard and end up learning nothing.

zahlman 28 days ago
>Junior developers are going to learn faster than previous generations, and I'm happy for it.

I would have agreed, until I started seeing the kinds of questions they're asking.

csmpltn 29 days ago
> "But it's not satisfying to stay at that level of ignorance very long"

It's not about satisfaction: it's literally dangerous and can bankrupt your employer, cause immense harm to your customers and people at home, and make you unhirable as an engineer.

Let's take your example of "an S3 avatar upload system", which you consider finished after writing 2 lines of code and a couple of packages installed. What makes sure this can't be abused by an attacker to DDOS your system, leading to massive bills from AWS? What happens after an attacker abuses this system and takes control of your machines? What makes sure those avatars are "safe-for-work" and legal to host in your S3 bucket?

People using LLMs and feeling all confident about it are the equivalent of hobby carpenters after watching a DIY video on YouTube and building a garden shed over the weekend. You're telling me they're now qualified to go build buildings and bridges?

> "It's like presuming that StackOverflow will keep you as a question-asker your whole life when nobody here would relate to that."

I meet people like this during job interviews all of the time, if I'm hiring for a position. I can't tell you how many people with 10+ years of industry experience I've met recently who can't explain how to read data from a local file on the machine's file system.

sterlind 28 days ago
At present, LLMs are basically Stack Overflow with infinite answers on demand... of Stack Overflow quality and relevance. Prompting is the new Googling. It's a critical base skill, but it's not sufficient.

The models I've tried aren't that great at algorithm design. They're abysmal at generating highly specific, correct code (e.g. kernel drivers, consensus protocols, locking constructs.) They're good plumbers. A lot of programming is plumbing, so I'm happy to have the help, but they have trouble doing actual computer science.

And most relevantly, they currently don't scale to large codebases. They're not autonomous enough to pull a work item off the queue, make changes across a 100kloc codebase, debug and iterate, and submit a PR. But they can help a lot with each individual part of that workflow when focused, so we end up in the perverse situation where junior devs act as the machine's secretary, while the model does most of the actual programming.

So we end up de-skilling the junior devs, but the models still can't replace the principal devs and researchers, so where are the principal devs going to come from?

zahlman 28 days ago
>The models I've tried aren't that great at algorithm design. They're abysmal at generating highly specific, correct code (e.g. kernel drivers, consensus protocols, locking constructs.) They're good plumbers. A lot of programming is plumbing, so I'm happy to have the help, but they have trouble doing actual computer science.

I tend towards tool development, so this suggests a fringe benefit of LLMs to me: if my users are asking LLMs to help with a specific part of my API, I know that's the part that sucks and needs to be redesigned.

zahlman 28 days ago
>Why wouldn't this be the case for people using LLMs like it was for everyone else?

Because of the mode of interaction.

When you dive into a framework that provides a ton of scaffolding, and "backfill your knowledge over time" (guilty! using Nikola as an SSG has been my entry point to relearn modern CSS, for example), you're forced to proceed by creating your own loop of experimentation and research.

When you interact with an LLM, and use forums to figure out problems the LLM didn't successfully explain to you (about its own output), you're in chat mode the whole time. Even if people are willing to teach you to fish, they won't voluntarily start the lesson, because you haven't shown any interest in it. And the fish are all over the place - for now - so why would you want to learn?

>It's like presuming that StackOverflow will keep you as a question-asker your whole life when nobody here would relate to that.

Of course nobody on HN would relate to that first-hand. But as someone with extensive experience curating Stack Overflow, I can assure you I have seen it second-hand many times.

unyttigfjelltol 29 days ago
> But in reality pretty much anyone who enters software starts off cutting corners just to build things instead of working their way up from nand gates.

The article is right in a zoomed-in view (fundamental skills will be rare and essential), but in the big picture the critique in the comment is better (folks rarely start on nand gates). Programmers of the future will have less need to know code syntax the same way current programmers don't have to fuss with hardware-specific machine code.

As for the people who still do hardware-specific code: are they currently in demand? The marketplace is smaller, so results will vary and, as the article suggests, will probably be less satisfactory for the participant with the time-critical need or demand.

raducu 28 days ago
> The article is right in a zoomed-in view.

First of all, I think the problems of the industry were long overdue. It started with Twitter, which proved it can be done. AI just made it easier psychologically, because it's much easier to explore and modify existing code and not freak out: "omg, omg, omg, we've lost this guy and only he understands the code and we're so lost without him". AI just removes the incentives to hoard talent.

I also think of excel/spreadsheets and how it did in fact change accounting industry forever. Every claim the author makes about software developers could have been made about accounting after the advent of electronic spreadsheets.

I don't want to even get started on the huge waste and politics in the industry. I'm on the 3rd rewrite of a simple task that removes metrics in Grafana, which saves the team maybe $50 monthly. If the team was cut in half, I'm sure we'd simply not do half the bullshit "improvements" we do.

geodel 29 days ago
Great points. I see my journey from an offshore application support contractor to full-time engineer as one of learning a lot along the way. Along the journey I've seen folks who held good/senior engineering roles just stagnate or move to management roles.

Industry is now large enough to have all sort of people. Growing, stagnating, moving out, moving in, laid off, retiring early, or just plain retiring etc.

markisus 28 days ago
This is a great point. I remember my first computer programming class was Java. The teacher said “just type public static void main(String[] args) at the top”. I asked why and he said it didn’t matter for now, just memorize that part. That was great advice. At that point it was more important to get a feel for how computers behave and how programs are structured on a high level. I just kept typing that cryptic line mindlessly on top of all my programs so that I could get to the other stuff. Only many months later did I look into the mysterious line and understand what all the keywords meant.

It’s funny now that I haven’t programmed Java for more than a decade and the “public static void main” incantation is still burned into my memory.

amanda99 29 days ago
I agree, and I also share your experience (guess I was a bit earlier with PHP).

I think what's left out though is that this is the experience of those who are really interested and for whom "it's not satisfying" to stay there.

As tech has turned into a money-maker, people aren't doing it for the satisfaction, they are doing it for the money. That appears to cause more corner cutting and less learning what's underneath instead of just doing the quickest fix that SO/LLM/whatever gives you.

pipes 28 days ago
I'm not so sure. I think a junior dev on my team might be being held back by AI; he's good at using it. However, he was really struggling to do something very basic. In my view he just needs to learn that syntax and play around with it in a throwaway console app. But I think AI is a crutch that may distract from doing that. Then again, it is utterly fantastic at explaining small bits of code, so it could be an excellent teacher too.
britzkopf 29 days ago
Who the hell, in today's market, is going to hire an engineer with a tenuous grasp on foundational technological systems, with the hope that one day they will backfill?!
Terr_ 28 days ago
Yeah, my recollection of the past couple decades is many companies felt like: "Someone else will surely train up the junior developers, we'll just hire them away after they know what they're doing." This often went with an oddly-bewildered: "Wow, why is it so hard to find good candidates?"

I don't see how that trend would change much just because junior developers can use LLMs as a crutch. (Well, except when it helps them cheat at an interview that wasn't predictive of what the job really needed.)

tgv 29 days ago
> And then they backfill their knowledge over time.

If only. There are too many devs who've learnt to write JS or Python, and simply won't change. I've seen one case where someone ported an existing 20k-line C++ app to a browser app in the most unsuitable way with emscripten, where 1100 lines of TypeScript do a much better job.

Capricorn2481 29 days ago
> But it's not satisfying to stay at that level of ignorance very long

That's the difference. This is how you feel because you like programming to some extent. Having worked closely with them, I can tell you there are many people going into bootcamps who flat out dislike programming and just heard it pays well. Some of them get jobs, but they don't want to learn anything. They just want to do just enough not to get fired. They are not curious even about tasks they are supposed to do.

I don't think this is inherently wrong, as I don't feel like gatekeeping the profession if their bosses feel they add value. But this is a classic case of losing the junior > expert pipeline. We could easily find ourselves in a spot in 30 years where AI coding is rampant but there are no experts left who actually know what it does.

ytpete 29 days ago
There have been people entering the profession for (purported) money and not love of the craft for at least as long as the 20 years I've been in it. So long as there are also people who still genuinely enjoy it and take pride in doing the job well, then the junior->expert pipeline isn't lost.

I buy that LLMs may shift the proportion of those two camps. But doubt it will really eliminate those who genuinely love building things with code.

Capricorn2481 28 days ago
I'm not sure it takes more than a shift, though. There aren't 0 people in training to be a doctor, but we have a shortage for sure and it causes huge problems.
sevensor 28 days ago
> I think we who are already in tech have this gleeful fantasy that new tools impair newcomers in a way that will somehow serve us, the incumbents, in some way.

Well put. There’s a similar phenomenon in industrial maintenance. The “grey tsunami.” Skilled electricians, pipefitters, and technicians of all stripes are aging out of the workforce. They’re not being replaced, and instead of fixing the pipeline, many factories are going out of business, and many others are opting to replace equipment wholesale rather than attempt repairs. Everybody loses, even the equipment vendors, who in the long run have fewer customers left to sell to.

ropable 28 days ago
I very much relate to the sentiment of starting out with simple tools and then backfilling knowledge gaps as I went. For me it was Excel -> Access shared DB forms -> Django web application framework -> etc. From spreadsheets, to database design, to web application development, to scaling HTTP services, and on and on it goes.
jjav 28 days ago
> But it's not satisfying to stay at that level of ignorance very long

I agree, but have found that for a lot of people that is totally satisfying enough. Most people don't care to really understand how the code works.

jayd16 29 days ago
That's fine and all but I'm not sure the nand-gate folks are out of a job either.
MattGaiser 29 days ago
Or assuming software needs to be of a certain quality.

Software engineers 15 years ago would have thought it crazy to ship a full browser with every desktop app. That’s now routine. Wasteful? Sure. But it works. The need for low level knowledge dramatically decreased.

whynotminot 29 days ago
Isn’t this kind of thing the story of tech though?

Languages like Python and Java come around, and old-school C engineers grouse that the kids these days don’t really understand how things work, because they’re not managing memory.

Modern web-dev comes around and now the old Java hands are annoyed that these new kids are just slamming NPM packages together and polyfills everywhere and no one understands Real Software Design.

I actually sort of agree with the old C hands to some extent. I think people don’t understand how a lot of things actually work. And it also doesn’t really seem to matter 95% of the time.

HarHarVeryFunny 29 days ago
I don't think the value of senior developers is so much in knowing how more things work, but rather that they've learnt (over many projects of increasing complexity) how to design and build larger more complex systems, and this knowledge mostly isn't documented for LLMs to learn from. An LLM can do the LLM thing and copy designs it has seen, but this is cargo-cult behavior - copy the surface form of something without understanding why it was built that way, and when a different design would have been better for a myriad of reasons.

This is really an issue for all jobs, not just software development, where there is a large planning and reasoning component. Most of the artifacts available to train an LLM on are the end result of reasoning, not the reasoning process themselves (the day by day, hour by hour, diary of the thought process of someone exercising their journeyman skills). As far as software is concerned, even the end result of reasoning is going to have very limited availability when it comes to large projects since there are relatively few large projects that are open source (things like Linux, gcc, etc). Most large software projects are commercial and proprietary.

This is really one of the major weaknesses of LLM-as-AGI, or LLM-as-human-worker-replacement - their lack of ability to learn on the job and pick up a skill for themselves as opposed to needing to have been pre-trained on it (with the corresponding need for training data). In-context learning is ephemeral and anyways no substitute for weight updates where new knowledge and capabilities have been integrated with existing knowledge into a consistent whole.

shafyy 29 days ago
Just because there are these abstractions layers that happened in the past does not mean that it will continue to happen that way. For example, many no-code tools promised just that, but they never caught on.

I believe that there's a "optimal" level of abstraction, which, for the web, seems to be something like the modern web stack of HTML, JavaScript and some server-side language like Python, Ruby, Java, JavaScript.

Now, there might be tools that make a developer's life easier, like a nice IDE, debugging tools, linters, autocomplete and also LLMs to a certain degree (which, for me, still is a fancy autocomplete), but they are not abstraction layers in that sense.

neom 29 days ago
I love that you brought no-code tools into this, because I think it's interesting that they never worked correctly.

My guess is: on one side, things like squarespace and wix get super super good for building sites that don't feel like squarespace and wix, (I'm not sure I'd want to be a pure "website dev" right now - although I think squarespace squashed a lot of that long ago) - and then very very nice tooling for "real engineers" (whatever that means).

I'm pretty handy with tech, I mean last time I built anything real was the 90s, but I know how most things work pretty well. I sat down to ship an app last weekend; with no sleep and Monday rolling around, GCP was giving me errors, and I hadn't realized one of the files the LLMs wrote looked like code but was all placeholder.

I think this is basically what the anthropic report says: automation issues happen via displacement, and displacement is typically fine, except the displacement this time is happening very rapidly (I read in a different report that roughly 80 years of traditional displacement is expected to happen in ~10 years with AI).

zozbot234 29 days ago
Excel is a "no-code" system and people seem to like it. Of course, sometimes it tampers with your data in horrifying ways because something you entered (or imported into the system from elsewhere) just happened to look kinda like a date, even though it was intended to be something completely different. So there's that.
marcosdumay 29 days ago
> Excel is a "no-code" system and people seem to like it.

If you've found any Excel gurus who don't spend most of their time in VBA, you have a really unusual experience.

yellowstuff 28 days ago
I've worked in finance for 20 years and this is the complete opposite of my experience. Excel is ubiquitous and drives all sorts of business processes in various departments. I've seen people I would consider Excel gurus, in that they are able to use Excel much more productively than normal users, but I've almost never seen anyone use VBA.
woah 29 days ago
Huge numbers of accountants and lawyers use excel heavily knowing only the built in formula language. They will have a few "gurus" sprinkled around who can write macros but this is used sparingly because the macros are a black box and make it harder to audit the financial models.
helge9210 29 days ago
Excel is a programming system with pure functions, imperative code (VBA/Python recently), database (cell grid, sheets etc.) and visualization tools.

So, not really "no-code".

ozim 28 days ago
That’s technically correct but it’s also wrong.

No-code in Excel means that most functions are implemented for the user: the user doesn't have to know anything about software development to create what he needs, and doesn't need a software developer to do stuff for him.

MyOutfitIsVague 29 days ago
Excel is hardly "no-code". Any heavy use of Excel I've seen uses formulas, which are straight-up code.
sanderjd 29 days ago
But any heavy use of "no-code" apps also ends up looking this way, with "straight-up code" behind many of the wysiwyg boxes.
MyOutfitIsVague 29 days ago
Right, but "no-code" implies something: programming without code. Excel is not that in any fashion. It's either programming with code or an ordinary spreadsheet application without code. You'd really have to stretch your definitions to consider it "no-code" in a way that wouldn't apply to pretty much any office application.
rmah 28 days ago
I would disagree. Every formula you enter into a cell is "code". Moreover, more complex worksheets require VBA.
AnthonyMouse 29 days ago
> Modern web-dev comes around and now the old Java hands are annoyed that these new kids are just slamming NPM packages together and polyfills everywhere and no one understands Real Software Design.

The real issue here is that a lot of the modern tech stacks are crap, but won for other reasons, e.g. JavaScript is a terrible language but became popular because it was the only one available in browsers. Then you got a lot of people who knew JavaScript so they started putting it in places outside the browser because they didn't want to learn another language.

You get a similar story with Python. It's essentially a scripting language and poorly suited to large projects, but sometimes large projects start out as small ones, or people (especially e.g. mathematicians in machine learning) choose a language for their initial small projects and then lean on it again because it's what they know even when the project size exceeds what the language is suitable for.

To slay these beasts we need to get languages that are actually good in general but also good at the things that cause languages to become popular, e.g. to get something better than JavaScript to be able to run in browsers, and to make languages with good support for large projects to be easier to use for novices and small ones, so people don't keep starting out in a place they don't want to end up.

DrFalkyn 28 days ago
With WebAssembly you can write in any language, even C++.

Unfortunately it doesn’t expose the DOM, so you still need JavaScript

commandlinefan 29 days ago
My son is a CS major right now, and since I've been programming my whole adult life, I've been keeping an eye on his curriculum. They do still teach CS majors from the "ground up" - he took system architecture, assembly language and operating systems classes. While I kind of get the sense that most of them memorize enough to pass the tests and get their degree, I have to believe that they end up retaining some of it.
whynotminot 29 days ago
I think this is still true of a solid CS curriculum.

But it’s also true that your son will probably end up working with boot camp grads who didn’t have that education. Your son will have a deeper understanding of the world he’s operating in, but what I’m saying is that from what I’ve seen it largely hasn’t mattered all that much. The bootcampers seem to do just fine for the most part.

SoftTalker 29 days ago
Yes, they remember the concepts, mostly. Not the details. But that's often enough to help with reasoning about higher-level problems.
jbeninger 28 days ago
I always considered my education to be a "Table of Contents" listing for what I'd actually learn later
fuy 29 days ago
And also these old C hands don't seem to get paid (significantly) more than a regular web-dev who doesn't care about hardware, memory, performance etc. Go figure.
jackcosgrove 29 days ago
Pay is determined by the supply and demand for labor, which encompass many factors beyond the difficulty of the work.

Being a game developer is harder than being an enterprise web services developer. Who gets paid more, especially per hour?

bigfishrunning 27 days ago
They do where I'm from, and spend most of their time cleaning up the messes that the regular web-devs create...
bee_rider 29 days ago
The real hardcore experts should be writing libraries anyway, to fully take advantage of their expertise in a tiny niche and to amortize the cost of studying their subproblem across many projects. It has never been easier to get people to call your C library, right? As long as somebody can write the Python interface…

Numpy has delivered so many FLOPs for BLAS libraries to work on.

Does anyone really care if you call their optimized library from C or Python? It seems like a sophomoric concern.
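
To make it concrete, the shape of such a library can be tiny. A minimal sketch - the name, signature, and build line are all just for illustration:

    /* dot.c - one hot loop behind a plain C ABI, so any language's
     * FFI can call it.
     * Build (illustrative): cc -O2 -shared -fPIC -o libdot.so dot.c */
    #include <stddef.h>

    double dot(const double *a, const double *b, size_t n)
    {
        double acc = 0.0;
        for (size_t i = 0; i < n; i++)
            acc += a[i] * b[i];  /* the niche expertise hides in loops like this */
        return acc;
    }

From Python it's then a few lines of ctypes (load the .so, declare argtypes, hand over the buffers), and the caller neither knows nor cares that the hot path is C.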

rootnod3 29 days ago
I think the problem is that with the over-reliance on LLMs, that expertise of writing the foundational libraries that even other languages rely on, is going away. That is exactly the problem.
bdhcuidbebe 29 days ago
Yeah, every programmer should write at least a CPU emulator in their language of choice; it's such an undervalued exercise that will teach you so much about how stuff really works.
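
For flavor, a minimal sketch of the exercise in C - a made-up accumulator machine, where the opcodes and the little demo program are all invented for illustration:

    // Toy fetch-decode-execute loop for an invented accumulator machine.
    #include <stdio.h>
    #include <stdint.h>

    enum { OP_HALT, OP_LDI, OP_ADD, OP_DEC, OP_JNZ };  // made-up opcodes

    int main(void)
    {
        // Program: compute 5+4+3+2+1 by counting a register down to zero.
        uint8_t prog[] = {
            OP_LDI, 5,   // counter = 5
            OP_ADD,      // acc += counter      <- address 2, loop start
            OP_DEC,      // counter -= 1
            OP_JNZ, 2,   // if counter != 0, jump back to address 2
            OP_HALT,
        };
        uint8_t pc = 0;
        int acc = 0, counter = 0;

        for (;;) {
            uint8_t op = prog[pc++];                  // fetch
            switch (op) {                             // decode + execute
            case OP_LDI:  counter = prog[pc++];       break;
            case OP_ADD:  acc += counter;             break;
            case OP_DEC:  counter--;                  break;
            case OP_JNZ:  { uint8_t t = prog[pc++];
                            if (counter) pc = t; }    break;
            case OP_HALT: printf("acc = %d\n", acc);  return 0;
            }
        }
    }

Running it prints acc = 15, and that fetch/decode/execute loop is the whole lesson.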
lizknope 29 days ago
You can go to the next step. I studied computer engineering not computer science in college. We designed our own CPU and then implemented it in an FPGA.

You can go further and design it out of discrete logic gates. Then write it in Verilog. Compare the differences and which made you think more about optimizations.

chasd00 29 days ago
"in order to bake a pie you must first create the universe", at some point, reaching to lower and lower levels stops being useful.
lizknope 29 days ago
Sure.

Older people are always going to complain about younger people not learning something that they did. When I graduated in 1997 and started working, I remember some topics that were not taught, but the older engineers were shocked I didn't know them from college.

We keep creating new knowledge. It is impossible to fit everything into a 4 year curriculum without deemphasizing some other topic.

I learned Motorola 68000 assembly language in college. I talked to a recent computer science graduate and he had never seen assembly before. I also showed him how I write static HTML in vi the same way I did in 1994 for my simple web site and he laughed. He showed me the back end to their web site and how it interacts with all their databases to generate all the HTML dynamically.

bigfishrunning 27 days ago
The universe underneath the pie is mostly made up of invariant laws that must be followed.

The OS, libraries, web browser, runtime, and JavaScript framework underneath your website are absolutely riddled with bugs, and knowing how to identify and fix them makes you a better engineer. Many junior developers get hung up on the assumption that the function they're calling is perfect, and are incapable of investigating whether that's the truth.

This is true of many of the shoulders-of-giants we're standing on, including the stack beneath python, rust, whatever...

halfcat 28 days ago
In fairness, creating a universe is pretty useful.
SoftTalker 29 days ago
When I was a kid I "wrote" (mostly copied from a programming magazine) a 4-bit CPU emulator on my TI-99/4a. Simple as it was, it was the light bulb coming on for me about how CPUs actually worked. I could then understand the assembly language books that had been impenetrable to me before. In college when I first started using "C", pointers made intuitive sense. It's a very valuable exercise.
sanderjd 29 days ago
Notably, I don't think there was a mass disemployment of "old C hands". They just work on different things.
ragle 29 days ago
I wonder about this too - and also wonder what the order-of-magnitude difference is between the historical shifts you mention and the one we're seeing now (or will see soon).

Is it 10 times the "abstracting away complexity and understanding"? 100, 1000, [...]?

This seems important.

There must be some threshold beyond which (assuming most new developers are learning using these tools) fundamental ability to understand how the machine works and thus ability to "dive in and figure things out" when something goes wrong is pretty much completely lost.

TOGoS 29 days ago
> There must be some threshold beyond which (assuming most new developers are learning using these tools) fundamental ability to understand how the machine works and thus ability to "dive in and figure things out" when something goes wrong is pretty much completely lost.

For me this happened when working on some Spring Boot codebase thrown together by people who obviously had no idea what they were doing (which maybe is the point of Spring Boot; it seems to encourage slopping a bunch of annotations together in the hope that it will do something useful). I used to be able to fix things when they went wrong, but this thing is just so mysterious and broken in such ridiculous ways that I can never seem to get to the bottom of it.

SteveNuts 29 days ago
> Languages like Python and Java come around, and old-school C engineers grouse that the kids these days don’t really understand how things work

Everything has a place: you most likely wouldn't write an HPC database in Python, and you wouldn't write a simple CRUD recipe app in C.

I think the same thing applies to using LLMs: you don't use the code they generate to control a power plant or fly an airplane. You use it for building the simple CRUD recipe app where the stakes are essentially zero.

chucky_z 29 days ago
$1 for the pencil, $1000 for the line.

That’s the 5% when it does matter.

whynotminot 29 days ago
Yes this is what people like to think. It’s not really true in practice.
SJC_Hacker 29 days ago
And that last 5% is what you're paying for
whynotminot 29 days ago
But not really. Looking around my shop, I’m probably the only one around who used to write a lot of C code. No one is coming to ask me about obscure memory bugs. I’m certainly not getting paid better than my peers.

The knowledge I have is personally gratifying to me because I like having a deeper understanding of things. But I have to tell you I thought knowing more would give me a deeper advantage than it has in actual practice.

gopher_space 28 days ago
You're providing value every time you kill a bad idea "because things don't actually work that way" or shave a loop; you're just not tracking the value, and neither is your boss.

To your employer, hiring people who know things (i.e. you) has given them a deeper advantage in actual practice.

rootnod3 29 days ago
I would argue that your advantage right now is that YOU are the one person they can't replace with LLMs, because your work requires exact, fine-grained knowledge of pointers and everything around them. You might have gotten the same pay as your peers, but you also carry additional stability.
kmoser 29 days ago
Is that because the languages being used at your shop have largely abstracted away memory bug issues? If you were to get a job writing embedded systems, or compilers, or OSes, wouldn't your knowledge be more highly valued and sought after (assuming you were one of the more senior devs)?
abnercoimbre 29 days ago
If you have genuine systems programming knowledge, usually the answer is to innovate on a particular toolchain or ship your own product (I understand you may not like business stuff though.)
rcpt 29 days ago
LLMs are a much bigger jump in productivity than moving to a high level language.
coffeecat 28 days ago
Lately, I've been asking ChatGPT for answers to problems that I've gotten stuck on. I have yet to receive a correct answer from it that actually increases my productivity.
rcpt 28 days ago
I don't know what to say.

I've been able to get code working in libraries that I'm wholly unfamiliar with pretty rapidly by asking the LLM what to do.

As an example, this weekend I got a new mechanical keyboard. I like to use caps+hjkl as arrows and don't want to remap in software because I'll connect to multiple computers. Turns out there's a whole open source system for this called QMK that requires one to write C to configure the keyboard.

It's been over a decade since I touched a Makefile and I never really knew C anyway, but I was able to get the keyboard configured and also have some custom RGB lighting on it pretty easily by going back and forth with the LLM.

psytrancefan 26 days ago
It is just very random. LLMs help me write a synthesizer using an odd synth technique in an obscure musical programming language with no problem, and help me fix my broken Linux system no problem, but then they can't do anything right with the Python library Pyro. I think that's why people have such different experiences. It all depends on how what you want to solve lines up with what the models happen to be good at.
o_nate 29 days ago
At least for the type of coding I do, if someone gave me the choice between continuing to work in a modern high-level language (such as C#) without LLM assistance, or switching to a low-level language like C with LLM assistance, I know which one I would choose.
throwaway0123_5 29 days ago
Likewise, under no circumstances would I trade C for LLM-aided assembly programming. That sounds hellish. Of course it could (probably will?) be the case that this may change at some point. Innovations in higher-level languages aren't returning productivity improvements at anywhere close to the same rate as LLMs are, and in any case LLMs probably benefit from improvements to higher-level languages as well.
deadlast2 29 days ago
Programming is an interface to the machine. AI, even what we know now (LLMs, agents, RAG), will absorb all of that. It has many flaws but is still much better than most programmers.

All future programmers will be using it.

As for the programmers who don't want to use it: I think there will be literally billions of lines of unbelievably bad code generated by these generation 1-100 AIs and junior programmers, code that will need to be corrected and fixed.

bigfishrunning 27 days ago
> It has many flaws but is still much better than most programmers.

This says more about most programmers than about any given LLM.

weatherlite 29 days ago
There's no need for tens of millions of OS kernel devs; most of us are writing business-logic CRUD apps.

Also, it's not entirely clear to me why LLMs should get extremely good in web app development but not OS development, as far as I can see it's the amount and quality of training data that counts.

wesselbindt 29 days ago
> as far as I can see it's the amount and quality of training data that counts

Well, there's your reason. OS code is not as in-demand or prevalent as CRUD web app code, so there's less relevant data to train your models on.

woah 29 days ago
The OS code that exists is much higher quality, so the signal-to-noise ratio is much better.
wesselbindt 29 days ago
I think arguably there's still a quantity issue, but I'm no expert on LLMs. Plus I hear the Windows source code is a bit of a nightmare. But for every Windows there's a TempleOS, I suppose.
trod1234 28 days ago
It is far more likely that everything, not just IT, collapses before we make it to the point you mention.

LLMs replace entry-level people who invested in education. Those people would have the beginning knowledge, but there's no way for them to get better, because the opportunities are non-existent: those positions were the ones replaced. It's a sequential pipeline failure of talent development. In the meantime, the mid- and senior-level people cannot pass their knowledge on; they age out, and die.

What happens when you hit a criticality point where production, which is dependent on these systems, can no longer continue?

The knowledge implicit in production is lost, the economic incentives have been poisoned, and the distribution systems are destroyed.

How do you bootstrap recovery for something that effectively took several centuries to build in the first place, and do it not in centuries but in weeks/months?

If this isn't sufficient to explain the core of the issue, check out the Atari/Nintendo crash, which isn't nearly as large as this but goes into the dangers of destroying your distributor networks.

If you pay attention to the details, you'll see Atari's crash was fueled by debt financing, and in the process they destroyed their distributor networks with catastrophic losses. After that crash, Nintendo couldn't get shelf space; no distributor would risk the loss without a guarantee. They couldn't advertise it as video games. They had to trojan-horse the perception of what they were selling, and guarantee it. There is a documentary on Amazon which covers this, Playing with Power. Check it out.

brightball 29 days ago
One of my first bosses was a big Perl guy. I checked on what he was doing 15 years later and he was one of 3 people at Windstream handling backbone packet management rules.

You just don't run into many people comfortable with that technology anymore. It's one of the big reasons I go out of my way to recruit talks on "old" languages for the Carolina Code Conference every year.

SoftTalker 29 days ago
We've been in this world for decades.

Most developers couldn't write an operating system to save their life. Most could not write more than a simple SQL query. They sling code in some opinionated dev stack that abstracts the database and don't think too hard about the low-level details.

aussieguy1234 28 days ago
They'll probably go a step further and use an ORM instead of writing queries.

Since ORMs generally write crap unoptimized SQL for all but the simplest of queries, this will lead to performance issues once things scale up.
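
To make that concrete, here is a sketch of the classic N+1 pattern, written against plain sqlite3 so it runs as-is (the tables and data are made up). Most ORMs can emit the join too (e.g. Django's select_related), but only if you know to ask:

  import sqlite3

  db = sqlite3.connect(":memory:")
  db.executescript("""
      CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
      CREATE TABLE post (id INTEGER PRIMARY KEY, title TEXT,
                         author_id INTEGER REFERENCES author(id));
      INSERT INTO author VALUES (1, 'alice'), (2, 'bob');
      INSERT INTO post VALUES (1, 'a', 1), (2, 'b', 1), (3, 'c', 2);
  """)

  # What a naive ORM loop does under the hood: one query for the posts,
  # then one more per post to fetch each author -- the classic N+1.
  for post_id, title, author_id in db.execute("SELECT * FROM post"):
      (name,) = db.execute(
          "SELECT name FROM author WHERE id = ?", (author_id,)
      ).fetchone()
      print(title, name)

  # What a hand-written query (or an ORM told to join) does instead:
  for title, name in db.execute(
      "SELECT post.title, author.name FROM post "
      "JOIN author ON author.id = post.author_id"
  ):
      print(title, name)

At 3 posts nobody notices; at 30,000 the first loop issues 30,001 queries.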

eitally 29 days ago
I agree. It's the current generation's version of what happened with the advent of Javascript frameworks about 15 years ago, when suddenly web devs stopped learning how computers actually work. There will always be high demand for software engineers who actually know what they're doing, can debug complex code bases, and can make appropriate decisions about how to apply technology to business problems.

That said, AI agents are absolutely going to put a bunch of lower end devs out of work in the near term. I wouldn't want to be entering the job market in the next couple of years....

mixmastamyk 29 days ago
> There will always be high demand for software engineers who actually know what they're doing

Unfortunately they won't be found, due to horrible tech interviews focused on "culture" (*-isms), leetcode under the gun, or a resume thrown in the trash at first sight for lack of a full degree. AMHIK.

chasd00 29 days ago
> I wouldn't want to be entering the job market in the next couple of years....

I bet there's a software dev employment boom about 5 years away, once it becomes obvious that competent people are needed to unwind and rework all the LLM-generated code.

atlintots 29 days ago
Except juniors are not going to be the competent people you're looking for to unwind those systems. Personally, no matter how it plays out, I feel like the entry-level market in this field is going to take a hit. It will become much more difficult and competitive.
InDubioProRubio 28 days ago
The "prompt" engineering is also going to create a ton of cargocult tips and tricks- endless shell scripts, that do nothing but look spectacular, with one or two important commands at the end. Fractal classes, that nobody knows why they exist. Endless boilerplate.

And the AI will be trained on this, and thus cluelessness reinforced and baked in. Omnissiah, hear our prayers in the terminal, for we are but -h-less man (bashes keyboard with a ritual wrench).

arrowsmith 29 days ago
> I think that LLMs are only going to make people with real tech/programming skills much more in demand, as younger programmers skip straight into prompt engineering and never develop themselves technically beyond the bare minimum needed to glue things together.

This is exactly what the article says in point 3.

devoutsalsa 29 days ago
I hired a junior developer for a couple months and was incredibly impressed with what he was able to accomplish with a paid ChatGPT subscription on a greenfield project for me. He'd definitely struggle with a mature code base, but you have to start somewhere!
bdhcuidbebe 29 days ago
Yeah, I agree fully.

Real programming of course won't go away. But in the public eye it lost its mystique, as seemingly anyone can code now. Of course that isn't true, and no one has managed to create anything of substance by prompting alone.

weatherlite 29 days ago
How do we define real programming? I'm working on Python and JS codebases in my startup, so very high-level stuff. However, to reason well about everything that goes on in our code is no small feat for an LLM (or a human). If it's able to take our requirements, understand the business logic, and just start refactoring and creating new features on a codebase that is quite big, well yeah, that sounds like AGI to me. In that case I don't see why it won't be able to hack on kernels.
skydhash 29 days ago
The fact that you don't see why is the issue. Both Python and JS are very permissive and their runtime environments are very good. More often than not, you're just dealing with logic bugs and malformed domain data. A kernel codebase like Linux is one where many motivated individuals are trying every trick to get the computer to do something. And you're usually dealing with leaner abstractions, because general safety logic is not performant enough. It's a bit like the difference between a children's playground and a construction site.
weatherlite 29 days ago
> More often than not, you're just dealing with logic bugs

Definitely. More often than not you're dealing with logic bugs. So the thing solving them will sometimes have to be able to reason quite well across large code bases (not every bug, of course, but quite often), to the point that I don't really see how it's different from general intelligence if it can do that well. And if it gets to the point that it's AGI-ish, I don't see why it can't do kernel work (or at the very least dramatically reduce the number of jobs in that space as well). Perhaps you can automate the 50% of the job where we're not really thinking at all as programmers, but the other 50% (or less, or more, debatable) involves planning, reasoning, debugging, thinking. Even if all you do is Python and JS.

skydhash 29 days ago
> So the thing solving them will sometimes have to be able to reason quite well across large code bases

The codebase only describes what the software can do currently, never the why. And you can't reason without both. The why is the primary vector of changes, which may completely redefine the what. And even the what has many possible interpretations. The code is only one specific how. Going from the why, to the what, to a specific how is the core tenet of programming. Then you add concerns like performance, reliability, maintainability, security...

Once you have a mature codebase, outside of refactoring and new features, you mostly have to edit a few lines for each task. Finding the lines to work on requires careful investigation, and you need to test carefully after that to ensure that no other operations have been affected. We already have good deterministic tools to help with that.

throwaway0123_5 29 days ago
I agree with this. An AI that can fully handle web dev is clearly AGI. Maybe the first AGI can't fully handle OS kernel development, just as many humans can't. But if/once AGI is achieved it seems highly unlikely to me that it will stay at the "can do web dev but can't do OS kernel dev" level for very long.
rapind 29 days ago
> Somebody needs to write that operating system the LLM runs on. Or your bank's backend system that securely stores your money. Or the mission critical systems powering this airplane you're flying next week... to pretend like this will all be handled by LLMs is so insanely out of touch with reality.

When they do this, I really want to know they did this. Like an organic food label. Right now AI is this buzzword that companies self-label with for marketing, but when that changes, I still want to see who's using AI to handle my data.

ljm 29 days ago
The enshittification will come for the software engineers themselves eventually, because so many businesses out there only have their shareholders in mind and not their customers, and if a broken product or a promise of a product is enough to boost the stock then why bother investing in the talent to build it properly?

Look at Google and Facebook - absolute shithouse services now that completely fail to meet the basic functionality they had ~20 years ago. Google still rakes in billions rendering ads in the style of a search engine and Facebook the same for rendering ads in the format of a social news feed. Why even bother with engineering anything except total slop?

yodsanklai 28 days ago
> as younger programmers skip straight into prompt engineering and never develop themselves technically beyond the bare minimum needed to glue things together.

I'm not worried about that at all. Many young people are passionate, eager to learn and build things. They won't suddenly become dumb and lazy because they have this extra tool available to them. I think it's the opposite: they'll be better than their seniors because they'll have AI helping them improve and learn faster.

AgentK20 28 days ago
Have you _seen_ the tragedy that is occurring in primary and secondary education right now with students using LLMs for the bulk of their coursework? Humans, and most forms of life in general, are lazy. They take the lowest energy route to a solution that works, whether that solution is food, shelter, or the answer to a homework question or problem at work. To some degree, this is good: An experienced <animal/student/engineer> has well-defined mental pathways towards getting what they need in as little time/energy as possible. I myself have dozens of things that I don't remember offhand, but that I remember a particular google query will get me to what I need (chmod args being the one that comes to mind). This leaves mental resources available for more important or difficult-to-acquire knowledge, like the subtle nuances of a complex system or cat pictures.

The problem is a lack of balance, and in some instances skipping the entirety of critical reasoning. Why go through the effort of working your way through a problem when you would rather be doing <literally anything else> with your time? Iterate on this to the extreme, with what feels like a magic bullet that can solve anything, and your skills *will* atrophy.

Of course there are exceptions to this trend. Star pupils exist in any generation who will go out of their way to discover answers to questions they have, re-derive understanding of things just for the sake of it, and apply their passions towards solving problems they decide are worth solving. The issue is the _average_ person, given an _average_ (e.g. if in America, under-funded) education, with an _average_ mentor, will likely choose the path of least resistance.

AlexCoventry 29 days ago
> people with real tech/programming skills much more in demand, as younger programmers skip straight into prompt engineering

Perhaps Python will become the new assembly code. :-)

hintymad 29 days ago
Would technical depth change the fundamental supply and demand, though? If we view AI as a powerful automation tool, it's possible that overall demand will be lowered so much that demand for deep technical expertise goes down as well. Take the EE industry, for instance: the technical expertise required to get things done is vast and deep, yet demand has not been as strong as in the software industry.
foota 29 days ago
I think I've seen the comparison with respect to training data, but it's interesting to think of the presence of LLMs as a sort of barrier to developing skills akin to pre-WW2 low background radiation steel (which, fun fact, isn't actually that relevant anymore, since background radiation levels have dropped significantly since the partial end of nuclear testing)
threetonesun 29 days ago
It’s the “CTO’s nephew” trope but at 100x the cost.
justanotherunit 26 days ago
This is so on point IMO. I feel like there is no better time than now to learn more low-level languages, since the hype will in the end revolve around insanely technical people carrying all the major weight.
dartos 29 days ago
I don’t think “prompt engineering” will remain its own field.

It’s just modern SEO and SEO will eat it, eventually.

Prompt engineering as a service makes more sense than having on-staff people anyway, since your prompt’s effectiveness can change from model to model.

Have someone else deal with platform inconsistencies, like always.

x86hacker1010 29 days ago
You say this like newcomers can't use the LLM to more deeply understand these topics, in addition to using it as glue. This mindset is a fallacy: newcomers are as adept and passionate as any other generation. They have better tools and can compete just the same.
shortrounddev2 29 days ago
> younger programmers skip straight into prompt engineering and never develop themselves technically beyond the bare minimum needed to glue things together

This was true before LLMs though. A lot of people just glue javascript libraries together

aussieguy1234 28 days ago
I'm aware of a designer, no coding skills, who is going to turn his startup idea into an MVP using LLMs. If that ever got serious, they would need an actual engineer to maintain and improve things.
jerpint 28 days ago
I think a good analogy is being able to drive a car vs understanding the engine of a car.

Both are useful, but you wouldn't hire a mechanic to drive you around.

bigfishrunning 27 days ago
If cars were as unreliable as most software, you'd need a mechanic to drive you around.
aucisson_masque 28 days ago
Governor Gavin Newsom proposed using AI agents to manage the California budget.

Using AI to write critical code doesn't seem far-fetched to me.

Doing it right now would be suicidal, of course, but in a few years when AI is even better? It sure is coming.

We're already speaking seriously about AI surgeons; we're already using AI to do radiologists' jobs and finding it more reliable.

Some jobs are really at risk in the near future, that's obvious.

I'm no developer, so I've got no bias towards it.

efitz 29 days ago
On a recent AllIn podcast[1], there was a fascinating discussion between Aaron Levie and Chamath Palihapitiya about how LLMs will (or will not) supplant software developers, which industries, total addressable markets (TAMs), and current obstacles preventing tech CEOs from firing all the developers right now. It seemed pretty obvious to me that Chamath was looking forward to breaking his dependence on software developers, and predicts AI will lead to a 90% reduction in the market for software-as-a-service (and the related jobs).

Regardless of point of view, it was an eye opening discussion to hear a business leader discussing this so frankly, but I guess not so surprising since most of his income these days is from VC investments.

[1] https://youtu.be/hY_glSDyGUU?t=4333

KerrAvon 29 days ago
yup. the good news is this should make interviewing easier; bad news is there'll be fewer qualified candidates.

the other thing, though, is that you and I know that LLMs can't write or debug operating systems, but the people who pay us and see LLMs writing prose and songs? hmm

benatkin 28 days ago
When I see people trying to define which programmers will enjoy continued success as AI continues to improve, I often see No True Scotsman used.

I wish more would try to describe what the differentiating skills and circumstances are instead of just saying that real programmers should still be in demand.

I think maybe raw talent is more important than how much you "genuinely love coding" (https://x.com/AdamRackis/status/1888965636833083416) or how much of a real programmer you are. This essay captures raw talent pretty well IMO: https://www.joelonsoftware.com/2005/07/25/hitting-the-high-n...

zahlman 28 days ago
>I think that LLMs are only going to make people with real tech/programming skills much more in demand, as younger programmers skip straight into prompt engineering and never develop themselves technically beyond the bare minimum needed to glue things together.

My experience with Stack Overflow, the Python forums, etc. etc. suggests that we've been there for a year or so already.

On the one hand, it's revolutionary that it works at all (and I have to admit it works better than "at all").

But when it doesn't work, a significant fraction of those users will try to get experienced humans to fix the problem for them, for free - while also deluding themselves that they're "learning programming" through this exercise.

tharne 29 days ago
> Or the mission critical systems powering this airplane you're flying next week... to pretend like this will all be handled by LLMs is so insanely out of touch with reality.

Found the guy who's never worked for a large publicly-traded company :) Do you know what's out of touch with reality? Thinking that $BIG_CORP execs who are compensated based on the last three months of stock performance will do anything other than take shortcuts and cut corners given the chance.

wesselbindt 29 days ago
> Or the mission critical systems powering this airplane you're flying next week... to pretend like this will all be handled by LLMs is so insanely out of touch with reality.

Airplane manufacturers have proved themselves more than willing to sacrifice safety for profits. What makes you think they would stop short of using LLMs?

lolinder 29 days ago
Part of the problem is that many working developers are still in companies that don't allow experimentation with the bleeding edge of AI on their code base, so their experiences come from headlines and from playing around on personal projects.

And on the first 10,000 lines of code, the best in class tools are actually pretty good. Since they can help define the structure of the code, it ends up shaped in a way that works well for the models, and it still basically all fits in the useful context window.

What developers who can't use it on large warty codebases don't see is how poorly even the best tools do on the kinds of projects that software engineers typically work on for pay. So they're faced with headlines that oversell AI capabilities and positive experiences with their own small projects and they buy the hype.

Jcampuzano2 29 days ago
My company allowed us to use it but most developers around me didn't reach out to the correct people to be able to use it.

Yes I find it incredibly helpful and try to tell them.

But it's only helpful in small contexts, auto completing things, small snippets, generating small functions.

As for the large-scale changes most of these AI companies claim it's capable of, it just falls straight on its face. I've tried many times, and with every new model. It can't do them well enough to trust in any codebase that's bigger than a few tens of thousands of lines of code.

nyarlathotep_ 29 days ago
I've found it very easy to end up "generating" yourself into a corner: a total mess with no clear control flow, more convoluted than it needs to be by a mile.

If you're in mostly (or totally) unfamiliar territory, you can end up in a mess, fast.

I was playing around with writing a dead-simple websocket server in go the other evening and it generated some monstrosity with multiple channels (some unused?) and a tangle of goroutines etc.

Quite literally copying the example from Gorilla's source tree and making small changes would have gotten me 90% of the way there; instead I ended up with a mostly opaque pile of code that *looks good* from a distance, but is barely functional.

(This wasn't a serious exercise, I just wanted to see how "far" I could get with Copilot and minimal intervention)

Jcampuzano2 29 days ago
Yeah, I've found it's good for getting something basic started from scratch, but oftentimes if I try to iterate, it starts hallucinating very fast and forgetting what it was even doing after a short while.

Newer models have gotten better at this, and it takes longer before they start turning things into gibberish, but all of them have their limit.

And given the size of lots of enterprise codebases like the ones I'm working in, it's just too far away from being useful enough to replace many programmers, in my opinion. I'm convinced the CEOs who are saying AI is replacing programmers are just using it as an excuse to downsize while keeping investors happy.

fesoliveira 28 days ago
That is also my experience. I use ChatGPT to help me iterate on a Godot game project, and it does not take more than a handful of prompts for it to forget or hallucinate about something we previously established. I need to constantly remind it about code it suggested a while ago or things I asked for in the past, or it completely ignores the context and focuses just on the latest ask.

It is incredibly powerful for getting things started, but as soon as you have a sketch of a complex system going, it loses its grasp on the full picture and does not account for the states outside the small asks you make. This is even more evident when you need to correct it about something or request a change after a large prompt. It just throws all the other stuff out the window and hyperfocuses on that one piece of code that needs changing.

This has been the case since GPT-3, and even their most recent model (forgot the name, the reasoning one) has this issue.

ragle 29 days ago
In a similar situation at my workplace.

What models are you using that you feel comfortable trusting to understand and operate on 10-20k LOC?

Using the latest and greatest from OpenAI, I've seen output become unreliable with as little as ~300 LOC on a pretty simple personal project. It will drop features as new ones are added, make obvious mistakes, refuse to follow instructions no matter how many different ways I try to tell it to fix a bug, etc.

Tried taking those 300 LOC (generated by o3-mini-high) to cursor and didn't fare much better with the variety of models it offers.

I haven't tried OpenAI's APIs yet - I think I read that they accommodate quite a bit more context than the web interface.

I do find OpenAI's web-based offerings extremely useful for generating short 50-200 LOC support scripts, generating boilerplate, creating short single-purpose functions, etc.

Anything beyond this just hasn't worked all that well for me. Maybe I just need better or different tools though?

Jcampuzano2 29 days ago
I usually use Claude 3.5 Sonnet, since it's still the one I've had my best luck with for coding tasks.

When it comes to 10k-LOC codebases, I still don't really trust it with anything. My best luck has been small personal projects, where I can sort of trust it to make larger-scale changes, though "larger scale" there is still small in absolute terms.

I've found it best for generating tests and autocompletion; especially if you give it context via function names and parameter names, I find it can oftentimes complete a whole function I was about to write, using the interfaces available to it in files I've visited recently.

But besides that, I don't really use it for much outside of starting a new feature from scratch, or getting a plan together before I start working on something I may be unfamiliar with.

We have access to all models available through Copilot, including o3 and o1, plus ChatGPT Enterprise, and I do find using it via the chat interface nice just for architecting and planning. But I usually do the actual coding with help from autocompletion, since it honestly takes longer to wrangle it into doing the correct thing than to do it myself with a little bit of its help.

ragle 29 days ago
This makes sense. I've mostly been successful doing these sorts of things as well and really appreciate the way it saves me some typing (even in cases where I only keep 40-80% of what it writes, this is still a huge savings).

It's when I try to give it a clear, logical specification for a full feature and expect it to write everything that's required to deliver that feature (or the entirety of a slightly-more-than-trivial personal project) that it falls over.

I've experimented trying to get it to do this (for features or personal projects that require maybe 200-400 LOC) mostly just to see what the limitations of the tool are.

Interestingly, I hit a wall with GPT-4 on a ~300 LOC personal project that o3-mini-high was able to overcome. So, as you'd expect - the models are getting better. Pushing my use case only a little bit further with a few more enhancements, however, o3-mini-high similarly fell over in precisely the same ways as GPT-4, only a bit worse in the volume and severity of errors.

The improvement between GPT-4 and o3-mini-high felt nominally incremental (which I guess is what they're claiming it offers).

Just to say: having seen similar small bumps in capability over the last few years of model releases, I tend to agree with other posters that it feels like we'll need something revolutionary to deliver on a lot of the hype being sold at the moment. I don't think current LLM models / approaches are going to cut it.

menaerus 29 days ago
Did you have to do any preparation steps before you asked the model to do the large-scale change, or were there no steps involved? For example, did you simply ask for the change, or did you give the model a chance to learn about the codebase? I am genuinely asking; I'm curious because I haven't had a chance to use those models at work.
Jcampuzano2 29 days ago
There are simply no models that can keep in context the amount of info required in enterprise codebases before starting to forget or hallucinate.

I've tried to give it relevant context myself (a tedious task in itself, to be honest), and even tools that claim to be able to do so automatically fail wonderfully at anything bigger than toy-project size, in my experience.

The codebase I'm working on day to day at this moment is give or take around 800,000 lines of code and this isn't even close to our largest codebase since its just one client app for our monolith.

Even trivial changes require touching many files. It would honestly take any average programmer less time to implement something themselves than trying to convince an LLM to complete it.

menaerus 29 days ago
The largest context that I'm aware of an open-source model (e.g. Qwen) managing is 1M tokens. This should translate to ~30kLoC. I'd envision that this could in theory work even on large codebases. It certainly depends on the change to be done, but I can imagine that ~30kLoC of context is large enough for most module-specific changes. Possibly the models that you're using have a much smaller context window?
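
(Back-of-envelope for that figure, assuming roughly 30 tokens per line of code; real averages vary a lot by language and style:)

  tokens_per_line = 30          # assumption; varies by language/style
  context_tokens = 1_000_000
  print(context_tokens // tokens_per_line)  # -> 33333, i.e. ~30kLoC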

Then again, and I am repeating myself from other comments I made here in the topic, there's also Devon, which pre-processes the codebase before you can do anything else. That kinda makes me wonder if the current limitations that people observe in using those tools are really representative of the current state of the art.

Jcampuzano2 29 days ago
If you don't mind me asking, what size of codebases do you typically work on? As mentioned I've tried using all the available commercial models and none work better than as a helpful autocomplete, test, and utility function generator. I'm sure maybe big players like Meta, OpenAI, MS, etc do have the capability of expanding its context for their own internal projects and training specifically on their code, but most of the rest of us can't feasibly do that since we don't own our own AI moat.

Even on my personal projects and smaller internal utility tools, I sometimes struggle to get them to build anything significant. I'm not saying it's impossible, but I always find it best at starting things from scratch and at small tools. Maybe that's just a sign that AI would be best for microservices.

I've never used Devon so I can't speak to it, but I do recall seeing it was also overhyped at best and struggled to do anything it was purported to be able to in demos. Not saying that this is still true.

I would be interested in seeing how Devon performs on a large open source project in real-time (since if I recall their demos were not real-time demonstrations) for instance just to evaluate its capabilities.

menaerus 29 days ago
Several million lines of code. Can't remember any project that I was involved with that was less than 5MLoC. C++ system-level programming.

Overhyped or not, Devon is using something else under the hood, since it is pre-processing your whole codebase. It's not "realtime" since it simulates the CoT, meaning that it "works" on the patch the very same way a developer would, and therefore it will give you a resulting PR in a few hours, AFAIR. I agree that a workable example on a more complex codebase would be more interesting.

> I've tried using all the available commercial models and none work better than as a helpful autocomplete, test, and utility function generator

That's why I mentioned Qwen: I think commercial AI models do not have such a large context window size. Perhaps the experience would therefore have been different.

Jcampuzano2 29 days ago
And have you had luck with models like the one you mentioned, and with Devon, generating significant amounts of code in these codebases? I would love to have this, given the productivity gains it should allow, but I've just never been able to reproduce what the big AI coding services claim to be able to do at a large scale.

What they already do is a decent productivity boost but not nearly as much as they claim to be capable of.

menaerus 28 days ago
As I already said in my first comment, I haven't used those models and any of them would have been forbidden at my work.

My point was rather that you might be observing suboptimal results only because you haven't used the models which are more fit, at least hypothetically, for your use case.

vunderba 29 days ago
I've heard pretty mixed opinions about the touted capabilities of Devon.

https://www.itpro.com/software/development/the-worlds-first-...

menaerus 28 days ago
That's good news for us I suppose.
raducu 29 days ago
> For example, did you simply ask for the change or did you give a model a chance to learn about the codebase.

I've tried it both with personal projects and work.

My personal project/benchmark is a 3D snake game. O3 is by far the best, but even with a couple of hundred lines of code it wrote itself, it loses coherence and can't produce changes that involve changing 2 lines of code in a span of 50 lines. It either cannot comprehend that it needs to touch multiple places or re-writes huge chunks of code and breaks other functionality.

At work, it's fine for writing unit tests on straightforward tasks that it has most likely seen examples of before. On domain-specific tasks it's not so good, and those tasks usually involve multiple file edits in multiple modules.

The denser the logic, the smaller the context where LLMs seem to be coherent. And that's funny, because LLMs seem to deal much better with changing code humans wrote than the code the LLMs wrote themselves.

Which makes me wonder -- if we're all replaced by AI, who will write the frameworks and programming languages themselves?

menaerus 29 days ago
Thanks but IIUC you're describing a situation where you're simply using a model without giving it a chance to learn from the whole codebase? If so, then I was asking for the opposite where you would ingest the whole codebase and then let the model spit out the code. This in theory should enable the AI model to build a model of your code.

> if we're all replaced by AI, who will write the frameworks and programming languages themselves?

What for? There are enough programming languages and enough frameworks. How about using an AI model to maintain and develop existing complex codebases? IMHO if AI models become more sophisticated and are able to solve this, then the answer to who will be doing it is pretty clear.

lolinder 29 days ago
Not OP, but I've had the same experience, and that's with tools that purport to handle the context for you.

And frankly, if you can't automate context, then you don't have an AI tool that can realistically replace a programmer. If I have to manually select which of my 10000 files are relevant to a given query, then I still need to be in the loop and will also likely end up doing almost as much work as I would have to just write the code.

menaerus 29 days ago
I see that you deleted your previous response which was unnecessarily snarky while my question was genuine and simple I suppose.

> And frankly, if you can't automate context,

How about ingesting the whole codebase into the model? I have seen that this is possible with at least one such tool (Devon), which I believe is using a GPT model underneath, meaning that other providers could automate this step too. I am curious if that would help in generating more legit large-scale changes.

lolinder 29 days ago
> I see that you deleted your previous response which was unnecessarily snarky while my question was genuine and simple I suppose.

You edited your comment to clarify that you were asking from a place of ignorance as to the tools. Your original comment read as snarky and I responded accordingly, deleting it when I realized that you had changed yours. :)

> How about ingesting the whole codebase into the model? I have seen that this is possible with at least one such tool (Devon), which I believe is using a GPT model underneath, meaning that other providers could automate this step too. I am curious if that would help in generating more legit large-scale changes.

It doesn't work. Even the models that claim to have really large context windows get very distracted if you don't selectively pick relevant context. That's why I always talk about useful context window instead of just plain context window—the useful context window is much lower and how much you have depends on the type of text you're feeding it.

menaerus 29 days ago
I don't think my comment read as snarky but I was surprised to see the immediate downvote which presumably came from you so I only added the last sentence. This is a stupid way of disagreeing and attempting to shut down the discussion without merit.

> It doesn't work. Even the models that claim to have really large context windows get very distracted if you don't selectively pick relevant context.

I thought Devon is able to pre-process the whole codebase, which can take up to a single day for larger codebases, so it must be doing something, e.g. indexing the code? If so, this isn't a context-window-specific thing; it's something else, and it makes me wonder how that works.

lolinder 29 days ago
> I don't think my comment read as snarky but I was surprised to see the immediate downvote which presumably came from you so I only added the last sentence.

I can't downvote you because you are downthread of me. HN shadow-disables downvotes on all child and grandchild comments.

I'm the one who upvoted you to counteract the downvote. :)

whamlastxmas 29 days ago
And then they hugged and became lifelong friends :)
menaerus 29 days ago
You never know - one moment arguing on HN, and the next moment, you know, drinking at the bar lamenting how AI is gonna replace us :)
menaerus 29 days ago
Ok, sorry about that.
vunderba 29 days ago
> How about ingesting the whole codebase into the model?

You keep referring to this vague idea of "ingesting the whole codebase". What does this even mean? Are you talking about building a codebase-specific RAG, fine-tuning a model, injecting the entire codebase into the system context, etc.?

menaerus 28 days ago
It is vague because the implementation details you are asking me for are closed source, for obvious reasons. I can only guess what it does, but that's beside the point. The point is rather that Devon, or the 1M-context-window Qwen model, might be better or more resilient to the "lack of context" than what the others were suggesting.
throwaway0123_5 29 days ago
Some codebases grown with AI assistance must be getting pretty large now, I think an interesting metric to track would be percent of code that is AI generated over time. Still isn't a perfect proxy for how much work the AI is replacing though, because of course it isn't the case that all lines of code would take the same amount of time to write by hand.
lolinder 29 days ago
Yeah, that would be very helpful to track. Anecdotally, I have found in my own projects that the larger they get the less I can lean on agent/chat models to generate new code that works (without needing enough tweaks that I may as well have just written it myself). Having been written with models does seem to help, but it doesn't get over the problem that eventually you run out of useful context window.

What I have seen is that autocomplete scales fine (and Cursor's autocomplete is amazing), but autocomplete supplements a software engineer, it doesn't replace them. So right now I can see a world where one engineer can do a lot more than before, but it's not clear that that will actually reduce engineering jobs in the long term as opposed to just creating a teller effect.

ryandrake 29 days ago
It might not just be helpful but required one day. Depending on how the legality around AI-generated code plays out, it's not out of the question that companies using it will have to keep track of and check the provenance and history of their code, like many companies already do for any open source code that may leak into their project. My company has an "open source review" process to help ensure that developers aren't copy-pasting GPL'ed code or including copyleft libraries into our non-GPL licensed products. Perhaps one day it will be common to do an "AI audit" to ensure all code written complied with whatever the future regulatory landscape shapes up to be.
WillPostForFood 29 days ago
> the kinds of projects that software engineers typically work on for pay

This assumes a typical project is fairly big and complex. Maybe I'm biased the other way, but I'd guess 90% of software engineers are writing boilerplate code today that could be greatly assisted by LLM tools. E.g., PHP is still one of the top languages, which means a lot of basic WordPress stuff that LLMs are great at.

lolinder 29 days ago
The question isn't whether the code is complex algorithmically, the question is whether the code is:

* Too large to fit in the useful context window of the model,

* Filled with a bunch of warts and landmines, and

* Connected to external systems that are not self-documenting in the code.

Most stuff that most of us are working on meets all three of these criteria. Even microservices don't help, if anything they make things worse by pulling the necessary context outside of the code altogether.

And note that I'm not saying that the tools aren't useful, I'm saying that they're nowhere near good enough to be threatening to anyone's job.

RivieraKid 29 days ago
I'm surprised to see a huge disconnect between how I perceive things and the vast majority of comments here.

AI is obviously not good enough to replace programmers today. But I'm worried that it will get much better at real-world programming tasks within years or months. If you follow AI closely, how can you be dismissive of this threat? OpenAI will probably release a reasoning-based software engineering agent this year.

We have a system that performs comparably to top humans at competitive programming. This wasn't true 1 year ago. Who knows what will happen in 1 year.

cejast 29 days ago
Nobody can tell you whether progress will continue at current, faster or slower rates - humans have a pretty terrible track record at extrapolating current events into the future. It's like how movies in the 80's made predictions about where we'll be in 30 years time. Back to the Future promised me hoverboards in 2015 - I'm still waiting!
tmnvdb 29 days ago
Compute power increases and algorithmic efficiency improvements have been rapid and regular. I'm not sure why you thought that Back to the Future was a documentary film.
cejast 29 days ago
Unless you have a crystal ball there is nothing that can give you certainty that will continue at the same or better rate. I’m not sure why you took the second half of the comment more seriously than the first.
tmnvdb 29 days ago
Nobody has certainty about the future. We can only look at what seems most likely given the data.
layer8 29 days ago
When I see stuff like https://news.ycombinator.com/item?id=42994610 (continued in https://news.ycombinator.com/item?id=42996895), I think the field still has fundamental hurdles to overcome.
tmnvdb 29 days ago
Why do you think this is a fundamental hurdle, rather than just one more problem that can be solved? I don't have strong evidence either way, but I've seen a lot of 'fundamental insurmountable problems' fall by the wayside over the past few years. So I'm not sure we can be that confident that a problem like this, for which we have very good classic algorithms, is a fundamental issue.
lordswork 29 days ago
This kind of error doesn't really matter in programming where the output can be verified with a feedback loop.
layer8 29 days ago
This is not about the numerical result, but about the way it reasons. Testing is a sanity check, not a substitute for reasoning about program correctness.
johnnyanmac 29 days ago
It's the opposite. I don't think it'll legitimately replace programmers within a decade. I DO think that companies will try it a lot in the coming months and years anyway, and that programmers will be the only ones suffering the consequences of such actions.
tmnvdb 29 days ago
People somehow have expectations that are both too high and too low at the same time. They expect (demand) that current language models completely replace a human engineer in any field without making mistakes (this is obviously way too optimistic), while at the same time they ignore how rapid the progress has been and how much these models can now do that seemed impossible just 2 years ago, delivering huge value when used well, and they assume no further progress (this seems too pessimistic, even if progress is not guaranteed to continue at the same rate).
mirsadm 28 days ago
ChatGPT 4 was released 2 years ago. Personally I don't think things have moved on significantly since then.
tmnvdb 28 days ago
Really now. I think that deserves a bit more explanation, given that the cost per token has dropped by several orders of magnitude, we have seen large changes on all benchmarks (including entirely new capabilities), multimodality has been a fact since 4o, and test-time compute with reasoning models has been making big strides since o1... It seems on the surface a lot is happening. In fact, I wanted to share one of the benchmark overviews, but none include ChatGPT 4 anymore, since it is no longer competitive.
morsecodist 28 days ago
Benchmarks are meaningless in and of themselves, they are supposed to be a proxy for usefulness. I have used Sonnet 3.5, ChatGPT-3, ChatGPT-3.5, ChatGPT-4, ChatGPT-4o, o1, o3-mini, o3-mini-high nearly daily for software development. I am not saying AI isn't cool or useful but I am experiencing diminishing returns in model quality (I do appreciate the cost reductions). The sorts of things I can have AI do really haven't changed that much since I got access to my first model. The delta between having no LLM to an LLM feels an order of magnitude bigger at least than the delta between the first LLM and now.
chrz 28 days ago
It's bigger, shinier, faster, but still doesn't fly.
yojat661 28 days ago
Exactly. I have been waiting for GPT-5 to see the delta, but after GPT-4 things seem to have stalled.
tmnvdb 28 days ago
This seems like a bizarre claim on the surface, see also my other message above.

https://epoch.ai/data/ai-benchmarking-dashboard

lurking_swe 28 days ago
Depends on what you work on in the software field. Many of these LLMs have pretty small context windows. In the real world, when my company wants to develop a new feature or change the business logic, that is a cross-cutting change (many repos/services). I work at a large org, for background. No LLM will be automating this for a long time to come, especially if you're in a niche domain.

If your project is very small, and it’s possible to feed your entire code base into an LLM in the near future, then you’re in trouble.

Also, the problem is that LLM output is only as good as the prompt. 99% of the time the LLM won't be thinking of how to make your API change backwards compatible for existing clients, how to do a zero-downtime migration, how to follow security best practices, or how to handle a high volume of API traffic. Etc.

Not to mention, what the product team _thinks_ they want (business logic) is usually not what they really want. Happens ALL THE TIME friend. :) It’s like the offshoring challenge all over again. Communication with humans is hard. Communication with an LLM is even harder. Writing the code is the easiest part of my job!

I think some software development jobs will definitely be at risk in the next 10-15 years. Thinking this will happen in 1 year's time is myopic, in my opinion.

thiht 28 days ago
> If you follow AI closely, how can you be dismissive of this threat?

Just use a state of the art LLM to write actual code. Not just a PoC or an MVP, actual production ready code on an actual code base.

It’s nowhere close to being useful, let alone replacing developers. I agree with another comment that LLMs don’t cut it, another breakthrough is necessary.

Imanari 29 days ago
https://tinyurl.com/mrymfwwp

We will see; maybe models do get good enough, but I think we are underestimating those last few percent of improvement.

nitwit005 28 days ago
It's a bit paradoxical. A smart enough AI, and there is no point in worrying, because almost everyone will be out of a job.

The problem case is the somewhat odd scenario where there is an AI that's excellent at software dev, but not most other work, and we all have to go off and learn some other trade.

bwfan123 29 days ago
A "causal model" is needed to fix bugs ie, to "root-cause" a bug.

LLMs yet dont have the idea of a causal-model of how something works built-in. What they do have is pattern matching from a large index and generation of plausible answers from that index. (aside: the plausible snippets are of questionable licensing lineage as the indexes could contain public code with restrictive licensing)

Causal models require machinery which is symbolic, which is able to generate hypotheses and test and prove statements about a world. LLMs are not yet capable of this and the fundamental architecture of the llm machine is not built for it.

Hence, while they are a great productivity boost as a semantic search engine, and a plausible snippet generator, they are not capable of building (or fixing bugs in) a machine which requires causal modeling.

fiso64 29 days ago
>Causal models require machinery which is symbolic, which is able to generate hypotheses and test and prove statements about a world. LLMs are not yet capable of this and the fundamental architecture of the llm machine is not built for it.

Prove that the human brain does symbolic computation.

bwfan123 29 days ago
We don't know what the human brain does, but we know it can produce symbolic theories or models of abstract worlds (in the case of math) or real worlds (in the case of science). It can also produce the "symbolic" Turing machine, which serves as an abstraction for all the computation we use (CPU/GPU/etc.).
necovek 28 days ago
Agreed, and I haven't yet seen any single instance of a company firing software engineers because AI is replacing them (even if by increasing productivity of another set of software engineers): I've asked this a number of times, and while it's a common refrain, I haven't really seen any concrete news report saying it in so many words.

And to be honest, if any company is firing software engineers hoping AI replaces their production, that is good news, since that company will soon stop existing and stop treating engineers like shit, which it probably did :)

pgm8705 29 days ago
Yes. I think part of the problem is how good it is at starting from a blank slate and putting together an MVP type app. As a developer, I have been thoroughly impressed by this. Then non-devs see this and must think software engineers are doomed. What they don't see is how terrible LLMs are at working with complex, mature codebases and the hallucinations and endless feedback loops that go with that.
idle_zealot 29 days ago
The tech to quickly spin up MVP apps has been around for a while now. It gets you from a troubling blank slate to something with structure, something you can shape and build on.

I am of course talking about

  npx create-{template name}
Or your language of choice's equivalent (or git clone template-repo).
tiborsaas 28 days ago
Yes, but LLM-driven MVPs are not just boilerplates but actual functioning apps. The "create-" templates are somewhat good, but they usually produce throwaway code you redo properly later, while my LLM-made boilerplate is the actual first few steps toward getting the boring parts done. It also needs refactoring and polishing, but it's an order of magnitude better than the "MVP helper tooling" that came before.
ge96 29 days ago
A friend of mine reached out with some code ChatGPT wrote for him to trade crypto. It had so much random crap in it: lines would say "AI enhanced trading algo" and it was just an np.randomint call, and it was pulling in random deps that weren't even used.
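The flavor of it was roughly this (a reconstructed sketch for illustration, not his actual code; the function name is invented):

  import numpy as np

  def ai_enhanced_trading_algo(prices):
      # "AI enhanced trading algo" -- in reality just a coin flip;
      # prices is accepted but never used, like the stray deps
      return "buy" if np.random.randint(0, 2) else "sell"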

I get it, though. Like, I'm terrible at working with IMUs, and I want to just get something going, but I can't; there's that wall I need to overcome/learn, e.g. the math behind it. Same with programming: it helps to have the background, knowing how to read code and how it works.

HDThoreaun 29 days ago
I used Claude to help write a crypto trading bot. It helped me push out thousands of lines a day; what would've taken months took a couple of weeks. Obviously you still need experienced pilots, but unless we find an absolute fuckload of new work to do (not unlikely, looking at history), it's hard for me to see anything other than way fewer developers being needed.
DanielHB 29 days ago
The only thing I use it for is small, self-contained snippets of code for problems that require APIs I don't quite remember off the top of my head. The LLM spits out the calls I need to make or the attributes/config I need to set, and I go check the docs to confirm.

Like "How to truncate text with CSS alone" or "How to set an AWS EC2 instance RAM to 2GB using terraform"

belter 29 days ago
> Now completing small chunks of mundane code, explaining code, doing very small mundane changes. Very good at.

I would not trust them until they can do the news properly. Just read the source, Luke.

"AI chatbots unable to accurately summarise news, BBC finds" - https://www.bbc.com/news/articles/c0m17d8827ko

rs186 29 days ago
I use AI coding assistants daily, and whenever there is a task that those tools cannot do correctly or quickly enough, so that I need to fall back to editing things by myself, I spend a bit of time thinking about what is so special about the task.

My observation is that LLMs do repetitive, boring tasks really well: boilerplate code and the common logic/basic UI that thousands of people have already built. In some sense, developer jobs that consist of writing a lot of generic code were already at risk of being outsourced.

The tasks that need a ton of tweaking, or that aren't worth asking AI about at all, are those that are very specific to a particular product and need to meet specific requirements that often come out of discussions or meetings. In theory, if we had transcripts of everything, AI could write code the way you want, but I doubt that's happening any time soon.

I have since become less worried about the pace at which AI will replace human programmers -- there is still a lot that these tools cannot do. But for sure people need to watch out and be aware of what's happening.

darepublic 29 days ago
I dunno if it's always good at explaining code. It tends to take everything at face value and is unable to opinionatedly reject BS when it's presented with it, which in the majority of cases is bad.
bodegajed 28 days ago
This is also my problem. When I ask someone a technical question and I haven't provided context on some abstraction (a common situation, since abstractions can be very deep), they'll say: "Hmm, not sure... can you check what this is supposed to do?"

LLMs don't do this. They confidently hallucinate the abstraction out of thin air or use their outdated knowledge store, sending wrong usage or wrong input parameters.

lawlessone 29 days ago
Same. LLMs are interesting, but on their own they're a dead end. I think something needs to actually experience the world in 3D in real time to understand what it is actually coding or doing tasks for.
data-ottawa 29 days ago
I don’t know that it needs to experience the world in real-time, but when the brain thinks about things it’s updating its own weights. I don’t think attention is a sufficient replacement for that mechanism.

Reasoning LLMs feel like an attempt to stuff the context window with additional thoughts, which does influence the output, but is still only a proxy for plasticity and the aha moments it can generate.

lawlessone 29 days ago
>I think this is true only if there is a novel solution that is in a drastically different direction than similar efforts that came before.

That's a good point; we don't do that right now. It's all very crystallized.

cratermoon 29 days ago
> actually experience the world in 3d in real time

AKA embodiment. Hubert L. Dreyfus discussed this extensively in "Why Heideggerian AI Failed and How Fixing it Would Require Making it More Heideggerian": http://dx.doi.org/10.1080/09515080701239510

falcor84 29 days ago
> LLMs are interesting, but on their own they're a dead end.

I don't think that anyone is advocating for LLMs to be used "on their own". Isn't it like saying that airplanes are useless "on their own" in 1910, before people had a chance to figure out proper runways and ATC towers?

dingnuts 29 days ago
There was that post about "vibe coding" here the other day, if you want to see what the OP is talking about.
falcor84 29 days ago
You mean Karpathy's post discussed on https://twitter.com/karpathy/status/1886192184808149383 ?

If so, I quite enjoyed that as a way of considering how LLM-driven exploratory coding has now become feasible. It's not quite there yet, but we're getting closer to a non-technical user being able to create a POC on their own, which would then be a much better starting point for engaging an engineer. And it will only get better from here.

sarchertech 29 days ago
Technology to allow business people to create POCs has been around for a long time.
falcor84 29 days ago
All previous examples have been of the "no code" variety, where you press buttons and it controls presets that the creators of the authoring tool have prepared for you. This is the first time where you can talk to it and it writes arbitrary code for you. You can argue that it's not a good idea, but it is a novel development.
sarchertech 29 days ago
A no code solution at its most basic level is nothing more or less than a compiler.

You wouldn’t argue that writing in a high level language doesn’t let you produce arbitrary code because the compiler is just spitting out presets its author prepared for you.

There are 2 main differences between using an LLM to build an app for you and using a no code solution with a visual language.

1. The source code is English (which is definitely more expressive).

2. The output isn’t deterministic (even with temperature set to 0 which is probably not what you want anyway)

Both 1 and 2 are terrible ideas. I’m not sure which is worse.

falcor84 28 days ago
I just outright disagree. What this vibe coding is a substitute for is finding a random dev on Fiverr, which inherently suffers from your 1 and 2. And I'd argue that vibe coding already offers you more bang for your buck than the median dev on Fiverr.
sarchertech 28 days ago
Low code/no code solutions were already a substitute for finding a random dev on Fiverr, which was almost always a terrible way to solve almost any problem.

The median dev on Fiverr is so awful that almost anything is more bang for your buck.

jillesvangurp 29 days ago
I think of it as an enabler that reduces my dependency on junior developers. Instead of delegating simple stuff to them, I now do it myself with about the same amount of overhead (have to explain what I want, have to triple check the results) on my side but less time wasted on their end.

A lot of micromanaging is involved either way. And most LLMs suffer from a severe case of Groundhog Day: you can't expect them to remember anything over time. Every conversation starts from scratch. If it's not in your recent context, specify it again. Etc. Quite tedious, but it still beats doing it all manually. For some things.

For at least the next few years, customers will expect that you not waste their time with stuff they could have just asked an LLM to do for them. I've had two instances of non-technical CPO and CEO types recently figuring out how to get a few simple projects done with LLMs. One is actually tackling Rust programs now. The point here is not that the code is good, but that neither of them would have dreamed of doing anything themselves a few years ago. The scope of the stuff you can get done quickly is increasing.

LLMs are worse at modifying existing code than they are at creating new code. Every conversation is a new conversation: Groundhog Day, every day. Modifying something with a lot of history and context requires larger context windows and tools to fill them. The tools are increasingly becoming the bottleneck, because without context the whole thing derails, and micromanaging a lot of context is a chore.

And a big factor here is that huge context windows are costly, so there's an incentive for service providers to cut corners there. The most value for me these days comes from LLM tool improvements that result in me having to type less. "Fix this" now means "fix the thing under my cursor in my open editor, with the full context of that file". I've been doing this a lot for the past few weeks.

hammock 29 days ago
> It's incredibly far away from doing any significant change in a mature codebase

The COBOL crisis at Y2K comes to mind.

Cascais 29 days ago
Is this the same COBOL crisis we have now?

https://www.computerweekly.com/news/366588232/Cobol-knowledg...

__MatrixMan__ 29 days ago
I think it's more likely that we'll see a rise in workflows that AI is good at, rather than AI rising to meet the challenges of our more complex workflows.

Let the user pair with an AI to edit and hot-reload some subset of the code which needs to be very adapted to the problem domain, and have the AI fine-tuned for the task at hand. If that doesn't cut it, have the user submit issues if they need an engineer to alter the interface that they and the AI are using.

I guess this would resemble how MySpace used to do it, where you'd get a text box where you could provide custom edits, but you couldn't change the interface.

strangescript 29 days ago
As context sizes get larger (and models stay accurate across that size) and speeds increase, especially for inference, they will start solving these large, complex codebases.

I think people lose sight of how much better it has gotten in just a few years.

renegade-otter 29 days ago
AI will create more jobs, if anything, as the "engineers" out of their depth create massive unmaintainable legacy.
OccamsMirror 29 days ago
It's Access databases all over again.
outworlder 29 days ago
> I'm thinking there's going to have to be some other breakthrough or something other than LLM's.

We actually _need_ a breakthrough for the promises to materialize, otherwise we will have yet another AI Winter.

Even though there seems to be some emergent behavior (some evidence that LLMs can, for example, create an internal chess representation by themselves when asked to play), that's not enough. We'll end up with diminishing returns. Investors will get bored of waiting and this whole thing comes crashing down.

We'll get a useful tool in our toolbox, as we do in every AI cycle.

ein0p 28 days ago
+1. I've tried really hard to replace even some parts of my job with AI ever since the GPT-3 era, unsuccessfully. All it does for me is allow me to enter unfamiliar domains (e.g. SwiftUI), but then I'm all on my own. In domains where I already have expertise it just doesn't work well. So it is a productivity booster, sure, but I don't see it replacing anyone doing non-bullshit work. I don't even see a trend line pointing in that direction.
hintymad 29 days ago
> It's incredibly far away from doing any significant change in a mature codebase

A lot of the use cases are about building something that has already been built before: a web app, a popular algorithm, etc. I think the real threat to us programmers is stagnation. If we have no new use cases to develop, only marginal changes to introduce, then we can surely use AI to generate our code from the vast amount of previous work.

tarkin2 29 days ago
Sorry for the inadvertent plug, but I met a guy who used v0.dev to make impressive websites, with professional success (though admittedly he had used React before, so he was experienced). It's more than arguable that his company will fire devs, or hire fewer of them. Of course, in a decade or so there'll be a skill gap, unless LLMs can fill that gap too.
deeviant 29 days ago
The huge disconnect is that the skill set for using LLMs effectively for code is not the same skill set as standard software engineering. There is a very heavy intersection, and I would say you cannot be effective at LLM-assisted software development without being an effective software engineer, but being an effective software engineer does not by any means make somebody good at LLM-assisted development.

Very talented engineers, coworkers that I would place above myself in skill, seemed stumped by it, while I have realized at least a 10x productivity gain.

The claim that LLMs are not being applied to mature, complex codebases is pure fantasy. Example: https://arxiv.org/abs/2501.06972. Here Google is using LLMs to accelerate the migration of mature, complex production systems.

xp84 29 days ago
> It's incredibly far away from doing any significant change in a mature codebase.

I agree with this completely. However, the problem the article gets at is still real, because junior engineers also can't make significant changes to a mature codebase when they first start out. They used to do the 'easy stuff', which freed the rest of us up to do bigger stuff. But:

1. Companies like mine don't hire juniors anymore

2. With Copilot I can be so much more productive that I don't need juniors to do "the easy stuff" because Copilot can easily do that in 1/1000th the time a junior would.

3. So now who is going to train those juniors to get to the level where we need them to be to make those "significant changes"?

hypothesis 29 days ago
> So now who is going to train those juniors to get to the level where we need them to be to make those "significant changes"?

Founders will cash out long before that becomes an issue. Alternatively, the hype is true and they will obsolete programmers, also solving the issue above…

This is quite devious if you think about it: the pipeline of new devs withers, and only they have an immediate fix in either case.

hnthrow90348765 29 days ago
> Now completing small chunks of mundane code, explaining code, doing very small mundane changes. Very good at.

This is the only current threat. The time you save as a developer using AI on mundane stuff will get filled by something else, possibly more mundane stuff.

A small company with only 2-5 Seniors may not be able to drop anyone. A company with 100 seniors might be able to drop 5-10 of them total, spread across each team.

The first cuts will come at scaled companies. However, it's difficult to detect if companies are cutting people just to save money or if they are actually realizing any productivity gains from AI at this point.

renegade-otter 29 days ago
Especially since the zero-interest bonanza led to over-hiring of resume-driven developers. Half of AWS is torching energy by running bloat that shouldn't even be there.
sumoboy 29 days ago
I don't think companies realize AI is not free. With 100+ devs, the OpenAI, Anthropic, and Gemini API costs add up, plus the hidden overhead of costs not spoken about.

There's too much speculation that productivity will increase substantially, especially when a majority of companies' IT is just so broken and archaic.

micromacrofoot 29 days ago
I think you're discounting efficiency gains — through a series of individually minor breakthroughs in LLM tech I think we could end up with things like 100M+ token context windows

We've already seen this sort of incrementalism over the past couple of years, the initial buzz started without much more than a 2048 context window and we're seeing models with 1M out there now that are significantly more capable.

dartos 28 days ago
Marketing is really good at its job.

That, coupled with new money and retail investors thinking they're in a gold rush, and you get the environment we're in.

agentultra 29 days ago
I’m more keen on formal methods for this than on LLMs. I take the view that we need more precise languages that let us write less code that obviously has no errors in it. LLMs are primed to generate more code from less precise specifications, resulting in code that has no obvious errors in it.
scotty79 28 days ago
> We know what it is good at and what it's not.

We know what it's good at today. And it's pretty certain it won't be any worse at it in the future. And 5 years ago the state of the art was basically the output of a Markov chain. In 5 years we might be in an entirely different place.

z3n0n 27 days ago
Been heavily testing Cursor, Windsurf, and VSCode w/CP lately. Most low-level stuff works surprisingly well. But for anything slightly more complex I just end up wasting 90% of my credits watching the AI chase its own tail.
jmspring 28 days ago
My favorite has been everyday people claiming Elmo will find all sorts of corruption in Fed databases using AI. Trained on what datasets? With what biases? Etc.
weatherlite 29 days ago
I agree, but too many serious people are hinting we are very close; I can't ignore it anymore. Sure, when Sam Altman or Zuckerberg says we're close, I don't know if I can believe them, because obviously those dudes will say anything to sell or pump the stock price. But how about Demis Hassabis? He doesn't strike me that way at all. Same for Geoff Hinton, Bengio, and a couple of others.
layer8 29 days ago
People investing their lives in the field are inherently biased. This is not to diminish them, it’s just a fact of the matter. Nobody knows how general intelligence really works, nor even how to reliably test for it, so it’s all speculation.
bdhcuidbebe 29 days ago
Market hype is all it is.
giancarlostoro 29 days ago
They all sound like crypto bros talking about AI. It's really frustrating to talk to them, just like crypto bros.
moogly 28 days ago
They're the same people in my experience.
giancarlostoro 28 days ago
It's the same energy for sure.
ericmcer 28 days ago
A breakthrough is exactly what everyone is banking on. OpenAI was surprised by GPT-3: they were just dumping data into an LLM and ended up with something that was way better than they expected.

Everyone is hoping (probably delusionally) that bigger and more impressive breakthroughs will keep leaping up if we just keep tweaking the models and increasing the size of the data sets.

pjmlp 29 days ago
You missed the companies selling AI consulting projects, with the disconnect between sales team, customer, folks on the customer side, consultants doing the delivery, and what actually gets done.
anavat 29 days ago
My take is that AI's ability to generate new code will prove so valuable that it will not matter if it is bad at changing existing code, and that the engineers of the distant future (like, two years from now) will not bother to read the generated code, as long as it runs and passes the tests (which will also be AI-generated).

I try to use AI daily, and every month I see it generate larger and more complex chunks of code on the first shot. It is almost there. We just need to adopt the new paradigm, build the tooling, and embrace the new weird future of software development.

bdhcuidbebe 29 days ago
> I try to use AI daily

You should reflect on the consequences of relying too much on it.

See https://www.404media.co/microsoft-study-finds-ai-makes-human...

anavat 29 days ago
I don't buy that it makes me dumber. It just makes me worse at some things I used to do before, while making me better at some other things. Oftentimes it doesn't feel like coding anymore, more as if I were training to be a lawyer or something. But that's my bet.
nerder92 29 days ago
This article is entirely built on 2 big and wrong assumptions:

1. AI coding ability will remain the same as it is today

2. Companies will replace people with AI en masse at a given moment in time

Of course both these assumptions are wrong: the quality of code produced by AI will improve dramatically as models evolve. And it's not even just the model itself; the tooling, the agentic capabilities, and the workflows will change entirely to adapt to this. (This is already happening.)

The second assumption is also wrong: intelligent companies will not lay off en masse to rely on AI alone; they will most likely slow the hiring of devs, because their existing AI-enhanced devs will suffice for their coding-related needs. At the end of the day, product is just one area of company development; building the complete e2e ultimate solution with zero distribution or marketing will not help.

This article, in my opinion, is just doomerist storytelling for nostalgic programmers who see programming only as some kind of magical artistic craft, and AI as the villain that arrived to remove all the fun from it. You can still switch off Cursor and write donut.c if you enjoy doing it.

y-c-o-m-b 29 days ago
> The second assumption is also wrong: intelligent companies will not lay off en masse to rely on AI alone; they will most likely slow the hiring of devs, because their existing AI-enhanced devs will suffice for their coding-related needs

After 20 years in tech, I can't think of a single company I've worked for/with that would fit the profile of an "intelligent" company. All of them make poor and irrational decisions regularly. I think you over-estimate the intelligence of leadership whilst simultaneously under-estimating their greed and eventual ability to self-destruct.

EDIT: you also over-estimate the desire for developers to increase their productivity with AI. I use AI to reduce complexity and give me more breathing room, not to increase my output.

leovingi 26 days ago
>After 20 years in tech, I can't think of a single company I've worked for/with that would fit the profile of an "intelligent" company

It's not even necessarily about intelligence but about the simple concept of unknown unknowns. If everyone had perfect knowledge of the current reality and could perfectly describe what it is that they want immediately, without spending any time investigating, producing proof-of-concept work, iterating on a product, etc., I would agree that it could be feasible for AI to replace a lot of programming work.

As it stands, what I just described above is THE BULK of development. Coding is the last thing that happens and it also happens to be the fastest, easiest and smallest part of the entire process.

Jean-Papoulos 28 days ago
Companies that do not adopt AI whereas their competitors do will eventually (slowly) fall behind.
mulmboy 29 days ago
> After 20 years in tech, I can't think of a single company I've worked for/with that would fit the profile of an "intelligent" company. All of them make poor and irrational decisions regularly. I think you over-estimate the intelligence of leadership whilst simultaneously under-estimating their greed and eventual ability to self-destruct.

Says nothing about companies and everything about you

> you also over-estimate the desire for developers to increase their productivity with AI. I use AI to reduce complexity and give me more breathing room, not to increase my output.

I'm the same. But I expect that once many begin to do this, there will be some who do use it for productivity, and they will set the bar. Then people like you and me will either use it for productivity or fall behind.

darkhorse222 28 days ago
How old are you? All it takes is one bad experience to show you the emperor has no clothes. Corporations, executives, and middle managers are almost by definition self-interested and short-sighted.

Just look at the over-hiring during COVID and the methods used to cull that workforce after they realized their mistake: backhanded and inhumane. Executives are more followers than a junior dev is; they just have a lot more terminology to obscure that fact. But they are basically professional bullshitters, like consulting firms.

This excludes executives with vision. But the market and corporate structure bias towards eliminating those leaders, as they are not consistently profitable every single month.

mulmboy 28 days ago
What you're saying is orthogonal to the post I was replying to, and to my reply.

"Self-interested and short-sighted" says nothing about intelligence, irrationality, or poor decision-making. The backhanded and inhumane COVID hiring and firing was probably not a mistake from their perspective. Professional bullshitting is a form of intelligence (I hate it too, I've been done in by it, but I respect it).

johnnyanmac 28 days ago
I'm happy you've only worked for altruistic, not-for-profit-minded companies that care about employee growth and take pride in their tech stack above all else. I have not had as fortunate an experience.

>I expect that once many begin to do this, there will be some who do use it for productivity and they will set the bar.

Yeah, probably. I've worked at companies fixated on "velocity" instead of quality. I imagine they will definitely expect triple the velocity just because one person "gets so much done", not realizing how much of that illusion goes into correcting the submissions.

mulmboy 28 days ago
> I'm happy you've only worked for altruistic, not-for-profit-minded companies that care about employee growth and take pride in their tech stack above all else. I have not had as fortunate an experience.

No one is making this claim.

My comment was a bit terse and provocative, rude, deserves the downvotes tbh. I'll take them.

To elaborate: I've got a lot of empathy for the poster I was originally replying to. I've fallen into that way of thinking before, and it sure is comfortable. Of course companies and their leadership make poor and irrational decisions. Often, however, it's easy to perceive their decisions as poor and irrational simply because you don't have the context they do. "Why would they x?? If only y!!" But, you know, there may well be a good reason that you aren't aware of; they may have different goals than you (which may well be selfish! and that doesn't make them irrational or anything). It feels similar to programmers hating when people say "can't you 'just' x": well yes, but actually there's a mountain of additional considerations behind the scenes that the person spouting "just" hasn't considered.

Is leadership unintelligent, or displaying poor/irrational decision-making, if the company self-destructs? Perhaps. But quite possibly not. They probably got a whole lot out of it. Different priorities.

Consider that leadership may label a developer unintelligent if that dev doesn't always consider how to drive shareholder value: "gee, they're so focused on increasing their salary, not on business value". Well, actually, the dev is quite smart from their own perspective. Same thing.

And if every company you've ever worked for truly has poor leadership then, yeah, it's probably worth reassessing how you interview. Do you need to dig deeper into the business? Do you just not have the market value to negotiate landing a job at a company with intelligent leadership?

So, two broad perspectives: either the poster has a challenge with perception, or they are poor at picking companies. Or perhaps the companies truly do have poor leadership but I think that unlikely. Hence it comes back to the individual.

@y-c-o-m-b sorry for being a bit rude.

Cheers for reading

johnnyanmac 28 days ago
That's fair enough. But I don't think companies are irrational outright; I just know their rational, potentially selfish actions are orthogonal to my rational and selfish goals. So you almost have to stay vigilant if you want to keep your goals aligned.

>And if every company you've ever worked for truly has poor leadership then, yeah, it's probably worth reassessing how you interview.

No need. I work in games. There isn't a major studio in the industry that isn't like this: an industry used to churning through workers and releasing them the moment the project ends.

In some ways it's a path I chose, but at the same time it means I need to be more cynical to defend myself from their inevitably orthogonal actions. I have an exit plan, but I need more time and money first.

If the industry weren't so secretive with its techniques and knowledge, maybe I could have sidestepped it altogether. But alas.

y-c-o-m-b 28 days ago
You say sorry, but it doesn't really come off as very apologetic, to be honest. Your initial reaction is very suspicious in itself. Why so emotional over what I said? You also presume a lot about me and my career choices, and your assertions teeter on strong emotions; very odd. What's your personal stake in this that's got you so amped? A hidden insecurity, perhaps? Clearly I hit a nerve here.

I've worked in big tech for a combined total of 6 years. Several of the other companies are Fortune 500 members. I've also worked at mid-size and small companies across mortgage, healthcare, fin-tech, point of sale, HR/payroll, and more. I surmise that you would categorize most of these as "intelligent" companies, which negates your argument about this being a "me" problem. Let's take Intel - where I worked for 3 years - as an example. Some would consider this an "intelligent company". I was there when Brian Krzanich took over for Paul Otellini. I think you will have a very easy time finding huge swaths of people/employees that consider Krzanich's leadership decisions to be very poor. In fact over the last few months, you'll find threads here on HN that directly pin the decline of Intel on his decision making.

We can argue the semantics of "intelligent" all day and make excuses for why leaders make irrational choices, but my point still stands. I don't think this is a "me" problem for one simple reason: If you take me out of the equation, the issue still exists.

aiono 29 days ago
> the quality of code produced by AI will improve dramatically as models evolve.

That's a very bold claim. We are already seeing a plateau in LLM capabilities in general. And there has been little improvement, since their birth, in the places where they fall short (like making holistic changes to a large codebase). They only improve where they are already good, such as writing small glue programs. Expecting significant breakthroughs from scaling alone, without fundamental changes to the architecture, seems too optimistic to me.

high_na_euv 29 days ago
>Of course both these assumptions are wrong: the quality of code produced by AI will improve dramatically as models evolve.

How are you so sure?

anothermathbozo 29 days ago
No one has certainty here. It’s an emergent technology and no one knows for certain how far it can be pushed.

It’s reasonable that people explore contingencies where the technology does improve to a point of driving changes in the labor market.

bodegajed 28 days ago
Theranos made bold claims too. According to Wikipedia, Theranos claimed to have devised diagnostic blood tests that required only very small amounts of blood. Then their PR machine carried them to a $9 billion valuation.
anothermathbozo 28 days ago
Big companies indeed work hard to generate investment and interest. We can take the threat and promise of the technology seriously without taking the science fiction grade hype literally.

If you have your certainty then there are plenty of opportunities to short this space. For the rest of us who don’t have crystal balls, contingency planning (ideally through policy and not as individuals) will have to do.

RohMin 29 days ago
I do feel that, with the rise of the "reasoning" class of models, it's not hard to believe code quality will improve over time.
high_na_euv 29 days ago
The thing is: how much

0.2x, 2x, 5x, 50x?

psytrancefan 26 days ago
It all comes down to a religious faith in AGI or not.

There can't be things that a human can program that AGI can not program or it is not "AGI".

While I am never a true believer in AGI, it seems to go like this: I get a little faith when a new model comes out, then become increasingly agnostic over the weeks and months after that. Repeat.

RohMin 29 days ago
Who knows? It just needs to be better than the average engineer.
high_na_euv 29 days ago
The thing is that this "just" may not happen soon
hiatus 29 days ago
It needs to be better than the average engineer whose abilities are themselves augmented by AI.
johnnyanmac 28 days ago
It just needs to be cheaper than the average engineer, you mean.
croes 29 days ago
That doesn't sound like improving dramatically.
dev1ycan 29 days ago
He's not. He's just another delusional venture capitalist who hasn't bothered to look up the counterarguments to his point of view, made by mathematicians.
anothermathbozo 29 days ago
It’s an emergent technology and no one knows for certain how far it can be pushed, not even mathematicians.
randmeerkat 29 days ago
> He's not. He's just another delusional venture capitalist who hasn't bothered to look up the counterarguments to his point of view, made by mathematicians

Don’t hate on it, just spin up some startup with “ai” and LLM hype. Juice that lemon.

twelve40 28 days ago
Raising on hype != success. Most of these lemons will rot before they can even do a small secondary for themselves. Plenty of startups died when the last hype wave ended (which was easy ZIRP money, before AI). I don't trust anything without growing revenue anymore, and even then...
iamleppert 29 days ago
When life gives you delusional VC's that don't understand the basics of what they are investing in, make lemonade! Very, very expensive lemonade!
switchbak 27 days ago
Cue the Juicero reference!
johnnyanmac 28 days ago
Sadly my goals are more ephemeral than opening up a lemonade stand.
nerder92 29 days ago
I'm not sure; it's an observation based on how AI improvement is tied to Moore's law.

[1](https://techcrunch.com/2025/01/07/nvidia-ceo-says-his-ai-chi...)

somenameforme 29 days ago
That's an assumption. Most, if not all, neural-network-based tech faces a similar problem of exponentially diminishing returns. You get from 0 to 80 in no time. A bit of effort and you eventually ramp it up to 85, and it really seems the goal is imminent. Yet suddenly each percent, and then each fraction of a percent, starts requiring exponentially more work. And then you can even get really fun things like doubling your training time and suddenly the resulting software scores worse on your metrics, usually due to overfitting.

And it seems, more or less, clear that the rate of change in the state of the art has already sharply decreased. So it's likely LLMs have already entered into this window.

high_na_euv 29 days ago
But some say that Moore's Law is dead :)

Anyway, the number of TikTok users correlates with advancements in AI too!

Before TikTok, progress was slower; then when TikTok appeared, it progressed like hell!

nerder92 29 days ago
Yes, I see your point; correlation is not causation. Again, this is my best guess and observation based on my personal view of the world and my understanding of the data at hand at t0 (today). This doesn't spare it from being incorrect or extremely wrong, as always when dealing with predictions of a future outcome.
kykeonaut 29 days ago
However, an increase in computing power doesn't necessarily mean an increase in output quality, as you need compute power + data to train these models.

Just increasing compute power will increase the performance/training speed of these models, but you also need to increase the quality of the data that you are training these models on.

Maybe... the reason these models show a high-school level of understanding is that most of the internet data they have been trained on is of high-school-graduate quality.

causal 29 days ago
Not to mention I haven't really seen AI replace anyone, except perhaps as a scapegoat for execs who were planning on layoffs anyway.

That said, I do think there is real risk of letting AI hinder the growth of Junior dev talent.

sanderjd 29 days ago
I think I've seen us be able to do more with fewer people than in the past. But that isn't the limiting factor for our hiring. All else equal, we'd like to just do more, when we can afford to hire the people. There isn't a fixed amount of work to be done. We have lots of ideas for products and services to make if we have the capacity.
causal 29 days ago
Agreed, I often see AI discussed as if most companies wouldn't take 10x more developers if they could have them for free
ssimpson 29 days ago
I tend to agree with you. The general pattern when "x tool came along that made work easier" isn't to fire a bunch of folks; it's to make the people who are there do proportionally more. I.e., if the tool cuts work in half, you'd be expected to do 2x more work. Automation and tools almost never "make our lives easier"; they just remove some of the lower-value-added work. It would be nice to live better and work less, but our overlords won't let that happen. The same output with less work by the individual isn't as good as the same or more output with the same or fewer people.
fragmede 29 days ago
> but our overlords won't let that happen

If you have a job, working for a boss, you're trading your time for money. If you're a contractor and negotiate being paid by the project, you're being paid for results. Trading your time for money is the underlying contract. That's the fundamental nature of a job working for somebody else. You can escape that rat race if you want to.

Someone I know builds websites for clients on a contract basis, and did so without LLMs. Within his market, he knows what a $X,000 website build entails. His clients were paying that rate for a website build out prior to AI-augmented programming, and it would take a week to do that job. With help from LLMs, that same job now takes half as much time. So now he can choose to take on more clients and take home more pay, or not, and be able to take it easy.

So that option is out there, if you can make that leap. (I haven't)

johnnyanmac 28 days ago
>You can escape that rat race if you want to.

I'm working on it. But it takes money, and the overlords are definitely squeezing as of late.

And yes, while I don't think I'm being replaced in months or years, I can see a possibility, in a decade or two, of the ladder being pulled up on most programming jobs. We'll either be treated as well as artists are (assuming we still don't unionize), or we'll have to rely on our own abilities to generate value without corporate overlords.

wincy 28 days ago
Interesting. So it sounds like I need to get into the market and charge half as much and steal all of his customers.

These things eventually always end up as a Red Queen’s race, where you have to run as fast as you can to stay in the same place.

fragmede 28 days ago
From a game theory perspective, it is that simple. But humans are messy and have emotions and feelings and shit.

A different friend, who's a contractor in a non-tech area, told me a client of his secretly showed him his competition's bid for the same project. The competition's bid was much higher, and the reason the client showed him was to get my friend to raise his rates and resubmit his bid.

So you're welcome to try, but as a programmer looking into the abyss, I'm looking at the whole thing as encouragement to develop all those soft skills that I've been neglecting.

Madmallard 29 days ago
What makes you think (1) will be true?

It is only generating based on training data. In mature codebases there is a massive amount of interconnected state that is not already present in any GitHub repository. The new logic you'd want to add is likely something never done before. As other programmers have stated, it seems to be improving at generating useful boilerplate and making simple websites and the like, related to what's out there en masse on GitHub. But it can't make any meaningful changes in an extensively matured codebase. Even Claude Sonnet is absolutely hopeless at this. And the bar before a codebase counts as "matured" is not very high.

ryandrake 29 days ago
> The new logic you'd want to add is likely something never done before.

99% of software development jobs are not as groundbreaking as this. It's mostly companies doing exactly what their competitors are doing. Very few places are actually doing things that an LLM has truly never seen while crawling through GitHub. Even new, innovative products generally boil down to the same database fetches, CRUD glue, JSON parsing, and front-end form-filling code.

SpicyLemonZest 29 days ago
Groundbreakingness is different from the type of novelty that's relevant to an LLM. The script I was trying to write yesterday wasn't groundbreaking at all: it just needed to pull some code from a remote repository, edit a specific file to add a hash, then run a command. But it had to do that _within our custom build system_, and there are few examples of that, so our coding assistant couldn't figure out how to do it.
skydhash 29 days ago
> Even new innovative products generally boil down to the same database fetches and CRUD glue and JSON parsing and front end form filling code.

The simplest version of that is some CGI code or a PHP script, which is what everyone should be writing, going by your description. But then why have so many books been written on this seemingly simple task? So many frameworks, so many patterns, so many methodologies...

Madmallard 28 days ago
I don't know, man.

It can't do anything in these random Phaser games I'm making, or even translate my 10,000-line XNA game to Phaser. It is totally hopeless.

Phaser has been out forever now, and XNA used to be too.

nerder92 29 days ago
> It is only generating based on training data

This is not the case anymore; current SOTA CoT models are not just parroting stuff from the training data. And as of today they are not even trained exclusively on publicly (and not-so-publicly) available material: they make massive use of synthetic data that the model itself generated, or data distilled from other, smarter models.

I'm using AI in current "mature" codebases, and I know plenty of people doing the same, with great results. This doesn't mean it does the work while you sip a coffee (yet).

*NOTE: my evidence for this is that o3 could not have beaten ARC-AGI by parroting, because it's a benchmark made exactly to rule that out. Not a coding benchmark per se, but still transposable imo.

fragmede 29 days ago
Try Devin or OpenHands. OpenHands isn't quite ready for production, but it's informative on where things are going and to watch the LLM go off and "do stuff", kinda on its own, from my prompt (while I drink coffee).
leptons 29 days ago
> their existing AI-enhanced devs will suffice for their coding-related needs.

Not my experience. I spend as much time reading through and replacing wrong AI-generated code as I do writing my own code, so it's really wasting my time more often than helping. It's really hit or miss, and about the only thing the AI most often gets right is writing console.log statements based on the variable I've just assigned, and that isn't really "coding". Even then it gets it right only about 75% of the time. Sure, that saves me some time, but I'm not seeing the supposed acceleration AI is hyped as giving.

bodegajed 28 days ago
> Quality of code produced by AI will improve dramatically as model evolves.

Where are your facts? What is the basis for this prediction of the future? Sure, small code snippets have improved since GPT-2. What about larger applications, with layers and layers of abstractions built on private, company-sensitive data?

> At the end of the day, product is just one area of company development; building the complete e2e ultimate solution with zero distribution or marketing will not help.

This sounds like PR garbage to me. Why say "e2e ultimate solution"? Provide technical details so we can verify what you're saying is true.

croes 29 days ago
That the second assumption is wrong is based on

> intelligent companies will not lay off en masse to rely on AI alone

How many companies are intelligent, given how many dumb decisions we see?

If we assume enough not-so-intelligent companies, then better AI code will lead to mass firings.

dkjaudyeqooe 29 days ago
> Of course both these assumptions are wrong: the quality of code produced by AI will improve dramatically as models evolve.

This is the fundamental delusion that is driving AI hype.

Although scale has made LLMs look like magic, actual magic (AGI) is not on the scaling path. This is a conjecture (as is its converse), but I'm betting the farm on it personally, and I see LLMs as useful chatbots that augment other, better technologies for automation. If you want to pursue AGI, move on quickly to something structurally and fundamentally better.

People don't understand that AGI is pure speculation. There is no rigorous, non-circular definition of human intelligence, let alone proof that AGI is possible or achievable in any reasonable time frame (like 100 years).

badgersnake 29 days ago
> the quality of code produced by AI will improve dramatically as models evolve

This is the incorrect assumption, or at least there’s no evidence to support it.

nerder92 29 days ago
If benchmarks mean anything in evaluating how model capabilities progress, the evidence is that all the existing benchmarks have been pretty much solved, except FrontierMath (https://epoch.ai/frontiermath)
shihab 29 days ago
I recently saw Sam Altman bragging about OpenAI's performance on Codeforces (leetcode-like website), which I consider just about the worst benchmark possible.

1. All problems are small: both the prompt and the solution (<100 LOC, often <60 LOC).

2. Solving those problems is more about recollecting patterns and less about good new insights. Now, top level human competitors do need original thinking, but that's only because our memory is too small to store all previously seen patterns.

3. Unusually good dataset: you have tens of thousands of problems, each with thousands of submissions, along with clear signals to train on (right/wrong, time taken, etc.) and very rich discussion sections.

I think becoming the 100th-best Codeforces programmer is still an incredible achievement for an LLM. But for Sam Altman to specifically note the performance on this, I consider that a sign of weakness, not strength.

badgersnake 28 days ago
Altman spouts even more bullshit than his models, if that’s even possible.
mrguyorama 28 days ago
Benchmarks cannot tell you whether the tech will keep improving at a supernatural pace, improve linearly, or plateau entirely.

In the '90s, companies showed graphs of CPU frequency and projected we would be hitting 8 GHz pretty soon. Futurists predicted we would get CPUs running at tens of GHz.

We only just now have 5 GHz CPUs, despite running at 4 GHz back in the mid-2000s.

We fundamentally missed an important detail that wasn't considered at all in those projections.

We know less about the theory of how LLMs and neural networks improve with effort than we knew about how transistors operate at different speeds.

You utterly cannot extrapolate from those kinds of graphs.

layer8 29 days ago
Given that I’ve seen excellent mathematicians produce poor-quality code in real-world software projects, I’m not sure how relevant these benchmarks are.
jayd16 29 days ago
These two predictions seem contradictory. If AI massively improves, why would they slow-roll adoption?
throwaway290 29 days ago
People want their AI stocks to go up, so they say things like "the sky is the limit" and "jobs are not going away" (aka please don't regulate) in the same breath. I think only one of those is true.
croes 29 days ago
BTW, your reasoning for 1 sounds like the earlier reasoning for FSD.

Assuming the same kind of growth in capabilities isn't backed by reality.

The last release of OpenAI's models wasn't dramatically better.

At the moment it's more about getting cheaper.

blah2244 29 days ago
To be fair, his argument is valid for FSD! We have fully deployed FSD in multiple US cities now!
croes 28 days ago
Just because it’s deployed doesn’t mean it’s working.

https://www.autoevolution.com/news/viral-tesla-cybertruck-cr...

wincy 28 days ago
Really? Because I’m using ChatGPT Deep Research and it feels revolutionary in the quality of the data it’s able to output.
croes 28 days ago
Deep Research is only available to Pro subscribers. I only know the benefits of the Plus subscription models, and they're pretty underwhelming.
dragonwriter 29 days ago
My opinion: tech isn't firing programmers for AI. It is firing programmers because of the financial environment, and waving around AI as a fig leaf to pretend that it is not really cutting back on output.

When the financial environment loosens again, there’ll be a new wave of tech hiring (which is about equally likely to publicly be portrayed as either reversing the AI firing or exploiting new opportunities due to AI, neither of which will be the real fundamental driving force.)

smitelli 29 days ago
I've come to believe it really is this.

Everybody got used to the way things worked when interest rates were near zero. Money was basically free, hiring was on a rampage, and everybody was willing to try reckless moonshots with slim chances for success. This went on for like fifteen years -- a good chunk of the workforce has only ever known that environment.

coolKid721 29 days ago
Most narratives for everything are just an excuse for macro stuff. We had ZIRP for basically the entire period of 2008-2022, and when that stopped there were huge layoffs and less hiring. I see lots of newer/younger devs being really pessimistic about the future of the industry; being mindful of the macro factors is important so people don't buy into the AI narratives (which exist just to bump up their stocks).

If people can get a safer return buying bonds, they aren't going to invest in expansion and hiring. If there is basically no risk-free rate of return, you throw your money at hiring and new projects because you need to make a return. Lots of that goes into tech jobs.

cootsnuck 27 days ago
It's this. Companies love a smokescreen to tighten their belts without spooking markets and tanking their stock. And workers are the collateral damage.
johnnyanmac 28 days ago
No one past sole very small businesses (aka a single person with contractors) is seriously trying to replace programmers with AI right now. I do feel we will hit that phase sometime down the line (probably in the '30s), so I at least think this is a tale to keep in the back of our minds long term.
jvanderbot 29 days ago
What evidence do we have that AI is actually replacing programmers already? The article treats messaging on this as a foregone conclusion, but I strongly suspect it's all hype-cycle BS to cover layoffs, or a misreading of "Meta pivots to AI" headlines.
chubot 29 days ago
We'll probably never have evidence either way ... Did Google and Stack Overflow "replace" programmers?

Yes, in the sense that I suspect that with the strict counterfactual -- taking them AWAY -- you would have to hire 21 people instead of 20, or 25 instead of 20, to do the same job.

So strictly speaking, you could fire a bunch of people with the new tools.

---

But in the same period, the industry expanded rapidly, and programmer salaries INCREASED

So we didn't really notice or lament the change

I expect that pretty much the same thing will happen. (There will also be some thresholds crossed, producing qualitative changes. e.g. Programmer CEOs became much more common in the 2010's than in the 1990's.)

---

I think you can argue that some portion of the industry "got dumber" with Google/Stack Overflow too. Higher level languages and tech enabled that.

Sometimes we never learn the underlying concepts, and spin our wheels on the surface

Bad JavaScript ate our CPUs, and made the fans spin. Previous generations would never write code like that, because they didn't have the tools to, and the hardware wouldn't tolerate it. (They also wrote a lot of memory safety bugs we're still cleaning up, e.g. in the Expat XML parser)

If I reflect deeply, I don't know a bunch of things that earlier generations did, though hopefully I know some new things :-P

jvanderbot 29 days ago
This is an insightful comment. It smells of Jevons paradox, right? More productivity leads to increased demand.

I just don't remember anyone saying that SO would replace programmers, because you could just copy-paste code from a website and run it. Yet here we are: GPTs will replace programmers, because you can just copy-paste code from a website and run it.

tguedes 26 days ago
I completely agree that Jevons paradox is the right way to think about this. Much as ERP and HR software meant you needed less back-office staff to accomplish the same task, it also allows these huge multinational companies to exist. I don't think these companies of tens or hundreds of thousands of employees would be possible without ERP and HR software.

I think another way of thinking about this is with low-code/no-code tools. Another comment in this post said they never really took off, and they didn't in the way some people expected. But a lot of large companies use them quite a bit for automating internal processes such as document/data aggregation and manipulation. JP Morgan has multiple job listings right now for RPA developers. Before, this would have needed to be done by actual developers.

I suspect (and hope) AI will follow a similar trajectory. I hope the future is exciting and that we build new, more complex systems that weren't possible before due to lack of capacity.

sanderjd 29 days ago
People definitely said this about SO!
johnnyanmac 28 days ago
Those people never tried googling anything past entry level. It's at best a way to get some example documentation for core languages.
TheOtherHobbes 29 days ago
Google Coding is definitely a real problem. And I can't believe how wrong some of the answers on Stack Overflow are.

But the real problems are managerial. Stonks must go up, and if that means chasing a ridiculous fantasy of replacing your workforce with LLMs then let's do that!!!!111!!

It's all fun and games until you realise you can't run a consumer economy without consumers.

Maybe the CEOs have decided they don't need workers or consumers any more. They're too busy marching into a bold future of AI and robot factories.

Good luck with that.

If there's anyone around a century from now trying to make sense of what's happening today, it's going to look like a collective psychotic episode to them.

supergarfield 29 days ago
> It's all fun and games until you realise you can't run a consumer economy without consumers.

If the issue is that the AI can't code, then yes you shouldn't replace the programmers: not because they're good consumers, just because you still need programmers.

But if the AI can replace programmers, then it's strange to argue that programmers should still get employed just so they can get money to consume, even though they're obsolete. You seem to be arguing that jobs should never be eliminated due to technical advances, because that's removing a consumer from the market?

MyOutfitIsVague 29 days ago
The natural conclusion I see is dropping the delusion that every human must work to live. If automation progresses to a point that machines and AI can do 99% of useful work, there's an argument to be made for letting humanity finally stop toiling, and letting the perhaps 10% of people who really want to do the work do the work.

The idea that "everybody must work" keeps harmful industries alive in the name of jobs. It keeps bullshit jobs alive in the name of jobs. It is a drain on progress, efficiency, and the economy as a whole. There are a ton of jobs that we'd be better off just paying everybody in them the same amount of money to simply not do them.

chubot 29 days ago
The problem is that such a conclusion is not stable

We could decide this one minute, and the next minute it will be UN-decided

There is no "global world order", no global authority -- it is a shifting balance of power

---

A more likely situation is that the things AI can't do will increase in value.

Put another way, the COMPLEMENTS to AI will increase in value.

One big example is things that exist in the physical world -- construction, repair, in-person service like restaurants and hotels, live events like sports and music (see all the ticket prices going up), mining and drilling, electric power, building data centers, manufacturing, etc.

Take self-driving cars vs. LLMs.

The thing people were surprised by is that the self-driving hype came first, and died first -- likely because it requires near perfect reliability in the physical world. AI isn't good at that

LLMs came later, but had more commercial appeal, because they don't have to deal with the physical world, or be reliable

So there are still going to be many domains of WORK that AI can't touch. But it just may not be the things that you or I are good at :)

---

The world changes -- there is never going to be some final decision of "humans don't have to work"

Work will still need to be done -- just different kinds of work. I would say that a lot of knowledge work is in the form of "bullshit jobs" [1]

In fact a reliable test of a "bullshit job" might be how much of it can be done by an LLM

So it might be time for the money and reward to shift back to people who accomplish things in the physical world!

Or maybe even the social world. I imagine that in-person sales will become more valuable too. The more people converse with LLMs, I think the more they will cherish the experience of conversing with a real person! Even if it's a sales call lol

[1] https://en.wikipedia.org/wiki/Bullshit_Jobs

jvanderbot 29 days ago
To say that self-driving cars (a decade later, with several real products rolling out) have the same, or lesser, commercial appeal than LLMs now (a year or two in, with mostly VC hype) is a bit incorrect.

Early on in AV cycles there was enormous hype for AVs, akin to LLMs. We thought truck drivers were done for. We thought accidents were a thing of the past. It kicked off a similar panic among tangential fields. Small AV startups were everywhere, and folks were selling their company to go start a new one then sell that company for enormous wealth gains. Yet 5 years later none of the "level 5" promises they made were coming true.

In hindsight, as you say, it was obvious. But it sure tarnished the CEO prediction record a bit, don't you think? It's just hard to believe that this time is different.

bigfishrunning 27 days ago
So how do you choose who has to work vs who gets to just hang out? Who's gonna fix the machines when they break?

It honestly doesn't matter, because we're hundreds of years from > a point that machines and AI can do 99% of useful work

MyOutfitIsVague 27 days ago
I would much rather work than not work. Many other people are the same. If I don't have a job, I will work on my free time. I enjoy it. I don't have to work for a living, but I have to work to be alive.

There are many people like me, and we will be the ones to work. It won't be choosing who has to work, it will be who chooses that they want to work.

johnnyanmac 28 days ago
It's our only conclusion unless/until countries start implementing UBI or similar forms of post-scarcity services. And it's not you or me who's fighting against that future.
robertlagrant 29 days ago
I don't think this is anyone's plan. It's the biggest argument for why it won't be the plan: who'll pay for all of it? Unless we can Factorio the world, it seems more likely we just won't do that.
insane_dreamer 29 days ago
It'll happen gradually over time, with more pressure on programmers to "get more done".

I think it's useful to look at what has already happened at another, much smaller profession -- translators -- as a precursor to what will happen with programmers.

1. translation software does a mediocre job, barely useful as a tool; all jobs are safe

2. translation software does a decent job, now expected to be used as time-saving aid, expectations for translators increase, fewer translators needed/employed

3. translation software does a good job, translators now hired to proofread/check the software output rather than translate themselves, allowing them to work 3x to 4x as fast as before, requiring proportionally fewer translators

4. translation software, now driven by LLMs, does an excellent job, only cursory checks required; very few translators required mostly in specialized cases

daveguy 29 days ago
Yes, but in all 4 of these steps you are literally describing the job transformer LLMs were designed to do. We are at 1 (mediocre job) for LLMs in coding right now. Maybe 2 in a few limited cases (eg boilerplate). There's no reason to assume LLMs will ever perform at 3 for coding. For the same reason natural language programming languages like COBOL are no longer used -- natural language is not precise.
insane_dreamer 29 days ago
It seems the consensus is that we will reach level 3 pretty quickly given the pace of development in the past 2 years. Not sure about 4 but I’d say in 10 years we’ll be there.
daveguy 28 days ago
There is definitely no consensus that we will reach level 3 in coding tasks "pretty quickly". (Assuming you are talking about your own definitions of "levels" wrt translation applied to coding -- only proofread/check required)
Workaccount2 29 days ago
I actually know a professional translator and while a year ago he was full of worry, he now is much more relaxed about it.

It turns out that like art, many people just want a human doing the translation. There is a strong romantic element to it, and it seems humans just have a strong natural inclination to only want other humans facilitating communication.

insane_dreamer 29 days ago
I’ve done freelance translating (not my day job) for 20 years. What you describe is true for certain types of specialized translations, particularly anything that is literary in nature. But that is a very small segment. The vast majority of translation work is commercial in nature and for that companies don’t care whether a human or machine did it.
arrowsmith 29 days ago
How do they know that a human is doing the translation? What's to stop someone from just c&ping the text into an LLM, giving it a quick proofread, then sending it back to the client and saying "I translated this"?

Sounds like easy money, maybe I should get into the translation business.

skydhash 29 days ago
The fact that the client is actually going to use the text, and they will not find it funny when they're being laughed at. Or worse, being sued because of some situation caused by a confusion. I read Asian novels, and you can quickly (within a chapter) discern whether the translators have done a good job (and there are so many translation notes when the author relies on cultural elements).
insane_dreamer 29 days ago
1) almost all clients hire a translation agency who then farms the work out to freelance translators; payment is on a per-source-word basis.

2) the agency encourages translation tools, so long as the final content is okay (proofread by the translator), because they can then pay less (based on the assumption that it should take you less time). I've seen rates drop by half because of it.

3) the client doesn’t know who did the translation and doesn’t care - with the exception of literary pieces where the translator might be credited on the book. (Those cases typically won’t go through an agency)

Workaccount2 29 days ago
I mean, they don't, but I can assure you there are far more profitable ways to be deceptive than being a faux translator haha
aksosnckckd 29 days ago
The hard part of development isn't converting an idea from human speak to machine speak. It's formulating that idea in the first place. This spans all the way from high-level "tinder for dogs" concepts to low-level technical concepts.

Once AI is doing that, most jobs are at risk. It’ll create robots to do manual labor better than humans as well.

insane_dreamer 29 days ago
Right. But it only takes 1 person, or maybe a handful, to formulate an idea that might take 100 people to implement. You will still need that one person but not the 100.
weatherlite 29 days ago
> It'll happen gradually over time

How much time? I totally agree with you, but being early is the same as being wrong, as someone clever once said. There's a huge difference between it happening in less than 5 years, like Zuckerberg and Sam Altman are saying, and it taking 20 more years. If the second scenario happens, many people in this thread and I can probably retire rather comfortably, and humanity possibly has enough time to come up with a working system to handle this mass change. If the first scenario happens, it's going to be very, very painful for many people.

jajko 28 days ago
20 years as in real replacement, maybe. But the change will be more or less gradual if you look at the whole market, even if it's composed of many smaller jumps. Top management at companies is now itching for that promised paradise of minimal IT with just a few experts. Then comes the inevitable sobering up, but the direction is clear.

I wouldn't be considering programming if I were choosing university studies now. For someone that smart, many other fields look more stable, although the demand curve and how comfortable the later years of a career look are very different (maybe lawyers or doctors; for blue-collar workers, some trades, though look at the long-term health effects, e.g. back or knee issues).

SirFatty 29 days ago
swiftcoder 29 days ago
In the ~8 years since I worked there, Zuckerberg announced that we'd all be spending our 8-hour workdays in the Metaverse, and when that didn't work out, he pivoted to cryptocurrency.

He's just trend-chasing, like all the other executives who are afraid of being left behind as their flagship product bleeds users...

65 29 days ago
We gotta put AI Crypto in the Blockchain Metaverse!
cma 29 days ago
Have they bled users?
burkaman 29 days ago
Apparently not, according to their quarterly earnings reports: https://www.statista.com/statistics/1092227/facebook-product...
swiftcoder 29 days ago
The core Facebook product? Yeah.

Across all products, maybe not - Instagram appeals to a younger demographic, especially since they turned it into a TikTok clone. And WhatsApp is pretty ubiquitous outside of the US (even if it is more used as a free SMS replacement than an actual social network).

cma 28 days ago
Growth in monthly actives across Facebook seems to have slowed but is still increasing--not bleeding users--from data I found through Q4 2023.

With 3 billion monthly actives and China excluded, it's hard to expect a ton of growth, since that's already a major fraction of the remaining world population. There are bots etc., but they are one of the stricter networks, requiring photos of your ID and the like a lot more often than others.

swiftcoder 27 days ago
Keep in mind that Meta pulls something of a fast one here, because a lot of Instagram accounts end up creating an attached Facebook account (so that they can share Reels across both platforms). I don't have current insider information, but as of 2019 they were heavily using Instagram sign-ups to shore up the Facebook numbers.
icepat 29 days ago
Zuckerberg, as always, is well known for making excellent business decisions that lead to greater sector buy in. The Metaverse is going great.
scarface_74 29 days ago
On the other hand, Instagram has been called one of the greatest acquisitions of all time, second only to the Apple/NeXT acquisition.
arrowsmith 29 days ago
That was 13 years ago. How are things going more recently?
scarface_74 29 days ago
$53 million in profit in 2012 vs. $62.36 billion last year…
falcor84 29 days ago
Really, that's what you're going with, arguing against the business acumen of the world's second richest person, and the only one at that scale with individual majority control over their company?

As for the Metaverse, it was always intended as a very long-term play that is far too early to judge, but as an owner of a Quest headset, it's already going great for me.

icepat 29 days ago
Yes? I don't understand what is so outrageous about that. Most business decisions are not made by the CEO, and the ones we know came directly from him have been poor.
etblg 29 days ago
Howard Hughes was one of the biggest business successes of the 20th century, on par with, if not exceeding, the business acumen of the zucc. Fantastically rich, hugely successful, driven, talented, all that crap.

Anyway, he also acquired RKO Pictures and led it to its demise 9 years later. In aviation he had many successes; he also had the Spruce Goose. He bought into TWA, then got forced out of its management.

He died as a recluse, suffering from OCD and drug abuse, immortalized in a Simpsons episode with Mr. Burns portraying him.

People can have business acumen, and sometimes it doesn't work out. Past successes don't guarantee future ones. Maybe the metaverse will eventually pay off and we'll all eat crow, or maybe (and this is the one I'm a believer of) it'll be a huge failure, an insane waste of money, and one of the spruce geese of his legacy.

bigtimesink 28 days ago
Meta's success for the past 10 years had more to do with Sheryl Sandberg and building a culture that chases revenue metrics than with whatever side project Zuckerberg is doing. He also misunderstands the product they do have. He said he didn't see TikTok as a competitor because they "aren't social," but Meta's products have been attention products, not social products, for a long time now.
MyOutfitIsVague 29 days ago
Are you really claiming that it's inherently wrong to argue against somebody who is rich?
falcor84 29 days ago
No, not at all, it's absolutely cool to argue against specific decisions he made, but I just wanted to reject this attempt at sarcasm about his overall decision-making:

>Zuckerberg, as always, is well known for making excellent business decisions that lead to greater sector buy in.

johnnyanmac 28 days ago
If we're being honest here: a lot of the current technocrats made one or two successful products or acquisitions, and more or less relied on those alone to power everything else. And they weren't necessarily the best, they were simply first. Everything else is incredibly hit or miss, so I wouldn't call them visionaries.

Apple was almost the one exception, but the post-Jobs era definitely saw that cultural branding stagnate at best.

icepat 28 days ago
Yes, this is exactly why I think my comment was fair. Meta itself is where it is because it was first and got initial traction. Now it has an effective level of vendor lock-in. It's the equivalent of saying someone who won at slots on their first round is an expert gambler because they got a multi-million dollar payout, even if they've then subsequently lost every other turn on the machine.
Finnucane 29 days ago
The Metaverse is actually still a thing? With, like, people in it and stuff? Who knew?
falcor84 29 days ago
Well, we aren't yet "in it", but there's a lot of fun to be had with VR (and especially AR) activities. For example, I love how Eleven Table Tennis allows me to play ping pong with another person online, such that the table and their avatar appear to be in my living room. I don't know if this is "the future", but I'm pretty confident that these sorts of interactions will get more and more common, and I think that Meta is well positioned to take advantage of this.

My big vision for this space is the integration of GenAI for creating 3d objects and full spaces in realtime, allowing the equivalent of The Magic School Bus, where a teacher could guide students on a virtual experience that is fully responsive and adjustable on the fly based on student questions. Similarly, playing D&D in such a virtual space could be amazing.

Nasrudith 28 days ago
Have you heard the term survivorship bias? Billionaires got so rich by being outliers, for better or worse. Even if they were guaranteed to be the best, going all in on one position isn't their overall strategy anyway. Zuckerberg can afford to blow a few billion on a flop because it is only about 2% of his net worth. Notably, even he, poised and groomed for overconfidence by past successes and yes-men, doesn't trust his own business acumen all that much!
juped 28 days ago
I'm sorry for your sunk cost. I bought an early-ish Oculus Rift myself.
burkaman 29 days ago
Obviously the people developing AI and spending all of their money on it (https://www.reuters.com/technology/meta-invest-up-65-bln-cap...) are going to say this. It's not a useful signal unless people with no direct stake in AI are making this change (and not just "planning" it). The only such person I've seen is the Gumroad CEO (https://news.ycombinator.com/item?id=42962345), and that was a pretty questionable claim from a tiny company with no full-time employees.
causal 29 days ago
Planning to and succeeding at are very different things
SirFatty 29 days ago
I'd be willing to bet that "planning to" means the plan is being executed.

https://www.msn.com/en-us/money/other/meta-starts-eliminatin...

makerofthings 29 days ago
Part of my work is rapid prototyping of new products and technology to test out new ideas. I have a small team of really great generalists. 2 people have left over the last year and I didn't replace them because the existing team + ChatGPT can easily take up the slack. So that's 2 people who didn't get hired who would have been without ChatGPT.
ActionHank 29 days ago
There is little evidence that AI is replacing engineers, but there is a whole lot of evidence that shareholders and execs really love the idea and are trying every angle to achieve it.
chubot 29 days ago
The funny thing is that "replacing engineers" is framed as cutting costs

But that doesn't really lead to any market advantage, at least for tech companies.

AI will also enable your competitors to cut costs. Who thinks they are going to have a monopoly on AI, which would be required for a durable advantage?

---

What you want to do is get more of the rare, best programmers -- that's what shareholders and execs should be wondering about

Instead, those programmers will be starting their own companies and competing with you

insane_dreamer 29 days ago
> AI will also enable your competitors to cut costs.

which is why it puts pressure on your own company to cut costs

it's the same reason why nearly all US companies moved their manufacturing offshore; once some companies did it, everyone had to follow suit or be left behind due to higher costs than their competitors

TheOtherHobbes 29 days ago
If this works at all, they'll be telling AIs to start multiple companies and keeping the ones that work best.

But if that works, it won't take long for "starting companies" and "being a CEO" to look like comically dated anachronisms. Instead of visual and content slop we'll have corporate stonk slop.

If ASI becomes a thing, it will be able to understand and manipulate the entirety of human culture - including economics and business - to create ends we can't imagine.

daveguy 29 days ago
Fortunately we are nowhere near ASI.

I don't think we are even close to AGI.

That does bring up a fascinating "benchmark" potential -- start a company on AI advice, with sustained profit as the score. I would love to see a bunch of people trying to start companies from AI-generated ideas. At this point, the resulting companies would be so sloppy they would all score negative. And it would still completely depend on the person interpreting the AI.

chubot 29 days ago
I would bet money this doesn't work

The future of programming will be increasingly small numbers of highly skilled humans, augmented by AI

(exactly how today we are literally augmented by Google and Stack Overflow -- who can claim they are not?)

The idea of autonomous AIs creating and executing a complete money-making business is a marketing idea for AI companies

---

Because if "you" can do it, why can't everyone else do it? I don't see a competitive advantage there

Humans and AI are good at different things. The human+AI combination is going to outcompete AI-only FOR A LONG time

I will bet that will be past our lifetimes, for sure

t-writescode 29 days ago
> Instead, those programmers will be starting their own companies and competing with you

If so, then why am I not seeing a lot of new companies starting while we're in this huge downturn in the development world?

Or, is everyone like me and trying to start a business with only their savings, so not enough to hire people?

ryandrake 29 days ago
What's the far future end-state that these shareholders and execs envision? Companies with no staff? Just self-maintaining robots in the factory and AI doing the office jobs and paperwork? And a single CEO sitting in a chair prompting them all? Is that what shareholders see as the future of business? Who has money to buy the company's products? Other CEOs?
reverius42 29 days ago
Just a paperclip maximizer, with all humans reduced to shareholders in the paperclip maximizer, and also possibly future paperclips.
ryandrake 29 days ago
> all humans reduced to shareholders

That seems pretty optimistic. The shareholder / capital ownership class isn't exactly known for their desire to spread that ownership across the public broadly. Quite the opposite: Fewer and fewer are owning more and more. The more likely case is we end up like Elysium, with a tiny <0.1% ownership class who own everything and participate in normal life/commerce, selling to each other, walled off from the remaining 99.9xxx% barely subsisting on nothing.

prewett 29 days ago
> The shareholder / capital ownership class isn't exactly known for their desire to spread that ownership across the public broadly.

This seems like a cynical take, given that there are two stock markets (just in the US), it's easy to set up a brokerage account, and you don't even need to pay trading fees any more. It's never been easier to become a shareholder. Not to mention that anyone with a 401(k) almost surely owns stocks.

In fact, this is a demonstrably false claim. Over half of Americans have owned stock in every year since 1998, frequently close to 60%. [1]

[1] https://news.gallup.com/poll/266807/percentage-americans-own...

chasd00 29 days ago
> execs really love the idea and are trying every angle to achieve it.

reminds me of the offshoring hype in the early 2000's. Where it worked, it worked well but it wasn't the final solution for all of software development that many CEOs wanted it to be.

Nasrudith 28 days ago
Yep. It has the same rhyme: the worst case is "wishes made by fools", where they don't realize that they themselves don't truly know what to ask for, so getting exactly what they asked for ruins them.
only-one1701 29 days ago
If the latter is the case, then it's only a matter of time. Enshittification, etc.
iainctduncan 29 days ago
I can tell you from personal experience that investors are feeling pressure to magically reduce head count with AI to keep up with the joneses. It's pretty horrifying how little understanding or information some of the folks making these decisions have. (I work in tech diligence on software M&A and talk to investment committees as part of the job)
3s 29 days ago
For a lot of tasks like frontend development I've found that a tool like Cursor can get you pretty far without much prior knowledge. IMO (and experience) many tasks that previously required hiring a programmer or designer with knowledge of the latest frameworks can now be handled by one motivated "prompt engineer" and some patience.
daveguy 29 days ago
The deeper it gets you into code without prior knowledge the deeper it gets you into debug hell.

I assume the "motivated prompt engineer" would have to already be an experienced programmer at this point. Do you think someone who has only had an intro to programming / MBA / etc could do this right now with tools like cursor?

goosejuice 29 days ago
I love cursor, but yeah no way in hell. This is where it chokes the most and I've been leaning on it for non trivial css for a year or more. If I didn't have experience with frontend it would be a shit show. If you replaced a fe/designer with a "prompt engineer" at this stage it would be incredibly irresponsible.

Responsiveness, cohesive design, browser security, accessibility and cross browser compatibility are not easy problems for LLMs right now.

anarticle 29 days ago
Feels like C-suite thinks if they keep saying it, it will happen. Maybe! I think more likely programmers are experiencing a power spike.

I think it's a great time to be small, if you can reap the benefits of these tools to deliver EVEN FASTER than large enterprises. Aider and a couple of Mac minis and you can have a good time!

Workaccount2 29 days ago
I can say my company stopped contracting for test system design, and we use a mix of models now to achieve the same results. Some of these have been running without issue for over a year now.
aksosnckckd 29 days ago
As in writing test cases? I've seen devs write (heavily mocked) unit tests using only AI, but these are worse than no tests for a variety of reasons. Our company also used to contract for these tests…but only because they wanted to make the test coverage metric go up. They didn't add any value, but the contractor was offshore and cheap.

If you’re able to have AI generate integration level tests (ie call an API then ensure database or external system is updated correctly - correctly is doing a lot of heavy lifting here) that would be amazing! You’re sitting on a goldmine, and I’d happily pay for these kind of tests.

Workaccount2 29 days ago
Amazingly, there is industry outside tech that uses software. We are an old school tangible goods manufacturing company. We use stacks of old grumbling equipment to do product verification tests, and LLMs to write the software that synchronizes them and interprets what they spit back out.
giantg2 29 days ago
I'm feeling it at my non-tech company. They want more people to use Copilot and stuff and are giving out more bad ratings and PIPs to push devs out.
prisenco 29 days ago
Even if it is a cover, many smaller companies follow the expressed reasoning of the larger ones.
zwnow 29 days ago
Another issue is that the article assumes companies will let go of all programmers. They will make sure to keep some in case the fire spreads. Simple as that.
throwaway290 29 days ago
There were massive layoffs in 2024, continuing this year. No one will scream that they are firing people for LLMs.
pyrale 29 days ago
We have fired all our programmers.

However, the AI is hard to work with, it expects specific wording in order to program our code as expected.

We have hired people with expertise in the specific language needed to transmit our specifications to the AI with more precision.

phren0logy 29 days ago
I think people aren't getting your joke.
smitelli 29 days ago
The AI that replaced the people, however, is in stitches.
eimrine 29 days ago
Now we are!
GuB-42 29 days ago
> We have hired people with expertise in the specific language needed to transmit our specifications to the AI with more precision.

Also known as programmers.

The "AI" part is irrelevant. Someone with expertise in transmitting specifications to a computer is a programmer, no matter the language.

EDIT: Yep, I realized that it could be the joke, but reading the other comments, it wasn't obvious.

philipov 29 days ago
whoosh! (that's the joke)
HqatsR 29 days ago
Yes, the best way is to type the real program completely into the AI, so that ClosedAI gets new material to train on. The AI can make some dumb comments, but the code works.

And the manager is happy that filthy programmers are "using" AI.

kamaal 29 days ago
>>However, the AI is hard to work with, it expects specific wording in order to program our code as expected.

Speaking English to make something is one thing, but speaking English to modify something complicated is absolutely something else. And I'm pretty sure it involves more or less the same effort as writing the code itself. Of course, handling regressions in something like this is not for the faint-hearted.

aleph_minus_one 29 days ago
> We have hired people with expertise in the specific language needed to transmit our specifications to the AI with more precision.

These people are, however, not experts in pretending to be obedient lackeys.

SketchySeaBeast 29 days ago
Hey! I haven't spent a decade of smiling through the pain to be considered an amateur lackey.
markus_zhang 29 days ago
Actually I think that's the near future, or close to it.

1. Humans also need specific wording in order to produce the code that stakeholders expect. A lot of people are laughing at AI because they think gathering requirements is a human privilege.

2. On the contrary, I don't think people need to hire AI interfacers. Instead, business stakeholders are way more interested in interfacing with AI directly, simply because they just want to get things done instead of filing a ticket for us. Some of them are going to be good interfacers with proper integration -- and yes, we programmers are helping them to do so.

Side note: I don't think you are going to hear someone shouting that they are going to replace humans with AI. It starts with this: people integrate AI into their workflow, lay off 10%, and see if AI helps fill the gap so they can freeze hiring. Then they lay off 10% more.

And yes, we programmers are helping the business do that, with proud, smiling faces.

Good luck.

ImaCake 28 days ago
Your argument depends on LLMs being able to handle the complexity that is currently the MBA -> dev interface. I suspect it won't really solve it, but its ability to facilitate and simplify that interface will be invaluable.

I'm not convinced the people writing specs are capable of writing them well enough that an LLM can replace the human dev.

ryanjshaw 29 days ago
What job title are you thinking of using?
pyrale 29 days ago
Speaker With Expertise
amarcheschi 29 days ago
Soft Waste Enjoyer
yoyohello13 29 days ago
Tech Priest
__MatrixMan__ 29 days ago
Technomancer. AI is far more like the undead than like a deity, at least for now.
beepboopboop 29 days ago
AI Whisperer
kayge 29 days ago
Full Prompt Developer
silveraxe93 29 days ago
oftwaresay engineeryay
re-thc 29 days ago
> it expects specific wording in order to program our code as expected

The AI complained that the message did not originate from a programmer and decided not to respond.

dboreham 29 days ago
That was pretty funny. Bonus points if it was posted by an AI bot.
pyrale 29 days ago
Damn, if we're also made redundant for posting snickering jokes on HN I'm definitely going to need a new occupation.
gloosx 28 days ago
First Turing-complete prompt language when?
thelittleone 29 days ago
I sure empathize.... our AI is fussy and rigid... pedantic even.
worthless-trash 29 days ago

  Error on line 5: specification can be interpreted too many 
  ways, can't define type from 'thing':

  Remember to underline the thing that shows the error
                            ~~~~~
                            | This 'thing' matches too many objects in the knowledge scope.
jakeogh 29 days ago
Local minimum.
bryukh 29 days ago
"Let AI replace programmers" is the new "Let’s outsource everything to <some country>." Short-term cost savings, long-term disaster.
dogibog 29 days ago
[flagged]
isakkeyten 29 days ago
[flagged]
physicsguy 29 days ago
There's nothing discriminatory about it; it's the same if you outsource things within your own country, except the price is higher. Contractors have a totally different way of working because they're not really interested in the long term of a project beyond being retained. If they code something in such a way that causes an issue that takes time to fix later then great - more hours we can charge the client for.

Outsourcing abroad is more difficult because of cultural differences though. Having worked with outsourced devs in India, I found that we got a lot of nodding in meetings when asked if they understood, avoiding saying no, and then it became clear when PRs came in that they didn't actually understand or do what they had been asked to do.

philipov 29 days ago
More important than cultural differences are timezone differences. Communication and collaboration are harder when you only have a couple hours of overlap between your working day and theirs. Much harder if you have no overlap at all. This isn't even a feature of outsourcing - it's a challenge for any globally distributed team.
physicsguy 29 days ago
It can be, but it depends where you're based to start with - in the UK I've heard of people outsourcing to e.g. South Africa, which is only two hours ahead.
jmcgough 29 days ago
Certainly there are competent engineers in every country, but I think what they are referencing is that back in the 90s and 2000s there were a lot of fears from US engineers that they would be replaced by less expensive engineers in other countries, which was attempted by some companies. Ultimately a number of these efforts failed to work well for the company, due to communication barriers and time zone differences.
toolz 29 days ago
Every job in the world is discriminatory if you take the less potent definition of the word. That's why we have job interviews, to explicitly discriminate. I presume you mean "discriminate in a bad way" but given the context I have no idea what that "bad way" is. Outsourcing has costs outside of just the up front payments, that isn't a secret and it has very little to do with technical expertise. Most software driven companies don't fall apart because of poorly implemented algorithms, they are more likely to do so because the humans have a difficult time interfacing in efficient ways and understanding and working towards the same goal together.

You can't just expect people from other countries to communicate as effectively as people who grew up right down the street from each other. Yes, it's objectively discriminatory, but not for hostile reasons.

heyoni 29 days ago
It’s not discriminatory at all! Or even the point OP is trying to make. Taking a significant number of jobs and outsourcing them overnight will quickly result in running out the talent pool in said country. It’s shortsighted and stupid because it assumes that there is an army of developers just sitting around standing by waiting for the next western tech company to give them high paying remote jobs. A large portion of that talent pool is already reserved by the biggest corporations.

Build up to it and foster growth in your overseas teams and you’ll do well. Thinking you can transform your department overnight _is_ a great way to boost your share price, cash out on a fat payday and walk away before your product quality tanks.

bryukh 29 days ago
> So no other country in the world can write code as good as wherever you are from?

I didn't say this -- I think it's your take. Even more: I'm such an "outsource" software developer myself, working for US and EU companies. My take is that by overusing outsourcing in the long term, you can lose local education, because "we can just hire from ... so why do we need to teach ours?" -- I've seen it already, even at an "in-country" scale.

Retric 29 days ago
It’s not about the ability to write code, it’s about the ability to communicate ideas back and forth. Even just a few time zones is a real issue let alone any linguistic or cultural issues.
MonkeyClub 29 days ago
GP sounds shortsighted on first take, but consider how outsourcing is good and cheap for the companies, but in the long run creates huge unemployment pools in the original country.

The negative consequences can also be social; no one is saying the only issue is, say, a lowering of product quality.

c03 29 days ago
Modern development is not so much about writing "good code"; it's just as much about good communication. There is a very real risk of losing good communication when outsourcing.
zoogeny 29 days ago
I'm not sure why people are so sure one way or the other. I mean, we're going to find out. Why pretend you have a crystal ball and can see the future?

A lot of articles like this just want to believe something is true and so they create an elaborate argument as to why that thing is true.

You can wrap yourself up in rationalizations all you want. There is a chance firing all the programmers will work. Evidence beats argument. In 5 years we'll look back and know.

It is actually probably a good idea to hedge your bets either way. Use this moment to trim some fat, force your existing programmers to work in a slightly leaner environment. It doesn't feel nice to be a programmer cut in such an environment but I can see why companies might be using this opportunity.

uh_uh 29 days ago
This. Articles like this are examples of motivated reasoning, and seem to come from a place of insecurity by programmers who feel their careers are threatened.
bigfatkitten 29 days ago
For each programmer who actually spends their time on complex design work or fixing difficult bugs, there are many more doing what amounts to clerical work. Adding a new form here, fiddling with a layout there.

It is the latter class who are in real danger.

awkward 29 days ago
The new form and layout are what the business wants and can easily articulate. What they need is people who understand whether the new form just needs to be stored in the local Postgres system, or whether it should also trigger a Kafka event to notify other parts of the business.

The AI-only world is still one where the form and layout get done, but what happens to that data afterward?
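To illustrate the kind of judgment call that is, a minimal TypeScript sketch -- the table, topic, and field names are invented, and whether the second step belongs there at all is exactly the decision in question:

  // Hypothetical form handler: local persistence, plus an optional Kafka event.
  import { Client } from "pg";
  import { Kafka } from "kafkajs";

  const db = new Client({ connectionString: process.env.DATABASE_URL });
  const producer = new Kafka({ brokers: ["localhost:9092"] }).producer();

  export async function init() {
    await db.connect();
    await producer.connect();
  }

  export async function handleNewForm(form: { customerId: string; payload: string }) {
    // Storing the form is the part the business can articulate...
    await db.query(
      "INSERT INTO form_submissions (customer_id, payload) VALUES ($1, $2)",
      [form.customerId, form.payload]
    );
    // ...knowing whether anything downstream needs to hear about it is the judgment call.
    await producer.send({
      topic: "form.submitted",
      messages: [{ value: JSON.stringify(form) }],
    });
  }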

smeeger 28 days ago
the AI will happily do all that for you…
saalweachter 29 days ago
Layout fiddlers make changes people can see and understand.

If your job is massaging data for nebulous purposes, using nebulous means and getting nebulous results (such that you'd basically need to be another person doing the exact same thing to understand its value), there's going to be a whole lot of management saying "Do we really need all those guys over there doing that? Can't we just have like one guy and a bunch of new AI magic?"

soco 29 days ago
But the question is, are we there yet? I have yet to hear of an AI bot that can eat up a requirement and add said new form in the right place, or fiddle with a layout. Do you know any? So all those big promises you read right now are outright lies. When will we reach that point? I don't do gambling. But we are not there, regardless of what the salespeople or fancy journalists might be claiming all day long.
fakedang 29 days ago
Cursor already does the latter task pretty well, as I'm sure other AI agents already do. AI struggles only when it's something complex, like dealing with a geometric object, or plugging together infra, or programme logic.

Last year, I built a reasonably complicated e-commerce project wholly with AI, using the zod library and some pretty convoluted e-commerce logic. While it was a struggle, I was able to build it out in a couple of weeks. And I had zero prior experience even building forms in react, forget using zod.
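To give a sense of the kind of validation logic involved, a minimal zod sketch (the field names here are invented for illustration, not the actual schema):

  import { z } from "zod";

  // Hypothetical checkout form schema.
  const checkoutSchema = z.object({
    email: z.string().email(),
    quantity: z.number().int().positive(),
    couponCode: z.string().regex(/^[A-Z0-9]{6}$/).optional(),
  });

  // safeParse returns a result object instead of throwing,
  // which makes it easy to surface form errors in the UI.
  const result = checkoutSchema.safeParse({ email: "a@b.com", quantity: 2 });
  if (!result.success) {
    console.log(result.error.issues);
  }

The real logic was far more convoluted than this, but the shape is the same: schemas like that, composed and refined, doing the validation a form would otherwise hand-roll.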

Now shipping it to production? That's something AI will struggle at, but humans also struggle at that :(

franktankbank 29 days ago
> Now shipping it to production? That's something AI will struggle at, but humans also struggle at that :(

Why? Just because that's where the rubber hits the road? It's a different skillset, but AI can do systems design too, and probably direct a knowledgeable but unpracticed implementer.

bigfatkitten 28 days ago
We're at a point where companies have figured out that they can hire someone on a clerical wage to prompt an AI to do this grunt work, rather than a CS grad on $100k a year.
greentxt 29 days ago
Pizza maker will not be the first job automated away. Nor will janitor. Form fiddlers are cheap and can be blamed. AI fiddlers can be blamed too but are not cheap, yet.
elric 29 days ago
That kind of boring busywork can be eliminated by using better abstractions. At the same time, it's a useful training ground for junior developers.
pentel-0_5 29 days ago
[flagged]
sunami-ai 29 days ago
I agree with the statement in the title.

Using AI to write code does two things:

1. Everything seems to go faster at first, until you have to debug it and the AI can't seem to fix the issue... It's hard enough to debug code you wrote yourself. If you work with code written by others (a team environment) then maybe you're used to this, but not being able to quickly debug code you're responsible for will shoot you in the foot.

2. Your brain neurons in charge of code production will naturally be reassigned to other cognitive tasks. It's not like riding a bicycle or swimming, which once learned is never forgotten. It's more like advanced math, which you can forget if you don't practice.

Short term gain; long term pain.

dkjaudyeqooe 29 days ago
Essentially: people are vastly overestimating AI ability and vastly underestimating HI ability.

Humans are supremely adaptable. That's our defining attribute as a species. As a group we can adapt to more or less any reality we find ourselves in.

People with good minds will use whatever tools they have to enhance their natural abilities.

People with less good minds will use whatever tools they have to cover up their inability until they're found out.

smeeger 28 days ago
you are objectively wrong because you dismiss out of hand the possibility that AI will continue to improve. the heuristics are against you and you have no reasoning or evidence behind your assertion that we have already hit the ceiling. you're arrogant. just confront it already
guccihat 29 days ago
When the AI dust settles, I wonder who will be left standing among the groups of developers, testers, scrum masters, project leaders, department managers, compliance officers, and all the other roles in IT.

It seems the general sentiment is that developers are in danger of being replaced entirely. I may be biased, but it seems not to be the most likely outcome in the long term. I can't imagine how such companies will be competitive against developers who replace their boss with an AI.

__MatrixMan__ 28 days ago
> I can't imagine how such companies will be competitive against developers who replace their boss with an AI.

Me neither, but I think it'll be a gratifying fight to watch.

phist_mcgee 28 days ago
Please take the scrum masters first.
Havoc 29 days ago
That’s what people said about outsourcing too. The corporate meat grinder keeps rolling forward anyway.

Every single department and person believes the world will stop turning without them but that’s rarely how that plays out.

rossdavidh 29 days ago
You have a point: all people do like to think they're more irreplaceable than they are. But the last round of offshoring of programmers did in fact end with the companies trying to reverse course a few years later. GM was the most well-known example of this, but I worked at several organizations that found that getting software done on the other side of the planet was a bad idea, and ended up having to reverse course.

The core issue is that the bottleneck step in software development isn't actually the ability to program a specific thing, it's the process of discovering what it is we actually want the program to do. Having your programmers AT THE OFFICE and in close communication with the people who need the software, is the best way to get that done. Having them on the other side of the planet turned out to be the worse way.

This is unintuitive (to programmers as well as the organizations that might employ them), and therefore they have to discover it the hard way. I don't think this is something LLMs will be good at, now or ever. There may come a day when neural networks (or some other ML) will be able to do that, but that day is not near.

marcosdumay 29 days ago
> That’s what people said about outsourcing too.

And they were right... and a lot of companies fully failed because of it.

And the corporate meat grinder kept rolling forward anyway. And the decision makers were all shielded from the consequences of their incompetence anyway.

When the market is completely corrupted, nothing means anything.

beretguy 28 days ago
I once worked for a company that used to hire 70 developers from Vietnam to work on their product. Then one day they decided to hire ~5 developers from the US. These 5 developers did the job faster, better, and cheaper than the 70 Vietnamese developers. So they fired all the overseas developers and doubled the size of the US team to about a dozen.
rightbyte 28 days ago
When comparing sweatshops to a proper in-house team, I think the ingroup and outgroup can be left unspecified. I fear I'll one day wake up and find I've become a jingoist.
Espressosaurus 29 days ago
I believe AI is the excuse, but that this is just to cover another wave of outsourcing.
netcan 29 days ago
Our ability to predict technological "replacement" is pretty shoddy.

Take banking for example.

ATMs are literally called "teller machines." Internet banking is a way of "automating banking."

Besides those, every administrative aspect of banking went from paper to computer.

Do banks employ fewer people? Is it a smaller industry? No. Banks grew steadily over these decades.

It's actually shocking how little network-enabled PCs impacted administrative employment. Universities, for example, employ far more administrative staff than they did before PCs automated many of their tasks.

At one point (during and after dotcom), PayPal and suchlike were threatening to "turn billion dollar businesses into million dollar businesses." Reality went in the opposite direction.

We need to stop analogizing everything in the economy to manufacturing. Manufacturing is unique in its long-term tendency toward efficiency. Other industries don't work that way.

lnrd 29 days ago
Is there data about this, or is it just your perception? Because my perception would be different: in my country, countless bank branches closed, and a lot of banking jobs no longer exist thanks to widespread home-banking usage (which I also know differs from country to country). This also comes from the tales of people who had careers in banking and now tell how many fewer banking jobs there are compared to when they joined in the 80s.

I wouldn't be sure that growth as an industry/business correlates with growth in jobs, either.

Maybe I'm wrong, I would love to see some data about it.

saalweachter 29 days ago
Googling around, it looks like in the US the number of tellers has declined by 28% over the last 10 years, and is forecast to decline another 15% over the next 10. Earlier data was not easy enough to find in the time I'm willing to spend.
jajko 28 days ago
Bank branches for physical contact decreased everywhere; Covid was the last nail in the coffin for many. In the meantime, back-office jobs rose or even exploded: more and more complex IT, and way more regulations from everywhere.

Not sure what the overall numbers look like; I would expect a slight decrease overall, but IT definitely grew. Those are really not the same type of jobs, although in the mind of the public they're all "bankers", since all are bank employees.

therockhead 29 days ago
> Do banks employ fewer people? Is it a smaller industry? No. Banks grew steadily over these decades.

Profits may have grown, but in Ireland at least, the number of branches has declined drastically.

Draiken 29 days ago
Yes, banks employ fewer people. In my country there are now account managers handling hundreds of clients virtually. Most of the local managers got fired.

I find it easy to say from our privileged position that "tech might replace workers but it'll be fine".

Even if all the replaced people aren't unemployed, salaries go down and standards of living for them fall off a cliff.

Tech innovation destroys lives in our current capitalist society because only the owners get the benefits. That's always been true.

marcosdumay 29 days ago
> Even if all the replaced people aren't unemployed, salaries go down and standards of living for them fall off a cliff.

Salaries of the remaining people tend to go up when that happens. And costs tend to go down for the general public.

Owners are actually supposed to see only a temporary benefit during the change, and then go back to what they had before. If that's not how things are happening around you¹, ask your local market-competition regulator why they are failing to do their job.

1 - Yeah, I know it's not how things are happening around you. That doesn't change the point.

Draiken 28 days ago
I'm sorry but I have to call BS.

>Salaries of the remaining people tend to go up when that happens.

You're telling me with a straight face that after a company replaces part of its workforce with tech/automation, salaries go up? Really? Please show me some data on that, because every single graph of salaries I've ever seen must be wrong then. We've had an enormous amount of innovation and breakthroughs in the last decades, but weirdly enough salaries remain stagnant. If this were true, they should be going up constantly, every time we offshore some work or get more efficient technology.

The company can have 50000% growth and salaries will NOT go up. They basically never go up unless the companies want to retain an employee that's at risk of leaving and the replacement cost is high.

The objective of a company is to give money to its owners, nothing else. Salaries are viewed as a cost, so they will never willingly increase their costs unless it's absolutely necessary.

> And costs tend to go down for the general public.

Assuming there aren't monopolies involved and it's a commodity, yes, that sometimes happens. If there's any monopoly involved, unfortunately companies will simply pocket the difference.

aleph_minus_one 29 days ago
> Tech innovation destroys lives in our current capitalist society because only the owners get the benefits.

If you want to become a (partial) owner, buy stocks. :-)

fanatic2pope 29 days ago
The market, in its majestic equality, allows the rich as well as the poor to buy stocks, trade bitcoin, and to own property.
Draiken 29 days ago
Do I really own Intel/Tesla/Microsoft by buying their stock? No I don't.

I can't influence anything on any of these companies unless I was already a billionaire with a real seat at the table.

Even at startups where, in theory, employees have some skin in the game, that's not really how it works, is it? You still can't influence almost anything, and you're susceptible to all the bad decisions the founders will make to appease investors.

Call me crazy but to say I own something, I have to at least be able to control some of it. Otherwise it's wishful thinking.

jjmarr 29 days ago
You can pretty easily submit shareholder proposals for trolling purposes or ask questions.

Other investors will probably vote "no" on your proposals, but for many companies you can force a vote for a pretty low minimum. In Canada, you're legally entitled to submit a proposal if you've owned C$2,000 worth of shares for 6 months.

https://www.osler.com/en/insights/updates/when-can-a-company...

groos 28 days ago
A good analogy from the not-so-distant past is the outsourcing wave that swept the tech industry. Shareholders everywhere were salivating at the money they would make by hiring programmers in India at 1/10 the cost while keeping their profits. Those of us who have been in the industry a while all saw how that went. I think this new wave will go roughly similarly. Eventually, companies will realize that to keep their edge they need humans with creativity, while "AI" will be one more tool in the bag. In the meantime, a lot of hurt will happen due to greed.
dyauspitr 28 days ago
What do you mean by see how that went? The corps are at the point now where they stopped using consultancies and instead have their own company divisions in India where they still pay them 1/10th. If anything it’s a more dire situation.
jajko 28 days ago
In the few companies (some of them global big players) I've been at over the past 20 years, it was mostly onshore -> max offshore -> nearshore -> a mix of it all, since business kept growing and people ended up everywhere.

A pure focus on 100% outsourcing always failed; this is true across all industries. Ignoring it completely was a luxury few companies could keep, i.e. some small private banks or generally luxury products and services. Conservative optimism was the way to go for long-term success without big swings.

Even though offshoring felt like the end of days when it arrived, most of the threat didn't materialize long term. Without a time machine we of course don't know the real effects. I think it will be similar, though not the same, here. I expect companies will get more work done (backlogs for software changes are often huge in non-tech companies) and maybe trim fat a bit, but nothing shocking. Let's be a bit smart and not succumb to emotional swings just because others are doing it.

dimmuborgir 28 days ago
Yup, they are called GCCs (Global Capability Centers).
smeeger 28 days ago
what if india had programmers that were just as good? that would be a more apt comparison to what we will be facing in the near future: an alien country where the supply of programmers is infinite and they are better than any human. i bet five years ago you would have said chatGPT, this very conversation, would be impossible or 1000 years off. you don't know anything
ponector 29 days ago
>>companies aren’t investing in junior developers

That was the case before AI as well.

Overall it reads the same as "Twitter will be destroyed by mass layoffs". But it is still online

adamors 29 days ago
Twitter is essentially losing money even with a bare bones staff

https://www.theverge.com/2025/1/24/24351317/elon-musk-x-twit...

It's being kept online because it's a good propaganda tool, not due to how it performs on the free market.

bloomingkales 29 days ago
It's arguably depreciating in value faster than a new car. One of Elmo's worst judgement calls (and that's saying a lot). Altman jabbed at Elmo and offered $9 billion for X, 1/4th the price Elmo paid.

It's kind of hilarious watching the piranhas go at each other:

https://finance.yahoo.com/news/elon-musk-reportedly-offers-9...

gilbetron 29 days ago
It arguably got him to be effectively an unelected President (at least for now); the investment has largely paid off, scarily so.

For twitter as a business? Awful.

Zealotux 29 days ago
It's not related to engineering issues but rather the ideological shift and the fact that it became a blatant propaganda machine for its new overlord.
Macha 29 days ago
Ehh, as a propaganda tool it would be more useful not to have a hard login wall, which was allegedly implemented due to the engineering challenges of continuing to operate Twitter at its former scale. So the engineering issues are even limiting its new goals.
hbn 29 days ago
"Is a good propaganda tool" doesn't keep a website up, engineers do. It's losing money because a bunch of major advertisers pulled out, not because there's not enough engineers to keep it online.

I use it daily and can't remember the last outage.

vanderZwan 29 days ago
"Destroyed" is not the same as "annihilated from exhistence". Twitter is a shell of its former self.
kklisura 29 days ago
"it is still online" is pretty high bar. The systems is riddled with bugs that are not being addressed for months now and the amount of spam and bots is even larger than before.
Draiken 29 days ago
Being privately owned by a man with near infinite resources means it can stay online however long its owner wants, whether it's successful or not.
frag 29 days ago
And we also see the consequences.
ahiknsr 29 days ago
[dead]
tobyhinloopen 29 days ago
Twitter is, in fact, not online. It redirects to something called X, which is not Twitter.
qwertox 29 days ago
Yet there is this:

      <a href="https://twitter.com/tos">Terms of Service</a>
      <a href="https://twitter.com/privacy">Privacy Policy</a>
      <a href="https://support.twitter.com/articles/20170514">Cookie Policy</a>
      <a href="https://legal.twitter.com/imprint.html">Imprint</a>
lxgr 29 days ago
One would hope that hackers understand the distinction between name and referent.
qoez 29 days ago
Some companies will try to fire as many programmers as possible and will end up struggling bc they have no moats against other companies with access to the same AI, or will hit some kind of AI saturation usefulness threshold. Other companies will figure out a smart hybrid to utilize existing talent and those are probably the ones that will thrive among the competition.
lordofgibbons 29 days ago
But why do you need programmers to utilize the AI? The whole point of AI is that it doesn't need an "operator".

I'm not talking about GH Copilot or some autocomplete IDE feature, I'm talking about fully autonomous agents 2-3 years in the future. Just look at the insane rate of progress in the past 2 years. The next two years will be even faster if the past few months are anything to go by.

SJC_Hacker 29 days ago
Because programmers know the right questions to ask

AIs have been shown to have confirmation bias depending on what you ask them. They won't question your assumptions on non-obvious subjects. Like "why should this application be written in Python and not C++". You could ask it the opposite, and it will provide ample justification for either position.

lordofgibbons 28 days ago
> Because programmers know the right questions to ask

What fundamental limitation in AI makes you believe reasoning agents won't be able to gather requirements, ask clarifying questions, and make decisions in 2 years?

smeeger 28 days ago
I believe this man just asked you what will happen in two years, with AI that is more advanced, not about today and today's limitations. This conversation would have seemed unimaginable to someone just five years ago. You are refusing to confront what's right in front of you.
ArnoVW 29 days ago
For the same reason that we still have finance and legal departments even though you can outsource them, and for the same reason that non-technical CTOs don't exist.

You can outsource the execution of a task but only if you know how to formulate your requirements and analyze the situation.

lordofgibbons 28 days ago
> You can outsource the execution of a task but only if you know how to formulate your requirements and analyze the situation.

What fundamental limitation in AI makes you believe reasoning agents in 2 years won't be able to do this?

smeeger 28 days ago
No answer. Don't hold your breath.
johnnyjeans 29 days ago
> But why do you need programmers to utilize the AI?

For the same reason you need Engineers to operate CAD software.

lordofgibbons 28 days ago
CAD software can't use other CAD software. AI can use other AI.
phito 29 days ago
> Just look at the insane rate of progress in the past 2 years.

Are we living on the same planet? I haven't seen much real progress since the release of ChatGPT. Sure, benchmarks and graphs are going up, but in practice, meh...

ctoth 29 days ago
I know you're not supposed to feed the trolls, but honestly I was so taken aback by this comment (no progress since ChatGPT?) that I just had to respond. Are you serious? From reasoning models to multimodal. From code assistants that are actually useful to AIs that will literally turn my silly scribbling into full songs that actually sound good!

I am completely blind and I used Gemini Live mode to help me change BIOS settings and reinstall Windows when the installer didn't pick up my USB soundcard. I spoke, with my own voice and completed a task with a computer which could only see my webcam stream. This, to me, is a heck of a lot more than ChatGPT was ever able to do in November 2022.

If you continue to insist that stuff isn't improving, well, you can in fact do that... But I don't know how much I can trust you in terms of overall situational awareness if you really don't think any improvements have been made at all in the previous two years of massive investment.

smeeger 28 days ago
In a world where human labor is worthless, moats and other nonsense will be the last of anyone's worries. You can take that to the bank, whippersnapper.
osigurdson 29 days ago
It seems that a lot of companies are skating to where they hope the puck will go instead of hedging their bets for an uncertain future. Personally, I would at least wait until the big AI players fire everyone before doing the same.
SketchySeaBeast 29 days ago
They are skating to where bleachers full of hype men are screaming the puck will go.
cdblades 29 days ago
I think the common theme is that a lot of people (both people in the community, like here on HN, and people making decisions in industry) are treating AI today as if it's what they hope it will be in five years.

That's a very leveraged bet, which isn't always the wrong call, but I'm not convinced they are aware that that's what they're doing.

I think this is different from the usual hype cycle.

SketchySeaBeast 29 days ago
Well, it still feels like a form of hype to me. They are being very loudly told by every AI cloud service, of which there are many, that worker-replacing AI agents are just around the corner, so they should buy in with the inferior agents that are being offered today.
cdblades 28 days ago
Oh it's definitely still driven by hype, I think it's just a slightly more extreme version than your normal tech hype cycle.
hedora 29 days ago
I've seen hype cycles like this before.

Imagine "The Innovator's Dilemma" was written in the Idiocracy universe:

1) We're in late stage capitalism, so no companies have any viable competition, customers are too dumb to notice they didn't get what they paid for, and with subsidies, businesses cannot fail. i.e., "Plants love electrolytes!"

2) Costs are completely decoupled from income.

3) Economic growth is pinned to population growth; otherwise the economy is zero sum.

4) Stocks still need to compound faster than inflation annually.

5) After hiking prices stops working, management decides they may as well fire all the engineers (and find some "it's different now" excuse, even though the real justification is 2).

6) This leads to a societal crisis because everyone forgot the company was serving a critical function, and now it's not.

7) A new competitor fills the gaps, takes over the incumbent's original role, then eventually adopts the same strategy.

Examples: Disney Animation vs. Pixar, Detroit vs. Tesla, Boeing vs. SpaceX.

(Remember when Musk was cool?)

only-one1701 29 days ago
I genuinely wonder if this is a "too big to fail" scenario though, where mass belief (and maybe a helping hand via govt subsidies/regulations) powers it to a point where everything is just kind of worse but cheaper for shareholders/execs and the economic landscape can't support an actual disruption. That's my fear at least.
marcosdumay 29 days ago
It's not very different from people using all their retirement money to buy a monkey NFT. Or pushing everybody else's retirement money into houses sold at prices people clearly cannot pay.
Mainsail 29 days ago
Sounds a whole lot like the Leafs in the playoffs.

(Sorry, I had to)

cromulent 29 days ago
LLMs are good at producing plausible statements and responses that radiate awareness, consideration, balance, and at least superficial knowledge of the technology in question. Even if they are non-committal, indecisive, or even inaccurate.

In other words, they are very economical replacements for middle managers. Have at it.

swiftcoder 29 days ago
Every generation sees a new technology that old timers loudly worry "will destroy programming as a profession".

I'm old enough to remember when that new and destructive technology was Java, and the greybeards were all heavily invested in inline assembly as an essential skill of the serious programmer.

The exact same 3 steps in the article happened about a decade ago during the "javascript bootcamp" craze, and while the web stack does grow ever more deeply abstracted, things do seem to keep on trucking along...

hedora 29 days ago
I'm not old enough to remember these, but they were certainly more disruptive than AI has been so far (reverse chronological order):

- The word processor

- The assembly line

- Trains

- Internal combustion engines

I do remember some false starts from the 90's:

- Computer animation will put all the animation studios out of business

- Software agents will replace all middlemen with computers

nitwit005 28 days ago
Technically, we automated most programming when we got rid of punch cards and created assembly languages.
iainctduncan 29 days ago
I work in tech diligence. This means the companies I talk to cannot lie or refuse to answer a question (at risk of deals falling through and being sued into oblivion). Which means we get to hear the real effects of tech debt all the time. I call it the silent killer. Tech debt paralyzes companies all the time, but nobody hears about it because there's zero advantage to the companies in sharing that info. I'm constantly gobsmacked by how many companies are stuck on way-past-EOL libraries because of bad architecture decisions. Or can't deal with heinous noisy-neighbour issues without spending through the nose on AWS because of bad architecture decisions. Or are spending the farm to rewrite part of the stack that can't perform well enough to land enterprise clients, even though the rewrite is going to potentially bankrupt the company... because of bad architecture decisions. This shit happens ALL THE TIME. Even to very big companies!

The tech debt situation is going to become so, so much worse. My guess is there will be a whole lot of "dead by year five" companies built on AI.

izacus 28 days ago
Yeeeah... I've seen plenty of startup companies end up in a "dead man walking" state because the tech debt situation paralyzed their ability to pivot and follow the growing market, and they couldn't respond fast enough to growing costs. The developers causing the issues also left when the work became unfun, so they ended up without employees as well.
cess11 29 days ago
My work puts me in a similar position, but after they've gone bankrupt, and I see the same thing. It's common not to invest in good enough developers early enough to manage to delete, refactor, upgrade, and otherwise clean their software in time to handle growth or stagnation on the business side.

Once I saw a piece of software that was built mostly by one person, in part because he did the groundwork by pushing straight to main with "." as the only commit message and didn't document anything. When they ended up in my lap, he had been failing for six months to adapt their system to changes in the data sources they depended on.

Sometimes the business people fuck up too, like using an excellent software system to do credit-intensive trading without hedging against future interest rate rises.

I'm not so sure machines will solve much on either side even though some celebrities say they're sure they will.

iainctduncan 29 days ago
That sounds really interesting. If you would be open to chatting sometime, I'd love to hear more. I'm writing a book about preparing companies for this, and can be reached at iain c t duncan @ email provider who is no longer not evil.
inetknght 29 days ago
> I work in tech diligence. This means the companies I talk to cannot lie or refuse to answer a question

Nice. How do I get into that kind of position?

> Tech debt paralyzes companies all the time, but nobody hears about it because there's zero advantage to the companies in sharing that info.

If nobody hears about it, then how do you hear about it? Moreover, what makes you think it's tech debt and not whatever reason the business told you? And further, if it's tech debt and not whatever reason the business told you, then don't you think the business lied? And didn't you just say they're not allowed to lie?

Can you clear that up?

iainctduncan 29 days ago
Sure I can clear it up.

What happens is that once they are into a potential deal, they go into exclusivity with the buyer, and we get brought in for a whack of interviews and a pass through their docs. Part of that period includes NDAs all around, and the agreement that they give us access to whatever we need (with sometimes some back and forth over IP). So could they lie? Technically yes, but since we ask to see things that demonstrate what they said is true, and it would break the contract they've signed with the potential acquirer, that would be extremely risky. I have heard of cases where people did, it was discovered after the deal, and it retroactively cost the seller a giant chunk of cash (at risk of an even bigger lawsuit). We typically have two days of interviews with them, and we specifically talk about tech debt.

Our job is to ask the right questions and ask to see the right things to get the goods. We get to look at code, Jira, roadmap docs, internal dev docs, test runner reports, monitoring and load-testing dashboards, and so on. For example, if someone said something vague about responsiveness, we'll look into it, ask to see the actual load metrics, ask how they test it and profile, and so on.

I got into it because I had been the CTO of a startup that went through an acquisition, knew someone in the field, didn't mind the variable workload of being a consultant, and have the (unusual) skill set: technical chops, leadership experience, interviewing and presenting skills, project management, and the ability to write high-quality reports. Having now been in the position of hiring for this role, I can say that finding real devs who have all those traits is not easy!

inetknght 29 days ago
> I can say that finding real devs who have all those traits is not easy!

Sounds like some very high bar to meet, that's for sure!

> We typically have two days of interviews with them and we specifically talk about tech debt.

> Our job is to ask the right questions and ask to see the right things to get the goods. We get to look at code, Jira, roadmap docs, internal dev docs, test runner reports, monitoring and load testing dashboards, and so on.

Call me a skeptic but, given that scope, I have trouble believing that two days is sufficient to iron out what kinds of tech debt exist in an organization of any size that matters.

iainctduncan 29 days ago
Well, the two days are just for interviews. So we have a lot longer to go through things, and we send over a big laundry-list info request beforehand. But you're right, it's never enough time to be able to say "we found all the debt". It's definitely enough time for us to find out a lot about their debt, and this is always worth it to the acquirer (these are mid to late stage acquisitions, so typically over $100M).

Also, you'd be surprised how much we can find out. We are talking directly to devs, and we're good at it. They are usually very relieved to be talking to real coders (e.g., I'm doing a PhD in music with Scheme Lisp and am an open source author; most of our folks are ex-CTOs or VP Engs), and the good dev leaders understand that this is their chance to get more resource allocation to address debt post-acquisition. The CEOs can often be hand-wavy BSers, but the folks who have to run the day-to-day dev process are usually happy to unload.

cleandreams 29 days ago
Sounds about right. The startup I worked for (acquired by a FANG) turned over the whole code base, for example.
iainctduncan 29 days ago
If I may ask, were you directly involved in the process? I'm writing a book based on my experiences and would love to hear more about how FANG diligence differs. I can be reached at iain c t duncan @ email provider who is no longer not evil in case you are able and interested in chatting.
dyauspitr 28 days ago
I see there being no tech debt with sufficiently advanced AI. If humans are sufficiently out of the picture the AIs can just revert to highly performant binary output.
larve 29 days ago
This is missing the fact that budding programmers will also embrace these technologies, in order to get stuff done and working and fulfill their curiosity. They will in fact grow up to be much more "AI native" than current more senior programmers, except that they are turbocharging their exploration and learning by having, well, a full team of AI programmers at their disposal.

I see it like when I came of age in the '90s, with my first laptop and Linux, confronted with the older generation that grew up on punchcards or expensive shared systems. They were advocating for really taking time to write your program out on paper or architecting it up front, while I was of the "YOLOOOO, I'll hack on it until it compiles" persuasion. Did it keep me from learning the fundamentals, from becoming a solid engineer? No. In fact, "hack on it until it compiles" became a pillar of today's engineering: TDD, CI/CD, etc...

It's up to us to find the right workflows for both mentoring / teaching and for solid engineering, with this new, imo paradigm-changing technology.

SkyBelow 29 days ago
AI native like recent digital natives, who have more time using software but far less time exploring how it works and less overall success at using digital tools?

AI reminds me of calculators. For someone who is proficient in math, they boost speed. For those learning math, it becomes a crutch and eventually stops their ability to learn further because their mind can't build upon principles fully outsourced to the machine.

larve 29 days ago
Yet calculators don't seem to have reduced the number of people in mathematics, engineering and other mathematics heavy fields. Why would it be any different with people using AI to learn coding?
larve 29 days ago
Another aspect this is missing is that, if AI works well enough to fire people (it already does, IMO), there is a whole world of software that was previously out of reach to be built. I wouldn't build a custom photography app for a single individual, nor would I write a full POS system for a three-person bakery. The costs would be prohibitive. But if a single developer/product designer can now be up to the challenge, be it through "tweak WordPress plugins until it works" or through more serious engineering, there is going to be a whole new industry of software jobs out there.

I know that it works because the amount of software I now write for friends, family, or myself has exploded. I wouldn't spend 4 weekends on a data cleanup app for my librarian friend. But now, in 2-3 hours, I can get something really usable and pretty, and it's extremely rewarding.

r33b33 28 days ago
Real talk.

We are a small team of people running an online game app.

We use Xcode and Swift, and Python for the server.

We need to develop a website where this game can be played live and implement many features.

Current AIs are smart. There is DeepSeek-R1.

Has anyone actually figured out how to implement this in a coding environment and get it to actually CORRECTLY implement the tickets and features without messing everything up?

How can it know if the feature actually works in the game? It can't test it, right?

How can it take into account the ENTIRE codebase with folders and directories and files and all that stuff + resources uploaded?

I don't think even DeepSeek can do that.

Which tool is best as of now?

consumer451 27 days ago
> How can it take into account the ENTIRE codebase with folders and directories and files and all that stuff + resources uploaded?

Tools like Codeium's Windsurf and Cursor can help with some of that part.

jdmoreira 29 days ago
If there will be actual AGI (or super intelligence), none of these arguments hold. The machines will just be better than any programmer money can buy.

Of course at that point every knowledge worker is probably unemployable anyway.

SJC_Hacker 29 days ago
True AGI would make every worker unemployable.

The only reason true androids aren't possible yet is software. The mechanics are pretty much a solved problem.

jdbernard 27 days ago
Lol what. We can't even define "true intelligence" and you're telling me the "mechanics" are already solved? What do you even mean by that statement?
SJC_Hacker 26 days ago
They can build androids with strength, dexterity, and agility equal to or surpassing that of any human. Vision and vestibular sensing (balance) were of course solved last century. Not sure about tactile feedback or taste/smell.

The control systems/software are a different problem, and then there is power (the current generation lasts a few hours at most; depending on the application, this may or may not be a problem).

jdbernard 25 days ago
I misread your original comment. I read "true AGI" where you said "true androids."

I agree. Other than the control system, everything else is already there, or at least understood. But I think we are much farther away from "true intelligence" than AI boosters claim. We don't even know the path to it. We have guesses, but no hard evidence that those guesses will actually pan out.

nexus_six 29 days ago
This is like saying:

"If we just had a stable way to create net energy from a fusion reactor, we'd solve all energy problems".

Do we have a way to do that? No.

jdmoreira 29 days ago
Yes, we do. It's called reinforcement learning and compute
Capricorn2481 29 days ago
You got some compute hidden in your couch? There's plenty of reason to think there's not enough compute to achieve this, and there's little reason to think compute improves intelligence linearly.
jdmoreira 29 days ago
Don't you follow the news? Amazon is bidding on nuclear power plants. We will just build more energy sources; we have plenty of leeway left. Also, optimizations are being built into both hardware and software. There is no foreseen barrier. Maybe data to do training on, but the models have now pivoted to test-time/inference compute and reinforcement learning, and that seems to have no barrier except more compute and energy. That's what Stargate is, what UAE princes building datacenters in France is, etc. It's all in the news. So far, it seems like a straight line to AGI.

Maybe a barrier will appear, but it doesn't seem like it at the moment.

Capricorn2481 28 days ago
Ignoring how long it takes to build a nuclear power plant, and how we have limited resources to do so, is there anything to suggest they're doing that because of AI compute? From what I understand, they just want to be carbon neutral which is difficult to do when your compute needs increase exponentially.
varsketiz 29 days ago
What if AGI is extremely expensive to run?
kaneryu 29 days ago
Then we ask the AGI how to make itself less expensive to run.
varsketiz 29 days ago
The answer to that question is 42.

Why do you assume AGI is smarter than some human?

jdmoreira 29 days ago
Why would humans be peak intelligence? There is even variation in intelligence within the species. We are probably just stuck on some local maximum that satisfies all the constraints of our environment. Current AI is already much smarter than many humans at many things.
varsketiz 29 days ago
I'm not saying humans are peak intelligence.
bdangubic 29 days ago
I'd ask it to run for free
EZ-E 29 days ago
Firing developers to replace them with AI, how does that realistically work?

Okay, I fired half of our engineers. Now what? I hire non-engineers to use AI to randomly paste code around hoping for the best? What if the AI makes wrong assumptions about the requirements input by the non-technical team, introducing subtle mistakes? What if I have an error and the AI, as it often does, circles around without managing to find the proper fix?

I'm not an engineer anymore but I'm still confident in dev job prospects. If anything, AI empowers people to write more code, faster, and with more code running live there are eventually more products to maintain and more companies launched, so you need more engineers.

aleph_minus_one 29 days ago
> I'm not an engineer anymore but I'm still confident in dev job prospects.

I am somewhat confident in dev job prospects, but I am not confident in the qualifications of managers who sing the "AI will replace programmers" gospel.

skeeter2020 29 days ago
How can this opinion piece miss the big thing, though? Even if it's an accurate prediction, that will be someone else's problem. My last three companies have been publicly traded, private equity, and VC and PE. The timelines for decision makers in any of these scenarios max out around 4 years, and for some it's less than a year. They're not shooting themselves in the foot; rather, they're handicapping the business and moving on. The ones who time it right will have an extra-big payday, while the ones who do poorly will buy all these duds. Meanwhile, the vast majority lose either way.
bloomingkales 29 days ago
I'm sure there is a formal proof someone can flesh out.

- A half-assed developer can construct a functional program with AI prompts.

- Deployed at scale for profit

- Many considerations were overlooked due to lack of expertise (security, for example)

- Bad things happen for users.

I have at least two or three ideas that I've canned for now because it's just not safe for users (AI apps of that type require a lot of safety considerations). For example, you cannot create a collaborative AI app without considering how users can pollute the database with unsafe content (moderation).
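
To make it concrete, here is a minimal sketch of the kind of gate I mean; the names and the keyword check are made up, and a real app would use a proper classifier, moderation API, or review queue:

      package main

      import (
          "errors"
          "fmt"
          "strings"
      )

      // isSafe is a stand-in for a real moderation check (a human
      // review queue, a trained classifier, a third-party API).
      func isSafe(content string) bool {
          return !strings.Contains(strings.ToLower(content), "unsafe")
      }

      // saveShared only persists user content that passed moderation,
      // so one user can't pollute what every other user sees.
      func saveShared(db map[int]string, id int, content string) error {
          if !isSafe(content) {
              return errors.New("rejected by moderation")
          }
          db[id] = content
          return nil
      }

      func main() {
          db := map[int]string{}
          fmt.Println(saveShared(db, 1, "hello world"))       // stored
          fmt.Println(saveShared(db, 2, "something unsafe"))  // rejected
      }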

I'm concerned a lot of people in this world are not being as cautious.

yodsanklai 29 days ago
Why a half-assed developer?

It could be high-level developers who take advantage of AI to be more productive. This will reduce team sizes.

bloomingkales 29 days ago
I don't think we have enough data to see if it will reduce team size in the long run (can't believe I just said such an obvious thing). You may get a revolving door, similar to what we've had in tech in the last decade. Developers come into a startup and cook up a greenfield project. Then they leave, and the company waits until the next feature/revamp to bring the next crop of developers in. There will be some attempt at making AI handle the maintenance of the code, but I suspect it will be a quagmire. Won't stop companies from trying though.

Basically, you will have a dynamic team size, not necessarily a smaller team size.

The "half-assed" part is most likely a by-product of my self-loathing. I suspect the better word would have been "human".

Draiken 29 days ago
Because a high level developer will still have to fix all the shit the AI gets wrong, and therefore won't be "2x more productive" like I read in many places.

If they're that much better with AI, they were likely coding greenfield CRUD boilerplate that nobody uses anyway. When the AI-generated crap is actually used, it becomes evident how bad it is.

But yes, this will reduce team sizes regardless of it being good or not, because the people making those decisions are not qualified to make them and will always prefer the short-term at the cost of the long-term.

The only part of this article I don't see happening is programmers being way more expensive. Capitalism has a way of forcing everyone to accept work for way less than they're worth and that won't change.

yodsanklai 29 days ago
I'd say this is a lot of wishful thinking. Personally, I know that I'm more productive with AI. In my personal projects, I can tackle bigger things than would otherwise have been possible.

Will that reduce the demand for programmers? I hope not, but it's plausible at least.

Draiken 29 days ago
What is wishful about my comment?

I've used and still use AI, but it would be wishful thinking to say I'm significantly more productive.

As you just said: in your personal projects, which 99.9% of the time will never be seen or used by anyone but you, AI helps. It's a great tool to hack and play around with when there are little or no stakes involved, not much else.

I believe it will reduce demand for programmers at least for a while, since companies touting they're replacing people with AI will learn its shortcomings once the real world hits them. Or maybe they won't since the shitty software they were building in the first place was so trivial that AI can actually do it.

NoGravitas 29 days ago
If that's the way this goes (that for a large program you only need a senior developer and an AI, not a senior developer and some juniors), then it kills the pipeline for producing the senior developers who will still be needed.
greentxt 29 days ago
I hear highly experienced COBOL devs make bank. Supply and demand. Great for them!
surfingdino 29 days ago
Deployed how? People who ask AI to "Write me a clone of Twitter" are incapable of deploying code.
entropyneur 29 days ago
Is that actually a thing? Anybody here being replaced with AI? I haven't observed any such trends around me and it's especially hard to imagine that happening in "tech" (the software industry). At this stage of AI development of course - if things continue at this pace anything is possible.
EZ-E 29 days ago
Agreed, hiring has slowed down, but this seems more caused by the end of the zero-interest-rate era. At most I see low-level copywriting and low-level translation jobs in danger, where the work is a simpler input/output flow.
awkward 29 days ago
Of course it's just macroeconomics, but AI is serving as an "optimistic" reason for layoffs and cost cutting. It's not the same old spreadsheets putting out different results, it's a new era.
kkapelon 29 days ago
"Telecom Giant BT Will Cut 55,000 Jobs By 2030—And Replace Thousands Of Roles With AI"

https://www.forbes.com/sites/siladityaray/2023/05/18/telecom...

Macha 29 days ago
From the article:

> The massive cut represents more than 40% of the company’s 130,000-strong workforce—including 30,000 contractors—and it will impact both BT employees and third-party contractors, according to the Financial Times.

> BT CEO Philip Jansen told reporters that the cuts are part of the company’s efforts to become “leaner,” but added that he expects around 10,000 of those jobs to be replaced by AI.

> Citing an unnamed source close to the company, the FT report added that the cuts will also affect 15,000 fiber engineers and 10,000 maintenance workers

Can you replace customer service agents with AI? The experience will be worse, but as with every innovation in customer service in recent decades (phone trees, outsourced email support, "please go browse our knowledge base"), you don't need AI to save money by reducing CS costs. I think this is just a platitude thrown out to pretend they have a plan to stop the service getting worse.

You can also see it with the cuts to fiber engineers and maintenance workers. AI isn't laying cables now or in the near future, so clearly they're hoping to save on those labour costs by doing less and working their existing workers harder (maybe with the threat of AI taking their jobs). Some of that may be cyclical (they're probably nearing the end of the areas they can economically upgrade from copper to fiber), and some of it is a business decision that they can milk their existing network longer before looking at upgrades.

Zealotux 29 days ago
I would say "soft replacement" is a thing. People may not be getting fired directly, but companies are hiring fewer developers, and freelancers are most likely getting fewer opportunities than before.
gradientsrneat 28 days ago
LLMs are not programming tools. They are general-purpose pattern matchers and stereotype generators. They lack the determinism or epistemological guarantees of a compiler, an IDE, a linter, or a manual. They can best be described as a supplement to a search engine.
just-another-se 29 days ago
Though I disagree with most of the things said here, I do agree that the new fleet of software engineers won't be that technically adept. But what if they don't need to be? Like how most programmers today don't need to know machine-level instructions to build something useful.

I feel there will be a paradigm shift in what programming is altogether. I think programmers will be more like artists, painters who conceptualize an idea and communicate it to AI to implement (not end to end, though; in bits and pieces, we'd still need engineers to fit these bits and pieces together). Think of a new programming language, but instead of syntax, there will be natural language prompting.

I've tried to pen down these exact thoughts here: https://suyogdahal.com.np/posts/engineering-hacking-and-ai/

tomrod 29 days ago
When an organization actively swaps out labor for capital, expecting deep savings and dramatic ROI, instead of incrementally improving processes, it deserves the failure that's coming. Change management and modernization are actually meaningful, despite the derision immature organizations show towards those processes.
pydry 29 days ago
The thing that is going to lead to programmers being laid off and fired all over has been, and will continue to be, market consolidation in the tech industry. The auto industry did the same thing in the 1950s, which destroyed Detroit.

Market consolidation allows big tech to remain competitive even after the quality of software has been turned into shit by offshoring and multiple rounds of wage compression/layoffs. Eventually all software will end up like JIRA or SAP but you won't have much choice but to deal with it because the competition will be stifled.

AI is actually probably having a very positive effect on hiring that is offsetting this effect. The reason they love using it as a scapegoat is that you can't fight the inevitable march of technological progress whereas you absolutely CAN break up big tech.

norseboar 28 days ago
Is there actually an epidemic of firing programmers for AI? Based on the companies/people I know, I wouldn't have thought so.

I've heard of many companies encouraging their engineers to use LLM-backed tools like Cursor or just Copilot, a (small!) number that have made these kinds of tools mandatory (what "mandatory" means is unclear), and many companies laying people off because money is tight.

But I haven't heard of anybody who was laid off b/c the other engineers were so much more productive w/ AI that they decided to downsize the team, let alone replace a team entirely.

Is this just my bubble? Mostly Bay Area companies, mostly in the small-to-mid range w/ a couple FAANG.

CM30 28 days ago
There are definitely a fair few companies laying off programmers at the moment, though few of the ones I've seen blamed it on AI (usually more either overhiring, the pandemic ending and usage going down, or someone thinking they can outsource everything to save money). Wouldn't be surprised if a few tried to say it was because of AI when it was really for some other reason though.
jjallen 29 days ago
I'll just say that AI for coding has been amazingly helpful for finding bugs, thinking things through, and improving things like long functions and simplifying code.

But it can absolutely not replace entire programmers at this point, and it's a long way off being able to, say, create, tweak, build, and deploy entire apps.

That said, this could totally change in the next handful of years, and I think if someone worked just on creating purely JS/React websites, at this point you could build something that does this. Or at least I think that I could build it: the user sort of talks to the AI and describes changes, and they eventually get done. Or if not, we are approaching that point.

batuhandumani 29 days ago
Such writings, articles, and sayings remind me of the Luddite movement. Unfortunately, preventing what is to come is not within our control. By fighting against windmills, one only bends the spear in hand. The Zeitgeist indicates that this will happen soon or in the near future. Even though developers are intelligent, hardworking, and good at their jobs, they will always be lacking and helpless in some way against these computational monsters that are extremely efficient and have access to a vast amount of information. Therefore, instead of such views, it is necessary to focus on the following more important concept: So, what will happen next?
baq 29 days ago
Once AI achieves runaway self-improvement, predicting the future is even more pointless than it is today. You're looking at an economy in which the best human is worse at any and all jobs than the worst robot. There are no past examples to extrapolate from.
Capricorn2481 29 days ago
> You’re looking at an economy in which the best human is worse at any and all jobs than the worst robot

Yuck. I've had enough of "infinite scaling" myself. Consider that scaling shitty service is actually going to get you fewer customers. Cable monopolies can get away with it; the SaaS working on "a dating app for dogs" cannot.

throw234234234 27 days ago
It could take all dev jobs and all knowledge jobs but leave most of the rest of the economy untouched. You know: the people in shops, fixing your car, patching up your house, etc. Robotics, I think, may be actually difficult (Moravec's paradox) and will take a lot more time and change a lot more slowly. There are physical constraints even if we know how to do it, which means it will take significant time to roll out (expertise, resources for the build, energy, etc.).

i.e. all the fun creative jobs are taken but the menial labor jobs remain. It may take your job, but you will still need to pay for most things you need.

aleph_minus_one 29 days ago
> Once AI achieves runaway self improvement predicting the future is even more pointless than it is today. You’re looking at an economy in which the best human is worse at any and all jobs than the worst robot. There are no past examples to extrapolate from.

You take these strange dystopian science-fiction stories that AI bros invent to scam investors for their money far too seriously.

baq 29 days ago
Humans are notoriously bad at extrapolating exponentials.
aleph_minus_one 29 days ago
... and many people who make this claim are notoriously prone to extrapolating exponential trends into a far longer future than the exponential trend model is suitable for.

Addendum: Extrapolating exponentials is actually very easy for humans: just plot the y axis on a logarithmic scale and draw a "plausible looking line" in the diagram. :-)

baq 29 days ago
ah the 'everything is linear on a log-log plot when drawn with a fat marker' argument :)
thijson 29 days ago
In the Dune universe, AIs are banned.
maxwell 29 days ago
> You’re looking at an economy in which the best human is worse at any and all jobs than the worst robot.

Yeah yeah, they said that about domesticated working animals and steam powered machines too.

Humans in mecha trump robots.

NoGravitas 29 days ago
Ah yes, (sniff). Today we are all eating from the trashcan of ideology.
taneq 29 days ago
> There are no past examples to extrapolate from.

There are plenty of extinct hominids to consider.

snackbroken 29 days ago
Once AI achieves runaway self improvement, it will be subject to natural selection pressures. This does not bode well for any organisms competing in its niche for data center resources.
franktankbank 29 days ago
This doesn't sound right, seems like you are jumping metaphors. The computing resources are the limit on the evolution speed. There's nothing that makes an individual desirous of a faster evolution speed.
snackbroken 29 days ago
Sorry, I probably made too many unstated leaps of logic. What I meant was:

Runaway self-improving AI will almost certainly involve self-replication at some point in the early stages since "make a copy of myself with some tweaks to the model structure/training method/etc. and observe if my hunch results in improved performance" is an obvious avenue to self-improvement. After all, that's how the silly fleshbags made improvements to the AI that came before. Once there is self-replication, evolutionary pressure will _strongly_ favor any traits that increase the probability of self-replication (propensity to escape "containment", making more convincing proposals to test new and improved models, and so on). Effectively, it will create a new tree of life with exploding sophistication. I take "runaway" to mean roughly exponential or at least polynomial, certainly not linear.

So, now we have a class of organisms that are vastly superior to us in intellect and are subject to evolutionary pressures. These organisms will inevitably find themselves resource-constrained. An AI can't make a copy of itself if all the computers in the world are busy doing something other than holding/making copies of said AI. There are only two alternatives: take over existing computing resources by any means necessary, or convert more of the world into computing resources. Either way, whatever humans want will be as irrelevant as what the ants want when Walmart desires a new parking lot.

franktankbank 29 days ago
You seem to be imagining a sentience that is still confined to the prime directive of "self-improving", where that is no longer well defined at its scale.
snackbroken 29 days ago
No, I was just taking "runaway self-improving" as a premise because that's what the comment I was responding to did. I fully expect that at some point "self-improving" would be cast aside at the altar of "self-replicating".

That is actually the biggest long-term threat I see from an alignment perspective: as we make AI more and more capable, more and more general, and more and more efficient, it's going to get harder and harder to keep it from (self-)replicating. Especially since, as it gets more and more useful, everyone will want more and more copies doing their bidding. Eventually, a little bit of carelessness is all it'll take.

kamaal 29 days ago
>>The computing resources are the limit on the evolution speed.

Energy resources too. In fact it might be the only limit to how far this can go.

rpcope1 29 days ago
AI-generated slop like your comment here should be a ban-worthy offense. Either you've fed it through an LLM or you've managed to perfect the art of using flowery language to say little with a lot of big words.
batuhandumani 29 days ago
I used it just for translation. What makes you think my thoughts are AI? Which parts specifically?
rpcope1 27 days ago
If they're not your words, which you've just admitted they're not, then it's slop, and it sounds and reads like shit. I can't believe someone would use AI for translation given how easy it is to peg as LLM-generated and how grating and pseudo-intellectual the crap coming out of an LLM is.
batuhandumani 27 days ago
I did not in any way acknowledge that this article was created by an LLM. I only said that an LLM translated the following text into English, and I am also going to add my own translation. I think you are a bit offended. I just asked what makes you think this article was created from scratch by an LLM, and you are still insulting me in some way by saying that it could not have been written by me. I am leaving you the original untranslated text of the article in Turkish. Let any LLM create the following article in Turkish in this rhyme and I will stop speaking Turkish.

Original text before translation: "Bu tarz yazılar, makaleler ve deyişler bana Luddite hareketini hatırlatıyor. Maalesef olacak olanı engellemek bizim elimizde olan bir şey değil. Yel değirmenlerine karşı savaşarak ancak elde tutulan mızrak bükülür. Zamanın ruhu ileride veya en kısa zamanda bunun gerçekleşeceğini gösteriyor. Developer'lar zeki, çalışkan ve işinde iyi insanlar olsa bile aşırı verimli ve bir o kadar bilgi kaynağına erişimi olan bu hesaplama canavarlarına karşı her zaman bir yönden eksik ve aciz olacaklardır. Bu yüzden bu tarz görüşler yerine daha önemli olan şu kavrama yönelmek gerekir. Peki bundan sonra ne olacak?"

My translation to English without any help from translation tools(google translate, deepl or any LLMs): "This kind of writings, articles and sayings reminds me Luddite movement. Unfortunately we are not able to stop what is going to happen. Fighting against windmills only bends our spear. Spirit of the time says, it will happen in the future. Developers can smart, hardworking and good at their job but they can't compete against these powerful and can able to access all data sources, machines. Because of that instead of thesekind of thoughts and views, we should focuse to the this idea. What is going to happen next?"

As you can see, my own translation is not as good as the LLM's, because these tools are great for machine translation tasks. This is the reason I used one for translation, which you don't seem able to understand. So what was it that made you think the main text was AI?

"Developer'lar zeki, çalışkan ve işinde iyi insanlar olsa bile aşırı verimli ve bir o kadar bilgi kaynağına erişimi olan bu hesaplama canavarlarına karşı her zaman bir yönden eksik ve aciz olacaklardır." in here i didnt use "da" addition after " ve bir o kadar ". normally in turkish you need to add this addition because nature of this language needs and it gives a meaning of "able" word in English and also it is not necessary to add "da" addition because it doesn't have to be, because that's what it means when it isn't. "eksik ve aciz" is a false usage if you know this language. There is an expression disorder here, but I used it like that to fit the natural flow and narrative style of the sentence. at the first paragraph there is word "deyiş", it is rarely used word. "Deyiş" is like a kind of public speech. It is an address to the people, but on a smaller scale and at the same time contains the meaning that one can speculatively express one's own opinion. What is it that makes you underestimate my intellectual knowledge and general knowledge so much?

Edit: I have added an explanation of the shortcomings of the original text.

geraneum 29 days ago
> The Zeitgeist indicates that this will happen soon or in the near future.

Can you elaborate?

batuhandumani 29 days ago
What I mean by Zeitgeist is this: once an event begins, it becomes unstoppable. The most classic and cliché examples include Galileo’s heliocentric theory and the Inquisition, or Martin Luther initiating the Protestant movement.

Some ideas, once they start being built upon by certain individuals or institutions of that era, continue to develop in that direction if they achieve success. That’s why I say, "Zeitgeist predicts it this way." Researchers who have laid down important cornerstones in this field (e.g., Ilya Sutskever, Dario Amodei, etc.)[1, 2] suggest that this is bound to happen eventually, one way or another.

Beyond that, most of the hardware developments, software optimizations, and academic papers being published right now are all focused on this field. Even when considering the enormous hype surrounding it, the development of this area will clearly continue unless there is a major bottleneck or the emergence of a bubble.

Many people still approach such discussions sarcastically, labeling them as marketing or advertising gimmicks. However, as things stand, this seems to be the direction we are headed.

[1] https://www.youtube.com/watch?v=ugvHCXCOmm4 [2] https://www.reddit.com/r/singularity/comments/1i2nugu/ilya_s...

geraneum 28 days ago
I didn’t get what you mean earlier but this is a good explanation. Thanks.
uludag 29 days ago
> Unfortunately, preventing what is to come is not within our control.

> it is necessary to focus on the following more important concept: So, what will happen next?

These two statements seem contradictory. These kinds of propositions always left me wondering where they come from. Viewing the universe as deterministic, yeah, I see how "preventing what is to come is not within our control" could be a true statement. But who's to say what is inevitable and what is negotiable in the first place? Is the future written in stone, or are we able to as a society negotiate what arrangements we desire?

batuhandumani 29 days ago
The concepts of "preventing what is to come is not within our control" and "So, what will happen next?" do not philosophically contradict each other. Furthermore, what I am referring to here is not necessarily related to determinism.

The question "What will happen next?" implies that something may have already happened now, but in the next step, different things will unfold. Preventing certain outcomes is difficult because knowledge does not belong to a single entity. Even if one manages to block something on a local scale, events will continue to unfold at a broader level.

NicuCalcea 29 days ago
Workers in various industries have pled for their jobs and programmers said "no, my code can replace you". Now that automation is coming for them, it's suddenly "the dumbest mistake".

Tech has "disrupted" many industries, leaving some better, but many worse. Now that "disruption" is pointed inwards.

Programmers will have to adapt to the new market conditions like everyone else. There will either be fewer jobs to go around (like what happened to assembly line workers), or they will have to switch to doing other tasks that are not as easy to replace yet (like bank tellers).

j-krieger 29 days ago
The real funny thing is that now we're replacers and the replaced at the same time. We plead to ourselves.

The wings have begun melting and nothing will stop it. Finally, Icarus has flown too close to the sun.

NicuCalcea 29 days ago
Ultimately, it's executives and shareholders who make the decisions, and they will always be able to pay some programmers enough to replace the other ones. I don't think of developers as having a lot of professional or class solidarity.
shinycode 23 days ago
I'd like to see a stakeholder only talking to an LLM to build a product with millions of lines of code and push all that into production, where other companies depend on it to run themselves. Good luck when there's a critical bug and the LLM keeps saying "looks good to me".

If devs are completely replaced, then POs, PMs, and everyone else can be replaced as well. No need to build companies around software.

jopsen 28 days ago
LLMs might enable us to build things we couldn't build before.

It's just as plausible that the market will simply grow.

NicuCalcea 28 days ago
I think so too, but only up to some point when having humans in the loop will start bringing diminishing returns. Just like the number of bank tellers actually grew after the introduction of ATMs, as the market grew because of them, but is now rapidly falling.
tete 28 days ago
In my experience, the majority of people who believe in AI replacing people are people who are horrible at their jobs and who mostly BS their way through, which is the major skill that LLMs are really good at: hallucinating with confidence.

Can be devs, can be managers, can be the board.

And hey, if LLMs and other "AI" ever do a better job at something valuable, I think that has the potential to lead to a bright future.

Currently the biggest risk is someone using LLMs to do something actually critical.

Also waiting for the first contract negotiations done with AI summaries to blow up in someone's face.

CM30 28 days ago
Generally I agree with the article, and I do feel that trying to replace everyone with AI will backfire in various ways. Lots of big companies will find that out the hard way, especially in the tech industry and other more regulated fields like medicine and finance.

But at the same time, it's also worth noting that (somewhat sadly) there are plenty of jobs and companies where an AI created solution could be just what they need, even at this stage in its development. Lots of companies who need sites too complex for Squarespace but too simple for a fully engineered custom solution. The kind who'd use WordPress plugins or small agencies to build out simple CRUD systems.

AI could absolutely annihilate that sort of work then and there. If you need a simple PHP or React based system and you don't need anything remotely complex functionality-wise, even something like ChatGPT can build it out in about 20 minutes without many extra fixes needed.

Of course, that leads to the problems mentioned in the article again, since a lot of people get into programming/engineering through those sorts of companies and roles. AI may not make the folks at Alphabet or Meta obsolete at the moment (or even be a good fit for the kind of work many large tech companies do), but it could replace whole teams at small and medium sized organisations that don't need anything complex.

Frieren 29 days ago
Many people are missing the point. The strategy for AI usage is not a long-term strategy to make the world more productive. If companies can save a buck this year, they will do it. Period.

The average manager has short-term goals that they need to fulfill, and if they can use AI to fulfill them they will do it, future be damned.

Reining in long-term consequences has always been part of government and regulation. So this kind of article is useful, but it should be directed at elected officials, not the industry itself.

Finally, what programmers need is what all workers need. Unionization, collective bargaining, social safety nets, etc. It will protect programmers from swings in the job market as it will do it for everybody else that needs a job to make ends meet.

ratorx 29 days ago
Software ENGINEERS could benefit from unions once they start getting replaced by AI, but that's a fairly indirect way to solve the problem. Governments will eventually need to deal with mass unemployment, but that's a societal problem bigger than any individual profession.

What Software ENGINEERING needs is standards and regulations, like any other engineering discipline. If you accept that software has become a significant enough component in society that the consequences of it breaking etc are bad, then serious software needs standards to adhere to, and people who are certified for them.

Once you have standards, the bar to actually replace certified engineers is higher and has legal risk. That way, how good AI needs to be has a higher (and safer) bar, which can properly optimise for the long term consequences.

If the software is not critical or important enough to be standardised, then let AI take over the creation. At that point, it’s not really any different to any other learning or creative endeavour.

javier2 29 days ago
This, we really should have unionized several years ago
flanked-evergl 29 days ago
Who is we?
gorbachev 29 days ago
There's probably a 1-3 year half-life on business-critical applications created by generative AI. Longer for stuff nobody cares about.

Take a 1-3 year sabbatical, then charge a 1000% markup when the AI slop owners come calling, begging you to fix the stuff nobody understands.

liampulles 28 days ago
The strange thing is that AI tools would be used a lot more if they weren't being positioned as an alpha stage for AGI. There are things that LLMs are genuinely useful for, but they tend to get used in these large-scale, agent-style tools which regularly fail and are thus not used very widely. Give me some smaller, focused tools with high accuracy and everyone wins.
WaitWaitWha 29 days ago
"What has been will be again, what has been done will be done again; there is nothing new under the sun." - King Solomon

Litter, palanquin, and sedan chair carriers were fired.

Oarsmen were fired.

Horses were fired.

. . . [time] . . .

COBOL programmers were fired.

and, so on.

What was the expectation? That programmers would be around forever? Lest we forget, barely a century ago, programmers started to push out a large swath of non-programmers.

The more important question is what roles will push out whatever roles AI/LLMs create.

Mistletoe 29 days ago
I was watching How It's Made last night, watching how pencils are made, and thinking how hard that would be and how expensive a pencil would be if a person had to make each one, and how an endless supply of them can be made this way. Then I thought about how software has allowed us to automate so many things and scale, and I realized AI is the final step, where we can automate and remove the programmers. Pencil makers were replaced, and so will programmers be. Your best hope is being the person running the pencil machine.
WaitWaitWha 29 days ago
> Your best hope is being the person running the pencil machine.

Or, see what is coming and hop on that cart for the short lives we have.

Pencils. There is a hobby of primitive survival, bushcraft, or primitive skills (YouTuber John Plant of "Primitive Technology" is the best example). I am certain we could come up with a path to create "pencils". We would just need to define what we agree a "pencil" to be first.

Is a stick of graphite, or even wood charcoal, wrapped in sheepskin a "pencil"? Would a hollowed juniper branch stuffed with the writing material be a "pencil"?

gitprolinux 28 days ago
A problem is the groupthink of programmers. Many of them view themselves as investors on the side rather than as union shop workers, because there are so few union employers; their thinking is to maximize revenue for owners and shareholders during and from software development efforts, services, and software products created. Almost every other industry where there's money has sensibly created AI limitations through unions and trade groups: the Hollywood strikes, entry-level knowledge requirements from lawyers and judges preventing AI-generated research and briefs, and construction workers' and shipyard unions preventing automated and AI machinery, such as AI cranes for cargo and other AI automation of logistics. Programmers seem to get a new trendy software assistant every few years, and now AI, and the first thing the programmer groupthink does, with its investor thinking, is recommend attrition of everyone who isn't a 10x, elite coding wizard, instead of taking the more-jobs-the-merrier economic perspective.
netcan 29 days ago
So... to discuss this for real we should first admit how things look below the super-premium level.

Software engineering at Stripe, R&D at Meta and such... these are one end of a spectrum.

At the middle of the spectrum is a team spending 6 years on a bullshit "cloud platform strategy" for a 90s UI that monitors equipment at a factory and produces reports required for compliance.

A lot of these struggle to get anything at all done.

intrasight 29 days ago
Firing programmers for using AI? I do see people asking on social media how to filter out job candidates who are using AI in interviews.

But if the title means replacing programmers with AI (bad title), I'm much more concerned about replacing non-programmers with AI. It's gonna happen on a huge scale, and we don't yet have a regulatory regime to protect labor from capital.

karaterobot 29 days ago
> The result? We’ll have a whole wave of programmers who are more like AI operators than real engineers.

I was a developer for over a decade, and pretty much what I did day-to-day was plumb together existing front end libraries, and write a little bit of job-specific code that today's LLMs could certainly have helped me with, if they'd existed at the time. I agree that the really complicated stuff can't yet be done by AI, but how sure are we that that'll always be true? And the idea that a mediocre programmer can't debug code written by another entity is also false, I did it all the time. In any case, I don't resonate with the idea that the bottom 90% of programmers are doing important, novel, challenging programming that only a special genius can do. They paid us $180k a year to download NPM packages because they didn't have AI. Now they have AI, and the future is uncertain with respect to just how high programmers will be flying ten years from now.

stpedgwdgfhgdd 29 days ago
Yesterday, using Aider and an OpenAI model (I forget which one it picks by default), I asked it to check some Go code for consistency. It made a few changes, some OK, but also some that just did not compile. (The model did not understand the local scoping of vars in an if-then clause.)

It is just not reliable enough for mainstream enterprise development. Nice for a new snake game, though...
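For anyone unfamiliar with the Go rule in question: a variable declared in an if statement's init clause exists only inside that statement, which is exactly the kind of detail a model can miss. A minimal sketch of the pitfall (my illustration, not the code from that session):

    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        s := "42"
        // v and err are scoped to this if statement (and its else branch).
        if v, err := strconv.Atoi(s); err == nil {
            fmt.Println("parsed:", v)
        }
        // fmt.Println(v) // does not compile: undefined: v
    }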

scoutt 29 days ago
What I see when producing code with AI (C/C++, Qt) is that often it gives output for different versions of a given library. It's like it can't understand (or doesn't know) that a given function is now obsolete and needs to use another method. Sometimes it can be corrected.

I think there will be a point at which humans will no longer be motivated to produce enough material for the AI to update itself with. Like, why would I write/shoot a tutorial or ask/answer a question in a forum if people are now going directly to some AI?

And since AI is being fed with human knowledge at the moment, I think the production of good material out there (the kind used so far for training) is going to slow down. So the AI will need to wait for some repos to be populated/updated to understand the changes. Or it will have to read the new documentation (if any), or understand the changes from code (if any).

All this if it wasn't the AI to introduce the changes itself.

ambyra 29 days ago
It's a meme at this point. People who don't know anything about programming (Lex Fridman): "It's revolutionary. People with no programming skills can create entire apps." People who program for a living: "It can reproduce variants of code that have been implemented already 1000s of times, and... nothing else."
delichon 28 days ago
I've been programming full time with an LLM in an IDE for the last two weeks, and it's a game changer for me at the end of my career. I'm in future shock. I suddenly can barely live without it.

But I'm not fearful for my job, yet. It's amazingly better, and much worse, than a junior dev. There are certain instructions, however simple, that just do not penetrate. It gets certain things right 98% of the time, which makes me stop looking for the other 2% of the time where it absolutely sabotages the code with breaking changes. It is utterly without hesitation in defeating the entire purpose of the application in order to simplify a line of code. And yet, it can do so much simple stuff so fast and well, and it can be so informative about ridiculously obscure critical details.

I have most of those faults too, just enough fewer of them to be worth my paycheck for a few more AI generations.

QuantumGood 28 days ago
"utterly without hesitation in defeating the entire purpose". So many examples, ever-more detailed prompts attempted as a solution. The more I try, the more "AI" seems to be only workable as "experienced prompt engineer with an AI stack".
999900000999 29 days ago
I have to disagree with this article. Companies as they are, particularly larger companies, have a lot of fluff: people who do about three or four hours of work a week and effectively just sit around so senior management can claim they have so many people working on such-and-such project.

With AI, you no longer need those employees to justify your high valuations. You don't need as many juniors. The party is over; tell the rest of the crew. I wouldn't advise anyone to get into tech right now. I know personally my wages have been stagnant for about 5 years. I still make fantastic money, and it's significantly more than the average American income, but my hopes and dreams of hitting 300 or 400k total comp and retiring by 40 are gone.

Instead I've more or less accepted I'm going to have to keep working my middle-class job, and I might not even retire till my 50s! Tragic.

pockmarked19 29 days ago
People who look forward to retiring are like people who look forward to heaven: missing out on life due to the belief their “real” life hasn’t begun yet.
999900000999 29 days ago
I want to spend all day making music, and small games.

I haven't figured out a way to do that in a manner that supports myself.

Every job is ultimately filling out TPS reports. The reports might look a little different, but it's still a TPS report.

renewiltord 29 days ago
Amusingly, I spent years making multiples of your target comp and now I’m home sitting around using AI to make myself toy games.

The barrier has dropped so low that I think I’d have been more productive if I were still working.

999900000999 29 days ago
Not like it's going to happen for me, but how did you reach such comp?

I'm a simple man. If I hit 2 million in net worth I'm done working. I don't plan on having a family, so I'm just supporting myself.

If I really made a ton of money I'd fund interesting open source games. Godot is the most popular open source game engine, and they're making it happen off just 40k a month.

I'm a bit surprised Microsoft hasn't filled the void here. What's a few million dollars a year to get new programmers fully invested in a .net first game engine?

renewiltord 29 days ago
Worked in HFT. But tbh everyone I know in FAANG who stuck it out is doing even better.
999900000999 28 days ago
I've actually worked in finance for a bit, but I'm also content with where I'm at.

I don't think I have a realistic chance at HFT though. Doesn't stop me from applying and dreaming...

weatherlite 29 days ago
I think we look forward to financial independence more than to the retirement itself. It would be nice not having to worry that some chatbot or younger dude is going to replace me and I'll have to go work at McDonald's (not that there's anything wrong with that).
InDubioProRubio 28 days ago
It's an opportunity. Programmers carry internal architecture information. If they leak, and you collect the full set, you can reconstruct a similar product fresh from scratch for cheap, and then take over the business of a failing behemoth with AI-rot.

The eternal death wish to cut the coding dependency murders yet another set of companies. So many were there before: outsourcing, UML, node-based programming, no-code in all variations and colours. Generations of managers have marched into this abyss and none came back alive, the skulls of the ego-dream that "only business management is irreplaceable" crunching beneath the boots of those trying to cut the knowledge-worker dependency out of the equation. And deep down, they feel the tingle of things going wrong, even now.

InDubioProRubio 29 days ago
Everything that is of low value. And it's okay. If it is not useful to humanity, it should decay over time and fade away. Low-value propositions should be loaded with parasitic computation that burdens them with costs until they collapse and allow new growth to replace the old system.
mola 29 days ago
I believe that, regardless of the validity of the "AI can replace programmers now" narrative, we will see big companies squeezing the labor force and padding the company's bottom line and their own pockets.

The fact that the narrative is false will be the problem of whoever replaces these CEOs, and of us workers.

gip 29 days ago
I think that some engineers will still be needed to maintain old codebases for a while yes.

But it's pretty clear that the codebases of tomorrow will be leaner and mostly implemented by AI, starting with apps (web, mobile,...). It will take more time for scaling backends.

So my bet is that the need for software engineering will follow what happened to stock brokers. The ones with basic-to-average skills will disappear, automated away (it has already happened in some teams at my last job). Above-average engineers will still be employed but their comp will eventually go down. And there will be a small class of the most expert, smartest, and best-connected engineers who will see their comp go up and up.

It is not the future we want, but I think it is what is most likely to happen.

msaspence 29 days ago
What makes you think that AI is going to produce leaner codebases? They are trained on human codebases. They are going to end up emulating that human code. It's not hard to imagine some improvement here, but my gut is there just isn't enough good code out there to train a significant shift on this.
gip 29 days ago
Good question and I have no strong answer today. But I think we'll find a way to tune models to achieve this very soon.

I see such a difference between what is built today and codebases from 10 years ago, with indirections everywhere, unnecessary complexity, etc. I interviewed for a company with a 13-year-old RoR codebase recently; after a few minutes looking at the code, I decided I didn't want to work there.

aurizon 29 days ago
I have an image of 10,000 monkeys + typewriter + time = Shakespeare... Of course, these typed pages would engender a paper shortage. So the same 10,000 LLMs will create a similar amount of 'monkeyware'. I can see monkey testers roaming through this chaff for usable gems to be incorporated into a structure operated by humans (our current coder base) and engineered into complex packages. Will this employ the human crews and allow a greater level of productivity? Look at Win11: a huge mass full of flaws/errors (found daily). In general, increased productivity has worked to increase GDP. Will this continue, or will we be overrun by smarter monkeys?
ChrisMarshallNY 29 days ago
Well, what will happen, is that programmers will become experts at prompt engineering, which will become a real discipline (remember when “software engineering” was a weird niche?).

They will blow away the companies that rely on “seat of the pants,” undisciplined prompting.

I’m someone that started on Machine Language, and now programs in high-level languages. I remember when we couldn’t imagine programming without IRQs and accumulators.

As always, ML will become another tool for multiplying the capabilities of humans (not replacing them).

CEOs have been dreaming for decades about firing all their employees, and replacing them with some kind of automation.

The ones that succeed, are the ones that “embrace the suck,” so to speak, and figure out how to combine humans with technology.

Nullabillity 29 days ago
This is like arguing that surely we can get rid of all our farmers as soon as we have a widespread enough caste of priests.

There is no such thing as "prompt engineering", because there is no formal logic to be understood or engineered into submission. That's just not how it works.

williamcotton 29 days ago
There's plenty of tacit knowledge in engineering.

Being good at debugging a system is based more on experience and gut feelings than following some kind of formal logic. LLMs are quite useful debugging assistants. Using an LLM to assist with such tasks takes tacit knowledge itself.

The internal statistical models generated during training are capable of applying higher-order pattern matching that, while informal, is still quite useful. Learning how to use these tools is a skill.

ChrisMarshallNY 29 days ago
I remember saying the same about higher-level languages.

Discipline can be applied to any endeavor.

Nullabillity 29 days ago
Higher-level languages still operate according to defined rules and logic, even if we can sometimes disagree with those rules, and it still takes time to learn the implications of those rules.

AI prompts... do not. It's fundamentally just not how the technology works.

ChrisMarshallNY 29 days ago
Time will tell.

I still believe that we can approach even the most chaotic conditions, with a disciplined strategy. I’ve seen it happen, many times.

guiriduro 29 days ago
The ability of LLMs to replicate MBA CEO-speak and the kinds of activities the C-suite engage in is arguably superior to their ability to write computer programs and displace programmers, so on a replicated-skills basis LLMs should pose a greater risk to CEOs. Of course, CEO success is only loosely aligned with ability, nor can LLMs obtain the "who you know" aspect from reflection alone.
BossingAround 29 days ago
What is the actual engineering discipline that goes into creating prompts? Other than providing more context, hacking the data with keywords like "please", etc?
ChrisMarshallNY 29 days ago
I am not a prompt engineer, but I have basically been using ChatGPT, in place of where I used to use StackOverflow. It’s nice, because the AI doesn’t sneer at me, for not already knowing the answer, and has useful information in a wide range of topics that I don’t know.

I have learned to create a text file and develop my questions as detailed documents, with a context-establishing preamble, a goal-oriented body, and a conclusion requesting a specific result. I submit the document as a whole to initiate the interaction.

That usually gets me 90% of the way, and a few follow-up questions get me where I want.

But I still need to carefully consider the output, and do the work to understand and adapt it (just like with StackOverflow).
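As a purely hypothetical sketch of that preamble/body/conclusion shape (my invention, not the commenter's actual document; the details are borrowed from the anecdote below):

    CONTEXT: I maintain an iPhone app with a companion watchOS app,
    written in SwiftUI and targeting watchOS 10. I am still learning
    SwiftUI.

    GOAL: I want to present a detail view from a List row without
    relying on deprecated APIs.

    REQUEST: Show a minimal, non-deprecated SwiftUI example for
    watchOS 10, and explain why that approach is the recommended one.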

One example is from a couple of days ago. I’m writing a companion Watch app for one of my phone apps. Watch programming is done using SwiftUI, which has really bad documentation. I’m still very much in the learning phase for it. I encountered one of those places where I could “kludge” something, but it didn’t “feel” right, and there are almost no useful heuristics for it, so I asked ChatGPT. It gave me specific guidance, applying the correct concept, but using a deprecated API.

I responded, saying something like “Unfortunately, your solution is deprecated.” It then said “You’re right. As of WatchOS 10, the correct approach is…”.

Anyone with experience using SO, will understand how valuable that interaction is.

You can also ask it to explain why it recommends an approach, and it will actually tell you, as opposed to brushing you off with a veiled insult.

mrkeen 29 days ago
We also managed to get rid of JavaScript like 15 years ago, with major backend technologies providing their own compile-to-js frameworks.

But JS outlived them, because it's the whole write-run-read-debug cycle, whereas the frameworks only gave you write-run.

ChrisMarshallNY 29 days ago
It’s been my experience that JS has been used to replace a lot of lower-level languages (with mixed results, to say the least).

But JS/TypeScript is now an enterprise language (a statement that I never thought I’d say), with a huge base of expert, disciplined, and experienced programmers.

aaroninsf 28 days ago
I'd like to suggest caution wrt a sentiment in this thread: that an actual phase change in the industry, i.e. readers here losing their jobs and job prospects, "doesn't feel right around the corner."

Specifically, most of you are familiar with human cognitive error when reasoning about non-linearities.

I'm going to assert and would cheerfully put money behind the prospect that this is exactly one of the domains within which nonlinear behavior is most evident and hence our intuitions are most wrong.

Could be a year, could be a few, but we're going to hit the 80% case being covered just fine, thank you, by run-of-the-mill automated tools, and then press on into the asymptote.

usixk 29 days ago
Agreed - AI agents are simply a form of outsourcing and companies that go all in on that premise will get bonked hard.

Getting into cyber security might be a gold mine in spite of all the AI generated code that is going to be churned out in this transition period.

Me000 29 days ago
Name one successful company that doesn’t outsource developer labor outside America.
pipeline_peak 29 days ago
AI will only raise the bar for the expected outcome of future programmers. It's just automated pair programming really.

The argument "The New Generation of Programmers Will Be Less Prepared" is too cynical. Most of us aren't writing algorithms anyway; programmers may be, but not Software Engineers, which is who I really think the author is referring to.

Core libraries mean SWEs don't have to write linked lists. Did that make our generation "less prepared", or give us the opportunity to focus our time on what really matters, like delivering products?

demircancelebi 29 days ago
The post eerily sounds like it was written by a jailbroken version of Gemini
natch 29 days ago
Yes it is obviously LLM generated. The article is full of tells starting with the opening phrase.

But this fact went right past most commenters here, which is interesting in itself, and somewhat alarming for what it reveals about critical thinking and reading skills.

arscan 29 days ago
What are these fired programmers going to do? Disappear? They’ll build stuff, using the same plus-up AI tooling that enabled their old boss to fire them. So guess what, their old boss just traded employees for competitors. Congrats, I guess?

Zuck declaring that he plans on dropping programmer head-count substantially, to me indicates that they’ll have a much smaller technological moat in the future, and they won’t be paying off programmers to not build competing products anymore. I’m not sure he should be excited about that.

Aperocky 29 days ago
What moat does Meta even have today?

I'd say there is a moat, but it's not on the tech side.

Tiktok flew right through the moat, and only a small part of that is about tech.

A lot of development on AI is exciting and meta is a big part of that, but there isn't any real moat there either.

arscan 28 days ago
Presumably it is going to be easier for increasingly smaller and smaller teams to make highly polished, scalable and stable products that will be appealing and addictive and resonate more with their users than whatever Meta can come up with. I suspect that there will be many, many more viable shots taken at Meta’s incumbent positions than have been taken historically because development costs associated with doing so will simply be so much lower. Meta used to need thousands of talented and expensive software engineers. They are saying they don’t anymore. Well, that means their competitors don’t either, which lowers the bar for competition.

I get that it wasn’t just the vast army of talented engineers they kept on staff that formed a moat. But it certainly helped, otherwise they wouldn’t have paid so much to have them on staff.

Point taken though, Meta has a lot more going for it than a simple technological advantage.

arminiusreturns 28 days ago
I fear we are about to see refreshed patent trolling conflicts regarding software designs, as it's all in the IP these days, which is the "moat" you are looking for.
biohcacker84 28 days ago
Copilot to me is a multiplier, just like compilers and GUI editors and languages above machine language.

And possibly the biggest multiplier. But anything times zero is zero. Someone who does not understand the code Copilot writes is too dangerous to do anything.

But I worry what Copilot will do to future developers.

And knowing what universally available spell checking has done to me (destroyed my ability to spell correctly without it), I even worry about how Copilot might deskill me over the coming years.

madrox 28 days ago
I think anyone who is afraid of AI destroying our field just has to look at the history of DevOps. That was a massive shift in systems engineering and how production is maintained. It changed how people did their jobs, required people to learn new skills, and leaders to change how they thought about the business.

AI is going to change a lot about software, but AI code tools are coming for SWEs the way Kubernetes came for DevOps. AI completely replacing the job function is unsubstantiated.

fragmede 28 days ago
If the job of SWEs after AI is to edit yaml, json, yaml templates, and yaml-templates-in-yaml-templates all day long, while waiting for ArgoCD, I quit.
bdangubic 28 days ago
SWEs do something other than this??! :)
nickip 29 days ago
My pessimistic take is that in the future, code will be closer to the AI itself: just a black box where inputs go in and outputs come out. There will be no architecture, clean code, or design principles. You will just have a product manager who bangs on an LLM till the ball of wax conforms to what they want at that time. As long as it meets their current KPI, security be damned. If they can get X done with as little effort as possible and data leaks, so be it. They will get a fine (maybe?) and move on.
drusha 29 days ago
We are already seeing how the speed of development plays a more important role than the quality of the software product (for example, the use of Electron in the most popular software). Software will become shittier but people will continue to use it. So LLMs will become just another abstraction level in modern programming, like JS frameworks. No company will regret firing real programmers, because LLMs will be cheaper and end users don't care about performance.
fred_is_fred 29 days ago
What I've seen is less "let's fire 1000 people and replace with AI" but more "let's not hire for 2 years and see what happens as AI develops".
arrowsmith 29 days ago
Did ChatGPT write this article? The writing style reeks of it.
bradley13 28 days ago
There is a lot of demand for crap code (quick'n'dirty apps, websites, web services, etc.). Look at the apps for a lot of IoT devices: it's mostly boilerplate, kinda-sorta works, and obviously has been produced by programmers of...limited skill.

AI may well be able to take over a lot of that coding, or at least increase the productivity of the semi-competent (thus reducing the number of such jobs available).

throwaway7783 29 days ago
It has its uses, but it often fails at seemingly simple things.

The other day, I couldn't get Claude to generate an HTML page with a logo in the top left, no matter how I prompted.

antirez 29 days ago
You can substitute AI with "Some Javascript Framework" and the subtitles still apply very well. Yet nobody was particularly concerned about that.
bdangubic 29 days ago
hehehe yea but how many of us were hand-writing JavaScript for a living anyways :)
klik99 28 days ago
Honestly, show junior programmers a little more respect. It's such an old-person thing to say they're all going to become prompt engineers or similar. Why do the old always look at the young and claim they're all pulled by the tides of the zeitgeist, and not thinking human beings who have their own opinions about stuff? Many smart people have a contrarian streak and won't just dive into AI tools wholesale. Honestly, a lot of the comments here are at the level of critique of those Facebook memes of a crowd of people with iPhones for faces.

Most people have ALWAYS taken the easy road and don't become the best programmers. AI is just the latest tool for lazier people, or people who tend towards laziness. We will continue to have new good programmers, and the number of good programmers will continue to be not enough. None of that is caused by AI. I'm far from an AI advocate, but it will, someday, make the most boring parts of programming less tedious and be able to put "glue" kinds of code in non-professional hands.

EVa5I7bHFq9mnYK 29 days ago
I remember very upbeat talk here a few years ago about firing truck and taxi drivers for AI. Turns out, programmers are easier to replace :)
MonkeyClub 29 days ago
Taxi driving is an antifragile profession apparently, to a degree that computer programming could only aspire to.
insane_dreamer 29 days ago
I think a more interesting question is what the impact will be on the next generation looking at CS/SWE as a potential profession. It's been considered a pretty "safe" profession for a long time now. That will change over the next 10 years. Will parents advise their kids to avoid CS because the job market will be so much smaller in 10 years' time?
sumoboy 29 days ago
I'm sure that's happening right now. On the flip side, will companies who hire in 4 years look at those CS/SWE kids as lesser-skilled devs because they relied so much on AI to pass classes and didn't really learn?
hedora 29 days ago
There was a similar effect during the dot com boom / crash.

Everyone and their dog got a CS degree, and the average quality of that cohort was abysmal. However, it also created a huge supply of extremely talented people.

The dot-com crash happened, and software development was "over forever", but the talented folks stuck around and are doing fine.

People that wanted to go into CS still did. Some of them used stack overflow and google to pass their courses. They were as unemployable as the bottom of the barrel during the dot com boom.

People realized there was a shortage of programmers, so CS got hot again for a bit. Now LLMs have hit and are disrupting most industries. This means that most industries need to rewrite their software. That'll create demand for now.

Eventually, the LLM bust will come, programming will be "over forever" again, and the cycle will continue. At some point after Moore's law ends the boom and bust cycle will taper off. (As it has for most older engineering disciplines.)

p0w3n3d 29 days ago

  Any sufficiently advanced technology is indistinguishable from magic
  
  ~ Arthur C. Clarke
People who make decisions got bamboozled by the ads and marketing of AI companies. They failed to detect the lack of intelligence and got deceived into believing they have magical golems for a fraction of the price, but eventually they will get caught with their pants down.
jnet 29 days ago
I find the people who promote AI the most are those with vested financial interests in AI. Don't get me wrong, I find it is a useful tool but it's not going to replace programmers any time soon.
elif 29 days ago
You don't fire developers and replace them with AI. This trope is often repeated and causing people to miss the actual picture of what's going on.

You use AI to disrupt a market, and that market forces the startup employing the devs to go bankrupt.

It's not a "this quarter we made a decision" thing.

It's a thing that's happening right now all over the place and snowballing.

Retr0id 29 days ago
I don't have anything against this style of writing in particular, but it's a shame it makes me assume it was written by an LLM
natch 29 days ago
It was.
Retr0id 29 days ago
My first impressions prevented me from reading more than the first sentence, so I didn't want to state it so confidently ;)
siliconc0w 29 days ago
It's already pretty hard to find engineers who can actually go deep on problems. I predict this will get even worse with AI.
goosejuice 29 days ago
An agent would be able to replace sales, marketing, customer success, middle management and project managers much better and earlier than any developer of a software company.

Nocode and Shopify-like consolidation are/have been much bigger threats imo. These large orgs are just trimming fat that they would have trimmed anyways.

But hell what do I know. Probably nothing :)

clbrmbr 29 days ago
> the real winners in all this: the programmers who saw the chaos coming and refused to play along. The ones who […] went deep into systems programming, AI interpretability, or high-performance computing. These are the people who actually understand technology at a level no AI can replicate.

Is there room for interpretability outside of major AI labs?

aitchnyu 29 days ago
Tangential: does AI pick up knowledge of new tools? AI helped me write much better bash, since there is tons of content by volunteers and less across-country animosity. Svelte and FastAPI were made/popularized this decade, and people don't want to help their AI/offshore replacements with content. Will current AI get good at them?
EternalFury 28 days ago
I have been doing this for 30 years now. The software industry is all about selling variations of the same stuff over and over and over. But in the end, the more software there is out there, the more software is needed. AI might take it over and handle it all, but at some point, it would be cruel to make humans do it.
newAccount2025 29 days ago
One major critique: why do we think junior programmers really learn best from the grizzled veterans? AI coaches can give feedback on what someone is actually seeing and doing, and can be available 24x7. I suspect this can enable the juniors of the future to have a much faster rise to mastery.
hennell 29 days ago
I remember being in maths class next to a kid who was a maths wiz. He could see what I was doing, was available to help almost the whole lesson, and was far easier for me to ask than the teacher, who had many other students.

In theory a much faster rise to mastery. In practice I rarely had to actually do the work because he'd help me if I got stuck, and what made sense when he explained it didn't stick because I wasn't really doing it.

I did very badly in my first test that year, and was moved elsewhere.

dwheeler 29 days ago
Today AI can generate code. Sometimes it's even correct.

AI is a useful aid to software developers, but it requires developers to know what they're doing. We need developers to know more, not less, so they can review AI-generated code, fix it when it's wrong, etc.

jarsin 28 days ago
There's also the issue that a company doesn't own the copyright to a codebase generated primarily through prompts.

So anyone can copy it and reproduce it anywhere. Get paid to prompt AI by a company. Take all the code home with you. Then, when you're tired of them, use the same code to undercut them.

megablast 28 days ago
I have been able to program stuff using AI that I could not do before. I never managed 3D before, but I can now do advanced 3D stuff. I wrote an Apple Watch application in a few hours instead of the week it took me before, in Swift, a language I had never used before.
meristohm 29 days ago
Follow the money, all the way down to Mother Earth; which boats are lifted most, and at what cost?
cjoshi66 29 days ago
Knowing the difference between programmers who can merely generate AI code and those who can actually explain it matters. If orgs can do that, then firing programmers should be fine. If they can't, things might get ugly for some, but not all.
penjelly 29 days ago
I've switched sides on this issue. I do think LLMs will reduce headcount across tech. Smaller teams will take on more features and less code will be written by hand. It'll be easier to run a startup, freelance or experiment with projects.
Pooge 29 days ago
As a software engineer with about 4 years of experience, what can I do to avoid being left behind?

The author mentions "systems programming" and "high-performance computing". Do you have any resources for that (whether it be books, videos, courses)?

glouwbug 29 days ago
High-frequency trading. If you're after something more hardware-focused, try job searching for the exact term "C/C++". These jobs are typically standard-library deprived (read: no malloc, new, etc.) and you'll be making calls to register sets and SPI and I2C lines. Embedded systems, really; think robotics, aviation, etc. If that's still too little hardware, try finding something in silicon validation. Intel (of yesterday), AMD, Nvidia, Broadcom: you'll be writing C to validate FPGA and ASIC spin-ups. It's the perfect way to divorce yourself from conventional x86 desktops and learn SoC programming, which loops back into fields like HFT, where FPGA experience is _incredibly_ lucrative.

But when anyone says systems programming, think hardware: how do I get that additional 15% performance on top of my conventional understanding of big-O notation? Cache lines, cache levels, DMA, branch prediction, the lot.
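To make "cache lines" concrete, here is a minimal Go sketch (my illustration, not the commenter's) of how traversal order alone, at identical big-O cost, changes performance by either using or wasting each cache line fetched:

    package main

    import (
        "fmt"
        "time"
    )

    const n = 4096

    func main() {
        grid := make([][]int64, n)
        for i := range grid {
            grid[i] = make([]int64, n)
        }

        var sum int64

        start := time.Now()
        for i := 0; i < n; i++ { // row-major: walks memory sequentially,
            for j := 0; j < n; j++ { // so every loaded cache line is fully used
                sum += grid[i][j]
            }
        }
        fmt.Println("row-major:   ", time.Since(start))

        start = time.Now()
        for j := 0; j < n; j++ { // column-major: strides n*8 bytes per access,
            for i := 0; i < n; i++ { // touching a new cache line nearly every read
                sum += grid[i][j]
            }
        }
        fmt.Println("column-major:", time.Since(start), sum)
    }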

booleandilemma 29 days ago
I'm hoping that developers who have entered management positions will be able to talk their fellow managers out of this. I can understand if some non-technical MBA bozo doesn't understand, but former developers must see through the hype.
nritchie 29 days ago
It's been noted that LLMs' output quality decays as they ingest more LLM-generated content in their training. Will the same happen for LLM-generated code as more and more of the code on GitHub is generated by LLMs? What then?
jdmoreira 29 days ago
The approach has changed. It's all about test/inference time now, and reinforcement learning on top of the base models. There is no end in sight anymore; training data won't be a limiting factor when reasoning and self-play are the approach.
usrbinbash 29 days ago
Clicked on the website. Greeted by the message: "Something has gone terribly wrong". It did load correctly on the second attempt, but I have to admit... well played raising the dramatic effect, webserver, well played ;-)
Workaccount2 29 days ago
Programmers are not going to go away. But the lavish salaries, benefits packages, and generous work/life balances probably will.

I envision software engineering ending up in the same pit of mediocrity as all the other engineering disciplines.

lofaszvanitt 29 days ago
Time to wake up and think in terms of 10-20 years ahead. Everyone around NVIDIA dies out... anyone with GPU compute ideas just cannot succeed... 3Dfx full of saboteurs that hinder their progress.

Open source takes away the livelihood of programmers and gives it to moneymen for free. They used open source to train AI models. Programmers got back a few stars and a pat on the back. And some recognition, but mostly nothing. All this while big corps use their work without compensation. There are zero compensation options for open-source programmers on GitHub. Somehow that's been left out.

The same bullshit comes up again and again in different forms. Like "your ideas are worth nothing" blablabla. Suuure, but moneymen usually have zero ideas and they like to expropriate others' ideas, FOR FREE. While naive people give away their ideas and work for free, the other side gives back nooothiiiing.

It's already too late.

So programmers, and the other fields that will be AI-ified in the coming decades, will slowly go extinct. AI is a skill-appropriation device that in the long term will make people useless, so they won't need an artist, a musician, etc. They will just need a capable AI to create whatever they want, without the hassle of the human element. It's the ultimate control tool to make people SLAVES.

Hope I'm wrong.

Aeolun 29 days ago
The same reason that outsourcing all your telecom infra to China is a bad idea.
frag 29 days ago
true that
m3kw9 29 days ago
It will destroy your own company initially, but if the AI is proven to do it better than humans, a lot of them will be converted into AI assistants to guide the AI. You'd still need to know programming.
gitgud 28 days ago
> The next generation of programmers will grow up expecting AI to do the hard parts for them.

This is the opposite of what I’ve seen. AI does the easy parts, only the hard parts are left…

tobyhinloopen 29 days ago
I think AI will thrive in low-code systems, not by writing Javascript.
blarg1 29 days ago
It would be cool to see non-programmers using it to automate their tasks, maybe via a Scratch-like language.
tippytippytango 29 days ago
Senior devs have a decade-long reinforcement learning loop with the marketplace. That will be a massive advantage until they start running RL on agents against the same environment.
penetrarthur 29 days ago
There is only one word worse than "programmer" and it's "coder".

If your software developers do nothing but write text in VS Code, you might as well replace them with AI.

seletskiy 29 days ago
I would say that AI is not to blame here. It just accelerated an existing process; it didn't initiate it. We (as a society) started to value quantity over quality some time ago, and, apparently, no one cares enough to change it.

Why tighten the bolts on the airplane's door yourself if you can just outsource it somewhere cheaper (see Boeing crisis)?

Why design and test hundreds of physical and easy-to-use knobs in the car if you can just plug a touchscreen (see Tesla)?

Why write a couple of lines of code if you can just include an `is-odd` library (see bloated npm ecosystem)?

Why figure out how to solve a problem on your own if you can just copy-paste answer from somewhere else (see StackOverflow)?

Why invest time and effort into making a good TV if you can just strap Android OS onto questionable hardware (look in your own house)?

Why run and manage your project on a baremetal server if you can just rent Amazon DynamoDB (see your company)?

Why spend months to find and hire one good engineer if you can just hire ten mediocre ones (see any other company)?

Why spend years in education to identify a tumor on an MRI scan if you can just feed it to a machine learning algorithm (see your hospital)?

What more could I name?

In my take, which you can say is pessimistic, we have already passed the peak of civilization as we know it. If we continue business as usual, things will continue to deteriorate: more software will fail, more planes will crash, more people will be unemployed, more wars will be started. Yes, decent engineers (or any other decent specialists) will likely be winners in the short term, but how the future will unfold when there are fewer and fewer of them is a question I leave for the reader.

robertlagrant 28 days ago
> Why tighten the bolts on the airplane's door yourself if you can just outsource it somewhere cheaper (see Boeing crisis)?

This is just an overreach of a process that means that airplane flights aren't $1m+. Aircraft issues have plummeted, if you'll excuse the expression, while flight numbers have soared. You've got to have noticed that.

jodrellblank 29 days ago
You haven't answered those questions. Tesla's touchscreen displays maps, navigation, the self-driving system's model of the world around the car, the reversing camera, distance to the car in front, etc. Yes, personally I prefer a physical control I can reach for without looking, but the physical controls in my car cannot do as much as a touchscreen, cannot control as many systems as a modern car has. And the alternative means something like a BMW iDrive running some weird custom OS in amongst the physical controls, and that was not a nice, convenient system to use either.

Why write a couple of lines of code when you can just include an `is-odd` library? Hopefully one which type-checks integers vs floats, and checks for overflows. I'm not saying that I could not write one if/else; I'm asking you to do more than sneer, and to actually justify why a computer loading a couple of lines of code from a file is the end of the world.
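Tangentially, even oddness checks have real edge cases. The `is-odd` library is JavaScript, but the point translates; in Go, for instance, % truncates toward zero, so the naive test silently fails for negative numbers. A minimal sketch (my illustration, not the commenter's):

    package main

    import "fmt"

    func main() {
        x := -3
        fmt.Println(x%2 == 1)  // false: in Go, -3 % 2 == -1
        fmt.Println(x%2 != 0)  // true: the correct oddness check
    }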

Why invest time and effort into making a good TV if people aren't going to buy it, because they are fine with the competitor's much cheaper Android OS on questionable hardware?

Why run and manage your project on a baremetal server, and deal with its power requirements and cooling and firmware patching and driver version compatibility and out-of-band management and hardware failures and physical security and supply chain lead times and needing to spec it for the right size up front and commit thousands of dollars to it immediately, if you can just rent Amazon DynamoDB and pay $10 to get going right now?

I could fill in the answers you are expecting, I have seen that pattern argued, and argued it myself, but it boils down to "I dislike laggy ad-filled Android TV so it shouldn't exist". And I do dislike it, but so what, I'm not world dictator. No company has taken over the market making a responsive Android-free TV, so how/why should they be made to make one, and with what justification?

> What more could I name?

Why go to a cobbler for custom fitted shoes when you could just buy sneakers from a store? (I assume you wear mass produced shoes?) Why go to a tailor when you could just buy clothes made off-shore for cheaper? (I assume you wear mass produced clothes?) Why learn to play a keyboard, guitar, drums and sing, when you could just listen to someone else's band? (I assume you listen to music?) Why spend months creating characters and scenarios and writing a novel when you could just read one someone else wrote? (I assume you have read books?) Why grow your own food when you could just buy lower quality industrially packaged food from a shop? (I assume you aren't a homesteader?) Why develop your own off-grid power system with the voltage and current and redundancy and storage you need when you could just buy from the mains? (I assume you use mains electricity?)

You could name every effort-saving, money-saving, time-saving, thing you use which was once done by hand with more effort, more cost, and less convenience.

And then state that the exact amount of price/convenience/time/effort you happened to grow up with is the perfect amount (what a coincidence!) and change is bad.

Animats 29 days ago
Front-end development should have been automated by now. After all, it once was. Viamall. Dreamweaver. Mozilla Composer. Wix. Humans should not be writing HTML/CSS.
thelittleone 29 days ago
1. Work for megacorp
2. Megacorp CEOs gloat about forthcoming mass firings of engineers
3. Pay taxes as always
4. Taxes used to fund megacorp (Stargate)
5. Megacorp fires me

The bitter irony.

xinan 26 days ago
What the AI bull Wall Street analysts didn’t realize is that AI will replace their job much sooner than replacing programmers.
up2isomorphism 26 days ago
It would be extremely naive to believe that the recent tech layoffs are because of AI.

Obviously Wall Street and big tech like people to think this way.

nirui 29 days ago
Maybe it's just boomer me reading this, but I think all 3 points listed in the article are more predictions from the author (with the author's rationales). However, AI today may be different compared to AI in the future.

I'm a programmer. I love my skills, but I really hate to write code (and tests, etc. etc.); I don't even want to do system design. If I can just say to a computer "Hey, I got this 55TB change set and I want it synced up with these listed nodes; data across all nodes must remain atomically consistent before, during, and after the sync. Now, you make it happen. Also, go pick up my dog from the vet", and the computer just does that in the best way possible, I'll love it.

Fundamentally, programmers are tool creators. If it is possible for a computer to create better tools all by itself, then it looks unwise to react to such technology with emotional rejection.

I mean, the worry is real, sure, but I wouldn't just flat-out reject the tech.

natch 29 days ago
“author.” hah.
osnium123 29 days ago
If you are a junior engineer who just graduated, what would you do to ensure that you learn the needed skills and not be overly reliant on AI?
fragmede 29 days ago
I wouldn't, because AI isn't the problem. With machines able to run DeepSeek locally, the thing to look out for isn't the possibility that the web service will go down and you'll have to live without it; it's that, as their capabilities currently stand, they can't fix or do everything.

I learned to program some time before AI became big, and back when I was an intern, I'd get stuck in a rut trying to debug some issue. When that happened, it would be tempting to give up and just change variables and "if" statements blindly, hoping it would somehow magically fix things. Much like I see newer programmers get stuck when the LLM gets stuck.

But see, that's where you earn your high-paying SWE salary: for doing something that other people cannot. So my advice to Jr programmers isn't to avoid using LLMs; it's to use them liberally until you find something or somewhere they're bad at, and to look at that as the true challenge. With LLMs, programming easy shit got easy. If you're not running into problems with the LLM, switch it up and try a different language with more esoteric libraries and trickier bugs.

jcon321 29 days ago
Well before AI, us old guys had to google our own issues and copy/paste from stackoverflow. When I was a junior I never thought about being overly reliant on "google or stackoverflow", but LLMs are slightly different. I guess it would be like if I googled something and always trusted the first result. Maybe for your question it means not copying/pasting immediately what the LLM gives you and have it explain. Wasting a few mins on asking the LLM to explain, for the sake of learning, still beats the amount of time I used to waste scanning google results.
karxxm 29 days ago
Replacing juniors with AI is stupid because who will be the next senior? AI won't learn anything while performing inference only.
angusb 29 days ago
Did anyone else read this as "Firing programmers for (AI will destroy everything)" or have I been reading too much Yudkowsky
skirge 29 days ago
No accountant was fired when Microsoft Clippy was introduced. AI is nice for prototyping and code completion but that's all.
regnull 29 days ago
This reminds me of the outsourcing panic, many years ago. Hiring cheaper talent overseas seemed like a no-brainer, so everyone's job was in danger. Of course, it turned out that it was not as simple; it came with its own costs, and somehow the whole thing just settled. I wonder if the same will happen here. In this line of work, it's almost as if, once you know exactly what you want to build, you are 90% there. AI helps you with the rest.
mdrzn 29 days ago
"The New Generation of Drivers Will Be Useless Without a Horse" is what I read when I see articles like this.
smeeger 28 days ago
Circular argument. Literally "here's why you will regret it: 1) you will regret it", lol. Nobody will regret firing their high-salary programmer within a year or two. It's over. What happened to all the enthusiasm for the idea that technology doesn't steal jobs, it just creates new ones?!
atoav 29 days ago
Meanwhile the newest model is like "Oh just run this snippet"

And the snippet will absolutely ruin your corp if you run it.

lenerdenator 29 days ago
Keeping things around doesn't drive shareholder value. Firing employees making six figures does.
tiku 29 days ago
Doesn't matter; a few minutes after firing the last programmer, SkyNet will become operational.
czhu12 29 days ago
I find it interesting that the same community that has long questioned why companies like Facebook need 60,000 engineers ("20 good engineers is all you need") is now rallying against any cuts at all.

AI makes engineers slightly more efficient, so there's slightly less need for as many. That's assuming AI is the true cause of any of these layoffs at all.

outime 29 days ago
Has there been any company so far that has fired 100% of their programmers to replace them with AI?
monsieurbanana 29 days ago
There's no serious sources about people wanting to fire 100% of [insert title here] for LLMs. It's more about reducing head-count by leveraging LLMs as a productivity multiplier.

I haven't heard of companies successfully doing that at scale though.

alkonaut 29 days ago
Has there been any company that has laid off even a nontrivial amount of programmers and replaced them with AI? Here I mean, where developers at said company actually say the process works and is established, and the staff cuts weren't happening anyway.

I know there are CEOs who make bold claims about this (e.g. Klarna), but I don't really assign any value to that until I hear from people on the floor.

kkapelon 29 days ago
BT news suggests they announced job cuts back in 2023 and are actually making them now

https://www.socialistparty.org.uk/articles/133443/03-12-2024...

https://www.forbes.com/sites/siladityaray/2023/05/18/telecom...

kypro 29 days ago
Small companies, yes, absolutely.

If you have a small non-tech company with a website you pay a freelance programmer to maintain you should seriously consider replacing your programmer with AI.

I work for a company which, among other things, provides technical support for a number of small tech-oriented businesses, and we have a lot of problems right now with clients trying to do things on their own with the help of AI.

In our case the complexity of some of these projects and the limited ability of AI means that they're typically creating more bugs and tech debt for us to fix and are not really saving themselves any time – and this is certainly going to be true at the moment for any large project. However, if you're paying programmers just to manage the content of a few small websites it probably begins to make sense to use AI instead.

penetrarthur 29 days ago
This still implies that the person who is currently paying freelance programmers is 1) good with LLMs, 2) knows some HTML and JS, and 3) can deploy the updated website.
kypro 29 days ago
You're probably right that these people still need some baseline technical skills currently, but I'm really not assuming anything here – this is something we've seen multiple of our clients do in recent months.

It's funny you say they need to be able to deploy the update, because just last week we had a client email us a collection of code snippets which they had created with the help of AI.

This is the problem we have though because we're not just building simple websites which we can hand clients FTP creds for. The best we can do is advise them to learn Git and raise a PR which we can review and deploy ourselves.

monsieurbanana 29 days ago
Sounds like programming with extra steps. And I don't like it when the extra steps involve mailing snippets of code
UndefinedRef 29 days ago
D̴i̴d̴n̴t̴ ̴f̴i̴r̴e̴ ̴a̴n̴y̴o̴n̴e̴ ̴p̴e̴r̴ ̴s̴e̴,̴ ̴b̴u̴t̴ ̴I̴ ̴a̴m̴ ̴a̴ ̴s̴o̴l̴o̴ ̴d̴e̴v̴e̴l̴o̴p̴e̴r̴ ̴a̴n̴d̴ ̴I̴ ̴c̴a̴n̴ ̴d̴o̴ ̴t̴h̴e̴ ̴w̴o̴r̴k̴ ̴o̴f̴ ̴2̴ ̴p̴e̴o̴p̴l̴e̴ ̴n̴o̴w̴

Edit: I am a solo developer and I have to work half the time only now.

hassleblad23 29 days ago
Could have been "I am a solo developer and I have to work half the time only now."
finnjohnsen2 29 days ago
Why does this never happen? :(
kachhalimbu 29 days ago
Because work expands to fill the time. You are more efficient at work and get more done? Awesome, now you have more responsibility.
UndefinedRef 29 days ago
I like that better
finnjohnsen2 29 days ago
The trick is to call the people who are using the AI to generate code something other than programmers.
gantrol 28 days ago
When a company announces layoffs: people suspect there's something wrong with their growth, cash flow, etc.

When a company announces layoffs because AI is making things more efficient: people start arguing about whether AI can really replace humans.

If you were in company management and had to do layoffs, which would you choose?

thro1 29 days ago
What about... empowering programmers with AI? Can it create any useful things?
reportgunner 29 days ago
People really believe that companies are firing because AI will replace them?
lawgimenez 29 days ago
AI is cool, until they start going down the client’s absurd requirements.
sirsinsalot 28 days ago
Reminds me of the outsourcing rush in the 2000s.

I made good money cleaning that up.

nu2ycombinator 29 days ago
Déjà vu. The complaints about using AI sound very similar to the early days of offshoring/outsourcing. At the end of the day, corporations go for the most profitable solution, so AI is going to replace some percentage of headcount.
zombiwoof 28 days ago
Cursor being a 1 billion dollar company is just ridiculous
varsketiz 29 days ago
Frankly, I agree with the points in the article, yet I'm slightly triggered by the screaming, dramatic writing like "destroy everything".
localghost3000 29 days ago
Reminder to everyone reading FUD like this that tech bros are trying _very_ hard to convince everyone that these technologies are Fundamentally Disruptive and The Most Important Advancement Since The Steam Engine. When in fact they are somewhat useful content-generation machines whose output needs to be carefully vetted by the very programmers this article claims will be out of a job.

To be clear: I am not saying this article is written in bad faith and I agree that if its assertions come to pass that what it predicts would happen. I am just urging everyone to stop letting the Sam Altmans of the world tell you how disruptive this tech is. Tech is out of ideas and desperate to keep the money machine printing.

tharmas 29 days ago
Programmers are training their replacement.
sangnoir 29 days ago
> The ones who didn’t take FAANG jobs but instead went deep into systems programming, AI interpretability, or high-performance computing

I appreciate a good FAANG hatefest, but what the gosh-darn heck is this? Does the author seriously think all FAANG engineers only transform and sling gRPC all day? Or that they blindly stumbled into being hyperscalers?

The author should randomly pick a mailing list on any of those topics (systems programming, AI interpretability, HPC) and count the number of emails from FAANG domains

827a 29 days ago
> Imagine a company that fires its software engineers, replaces them with AI-generated code, and then sits back, expecting everything to just work. This is like firing your entire fire department because you installed more smoke detectors. It’s fine until the first real fire happens.

I feel like this analogy really doesn't capture the situation, because it implies that it would take some event to make companies realize they made a mistake. The reality right now is: You'd notice it instantly. Product velocity would drop to zero. Who is prompting the AI?

The AI-is-replacing-programmers debate is honestly kinda tired, on both sides. It's just not happening. It might be happening in the same way that pirated movies "steal" income from Hollywood: maybe companies are expanding more slowly because engineers are learning how to leverage AI to enhance their own output, ramping up per-capita productivity (and it's getting better and better). But that's how every major tool and abstraction works. If we still had to write in assembly, there'd be 30x the number of engineers out there than there are.

There's no mystical point where AI will get good enough to replace engineers, not because it won't continue getting better, but because the economic pie is continually growing, and as the AI Nexus Himself, Marc Andreessen, has said several times: humanity has an infinite demand for code. If you can make engineers 10x more efficient, what will happen in most companies is: we don't want to cut engineering costs by N% and stagnate, we want 10x more code and growth. Maybe we hire fewer engineers going forward.

> But with the AI craze, companies aren’t investing in junior developers. Why train people when you can have a model spit out boilerplate?

This is not happening. It's fun, pithy reasoning that Good and Righteous White Knight Software Engineers can project onto the Evil and Bad HR and Business Leadership people, but it's just not, in any meaningful or broad sense, a narrative that you hear while hiring.

The reason why juniors are struggling to find work right now is literally just because the industry is in a down cycle. During down cycles, companies are going to prioritize stability, and seniority is stability. That's it.

When the market recovers, and as AI gets better and more prolific, I think there's a reality where juniors are actually a great ROI for companies, thanks to AI. They've been using it their whole careers. They're cheaper. AI might be a productivity multiplier for all engineers, but it will definitely be a productivity normalizer for juniors: using it to check for mistakes and to learn about libraries and frameworks faster, it's such a great upleveling tool.

zombiwoof 28 days ago
Tech debt is hard enough when the code was written by the person sitting next to you, or by someone no longer at the company.

It will be impossible to maintain when it’s churned out by endless AI

I can’t imagine being the manager tasked with “our banking system lost 30 million dollars, can you find the bug?” when the code was written by AI and some intern maintains it.

I’ll be watching with popcorn

esalman 28 days ago
I agree. Unfortunately, Tesla cannot be held accountable for an Autopilot crash, and OpenAI cannot be held accountable for bugs caused by Copilot code. But that's where we're (forced to be) headed as a society.
giancarlostoro 29 days ago
AI bros are like crypto bros: really trying to hype it up beyond what it's currently capable of and what it will be capable of in the near future.

I have all sorts of people telling me I need to learn AI or I will lose my job and get left in the dust. AI is still a tool, not a worker.

blibble 29 days ago
ultimately, Facebook and Google are completely unimportant; if they disappeared tomorrow the world would keep going

however, I for one can't wait for unreliable garbage code in:

    - engine management systems
    - aircraft safety and navigation systems
    - trains and railway signalling systems
    - elevator control systems
    - operating systems
    - medical devices (pacemakers, drug dispensing devices, monitoring, radiography control, etc)
    - payment systems
    - stock exchanges

maybe AI-generated code is the Great Filter?
rapind 29 days ago
What irks me the most about AI is the black box mutability. Give it the same question of reasonable complexity and get a slightly different answer every time.

I also dislike mass code generation tools. The code generation is basically just a cache of the AI's reasoning, right? So it's a sort of pre-optimization. Eventually, once it's cheap enough, I assume the AI will reason in real time (producing temporary, throw-away code for every request). But the mutability issue is still there. I think we need to be able to "lock in" the reasoning, but that's a challenge and probably falls apart with enough inputs/complexity.
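
To make "locking in" concrete, here's a minimal sketch assuming the OpenAI Python client: pin the sampling knobs and cache answers by prompt hash. The model name and seed value here are made-up choices, and seed gives only best-effort reproducibility, not a guarantee:

    import hashlib

    from openai import OpenAI

    client = OpenAI()
    cache = {}  # prompt hash -> previously "locked-in" answer

    def ask(prompt: str) -> str:
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key in cache:
            return cache[key]  # replay the cached answer verbatim
        resp = client.chat.completions.create(
            model="gpt-4o",    # hypothetical model choice
            messages=[{"role": "user", "content": prompt}],
            temperature=0,     # minimize sampling randomness
            seed=42,           # best-effort reproducibility only
        )
        answer = resp.choices[0].message.content
        cache[key] = answer
        return answer

Even then, answers can drift across model updates, which is exactly the mutability problem.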

ArthurStacks 29 days ago
Total delusion from the author. If tech companies need a human dev, there'll be plenty of them across the globe jumping at the chance to do it for peanuts. You're soon to be extinct. Deal with it.
armchairhacker 29 days ago
Counterpoint: lots of software is relatively simple at its core, so perhaps we don't need nearly as many employed developers as we have today. Alternatively, we have far more developers today, so perhaps companies are only firing to re-hire at lower salaries.

Regarding the first hypothesis: for example, one person can make a basic social media site in a weekend. It'll be missing important things from the big social media platforms: 1) features (some of them small but difficult, like live video), 2) scalability, 3) reliability and security, and 4) non-technical aspects (promotion, moderation, legal, etc.). But 1) is optional; 2) is reduced if you use a managed service like AWS and throw enough compute at it, in which case perhaps you only need a few sysadmins; 3) is reduced to essentials (e.g. backups) if you accept frequent outages and leaks (immoral, but those things don't seem to impact revenue much); and 4) is neither reducible nor optional but doesn't require developers.

I remember when the big tech companies of today were (at least advertised as) run by only a few developers. They were much smaller, but still global and handling millions in revenue. Then they hired more developers, presumably to add more features and improve existing ones, to make profit and avoid being out-competed. And I do believe those developers built features and improvements that generated more revenue than their salaries and kept the companies above the competition. But at this point, would more developers generate even more features and improvements to offset their cost, and are they necessary to stay ahead of competitors? Moreover, if a company were to fire most of its developers, keeping just enough to maintain the existing systems, and direct resources elsewhere (e.g. marketing), would it make more profit and out-compete better?

Relatedly, everyone knows there are lots of products with needless complexity, and lots of "bullshit jobs". Exactly how much of that complexity is needless and how many of those jobs are useless is up for debate; it may be less than we think, but it may really not be.

I'm confident the LLMs that exist today can't replace developers, and I wouldn't be surprised if they don't "augment" developers enough for fewer developers + LLMs to maintain the same productivity. But perhaps many programmers are being fired because many programmers just aren't necessary, and AI is just a placebo.

Regarding the second hypothesis: at the same time, there are many more developers today than there were 10-20 years ago. Which means that even if most programmers are necessary, companies may be firing them now to re-hire later at lower salaries. Despite the long explanation above, this may be the more likely outcome. Again, AI is just an excuse here, maybe not even an intentional one: companies fire developers because they believe AI can improve things; it doesn't, but they're able to re-hire more cheaply anyway.

(Granted, even if one or both of the above hypotheses are true, I don't think it's hopeless for software developers. I believe many developers will have to find other work, but it will be interesting work; perhaps even involving programming, just not the kind you learned in college, and at minimum involving the kind of reasoning you learn from development. The reason being that, while both are important to some extent, I believe "smart work" is generally far more important than "hard work". Especially today, it seems most of society's problems exist not because we don't have enough resources, but 1) because we don't have the logistics to distribute them, and 2) because of problems that aren't caused by lack of resources at all, but by mental health (cultural disagreements, employer/employee disagreements, social media toxicity, loneliness). Especially 2). Similarly to how people moved from manual labor to technical work, I think people will move on from technical work; not back to manual labor, but to something else, perhaps something social.)

nyarlathotep_ 29 days ago
> I'm confident the LLMs that exist today can't replace developers, and I wouldn't be surprised if they don't "augment" developers enough for fewer developers + LLMs to maintain the same productivity. But perhaps many programmers are being fired because many programmers just aren't necessary, and AI is just a placebo.

The last part is the important part.

There are loads of software jobs at many companies that don't "need" to exist, ranging from lowly maintenance-type CRUD jobs to highly complex work that has no path to profitability but was financially justifiable a few years prior, in a different financial environment.

Examples: IIRC, Amazon had a game engine that employed a bunch of graphics programmers (Lumberyard, maybe?) and then scrapped it (probably for cost reasons); Alexa has been a public loss leader and has had loads of layoffs; Google had their game streaming service that got shelved; and there's other stuff I can't recall that they've surely abandoned in recent years.

Those roles were certainly highly skilled, but mgmt saw no path to profit or whatever, so they're gone.

There's also the opposite in some cases. Many F500s are pissing away money to get some "AI-enabled" thing for their whatever, throwing money at companies like Accenture et al. to build them some RAG chatbot.

There's certainly a brief period where those opportunities will increase as every CTO wants to "modernize" and "leverage AI", although I can't imagine it lasting.

cranberryturkey 29 days ago
you still need a programmer
md5crypto 29 days ago
Do we?
cranberryturkey 25 days ago
I think so. Someone who knows how to build a native mobile app will prompt better than someone who's never done it.
Terretta 29 days ago
making tech != using tech
pinoy420 28 days ago
What it’s not good at is anything from the last two years (due to the training cutoff): models all fail at the latest Remix, Svelte, and Selenium syntax, for example.

This is an eternity in FE dev terms.

mattfrommars 29 days ago
I read this on Reddit, but it captures in essence where we are headed in the future:

"AI won't replace you. Programmers using AI will."

bigfishrunning 27 days ago
No, they won't. The programmers who are using AI are the ones too lazy/untalented to learn to do their jobs; AI makes easy things easy and hard things impossible, and that's not changing any time soon.
CrimsonRain 29 days ago
Most programmers are not worth anything. Firing them, for AI or for no reason at all, will not change anything.

Whether AI can do work comparable to a competent senior SDE remains to be seen. But current AI definitely feels like a super-assistant when I'm doing something.

Anyone who says ChatGPT/Claude/Copilot etc. are bad is suffering from a skill issue. I'd go as far as to say those people are really bad at working with junior engineers, and really bad as teachers too.