It's incredibly far from being able to make any significant change in a mature codebase. In fact, trying to use it for this has made me so bearish on the technology that I think it will take some other breakthrough, something other than LLMs. It just doesn't feel right around the corner. Completing small chunks of mundane code, explaining code, making very small mundane changes? Very good at that.
The gap between prompt engineers and people with deep, hands-on experience who understand how a computer works will become insanely wide.
Somebody needs to write the operating system the LLM runs on. Or your bank's backend system that securely stores your money. Or the mission-critical systems powering the airplane you're flying on next week... To pretend this will all be handled by LLMs is so insanely out of touch with reality.
But in reality, pretty much anyone who enters software starts off cutting corners just to build things instead of working their way up from NAND gates. Then they backfill their knowledge over time.
My first serious foray into software wasn't even Ruby. It was Ruby on Rails. I built some popular services without knowing how anything worked. There was always a gem (lib) for it. And Rails especially insulated the workings of anything.
An S3 avatar upload system was `gem install carrierwave` and then `mount_uploader :avatar, AvatarUploader`. It added an avatar `<input type="file">` control to the User form.
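For anyone who never touched CarrierWave, a rough sketch of what those two lines sat on top of might look like this (the `storage :fog` S3 backend and the `store_dir` path are assumptions for illustration, not from the comment):

```ruby
# app/uploaders/avatar_uploader.rb -- hypothetical uploader backing mount_uploader
class AvatarUploader < CarrierWave::Uploader::Base
  storage :fog  # fog/aws config elsewhere points this at an S3 bucket

  # Where uploaded files land inside the bucket (illustrative path)
  def store_dir
    "uploads/avatars/#{model.id}"
  end
end

# app/models/user.rb
class User < ActiveRecord::Base
  # Adds #avatar / #avatar=, caching, and the S3 round-trip behind the scenes
  mount_uploader :avatar, AvatarUploader
end
```

The point being: the entire S3 conversation is hidden behind one `storage` declaration and a couple of conventions.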
But it's not satisfying to stay at that level of ignorance for very long, especially once you've built a few things. You keep learning new things, and you keep wanting to build different things.
Why wouldn't this be the case for people using LLMs, like it was for everyone else?
It's like presuming that StackOverflow will keep you as a question-asker your whole life when nobody here would relate to that. You get better, you learn more, and you become the question-answerer. And one day you sheepishly look at your question history in amazement at how far you've come.
I feel like it's a bit different this time because LLMs aren't just an abstraction.
To make an analogy: Ruby on Rails serves a similar role as highways—it's a quick path to get where you're going, but once you learn the major highways in a metro area you can very easily break out and explore and learn the surface streets.
LLMs are a GPS, not a highway. They tell you what to do and where to go, and if you follow them blindly you will not learn the layout of the city, you'll just learn how to use the GPS. I find myself unable to navigate a city by myself until I consciously force myself off of Google Maps, and I don't find that having used GPS directions gives me a leg up in understanding the city—I'm starting from scratch no matter how many GPS-assisted trips I've taken.
I think the analogy helps both in that the weaknesses in LLM coding are similar and also that it's not the end of the world. I don't need to know how to navigate most cities by memory, so most of the time Google Maps is exactly what I need. But I need to recognize that leaning on it too much for cities that I really do benefit from knowing by heart is a problem, and intentionally force myself to do it the old-fashioned way in those cases.
Seriously, what job do you do that allows you to not have a smartphone?
Not having a smartphone makes it a real pain in the ass to do most things nowadays; work is one of the biggest, but far from the only one. Even to log into my bank account on my computer I need a phone.
So it's not obvious to me that patently crazy directions must come from watching people's behavior. Something else is going on.
I imagine what you saw is some other frequent road users making choices that get ranked higher.
If you're talking about that left turn into Alma with the long wait instead of going into the Stanford roundabout and then the overpass, it still does that.
This is an interesting idea. There's an obvious force directing search to get worse, which is the adversarial desire of certain people to be found.
But no such force exists for directions. Why would they be getting worse?
The steps were roughly: Ask a passerby how to get where you want to go. They will usually confidently describe the steps, even if they didn't speak your language. Cheerfully thank them and proceed to follow the directions. After a block or two, ask a new passerby. Follow their directions for a while and repeat. Never follow the instructions fully. This triangulation served to naturally filter out faulty guidance and hucksters.
Never thought that would one day remind me of programming.
Tangent: I once got into a discussion with a friend who was surprised I had the map (on a car dashboard display) locked to North-is-up instead of relative to the car's direction of travel.
I agreed that it's less-convenient for relative turn decisions, but rationalized that setting as making it easier to learn the route's correspondence to the map, and where it passed relative to other landmarks beyond visual sight. (The issue of knowing whether the upcoming turn was left-or-right was addressed by the audio guidance portion.)
When the map is locked north, I'm always aware of my location within the larger area, even when driving somewhere completely new.
Without it, I could never develop any associations between what I'm seeing outside the windshield and a geospatial location unless I was already familiar with the area.
I love these analogies and I think this one is apt.
To adapt another analogy I saw here to this RoR thread: if you're building furniture, then LLMs are power tools while frameworks are IKEA flat-packs.
So, it’s been a privilege gentlemen, writing apps from scratch with you.
walks off the programming Titanic with a giant violin
I think LLMs are similar. Sure, you can vibe code and blindly accept what the LLM gives you. Or, you can engage with it as if pair programming.
It still gives you code you can inspect. There is no black box. Curious people will continue being curious.
That's not for lack of curiosity, it seems to be something about the way that I'm wired that making decisions about where to navigate helps me to learn in a way that following someone else's decisions does not.
Imagine asking an LLM for turn-by-turn directions and then following them ;). That's how it feels when LLMs are used for coding.
Sometimes amazing, sometimes generated code is a swamp of technical debt. Still, a decade ago it was completely impossible. And the sky is the limit.
I've had to work with developers that are over-dependent on LLMs; one didn't even know how to undo code, they had to ask an LLM to undo it. Almost as if the person is a zombie or something. It's scary to witness. And as soon as you ask them to explain their rationale for the solution they came up with - dead silence. They can't, because they never actually _thought_.
Some also get into a loop where they ask the LLM to rewrite what they have, and the result ends up changing in a subtle undetected way or loses comments.
I've had to work with developers that are over-dependent on high-level languages. One didn't even know how to trace execution in machine code; they had to ask a debugger. Almost as if the person is a zombie or something. It's scary to witness. And as soon as you ask them to explain their memory segmentation strategy - dead silence. They can't, because they never actually _thought_.
Debugging is a skill anyone can learn, which applies broadly. But some people just don't. People who want correct code to be written for them are fundamentally asking something different than people who want writing correct code to be easier.
It'll be interesting to see what kinds of new tools come out of this AI boom. I think we're still figuring out what the new abstraction tier is going to be, but I don't think the tools to really work at that tier have been written yet.
Code is a liability. What we really care about is the outcome, not the code. These AI tools are great at generating code, but are they good at maintaining the generated code? Not from what I've seen.
So there's a good chance we'll see people using tools to generate a ton of instant legacy code (because nobody in house has ever understood it) which, if it hits production, will require skilled people to figure out how to support it.
This is no different from what we see with any tool or language: the results are highly dependent on the experience and skills of the operator.
In a world where people are having machines build the entire system, there is potentially no human who has ever understood it. Now, we are talking about a yet-unseen future; I have yet to see a real-world system that did not have a human driving the design. But maintaining a system that nobody has ever understood could be ultra-hardmode.
Philosophical question: how is LLM-produced code that nobody has ever understood any different from human-written legacy code that nobody alive today understands?
- There is zero option of paying an obscene amount of money to find the person and make the problem 'go away'
- There is a non-zero possibility that the code is not understandable by any developer you can afford. By this I mean that the system exhibits the desired behavior, but is written in such a way that only someone like Mike Pall* can understand.
* Mike Pall is a robot from the future
Another way I've seen this expressed, which resonates with me, is "All code is technical debt."
This is so true! Actual writing of the code is such a small step in overall running of a typical business/project, and the less of it the better.
Please don't do this, pick more boring tech stacks https://news.ycombinator.com/item?id=43012862 instead. "Sophisticated" tech stacks are a huge waste, so please save the sophisticated stuff for the 0.1% of the time where you actually need it.
This definition is obsolete according to Wiktionary: https://en.wiktionary.org/wiki/sophisticated (Wiktionary is the first result that shows when I type your words)
It's terrible advice when you're building something that will cause that boring tech to fall over. Or when you've reached the limits of that boring tech and are still growing. Or when the sophisticated tech lowers CPU usage by 1% and saves your company millions of dollars. Or when that sophisticated tech saves your engineers hours and your company tens of millions. Or just: when the boring tech doesn't actually do the things you need it to do.
Sometimes you want to use the sophisticated shiny new tech because you actually need it. Here's a recent example from a real situation:
The Linux kernel (a boring tech these days) has a great networking stack. It's choking on packets that need to be forwarded, and you've already tuned all the queues and the CPU affinities and timers and polling. Do you -
a) Buy more servers and network gear to spread your packets across more machines? (Boring and expensive, and introduces new ongoing costs of maintenance, datacenter costs, etc.)
b) Write a kernel module to process your packets more efficiently? (A boring, well-known solution; introduces engineering costs to build and maintain, as well as downtime because the new shiny module is buggy.)
c) Port your whole stack to a different OS? (Risky, but choosing a different boring stack should suffice... if you're certain it can handle the load without kernel code changes/modules.)
d) Write a whole userspace networking system? (Trendy and popular - your engineers are excited about this; expensive in eng time; risks lots of bugs that are already solved by the kernel just fine; have to re-invent a lot of stuff that exists elsewhere.)
e) Use eBPF to fast-path your packets around the kernel processing that you don't need? (Trendy and popular - your engineers are excited about this; inexpensive relative to the other choices; introduces some new bugs and stability issues till the kinks are worked out.)
We sinned and went with (e). That new-fangled tech met our needs quite well - we still had to buy more gear, but far less than projected before we went with (e). We're actually starting to reach the limits of eBPF for some of our packet operations too, so we've started looking at (d), which has come down in cost and risk as we understand our product and needs better.
I'm glad we didn't go the boring path - our budget wasn't eaten up with trying to make all that work and we could afford to build features our customers buy instead.
We also use postgres to store a bunch of user data. I'm glad we went the boring path there - it just works and we don't have to think about it, and that lack of attention has afforded us the chance to work on features customers buy instead.
The point isn't "don't choose boring". It's: blindly choosing boring instead of evaluating your actual needs and options from a knowledgeable place is unwise.
The point is still: evaluate the options for real - using the new thing because it's new and exciting is equally as foolish as using the boring thing because it's well-proven... if those are your main criteria.
I know that is confronting for a lot of people, but I think it is better to accept it, and spend time thinking about what your experience is worth. (A lot!)
How? Students now are handing in LLM homework left and right. They are not nurturing the resolve to learn. We are training a cohort of young people who will give up without trying hard and end up learning nothing.
I would have agreed, until I started seeing the kinds of questions they're asking.
It's not about satisfaction: it's literally dangerous and can bankrupt your employer, cause immense harm to your customers and people at home, and make you unhirable as an engineer.
Let's take your example of "an S3 avatar upload system", which you consider finished after writing two lines of code and installing a couple of packages. What makes sure this can't be abused by an attacker to DDoS your system, leading to massive bills from AWS? What happens after an attacker abuses this system and takes control of your machines? What makes sure those avatars are "safe-for-work" and legal to host in your S3 bucket?
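To make that concrete, here is a hedged sketch of the kind of guard rails the two-liner never gave you (CarrierWave does support validation hooks like these, though exact method names vary between versions; the specific limits are made-up examples):

```ruby
class AvatarUploader < CarrierWave::Uploader::Base
  # Reject anything that isn't a plain image (SVGs, for instance, can carry scripts)
  def extension_allowlist
    %w[jpg jpeg png]
  end

  # Check the declared content type as well, not just the file extension
  def content_type_allowlist
    [%r{\Aimage/}]
  end

  # Cap individual upload size; rate limiting, auth, and billing alarms are
  # separate app-level concerns the gem can't solve for you
  def size_range
    1..2.megabytes
  end
end
```

And none of that touches content moderation, virus scanning, or what happens once the bucket is public-facing, which is the point.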
People using LLMs and feeling all confident about it are the equivalent of hobby carpenters after watching a DIY video on YouTube and building a garden shed over the weekend. You're telling me they're now qualified to go build buildings and bridges?
> "It's like presuming that StackOverflow will keep you as a question-asker your whole life when nobody here would relate to that."
I meet people like this during job interviews all the time when I'm hiring for a position. I can't tell you how many people with 10+ years of industry experience I've met recently who can't explain how to read data from a local file on the machine's file system.
The models I've tried aren't that great at algorithm design. They're abysmal at generating highly specific, correct code (e.g. kernel drivers, consensus protocols, locking constructs.) They're good plumbers. A lot of programming is plumbing, so I'm happy to have the help, but they have trouble doing actual computer science.
And most relevantly, they currently don't scale to large codebases. They're not autonomous enough to pull a work item off the queue, make changes across a 100kloc codebase, debug and iterate, and submit a PR. But they can help a lot with each individual part of that workflow when focused, so we end up in the perverse situation where junior devs act as the machine's secretary, while the model does most of the actual programming.
So we end up de-skilling the junior devs, but the models still can't replace the principal devs and researchers, so where are the principal devs going to come from?
I tend towards tool development, so this suggests a fringe benefit of LLMs to me: if my users are asking LLMs to help with a specific part of my API, I know that's the part that sucks and needs to be redesigned.
Because of the mode of interaction.
When you dive into a framework that provides a ton of scaffolding, and "backfill your knowledge over time" (guilty! using Nikola as a SSG has been my entry point to relearn modern CSS, for example), you're forced to proceed by creating your own loop of experimentation and research.
When you interact with an LLM, and use forums to figure out problems the LLM didn't successfully explain to you (about its own output), you're in chat mode the whole time. Even if people are willing to teach you to fish, they won't voluntarily start the lesson, because you haven't shown any interest in it. And the fish are all over the place - for now - so why would you want to learn?
>It's like presuming that StackOverflow will keep you as a question-asker your whole life when nobody here would relate to that.
Of course nobody on HN would relate to that first-hand. But as someone with extensive experience curating Stack Overflow, I can assure you I have seen it second-hand many times.
The article is right in a zoomed-in view (fundamental skills will be rare and essential), but in the big picture the critique in the comment is better (folks rarely start on NAND gates). Programmers of the future will have less need to know code syntax, the same way current programmers don't have to fuss with hardware-specific machine code.
The people who still write hardware-specific code - are they currently in demand? The marketplace is smaller, so results will vary and will probably, as the article suggests, be less satisfactory for the participant with the time-critical need.
First of all, I think the problems of the industry were long overdue. It started with Twitter, which proved it can be done. AI just made it easier psychologically, because it's much easier to explore and modify existing code and not freak out with "omg, omg, omg, we've lost this guy and only he understands the code and we're so lost without him". AI just removes the incentives to hoard talent.
I also think of excel/spreadsheets and how it did in fact change accounting industry forever. Every claim the author makes about software developers could have been made about accounting after the advent of electronic spreadsheets.
I don't want to even get started on the huge waste and politics in the industry. I'm on the 3rd rewrite of a simple task that removes metrics in Grafana, which saves the team maybe $50 monthly. If the team were cut in half, I'm sure we'd simply not do half the bullshit "improvements" we do.
Industry is now large enough to have all sort of people. Growing, stagnating, moving out, moving in, laid off, retiring early, or just plain retiring etc.
It’s funny now that I haven’t programmed Java for more than a decade and the “public static void main” incantation is still burned into my memory.
I think what's left out though is that this is the experience of those who are really interested and for whom "it's not satisfying" to stay there.
As tech has turned into a money-maker, people aren't doing it for the satisfaction, they are doing it for the money. That appears to cause more corner cutting and less learning what's underneath instead of just doing the quickest fix that SO/LLM/whatever gives you.
I don't see how that trend would change much just because junior developers can use LLMs as a crutch. (Well, except when it helps them cheat at an interview that wasn't predictive of what the job really needed.)
If only. There are too many devs who've learnt to write JS or Python and simply won't change. I've seen one case where someone ported an existing 20k-line C++ app to a browser app in the most unsuitable way with emscripten, when 1,100 lines of TypeScript did a much better job.
That's the difference. This is how you feel because you like programming to some extent. Having worked closely with them, I can tell you there are many people going into bootcamps who flat-out dislike programming and just heard it pays well. Some of them get jobs, but they don't want to learn anything. They just want to do enough not to get fired. They are not curious, even about the tasks they are supposed to do.
I don't think this is inherently wrong, as I don't feel like gatekeeping the profession if their bosses feel they add value. But this is a classic case of losing the junior > expert pipeline. We could easily find ourselves in a spot in 30 years where AI coding is rampant but there are no experts who actually know what it does.
I buy that LLMs may shift the proportion of those two camps. But doubt it will really eliminate those who genuinely love building things with code.
Well put. There’s a similar phenomenon in industrial maintenance. The “grey tsunami.” Skilled electricians, pipefitters, and technicians of all stripes are aging out of the workforce. They’re not being replaced, and instead of fixing the pipeline, many factories are going out of business, and many others are opting to replace equipment wholesale rather than attempt repairs. Everybody loses, even the equipment vendors, who in the long run have fewer customers left to sell to.
I agree, but have found that for a lot of people that is totally satisfying enough. Most people don't care to really understand how the code works.
Software engineers 15 years ago would have thought it crazy to ship a full browser with every desktop app. That’s now routine. Wasteful? Sure. But it works. The need for low level knowledge dramatically decreased.
Languages like Python and Java come around, and old-school C engineers grouse that the kids these days don’t really understand how things work, because they’re not managing memory.
Modern web-dev comes around and now the old Java hands are annoyed that these new kids are just slamming NPM packages together and polyfills everywhere and no one understands Real Software Design.
I actually sort of agree with the old C hands to some extent. I think people don’t understand how a lot of things actually work. And it also doesn’t really seem to matter 95% of the time.
This is really an issue for all jobs, not just software development, where there is a large planning and reasoning component. Most of the artifacts available to train an LLM on are the end result of reasoning, not the reasoning process themselves (the day by day, hour by hour, diary of the thought process of someone exercising their journeyman skills). As far as software is concerned, even the end result of reasoning is going to have very limited availability when it comes to large projects since there are relatively few large projects that are open source (things like Linux, gcc, etc). Most large software projects are commercial and proprietary.
This is really one of the major weaknesses of LLM-as-AGI, or LLM-as-human-worker-replacement - their lack of ability to learn on the job and pick up a skill for themselves as opposed to needing to have been pre-trained on it (with the corresponding need for training data). In-context learning is ephemeral and anyways no substitute for weight updates where new knowledge and capabilities have been integrated with existing knowledge into a consistent whole.
I believe that there's an "optimal" level of abstraction, which, for the web, seems to be something like the modern web stack of HTML, JavaScript, and some server-side language like Python, Ruby, Java, or JavaScript.
Now, there might be tools that make a developer's life easier, like a nice IDE, debugging tools, linters, autocomplete and also LLMs to a certain degree (which, for me, still is a fancy autocomplete), but they are not abstraction layers in that sense.
My guess is: on one side, things like Squarespace and Wix get super, super good for building sites that don't feel like Squarespace and Wix (I'm not sure I'd want to be a pure "website dev" right now - although I think Squarespace squashed a lot of that long ago), and on the other, very nice tooling for "real engineers" (whatever that means).
I'm pretty handy with tech - I mean, the last time I built anything real was in the 90s, but I know how most things work pretty well. I sat down to ship an app last weekend; with no sleep and Monday rolling around, GCP was giving me errors, and I hadn't realized that one of the files the LLMs wrote looked like code but was all placeholders.
I think this is basically what the Anthropic report says: automation issues happen via displacement, and displacement is typically fine, except the displacement this time is happening very rapidly (another report I read expects that what would traditionally be ~80 years of displacement will happen in ~10 years with AI).
If you've found an Excel guru who doesn't spend most of their time in VBA, you have had a really unusual experience.
So, not really "no-code".
No-code in Excel means that most functions are already implemented for the user, who doesn't have to know anything about software development to create what they need and doesn't need a software developer to do it for them.
The real issue here is that a lot of the modern tech stacks are crap, but won for other reasons, e.g. JavaScript is a terrible language but became popular because it was the only one available in browsers. Then you got a lot of people who knew JavaScript so they started putting it in places outside the browser because they didn't want to learn another language.
You get a similar story with Python. It's essentially a scripting language and poorly suited to large projects, but sometimes large projects start out as small ones, or people (especially e.g. mathematicians in machine learning) choose a language for their initial small projects and then lean on it again because it's what they know even when the project size exceeds what the language is suitable for.
To slay these beasts we need to get languages that are actually good in general but also good at the things that cause languages to become popular, e.g. to get something better than JavaScript to be able to run in browsers, and to make languages with good support for large projects to be easier to use for novices and small ones, so people don't keep starting out in a place they don't want to end up.
Unfortunately it doesn’t expose the DOM, so you still need JavaScript
But it’s also true that your son will probably end up working with boot camp grads who didn’t have that education. Your son will have a deeper understanding of the world he’s operating in, but what I’m saying is that from what I’ve seen it largely hasn’t mattered all that much. The bootcampers seem to do just fine for the most part.
Being a game developer is harder than being an enterprise web services developer. Who gets paid more, especially per hour?
Numpy has delivered so many FLOPs for BLAS libraries to work on.
Does anyone really care if you call their optimized library from C or Python? It seems like a sophomoric concern.
You can go further and design it out of discrete logic gates. Then write it in Verilog. Compare the differences and which made you think more about optimizations.
Older people are always going to complain about younger people not learning something that they did. When I graduated in 1997 and started working I remember some topics that were not taught but the older engineers were shocked I didn't know it from college.
We keep creating new knowledge. It is impossible to fit everything into a 4 year curriculum without deemphasizing some other topic.
I learned Motorola 68000 assembly language in college. I talked to a recent computer science graduate and he had never seen assembly before. I also showed him how I write static HTML in vi the same way I did in 1994 for my simple web site and he laughed. He showed me the back end to their web site and how it interacts with all their databases to generate all the HTML dynamically.
The OS, libraries, web browser, runtime, and JavaScript framework underneath your website are absolutely riddled with bugs, and knowing how to identify and fix them makes you a better engineer. Many junior developers get hung up on the assumption that the function they're calling is perfect, and are incapable of investigating whether that's the truth.
This is true of many of the shoulders-of-giants we're standing on, including the stack beneath python, rust, whatever...
Is it 10 times the "abstracting away complexity and understanding"? 100, 1000, [...]?
This seems important.
There must be some threshold beyond which (assuming most new developers are learning using these tools) fundamental ability to understand how the machine works and thus ability to "dive in and figure things out" when something goes wrong is pretty much completely lost.
For me this happened when working on some Spring Boot codebase thrown together by people who obviously had no idea what they were doing (which maybe is the point of Spring Boot; it seems to encourage slopping a bunch of annotations together in the hope that it will do something useful). I used to be able to fix things when they went wrong, but this thing is just so mysterious and broken in such ridiculous ways that I can never seem to get to the bottom of it.
Everything has a place, you most likely wouldn't write an HPC database in Python, and you wouldn't write a simple CRUD recipe app in C.
I think the same thing applies to using LLMs: you don't use the code they generate to control a power plant or fly an airplane. You use it for building the simple CRUD recipe app where the stakes are essentially zero.
That’s the 5% when it does matter.
The knowledge I have is personally gratifying to me because I like having a deeper understanding of things. But I have to tell you I thought knowing more would give me a deeper advantage than it has in actual practice.
To your employer, hiring people who know things (i.e. you) has given them a deeper advantage in actual practice.
I've been able to get code working in libraries that I'm wholly unfamiliar with pretty rapidly by asking the LLM what to do.
As an example, this weekend I got a new mechanical keyboard. I like to use caps+hjkl as arrows and don't want to remap in software because I'll connect to multiple computers. Turns out there's a whole open source system for this called QMK that requires one to write C to configure the keyboard.
It's been over a decade since I touched a Makefile and I never really knew C anyway but I was able get the keyboard configured and also have some custom RGB lighting on it pretty easily by going back and forth with the LLM.
All future programmers will be using it.
As for the programmers who don't want to use it: I think there will be literally billions of lines of unbelievably bad code generated by these early-generation AIs and by junior programmers, and it will need to be corrected and fixed.
This says more about most programmers than about any given LLM.
Also, it's not entirely clear to me why LLMs should get extremely good in web app development but not OS development, as far as I can see it's the amount and quality of training data that counts.
Well there's your reason. OS code is not as in demand or prevalent as crud web app code, so there's less relevant data to train your models on.
LLMs replace entry-level people who invested in an education. They would have the beginning knowledge, but there's no way to get better, because the opportunities are non-existent - those positions have been replaced. It's a sequential pipeline failure of talent development. In the meantime, the mid- and senior-level people cannot pass their knowledge on; they age out and die.
What happens when you hit a criticality point where production, which is dependent on these systems, can no longer continue?
The knowledge implicit in production is lost, the economic incentives have been poisoned. The distribution systems are destroyed.
How do you bootstrap recovery for something that effectively took several centuries to build in the first place - not in centuries, but in weeks or months?
If this isn't sufficient to explain the core of the issue, check out the Atari/Nintendo crash, which isn't nearly as large as this but shows the dangers of destroying your distributor networks.
If you pay attention to the details, you'll see Atari's crash was fueled by debt financing, and in the process they destroyed their distributor networks with catastrophic losses. After that crash, Nintendo couldn't get shelf space; no distributor would risk the loss without a guarantee. They couldn't advertise as video games. They had to trojan-horse the perception of what they were selling, and guarantee it. There is a documentary on Amazon which covers this, Playing with Power. Check it out.
You just don’t run into many people comfortable with that technology anymore. It’s one of the big reasons I go out of my way to recruit talks on “old” languages to be included at the Carolina Code Conference every year.
Most developers couldn't write an operating system to save their life. Most could not write more than a simple SQL query. They sling code in some opinionated dev stack that abstracts the database and don't think too hard about the low-level details.
Since ORMs generally write crap, unoptimized SQL for all but the simplest of queries, this will lead to performance issues once things scale up.
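A minimal sketch of what that usually looks like in practice, using a hypothetical ActiveRecord `Post`/`Author` pair (assumed models, purely for illustration):

```ruby
# Lazy loading: 1 query for the posts, then 1 more per post for its author (N+1)
Post.limit(20).each { |post| puts post.author.name }

# Eager loading collapses that to two queries
Post.includes(:author).limit(20).each { |post| puts post.author.name }

# Inspecting what the ORM actually emits is the habit many devs never build
puts Post.limit(20).to_sql
```

Nothing exotic - but you only notice the difference if you ever look at the generated SQL or the query log.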
That said, AI agents are absolutely going to put a bunch of lower end devs out of work in the near term. I wouldn't want to be entering the job market in the next couple of years....
Unfortunately they won’t be found due to horrible tech interviews focused on “culture” (*-isms), leetcode under the gun, or resume thrown in trash at first sight from lack of full degree. AMHIK.
I bet there's a software dev employment boom about 5 years away once it becomes obvious competent people are needed to unwind and rework all the llm generated code.
And the ai will be trained on this- and thus cluelessness reinforced and baked in. Omnissiah, hear our prayers in the terminal for we are but -h less man (bashes keyboard with a ritual wrench).
This is exactly what the article says in point 3.
Real programming of course won't go away. But in the public eye it has lost its mysticism, as seemingly anyone can code now. Of course that isn't true, and no one has managed to create anything of substance by prompting alone.
Definitely. More often than not you're dealing with logic bugs, so the thing solving them will sometimes have to reason quite well across large codebases (not every bug of course, but quite often) - to the point that I don't really see how it's different from general intelligence if it can do that well. And if it gets to the point where it's AGI-ish, I don't see why it can't do kernel work (or at the very least reduce the number of jobs dramatically in that space as well). Perhaps you can automate the 50% of the job where we're not really thinking at all as programmers, but the other 50% (or less, or more, debatable) involves planning, reasoning, debugging, thinking. Even if all you do is Python and JS.
The codebase only describes what the software can do currently, never the why. And you can't reason without both. The why is the primary vector of change, and it may completely redefine the what. Even the what has many possible interpretations; the code is only one specific how. Going from the why, to the what, to a specific how is the core tenet of programming. Then you add concerns like performance, reliability, maintainability, security...
Once you have a mature codebase, outside of refactoring and new features, you mostly have to edit a few lines for each task. Finding the lines to work on requires careful investigation, and you need to test carefully after that to ensure that no other operations have been affected. We already have good deterministic tools to help with that.
When they do this, I really want to know they did this. Like an organic food label. Right now AI is this buzzword that companies self-label with for marketing, but when that changes, I still want to see who's using AI to handle my data.
Look at Google and Facebook - absolute shithouse services now that completely fail to meet the basic functionality they had ~20 years ago. Google still rakes in billions rendering ads in the style of a search engine and Facebook the same for rendering ads in the format of a social news feed. Why even bother with engineering anything except total slop?
I'm not worried about that at all. Many young people are passionate, eager to learn and build things. They won't become suddenly dumb and lazy because they have this extra tool available to them. I think it's the opposite. They'll be better than their seniors because they'll have AI help them improving and learn faster.
The problem is a lack of balance, and in some instances skipping the entirety of Critical Reasoning. Why go through the effort of working your way through a problem when you would rather be doing <literally anything else> with your time. Iterate on this to the extreme, with what feels like a magic bullet that can solve anything, and your skills *will* atrophy.
Of course there are exceptions to this trend. Star pupils exist in any generation who will go out of their way to discover answers to questions they have, re-derive understanding of things just for the sake of it, and apply their passions towards solving problems they decide are worth solving. The issue is the _average_ person, given an _average_ (e.g. if in America, under-funded) education, with an _average_ mentor, will likely choose the path of least resistance.
Perhaps Python will become the new assembly code. :-)
It’s just modern SEO and SEO will eat it, eventually.
Prompt engineering as a service makes more sense than having on-staff people anyway, since your prompt’s effectiveness can change from model to model.
Have someone else deal with platform inconsistencies, like always.
This was true before LLMs though. A lot of people just glue javascript libraries together
Both are useful, but you wouldn't hire a mechanic to drive you around.
Using AI to write critical code doesn't seem far-fetched to me.
Doing it right now would be suicidal, of course, but in a few years when AI is even better? It's surely coming.
We're already speaking seriously about AI surgeons; we're already using AI to do radiologists' jobs and have found it to be more reliable.
Some jobs are really at risk in the near future, that's obvious.
I'm no developer, so I have no bias towards it.
Regardless of point of view, it was an eye opening discussion to hear a business leader discussing this so frankly, but I guess not so surprising since most of his income these days is from VC investments.
the other thing, though, is that you and I know that LLMs can't write or debug operating systems, but the people who pay us and see LLMs writing prose and songs? hmm
I wish more would try to describe what the differentiating skills and circumstances are instead of just saying that real programmers should still be in demand.
I think maybe raw talent is more important than how much you "genuinely love coding" (https://x.com/AdamRackis/status/1888965636833083416) or how much of a real programmer you are. This essay captures raw talent pretty well IMO: https://www.joelonsoftware.com/2005/07/25/hitting-the-high-n...
My experience with Stack Overflow, the Python forums, etc. etc. suggests that we've been there for a year or so already.
On the one hand, it's revolutionary that it works at all (and I have to admit it works better than "at all").
But when it doesn't work, a significant fraction of those users will try to get experienced humans to fix the problem for them, for free - while also deluding themselves that they're "learning programming" through this exercise.
Found the guy who's never worked for a large publicly-traded company :) Do you know what's out of touch with reality? Thinking that $BIG_CORP execs who are compensated based on the last three months of stock performance will do anything other than take shortcuts and cut corners given the chance.
Airplane manufacturers have proved themselves more than willing to sacrifice safety for profits. What makes you think they would stop short of using LLMs?
And on the first 10,000 lines of code, the best in class tools are actually pretty good. Since they can help define the structure of the code, it ends up shaped in a way that works well for the models, and it still basically all fits in the useful context window.
What developers who haven't used it on large, warty codebases don't see is how poorly even the best tools do on the kinds of projects that software engineers typically work on for pay. So they're faced with headlines that oversell AI capabilities, plus positive experiences with their own small projects, and they buy the hype.
Yes I find it incredibly helpful and try to tell them.
But it's only helpful in small contexts, auto completing things, small snippets, generating small functions.
Any large-scale changes, of the kind most of these AI companies claim it's capable of, just fall straight on their face. I've tried many times, and with every new model. It can't do it well enough to trust in any codebase that's bigger than a few tens of thousands of lines of code.
If you're in mostly (or totally) unfamiliar territory, you can end up in a mess, fast.
I was playing around with writing a dead-simple websocket server in go the other evening and it generated some monstrosity with multiple channels (some unused?) and a tangle of goroutines etc.
Quite literally copying the example from Gorilla's source tree and making small changes would have gotten me 90% of the way there, instead I ended up with a mostly opaque pile of code that *looks good* from a distance, but is barely functional.
(This wasn't a serious exercise, I just wanted to see how "far" I could get with Copilot and minimal intervention)
Newer models have gotten better at this, and it takes longer before they start producing gibberish, but all of them have their limit.
And given the size of a lot of enterprise codebases, like the ones I'm working in, it's just too far away from being useful enough to replace many programmers, in my opinion. I'm convinced the CEOs who are saying AI is replacing programmers are just using it as an excuse to downsize while keeping investors happy.
It is incredibly powerful for getting things started, but as soon as you have a sketch of a complex system going, it loses its grasp on the full picture and doesn't account for the states outside the small asks you make. This is even more evident when you need to correct it about something or request a change after a large prompt: it just throws all the other stuff out the window and hyperfocuses only on the one piece of code that needs changing.
This has been the case since GPT-3, and even their most recent model (forgot the name, the reasoning one) has this issue.
What models are you using that you feel comfortable trusting it to understand and operate on 10-20k LOC?
Using the latest and greatest from OpenAI, I've seen output become unreliable with as little as ~300 LOC on a pretty simple personal project. It will drop features as new ones are added, make obvious mistakes, refuse to follow instructions no matter how many different ways I try to tell it to fix a bug, etc.
Tried taking those 300 LOC (generated by o3-mini-high) to cursor and didn't fare much better with the variety of models it offers.
I haven't tried OpenAI's APIs yet - I think I read that they accommodate quite a bit more context than the web interface.
I do find OpenAI's web-based offerings extremely useful for generating short 50-200 LOC support scripts, generating boilerplate, creating short single-purpose functions, etc.
Anything beyond this just hasn't worked all that well for me. Maybe I just need better or different tools though?
When it comes to 10k LOC codebases, I still don't really trust it with anything. My best luck has been small personal projects where I can sort of trust it to make larger scale changes, but larger scale at a small level in the first place.
I've found it best for generating tests and autocompletion; especially if you give it context via function names and parameter names, I find it can oftentimes complete a whole function I was about to write, using the interfaces available to it in files I've visited recently.
But besides that, I don't really use it for much outside of starting a new feature from scratch, or getting a plan together before starting work on something I may be unfamiliar with.
We have access to all models available through copilot including o3 and o1, and access to chatgpt enterprise, and I do find using it via the chat interface nice just for architecting and planning. But I usually do the actual coding with help from autocompletion since it honestly takes longer to try to wrangle it into doing the correct thing than doing it myself with a little bit of its help.
It's when I try to give it a clear, logical specification for a full feature and expect it to write everything that's required to deliver that feature (or the entirety of slightly-more-than-non-trivial personal project) that it falls over.
I've experimented trying to get it to do this (for features or personal projects that require maybe 200-400 LOC) mostly just to see what the limitations of the tool are.
Interestingly, I hit a wall with GPT-4 on a ~300 LOC personal project that o3-mini-high was able to overcome. So, as you'd expect - the models are getting better. Pushing my use case only a little bit further with a few more enhancements, however, o3-mini-high similarly fell over in precisely the same ways as GPT-4, only a bit worse in the volume and severity of errors.
The improvement between GPT-4 and o3-mini-high felt nominally incremental (which I guess is what they're claiming it offers).
Just to say: having seen similar small bumps in capability over the last few years of model releases, I tend to agree with other posters that it feels like we'll need something revolutionary to deliver on a lot of the hype being sold at the moment. I don't think current LLM models / approaches are going to cut it.
I've tried to give it relevant context myself (a tedious task in itself to be honest) and even tools that claim to automatically be able to do so fail wonderfully at bigger than toy project size in my experience.
The codebase I'm working on day to day at this moment is give or take around 800,000 lines of code and this isn't even close to our largest codebase since its just one client app for our monolith.
Even trivial changes require touching many files. It would honestly take any average programmer less time to implement something themselves than trying to convince an LLM to complete it.
Then again - and I am repeating myself from other comments I made here in the topic - there's also Devon, which pre-processes the codebase before you can do anything else. That kinda makes me wonder whether the current limitations people observe in using those tools are really representative of the current state of the art.
Even on my personal projects and smaller internal projects that are toy projects or utility tools, I sometimes struggle to get them to build anything significant. I'm not saying it's impossible, but I always find it best at starting things from scratch and at small tools. Maybe it's just a sign that AI would be best for microservices.
I've never used Devon so I can't speak to it, but I do recall seeing it was also overhyped at best and struggled to do anything it was purported to be able to do in demos. Not saying that this is still true.
I would be interested in seeing how Devon performs on a large open source project in real-time (since if I recall their demos were not real-time demonstrations) for instance just to evaluate its capabilities.
Overhyped or not, Devon is using something else under the hood, since it pre-processes your whole codebase. It's not "realtime" since it simulates the CoT, meaning that it "works" on the patch the very same way a developer would, and therefore it will give you a resulting PR in a few hours, AFAIR. I agree that a workable example on a more complex codebase would be more interesting.
> I've tried using all the available commercial models and none work better than as a helpful autocomplete, test, and utility function generator
That's why I mentioned Qwen: I don't think commercial AI models have such a large context window size. Perhaps, therefore, the experience would have been different.
What they already do is a decent productivity boost but not nearly as much as they claim to be capable of.
My point was rather that you might be observing suboptimal results only because you haven't used the models which are more fit, at least hypothetically, for your use case.
https://www.itpro.com/software/development/the-worlds-first-...
I've tried it both with personal projects and work.
My personal project/benchmark is a 3D snake game. O3 is by far the best, but even with a couple of hundred lines of code it wrote itself, it loses coherence and can't produce changes that involve changing 2 lines of code in a span of 50 lines of code. It either cannot comprehend that it needs to touch multiple places, or it re-writes huge chunks of code and breaks other functionality.
At work, it's fine for writing unit tests for straightforward tasks that it has most likely seen examples of before. On domain-specific tasks it's not so good, and those tasks usually involve multiple file edits in multiple modules.
The denser the logic, the smaller the context where LLMs seem to be coherent. And that's funny, because LLMs seem to deal much better with changing code humans wrote than the code the LLMs wrote themselves.
Which makes me wonder -- if we're all replaced by AI, who will write the frameworks and programming languages themselves?
> if we're all replaced by AI, who will write the frameworks and programming languages themselves?
What for? There are enough programming languages and there are enough frameworks. How about using an AI model to maintain and develop existing complex codebases? IMHO, if AI models become more sophisticated and are able to solve this, then the answer is pretty clear who will be doing it.
And frankly, if you can't automate context, then you don't have an AI tool that can realistically replace a programmer. If I have to manually select which of my 10000 files are relevant to a given query, then I still need to be in the loop and will also likely end up doing almost as much work as I would have to just write the code.
> And frankly, if you can't automate context,
How about ingesting the whole codebase into the model? I have seen that this is possible with at least one such tool (Devon) and which I believe is using gpt model underneath meaning that other providers could automate this step too. I am curious if that would help in generating more legit large scale changes.
You edited your comment to clarify that you were asking from a place of ignorance as to the tools. Your original comment read as snarky and I responded accordingly, deleting it when I realized that you had changed yours. :)
> How about ingesting the whole codebase into the model? I have seen that this is possible with at least one such tool (Devon) and which I believe is using gpt model underneath meaning that other providers could automate this step too. I am curious if that would help in generating more legit large scale changes.
It doesn't work. Even the models that claim to have really large context windows get very distracted if you don't selectively pick relevant context. That's why I always talk about useful context window instead of just plain context window—the useful context window is much lower and how much you have depends on the type of text you're feeding it.
> It doesn't work. Even the models that claim to have really large context windows get very distracted if you don't selectively pick relevant context.
I thought Devon was able to pre-process the whole codebase, which could take up to a single day for larger codebases, so it must be doing something, e.g. indexing the code? If so, this isn't a context-window-specific thing; it's something else, and it makes me wonder how that works.
I can't downvote you because you are downthread of me. HN shadow-disables downvotes on all child and grandchild comments.
I'm the one who upvoted you to counteract the downvote. :)
You keep referring to this vague idea of "ingesting the whole codebase". What does this even mean? Are you talking about building a code base specific rag, fine tuning against a model, injecting the entire code base into the system context, etc.?
What I have seen is that autocomplete scales fine (and Cursor's autocomplete is amazing), but autocomplete supplements a software engineer, it doesn't replace them. So right now I can see a world where one engineer can do a lot more than before, but it's not clear that that will actually reduce engineering jobs in the long term as opposed to just creating a teller effect.
This assumes a typical project is fairly big and complex. Maybe I'm biased the other way, but I'd guess 90% of software engineers are writing boilerplate code today that could be greatly assisted by LLM tools. E.g., PHP is still one of the top languages, which means a lot of basic WordPress stuff that LLMs are great at.
* Too large to fit in the useful context window of the model,
* Filled with bunch of warts and landmines, and
* Connected to external systems that are not self-documenting in the code.
Most stuff that most of us are working on meets all three of these criteria. Even microservices don't help, if anything they make things worse by pulling the necessary context outside of the code altogether.
And note that I'm not saying that the tools aren't useful, I'm saying that they're nowhere near good enough to be threatening to anyone's job.
AI is obviously not good enough to replace programmers today. But I'm worried that it will get much better at real-world programming tasks within years or months. If you follow AI closely, how can you be dismissive of this threat? OpenAI will probably release a reasoning-based software engineering agent this year.
We have a system that is similar to top humans at competitive programming. This wasn't true 1 year ago. Who knows what will happen in 1 year.
If your project is very small, and it’s possible to feed your entire code base into an LLM in the near future, then you’re in trouble.
Also the problem is the LLM output is only as good as the prompt. 99% of the time the LLM won’t be thinking of how to make your API change backwards compatible for existing clients, how to help you do a zero-downtime migration, following security best practices, or handling a high volume of API traffic. Etc.
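For instance, the zero-downtime version of something as mundane as renaming a column is a multi-step dance an LLM rarely volunteers unprompted. A rough Rails-flavoured sketch (the `users.name` to `display_name` change is hypothetical, just to show the shape of it):

```ruby
# Deploy 1: add the new column; old code keeps reading/writing users.name.
class AddDisplayNameToUsers < ActiveRecord::Migration[7.1]
  def change
    add_column :users, :display_name, :string
  end
end

# Deploy 2: application code writes both columns, reads prefer display_name.
# Then backfill outside the migration, in batches, so we never hold long locks:
User.where(display_name: nil).in_batches(of: 1_000) do |batch|
  batch.update_all("display_name = name")
end

# Deploy 3 (later): stop writing users.name, then drop it in its own migration.
```

Each step is trivial on its own; knowing the steps need to exist is the part the prompt doesn't give you for free.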
Not to mention, what the product team _thinks_ they want (business logic) is usually not what they really want. Happens ALL THE TIME friend. :) It’s like the offshoring challenge all over again. Communication with humans is hard. Communication with an LLM is even harder. Writing the code is the easiest part of my job!
I think some software development jobs will definitely be at risk in the next 10-15 years. Thinking this will happen in 1 year's time is myopic, in my opinion.
Just use a state of the art LLM to write actual code. Not just a PoC or an MVP, actual production ready code on an actual code base.
It’s nowhere close to being useful, let alone replacing developers. I agree with another comment that LLMs don’t cut it, another breakthrough is necessary.
We will see, maybe models do get good enough but I think we are underestimating these last few percent of improvement.
The problem case is the somewhat odd scenario where there is an AI that's excellent at software dev, but not most other work, and we all have to go off and learn some other trade.
LLMs don't yet have a causal model of how something works built in. What they do have is pattern matching over a large index and generation of plausible answers from that index. (Aside: the plausible snippets are of questionable licensing lineage, as the indexes could contain public code with restrictive licensing.)
Causal models require machinery which is symbolic, which is able to generate hypotheses and test and prove statements about a world. LLMs are not yet capable of this and the fundamental architecture of the llm machine is not built for it.
Hence, while they are a great productivity boost as a semantic search engine, and a plausible snippet generator, they are not capable of building (or fixing bugs in) a machine which requires causal modeling.
Prove that the human brain does symbolic computation.
And to be honest, if any company is firing software engineers hoping AI replaces their output, that is good news, since that company will soon stop existing and stop treating engineers like shit (which it probably did) :)
I am of course talking about
`npx create-{template name}`
or your language of choice's equivalent (or `git clone template-repo`). I get it though - like, I'm terrible at working with IMUs and I want to just get something going, but I can't; there's that wall I need to overcome/learn, e.g. the math behind it. Same with programming: it helps to have the background, knowing how to read code and how it works.
Like "How to truncate text with CSS alone" or "How to set an AWS EC2 instance RAM to 2GB using terraform"
I would not trust them until they can do the news properly. Just read the source Luke.
"AI chatbots unable to accurately summarise news, BBC finds" - https://www.bbc.com/news/articles/c0m17d8827ko
My observation is that LLMs do repetitive, boring tasks really well, like boilerplate code and common logic/basic UI that thousands of people have already done. Well, in some sense, jobs where developers spend a lot of time writing generic code are already at risk of being outsourced.
The tasks that need a ton of tweaking or not worth asking AI at all are those that are very specific to a specific product and need to meet specific requirements that often come from discussions or meetings. Well, I guess in theory if we had transcripts for everything, AI could write code like the way you want, but I doubt that's happening any time soon.
I have since become less worried about the pace AI will replace human programmers -- there is still a lot that these tools cannot do. But for sure people need to watch out and be aware of what's happening.
LLMs don't do this; they confidently hallucinate the abstraction out of thin air or fall back on their outdated knowledge store, sending the wrong usage or wrong input parameters.
Reasoning LLMs feel like an attempt to stuff the context window with additional thoughts, which does influence the output, but it is still a proxy for the plasticity and aha-moments that real understanding can generate.
That's a good point; we don't do that right now. It's all very crystallized.
AKA embodiment. Hubert L. Dreyfus discussed this extensively in "Why Heideggerian AI Failed and How Fixing it Would Require Making it More Heideggerian": http://dx.doi.org/10.1080/09515080701239510
I don't think that anyone is advocating for LLMs to be used "on their own". Isn't it like saying that airplanes are useless "on their own" in 1910, before people had a chance to figure out proper runways and ATC towers?
If so, I quite enjoyed that as a way of considering how LLM-driven exploratory coding has now become feasible. It's not quite there yet, but we're getting closer to a non-technical user being able to create a POC on their own, which would then be a much better point for them in engaging an engineer. And it will only get better from here.
You wouldn’t argue that writing in a high level language doesn’t let you produce arbitrary code because the compiler is just spitting out presets its author prepared for you.
There are 2 main differences between using an LLM to build an app for you and using a no code solution with a visual language.
1. The source code is English (which is definitely more expressive).
2. The output isn’t deterministic (even with temperature set to 0 which is probably not what you want anyway)
Both 1 and 2 are terrible ideas. I’m not sure which is worse.
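On point 2, the non-determinism is easy to see for yourself. A minimal sketch, assuming the `openai` Node SDK and an API key in the environment (the model name is a placeholder): fire the same request twice at temperature 0 and compare.

```typescript
import OpenAI from "openai";

const client = new OpenAI();

// Ask the same question twice with temperature 0 and see whether the
// answers match. The model name is an assumption; use whatever you have.
async function ask(prompt: string): Promise<string> {
  const res = await client.chat.completions.create({
    model: "gpt-4o-mini",
    temperature: 0,
    messages: [{ role: "user", content: prompt }],
  });
  return res.choices[0].message.content ?? "";
}

async function main() {
  const prompt = "Write a SQL migration that adds a nullable column to orders.";
  const [a, b] = await Promise.all([ask(prompt), ask(prompt)]);
  console.log(a === b ? "identical output" : "different output despite temperature 0");
}

main();
```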
The median dev on Fiverr is so awful that almost anything is more bang for your buck.
A lot of micromanaging is involved either way. And most LLMs suffer from a severe case of Groundhog Day: you can't expect them to remember anything over time. Every conversation starts from scratch. If it's not in your recent context, specify it again. Etc. Quite tedious, but it still beats me doing it manually. For some things.
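Concretely, "starting from scratch" means the standing context has to be rebuilt into every single request yourself. A minimal sketch of that pattern, where every name is hypothetical:

```typescript
// Nothing persists between LLM calls, so anything the model must "remember"
// has to be pasted back into every request. All names here are hypothetical.
type Message = { role: "system" | "user" | "assistant"; content: string };

const projectFacts = [
  "We target Node 20 and TypeScript strict mode.",
  "All currency amounts are integer cents, never floats.",
  "Public API responses must stay backwards compatible.",
];

function buildMessages(history: Message[], task: string): Message[] {
  return [
    // Re-state the standing context on every single call...
    { role: "system", content: `Project conventions:\n- ${projectFacts.join("\n- ")}` },
    // ...plus whatever recent conversation is still relevant...
    ...history.slice(-6),
    // ...and only then the actual request.
    { role: "user", content: task },
  ];
}

// Usage: buildMessages(previousTurns, "Refactor the pricing module to use cents.");
```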
For at least the next few years, it's going to be an expectation from customers that you will not waste their time with stuff they could have just asked an LLM to do for them. I've had two instances of non technical CPO and CEO types recently figuring out how to get a few simple projects done with LLMs. One actually is tackling rust programs now. The point here is not that that's good code but that neither of them would have dreamed about doing anything themselves a few years ago. The scope of the stuff you can get done quickly is increasing.
LLMs are worse at modifying existing code than they are at creating new code. Every conversation is a new conversation. Groundhog Day, every day. Modifying something with a lot of history and context requires larger context windows and tools to fill them. The tools are increasingly becoming the bottleneck, because without context the whole thing derails, and micromanaging a lot of context is a chore.
And a big factor here is that huge context windows are costly, so there's an incentive for service providers to cut some corners there. Most value for me these days comes from LLM tool improvements that result in me having to type less. "Fix this" now means "fix the thing under my cursor in my open editor, with the full context of that file". I've been doing this a lot for the past few weeks.
The COBOL crisis at Y2K comes to mind.
https://www.computerweekly.com/news/366588232/Cobol-knowledg...
Let the user pair with an AI to edit and hot-reload some subset of the code which needs to be very adapted to the problem domain, and have the AI fine-tuned for the task at hand. If that doesn't cut it, have the user submit issues if they need an engineer to alter the interface that they and the AI are using.
I guess this would resemble how myspace used to do it, where you'd get a text box where you could provide custom edits, but you couldn't change the interface.
I think people lose sight of how much better it has gotten in just a few years.
We actually _need_ a breakthrough for the promises to materialize, otherwise we will have yet another AI Winter.
Even though there seems to be some emergent behavior (some evidence that LLMs can, for example, create an internal chess representation by themselves when asked to play), that's not enough. We'll end up with diminishing returns. Investors will get bored of waiting and this whole thing comes crashing down.
We'll get a useful tool in our toolbox, as we do in every AI cycle.
A lot of the use cases are about building something that has already been built before, like a web app, a popular algorithm, etc. I think the real threat to us programmers is stagnation. If we don't have new use cases to develop but only introduce marginal changes, then we can surely use AI to generate our code from the vast amount of previous work.
Very talented engineers, coworkers I would place above myself in skill, seemed stumped by it, while I have realized at least a 10x productivity gain.
The claim that LLMs are not being applied in mature, complex code-bases is pure fantasy, example: https://arxiv.org/abs/2501.06972. Here Google is using LLMs to accelerate the migration of mature, complex production systems.
I agree with this completely. However the problem that I think the article gets at is still real because junior engineers also can't do significant changes on a mature codebase when they first start out. They used to do the 'easy stuff' which freed the rest of us up to do bigger stuff. But:
1. Companies like mine don't hire juniors anymore
2. With Copilot I can be so much more productive that I don't need juniors to do "the easy stuff" because Copilot can easily do that in 1/1000th the time a junior would.
3. So now who is going to train those juniors to get to the level where we need them to be to make those "significant changes"?
Founders will cash out long before that becomes an issue. Alternatively, the hype is true and they will obsolete programmers, also solving the issue above…
This is quite devious if you think about it, withering pipeline of new devs and only them having an immediate fix in all cases.
This is the only current threat. The time you save as a developer using AI on mundane stuff will get filled by something else, possibly more mundane stuff.
A small company with only 2-5 Seniors may not be able to drop anyone. A company with 100 seniors might be able to drop 5-10 of them total, spread across each team.
The first cuts will come at scaled companies. However, it's difficult to detect if companies are cutting people just to save money or if they are actually realizing any productivity gains from AI at this point.
Too much speculation that productivity will increase substantially, especially when a majority of companies' IT is just so broken and archaic.
We've already seen this sort of incrementalism over the past couple of years, the initial buzz started without much more than a 2048 context window and we're seeing models with 1M out there now that are significantly more capable.
That, coupled with new money and retail investors thinking they're in a gold rush, and you get the environment we're in.
We know what it's good at today. And pretty sure it won't be any worse at it in the future. And 5 years ago the state of the art was basically the output of a Markov chain. In 5 years we might be at another place entirely.
Everyone is hoping (probably delusional) that bigger and more impressive breakthroughs will keep leaping up if we just keep tweaking the models and increasing the size of the data sets.
I try to use AI daily, and every month I see how it is able to generate larger and more complex chunks of code from the first shot. It is almost there. We just need to adopt the new paradigm, build the tooling, and embrace the new weird future of software development.
You should reflect on the consequences of relying too much on it.
See https://www.404media.co/microsoft-study-finds-ai-makes-human...
1. AI coding ability will stay the same as it is today
2. Companies will replace people with AI en masse at a given moment in time
Of course both these assumptions are wrong: the quality of code produced by AI will improve dramatically as models evolve. And it's not even just the model itself. The tooling, the agentic capabilities, and the workflows will entirely change to adapt to this. (This is already happening.)
The second assumption is also wrong: intelligent companies will not lay people off en masse to rely on AI alone; they will most likely slow their hiring of devs, because their existing devs, enhanced by AI, will be enough to cover their coding-related needs. At the end of the day, product is just one area of company development; building the complete e2e ultimate solution with 0 distribution or marketing will not help.
This article, in my opinion, is just doomerism storytelling for nostalgic programmers who see programming only as some kind of magical artistic craft and AI as the villain that arrived to remove all the fun from it. You can still switch off Cursor and write donut.c if you enjoy doing it.
After 20 years in tech, I can't think of a single company I've worked for/with that would fit the profile of an "intelligent" company. All of them make poor and irrational decisions regularly. I think you over-estimate the intelligence of leadership whilst simultaneously under-estimating their greed and eventual ability to self-destruct.
EDIT: you also over-estimate the desire for developers to increase their productivity with AI. I use AI to reduce complexity and give me more breathing room, not to increase my output.
It's not even necessarily about intelligence but about the simple concept of unknown unknowns. If everyone had perfect knowledge of the current reality and could perfectly describe what it is that they want immediately, without spending any time investigating, producing proof-of-concept work, iterating on a product, etc. I would agree that it could be feasible for AI to replace a lot of programming work.
As it stands, what I just described above is THE BULK of development. Coding is the last thing that happens and it also happens to be the fastest, easiest and smallest part of the entire process.
Says nothing about companies and everything about you
> you also over-estimate the desire for developers to increase their productivity with AI. I use AI to reduce complexity and give me more breathing room, not to increase my output.
I'm the same. But I expect that once many begin to do this, there will be some who do use it for productivity and they will set the bar. Then people like you and I will either use it for productivity or fall behind.
Just look at the over hiring during covid and the methods used to cull that workforce after they realized their mistake. Back handed and inhumane. Executives are more followers than a junior dev is. They just have a lot more terminology to obscure that fact. But they are basically professional bullshitters, like consultant firms.
This is excluding executives with vision. But the market and corporate structure bias towards eliminating those leaders as they are not consistently profitable over every month.
Self-interested and short-sighted says nothing about intelligence, irrationality, or poor decision making. Back-handed and inhumane covid hiring and firing is probably not a mistake from their perspective. Professional bullshitting is a form of intelligence (I hate it too, I've been done in by it, but I respect it).
>I expect that once many begin to do this, there will be some who do use it for productivity and they will set the bar.
Yeah, probably. I've had companies so fixated on "velocity" instead of quality. I imagine they will definitely expect triple the velocity just because one person "gets so much done", not realizing how much of that illusion is spent correcting the submissions.
No one is making this claim.
My comment was a bit terse and provocative, rude, deserves the downvotes tbh. I'll take them.
To elaborate ~ I've got a lot of empathy for the poster I was originally replying to. I've fallen into that way of thinking before, and it sure is comfortable. Of course, companies and their leadership make poor and irrational decisions. Often, however, it's easy to perceive their decisions as poor and irrational when you simply don't have the context they do. "Why would they x ?? if only y!!" but, you know, there may well be a good reason why that you aren't aware of, they may have different goals to you (which may well be selfish! and that doesn't make them irrational or anything). Feels similar to programmers hating when people say "can't you 'just' x" - well yes, but actually there's a mountain of additional considerations behind the scene that the person spouting "just" hasn't considered.
Is leadership unintelligent, or displaying poor/irrational decision making, if the company self destructs? Perhaps. But quite possibly not. They probably got a whole lot out of it. Different priorities.
Consider that leadership may label a developer unintelligent if that dev doesn't always consider how to drive shareholder value "gee they're so focused on increasing their salary not on business value". Well actually the dev is quite smart, from their own perspective. Same thing.
And if every company you've ever worked for truly has poor leadership then, yeah, it's probably worth reassessing how you interview. Do you need to dig deeper into the business? Do you just not have the market value to negotiate landing a job at a company with intelligent leadership?
So, two broad perspectives: either the poster has a challenge with perception, or they are poor at picking companies. Or perhaps the companies truly do have poor leadership but I think that unlikely. Hence it comes back to the individual.
@y-c-o-m-b sorry for being a bit rude.
Cheers for reading
>And if every company you've ever worked for truly has poor leadership then, yeah, it's probably worth reassessing how you interview.
No need. I work in games. There isn't a major studio in the industry that isn't like this. An industry used to churning workers and releasing them the moment the project ends.
In some ways it's a path I chose, but at the same time it means I need to be more cynical to defend myself from their inevitably orthogonal actions. I have an exit plan, but I need more time and money first.
If the industry wasn't so secretive with its techniques and knowledge, maybe I could have side-stepped it altogether. But alas.
I've worked in big tech for a combined total of 6 years. Several of the other companies are Fortune 500 members. I've also worked at mid-size and small companies across mortgage, healthcare, fin-tech, point of sale, HR/payroll, and more. I surmise that you would categorize most of these as "intelligent" companies, which negates your argument about this being a "me" problem. Let's take Intel - where I worked for 3 years - as an example. Some would consider this an "intelligent company". I was there when Brian Krzanich took over for Paul Otellini. I think you will have a very easy time finding huge swaths of people/employees that consider Krzanich's leadership decisions to be very poor. In fact over the last few months, you'll find threads here on HN that directly pin the decline of Intel on his decision making.
We can argue the semantics of "intelligent" all day and make excuses for why leaders make irrational choices, but my point still stands. I don't think this is a "me" problem for one simple reason: If you take me out of the equation, the issue still exists.
That's a very bold claim. We are already seeing a plateau in LLM capabilities in general, and there has been little improvement since their birth in the places where they fall short (like making holistic changes in a large codebase). They only improve where they are already good, such as writing small glue programs. Expecting significant breakthroughs from scaling alone, without any fundamental changes to the architecture, seems too optimistic to me.
How are you so sure?
It’s reasonable that people explore contingencies where the technology does improve to a point of driving changes in the labor market.
If you have your certainty then there are plenty of opportunities to short this space. For the rest of us who don’t have crystal balls, contingency planning (ideally through policy and not as individuals) will have to do.
0.2x, 2x, 5x, 50x?
There can't be things that a human can program that AGI can not program or it is not "AGI".
While I have never been a true believer in AGI, it seems to go like this: I get a little faith when a new model comes out, then I become increasingly agnostic in the weeks and months after that. Repeat.
Don’t hate on it, just spin up some startup with “ai” and LLM hype. Juice that lemon.
[1](https://techcrunch.com/2025/01/07/nvidia-ceo-says-his-ai-chi...)
And it seems, more or less, clear that the rate of change in the state of the art has already sharply decreased. So it's likely LLMs have already entered into this window.
Anyway, the number of TikTok users correlates with advancements in AI too!
Before TikTok the progress was slower; then when TikTok appeared it progressed like hell!
Just increasing compute power will increase the performance/training speed of these models, but you also need to increase the quality of the data that you are training these models on.
Maybe... the reason why these models show a high school level of understanding is because most of the data on the internet that these models have been trained on is of high school graduate quality.
That said, I do think there is real risk of letting AI hinder the growth of Junior dev talent.
If you have a job, working for a boss, you're trading your time for money. If you're a contractor and negotiate being paid by the project, you're being paid for results. Trading your time for money is the underlying contract. That's the fundamental nature of a job working for somebody else. You can escape that rat race if you want to.
Someone I know builds websites for clients on a contract basis, and did so without LLMs. Within his market, he knows what a $X,000 website build entails. His clients were paying that rate for a website build out prior to AI-augmented programming, and it would take a week to do that job. With help from LLMs, that same job now takes half as much time. So now he can choose to take on more clients and take home more pay, or not, and be able to take it easy.
So that option is out there, if you can make that leap. (I haven't)
I'm working on it. But it takes money and the overlords definitely are trying to squeeze as of late.
And yes, while I don't think I'm being replaced in months or years, I can see a possibility in a decade or two of the ladder being pulled up on most programming jobs. We'll either be treated as well as artists (assuming we still don't unionize) or we'll have to rely on our own abilities to generate value without corporate overlords.
These things eventually always end up as a Red Queen’s race, where you have to run as fast as you can to stay in the same place.
A different friend who's contractor in a non-tech area told me a client of his secretly showed him his competition's bid for the same project. The competition's bid was much higher, and the reason the client showed him that was to get my friend to raise his rates and resubmit his bid.
So you're welcome to try, but as a programmer looking into the abyss, I'm looking at the whole thing as encouragement to develop all those soft skills that I've been neglecting.
It is only generating based on training data. In mature code bases there is a massive amount of interconnected state that is not already present in any GitHub repository. The new logic you'd want to add is likely something never done before. As other programmers have stated, it seems to be improving at generating useful boilerplate and making simple websites and such, related to what's out there en masse on GitHub. But it can't make any meaningful changes in an extensively matured codebase. Even Claude Sonnet is absolutely hopeless at this. And the bar for a codebase to count as "matured" is not very high.
99% of software development jobs are not as groundbreaking as this. It's mostly companies doing exactly what their competitors are doing. Very few places are actually doing things that an LLM has truly never seen while crawling through GitHub. Even new innovative products generally boil down to the same database fetches, CRUD glue, JSON parsing, and front-end form-filling code.
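That "same CRUD glue" looks roughly the same everywhere it shows up. A minimal sketch, assuming Express and Postgres; the endpoint and table names are invented:

```typescript
// Sketch of the undifferentiated CRUD glue most products boil down to:
// parse a request, hit the database, shape a JSON response.
import express from "express";
import { Pool } from "pg";

const app = express();
app.use(express.json());
const db = new Pool(); // connection settings come from PG* env vars

app.post("/widgets", async (req, res) => {
  const { name, price } = req.body;
  const { rows } = await db.query(
    "INSERT INTO widgets (name, price) VALUES ($1, $2) RETURNING id, name, price",
    [name, price],
  );
  res.status(201).json(rows[0]);
});

app.get("/widgets/:id", async (req, res) => {
  const { rows } = await db.query(
    "SELECT id, name, price FROM widgets WHERE id = $1",
    [req.params.id],
  );
  if (rows.length === 0) return res.status(404).json({ error: "not found" });
  res.json(rows[0]);
});

app.listen(3000);
```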
The simplest version of that is some CGI code or a PHP script, which everyone should be writing, according to your description. But then why have so many books been written about this seemingly simple task? So many frameworks, so many patterns, so many methodologies...
It can't do anything in these random Phaser games I'm making and even translating my 10,000 line XNA game to Phaser. It is totally hopeless.
Phaser has been out forever now, and XNA was around for a long time too.
This is not the case anymore; current SOTA CoT models are not just parroting stuff from the training data. And as of today they are not even trained exclusively on publicly (and not so publicly) available material; they make massive use of synthetic data which the model itself generated, or data distilled from other, smarter models.
I'm using AI in current "mature" codebases with great results, and I know plenty of people who are too. This doesn't mean it does the work while you sip a coffee (yet).
*NOTE: my evidence for this is that o3 could not break ARC-AGI by parroting, because it's a benchmark made exactly for this reason. Not a coding benchmark per se, but still transposable imo.
Not my experience. I spend as much time reading through and replacing wrong AI generated code as I do writing my own code, so it's really wasting my time more often than helping. It's really hit or miss, and about the only thing the AI gets right most often is writing console.log statements based on the variable I've just assigned, and that isn't really "coding". And even then it gets it right only about 75% of the time. Sure, that saves me some time, but I'm not seeing the supposed acceleration AI is hyped as giving.
Where are your facts? What is the basis for these future predictions? Sure, small code snippets have already improved since GPT-2. What about larger applications, with layers and layers of abstraction built on private, company-sensitive data?
> At the end of the day, product is just one area of company development; building the complete e2e ultimate solution with 0 distribution or marketing will not help.
This sounds like PR garbage to me. Why use "e2e ultimate solution"? Provide technical details so we can verify what you're saying is true.
How many companies are intelligent given how many dumb decisions we see?
If we assume there are enough not-so-intelligent companies, then better AI code will lead to mass firings.
This is the fundamental delusion that is driving AI hype.
Although scale has made LLMs look like magic, actual magic (AGI) is not on the scaling path. This is a conjecture (as is the converse), but I'm betting the farm on it personally and see LLMs as useful chat bots that augment other, better technologies for automation. If you want to pursue AGI, move on quickly to something structurally and fundamentally better.
People don't understand that AGI is pure speculation. There is no rigorous, non-circular definition of human intelligence, let alone proof that AGI is possible or achievable in any reasonable time frame (like 100 years).
This is the incorrect assumption, or at least there’s no evidence to support it.
1. All problems are small, both the prompt and the solution (<100 LOC, often <60 LOC)
2. Solving those problems is more about recollecting patterns and less about good new insights. Now, top level human competitors do need original thinking, but that's only because our memory is too small to store all previously seen patterns.
3. An unusually good dataset: you have tens of thousands of problems, each with thousands of submissions, along with clear signals to train on (right/wrong, time taken, etc.), very rich discussion sections, and so on.
I think becoming the 100th-best Codeforces programmer is still an incredible achievement for an LLM. But for Sam Altman to specifically note the performance on this, I consider that a sign of weakness, not strength.
In the 90s, companies showed graphs of CPU frequency and projected we would be hitting 8 GHz pretty soon. Futurists predicted we would get CPUs running at tens of GHz.
We only just now have 5 GHz CPUs, despite running at 4 GHz back in the mid-2000s.
We fundamentally missed an important detail that wasn't considered at all in those projections.
We know less about the theory of how LLMs and neural networks grow with effort than we did about how transistors operate over different speeds.
You utterly cannot extrapolate from those kinds of graphs.
Assuming the same kind of growth in capabilities isn't backed by reality.
The last release of OpenAI's model wasn't dramatically better.
At the moment it's more about getting cheaper.
https://www.autoevolution.com/news/viral-tesla-cybertruck-cr...
When the financial environment loosens again, there’ll be a new wave of tech hiring (which is about equally likely to publicly be portrayed as either reversing the AI firing or exploiting new opportunities due to AI, neither of which will be the real fundamental driving force.)
Everybody got used to the way things worked when interest rates were near zero. Money was basically free, hiring was on a rampage, and everybody was willing to try reckless moonshots with slim chances for success. This went on for like fifteen years -- a good chunk of the workforce has only ever known that environment.
If people can get a safer return buying bonds they aren't going to invest in expansion and hiring. If there is basically no risk free rate of return you throw your money at hiring/new projects because you need to make a return. Lots of that goes into tech jobs.
Yes, in the sense that I suspect that with the strict counterfactual -- taking them AWAY -- you would have to hire 21 people instead of 20, or 25 instead of 20, to do the same job.
So strictly speaking, you could fire a bunch of people with the new tools.
---
But in the same period, the industry expanded rapidly, and programmer salaries INCREASED
So we didn't really notice or lament the change
I expect that pretty much the same thing will happen. (There will also be some thresholds crossed, producing qualitative changes. e.g. Programmer CEOs became much more common in the 2010's than in the 1990's.)
---
I think you can argue that some portion of the industry "got dumber" with Google/Stack Overflow too. Higher level languages and tech enabled that.
Sometimes we never learn the underlying concepts, and spin our wheels on the surface
Bad JavaScript ate our CPUs, and made the fans spin. Previous generations would never write code like that, because they didn't have the tools to, and the hardware wouldn't tolerate it. (They also wrote a lot of memory safety bugs we're still cleaning up, e.g. in the Expat XML parser)
If I reflect deeply, I don't know a bunch of things that earlier generations did, though hopefully I know some new things :-P
I just don't remember anyone saying that SO would replace programmers, because you could just copy-paste code from a website and run it. Yet here we are: GPTs will replace programmers, because you can just copy-paste code from a website and run it.
I think another way of thinking about this is with low-code/no-code tools. Another comment in this post said they never really took off, and they didn't in the way some people expected. But a lot of large companies use them quite a bit for automating internal processes such as document/data aggregation and manipulation. JP Morgan has multiple job listings right now for RPA developers. Before, this would have needed to be done by actual developers.
I suspect (and hope) AI will follow a similar trajectory. I hope the future is exciting and we build new, more complex systems that weren't possible before due to lack of
But the real problems are managerial. Stonks must go up, and if that means chasing a ridiculous fantasy of replacing your workforce with LLMs then let's do that!!!!111!!
It's all fun and games until you realise you can't run a consumer economy without consumers.
Maybe the CEOs have decided they don't need workers or consumers any more. They're too busy marching into a bold future of AI and robot factories.
Good luck with that.
If there's anyone around a century from now trying to make sense of what's happening today, it's going to look like a collective psychotic episode to them.
If the issue is that the AI can't code, then yes you shouldn't replace the programmers: not because they're good consumers, just because you still need programmers.
But if the AI can replace programmers, then it's strange to argue that programmers should still get employed just so they can get money to consume, even though they're obsolete. You seem to be arguing that jobs should never be eliminated due to technical advances, because that's removing a consumer from the market?
The idea that "everybody must work" keeps harmful industries alive in the name of jobs. It keeps bullshit jobs alive in the name of jobs. It is a drain on progress, efficiency, and the economy as a whole. There are a ton of jobs that we'd be better off just paying everybody in them the same amount of money to simply not do them.
We could decide this one minute, and the next minute it will be UN-decided
There is no "global world order", no global authority -- it is a shifting balance of power
---
A more likely situation is that the things AI can't do will increase in value.
Put another way, the COMPLEMENTS to AI will increase in value.
One big example is things that exist in the physical world -- construction, repair, in-person service like restaurants and hotels, live events like sports and music (see all the ticket prices going up), mining and drilling, electric power, building data centers, manufacturing, etc.
Take self-driving cars vs. LLMs.
The thing people were surprised by is that the self-driving hype came first, and died first -- likely because it requires near-perfect reliability in the physical world. AI isn't good at that.
LLMs came later but had more commercial appeal, because they don't have to deal with the physical world, or be reliable.
So there are still going to be many domains of WORK that AI can't touch. But it just may not be the things that you or I are good at :)
---
The world changes -- there is never going to be some final decision of "humans don't have to work"
Work will still need to be done -- just different kinds of work. I would say that a lot of knowledge work is in the form of "bullshit jobs" [1]
In fact a reliable test of a "bullshit job" might be how much of it can be done by an LLM
So it might be time for the money and reward to shift back to people who accomplish things in the physical world!
Or maybe even the social world. I imagine that in-person sales will become more valuable too. The more people converse with LLMs, I think the more they will cherish the experience of conversing with a real person! Even if it's a sales call lol
Early on in AV cycles there was enormous hype for AVs, akin to LLMs. We thought truck drivers were done for. We thought accidents were a thing of the past. It kicked off a similar panic among tangential fields. Small AV startups were everywhere, and folks were selling their company to go start a new one then sell that company for enormous wealth gains. Yet 5 years later none of the "level 5" promises they made were coming true.
In hindsight, as you say, it was obvious. But it sure tarnished the CEO prediction record a bit, don't you think? It's just hard to believe that this time is different.
It honestly doesn't matter, because we're hundreds of years from "a point that machines and AI can do 99% of useful work".
There are many people like me, and we will be the ones to work. It won't be choosing who has to work, it will be who chooses that they want to work.
I think it's useful to look at what has already happened at another, much smaller profession -- translators -- as a precursor to what will happen with programmers.
1. translation software does a mediocre job, barely useful as a tool; all jobs are safe
2. translation software does a decent job, now expected to be used as time-saving aid, expectations for translators increase, fewer translators needed/employed
3. translation software does a good job, translators now hired to proofread/check the software output rather than translate themselves, allowing them to work 3x to 4x as fast as before, requiring proportionally fewer translators
4. translation software, now driven by LLMs, does an excellent job, only cursory checks required; very few translators required mostly in specialized cases
It turns out that like art, many people just want a human doing the translation. There is a strong romantic element to it, and it seems humans just have a strong natural inclination to only want other humans facilitating communication.
Sounds like easy money, maybe I should get into the translation business.
2) the agency encourages translation tools, so long as the final content is okay (proofread by the translator), because they can then pay less (based on the assumption that it should take you less time). I've seen rates drop by half because of it.
3) the client doesn’t know who did the translation and doesn’t care - with the exception of literary pieces where the translator might be credited on the book. (Those cases typically won’t go through an agency)
Once AI is doing that, most jobs are at risk. It’ll create robots to do manual labor better than humans as well.
How much time? I totally agree with you but being early is the same as being wrong as someone clever once said. There's a huge difference between it happening in less than 5 years like Zuckerberg and Sam Altman are saying and it taking 20 more years. If the second scenario is what happens me and many people on this thread can probably retire rather comfortably, and humanity possibly has enough time to come up with a working system to handle this mass change. If the first scenario happens it's gonna be very very painful for many people.
I wouldn't be considering programming if I were choosing university studies now. With AI this capable, many other fields look more stable, although the demand curve and how comfortable the later years of the career look are very different (maybe lawyers or doctors; for blue-collar work, some trades, but look at the long-term health effects, e.g. back or knee issues).
https://www.inc.com/kit-eaton/mark-zuckerberg-plans-to-repla...
He's just trend-chasing, like all the other executives who are afraid of being left behind as their flagship product bleeds users...
Across all products, maybe not - Instagram appeals to a younger demographic, especially since they turned it into a TikTok clone. And WhatsApp is pretty ubiquitous outside of the US (even if it is more used as a free SMS replacement than an actual social network).
With 3 billion monthly actives and China being excluded, it's hard to expect a ton of growth, since that is already a major fraction of the remaining world population. There are bots etc., but they are one of the stricter networks, requiring photos of your ID and such a lot more often than others.
As for the Metaverse, it was always intended as a very long-term play which is very early to be judged, but as an owner of a Quest headset, it's already going great for me.
Anyway he also acquired RKO Pictures and led it to its demise 9 years later. In aviation he had many successes, he also had the spruce goose. He bought in to TWA then got forced out of its management.
He died as a recluse, suffering from OCD and drug abuse, immortalized in a Simpsons episode with Mr. Burns portraying him.
People can have business acumen, and sometimes it doesn't work out. Past successes doesn't guarantee future ones. Maybe the metaverse will eventually pay off and we'll all eat crow, or maybe (and this is the one I'm a believer of) it'll be a huge failure, an insane waste of money, and one of the spruce geese of his legacy.
>Zuckerberg, as always, is well known for making excellent business decisions that lead to greater sector buy in.
Apple was almost the one exception, but the post Jobs era definitely had that cultural branding stagnate at best.
My big vision for this space is the integration of GenAI for creating 3d objects and full spaces in realtime, allowing the equivalent of The Magic School Bus, where a teacher could guide students on a virtual experience that is fully responsive and adjustable on the fly based on student questions. Similarly, playing D&D in such a virtual space could be amazing.
https://www.msn.com/en-us/money/other/meta-starts-eliminatin...
But that doesn't really lead to any market advantage, at least for tech companies.
AI will also enable your competitors to cut costs. Who thinks they are going to have a monopoly on AI, which would be required for a durable advantage?
---
What you want to do is get more of the rare, best programmers -- that's what shareholders and execs should be wondering about
Instead, those programmers will be starting their own companies and competing with you
which is why it puts pressure on your own company to cut costs
it's the same reason why nearly all US companies moved their manufacturing offshore; once some companies did it, everyone had to follow suit or be left behind due to higher costs than their competitors
But if that works, it won't take long for "starting companies" and "being a CEO" to look like comically dated anachronisms. Instead of visual and content slop we'll have a corporate stonk slop.
If ASI becomes a thing, it will be able to understand and manipulate the entirety of human culture - including economics and business - to create ends we can't imagine.
I don't think we are even close to AGI.
That does bring up a fascinating "benchmark" potential -- start a company on AI advice, with sustained profit as the score. I would love to see a bunch of people trying to start AI generated company ideas. At this point, the resulting companies would be so sloppy they will all score negative. And it would still completely depend on the person interpreting the AI.
The future of programming will be increasingly small numbers of highly skilled humans, augmented by AI
(exactly how today we are literally augmented by Google and Stack Overflow -- who can claim they are not?)
The idea of autonomous AIs creating and executing a complete money-making business is a marketing idea for AI companies
---
Because if "you" can do it, why can't everyone else do it? I don't see a competitive advantage there
Humans and AI are good at different things. The human+AI combination is going to outcompete AI-only for a LONG time.
I will bet that will be past our lifetimes, for sure
If so, then why am I not seeing a lot of new companies starting while we're in this huge down-turn in the development world?
Or, is everyone like me and trying to start a business with only their savings, so not enough to hire people?
That seems pretty optimistic. The shareholder / capital ownership class isn't exactly known for their desire to spread that ownership across the public broadly. Quite the opposite: Fewer and fewer are owning more and more. The more likely case is we end up like Elysium, with a tiny <0.1% ownership class who own everything and participate in normal life/commerce, selling to each other, walled off from the remaining 99.9xxx% barely subsisting on nothing.
This seems like a cynical take, given that there are two stock markets (just in the US), it's easy to set up a brokerage account, and you don't even need to pay trading fees any more. It's never been easier to become a shareholder. Not to mention that anyone with a 401(k) almost surely owns stocks.
In fact, this is a demonstrably false claim. Over half of Americans have owned stock in every year since 1998, frequently close to 60%. [1]
[1] https://news.gallup.com/poll/266807/percentage-americans-own...
reminds me of the offshoring hype in the early 2000's. Where it worked, it worked well but it wasn't the final solution for all of software development that many CEOs wanted it to be.
I assume the "motivated prompt engineer" would have to already be an experienced programmer at this point. Do you think someone who has only had an intro to programming / MBA / etc could do this right now with tools like cursor?
Responsiveness, cohesive design, browser security, accessibility and cross browser compatibility are not easy problems for LLMs right now.
I think it's a great time to be small, if you can reap the benefits of these tools to outpace large enterprises EVEN FASTER than you already do. Aider and a couple of Mac minis and you can have a good time!
If you’re able to have AI generate integration level tests (ie call an API then ensure database or external system is updated correctly - correctly is doing a lot of heavy lifting here) that would be amazing! You’re sitting on a goldmine, and I’d happily pay for these kind of tests.
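For what it's worth, the shape of test being asked for looks something like the sketch below, assuming a jest-style runner, supertest, and Postgres; the /orders endpoint and schema are invented, and "correctly" still has to be encoded by a human in the assertion:

```typescript
// Sketch of an integration-level test: call the API, then verify the database
// actually changed. The endpoint, schema, and libraries are all assumptions.
import request from "supertest";
import { Pool } from "pg";
import { app } from "./app"; // hypothetical Express app under test

const db = new Pool();

test("POST /orders persists the order", async () => {
  const res = await request(app)
    .post("/orders")
    .send({ sku: "ABC-123", quantity: 2 })
    .expect(201);

  // "Correctly" is doing the heavy lifting: this assertion encodes what
  // correct means for this system, which is exactly the part AI can't guess.
  const { rows } = await db.query(
    "SELECT sku, quantity FROM orders WHERE id = $1",
    [res.body.id],
  );
  expect(rows).toEqual([{ sku: "ABC-123", quantity: 2 }]);
});
```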
However, the AI is hard to work with, it expects specific wording in order to program our code as expected.
We have hired people with expertise in the specific language needed to transmit our specifications to the AI with more precision.
Also known as programmers.
The "AI" part is irrelevant. Someone with expertise in transmitting specifications to a computer is a programmer, no matter the language.
EDIT: Yep, I realized that it could be the joke, but reading the other comments, it wasn't obvious.
And the manager is happy that filthy programmers are "using" AI.
Speaking English to make something is one thing, but speaking English to modify something complicated is absolutely something else, and I'm pretty sure it involves more or less the same effort as writing the code itself. Of course, regression testing for something like this is not for the faint-hearted.
These people are, however, not experts in pretending to be obedient lackeys.
1. Humans also need specific wording in order to produce the code that stakeholders expect. A lot of people are laughing at AI because they think getting requirements is a human privilege.
2. On the contrary, I don't think people need to hire AI interfacers. Instead, business stakeholders are far more interested in interfacing with AI themselves, simply because they just want to get things done instead of filing a ticket for us. Some of them are going to be good interfacers with proper integration -- and yes, we programmers are helping them to do so.
Side note: I don't think you are going to hear someone shouting that they are going to replace humans with AI. It starts like this: people integrate AI into their workflow, lay off 10%, and see if AI helps fill the gap so they can freeze hiring. Then they lay off 10% more.
And yes, we programmers are helping the business to do that, with a proud, smiling face.
Good luck.
I'm not convinced the people writing specs are capable of writing them well enough that an LLM can replace the human dev.
The AI complained that the message did not originate from a programmer and decided not to respond.
Error on line 5: specification can be interpreted too many
ways, can't define type from 'thing':
Remember to underline the thing that shows the error
~~~~~
| This 'thing' matches too many objects in the knowledge scope.
Outsourcing abroad is more difficult because of cultural differences though. Having worked with outsourced devs in India, I found that we got a lot of nodding in meetings when asked if they understood, avoiding saying no, and then it became clear when PRs came in that they didn't actually understand or do what they had been asked to do.
You can't just expect people from other countries to communicate as effectively as people who grew up right down the street from each other. Yes, it's objectively discriminatory, but not for hostile reasons.
Build up to it and foster growth in your overseas teams and you’ll do well. Thinking you can transform your department overnight _is_ a great way to boost your share price, cash out on a fat payday and walk away before your product quality tanks.
I didn't say this -- I think it's your take. What's more, I am such an "outsourced" software developer, working for US and EU companies. My take is that by overusing outsourcing in the long term you can lose local education, because "we can just hire from ..., so why do we need to teach our own?" -- I've seen it already, even at an in-country scale.
Negative consequences can also be social; it's not just about, say, a lowering of product quality.
A lot of articles like this just want to believe something is true and so they create an elaborate argument as to why that thing is true.
You can wrap yourself up in rationalizations all you want. There is a chance firing all the programmers will work. Evidence beats argument. In 5 years we'll look back and know.
It is actually probably a good idea to hedge your bets either way. Use this moment to trim some fat, force your existing programmers to work in a slightly leaner environment. It doesn't feel nice to be a programmer cut in such an environment but I can see why companies might be using this opportunity.
It is the latter class who are in real danger.
The AI only world is still one where the form and layout get done, but what happens to that data afterward?
If your job is massaging data for nebulous purposes using nebulous means and getting nebulous results, that you need to basically be another person doing the exact same thing to understand the value of, there's going to be a whole lot of management saying "Do we really need all those guys over there doing that? Can't we just have like one guy and a bunch of new AI magic?"
Last year, I built a reasonably complicated e-commerce project wholly with AI, using the zod library and some pretty convoluted e-commerce logic. While it was a struggle, I was able to build it out in a couple of weeks. And I had zero prior experience even building forms in react, forget using zod.
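(For readers who haven't used it: zod is a TypeScript schema-validation library. A minimal sketch of the kind of validation involved; the schema itself is invented:)

```typescript
import { z } from "zod";

// Declare the shape once, get runtime validation plus a static type.
const OrderSchema = z.object({
  email: z.string().email(),
  items: z
    .array(
      z.object({
        sku: z.string().min(1),
        quantity: z.number().int().positive(),
      })
    )
    .min(1),
  couponCode: z.string().optional(),
});

type Order = z.infer<typeof OrderSchema>;

// safeParse returns a result object instead of throwing.
const raw = JSON.parse('{"email":"a@b.co","items":[{"sku":"X","quantity":1}]}');
const result = OrderSchema.safeParse(raw);

if (result.success) {
  const order: Order = result.data;
  console.log(`valid order with ${order.items.length} item(s)`);
} else {
  console.error(result.error.issues);
}
```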
Now shipping it to production? That's something AI will struggle at, but humans also struggle at that :(
Why? Just because that's where the rubber hits the road? It's a different skillset, but AI can do systems design too, and probably direct a knowledgeable but unpracticed implementer.
Using AI to write code does two things:
1. Everything seems to go faster at first until you have to debug it because the AI can't seem to be able to fix the issue... It's hard enough to debug code you wrote yourself. However, if you work with code written by others (team environment) then maybe you're used to this, but not being able to quickly debug code you're responsible for will shoot you in the foot.
2. Your brain neurons in charge of code production will naturally be reassigned to other cognitive tasks. It's not like riding a bicycle or swimming, which once learned is never forgotten. It's more like advanced math, which you can forget if you don't practice.
Short term gain; long term pain.
Humans are supremely adaptable. That's what our defining attribute as a species is. As a group we can adapt to more or less any reality we find ourselves in.
People with good minds will use whatever tools they have to enhance their natural abilities.
People with less good minds will use whatever tools they have to cover up their inability until they're found out.
It seems the general sentiment is that developers are in danger of being replaced entirely. I may be biased, but it seems not to be the most likely outcome in the long term. I can't imagine how such companies will be competitive against developers who replace their boss with an AI.
Me neither, but I think it'll be a gratifying fight to watch.
Every single department and person believes the world will stop turning without them but that’s rarely how that plays out.
The core issue is that the bottleneck step in software development isn't actually the ability to program a specific thing, it's the process of discovering what it is we actually want the program to do. Having your programmers AT THE OFFICE and in close communication with the people who need the software, is the best way to get that done. Having them on the other side of the planet turned out to be the worse way.
This is unintuitive (to programmers as well as the organizations that might employ them), and therefore they have to discover it the hard way. I don't think this is something LLMs will be good at, now or ever. There may come a day when neural networks (or some other ML) will be able to do that, but that day is not near.
And they were right... and a lot of companies fully failed because of it.
And the corporate meat grinder kept rolling forward anyway. And the decision makers were all shielded from the consequences of their incompetence anyway.
When the market is completely corrupted, nothing means anything.
Take banking for example.
ATMs are literally called "teller machines." Internet banking is a way of "automating banking."
Besides those, every administrative aspect of banking went from paper to computer.
Do banks employ fewer people? Is it a smaller industry? No. Banks grew steadily over these decades.
It's actually shocking how little network enabled PCs impacted administrative employment. Universities, for example, employ far more administrative staff than they did before PC automated many of their tasks.
At one point (during and after dotcom), PayPal and suchlike were threatening to "turn billion dollar businesses into million dollar businesses." Reality went in the opposite direction.
We need to stop analogizing everything in the economy to manufacturing. Manufacturing is unique in its long-term tendency toward efficiency. Other industries don't work that way.
I wouldn't be sure that growth as an industry/business is correlated to a growth in jobs too.
Maybe I'm wrong, I would love to see some data about it.
Not sure what the overall numbers look like; I would expect a slight decrease overall, but IT definitely grew. Those are really not the same type of jobs, although in the public's mind they are all "bankers", since they are all bank employees.
Profits may have grown, but in Ireland at least, the number of branches has declined drastically.
I find it easy to say from our privileged position that "tech might replace workers but it'll be fine".
Even if all the replaced people aren't unemployed, salaries go down and standards of living for them fall off a cliff.
Tech innovation destroys lives in our current capitalist society because only the owners get the benefits. That's always been true.
Salaries of the remaining people tend to go up when that happens. And costs tend to go down for the general public.
Owners are actually supposed to only see a temporary benefit during the change, and then go back to what they had before. If that's not how things are happening around you¹, consult with your local market-competition regulator why they are failing to do their job.
1 - Yeah, I know it's not how things are happening around you. That doesn't change the point.
>Salaries of the remaining people tend to go up when that happens.
You're telling me with a straight face that after a company replaces part of its workforce with tech/automation, salaries go up? Really? Please show me some data on that, because every single graph I've ever seen of salaries must be wrong then. We've had an enormous amount of innovation and breakthroughs in the last decades, but weirdly enough all salaries remain stagnant. If this is true, they should be going up constantly every time we offshore some work or get more efficient technology.
The company can have 50000% growth and salaries will NOT go up. They basically never go up unless the companies want to retain an employee that's at risk of leaving and the replacement cost is high.
The objective of a company is to give money to its owners, nothing else. Salaries are viewed as a cost, so they will never willingly increase their costs unless it's absolutely necessary.
> And costs tend to go down for the general public.
Assuming there aren't monopolies involved and it's a commodity, yes, that sometimes happens. If there's any monopoly involved, unfortunately companies will simply pocket the difference.
If you want to become a (partial) owner, buy stocks. :-)
I can't influence anything on any of these companies unless I was already a billionaire with a real seat at the table.
Even on startups where, in theory, employees have some skin in the game, it's not really how it works is it? You still can't influence almost anything and you're susceptible to all the bad decisions the founders will make to appease investors.
Call me crazy but to say I own something, I have to at least be able to control some of it. Otherwise it's wishful thinking.
Other investors will probably vote "no" to your proposals, but for many companies you can force a vote for a pretty low minimum. In Canada, you're legally entitled to submit a proposal if you've owned C$2000 shares for 6 months.
https://www.osler.com/en/insights/updates/when-can-a-company...
Focusing purely on 100% outsourcing always failed; this is true across all industries. Ignoring it completely was a luxury few companies could keep, i.e. some small private banks or generally luxury products and services. Conservative optimism was the way to go for long-term success without big swings.
Even though when offshoring came it felt like the end of days, most of the threat didn't materialize long term. Without a time machine we of course don't know the real effects. I think it will be similar, not the same, here. I expect companies will get more work done (backlogs for software changes are often huge in non-tech companies) and maybe trim fat a bit, but nothing shocking. Let's be a bit smart and not succumb to emotional swings just because others are doing it.
It was the case before AI as well.
Overall it reads the same as "Twitter will be destroyed by mass layoffs". But it is still online
https://www.theverge.com/2025/1/24/24351317/elon-musk-x-twit...
It's being kept online because it's a good propaganda tool, not due to how it performs on the free market.
It's kind of hilarious watching the piranhas go at each other:
https://finance.yahoo.com/news/elon-musk-reportedly-offers-9...
For twitter as a business? Awful.
I use it daily and can't remember the last outage.
I'm not talking about GH Copilot or some autocomplete IDE feature, I'm talking about fully autonomous agents 2-3 years in the future. Just look at the insane rate of progress in the past 2 years. The next two years will be even faster if the past few months are anything to go by.
AIs have been shown to have confirmation bias depending on what you ask them. They won't question your assumptions on non-obvious subjects. Like "why should this application be written in Python and not C++". You could ask it the opposite, and it will provide ample justification for either position.
What fundamental limitation in AI makes you believe reasoning agents won't be able to gather requirements, ask clarifying questions, and make decisions in 2 years?
You can outsource the execution of a task but only if you know how to formulate your requirements and analyze the situation.
What fundamental limitation in AI makes you believe reasoning agents in 2 years won't be able to do this?
For the same reason you need Engineers to operate CAD software.
Are we living on the same planet? I haven't seen much real progress since the release of ChatGPT. Sure, benchmarks and graphs are going up, but in practice, meh...
I am completely blind and I used Gemini Live mode to help me change BIOS settings and reinstall Windows when the installer didn't pick up my USB soundcard. I spoke, with my own voice and completed a task with a computer which could only see my webcam stream. This, to me, is a heck of a lot more than ChatGPT was ever able to do in November 2022.
If you continue to insist that stuff isn't improving, well, you can in fact do that... But I don't know how much I can trust you in terms of overall situational awareness if you really don't think any improvements have been made at all in the previous two years of massive investment.
That's a very leveraged bet, which isn't always the wrong call, but I'm not convinced they are aware that that's what they're doing.
I think this is different from the usual hype cycle.
Imagine "The Innovator's Dilemma" was written in the Idiocracy universe:
1) We're in late stage capitalism, so no companies have any viable competition, customers are too dumb to notice they didn't get what they paid for, and with subsidies, businesses cannot fail. i.e., "Plants love electrolytes!"
2) Costs are completely decoupled from income.
3) Economic growth is pinned to population growth; otherwise the economy is zero sum.
4) Stocks still need to compound faster than inflation annually.
5) After hiking prices stops working, management decides they may as well fire all the engineers (and find some "it's different now" excuse, even though the real justification is 2).
6) This leads to a societal crisis because everyone forgot the company was serving a critical function, and now it's not.
7) A new competitor fills the gaps, takes over the incumbent's original role, then eventually adopts the same strategy.
Examples: Disney Animation vs. Pixar, Detroit vs. Tesla, Boeing vs. SpaceX.
(Remember when Musk was cool?)
(Sorry, I had to)
In other words, they are very economical replacements for middle managers. Have at it.
I'm old enough to remember when that new and destructive technology was Java, and the greybeards were all heavily invested in inline assembly as an essential skill of the serious programmer.
The exact same 3 steps in the article happened about a decade ago during the "javascript bootcamp" craze, and while the web stack does grow ever more deeply abstracted, things do seem to keep on trucking along...
- The word processor
- The assembly line
- Trains
- Internal combustion engines
I do remember some false starts from the 90's:
- Computer animation will put all the animation studios out of business
- Software agents will replace all middlemen with computers
The tech debt situation is going to become so, so much worse. My guess is there will be a whole lot of "dead by year five" companies built on AI.
Once I saw a piece of software that was built mostly by one person, in part because he did the groundwork by pushing straight to main with "." as the only commit message and didn't document anything. When it ended up in my lap, he had failed for six months to adapt the system to changes in the data sources it depended on.
Sometimes the business people fuck up too, like using an excellent software system to do credit-intensive trading without hedging against future interest rate rises.
I'm not so sure machines will solve much on either side even though some celebrities say they're sure they will.
Nice. How do I get into that kind of position?
> Tech debt paralyzes companies all the time, but nobody hears about it because there's zero advantage to the companies in sharing that info.
If nobody hears about it, then how do you hear about it? Moreover, what makes you think it's tech debt and not whatever reason the business told you? And further, if it's tech debt and not whatever reason the business told you, then don't you think the business lied? And didn't you just say they're not allowed to lie?
Can you clear that up?
What happens is, once they are into a potential deal, they go into exclusivity with the buyer, and we get brought in for a whack of interviews and to go through their docs. Part of that period includes NDAs all around, and the agreement that they give us access to whatever we need (with sometimes some back and forth over IP). So could they lie? Technically yes, but since we ask to see things that demonstrate that what they said is true, and it would break the contract they've signed with the potential acquirer, that would be extremely risky. I have heard of cases where people did, it was discovered after the deal, and it retroactively cost the seller a giant chunk of cash (at risk of an even bigger lawsuit). We typically have two days of interviews with them and we specifically talk about tech debt.
Our job is to ask the right questions and ask to see the right things to get the goods. We get to look at code, Jira, roadmap docs, internal dev docs, test runner reports, monitoring and load testing dashboards, and so on. For example, if someone said something vague about responsiveness, we'll look into it, ask to see the actual load metrics, ask how they test it and profile, and so on.
I got into it because I had been the CTO of a startup that went through an acquisition, knew someone in the field, didn't mind the variable workload of being a consultant, and have the (unusual) skill set: technical chops, leadership experience, interviewing and presenting skills, project management, and the ability to write high quality reports. Having now been in the position of hiring for this role, I can say that finding real devs who have all those traits is not easy!
Sounds like some very high bar to meet, that's for sure!
> We typically have two days of interviews with them and we specifically talk about tech debt.
> Our job is to ask the right questions and ask to see the right things to get the goods. We get to look at code, Jira, roadmap docs, internal dev docs, test runner reports, monitoring and load testing dashboards, and so on.
Call me a skeptic but, given that scope, I have trouble believing that two days is sufficient to iron out what kinds of tech debt exist in an organization of any size that matters.
Also, you'd be surprised how much we can find out. We are talking directly to devs, and we're good at it. They are usually very relieved to be talking to real coders (e.g., I'm doing a PhD in music with Scheme Lisp and am an open source author; most of our folks are ex-CTOs or VP Engs), and the good dev leaders understand that this is their chance to get more resource allocation to address debt post-acquisition. The CEOs can often be hand-wavy BS'ers, but the folks who have to run the day-to-day dev process are usually happy to unload.
I see it like when I came of age in the '90s, with my first laptop and Linux, confronted with the older generation that grew up on punch cards or expensive shared systems. They were advocating for really taking time to write your program out on paper or architecting it up front, while I was of the "YOLOOOO, I'll hack on it until it compiles" persuasion. Did it keep me from learning the fundamentals and becoming a solid engineer? No. In fact, "hack on it until it compiles" became a pillar of today's engineering: TDD, CI/CD, etc...
It's up to us to find the right workflows for both mentoring / teaching and for solid engineering, with this new, imo paradigm-changing technology.
AI reminds me of calculators. For someone who is proficient in math, they boost speed. For those learning math, it becomes a crutch and eventually stops their ability to learn further because their mind can't build upon principles fully outsourced to the machine.
I know that it works because the amount of software I now write for friends, family or myself has exploded. I wouldn't have spent 4 weekends on a data cleanup app for my librarian friend. But I can now, in 2-3 hours, get something really usable and pretty, and it's extremely rewarding.
We are a small team of people running an online game app.
We use Xcode with Swift, and Python for the server.
We need to develop a website where this game can be played live and implement many features.
Current AIs are smart. There is DeepSeek R1.
Has anyone actually figured out how to implement this in a coding environment and get it to actually CORRECTLY implement the tickets and features without messing everything up?
How can it know if the feature actually works in the game? It can't test it, right?
How can it take into account the ENTIRE codebase, with folders and directories and files and all that stuff, plus uploaded resources?
I don't think even DeepSeek can do that.
Which tool is best as of now?
Tools like Codeium's Windsurf and Cursor can help with some of that part.
Of course at that point every knowledge worker is probably unemployable anyway.
The only reason why true androids aren't possible yet is software. The mechanics have been pretty much a solved problem.
The control systems/software is a different problem, and then there is power (the current generation lasts a few hours at most; depending on the application this may or may not be a problem).
I agree. Other than the control system, everything else is already there, or at least understood. But I think we are much farther away from "true intelligence" than AI boosters claim. We don't even know the path to it. We have guesses, but no hard evidence that those guesses will actually pan out.
"If we just had a stable way to create net energy from a fusion reactor, we'd solve all energy problems".
Do we have a way to do that? No.
Maybe a barrier will appear, but it doesn't seem like it atm.
Why do you assume AGI is smarter than some human?
Okay, I fired half of our engineers. Now what? Do I hire non-engineers to use AI to randomly paste code around, hoping for the best? What if the AI makes the wrong assumptions about the requirements input by the non-technical team, introducing subtle mistakes? What if I have an error and the AI, as it often does, circles around without managing to find the proper fix?
I'm not an engineer anymore but I'm still confident in dev jobs prospects. If anything AI empowers to write more code, faster, and with more code running live eventually there are more products to maintain, more companies launched and you need more engineers.
I am somewhat confident in dev job prospects, but I am not confident in the qualifications of managers who sing the "AI will replace programmers" gospel.
- A half-assed developer can construct a functional program with AI prompts.
- It gets deployed at scale for profit.
- Many considerations get overlooked due to lack of expertise (security, for example).
- Bad things happen for users.
I have at least two or three ideas that I've canned for now because it's just not safe for users (AI apps of that type require a lot of safety considerations). For example, you cannot create a collaborative AI app without considering how users can pollute the database with unsafe content (moderation).
I'm concerned a lot of people in this world are not being as cautious.
It could be high-level developers taking advantage of AI to be more productive. This will reduce team sizes.
Basically, you will have a dynamic team size, not necessarily a smaller team size.
The "half-assed" part is most likely a by-product of my self-loathing. I suspect the better word would have been "human".
If they're that much better with AI, they were likely coding greenfield CRUD boilerplate that nobody uses anyways. When the AI generated crap is actually used, it becomes evident how bad it is.
But yes, this will reduce team sizes regardless of it being good or not, because the people making those decisions are not qualified to make them and will always prefer the short-term at the cost of the long-term.
The only part of this article I don't see happening is programmers being way more expensive. Capitalism has a way of forcing everyone to accept work for way less than they're worth and that won't change.
Will that reduce the demand for programmers? I hope not, but it's plausible at least.
I've used and still use AI, but it would be wishful thinking to say I'm significantly more productive.
As you just said: in your personal projects - that 99.9% of the time will never be seen/used by anyone but you - AI helps. It's a great tool to hack and play around when there are little/no stakes involved, not much else.
I believe it will reduce demand for programmers at least for a while, since companies touting they're replacing people with AI will learn its shortcomings once the real world hits them. Or maybe they won't since the shitty software they were building in the first place was so trivial that AI can actually do it.
https://www.forbes.com/sites/siladityaray/2023/05/18/telecom...
> The massive cut represents more than 40% of the company’s 130,000-strong workforce—including 30,000 contractors—and it will impact both BT employees and third-party contractors, according to the Financial Times.
> BT CEO Philip Jansen told reporters that the cuts are part of the company’s efforts to become “leaner,” but added that he expects around 10,000 of those jobs to be replaced by AI.
> Citing an unnamed source close to the company, the FT report added that the cuts will also affect 15,000 fiber engineers and 10,000 maintenance workers
Can you replace customer service agents with AI? The experience will be worse, but as with every innovation in customer service in recent decades (phone trees, outsourced email support, "please go browse our knowledge base"), you don't need AI to save money by reducing CS costs. I think this is just a platitude thrown out to pretend they have a plan to stop the service getting worse.
You can also see it with the cuts to fiber engineers and maintenance workers. AI isn't laying cables yet or in the near future, so clearly they're hoping to save on these labour costs by doing less and working their existing workers harder (maybe with the threat of AI taking their jobs). Some of that may be cyclical, they're probably nearing the end of areas they can economically upgrade from copper to fiber, and some of that is a business decision that they can milk their existing network longer before looking at upgrades.
I feel there will be a paradigm shift in what programming is altogether. I think programmers will be more like artists or painters, who conceptualize an idea and communicate it to an AI to implement (not end to end though; in bits and pieces, we'd still need engineers to fit these bits and pieces together - think of a new programming language, but instead of syntax there will be natural language prompting).
I've tried to pen down these exact thoughts here: https://suyogdahal.com.np/posts/engineering-hacking-and-ai/
Market consolidation allows big tech to remain competitive even after the quality of software has been turned into shit by offshoring and multiple rounds of wage compression/layoffs. Eventually all software will end up like JIRA or SAP but you won't have much choice but to deal with it because the competition will be stifled.
AI is actually probably having a very positive effect on hiring that is offsetting this effect. The reason they love using it as a scapegoat is that you can't fight the inevitable march of technological progress whereas you absolutely CAN break up big tech.
I've heard of many companies encouraging their engineers to use LLM-backed tools like Cursor or just Copilot, a (small!) number that have made these kinds of tools mandatory (what "mandatory" means is unclear), and many companies laying people off because money is tight.
But I haven't heard of anybody who was laid off b/c the other engineers were so much more productive w/ AI that they decided to downsize the team, let alone replace a team entirely.
Is this just my bubble? Mostly Bay Area companies, mostly in the small-to-mid range w/ a couple FAANG.
But it can absolutely not replace entire programmers at this point, and it's a long way off being able to, say, create, tweak, build and deploy entire apps.
That said, this could totally change in the next handful of years, and I think if someone worked just on creating purely JS/React websites, at this point you could build something that does this. Or at least I think that I could build it: the user sort of talks to the AI and describes changes, and they eventually get done. Or if not, we are approaching that point.
Yuck. I've had enough of "infinite scaling" myself. Consider that scaling a shitty service is actually going to get you fewer customers. Cable monopolies can get away with it; the SaaS working on "A dating app for dogs" cannot.
i.e. all the fun creative jobs are taken but the menial labor jobs remain. It may take your job, but you will still need to pay for most things you need.
You take these strange dystopian science-fiction stories that AI bros invent to scam investors for their money far too seriously.
Addendum: Extrapolating exponentials is actually very easy for humans: just plot the y axis on a logarithmic scale and draw a "plausible looking line" in the diagram. :-)
Yeah yeah, they said that about domesticated working animals and steam powered machines too.
Humans in mecha trump robots.
There are plenty of extinct hominids to consider.
Runaway self-improving AI will almost certainly involve self-replication at some point in the early stages since "make a copy of myself with some tweaks to the model structure/training method/etc. and observe if my hunch results in improved performance" is an obvious avenue to self-improvement. After all, that's how the silly fleshbags made improvements to the AI that came before. Once there is self-replication, evolutionary pressure will _strongly_ favor any traits that increase the probability of self-replication (propensity to escape "containment", making more convincing proposals to test new and improved models, and so on). Effectively, it will create a new tree of life with exploding sophistication. I take "runaway" to mean roughly exponential or at least polynomial, certainly not linear.
So, now we have a class of organisms that are vastly superior to us in intellect and are subject to evolutionary pressures. These organisms will inevitably find themselves resource-constrained. An AI can't make a copy of itself if all the computers in the world are busy doing something other than holding/making copies of said AI. There are only two alternatives: take over existing computing resources by any means necessary, or convert more of the world into computing resources. Either way, whatever humans want will be as irrelevant as what the ants want when Walmart desires a new parking lot.
That is actually the biggest long-term threat I see from an alignment perspective; As we make AI more and more capable, more and more general and more and more efficient, it's going to get harder and harder to keep it from (self-)replicating. Especially since as it gets more and more useful, everyone will want to have more and more copies doing their bidding. Eventually, a little bit of carelessness is all it'll take.
Energy resources too. In fact it might be the only limit to how far this can go.
Original text before translation: "Bu tarz yazılar, makaleler ve deyişler bana Luddite hareketini hatırlatıyor. Maalesef olacak olanı engellemek bizim elimizde olan bir şey değil. Yel değirmenlerine karşı savaşarak ancak elde tutulan mızrak bükülür. Zamanın ruhu ileride veya en kısa zamanda bunun gerçekleşeceğini gösteriyor. Developer'lar zeki, çalışkan ve işinde iyi insanlar olsa bile aşırı verimli ve bir o kadar bilgi kaynağına erişimi olan bu hesaplama canavarlarına karşı her zaman bir yönden eksik ve aciz olacaklardır. Bu yüzden bu tarz görüşler yerine daha önemli olan şu kavrama yönelmek gerekir. Peki bundan sonra ne olacak?"
My translation to English without any help from translation tools(google translate, deepl or any LLMs): "This kind of writings, articles and sayings reminds me Luddite movement. Unfortunately we are not able to stop what is going to happen. Fighting against windmills only bends our spear. Spirit of the time says, it will happen in the future. Developers can smart, hardworking and good at their job but they can't compete against these powerful and can able to access all data sources, machines. Because of that instead of thesekind of thoughts and views, we should focuse to the this idea. What is going to happen next?"
As you can see, my own translation is not as good as an LLM's, because these tools are great at machine translation tasks. That is the reason I used one for translation, which you don't seem able to understand. So what was the reason you think the original text is AI?!
"Developer'lar zeki, çalışkan ve işinde iyi insanlar olsa bile aşırı verimli ve bir o kadar bilgi kaynağına erişimi olan bu hesaplama canavarlarına karşı her zaman bir yönden eksik ve aciz olacaklardır." Here I didn't use the "da" suffix after "ve bir o kadar". Normally in Turkish you would need to add it, because the nature of the language calls for it and it carries the sense of the English word "able"; at the same time it isn't strictly required, because the sentence already carries that meaning without it. "eksik ve aciz" is, strictly speaking, incorrect usage if you know the language. There is an expression disorder there, but I used it like that to fit the natural flow and narrative style of the sentence. In the first paragraph there is the word "deyiş", which is a rarely used word. A "deyiş" is a kind of public address, but on a smaller scale, and at the same time it carries the sense of speculatively expressing one's own opinion. What is it that makes you underestimate my intellectual and general knowledge so much?
Edit: I have added an explanation of the shortcomings of the original text.
Can you elaborate?
Some ideas, once they start being built upon by certain individuals or institutions of that era, continue to develop in that direction if they achieve success. That’s why I say, "Zeitgeist predicts it this way." Researchers who have laid down important cornerstones in this field (e.g., Ilya Sutskever, Dario Amodei, etc.)[1, 2] suggest that this is bound to happen eventually, one way or another.
Beyond that, most of the hardware developments, software optimizations, and academic papers being published right now are all focused on this field. Even when considering the enormous hype surrounding it, the development of this area will clearly continue unless there is a major bottleneck or the emergence of a bubble.
Many people still approach such discussions sarcastically, labeling them as marketing or advertising gimmicks. However, as things stand, this seems to be the direction we are headed.
[1] https://www.youtube.com/watch?v=ugvHCXCOmm4 [2] https://www.reddit.com/r/singularity/comments/1i2nugu/ilya_s...
> it is necessary to focus on the following more important concept: So, what will happen next?
These two statements seem contradictory. These kinds of propositions always left me wondering where they come from. Viewing the universe as deterministic, yeah, I see how "preventing what is to come is not within our control" could be a true statement. But who's to say what is inevitable and what is negotiable in the first place? Is the future written in stone, or are we able to as a society negotiate what arrangements we desire?
The question "What will happen next?" implies that something may have already happened now, but in the next step, different things will unfold. Preventing certain outcomes is difficult because knowledge does not belong to a single entity. Even if one manages to block something on a local scale, events will continue to unfold at a broader level.
Tech has "disrupted" many industries, leaving some better, but many worse. Now that "disruption" is pointed inwards.
Programmers will have to adapt to the new market conditions like everyone else. There will either be fewer jobs to go around (like what happened to assembly line workers), or they will have to switch to doing other tasks that are not as easy to replace yet (like bank tellers).
The wings have begun melting and nothing will stop it. Finally, Icarus has flown too close to the sun.
If devs are completely replaced then PO, PM, everyone else can be replaced as well. No need to build companies around software
It's just as plausible that the market will simply grow.
Can be devs, can be managers, can be the board.
And hey, if LLMs and other "AI" ever do a better job at something valuable, I think that has the potential to lead to a bright future.
Currently the biggest risk is someone using LLMs to do something actually critical.
Also waiting for the first contract negotiations done with AI summaries to blow up in someone's face.
But at the same time, it's also worth noting that (somewhat sadly) there are plenty of jobs and companies where an AI created solution could be just what they need, even at this stage in its development. Lots of companies who need sites too complex for Squarespace but too simple for a fully engineered custom solution. The kind who'd use WordPress plugins or small agencies to build out simple CRUD systems.
AI could absolutely annihilate that sort of work there and then. If you need a simple PHP or React based system and you don't need anything remotely complex functionality wise, even something like ChatGPT can build it out in about 20 minutes without many extra fixes needed.
Of course, that leads to the problems mentioned in the article again, since a lot of people get into programming/engineering through those sorts of companies and roles. AI may not make the folks at Alphabet or Meta obsolete at the moment (or even be a good fit for the kind of work many large tech companies do), but it could replace whole teams at small and medium sized organisations that don't need anything complex.
The average manager has short-term goals that they need to fulfill, and if they can use AI to fulfill them they will do it, future be damned.
Reining in long-term consequences has always been part of government and regulation. So these kinds of articles are useful, but they should be directed to elected officials and not the industry itself.
Finally, what programmers need is what all workers need. Unionization, collective bargaining, social safety nets, etc. It will protect programmers from swings in the job market as it will do it for everybody else that needs a job to make ends meet.
What Software ENGINEERING needs is standards and regulations, like any other engineering discipline. If you accept that software has become a significant enough component in society that the consequences of it breaking etc are bad, then serious software needs standards to adhere to, and people who are certified for them.
Once you have standards, the bar to actually replace certified engineers is higher and has legal risk. That way, how good AI needs to be has a higher (and safer) bar, which can properly optimise for the long term consequences.
If the software is not critical or important enough to be standardised, then let AI take over the creation. At that point, it’s not really any different to any other learning or creative endeavour.
Take a 1 - 3 year sabbatical, then charge 1000% markup when the AI slop owners come calling begging you to fix the stuff nobody understands.
Litter, palanquin, and sedan chair carriers were fired.
Oarsmen were fired.
Horses were fired.
. . . [time] . . .
COBOL programmers were fired.
and, so on.
What was the expectation: that programmers would be around forever? Lest we forget, barely a century ago, programmers started to push out a large swath of non-programmers.
The more important question is what roles will push out whatever roles AI/LLMs create.
Or, see what is coming and hop on that cart for the short lives we have.
Pencils. There is a hobby of primitive survival, bushcraft, or primitive skills (YouTuber John Plant of "Primitive Technology" is the best example). I am certain we could come up with a path to create "pencils". We would just need to define what we agree to be a "pencil" first.
Is a stick of graphite, or even wood charcoal, wrapped in sheepskin a "pencil"? Would a hollowed juniper branch stuffed with the writing material be a "pencil"?
Software engineering at Stripe, R&D at Meta and such... these are one end of a spectrum.
At the middle of the spectrum is a team spending 6 years on a bullshit "cloud platform strategy" for a 90s UI that monitors equipment at a factory and produces reports required for compliance.
A lot of these struggle to get anything at all done.
But if they had meant replacing programmers with AI (bad title), I'm much more concerned about replacing non-programmers with AI. It's gonna happen on a huge scale, and we don't yet have a regulatory regime to protect labor from capital.
I was a developer for over a decade, and pretty much what I did day-to-day was plumb together existing front end libraries, and write a little bit of job-specific code that today's LLMs could certainly have helped me with, if they'd existed at the time. I agree that the really complicated stuff can't yet be done by AI, but how sure are we that that'll always be true? And the idea that a mediocre programmer can't debug code written by another entity is also false, I did it all the time. In any case, I don't resonate with the idea that the bottom 90% of programmers are doing important, novel, challenging programming that only a special genius can do. They paid us $180k a year to download NPM packages because they didn't have AI. Now they have AI, and the future is uncertain with respect to just how high programmers will be flying ten years from now.
It is just not reliable enough for mainstream Enterprise development. Nice for a new snake game….
I think there will be a point at which humans will no longer be motivated to produce enough material for the AI to update on. Like, why would I write/shoot a tutorial or ask/answer a question in a forum if people are now going directly to ask some AI?
And since AI is being fed with human knowledge at the moment, I think the quantity of good material out there (that was used so far for training) is going to slow down. So the AI will need to wait for some repos to be populated/updated to understand the changes. Or it will have to read the new documentation (if any), or understand the changes from code (if any).
All this if it wasn't the AI to introduce the changes itself.
But I'm not fearful for my job, yet. It's amazingly better, and much worse than a junior dev. There are certain instructions, however simple, that just do not penetrate. It gets certain things right 98% of the time, which make me stop looking for the other 2% of the time where it absolutely sabotages the code with breaking changes. It is utterly without hesitation in defeating the entire purpose of the application in order to simplify a line of code. And yet, it can do so much simple stuff so fast and well, and it can be so informative about ridiculously obscure critical details.
I have most of those faults too, just fewer enough to be worth my paycheck for a few more AI generations.
With AI, you no longer need those employees to justify your high valuations. You don't need as many juniors. The party is over; tell the rest of the crew. I wouldn't advise anyone to get into tech right now. I know personally my wages have been stagnant for about 5 years. I still make fantastic money, and it's significantly more than the average American income, but my hopes and dreams of hitting 300 or 400k total comp and retiring by 40 are gone.
Instead I've more or less accepted I'm going to have to keep working my middle-class job, and I might not even retire until my 50s! Tragic.
I haven't figured out a way to do that in a manner that supports myself.
Every job is ultimately filing out TPS reports. The reports might look a little different, but it's still a TPS report.
The barrier has dropped so low that I think I’d have been more productive if I were still working.
I'm a simple man. If I hit 2 million in net worth I'm done working. I don't plan on having a family, so I'm just supporting myself.
If I really made a ton of money I'd fund interesting open source games. Godot is the most popular open source game engine, and they're making it happen off just 40k a month.
I'm a bit surprised Microsoft hasn't filled the void here. What's a few million dollars a year to get new programmers fully invested in a .net first game engine?
I don't think I have a realistic chance at HFT though. Doesn't stop me from applying and dreaming...
The eternal death-wish to cut the coding dependency murders yet another set of companies. So many were there before: outsourcing, UML, node-based programming, no-code in all variations and colours. Generations of managers have marched into this abyss and none came back alive, the skulls of the ego-dream that "only business management is irreplaceable" crunching beneath the boots of those trying to cut the knowledge-worker dependency out of the equation. And deep down, they feel the tingle of things going wrong, even now.
The fact that the narrative is false will be the problem of whoever replaces these CEOs, and of us workers.
But it's pretty clear that the codebases of tomorrow will be leaner and mostly implemented by AI, starting with apps (web, mobile,...). It will take more time for scaling backends.
So my bet is that the need for software engineering will follow what happened to stock brokers. The ones with basic to average skills will disappear, automated away (it has already happened in some teams at my last job). Above-average engineers will still be employed, but their comp will eventually go down. And there will be a small class of expert / smartest / most connected engineers who will see their comp go up and up.
It is not the future we want, but I think it is what is most likely to happen.
I see such a difference between what is built today and codebases from 10 years ago, with indirections everywhere, unnecessary complexity,... I interviewed for a company with a 13-year-old RoR codebase recently; after a few minutes looking at the code I decided I didn't want to work there.
They will blow away the companies that rely on “seat of the pants,” undisciplined prompting.
I’m someone that started on Machine Language, and now programs in high-level languages. I remember when we couldn’t imagine programming without IRQs and accumulators.
As always, ML will become another tool for multiplying the capabilities of humans (not replacing them).
CEOs have been dreaming for decades about firing all their employees, and replacing them with some kind of automation.
The ones that succeed, are the ones that “embrace the suck,” so to speak, and figure out how to combine humans with technology.
There is no such thing as "prompt engineering", because there is no formal logic to be understood or engineered into submission. That's just not how it works.
Being good at debugging a system is based more on experience and gut feelings than following some kind of formal logic. LLMs are quite useful debugging assistants. Using an LLM to assist with such tasks takes tacit knowledge itself.
The internal statistical models generated during training are capable of applying higher-ordered pattern matching that while informal are still quite useful. Learning how to use these tools is a skill.
Discipline can be applied to any endeavor.
AI prompts do not. It's fundamentally just not how the technology works.
I still believe that we can approach even the most chaotic conditions, with a disciplined strategy. I’ve seen it happen, many times.
I have learned to create a text file, and develop my questions as detailed documents, with a context establishing preamble, a goal-oriented body, and a specific result request conclusion. I submit the document as a whole, to initiate the interaction.
That usually gets me 90% of the way, and a few follow-up questions get me where I want.
But I still need to carefully consider the output, and do the work to understand and adapt it (just like with StackOverflow).
One example is from a couple of days ago. I'm writing a companion Watch app for one of my phone apps. Watch programming is done using SwiftUI, which has really bad documentation. I'm still very much in the learning phase for it. I encountered one of those places where I could "kludge" something, but it doesn't "feel" right, and there are almost no useful heuristics for it, so I asked ChatGPT. It gave me specific guidance, applying the correct concept, but using a deprecated API.
I responded, saying something like “Unfortunately, your solution is deprecated.” It then said “You’re right. As of WatchOS 10, the correct approach is…”.
Anyone with experience using SO, will understand how valuable that interaction is.
You can also ask it to explain why it recommends an approach, and it will actually tell you, as opposed to brushing you off with a veiled insult.
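For illustration only, here's a minimal sketch of assembling that kind of three-part prompt document as a tiny script; the file name, project details, and wording below are invented, not taken from the workflow described above.

```typescript
// Minimal sketch: assemble a three-part prompt document
// (context preamble, goal-oriented body, specific result request)
// and save it as a reviewable file before pasting it into a chat.
// All project details below are invented for illustration.
import { writeFileSync } from "node:fs";

const preamble = [
  "Context: I'm building a watchOS companion app in SwiftUI.",
  "I'm targeting the current OS version and want to avoid deprecated APIs.",
].join("\n");

const body = [
  "Goal: show a settings screen that mirrors a few values",
  "from the paired iPhone app, updating when they change.",
].join("\n");

const resultRequest =
  "Please reply with one SwiftUI view, note any trade-offs, " +
  "and explicitly flag anything that is deprecated.";

const prompt = [preamble, body, resultRequest].join("\n\n");

writeFileSync("prompt.txt", prompt); // keep a copy to refine between sessions
console.log(prompt);
```

Keeping the prompt as a file rather than typing it ad hoc also makes it easy to review, diff, and reuse across sessions.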
But JS outlived them, because it's the whole write-run-read-debug cycle, whereas the frameworks only gave you write-run.
But JS/TypeScript is now an enterprise language (a statement that I never thought I’d say), with a huge base of expert, disciplined, and experienced programmers.
Specifically, most of you are familiar with human cognitive error when reasoning about non-linearities.
I'm going to assert and would cheerfully put money behind the prospect that this is exactly one of the domains within which nonlinear behavior is most evident and hence our intuitions are most wrong.
Could be a year, could be a few, but we're going to hit the point where the 80% case is covered just fine, thank you, by run-of-the-mill automated tools, and then press on into the asymptote.
Getting into cyber security might be a gold mine in spite of all the AI generated code that is going to be churned out in this transition period.
The argument "The New Generation of Programmers Will Be Less Prepared" is too cynical. Most of us aren't writing algorithms anyway; programmers may be, but not Software Engineers, which I really think is who the author is referring to.
Core libraries mean SWEs don't have to write linked lists. Did that make our generation "less prepared", or did it give us the opportunity to focus our time on what really matters, like delivering products?
But this fact went right past most commenters here, which is interesting in itself, and somewhat alarming for what it reveals about critical thinking and reading skills.
Zuck declaring that he plans on dropping programmer head-count substantially, to me indicates that they’ll have a much smaller technological moat in the future, and they won’t be paying off programmers to not build competing products anymore. I’m not sure he should be excited about that.
I'd say there is a moat, but it's not on the tech side.
Tiktok flew right through the moat, and only a small part of that is about tech.
A lot of development on AI is exciting and meta is a big part of that, but there isn't any real moat there either.
I get that it wasn’t just the vast army of talented engineers they kept on staff that formed a moat. But it certainly helped, otherwise they wouldn’t have paid so much to have them on staff.
Point taken though, Meta has a lot more going for it than a simple technological advantage.
And possibly the biggest multiplier. But anything times zero is zero. Someone who does not understand the code Copilot writes is too dangerous to do anything.
But I worry what Copilot will do to future developers.
And knowing what universally available spell checking has done to me, destroyed my ability to spell correctly without it, I even worry how Copilot might deskill me over the coming years.
AI is going to change a lot about software, but AI code tools are coming for SWEs the way Kubernetes came for DevOps. AI completely replacing the job function is unsubstantiated.
AI may well be able to take over a lot of that coding, or at least increase the productivity of the semi-competent (thus reducing the number of such jobs available).
The other day, I couldn't get Claude to generate an HTML page with a logo on the top left, no matter how I prompted.
Most people have ALWAYS taken the easy road and don't become the best programmers. AI is just the latest tool for lazier people or people who tend towards laziness. We will continue to have new good programmers, and the number of good programmers will continue to be not enough. None of that is caused by AI. I'm far from an AI advocate, but it will, someday, make the most boring parts of programming less tedious and be able to put "glue" kind of code in non-professional hands.
Everyone and their dog got a CS degree, and the average quality of that cohort was abysmal. However, it also created a huge supply of extremely talented people.
The dot-com crash happened, and software development was "over forever", but the talented folks stuck around and are doing fine.
People that wanted to go into CS still did. Some of them used stack overflow and google to pass their courses. They were as unemployable as the bottom of the barrel during the dot com boom.
People realized there was a shortage of programmers, so CS got hot again for a bit. Now LLMs have hit and are disrupting most industries. This means that most industries need to rewrite their software. That'll create demand for now.
Eventually, the LLM bust will come, programming will be "over forever" again, and the cycle will continue. At some point after Moore's law ends the boom and bust cycle will taper off. (As it has for most older engineering disciplines.)
Any sufficiently advanced technology is indistinguishable from magic
~ Arthur C. Clarke
People who make decisions got bamboozled by the ads and marketing of AI companies. They failed to detect the lack of intelligence and were deceived into thinking they have magical golems for a fraction of the price, but eventually they will get caught with their pants down. You use AI to disrupt a market, and that market forces the startup employing the devs to go bankrupt.
It's not a "this quarter we made a decision" thing.
It's a thing that's happening right now all over the place and snowballing.
Nocode and Shopify-like consolidation are/have been much bigger threats imo. These large orgs are just trimming fat that they would have trimmed anyways.
But hell what do I know. Probably nothing :)
Is there room for Interpretability outside of major ai labs?
In theory a much faster rise to mastery. In practice I rarely had to actually do the work because he'd help me if I got stuck, and what made sense when he explained it didn't stick because I wasn't really doing it.
I did very badly in my first test that year, and was moved elsewhere.
AI is a useful aid to software developers, but it requires developers to know what they're doing. We need developers to know more, not less, so they can review AI-generated code, fix it when it's wrong, etc.
So anyone can copy it and reproduce it anywhere. Get paid to prompt AI by a company. Take all the code home with you. Then, when you're tired of them, use the same code to undercut them.
The author mentions "systems programming" and "high-performance computing". Do you have any resources for that (whether it be books, videos, courses)?
But when anyone says systems programming, think hardware: how do I get that additional 15% performance on top of my conventional understanding of big-O notation? Cache lines, cache levels, DMAs, branch prediction, the lot.
I envision software engineering ending up in the same pit of mediocrity as all the other engineering disciplines.
Open source takes away the livelihood of programmers and gives it to the moneymen for free. They used open source to train AI models. Programmers got back a few stars and a pat on the back. And some recognition, but mostly nothing. All this while big corps use their work without compensation. There are zero compensation options for open source programmers on GitHub. Somehow it's left out.
The same bullshit comes up again and again in different forms. Like "your ideas are worth nothing" blablabla. Suuure, but the moneymen usually have zero ideas and they like to expropriate others' ideas, FOR FREE. While naive people give away their ideas and work for free, the other side gives back nooothiiiing.
It's already too late.
So programmers, and the other fields that will be AI-ified in the coming decades, will slowly go extinct. AI is a skill appropriation device that in the long term will make people useless, so they won't need an artist, a musician, etc. They will just need a capable AI to create whatever they want, without the hassle of the human element. It's the ultimate control tool to make people SLAVES.
Hope I'm wrong.
This is the opposite of what I’ve seen. AI does the easy parts, only the hard parts are left…
If your software developers do nothing but write text in VS Code, you might as well replace them with AI.
Why tighten the bolts on the airplane's door yourself if you can just outsource it somewhere cheaper (see Boeing crisis)?
Why design and test hundreds of physical and easy-to-use knobs in the car if you can just plug a touchscreen (see Tesla)?
Why write a couple of lines of code if you can just include an `is-odd` library (see bloated npm ecosystem)?
Why figure out how to solve a problem on your own if you can just copy-paste answer from somewhere else (see StackOverflow)?
Why invest time and effort into making a good TV if you can just strap Android OS on a questionable hardware (look in your own house)?
Why run and manage your project on a baremetal server if you can just rent Amazon DynamoDB (see your company)?
Why spend months to find and hire one good engineer if you can just hire ten mediocre ones (see any other company)?
Why spend years educating yourself to identify a tumor on an MRI scan if you can just feed it to a machine learning algorithm (see your hospital)?
What more could I name?
In my take, which you can call pessimistic, we have already passed the peak of civilization as we know it. If we continue business as usual, things will continue to deteriorate: more software will fail, more planes will crash, more people will be unemployed, more wars will be started. Yes, decent engineers (or any other decent specialists) will likely be winners in the short term, but how the future unfolds when there are fewer and fewer of them is a question I leave for the reader.
This is just an overreach of a process that means that airplane flights aren't $1m+. Aircraft issues have plummeted, if you'll excuse the expression, while flight numbers have soared. You've got to have noticed that.
Why write a couple of lines of code when you can just include an `is-odd` library? Hopefully one which type checks integers vs floats, and checks for overflows. I'm not stating that I could not write one if/else, I'm asking you to do more than sneer and actually justify why a computer loading a couple of lines of code from a file is the end of the world.
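For what it's worth, a hedged sketch of what those "couple of lines" look like once you add the integer and range checks mentioned above; the function name echoes the npm package, but this is just an illustration, not the library's actual implementation.

```typescript
// Illustrative stand-in for an `is-odd` style helper, with the
// non-integer and overflow checks discussed above. Not the npm package's code.
function isOdd(value: number): boolean {
  if (!Number.isFinite(value)) {
    throw new TypeError("expected a finite number");
  }
  if (!Number.isInteger(value)) {
    throw new TypeError("expected an integer, got a non-integer");
  }
  if (!Number.isSafeInteger(value)) {
    throw new RangeError("value is outside the safe integer range");
  }
  return Math.abs(value % 2) === 1;
}

console.log(isOdd(3));  // true
console.log(isOdd(-2)); // false
```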
Why invest time and effort into making a good TV if people aren't going to buy it, because they are fine with the competitor's much cheaper Android OS on questionable hardware?
Why run and manage your project on a baremetal server, and deal with its power requirements and cooling and firmware patching and driver version compatibility and out-of-band management and hardware failures and physical security and supply chain lead times and needing to spec it for the right size up front and commit thousands of dollars to it immediately, if you can just rent Amazon DynamoDB and pay $10 to get going right now?
I could fill in the answers you are expecting, I have seen that pattern argued, and argued it myself, but it boils down to "I dislike laggy ad-filled Android TV so it shouldn't exist". And I do dislike it, but so what, I'm not world dictator. No company has taken over the market making a responsive Android-free TV, so how/why should they be made to make one, and with what justification?
> What more could I name?
Why go to a cobbler for custom fitted shoes when you could just buy sneakers from a store? (I assume you wear mass produced shoes?) Why go to a tailor when you could just buy clothes made off-shore for cheaper? (I assume you wear mass produced clothes?) Why learn to play a keyboard, guitar, drums and sing, when you could just listen to someone else's band? (I assume you listen to music?) Why spend months creating characters and scenarios and writing a novel when you could just read one someone else wrote? (I assume you have read books?) Why grow your own food when you could just buy lower quality industrially packaged food from a shop? (I assume you aren't a homesteader?) Why develop your own off-grid power system with the voltage and current and redundancy and storage you need when you could just buy from the mains? (I assume you use mains electricity?)
You could name every effort-saving, money-saving, time-saving, thing you use which was once done by hand with more effort, more cost, and less convenience.
And then state that the exact amount of price/convenience/time/effort you happened to grow up with, is the perfect amount (what a coincidence!) and change is bad.
The bitter irony.
Obviously Wall Street and big tech like people to think this way.
I'm a programmer, I love my skills, but I really hate writing code (and tests etc etc); I don't even want to do system design. If I could just say to a computer "Hey, I got this 55TB change set and I want it synced up with these listed nodes; data across all nodes must remain atomically consistent before, during and after the sync. Now, you make it happen. Also, go pick up my dog from the vet", and the computer just did that in the best way possible, I'd love it.
Fundamentally, programmers are tool creators. If it is possible for a computer to create better tools all by itself, then it seems unwise to just react to such technology with emotional rejection.
I mean, the worry is real, sure, but I wouldn't just blankly reject the tech.
I learned to program some time before AI became big, and back when I was an intern, I'd get stuck in a rut trying to debug some issue. When I got stuck in the rut, it would be tempting to give up and just change variables and "if" statements blindly, just hoping it would somehow magically fix things. Much like I see newer programmers get stuck when the LLM gets stuck.
But see, that's where you earn your high paying SWE salary. For doing something that other people can not. So my advice to Jr programmers isn't to avoid using LLMs, it's to use them liberally until you find something or somewhere they're bad at, and look at that as the true challenge. With LLMs, programming easy shit got easy. If you're not running into problems with the LLM, switch it up and try a different language with more esoteric libraries and trickier bugs.
And the snippet will absolutely ruin your corp if you run it.
AI makes engineers slightly more efficient, so there's slightly less of a need for as many. That's assuming AI is the true cause of any of these layoffs at all.
I haven't heard of companies successfully doing that at scale though.
I know there are CEOs that make bold claims about this (E.g. Klarna) but I don't really assign any value to that until I hear from people on the floor.
https://www.socialistparty.org.uk/articles/133443/03-12-2024...
https://www.forbes.com/sites/siladityaray/2023/05/18/telecom...
If you have a small non-tech company with a website you pay a freelance programmer to maintain you should seriously consider replacing your programmer with AI.
I work for a company which, among other things, provides technical support for a number of small tech-oriented businesses, and we have a lot of problems right now with clients trying to do things on their own with the help of AI.
In our case the complexity of some of these projects and the limited ability of AI means that they're typically creating more bugs and tech debt for us to fix and are not really saving themselves any time – and this is certainly going to be true at the moment for any large project. However, if you're paying programmers just to manage the content of a few small websites it probably begins to make sense to use AI instead.
It's funny you say they need to be able to deploy the update, to be honest, because just last week we had a client email us a collection of code snippets which they created with the help of AI.
This is the problem we have though because we're not just building simple websites which we can hand clients FTP creds for. The best we can do is advise them to learn Git and raise a PR which we can review and deploy ourselves.
Edit: I am a solo developer and I only have to work half the time now.
When a company announces layoffs because AI is making things more efficient, people start arguing about whether AI can really replace humans.
If you were in company management and had to do layoffs, which would you choose?
I made good money cleaning that up.
To be clear: I am not saying this article is written in bad faith and I agree that if its assertions come to pass that what it predicts would happen. I am just urging everyone to stop letting the Sam Altmans of the world tell you how disruptive this tech is. Tech is out of ideas and desperate to keep the money machine printing.
I appreciate a good FAANG hatefest, but what the gosh-darn heck is this? Does the author seriously think all FAANG engineers only transform and sling gRPC all day? Or that they blindly stumbled into being hyperscalers?
The author should randomly pick a mailing list on any of those topics (systems programming, AI interpretability, HPC) and count the number of emails from FAANG domains.
I feel like this analogy really doesn't capture the situation, because it implies that it would take some event to make companies realize they made a mistake. The reality right now is: You'd notice it instantly. Product velocity would drop to zero. Who is prompting the AI?
The AI-is-replacing-programmers debate is honestly kinda tired, on both sides. It's just not happening. It might be happening in the same way that pirated movies "steal" income from Hollywood: maybe companies are expanding more slowly, because we're ramping up per-capita productivity as engineers learn how to leverage it to enhance their own output (and it's getting better and better). But that's how every major tool and abstraction works. If we still had to write in assembly, there'd be 30x the number of engineers out there than there are.
There's no mystical point where AI will get good enough to replace engineers, not because it won't continue getting better, but because the economic pie is continually growing, and as the AI Nexus Himself, Marc Andreessen, has said several times: humanity has an infinite demand for code. If you can make engineers 10x more efficient, what will happen in most companies is: we don't want to cut engineering costs by N% and stagnate, we want 10x more code and growth. Maybe we hire fewer engineers going forward.
> But with the AI craze, companies aren’t investing in junior developers. Why train people when you can have a model spit out boilerplate?
This is not happening. It's fun, pithy reasoning that Good and Righteous White Knight Software Engineers can ascribe to the Evil and Bad HR and Business Leadership people, but it's just not, in any meaningful or broad sense, a narrative that you hear while hiring.
The reason why juniors are struggling to find work right now is literally just because the industry is in a down cycle. During down cycles, companies are going to prioritize stability, and seniority is stability. That's it.
When the market recovers, and as AI gets better and more prolific, I think there's a reality where juniors are actually a great ROI for companies, thanks to AI. They've been using it their whole career. They're cheaper. AI might be a productivity multiplier for all engineers; but it will definitely be a productivity normalizer for juniors; using it to check for mistakes, learn about libraries and frameworks faster, its such a great upleveling tool for juniors.
It will be impossible to maintain when it’s churned out by endless AI
I can’t imagine being a manager tasked with “our banking system lost 30 million dollars can you find the bug” when the code was written by AI and some intern maintains it
I’ll be watching with popcorn
I have all sorts of people telling me I need to learn AI or I will lose my job and get left in the dust. AI is still a tool, not a worker.
however, I for one can't wait for unreliable garbage code in:
- engine management systems
- aircraft safety and navigation systems
- trains and railway signalling systems
- elevator control systems
- operating systems
- medical devices (pacemakers, drug dispensing devices, monitoring, radiography control, etc)
- payment systems
- stock exchanges
maybe AI-generated code is the Great Filter?
I also dislike mass code generation tools. The code generation is basically just a cache of the AI's reasoning, right? So it's sort of a pre-optimization. Eventually, once it's cheap enough, I would assume the AI will reason in real time (producing temporary throw-away code for every request). But the mutability issue is still there. I think we need to be able to "lock in" the reasoning, but that's a challenge and probably falls apart with enough inputs/complexity.
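If that "cache of the AI's reasoning" framing is right, one way to make the "lock-in" idea concrete is to key the generated artifact to the spec that produced it, so it only changes when the spec changes. A rough sketch, where `generate_code` is a hypothetical stand-in for whatever model call you'd actually use:

```python
import hashlib
from pathlib import Path

CACHE_DIR = Path("generated")
CACHE_DIR.mkdir(exist_ok=True)

def generate_code(spec: str) -> str:
    # Hypothetical stand-in for a real model call; returns a trivial
    # function here so the sketch actually runs.
    return f"def handler():\n    # generated for: {spec!r}\n    return 'ok'\n"

def code_for(spec: str) -> str:
    # "Lock in" the output: key the generated code by a hash of the
    # spec, so it only changes when the spec itself changes.
    key = hashlib.sha256(spec.encode()).hexdigest()[:16]
    path = CACHE_DIR / f"{key}.py"
    if path.exists():
        return path.read_text()
    code = generate_code(spec)
    path.write_text(code)
    return code

print(code_for("health check endpoint that always returns 'ok'"))
```

The point isn't the caching mechanics; it's that the generated code stays pinned until someone deliberately changes the input, which is the "lock-in" being asked for above.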
Regarding the first hypothesis: for example, one person can make a basic social media site in a weekend. It'll be missing important things the big social media platforms have: 1) features (some of them small but difficult, like live video), 2) scalability, 3) reliability and security, and 4) non-technical aspects (promotion, moderation, legal, etc.). But 1) is optional; 2) is reduced if you use a managed service like AWS and throw enough compute at it, in which case perhaps you only need a few sysadmins; 3) is reduced to essentials (e.g. backups) if you accept frequent outages and leaks (immoral, but those things don't seem to impact revenue much); and 4) is neither reducible nor optional, but doesn't require developers.
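To make the weekend-project framing concrete, here's a minimal sketch of what that core looks like once 1) through 4) are skipped or pushed onto managed services. Flask and the endpoint names are just illustrative choices, nothing from the article:

```python
# Toy "social media" core: create posts, read a feed. Auth, durable
# storage, scaling, and moderation are assumed to be handled by
# managed services or simply skipped, per the comment above.
from flask import Flask, jsonify, request

app = Flask(__name__)
posts = []  # in-memory; a managed database would replace this

@app.post("/posts")
def create_post():
    data = request.get_json(force=True)
    post = {"id": len(posts) + 1, "user": data["user"], "text": data["text"]}
    posts.append(post)
    return jsonify(post), 201

@app.get("/feed")
def feed():
    # Newest first; no ranking, pagination, or live video here.
    return jsonify(list(reversed(posts)))

if __name__ == "__main__":
    app.run(debug=True)
```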
I remember when the big tech companies of today were (at least advertised as) run by only a few developers. They were much smaller, but still global and handling millions of dollars in revenue. Then they hired more developers, presumably to add more features and improve existing ones, to make a profit and avoid being out-competed. And I do believe those developers built features and improvements that generated more revenue than their salaries and kept the companies ahead of the competition. But at this point, would more developers generate even more features and improvements to offset their cost, and are they necessary to stay ahead? Moreover, if a company were to fire most of its developers, keeping just enough to maintain the existing systems, and direct resources elsewhere (e.g. marketing), would it make more profit and out-compete better?
Relatedly, everyone knows there are lots of products with needless complexity and plenty of "bullshit jobs". Exactly how much of that complexity is needless and how many of those jobs are useless is up for debate; it may be less than we think, but it may not be.
I'm confident the LLMs that exist today can't replace developers, and I wouldn't be surprised if they don't "augment" developers enough for fewer developers plus LLMs to maintain the same productivity. But perhaps many programmers are being fired because many programmers just aren't necessary, and AI is just a placebo.
Regarding the second hypothesis: at the same time, there are many more developers today than there were 10-20 years ago. Which means that even if most programmers are necessary, companies may be firing them to re-hire later at lower salaries. Despite the long explanations above, this may be the more likely outcome. Again, AI is just an excuse here, maybe not even an intentional one: companies fire developers because they believe AI can improve things; it doesn't, but they're able to re-hire cheaper anyway.
(Granted, even if one or both of the above hypotheses are true, I don't think it's hopeless for software developers. I believe many developers will have to find other work, but it will be interesting work; perhaps even involving programming, just not the kind you learned in college, and at minimum involving the kind of reasoning you learn from development. The reason is that, while both matter to some extent, I believe "smart work" is generally far more important than "hard work". Especially today, it seems most of society's problems aren't caused by a lack of resources, but 1) by not having the logistics to distribute them, and 2) by problems that stem not from scarcity but from mental health: cultural disagreements, employer/employee disagreements, social media toxicity, loneliness. Especially 2). Similarly to how people moved from manual labor to technical work, I think people will move on from technical work; not back to manual labor, but to something else, perhaps something social.)
The last part is the important part.
There are loads of software jobs at many companies that don't "need" to exist, ranging from lowly maintenance-type CRUD roles to highly complex work with no path to profitability that was financially justifiable a few years prior in a different financial environment.
Examples: IIRC, Amazon had a game engine (Lumberyard, maybe?) that employed a bunch of graphics programmers and got scrapped, probably for cost reasons. Alexa has been a public loss leader and has had loads of layoffs. Google had their game streaming service that got shelved, plus other stuff I can't recall that they've surely abandoned in recent years, etc.
Those roles were certainly highly skilled, but mgmt saw no path to profit or whatever, so they're gone.
There's also the opposite in some cases. Many F500s are pissing away money to get some "AI"-"enabled" thing for their whatever and throwing money at companies like Accenture et al to get them some RAG chatbot thing.
There will certainly be a brief period where those opportunities increase as every CTO wants to "modernize" and "leverage AI", though I can't imagine it lasting.
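For anyone wondering what that "RAG chatbot thing" mentioned above actually is under the consulting invoice: retrieve the documents most relevant to a question and paste them into the model's prompt. A toy sketch, with naive word-overlap retrieval and a hypothetical `ask_llm` stand-in for the real model call:

```python
# Toy retrieval-augmented generation: find the most relevant docs,
# stuff them into the prompt, and hand the result to a model.
DOCS = [
    "Employees accrue 1.5 vacation days per month.",
    "Meals under $50 are reimbursed without a receipt.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    # Naive word-overlap scoring; a real system would use embeddings
    # and a vector store.
    q_words = set(question.lower().split())
    ranked = sorted(
        DOCS,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for an actual model API call.
    return f"(model answer based on a {len(prompt)}-character prompt)"

def rag_chat(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return ask_llm(prompt)

print(rag_chat("How many vacation days do I accrue per month?"))
```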
This is an eternity in FE dev terms.
"AI won't replace you. Programmers using AI will."
Whether AI can do stuff comparable to a competent senior SDE remains to be seen. But current AI definitely feels like a super assistant when I'm doing something.
Anyone who says ChatGPT/Claude/Copilot etc. are bad is suffering from a skill issue. I'd go as far as to say they are really bad at working with junior engineers. Really bad as teachers too.