Other people see all that as a means to an end - and find no joy in the technical aspect of creating something. They're more interested in the end result / product than in the process itself.
I think that if you're in group A, it can be difficult to understand group B. And vice versa.
I'm a musician, so I love everything about creating music. From the theory, to the mastery of the instrument, the tens of thousands of hours I've poured into it... finally being able to play something I never thought I'd be able to, just by sheer willpower and practice. Coming up with melodies that make me feel something, or that I can relate to something.
On the other hand, I know people that want to jump straight to the end result. They have some melody or idea in their head, and they just want to generate some song that revolves around that idea.
I don't really look down on those people, even though the snobs might argue that they're not "real musicians". I don't understand them, but that's not really something I have to understand either.
So I think there are a lot of devs these days who have been honing their skills and love for the craft for years, and who don't understand why people just want things generated, with no effort.
> Other people see all that as a means to an end
I think it's worth pointing out that most people are both these things at different times.
There are things I care about and want a deep understanding of, but there are plenty of tasks I want to just "go away". If I had a junior coder, I'd be delegating these. Instead I use AI when I can.
There's also tasks where I want a jump start. I prefer fixing/improving code over writing from scratch so often a bad AI attempt is still valuable to me.
I say this as someone who cut my teeth on this stuff growing up and has watched the evolution: it's both. And at some point it's honestly elitism and gatekeeping. I sort of cringe when it's called a "craft" because it's not like woodworking or something. The process is full of joy, but so is the end result, and the nature of our industry is that the process is ALWAYS changing.
You accumulate a depth of knowledge and watch as it washes away in a few years. That kind of change, and the big kind of change that AI brings, scares people, so they start clinging to it like it's some kind of centuries-old trade lol.
What helps me is to think of it like I'm a kid again, learning to code full of ideas but without any pre-conceived notions. Rather than the Microsoft QuickBasic manual in my hands, I've got Gemini & Claude Code. I would be gleefully coding up a storm of games, websites, dubious webcrawlers, robots, and lord knows what else. Plenty of flow to be had.
And if something's not obvious I can always fetch the specifics of any particular calls. But at least I didn't have to find the name of that call in the first place.
Thanks for the comment. You articulated how I feel about this situation very well.
Honestly, I suspect the people who would prefer to have someone or something else do their coding, are probably the devs who are already outputting the worst code right now.
> This comment section really shows the stark divide between people who love singing and thus hate AI-assisted singing, and people who hate singing and thus love AI-assisted singing.
> Honestly, I suspect the people who would prefer to have someone or something else do their singing, are probably the singers who are already outputting the worst singing right now.
The point is: just because you love something, doesn't mean you're good at it. It is of course positively correlated with it. I am in fact a better singer because I love to sing compared to if I never practiced. But I am not a good singer, I am mediocre at best (I chose this example for a reason, I love singing as well as coding! :-D)
And while it is easier to become good at coding than at singing - for professional purposes at least - I believe that the effect still holds.
And I think we do tend to (rightfully) look down on e.g. singers who lip-sync concerts or use autotune to sing at pitches they otherwise can't, nevermind how we'd react if one used AI singing instead of themselves.
Yes, loving something is no guarantee of skill at it, but hating something is very likely to correspond to not being good at it, since skills take time and dedication to hone. Being bad at something is the default state.
AI is like jet fuel for me. It’s the translation layer between specs and code I’ve always wanted. It’s a great advisor for implementation strategies. It’s a way to try new strategies in code quickly.
I don’t need to get anyone else to review my code. Most of this is for personal projects.
I don’t really write professionally, so I don’t have a ton of need for it to manage realities of software engineering (large codebases, peer reviews, black box internal systems, etc). That being said - I do a reasonable amount of embedded Linux work, and AI understands the Linux kernel and device drivers very well.
To extend your metaphor: AI is like a magic microphone that makes all of my singing sound like Tony Rice, my personal favorite singer. I’ve always wanted to sound like him - but I never will. I don’t have the range or the training. But AI allows my coding level to get to that corresponding level with writing software.
I absolutely love it.
Secondly, you are not writing anything you get from an LLM. You prompt it and it spits out other people's code, stripped of attribution.
This is what children do: Ask someone to fix something for you without understanding the result.
It's definitely a personality thing, but that's so much more productive for me than convincing myself to do all the work from scratch after I have a design.
I guess this means I hate coding, and I only love the dopamine from designing and polishing my work instead of making things work. I'm not sure though, this feels like the opposite of hate coding.
Or are you calling an LLM a "clone" of you? In that case, it's more, "if you create a flawed enough starting premise, anything is possible".
That's where we start to disagree about what the future looks like, then.
It's not there yet, in that the LLM-clone isn't good enough. But amusingly a not nearly good enough clone of me already made me more productive, in that I'm able to deliver more while maintaining the same level of personal satisfaction with my code.
If I want to spend my time refactoring and bugfixing and rewriting and integrating, rather than writing from scratch and bugfixing, I can definitely achieve that by using LLM code, but the overall time has never felt different to me. In many cases I've thrown out the LLM code after several hours, either out of sheer frustration with how it's written, or because the structure it uses doesn't work with the rest of the program (see: anything related to threading).
> a far higher intensity
I'm not sure what this is supposed to mean. The code that I've gotten is riddled with mistakes and fabrications. If I were to use it directly, it would significantly slow my pace. Likewise, when I use LLMs to offer alternative methods to accomplish something, I have to take the time to sit down and understand what they're proposing, how to actually make it work, and whether that route(s) would be better than my original idea. That is a significant speed reduction.
The only way I can imagine LLMs resulting in "far higher intensity" is if I was just yolo'ing the code into my program, and then doing frantic integration, correction, and bugfix work afterwards.
Sure, that's "higher intensity", but that's just working harder and not smarter.
You don't need a multi-million dollar LLM to give you slightly different boilerplate snippets when you already have a text editor on your computer to save them.
Documentation needs to be by humans for humans, it's not a box that's there to be filled with slop.
This is true for producing the documentation, but an LLM that can take said documentation and answer questions about it is a great tool. I think I get the answer far quicker with an LLM than by sifting through documentation when looking for the existence of a function in a library or a property on an object.
(And I don't enjoy the value judgement)
Yes, there are tasks or parts of the code that I'm less interested in, and would happily either delegate or negotiate-off to someone else, but I wouldn't give those to a writer who happens to be able to write in a way that approximates program code, I'd give them to another dev on the team. A junior dev gets junior dev tasks, not tasks that they don't have the skills to actually perform, and LLMs aren't even at an actual junior dev level, imhe.
I noted in another comment that I've also used LLMs to get ideas for alternate ways to implement something, or, as you said, to "jump start" new files or programs. I'm usually not actually pasting that code into my IDE, though - I've tried that, and the work to make LLM-generated code fit into my programs is way more than just writing out the bits I need, where I need them. That is clearly not the case for a lot of people using LLMs, though.
I've seen devs submitting PRs with giant blocks of clearly LLM-gen'd code, that they've tried to monkey-wrench into working with the rest of the program (often breaking conventions or secure coding standards). And these aren't junior devs, they're devs that have been working here for years and years.
When you force them to do a code review, they know it's not up to par, but there is a weird attitude that LLM-gen'd code is more acceptable to be submitted with issues than personally-written code. As though it's the LLM's fault or job to fix, even though they prompted and copied and badly-integrated and PR'd it.
And that's where I think there is a stark divide. I think you're on my side of the divide (at least, I didn't get the impression that you hate coding), it just sounds like you haven't really seen the other side.
My personal dime-store psych theory is that it's the same mental mechanism that non-technical computer users fall into, of improperly trusting/believing computers to produce correct information, but now happening to otherwise technical folks too because "AI" is still a black box technology to most of us, like computers in general are to non-techies.
LLMs are really really cool, and really really impressive to me, and I've had 'wow' moments where they did something that makes you forget what they are and how they work, but you can't let that emotional reaction towards it override the part that knows it's just a token chain. When you do, people end up (obviously on the very extreme end) 'dating' them, letting them make consequential "decisions", or just over-trusting their output/code.
Alright, please stop using SDKs, Google, Stack Overflow, any system libraries. You prefer to do it all yourself, right?
SDKs and libraries are there to provide common (as in, used repeatedly, by many) functions that serve as BUILDING BLOCKS.
If you import a library and now your program is complete, then you didn't actually make a useful program, you just made a likely less efficient interface for the library.
BUT ALSO-
SDKs and libraries are *vetted* code. The advantage you are getting isn't just about it having been written for you, it's about the hundreds of hours of human code review, iteration, and thought, that goes into those libraries.
LLM code doesn't have that, so it's not about you benefitting from the knowledge and experience of others, it's purely about reducing personally-typed LoC.
And yes, if you're wholesale copy-pasting major portions of your program from stack overflow, I'd say that's about as bad as copy-pasting from ChatGPT.
Have we forgotten that we advanced in software by building on the work of others?
People aren't taking LLM code and then thoughtfully refactoring and improving it, they're using it to *avoid* doing that, by treating the generated code as though it's already had that done.
That's why the pro-LLM-code people in this very thread are talking about using it to automate away the parts of the coding they don't like. You really think they're then going to go back and improve on the code past it minimally working?
There will be no advancement from that, just mediocre or bad code going unreviewed and ignored until it breaks.
Even when I'm stuck in hell, fighting the latest undocumented change in some obscure library or other grey-bearded creation, the LLM, although not always right, is there for me to talk to, when before I'd often have no one. It doesn't judge or sneer at you, or tell you to "RTFM". It's better than any human help, even if it's not always right, because it's at least always more reliable and you don't have to bother some grey beard who probably hates you anyway.
Even more so, I remember making a Chrome extension and feeling intimidated. I knew that I'd be comfortable with most of it given that JS is used but I just didn't know how to start.
With an LLM it is way faster to spin up some default config and get going versus reading a tutorial. What I've noticed in that respect is that I just read what it does and then immediately reason why it's there. "Oh, there's a manifest.json file with permissions and a few other things, fair, makes sense. Oh, so you have the HTML/CSS/JS of the extension, you have the HTML/CSS/JS of the page you're injecting some code into and you have the JS of a background worker. Ah yea, I get that."
And then I just get immediately on coding.
What if it hallucinates and gives you wrong code and explanations? It is better to read the documentation and tutorials first.
1. Code doesn't compile. In this case it's obvious what to do.
2. Code does compile.
I don't work in Cursor; I read the code quickly to see the intent, and when done with that I decide to copy/paste it and test the output.
You can learn a lot by simply reading the code. For example, when I see a `group_by` function call in polars, even though I didn't know polars could do that, I now know what it does because I know SQL. Then I need to check the output: if the output corresponds to what I expect a group by function to do, then I'll move on.
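To make that concrete, here's a minimal sketch of the kind of call I mean (the data and column names are made up for illustration):

```python
import polars as pl

# A tiny frame standing in for whatever the LLM bootstrapped.
df = pl.DataFrame({
    "city": ["Oslo", "Oslo", "Bergen"],
    "sales": [10, 20, 5],
})

# Reads like SQL's GROUP BY city ... SUM(sales), which is how I sanity-check it.
per_city = df.group_by("city").agg(pl.col("sales").sum())
print(per_city)
```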
There comes a point in time where I need more granularity and more precision. That's the moment where I ditch the AI and start to use things such as documentation and my own mind. This happens one to two hours after bootstrapping a project with AI in a language/library/framework I initially knew nothing about. But now I do, I know a few hours' worth of it. That's enough to roughly know where everything is and not be in setup hell and similar things.
Moreover, by just reading the code, I get a rough idea of how beginner to intermediate programmers think about the problem space the code is written in, as there's always a certain style of writing certain code. This points me in the direction of how to think about it. I see it as a hint, not as the definitive answer. I suspect that experts think differently about it, but given that I'm just a "few hours old" in the particular language/lib/framework, I think knowing all of this is already really amazing.
AI helps with quicker bootstrapping by virtue of reading code. And when it gets actually complicated and/or interesting, then I ditch it :)
Then the code won't compile, or more likely your editor/IDE will say that it's invalid code. If you're using something like Cursor in agent mode, if invalid code is generated then it gets detected and the LLM keeps re-running until something is valid.
> It is better to read the documentation and tutorials first.
I "trust" LLM's more than tutorials, there's so much garbage out there. For documentation, if the LLM suggests something, you can see the docstrings in your IDE. A lot of the time that's enough. If not, I usually go read the implementation if I _actually_ care about how something works, because you can't always trust documentation either.
As for my editor saying it is invalid..? That is just as untrustworthy as an LLM.
>I "trust" LLM's more than tutorials, there's so much garbage out there.
Yes, rubbish generated by AI. That is the rubbish out there. The stuff written by people is largely good.
I interpreted the "hallucination" part as the AI using functions that don't exist. I don't consider that a problem because it's immediately obvious.
Yes, AI can suggest syntactically valid code that does the wrong thing. If it obviously does the wrong thing, then that's not really an issue either because it should be immediately obvious that it's wrong.
The problem is when it suggests something that is syntactically valid and looks like it works but is ever so slightly wrong. But in my experience, it's pretty common to come across stuff like that in "tutorials" as well.
> Yes, rubbish generated by AI. That is the rubbish out there. The stuff written by people is largely good.
I pretty strongly disagree. As soon as it became popular for developers to have a "brand", the amount of garbage started growing. The stuff written before the late 00's was mostly good, but after that the balance began slowly shifting towards garbage. AI definitely increased the rate at which garbage was generated though.
Hallucinations are a thing. With a competent human on the other end of the screen, they are not such an issue. And the benefits you can reap from having LLMs as a sometimes-mistaken advisory tool in your personal toolbox are immense.
Also, some things are meant to be approached with the correct foundational knowledge (you can't do 3D without geometry, trigonometry, and matrices, plus a healthy dose of physics). Almost every time I see people struggling with documentation, it's because they lack domain knowledge.
I work as a consultant assessing other people's code, and it's hard not to lose my religion, so to speak.
That’s a lot of trauma you’re dealing with.
Haven't much used AI to assist. After all, hard enough finding authentic humans capable and willing to voluntarily review/critique one's code. So far AI doesn't consistently provide that kind of help. OTOH seems almost certain over time AI systems will improve in terms of specific and comprehensive "insights" into the particular types of code one is writing.
I think an issue is that human creativity is hard to measure. Likely enough, AI is even tougher to assess. Probably AI will increasingly be assigned tasks like constructing project skeletons, assuring parts can be joined together without undue strain, handling "boilerplate" and other routine chores. To be sure, the landscape will look different in 50 years; I'm certain we'd be amazed were we able to see what future systems will be doing.
In any case, we shouldn't hesitate to use tools that genuinely boost our creativity. One badly needed role would be enabling development of higher reliability software. Still that's a far cry from the contributions emanating from the best of human originality, talent and motivation.
Sadly, I find it sorely lacking at dealing with build systems and that particular type of boilerplate, mostly because it seems to mix up different versions of things too much and gives you totally broken setups more often than not. I'd just as soon never deal with the hell that is front end build/lint/test config again.
I keep seeing people saying to use an LLM to write boilerplate, but like... do you not just copy that from another project where you already wrote it?
Also, I know that people love to joke on modern software and JS in particular. But if you take React code from 2020 and drop it into a new React codebase it still works. Even class based components work. Yes, if you jumped on the newest framework bandwagon every time, stuff will break all the time, but AI won't be able to help you with that either. If you went for relatively stable frameworks, you can reuse boilerplate completely or with relatively minimal adjustments.
If you take a project from 2020 it's a bit of a pain to upgrade it.
If you keep re-using boilerplate once in a while copying it from elsewhere is fine. If you re-use it all the time, just get a macro setup in your editor of choice. IMHO that is way more efficient than asking AI to produce somewhat consistent boilerplate
So yes, boilerplate, but also yes, there is definitely something to be gained from using ai assistants.
Frankly I don't want to spend 2 hours reading documentation just to find out some arcane incantation that gets the computer to do what I want it to do.
The interesting part of programming to me is designing the logic. It's the 'this, then that, except when this' flow that I'm really interested in, not the search for some obscure library that has some function that will parse this csv.
Llms are great for that, and let me get away from the pointless grind and into the things that I enjoy and are actually providing value.
The pair programming is also a super good thing. I work best when I can spitball and throw out random ideas and get quick feedback. Llms let me do that without bothering others who have their own work to do.
Of course, it all depends how you use the LLM. While the same can be true for StackOverflow, the LLMs just scale the issues up.
> The rest is boiler plate, cargo-culted, Dockerfile, build system and bash environment variable passing circle of hell that I really could care less about.
Except you do care. It's why you're frustrated and annoyed. And good!!! That feeling is because what you're describing requires solving. If something is routine, automate it. But it's really not good to automate in a statistical way, especially when that statistical tool is optimized for human preference. Because remember that also means mistakes are optimized to be missed by humans. [0]
With expertise in anything, I'm sorry, but you've also got to do the shit work. To be a great musician you gotta practice boring scales. It's true even if you just want to be a sub par one.
But a little grumpy is good. It drives you to fix things, and frankly, that's our job. The things that are annoying and creating friction don't need be repeated over and over, they need alternative solutions. The scripts you build are valuable. The "useless" knowledge you gain isn't so useless. Those little details add up without you knowing and make you better.
That undocumented code makes you frustrated and reminds you to document your own. You don't want to be a hypocrite. The author of the thing you're using probably thought the same thing: "No one is gonna use this garbage, I'm not going to waste my time documenting it". Yet here we are. Over and over again yet we don't learn the lesson.
I'm not gonna deny there's assholes. There are. But even assholes teach you. At worst, they teach you how not to act.
And some people are you telling you to RTM and not RTFM. Sure, it has lots of extra information in it that you don't need to get your specific job done, but have you also considered that it has lots of extra information in it? The person that wrote it clearly thought the context was important. Maybe it isn't. In that case, you learned a lesson in how not to write documentation!
What I'm getting at is that there's a lot of learning done all over the place. Trying to take out all the work and only have "the fun" is harming yourself and has a large potential to make less time for the fun stuff[0]. I'd be surprised if I'm alone in this, but a lot of stuff I enjoy now was stuff that originally frustrated me. IME this is pretty common! It's true for every person I know. Similarly, it's also true for things I learned that I thought I'd never use again. It always has a way of coming back.
I'm not going to pretend it's all fun and games. I'm explicitly saying it's not. But I'm confident in the long run it's better. Despite the lack of accuracy, I use LLMs (and Google, and even the TFM) like I would a solution guide for homework problems when I was in school. Try first, then consult. The struggle is an investment in your future. It sucks, but if all the best things in life were easy then we'd all have them. I'm just trying to convince you that it pays off.
I'm completely aware this is all context dependent. There's a time and place for everything. But given the percentages you mention (even taken as exaggeration), something sounds wrong. It's hard to suggest specific solutions without details but I'd be surprised if there weren't better and more rewarding solutions than having the LLM do it for you
[0] That's the big danger and what drives distrust in them. Because you need to work extra hard to find mistakes, increasing workload, not decreasing, because debugging is most of the job!
Solving problems for real people. Isn't the answer here kind of obvious?
Our field has a whole ethos of open-source side projects people do for love and enjoyment. In the same way that you might spend your weekends in a basement woodworking shop without furnishing your entire house by hand, I think the craft of programming will be just fine.
No. There are a thousand other ways of solving problems for real people, so that doesn't explain why some choose software development as their preferred method.
Presumably, the reason for choosing software development as the method of solving problems for people is because software development itself brings joy. Different people find joy in different aspects even of that, though.
For my part, the stuff that AI is promising to automate away is much of the stuff that I enjoy about software development. If I don't get to do that, that would turn my career into miserable drudgery.
Perhaps that's the future, though. I hope not, but if it is, then I need to face up to the truth that there is no role for me in the industry anymore. That would pretty much be a life crisis, as I'd have to find and train for something else.
There's inertia in the industry. It's not like what you're describing could happen in the blink of an eye. You may well be at the end of your career when this prophecy is fulfilled, if it ever comes true. I sure will be at the end of mine and I'll probably work for at least another 20 years.
And what happened? Programmers make the queries and embed them into code that creates dashboards that managers look at. Or managers ask analysts who have to interpret the dashboards for them... It rather created a need for more programmers.
Compare embedded SQL with prompts - SQL queries compared to assembler or FORTRAN code is closer to English prose for sure. Did it take some fun away? Perhaps, if manually traversing a network database is fun to anyone, instead of declaratively specifying what set of data to retrieve. But it sure gave new fun to people who wanted to see results faster (let's call them "designers" rather than "coders"), and it made programming more elegant due to the declarativity of SQL queries (although that is cancelled out again by the ugliness of mixing two languages in the code).
Maybe the question is: Does LLM-based coding enable a new kind of higher level "design flow" to replace "coding flow"? (Maybe it will make a slightly different group of people happy?)
Software development is almost unique in the scale that it operates at. I can write code once and have it solve problems for dozens, hundreds, thousands or even millions of people.
If you want your work to solve problems for large numbers of people I have trouble thinking of any other form of work that's this accessible but allows you to help this many others.
Fields like civil engineering are a lot harder to break into!
I don't see why we should seek an explanation if there are thousands of ways to be useful to people. Is being a lawyer particularly better than being an accountant?
What used to be a project doing a CMS backend, now is spent doing configurations on a SaaS product, and if we are lucky, a few containers/serveless for integrations.
There are already AI based products that can automate those integrations if given enough data samples.
Many believe AI will keep using current programming languages as a translation step, just like those Assembly developers thought compiling via Assembly text generation and feeding it into an assembler would still be around.
Now you don't really have to be precise at any level to get something 'working'. You may not be familiar with the generated language or libraries, but it could look good enough (like the assembly would have looked good enough). So, sure, if you are very familiar with the generated language and libraries and you inspect every line of generated code, then maybe you will be ok. But often the reason you are using an LLM is because e.g. you don't understand or use bash frequently enough to get it to do what you want. Well, the LLM doesn't understand it either. So that weird bash construct that it emitted - did you read the documentation for it? You might have if you had to write it yourself.
In the end there could be code in there that nothing (machine or human) understands. The less hard-won experience you have with the target and the more time-pressed you are the more likely it is that this will occur.
The output of the LLM is determined by the weights (parameters of the artificial neural network) estimated in the training as well as a pseudo-random number generator (unless its influence, called "temperature", is set to 0).
That means LLMs behave as "processes" rather than algorithms, unlike any code that may be generated from them, which is algorithmic (unless instructed otherwise; you could also tell an LLM to generate an LLM).
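As a toy sketch of what "temperature" does during sampling (an illustration of the idea, not any particular model's implementation):

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=np.random.default_rng()):
    """Pick the next token index from raw model scores (logits).

    temperature=0 is greedy and fully deterministic; higher values flatten
    the distribution, so repeated runs give different outputs.
    """
    logits = np.asarray(logits, dtype=float)
    if temperature == 0:
        return int(np.argmax(logits))            # deterministic choice
    scaled = logits / temperature                # rescale the scores
    probs = np.exp(scaled - scaled.max())        # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))  # pseudo-random draw
```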
Look at the majority of the tech sector for the last ten years or so and tell me this answer again.
Like I guess this is kind of true, if "problems for real people" equals "compensating for inefficiencies in our system for people with money" and "solutions" equals "making a poor person do it for them and paying them as little as legally possible."
(Interesting how people talk about AI destroying programming jobs all the time, but rarely mention the impact of billions of dollars of code being given away.)
Open source software is not just different in the license, it’s different in the design
Linux also doesn’t take jobs away - the majority of contributors are paid by companies, afaik
"Yes, yes... Satellites stay in orbit for a while. What about it?"
"Looks a bit cramped in there."
"Stop complaining, at least it's a real job, now get in, we're about to launch."
How true that is depends on what sort of software you write. Very little of what I've accomplished in my career can be fairly described as "automating other people's jobs away".
I've worked in a medical space writing software so that people can automate away the job that their bodies used to do before they broke.
Now all those jobs are gone because of you.
Haven't we been automating jobs away since the industrial revolution? I know AI may be an exception to this trend, but at least with classical programming, demand goes up, GDP per capita goes up, and new industries are born.
I mean, there's three ways to get stuff done: do it yourself, get someone else to do it, or get a machine to do it.
#2 doesn't scale, since someone still has to do it. If we want every person to not be required to do it (washing, growing food, etc), #3 is the only way forward. Automation and specialization have made the unthinkable possible for an average person. We've a long way to go, but I don't see automation as a fundamentally bad thing, as long as there's a simultaneous effort to help (especially those who are poor) transition to a new form of working.
What is qualitatively different this time is that it affects intellectual abilities - there is nothing higher up in the work "food chain". Replacing physical work you could always argue you'd have time to focus on making decisions. Replacing decision making might mean telling people go sit on the beach and take your universal basic income (UBI) cheque, we don't need you anymore.
Sitting on the beach is not as nice as it sounds for some; if you don't agree, try doing it for 5 years. Most people require work to have some sense of purpose, it gives identity, and it structures their time.
Furthermore, if you replaced lorry drivers with self-driving cars, you'd destroy the most commonly held job in North America as well as South America, and don't tell me that they can be retrained to be AI engineers or social media influencers instead (some can only be on the road, some only want to be on the road).
Somehow everyone who says this misses that never in the history of the United States (and most other countries tbh) has this been true.
We just consign people to the streets in industrial quantity. More underserved to act as the lubricant for capitalism.
I see capitalism invoked as a "boogey man" a lot, which fair enough, you can make an emotional argument, but it's not specific enough to actually be helpful in coming up with a solution to help these people.
In fact, capitalism has been the exact thing that has lifted so many out of poverty. Things can be simultaneously bad and also have gotten better over time.
I would argue that the biggest issue is education, but that's another tangent...
I'll be sure to alert the next person I encounter working UberEats for slave wages that the resources exist that they cannot use. I'm sure this difference will impact their lives greatly.
Edit: My point isn't that UberEats drivers make slave wages (though they do): My point is that from the POV of said people and others who need the aforementioned resources, whether they don't exist or exist and are unusable is fucking irrelevant.
[1] https://babel.hathitrust.org/cgi/pt?id=mdp.39015022383221&se...
[2] https://www.indeed.com/cmp/Uber/salaries/Driver (select United States as location)
I think they were using “slave wages” as a non-literal relative term to the era.
As you did.
A hundred years before your example, the “slave wages” were actually slave wages.
I think it’s fair to say a lot of gig workers, especially those with families, are having a very difficult time economically.
I expect gig jobs lower unemployment substantially, due to being convenient and easy to get, and potentially flexible with hours, but they lower average employment compensation.
Depends what you write. What I work on isn't about eliminating jobs at all, if anything it creates them. And like, actual, good jobs that people would want, not, again, paying someone below the poverty line $5 to deliver an overpriced burrito across town.
Not always. Recruitment budgets have limits, so it's a fixed number of employees either providing services to a larger number of customers thanks to software, or serving fewer customers or serving them less often without the software.
Would you mind naming a few instance of the workers coming out ahead?
I doubt the displaced computers managed to find a better job on average. Probably even their kids were disadvantaged since the parents had fewer options to support their education.
So, who knows if this specific group of people and their descendants ever fully recovered let alone got ahead.
For being one of the few lucky ones that gets to stay around taking care of the software factory robots, or designing them, while everyone else that used to work at the factory is now queueing somewhere else.
I like programming but I have other hobbies I find fulfilling, and nothing stops me from programming with a pen and paper.
The bad vibes are not caused by lack of programming, they're caused by headsman sharpening his axe behind me.
A few lucky programmers will be elevated to God status and we're all fighting for those spots now.
Not everyone gets a seat at the starship.
In my case, I couldn't agree more with the premise of the article, but my life today is centered around writing software the very best that I can, regardless of value or price.
It's not very effective, if I were to be trying to make a profit.
It's really hard to argue for something, if the something doesn't result in value, as perceived by others.
For me, the value is the process. I often walk away from my work, once I have it up and shipping. I do like to take my work all the way through shipping, support, and maintenance, but find that my eye is always drawn towards new shores[0].
“A ship in harbor is safe, but that is not what ships are built for.”
–John A. Shedd
[0] https://littlegreenviper.com/miscellany/thats-not-what-ships...
Then I shared it on HN and was subject to literal harassment.
AI helps me a lot: you don't need to search, you just ask the AI, and it provides the answer directly. After using AI, I have more time to spend on coding, and more fun.
I just wish I saw more people doing this, rather than asking them to 'draw 80% of the owl'.
I taught a lecture in my first-semester programming course yesterday. This is in a program for older students, mostly working while going back to school. Each time, a few students are selected to present their code for an exercise that I pick randomly from those they were assigned.
This guy had fancy slides showing his code, but he was basically just reading the code off the page. So I ask him: “hey, that method you call, what exactly does it do?”.
Um…
So I ask "Ok, the result from that method is assigned to a variable. What kind of variable is it?" Note that this is Java, the data type is explicitly declared, so the answer is sitting there on his slide.
Um…
So I tear into him. You got this from ChatGPT. That’s fine, if you need the help, but you need to understand what you get. Otherwise you’ll never get a job in IT.
His answer: “I already have a job in IT.”
Fsck. There is your vibe coder. You really do not want them working on anything that you care about.
Let's stop pretending or denying it: most of us would delegate our work code to somebody else or something else if we could.
Still, prompting LLMs well requires eloquence and expressiveness that many programmers don't have. I have started deriving a lot of value from those LLMs I chose to interact with by specifying clear boundaries on what's the priority and what can wait for later and what should be completely ignored due to this or that objective (and a number of other parameters I am giving them). When you do that well, they are extremely useful.
I saw your objections to other comments on the basis of them seemingly not having a disdainful attitude towards coding they do for work, specifically.
I absolutely do have tasks, coding included, that I don't want to do, and find no joy in. If I can have my manager assign the task to someone else, great! But using an LLM isn't that, so I'm still on the hook for ensuring all the most boring parts of that task (bugfixing, reworks, integration, tests, etc) get done.
My experience with LLMs is that they simply shift the division of time away from coding, and towards all the other bits.
And it can't possibly just be about prompting. How many hundreds of lines of prompting would you need to get an LLM to understand your coding conventions, security baselines, documentation reqs, logging, tests, allowed libraries, OSS license restrictions (i.e. disallowed libraries), etc? Or are you just refactoring for all that afterwards?
Maybe you work somewhere that doesn't require that level of rigor, but that doesn't strike me as a good thing to be entrenching in the industry by increasing coders' reliance on LLMs.
Where I use LLMs:
1. Super boring and annoying tasks. Yes, the prompts include various coding style instructions, requests for small clarifying comments where the goal of the code is not obvious, and tests. No OSS license restrictions, though. I specify the libraries myself most of the times I use LLMs (and only once did I ask one to suggest a library). Logging and telemetry I add myself. So, long story short, I use the LLM to show me a draft of a solution and then mercilessly refactor it to match my practices and guidelines. I don't do 50 exchanges out of laziness, no.
2. Tasks where my expertise is lacking. I recently used an LLM to help me make some `.clone()`-heavy Rust code nearly zero-copy for performance reasons -- it's code on a hot path. As much as I love Rust and am fairly good at it (realistically I'm IMO at 7.5 / 10), I still don't have all the lifetime and zero-copy semantics down yet. A long session with an LLM later, I emerged both better educated and with faster code. IMO a win-win.
Laughably narrow-minded projection of your own perspective on others.
Honestly you’re trying to prove AI is ineffective by telling us it didn’t work with your ineffective protocol. That is not a strong argument.
Either way, when a model starts making dumb mistakes like that these days I start a fresh conversation (to blow away all of the bad tokens in the current one), either with that model or another one.
I often switch from Claude 3.7 Sonnet to o3 or o4-mini these days. I paste in the most recent "good" version of the thing we're working on and prompt from there.
* experiment with multiple models, preferably free high quality models like Gemini 2.5. Make sure you're using the right model, usually NOT one of the "mini" varieties, even if it's marketed for coding.
* experiment with different ways of delivering necessary context. I use repomix to compile a codebase to a text file and upload that file. I've found more integrated tooling like Cursor, Aider, or Copilot to be less effective than dumping a text file into the prompt
* use multi-step workflows like the one described [1] to allow the llm to ask you questions to better understand the task
* similarly use a back-and-forth one-question-at-a-time conversation to have the llm draft the prompt for you
* for this prompt I would focus less on specifying 10 results and more about uploading all necessary modules (like with repomix) and then verifying all 10 were completed. Sometimes the act of over specifying results can corrupt the answer.
[1]: https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/
I'm a pretty vocal AI-hater, partly because I use it day to day and am more familiar with its shortfalls - and I hate the naive zealotry so many pro-AI people bring to AI discussions. BUTTT we can also be a bit more scientific in our assessments before discarding LLMs - or else we become just like those naive pro-AI-everything zealots.
Hard disagree, I get to hyperfocus on making magical things that surprise and delight me every day.
I don't think this is the case; if anything the opposite is true. Most of us would like to do the work coding ourselves but have realized, at some career point, that you're paid more to abstract yourself away from it and get others to do it, either in technical leadership or management.
I'll be a radical and say that I think it depends and is very subjective.
Author above you seems to enjoy working on code by itself. You seem to have a different motivation. My motivation is solving problems I encounter; code just happens to be one way out of many possible ones. The author of the submission article seems to love the craft of programming in itself, maybe the problem itself doesn't even matter. Some people program just for the money, and so on.
2. Some of the modern LLMs generate very impressive code. Variables caching values that are reused several times, utility functions, even closure helpers scoped to a single function. I agree that when the LLM code's quality falls below a certain threshold, then it's better in every way to just write it yourself instead.
Not me. I code because I love to code, and I get paid to do what I love. If that's not you…find a different profession?
Delegating part of that to an LLM so I can code the stuff I love is a big win for my motivation and is making me do the work tasks with a bit more desire and pleasure.
Please don't forget that most of us out there can't get paid to code whatever our hearts want. If you can, I'm happy for you (and envious), but please understand that's also a fairly privileged life you'd be having in that case.
100% this.
Do you ever read my comments, or do you just imagine what I might have said and reply to that?
Nothing is truly 100% safe or free of bugs. What I meant with my comment up-thread was that I have enough experience to have a fairly quick and critical eye of code, and that has saved my skin many times.
Thought it was obvious.
It requires magical incantations that may or may not work and where a missing comma in a prompt can break the output just as badly as the US waking up and draining compute resources.
Has nothing to do with eloquence
This isn't how it works, psychologically. The whole time I'm manual coding, I'm wondering if it'd be "easier" to start prompting. I keep thinking about a passage from The Road To Wigan Pier where Orwell addresses this effect as it related to the industrial revolution:
>Mechanize the world as fully as it might be mechanized, and whichever way you turn there will be some machine cutting you off from the chance of working—that is, of living.
>At a first glance this might not seem to matter. Why should you not get on with your ‘creative work’ and disregard the machines that would do it for you? But it is not so simple as it sounds. Here am I, working eight hours a day in an insurance office; in my spare time I want to do something ‘creative’, so I choose to do a bit of carpentering—to make myself a table, for instance. Notice that from the very start there is a touch of artificiality about the whole business, for the factories can turn me out a far better table than I can make for myself. But even when I get to work on my table, it is not possible for me to feel towards it as the cabinet-maker of a hundred years ago felt towards his table, still less as Robinson Crusoe felt towards his. For before I start, most of the work has already been done for me by machinery. The tools I use demand the minimum of skill. I can get, for instance, planes which will cut out any moulding; the cabinet-maker of a hundred years ago would have had to do the work with chisel and gouge, which demanded real skill of eye and hand. The boards I buy are ready planed and the legs are ready turned by the lathe. I can even go to the wood-shop and buy all the parts of the table ready-made and only needing to be fitted together; my work being reduced to driving in a few pegs and using a piece of sandpaper. And if this is so at present, in the mechanized future it will be enormously more so. With the tools and materials available then, there will be no possibility of mistake, hence no room for skill. Making a table will be easier and duller than peeling a potato. In such circumstances it is nonsense to talk of ‘creative work’. In any case the arts of the hand (which have got to be transmitted by apprenticeship) would long since have disappeared. Some of them have disappeared already, under the competition of the machine. Look round any country churchyard and see whether you can find a decently-cut tombstone later than 1820. The art, or rather the craft, of stonework has died out so completely that it would take centuries to revive it.
>But it may be said, why not retain the machine and retain ‘creative work’? Why not cultivate anachronisms as a spare-time hobby? Many people have played with this idea; it seems to solve with such beautiful ease the problems set by the machine. The citizen of Utopia, we are told, coming home from his daily two hours of turning a handle in the tomato-canning factory, will deliberately revert to a more primitive way of life and solace his creative instincts with a bit of fretwork, pottery-glazing, or handloom-weaving. And why is this picture an absurdity—as it is, of course? Because of a principle that is not always recognized, though always acted upon: that so long as the machine is there, one is under an obligation to use it. No one draws water from the well when he can turn on the tap. One sees a good illustration of this in the matter of travel. Everyone who has travelled by primitive methods in an undeveloped country knows that the difference between that kind of travel and modern travel in trains, cars, etc., is the difference between life and death. The nomad who walks or rides, with his baggage stowed on a camel or an ox-cart, may suffer every kind of discomfort, but at least he is living while he is travelling; whereas for the passenger in an express train or a luxury liner his journey is an interregnum, a kind of temporary death. And yet so long as the railways exist, one has got to travel by train—or by car or aeroplane. Here am I, forty miles from London. When I want to go up to London why do I not pack my luggage on to a mule and set out on foot, making a two days of it? Because, with the Green Line buses whizzing past me every ten minutes, such a journey would be intolerably irksome. In order that one may enjoy primitive methods of travel, it is necessary that no other method should be available. No human being ever wants to do anything in a more cumbrous way than is necessary. Hence the absurdity of that picture of Utopians saving their souls with fretwork. In a world where everything could be done by machinery, everything would be done by machinery. Deliberately to revert to primitive methods to use archaic took, to put silly little difficulties in your own way, would be a piece of dilettantism, of pretty-pretty arty and craftiness. It would be like solemnly sitting down to eat your dinner with stone implements. Revert to handwork in a machine age, and you are back in Ye Olde Tea Shoppe or the Tudor villa with the sham beams tacked to the wall.
>The tendency of mechanical progress, then, is to frustrate the human need for effort and creation. It makes unnecessary and even impossible the activities of the eye and the hand. The apostle of ‘progress’ will sometimes declare that this does not matter, but you can usually drive him into a comer by pointing out the horrible lengths to which the process can be carried.
sorry it's so long
Gone are those days.
The funny thing is I agree with other comments, it is just kind of like a really good stack overflow. It can’t automate the whole job, not even close, and yet I find the tasks that it cannot automate are so much more boring (the ones I end up doing).
I envy the people who say that AI tools free them up to focus on what they care about. I haven’t been able to achieve this building with ai, if anything it feels like my competence has decreased due to the tools. I’m fairly certain I know how to use the tools well, I just think that I don’t enjoy how the job has evolved.
So it causes developers to regularly fix what chatgpt is wrong about.
Not great.
This has always been true in every craft, and it remains true for programmers in a post-LLM world.
Most training data is open source code written by novice to average programmers publishing their first attempts at things, and thus LLMs are heavily biased to replicate the naive, slow, insecure code largely uninformed by experience.
Honestly to most programmers early in their career right now, I would suggest spending more time reviewing code, and bugfixes, than writing code. Review is the skillset the industry needs most now.
But you will need to be above average as a software reviewer to be employable. Go out into FOSSland and find a bunch of CVEs, or contribute perf/stability/compat fixes, proving you review and improve things better than existing automated tools.
Trust me, there are bugs -everywhere- if you know how to look for them and proving you can find them is the resume you need now.
The days of anyone that can rub two HTML tags together having a high paying job are over.
The one time I pasted LLM code without reviewing it, it belonged on Accidentally Quadratic.
It was obvious at first read, but probably not for a beginner. The accidental complexity was hidden behind API calls that weren't wrong, just grossly inefficient.
Problem might be, if you lose the "joy" and the "flow" you'll stop caring about things like that. And software is bloated enough already.
Most of AI-generated programming content I use are comments/explanations for legacy code, closely followed by tailored "getting started" scripts and iterations on visualisation tasks (for shitty school assignments that want my pyplots to look nice). The rest requires an understanding, which AI can help you achieve faster (it's read many a book related to the topic, so it can recall information a lot like an experienced colleague may), but it can't confer capital K Knowledge or understanding upon you. Some of the tasks it performs are grueling, take a lot of time to do manually, and provide little mental stimulation. Some may be described as lobotomizing and (in my opinion) may mentally damage you in the "Jack Torrance typewriter" kinda way.
It makes me able to work on the fun parts of my job which possess the qualities the article applauds.
My main worry about AI is that people just keep using the garbage that exists instead of trying to produce something better, because AI takes away much of the pain of interacting with garbage. But most people are already perfectly fine using garbage, so probably not much will change here.
AI coding is similar. We just had a minor issue with AI-generated code that was clearly not vetted as closely as it should have been, which made the output it generated over a couple of months less accurate than it should have been. Obviously, it had to be corrected, then vetted, and so on, because there is always time to correct things...
edit: What I am getting at is the old-fashioned, penny smart, but pound foolish.
Using Aider would probably solve the task in 5 minutes. Coding it in 30 minutes. The former choice would result in more time for other tasks or reading HN or having a hot beverage or walking in the sun. The second would challenge my rusting algorithmic skills and give me a better understanding of what I'm doing for the medium term.
Hard choice. In any case, I have a good salary, even with the latter option I can decide to spend good times.
AI coding preserves flow more than legacy coding. You never have to go read documentation for an hour. You can continuously code.
Don't see any mention regarding this in the post, which is the common objection people have regarding vibe coding.
This sounds a more likely reason for losing your joy if your passion is coding.
also if you want to see the real cost (at least part of it) of AI coding or the whole fucked up IT industry, go to any mining town in the global south.
Based on the current state of AI and the progress I'm witnessing on a month-by-month basis, my current prediction is there is zero chance AI agents are going to be coding and replacing me in the next few years. If I could short the startups claiming this, I would.
I'm willing to bet that in a few years most of the developers you know will be using LLMs on a daily basis, and will be more productive because of it (having learned how to use it).
As an example, just today I was trying to debug some weird WebSocket behaviour. None of the AI tools could help, not Cursor, not plain old ChatGPT with lots of prompting and careful phrasing of the problem. In fact every LLM I tried (Claude 3.7, GPT o4-mini-high, GPT 4.5) introduced errors into my debugging code.
I’m not saying it will stay this way, just that it’s been my experience.
I still love these tools though. It’s just that I really don’t trust the output, but as inspiration they are phenomenal. Most of the time I just use vanilla ChatGPT though; never had that much luck with Cursor.
A couple days ago I was looking for something to do so gave Claude a paper ("A parsing machine for PEGs") to ask it some questions and instead of answering me it spit out an almost complete implementation. Intrigued, I threw a couple more papers at it ("A Simple Graph-Based Intermediate Representation" && "A Text Pattern-Matching Tool based on Parsing Expression Grammars") where it fleshed out the implementation and, well... color me impressed.
Now, the struggle begins as the thing has to be debugged. With the help of both Claude and Deepseek we got it compiling and passing 2 out of 3 tests which is where they both got stuck. Round and round we go until I, the human who's supposed to be doing no work, figured out that Claude hard coded some values (instead of coding a general solution for all input) which they both missed. In applying ever more and more complicated solutions (to a well solved problem in compiler design) Claude finally broke all debugging output and I don't understand the algorithms enough to go in and debug it myself.
Of course I didn't use any sort of source code management so I could revert to a previous version before it was broken beyond all fixing...
Honestly, I don't even consider this a failure. I learned a lot more on what they are capable of and now know that you have to give them problems in smaller sections where they don't have to figure out the complexities of how a few different algorithms interact with each other. With this new knowledge in hand I started on what I originally intended to do before I got distracted with Claude's code solution to a simple question.
--edit--
Oh, the irony...
After typing this out and making an espresso I figured out the problem Claude and Deepseek couldn't see. So much for the "superior" intelligence.
Makes me wonder how many of the people who continue to argue that LLMs can't help with large existing codebases are missing that you need to selectively copy the right chunks of that code into the model to get good results.
What tools are you guys using? Are there none that can interactively probe the project in a way that a human would, e.g. use code intelligence to go-to-definition, find all references and so on?
Our Rust fly-proxy tree is about 80k (cloc) lines of code; our Go flyd tree (a Go monorepo) is 300k. Generally, I'll prompt an LLM to deal with them in stages; a first pass, with some hints, on a general question like "find the code that does XYZ"; I'll review and read the code itself, then feed that back to the LLM with questions like "summarize all the functionality of this package and how it relates to other packages" or "trace the flow of an HTTP request through all the layers of this proxy".
Generally, I'll take the results of those queries and have them saved in .txt files that I can reference in future prompts.
I think sometimes developers are demanding something close to AGI from their tooling, something that would do exactly what they would do (only, in the span of about 15 seconds). I don't believe in AGI, and so I don't expect it from my tools; I just want them to do a better job of fielding arbitrary questions (or generating arbitrary code) than grep or eglot could.
If your codebase is larger than that there are a few tricks.
The first is to be selective about what you feed into the LLM: if you know the work you are doing is in a particular area of the codebase, just paste that bit in. The LLM can make reasonable guesses about things the code references that it can't see.
An increasingly effective trick is to arm a tool-using LLM with a tool like ripgrep (effectively the "interactively probe the project in a way that a human would" idea you suggested). Claude Code and OpenAI Codex both use this trick. The smarter models are really good at deciding what to search for and evaluating the results.
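As a sketch of what "arming" a model with ripgrep can look like (the function name and JSON shape here are illustrative, not the actual tool definitions used by Claude Code or Codex):

    import subprocess

    def search_code(pattern: str, path: str = ".") -> str:
        """Run ripgrep and return matching lines with file names and line numbers."""
        result = subprocess.run(
            ["rg", "--line-number", "--max-count", "20", pattern, path],
            capture_output=True, text=True,
        )
        # rg exits with status 1 when nothing matches; treat that as an empty result
        return result.stdout or "(no matches)"

    # A hypothetical tool description in the generic JSON-schema shape most
    # function-calling APIs accept; the model decides when to call it and
    # what pattern to search for.
    SEARCH_TOOL = {
        "name": "search_code",
        "description": "Search the repository for a regex pattern using ripgrep.",
        "parameters": {
            "type": "object",
            "properties": {
                "pattern": {"type": "string", "description": "Regex to look for"},
                "path": {"type": "string", "description": "Directory to search"},
            },
            "required": ["pattern"],
        },
    }

In a real agent loop, whatever search_code returns is fed back to the model as the tool result, and the model keeps issuing narrower searches until it has the context it needs.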
I've built tools that can run against Python code and extract just the class, function and method signatures and their docstrings - omitting the actual code. If your code is well designed and has reasonable documentation, that could be enough for the LLM to understand it.
https://github.com/simonw/symbex is my CLI tool for that
https://simonwillison.net/2025/Apr/23/llm-fragment-symbex/ is a tool I released this morning that turns Symbex into a plugin for my LLM tool.
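As a rough sketch of the signature-plus-docstring idea (not how Symbex is actually implemented, just the general shape using Python's ast module, with a made-up file name):

    import ast

    def outline(source: str) -> str:
        """Return class/def signatures plus the first line of each docstring."""
        lines = []
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, (ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef)):
                if isinstance(node, ast.ClassDef):
                    lines.append(f"class {node.name}:")
                else:
                    args = ", ".join(a.arg for a in node.args.args)
                    lines.append(f"def {node.name}({args}):")
                doc = ast.get_docstring(node)
                if doc:
                    lines.append(f'    """{doc.splitlines()[0]}"""')
        return "\n".join(lines)

    # Hypothetical usage: condense a module before pasting it into a prompt.
    print(outline(open("my_module.py").read()))

Feeding just that outline to a model can be enough for "where does X live" style questions, at a fraction of the tokens of the full source.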
I use my https://llm.datasette.io/ tool a lot, especially with its new fragments feature: https://simonwillison.net/2025/Apr/7/long-context-llm/
This means I can feed in the exact code that the model needs in order to solve a problem. Here's a recent example:
llm -m openai/o3 \
-f https://raw.githubusercontent.com/simonw/llm-hacker-news/refs/heads/main/llm_hacker_news.py \
-f https://raw.githubusercontent.com/simonw/tools/refs/heads/main/github-issue-to-markdown.html \
-s 'Write a new fragments plugin in Python that registers issue:org/repo/123 which fetches that issue
number from the specified github repo and uses the same markdown logic as the HTML page to turn that into a fragment'
From https://simonwillison.net/2025/Apr/20/llm-fragments-github/ - I'm populating the context with the exact examples needed to solve the problem.

https://news.ycombinator.com/item?id=42511441
People are going to be making the same judgements about AI-assisted coding in the near future. Sure, you could code everything yourself for your own personal enrichment, or simply because it's fun. But that will be a pursuit for your own time. In the realm of business, it's a different story: you are either proompting, or you're effectively stealing money from your employer because you're making suboptimal use of the tools available. AI gets you to something working in production so much faster that you'd be remiss not to use it. After all, as Milt and Tim Bryce have shown, the hard work in business software is in requirements analysis and design; programming is just the last translation step.
As a developer I'm bullish on coding agents and GenAI tools, because they can save you time and can augment your abilities. I've experienced it, and I've seen it enough already. I love them, and want to see them continue to be used.
I'm bearish on the idea that "vibe coding" can produce much of value, or that people without any engineering background will become wildly productive at building great software. I know I'm not alone. If you're a good problem solver who doesn't know how to code, this is your gateway. But you'd better learn what's happening with the code while you can, to avoid creating a huge mess later on.
Developers argue about the quality of "vibe coded" stuff. There are good arguments on both sides. I think we all agree that, someday, AI will be able to generate high-quality software faster than a human can. But today is not that day. Many will try to convince you that it is.
Within a few years we'll see massive problems from AI generated code, and it's for one simple reason:
Managers and other Bureaucrats do not care about the quality of the software.
Read it again if you have to. It's an uncomfortable idea, but it's true. They don't care about your flow. They don't care about how much you love to build quality things. They don't care whether the software is good or bad; they care about closing tickets and shipping features. Most of them don't care, and have never cared, about the "craft".
If you're a master mason crafting amazing brickwork, you're exactly the same as some amateur grabbing bricks from Home Depot and slapping a wall together. A wall is a wall. That's how the majority of managers view software development today. By the time that shoddy wall crumbles they'll be at another company anyway, so it's someone else's problem.
When I talk about the software industry collapsing now, and in a few years we're mired with garbage software everywhere, this is why. These people in "leadership" are salivating at the idea of finally getting something for nothing. Paying a few interns to "vibe code" piles of software while they high five each other and laugh.
It will crash. The bubble will pop.
Developers: Keep your skills sharp and weather out the storm. In a few years you'll be in high demand once again. When those walls crumble, they will need people who know what they're doing to repair them. Ask for fair compensation to do so.
Even if I'm wrong about all of this I'm keeping my skills sharp. You should too.
This isn't meant to be anti-management, but it's based on what I've seen. Thanks for coming to my TED talk.
And to the original point: in my experience the tools interrupt the "flow" but don't necessarily take the joy out of it. I can't use suggestion/autocomplete because it breaks my flow, but I love having a chat window with AI nearby when I get stuck or want to generate some boilerplate.
IDK, there's still a place in society for master masons to work on 100+ year old buildings built by other master masons.
Same with the robots. They can implement solutions but I'm not sure I've heard of any inventing an algorithmic solution to a problem.
1. AI Coding leads to a lack of flow.
2. A lack of flow leads to a lack of joy.
Personally, I can't find myself agreeing with the first argument. Flow happens for me when I use AI. It wouldn't surprise me if this differed from developer to developer. Or maybe it's the size of the requests I'm making, as mine tend to be on the smaller side, where I already have an idea of what I want to write but think the AI can spit it out faster. I also don't really view myself as prompt engineering; it feels more like a natural back-and-forth with the AI to refine the output I'm looking for. There are times it gets stubborn and resistant to change, but that's generally a sign that I might want to reconsider using AI for that particular task.
Managers usually can't carve out a full day - but a couple of hours is manageable.
See also this quote from Gergely Orosz:
> Despite being rusty with coding (I don't code every day these days): since starting to use Windsurf / Cursor with the recent increasingly capable models: I am SO back to being as fast in coding as when I was coding every day "in the zone" [...]
> When you are driving with a firm grip on the steering wheel - because you know exactly where you are going, and when to steer hard or gently - it is just SUCH a big boost.
> I have a bunch of side projects and APIs that I operate - but usually don't like to touch it because it's (my) legacy code.
> Not any more.
> I'm making large changes, quickly. These tools really feel like a massive multiplier for experienced devs - those of us who have it in our head exactly what we want to do and now the LLM tooling can move nearly as fast as my thoughts!
From https://x.com/GergelyOrosz/status/1914863335457034422

It's been amazing to spin up quick React prototypes of concepts and ideas during a lunch break, for quick feedback and reactions.
You still need to do that if you're using AI, otherwise how do you know if it's actually done a good job? Or are people really just vibe coding without even reading the code at all? That seems... unlikely to work.
But when that last 1% breaks and AI can't fix it, that's where you need the humans.