There's so much demand around this, people are just super eager to get the information. I can understand why, because it was my favorite talk as well :)
In this case the determining factor is that the original submission was not the verbatim speech that the speaker gave, and the speaker himself complained that some of the transcription was inaccurate.
This then caused significant meta-discussion about the inaccuracy of the transcription. For the other comments that were about the content itself, we can't be sure if those comments were made in response to parts of the talk that were accurately or inaccurately transcribed.
We understand there may be overlapping comments, but that's a price we have to pay given the other considerations. No decision would be perfect, and it's not the same as usual "dupe" scenarios. We have to be a bit flexible when the situation is different from the norm in important ways.
i expect YC to prioritize publishing this talk so probably the half life of any of this work is measured in days anyway.
100% of our podcast is published for free, but we still have ~1000 people who choose to support our work with a subscription (it does help pay for editors, equipment, and travel). I always feel bad that we don't have much content for them, so i figured i'd put just the slide compilation up for subscribers. i'm trying to find nice ways to ramp up value for our subs over time, mostly by showing "work in progress" things like this that i had to do anyway to summarize/internalize the talk properly - which, again, is what we published entirely free/no subscription required.
that being said, HN is a negative place, and not what I was trying to go for. thank you for your work with the slides!
(As a step towards making it a non-negative place.)
Edit: the emoji at the end of the original sentence wasn't included in the quote. Funny how much difference a smile makes. Original tweet: https://x.com/karpathy/status/1935077692258558443
Reminds me of work where I spend more time figuring out how to run repos than actually modifying code. A lot of my work is focused on figuring out the development environment and deployment process - all with very locked down permissions.
I do think LLMs are likely to change the industry considerably, as LLM-guided rewrites are sometimes easier than adding a new feature or fixing a bug - especially if the rewrite moves the code onto something more LLM-friendly (e.g., a popular framework). Each rewrite makes the code further Claude-codeable or Cursor-codeable; ready to iterate even faster.
Software 3.0 isn't about using AI to write code. It's about using AI instead of code.
So not Human -> AI -> Create Code -> Compile Code -> Code Runs -> The Magic Happens. Instead, it's Human -> AI -> The Magic Happens.
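To make that concrete, here's a minimal sketch (assuming an OpenAI-style chat API; the model name and prompts are just illustrative): the 1.0 version encodes the logic in code you write and maintain, while the 3.0 version skips the code and asks the model directly.

    from openai import OpenAI

    # Software 1.0: the logic lives in code we write and maintain.
    def sentiment_1_0(text: str) -> str:
        negative_words = {"bad", "terrible", "awful", "hate"}
        hits = sum(word in negative_words for word in text.lower().split())
        return "negative" if hits > 0 else "positive"

    # Software 3.0: the "program" is a prompt; the model does the computation directly.
    client = OpenAI()  # assumes an API key in the environment; any compatible endpoint works

    def sentiment_3_0(text: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice, not prescribed by the talk
            messages=[
                {"role": "system", "content": "Answer with exactly one word: positive or negative."},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content.strip().lower()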
This is why I think the AI industry is mostly smoke and mirrors. If these tools are really as revolutionary as they claim they are, then they should be able to build better versions of themselves, and we should be seeing exponential improvements of their capabilities. Yet in the last year or so we've seen marginal improvements based mainly on increasing the scale and quality of the data they're trained on, and the scale of deployments, with some clever engineering work thrown in.
Recursive self-improvement is literally the endgame scenario - hard takeoff, singularity, the works. Are you really saying you're dissatisfied with the progress of those tools because they didn't manage to end the world as we know it just yet?
I think a wall will eventually be hit with this, much like there was with visual recognition in the mid 2010s[0]. It will continue to improve, but not exponentially.
To be fair, I am bullish it will make white collar work fundamentally different, but smart companies will use it to accelerate their workforce's productivity, reliability and delivery, not simply cut labor to the bone, despite that seemingly being every CEO's wet dream right now.
[0]: remember when everyone was making demos and apps that would identify objects and such, and all the facial augmentation stuff? My general understanding is that the tech is now in the incremental improvement stage. I think LLMs will hit the same stage in the near term and likely hover there for quite a while.
I'm personally 50/50 on this prediction at this point. It doesn't feel like we have enough ingredients for end-to-end recursive self-improvement in the next 5 years, but the overall pace is such that I'm hesitant to say it's not likely either.
Still, my reply was to the person who seemed to say they won't be impressed until they see AIs "able to build better versions of themselves" and "exponential improvements of their capabilities" - to this I'm saying, if/when it happens, it'll be the last thing that they'll ever be impressed with.
> remember when everyone was making demos and apps that would identify objects and such, and all the facial augmentation stuff? My general understanding is that the tech is now in the incremental improvement stage.
I thought that a) this got boring, and b) all those advancements got completely blown away by multimodal LLMs and other related models.
My perspective is that we had a breakthrough across the board in this a couple years ago, after the stuff you mentioned happened, and that isn't showing signs of slowing down.
The progress has been adequate and expected, save for very few cases such as generative image and video, which has exceeded my expectations.
Before we reach the point where AI is self-improving on its own, we should go through stages where AI is being improved by humans using AI. That is, if these tools are capable of reasoning and are able to solve advanced logic, math, and programming challenges as shown in benchmarks, then surely they must be more capable of understanding and improving their own codebases with assistance from humans than humans could do alone.
My point is that if this was being done, we should be seeing much greater progress than we've seen so far.
Either these tools are intelligent, or they're highly overrated. Which wouldn't mean that they can't be useful, just not to the extent that they're being marketed as.
The benchmarks are made of questions that humans created and can answer, and are not composed of anything that a human hasn't been able to answer.
> then surely they must be more capable of understanding and improving their own codebases with assistance from humans than humans could do alone.
I don't think that logic follows. The models have proven that they can have more breadth of knowledge than a single human, but not more capability.
Also, they have no particular insight into their own codebases. They only know what is in their training data -- they can use that to form patterns and solve new problems, but they still only have that, plus whatever information is given with the question, as base knowledge.
> My point is that if this was being done, we should be seeing much greater progress than we've seen so far.
The point is taken, but I think your reasoning is weak.
> Either these tools are intelligent, or they're highly overrated. Which wouldn't mean that they can't be useful, just not to the extent that they're being marketed as.
I may have missed the marketing you have seen, but I don't see the big AI companies claiming that they are anything but tools that can help humans do things or replace certain human tasks. They do not advertise super human capability in intelligence tasks.
I suspect you are seeing a lot of hype and unfounded expectations, and using that as a basis for a calculation. The formula might be right, but the variables are incorrect.
We have a seen a LOT of progress with AI and language models in the last few years, but expecting them to go from 'can understand language and solve complicated novel problems' to 'making better versions of themselves using solutions that humans haven't been able to come up with yet' is a bit much to expect.
I don't know if one would call them intelligent, but something can be intelligent and at the same time unable to make substantial leaps forward in emerging fields.
Sure, but they do it at superhuman speeds, and if they truly can reason and come up with novel solutions as some AI proponents claim, then they would be able to come up with better answers as well.
So, yes, they do have more capability in certain aspects than a human. If nothing else, they should be able to draw from their vast knowledgebase in ways that a single human never could. So we should expect to see groundbreaking work in all fields of science. Not just in pattern matching applications as we've seen in some cases already, but in tasks that require actual reasoning and intelligence, particularly programming.
> Also, they have no particular insight into their own codebases.
Why not? Aren't most programming languages in their training datasets, and isn't Python, the language most AI tools are written in, one of the easiest languages to generate? Furthermore, can't the people building these tools feed the model's own codebase into it via context, RAG, etc., the same way most other programmers do?
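For what it's worth, the naive version of "feed the codebase into the context" is just packing source files into a prompt. A rough sketch, with a hypothetical repo path (real tooling would chunk, embed, and retrieve instead):

    from pathlib import Path

    # Naive sketch: concatenate a repo's source files into one prompt context.
    def build_context(repo_root: str, max_chars: int = 100_000) -> str:
        parts, total = [], 0
        for path in sorted(Path(repo_root).rglob("*.py")):
            text = path.read_text(errors="ignore")
            if total + len(text) > max_chars:
                break
            parts.append(f"# file: {path}\n{text}")
            total += len(text)
        return "\n\n".join(parts)

    prompt = ("Here is part of your own codebase. Suggest one concrete improvement.\n\n"
              + build_context("./my_ai_tool"))  # hypothetical repo path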
> I may have missed the marketing you have seen, but I don't see the big AI companies claiming that they are anything but tools that can help humans do things or replace certain human tasks. They do not advertise super human capability in intelligence tasks.
You are downplaying the claims being made by AI companies and its proponents.
According to Sam Altman just a few days ago[1]:
> We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence
> we have recently built systems that are smarter than people in many ways, and are able to significantly amplify the output of people using them
> We already hear from scientists that they are two or three times more productive than they were before AI.
If a human assisted by AI can be more productive than a human alone, then why isn't this productivity boost producing improvements at a faster rate than what the tech industry has been able to deliver so far? Why aren't AI companies dogfooding their products and delivering actual value to humanity beyond benchmark results and shiny demos?
Again, none of this requires actual superhuman levels of intelligence or reaching the singularity. But just based on what they're telling us their products are capable of, the improvements to their own capabilities should be exponential by now.
Karpathy himself gave a perfect example in the talk with that restaurant menu-to-pictures app - it took a few hours of AI-assisted coding to make it, and a week of devops bullshit to publish it. This is the case for everyone, so it slows down the feedback cycles right now.
Give it a couple of months; if we don't have clear evidence of recursive improvements by this time next year, I'll concede something is really off about it all.
I'm simply pointing out the dissonance between what AI companies have been telling us for the past 2+ years, and the results that we would expect to see if their claims were true. I'm not holding my breath that their promises will magically materialize given more time. If they were honest, they would acknowledge that the current tech simply won't get us there because of fundamental issues that still haven't been addressed (e.g. hallucination). But instead it's more profitable to promote software that "thinks", "reasons", and is "close to digital superintelligence".
That menu app is an example of vibe coding, not of increased productivity. He sidestepped the bulk of the work that a human still needs to do to ensure that the software works beyond the happy path scenario he tested, has no security issues, and so on. Clearly, the reason the DevOpsy tasks took him much longer is because he's not an ops person. The solution to this isn't to offload these tasks to AI as well, and ignore all the issues that this could cause on the operational side. It's to either offload them to a competent ops engineer, or become familiar with the tools and processes yourself so that it doesn't take you a week to do them next time.
If you want to use AI to assist you with mindless mechanical tasks, that's fine, I frequently do so too. But don't tell me it's making you more productive when you ignore fundamental processes of software engineering.
If the former, then yes, singularity. The only hope is its "good will" (wouldn't bet on that) or turning off switches.
If the latter you still need more workers (programmers or whatever they'll be called) due to increased demand for compute solutions.
That's too coarse of a choice. It's better than people at an increasingly large number of distinct tasks. But it's not good enough to recursively self-improve just yet - though it is doing it indirectly: it's useful enough to aid researchers and businesses in creating the next generation of models. So in a way, the recursion and the resulting exponent are already there; we're just in such early stages that it looks like linear progress.
Yes and we've actually been able to witness in public the dubious contributions that Copilot has made on public Microsoft repositories.
3 to 5 companies instead of the hundreds of thousands who sell software now
https://leanpub.com/patterns-of-application-development-usin...
I kind of expect that from someone heading a company that appears to have sold the farm in an AI gamble. It’s interesting to see a similar viewpoint here (all biases considered).
What does this mean? An LLM is used via a software interface. I don’t understand how “take software out of the loop” makes any sense when we are using reprogrammable computers.
Our current computing paradigm is built on APIs stacked on APIs.
Those APIs exist to standardize communication between entities.
LLMs are pretty good at communicating between entities.
Why not replace APIs with some form of LLM?
The rebuttal would be around determinism and maintainability, but I don't think the steelman argument is weak enough to dismiss out of hand. Granted: these would likely be highly tuned, more deterministic specialized LLMs.
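To sketch what "an LLM instead of an API" could look like, hypothetically: a request handler that prompts a model for the structured response a hand-written endpoint would normally compute. The endpoint, model name, and schema here are assumptions, not anyone's shipped design.

    import json
    from openai import OpenAI

    client = OpenAI()  # any OpenAI-compatible endpoint

    def handle_request(natural_language_request: str) -> dict:
        # The "API contract" is now a prompt plus a JSON schema hint.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative
            messages=[
                {"role": "system", "content":
                    "You are the inventory service. Reply with JSON only, matching "
                    '{"item": str, "in_stock": bool, "quantity": int}.'},
                {"role": "user", "content": natural_language_request},
            ],
            response_format={"type": "json_object"},
        )
        return json.loads(response.choices[0].message.content)

    # e.g. handle_request("Do we still have any 27-inch monitors left?")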
Maybe I have some misconception here. I think seeing a program or system that is doing this “replace APIs with LLMs” thing would help me understand.
Started learning metal guitar seriously to forget about industry as a whole. Highly recommended!
> imagine that the inputs for the car are on the bottom, and they're going through the software stack to produce the steering and acceleration
> imagine inspecting them, and it's got an autonomy slider
> imagine works as like this binary array of a different situation, of like what works and doesn't work
--
Software 3.0 is imaginary. All in your head.
I'm kidding, of course. He's hyping because he needs to.
Let's imagine together:
Imagine it can be proven to be safe.
Imagine it being reliable.
Imagine I can pre-train on my own cheap commodity hardware.
Imagine no one using it for war.
I want to build things, not "launch startups". If someone asks me about a minimal thing of what I build, I want to be able to speak for hours about it, in detail. I want to completely understand it. I'm an engineer, not an entrepreneur.
This whole concept of Software 3.0 takes away all the craft from the thing. It's soulless.
- "How did you do it?"
- "Dunno, just prompted it LOL."
Sounds very dumb to me. Even if you make loads of money off of it.
That's why I also believe it will not be a "gateway drug to software development". That's just a profound misconception of what makes good software developers tick.
They are now trying to sell the idea of control. You can "put it in a leash". It's pathetic. I don't want control, I want to understand it. Internalize it.
One thing that never changed in software development is that _you get the payoff_ for the hard work. You solve the mystery. You figure out how it works, by yourself.
Anyone who has looked a little closer at AI knows that it doesn't work that way. It's a black box thing. You will never fully get the knowledge payoff.
I'm sure he's a great guy, but let's face it: He's there speaking those things because he made some popular videos in the past and they needed to renew their PR strategy.
Speaking of PR strategy...
> [...] augments [...]
Iron Man does not buy his metal suit at Costco. He builds it, in a cave, with a bunch of scraps. That's the magic of building things.
> I really don't think people understand how great things could be.
That's not the issue. I think it could be great, but they're being greedy and underestimating their best audience.
Instead, they're focusing on complete beginners, in the hopes that those beginners will generate enough monkey-bashing to train a model that can churn good quality "software 1.0" (which is the real deal, so far irreplaceable). I believe that's a mistake.
Have you tried building things with this stuff? I have, it's exciting, there's all kinds of new things to build. I haven't been this excited to build things in a long while.
That's what their PR material seems to imply.
> Have you tried building things with this stuff?
Yes.
> it's exciting, there's all kinds of new things to build
Yes, and no. I don't want to pay to build, or be dependent on an API or model delivery. What I mean is that I want to build the juicy stuff, the models themselves. Generate synthetic data, experiment with different approaches for training, etc. That is way outside the reach of common people (it can't be done now, you need lots of money).
I also don't want to buy a fancy GPU so early. Even the most expensive ones are currently too weak to do anything of real value. I could run some inference or adjust some weights, but I would still be at the mercy of some company delivering me base models.
The prices are all inflated because of the hype, and it seems AI companies are synchronizing this inflation with the hardware companies. I'm betting on this pricing bubble to pop.
Regarding the stuff I can do with already pre-trained models, there's not much to learn. At its core, it's basically the same good old stitching of APIs (I've been doing this for decades). I tried using the AI IDEs, but didn't find much value in them. I'm sure they're great for some scenarios, but it's more of a gimmick to use developers to generate training data than anything else.
Regarding "agents" and stuff, I have zero interest in competing in this "user of model" market. It's so easy to do, that any new idea is flooded with attempts and saturates in mere weeks.
When the opportunity to build something _really_ cool and novel appears, I'll give it a real chance. The race between big tech competitors is killing those opportunities for now, only people hired by big AI companies get to do it, and I'm not one of them.
I've self hosted everything except training, I've been using colab for that, not because it's faster (it isn't) but because the heat is awful.
Before, Google would launch a new Android version. Or Facebook would launch React, and developers would flock en masse to the ecosystem and the new features. These are very low-barrier entry ecosystems. All you need is a cheap computer or phone, which you probably already have for other reasons.
I get the impression that this is not happening with AI tech. Whoever is flocking to these is not contributing much to improve the ecosystem. There's a lot of users, but they're not doing anything interesting, and it's not getting the traction they expected.
I could be wrong, but I think I'm on the right track on this one. High-value collaborators know that they can jump in at any time, there's nothing really special about it that requires the early investment, and currently there is no clear reward for being an early adopter.
The danger I see is related to psychological effects caused by humans using LLMs on other humans. And I don't think that's a scenario anyone is giving much attention to, and it's not that bad (it's bad, but not end-of-the-world bad).
I totally think we should all build it. To be trained from scratch on cheap commodity hardware, so that a lot of people can _really_ learn it and quickly be literate on it. The only true way of democratizing it. If it's not that way, it's a scam.
We need to completely own it and remove the control these shady companies have over it.
Karpathy says nothing changed in 70 years of software. Something very important did: free software.
I don’t think it’s the 4th wave of pioneering a new dawn of civilization but it’s clear LLMs will remain useful when applied correctly.
I stick by my general thesis that OSS will eventually catch up or the gap will be so small only frontier applications will benefit from using the most advanced models
It felt like that was the direction for a while, but in the last year or so, the gap seems to have widened. I'm curious whether this is my perception or validated by some metric.
One crazy thing is that since I keep all my PIM data in git in flat text I now have essentially "siri for Linux" too if I want it. It's a great example of what Karpathy was talking about where improvements in the ML model have consumed the older decision trees and coded integrations.
I'd highly recommend /nothink in the system prompt. Qwen3 is not good at reasoning and tends to get stuck in loops until it fills up its context window.
My current config is qwen2.5-coder-0.5b for my editor plugin and qwen3-8b for interactive chat and aider. I use nibble quants for everything. 0.5b is not enough for something like aider, 8b is too much for interactive editing. I'd also recommend shrinking the ring context in the neovim plugin if you use that since the default is 32k tokens which takes forever and generates a ton of heat.
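If it helps, this is roughly how the /nothink suggestion could be wired into a chat call against a locally served Qwen3, assuming an OpenAI-compatible server (llama.cpp's or Ollama's, say); the host, port, and model name are guesses rather than exact config:

    from openai import OpenAI

    # Local OpenAI-compatible server; URL and model name are assumptions.
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

    response = client.chat.completions.create(
        model="qwen3-8b",
        messages=[
            # "/nothink" asks Qwen3 to skip its long reasoning phase, which helps
            # avoid the context-filling thinking loops mentioned above.
            {"role": "system", "content": "/nothink You are a concise coding assistant."},
            {"role": "user", "content": "Explain what a ring buffer is in two sentences."},
        ],
    )
    print(response.choices[0].message.content)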
Another way to put it: it usually takes a little while for open source projects to catch up, but once they do, they gain traction quite quickly over their closed source counterparts.
The time horizons will be different as they always are, but I believe it will happen eventually.
I’d also argue that browsers got complicated pretty fast, a far cry from libhtml within a few short years.
[0]: of which I contend most useful applications of this technology will not be the generalized ChatGPT interface but specialized, highly tuned models that don't need the scope of a generalized query interface
I think it's a bit early to change your mind here. We love your 2.0; let's wait a bit longer until the dust settles so we can see clearly and up the revision number.
In fact I'm a bit confused about the number AK has in mind. Anyone else knows how he arrived at software 2.0?
I remember a talk by professor Sussman where he suggested we don't know how to compute, yet[1].
I was thinking he meant this,
Software 0.1 - Machine Code/Assembly Code
Software 1.0 - HLLs with Compilers/Interpreters/Libraries
Software 2.0 - Language comprehension with LLMs
If we are calling weights 2.0 and NN with libraries as 3.0, then shouldn't we account for functional and oo programming in the numbering scheme?
Nerds are good at the sort of reassuring arithmetic that can make people confident in an idea or investment. But oftentimes that math misses the forest for the trees, and we're left betting the farm on a profoundly bad idea like Theranos or DogTV. Hey, I guess that's why it's called Venture Capital and not Recreation Investing.
If anything it seemed like the middle ground between AI boosters and doomers.
Maybe they didn't, and it's just your perception.
Software 2.0? 3.0? Why stop there? Why not software 1911.1337? We went through crypto, NFTs, web3.0, now LLMs are hyped as if they are frigging AGI (spoiler, LLMs are not designed to be AGI, and even if they were, you sure as hell won't be the one to use them to your advantage, so why are you so irrationally happy about it?).
Man, this industry is so tiring! What is most tiring is the dog-like enthusiasm of the people who buy it EVERY.DAMN.TIME, as if it's gonna change the life of most of them for the better. Sure, some of these are worse and much more useless than others (NFTs), but at the core of all of it is this cult-like awe we as a society have towards figures like the Karpathys, Musks and Altmans of this world.
How are LLMs gonna help society? How are they gonna help people work, create and connect with one another? They take away the joy of making art, the joy of writing, of learning how to play a musical instrument and sing, and now they are coming for software engineering. Sure, you might be 1%/2% faster, but are you happier, are you smarter (probably not: https://www.mdpi.com/2076-3417/14/10/4115)?
That NotebookLM podcast was like the most unpleasant way I can imagine to consume content. Reading transcripts of live talks is already pretty annoying because it's less concise than the written word. Having it re-expanded by robot-voice back to audio to be read to me just makes it even more unpleasant.
Also sort of perverse we are going audio->transcript->fake audio. "YC has said the official video will take a few weeks to release," - I mean shouldn't one of the 100 AI startups solve this for them?
Anyway, maybe it's just me.. I'm the kind of guy that got a cynical chuckle at the airport the other week when I saw a "magazine of audio books".
The voices sounded REALLY good the first time I used it. But then they sounded exactly the same every time after that, and I became underwhelmed.
https://vocaroo.com/1nZBz5hdjwEh
As a bonus, it's hilarious in its own right.
"We need to rewrite a lot of software," ok... why?
"AI is the new electricity" Really now... so I should expect a bill every month that always increases and to have my access cut off intermittently when there's a rolling AI power outage?
Interesting times indeed.
Who wants to start a pool on when the first advertisement for "Software 3.0" goes up in an airport somewhere?
great name already
They want to onboard as many people onto their stuff and make them as dependent on it as possible, so the switching costs are higher.
It's the classic scam. Look at what Meta is doing now that they've reached the end of the line and are trying to squeeze people for profitability:
- Bringing Ads to WhatsApp: https://apnews.com/article/whatsapp-meta-advertising-messagi...
- Desperately trying by any illegal means possible to steal your data: https://localmess.github.io/
- Firing all the people who built their empire: https://www.thestreet.com/employment/meta-rewards-executives...
- Enabled ethnic cleansing in multiple instances: https://www.amnesty.org/en/latest/news/2022/09/myanmar-faceb...
If you can't see the total moral bankruptcy of Big Tech, you gotta be blind. Don't Be Evil my ass. To me, LLMs have only one purpose: dumb down the population, make people doubt what's real and what's not, and enrich the tech overlords while our societies drown in the garbage they create.
"Q: What does your name (badmephisto) mean?
A: I've had this name for a really long time. I used to be a big fan of Diablo2, so when I had to create my email address username on hotmail, i decided to use Mephisto as my username. But of course Mephisto was already taken, so I tried Mephisto1, Mephisto2, all the way up to about 9, and all was taken. So then I thought... "hmmm, what kind of chracteristic does Mephisto posess?" Now keep in mind that this was about 10 years ago, and my English language dictionary composed of about 20 words. One of them was the word 'bad'. Since Mephisto (the brother of Diablo) was certainly pretty bad, I punched in badmephisto and that worked. Had I known more words it probably would have ended up being evilmephisto or something :p"
Unbelievable. Perhaps some techies should read Goethe's Faust instead of Lord of the Rings.
If you want to scoff at anyone, scoff at 1990s Blizzard Entertainment for using those names in that way
> The more reliance we have on these models, which already is, like, really dramatic
Please point me to a single critical component anywhere that is built on LLMs. There's absolutely no reliance on these models, and ChatGPT being down has absolutely no impact on anything besides teenagers not being able to cheat on their homework and LLM wrappers not being able to wrap.
I love Andrej, but come on.
Writing essentially punch cards 70 years ago, writing C 40 years ago and writing Go or Typescript or Haskell 10 years ago, these are all very different activities.
The main thing that changed about programming is the social/political/bureaucratic side.
> LLMs make mistakes that basically no human will make, like, you know, it will insist that 9.11 is greater than 9.9, or that there are two bars of strawberry. These are some famous examples.
But you answered it: It’s a stupid mistake a human makes when trying to mock the stupid mistakes that LLMs make!
One bundles "AGI" with broken promises and bullshit claims of "benefits to humanity" and "abundance for all" when at the same time it takes jobs away with the goal of achieving 10% global unemployment in the next 5 years.
The other is an overpromised scam wrapped up in worthless minted "tokens" on a slow blockchain (Ethereum).
Terms like "Software 3.0", "Web 3.0" and even "AGI" are all bullshit.
If you read the talk you can find out this and more :)
It takes mouse clicks, sends them to the LLM, and asks it to render static HTML+CSS of the output frame. HTML+CSS is basically a JPEG here; the original implementation WAS JPEG, but diffusion models can't do accurate enough text yet.
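A rough sketch of that loop (model name and prompt wording are illustrative, not the project's actual code):

    from openai import OpenAI

    client = OpenAI()  # illustrative; the real project's model/provider may differ

    def next_frame(current_html: str, click_x: int, click_y: int) -> str:
        # Send the current frame plus the click; ask the model to "render" the next
        # frame directly as self-contained HTML+CSS (no application code in between).
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content":
                    "You are the renderer of a simple desktop UI. Given the current frame "
                    "and a mouse click, return ONLY the next frame as complete HTML+CSS."},
                {"role": "user", "content":
                    f"Current frame:\n{current_html}\n\nMouse click at ({click_x}, {click_y})."},
            ],
        )
        return response.choices[0].message.content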
My conclusions from doing this project and interacting with the result were: if LLMs keep scaling in performance and cost, programming languages are going to fade away. The long-term future won't be LLMs writing code, it'll be LLMs doing direct computation.