Feedback 1: The README really needs more details. What does it do/not do? Don't assume people have used Cursor. If it is a Cursor alternative, does it support all of Cursor's features?
As a non-Cursor user who does AI programming, there is nothing there to make me want to try it out.
Feedback 2: I feel any new agentic AI tool for programming should have a comparison against Aider[1] which for me is the tool to benchmark against. Can you give a compelling reason to use this over Aider? Don't just say "VSCode" - I'm sure there are extensions for VSCode that work with Aider.
As an example of the questions I have:
- Does it have something like Aider's repomap (or better)?
Thanks for the feedback. We'll definitely add a feature list. To answer your question, yes - we support Cursor's features (quick edits, agent mode, chat, inline edits, links to files/folders, fast apply, etc) using open source and openly-available models (for example, we haven't trained our own autocomplete model, but you can bring any autocomplete model or "FIM" model).
We don't have a repomap or codebase summary - right now we're relying on .voidrules and Gather/Agent mode to look around to implement large edits, and we find that works decently well, although we might add something like an auto-summary or Aider's repomap before exiting Beta.
Regarding context - you can customize the context window and reserved amount of token space for each model. You can also use "@ to mention" to include entire files and folders, limited to the context window length. (you can also customize the model's reasoning ability, think tags to parse, tool use format (gemini/openai/anthropic), FIM support, etc).
An important Cursor feature that no one else seems to have implemented yet is documentation indexing. You give it a base URL and it crawls and generates embeddings for API documentation, guides, tutorials, specifications, RFCs, etc. in a very language-agnostic way. That, plus an agent tool to do fuzzy or full-text search on those same docs, would also be nice. Referring to those @docs in the context works really well to ground the LLMs and eliminate API hallucinations.
Back in 2023 one of the cursor devs mentioned [1] that they first convert the HTML to markdown then do n-gram deduplication to remove nav, headers, and footers. The state of the art for chunking has probably gotten a lot better though.
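The dedup idea is easy to sketch: count how many pages each line's n-grams appear on, then drop lines that are mostly cross-page repeats (nav, headers, footers). A toy version in Python - my own sketch of the idea, not Cursor's actual pipeline; the n and threshold values are made up:

```python
from collections import Counter

def line_ngrams(line, n=8):
    """Character n-grams of one markdown line."""
    s = line.strip()
    return {s[i:i + n] for i in range(len(s) - n + 1)} or {s}

def strip_boilerplate(pages, n=8, page_frac=0.5):
    """Drop lines whose n-grams recur on most pages (nav, headers, footers)."""
    doc_freq = Counter()
    for page in pages:
        grams = set()
        for line in page.splitlines():
            if line.strip():
                grams |= line_ngrams(line, n)
        doc_freq.update(grams)                    # count each n-gram once per page
    cutoff = page_frac * len(pages)
    cleaned = []
    for page in pages:
        kept = []
        for line in page.splitlines():
            if not line.strip():
                continue
            grams = line_ngrams(line, n)
            boilerplate = sum(doc_freq[g] > cutoff for g in grams)
            if boilerplate <= len(grams) / 2:     # mostly unique -> keep the line
                kept.append(line)
        cleaned.append("\n".join(kept))
    return cleaned
```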
The continue.dev plugin for Visual Studio Code provides documentation indexing. You provide a base URL and a tag. The plugin then scrapes the documentation and builds a RAG index. This allows you to use the documentation as context within chat. For example, you could ask @godotengine what is a sprite?
Context7 is missing lots of info from the repos it indexes and getting overbloated with similar-sounding repos, which is becoming confusing for LLMs.
Can you elaborate on how Context7 handles document indexing or web crawling? If I connect to the MCP server, will it be able to crawl websites fed to it?
This is a good point. We've stayed away from documentation, assuming that it's more of a browser-agent task, and I agree with other commenters that this would make a good MCP integration.
I wonder if the next round of models trained on tool-use will be good at looking at documentation. That might solve the problem completely, although OSS and offline models will need another solution. We're definitely open to trying things out here, and will likely add a browser-using docs scraper before exiting Beta.
I agree that on the face of it this is extremely useful. I tried using it for multiple libraries and it was a complete failure though; it failed to crawl fairly standard MkDocs and Sphinx sites. I guess it's better for the 'built-in' ones that they've pre-indexed.
I use it mostly to index stuff like Rust docs on docs.rs and rendered mdbooks. The RAG is hit or miss but I haven’t had trouble getting things indexed.
I've used both Cursor and Aider but I've always wanted something simple that I have full control on, if not just to understand how they work. So I made a minimal coding agent (with edit capability) that is fully functional using only seven tools: read, write, diff, browse, command, ask, and think.
I can just disable `ask` tool for example to have it easily go full autonomous on certain tasks.
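The core loop is small enough to sketch here. This is a simplified version against an OpenAI-compatible client, not the project's actual code: `diff` and `browse` are omitted and the tool schemas are trimmed down.

```python
import json
import subprocess
from openai import OpenAI  # any OpenAI-compatible endpoint works

client = OpenAI()

# Five of the seven tools (diff and browse omitted to keep this short).
IMPLS = {
    "read":    lambda path: open(path).read(),
    "write":   lambda path, content: (open(path, "w").write(content), "ok")[1],
    "command": lambda cmd: subprocess.run(cmd, shell=True, capture_output=True,
                                          text=True).stdout,
    "ask":     lambda question: input(question + "\n> "),
    "think":   lambda note: note,  # scratchpad: result is just echoed back
}

def spec(name, *params):
    """Trimmed-down JSON schema; the real thing wants descriptions too."""
    return {"type": "function", "function": {"name": name, "parameters": {
        "type": "object",
        "properties": {p: {"type": "string"} for p in params},
        "required": list(params)}}}

SPECS = {"read": spec("read", "path"),
         "write": spec("write", "path", "content"),
         "command": spec("command", "cmd"),
         "ask": spec("ask", "question"),
         "think": spec("think", "note")}

def run_agent(task, enabled=frozenset(IMPLS)):
    msgs = [{"role": "user", "content": task}]
    while True:
        msg = client.chat.completions.create(
            model="gpt-4o", messages=msgs,
            tools=[SPECS[n] for n in enabled]).choices[0].message
        if not msg.tool_calls:
            return msg.content                    # no tool call -> agent is done
        msgs.append(msg)
        for call in msg.tool_calls:
            args = json.loads(call.function.arguments)
            result = IMPLS[call.function.name](**args)
            msgs.append({"role": "tool", "tool_call_id": call.id,
                         "content": str(result)})

# Drop `ask` from the enabled set to go fully autonomous on a task:
# run_agent("add a --verbose flag to cli.py", enabled=frozenset(IMPLS) - {"ask"})
```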
> The README really needs more details. What does it do/not do? Don't assume people have used Cursor. If it is a Cursor alternative, does it support all of Cursor's features?
That's all on the website, not in the README, but yes, a bulleted list or identical info from the site would work well.
Am I the only one who has had bad experiences with Aider? Each time I've tried it, I had to wrestle with and beg the AI to do what I wanted, almost always ending with me just taking over and doing it myself.
If nearly every time I use it to accomplish something it gets it 40-85% correct and I have to go in and fix the other 15-60%, what is the point? It's as slow as hand-writing code then, if not slower, and my flow with Continue is simply better:
1. Ctrl+L a block of code
2. Ask a question or give a task
3. I read what it says and then apply the change myself via Ctrl+C, tweaking the one or two little things it inevitably misunderstood about my system and its requirements
Aider is quite configurable; you need to look at the leaderboard and copy one of the high-performing model/config setups. Additionally, you should autoload files such as the README and coding guidelines for your project.
Aider's killer features are integration of automated lint/typecheck/test and fix loops with git checkpointing. If you're not setting up these features you aren't getting the full value proposition from it.
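The shape of that loop is simple: run the checks, feed failures back to the model, and commit after every AI edit so you can always revert. A stripped-down sketch with the model edit left as a hypothetical hook (aider wires this up for you via flags along the lines of --test-cmd/--auto-test, if memory serves):

```python
import subprocess

def sh(*cmd):
    return subprocess.run(cmd, capture_output=True, text=True)

def checkpoint(msg):
    sh("git", "add", "-A")
    sh("git", "commit", "-m", msg)       # every AI edit becomes a revertable commit

def fix_loop(apply_llm_edit, test_cmd=("pytest", "-x"), max_rounds=5):
    """apply_llm_edit(failure_output) is a hypothetical hook that asks the
    model for edits and writes them to disk."""
    for _ in range(max_rounds):
        result = sh(*test_cmd)
        if result.returncode == 0:
            return True                   # tests pass -> done
        apply_llm_edit(result.stdout + result.stderr)
        checkpoint("ai: attempt to fix failing tests")
    return False                          # give up, leave the history for review
```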
Never used the tool, but it seems both Aider and Cursor are not at their strongest out of the box? I've read similar things about Cursor and doing custom configuration so it picks up coding guidelines, etc. Is there some kind of agreed, documented best-practice standard, or just trial-and-error best practices from users sharing these?
Aider's leaderboard is a baseline "best practice" for model/edit format/mode selection. Beyond that, it's basically whatever you think are best practices in engineering and code style, which you should capture in documents that can serve double duty both for AI and for human contributors. Given that a lot of this stuff is highly contentious it's really up to you as to pick and choose what you prefer.
And what do you do if you value privacy and don't want to share everything in your project with Silicon Valley, or you don't want to spend $8/hr to watch Claude do your hobby for you?
I'm getting really tired of AI advocates telling me that AI is as good as the hype if I just pay more (say, fellow HNer, your paycheck doesn't come from those models, does it?), or use the "right" prompt. Give some examples.
If I have to write a prompt as long as Claude's system prompt in order to get reliable edits, I'm not sure I've saved any time at all.
At least you seem to be admitting that aider is useless with local models. That's certainly my experience.
I haven't used local models. I don't have the 60+gb of vram to do so.
I've tested Aider with Gemini 2.5 with prompts as basic as 'write a TS file with Puppeteer to load this URL, click the button identified by x, fill in input y, loop over these URLs' and it performed remarkably well.
LLM performance is 100% dependent on the model you're using, so you can hardly generalize from a small model you run locally on a CPU.
Local models just aren't there yet in terms of being able to host locally on your laptop without extra hardware.
We're hoping that one of the big labs will distill an ~8B to ~32B parameter model that performs at SOTA on benchmarks! This would be huge for cost, and would probably make it reasonable for most people to code with agents in parallel.
This is exactly the issue I have with copilot in office. It doesn't learn from my style so I have to be very specific how I want things. At that point it's quicker to just write it myself.
Perhaps when it can dynamically learn on the go this will be better. But right now it's not terribly useful.
I really wonder why dynamic learning hasn't been explored more. It would be a huge moat for the labs (everyone would have to host and dynamically train their own model with a major lab). Seems like it would make the AI way smarter too.
> At least you seem to be admitting that aider is useless with local models. That's certainly my experience.
I don't see it as an admission. I'd wager 99% of Aider users simply haven't tried local models.
Although I would expect they would be much worse than Sonnet, etc.
> I'm getting really tired of AI advocates telling me that AI is as good as the hype if I just pay more (say, fellow HNer, your paycheck doesn't come from those models, does it?), or use the "right" prompt. Give some examples.
Examples? Aider is a great tool and much (probably most) of it is written by AI.
It feels like everyone and their mother is building coding agents these days. Curious how this compares to others like Cline, VS Code Copilot's Agent mode, Roo Code, Kilo Code, Zed, etc. Not to mention those that are closed source, CLI based, etc. Any standout features?
Void dev here! The biggest players in AI code today are full IDEs, not just extensions, and we think that's because they simply feel better to use by having more control over the UX.
There are certainly a lot of alternatives that are plugins(!), but our differentiation right now is being a full open source IDE and having all the features you get out of the big players (quick edits, agent mode, autocomplete, checkpoints).
Surprisingly, all of the big IDEs today (Cursor/Windsurf/Copilot) route every message you send through their backend, and there is no open source full-IDE alternative (besides Void). With Void, your connection to providers is direct, and it's a lot easier to spin up your own models/providers and host locally or use whatever provider you want.
We're planning on building Git branching for agents in the next iteration when LLMs are more independent, and controlling the full IDE experience for that will be really important. I worry plugins will struggle.
I should have been more careful with my wording - I was talking about major VS Code-based IDEs as alternatives. Zed is very impressive, and we've been following them since before Void's launch!
Maybe I live in a bubble, but it's surprising to me that nobody mentions Jetbrains in all these discussions. Which in my professional working experience are the only IDEs anyone uses :shrug:
> The biggest players in AI code today are full IDEs, not just extensions,
Claude Code (neither IDE nor extension) is rapidly gaining ground, its biggest current limitation being cost, which is likely to get resolved sooner rather than later (Gemini Code, anyone?). You're right about the right now, but with the pace at which things are moving, the trends are honestly more relevant than the status quo.
Just want to share our thinking on terminal-based tools!
We think in 1-2 years people will write code at a systems level, not a function level, and it's not clear to us that you can do that with text. Text-based tools like Claude Code work in our text-based-code systems today, but I think describing algorithms to a computer in the future might involve more diagrams, and terminal will not be ideal. That's our reasoning against building a tool in the terminal, but it clearly works well today, and is the simplest way for the labs to train/run terminal tool-use agents.
Every system can be translated to text though. If there is one thing LLMs have essentially always been good at, it is processing written language.
Hey by the way I hear all communication between people is going to shift to pictograms soon. You know -- emoji and hieroglyphs. Text just isn't ideal, you know
Diagrams are great at providing a simplified view of things but they suck ass when it comes to providing details.
There's a reason why fully creating systems from them died 20 years ago - and it wasn't just because the code gen failed. Finding a bug in your spec when it's a mess of arrows and connections can be nigh impossible.
This is completely true, and it's a really common objection.
I don't imagine people will want to fully visualize codebases in a giant unified diagram, but I find it hard to imagine that we won't have digests and overviews that at least stray from plaintext in some way.
I think there are a lot of unexplored ways of using AI to create an intelligent overview of a repo and its data structures, or a React project and its components and state, etc.
> I think there are a lot of unexplored ways of using AI to create an intelligent overview of a repo and its data structures, or a React project and its components and state, etc.
Sounds exactly like what DeepWiki is doing from the Devin AI Agent guys: https://deepwiki.com
Spending too much time on HN and other spaces (including offline) where people talk about what they're doing. Making LLM-based things has also been my job since pretty much the original release of GPT3.5 which kicked off the whole industry, so I have an excuse.
The big giveaway is that everyone who has tried it agrees that it's clearly the best agentic coding tool out there. The very few who change back to whatever they were using before (whether IDE fork, extension or terminal agent), do so because of the costs.
Relevant post on the front page right now: A flat pricing subscription for Claude Code [0]. The comment section supports the above as well.
Together with 3.7 Sonnet. And the claim was that it is rapidly gaining ground, not that it sparked initial interest. I still don’t see much proof of adoption. This is actually the first I’ve heard about anyone actually actively using it since its launch.
>This is actually the first I’ve heard about anyone actually actively using it
I've been reaching for Claude Code first for the last couple weeks. They had offered me a $40 credit after I tried it and didn't really use it, maybe 6 weeks ago, but since then I've been using it a lot. I've spent that credit and another $30, and it's REALLY good. One thing I like about Claude Code is you can "/init" and it will create a "CLAUDE.md" that saves off its understanding of the code, and then you can modify it to give it some working knowledge.
I've also tried Codex with OpenAI and o4-mini, and it works very well too, though I have had it crash on me which claude has not.
I did try Codex with Gemini 2.5 Pro Preview, but it is really weird. It seems to not be able to do any editing, it'll say "You need to make these edits to this file (and describe high level fixes) and then come back when you're done and I'll tell you the edits to do to this other file." So that integration doesn't seem to be complete. I had high hopes because of the reviews of the new 2.5 Pro.
I also tried some Claude-like use in the AI panel in Zed yesterday and made a lot of good progress - it seemed to work pretty well, but then at some point it zeroed out a couple of files. I think I might have reached a token limit: it was saying "110K out of 200K" but then something else said "120K", and I wonder if that confused it. With Codex you can compact the history; I didn't see that in Zed. Then at some point my Zed switched from editing to needing me to accept every change. I used nearly the entire trial Zed allowance yesterday asking it to implement a Galaga-inspired game, with varying success.
The versioning and git branching sounds really neat, I think! Can you say more about that? Curious if you've looked at/are considering using Jujutsu/JJ[0] in addition or instead of git for this, I've played with it some, but been considering trying it more with new AI coding stuff, it feels like it could be a more natural fit than actually creating explicit commits for every change, while still tracking them all? Just a thought!
Interesting, thanks for sharing! We planned on spinning up a new Git branch and shallow clone (or possibly a worktree/something more optimized) for each agent, and also adding a small auto-merge-with-LLM flow, although something more granular like this might feel better. If we don't start with a versioning tool like JJ (we may just use Git for simplicity), we will certainly consider it later on, or might end up building our own.
If you're open to something CLI-based, my project Plandex[1] offers git-based branching (and granular versioning) for AI coding. It also has a sandbox (also built on git) that keeps cumulative changes separate from project files until they're ready to apply.
Isn't continue.dev also open source and not using 'their backend' when sending stuff? I haven't used it in a while, but I know it had support for llama, local models for tab completions, etc.
The extensions API lets you control the sidebar, but you basically don't have control over anything in the editor. We wouldn't have been able to build our inline edit feature, or our navigation UI if we were an extension.
Big fan of Continue btw! There's a small difference in how we handle inline edits - if you've used inline edits in Cursor/Windsurf/Void you'll notice that a box appears above the text you are selecting, and you can type inside of it. This isn't possible with VS Code extensions alone (you _have_ to type into the sidebar).
If I understand your question correctly - Cline and Roo both display diffs by using built-in VS Code components, while Cursor/Windsurf/Void have built their own custom UI to display diffs. Very small detail, and just a matter of preference.
It's about whether the tool can edit just a few lines of the file, or whether it needs to stream the whole file every time - in effect, editing the whole file even though the end result may differ by just a few lines.
I think editing just a part of the file is what Roo calls diff editing, and I'm asking if this is what the person above means by line edits.
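At its simplest, the mechanism is the model emitting a search block and a replace block, and the tool patching the file in place instead of re-streaming the whole thing. A toy version of the apply step (my sketch, not Roo's exact format):

```python
def apply_search_replace(source: str, search: str, replace: str) -> str:
    """Patch one hunk in place; the model only streams the changed lines."""
    if source.count(search) != 1:
        # ambiguous or stale context -> safer to reject and re-prompt the model
        raise ValueError("search block must match the file exactly once")
    return source.replace(search, replace, 1)

# e.g. the model emits just the hunk, not the whole file:
new_text = apply_search_replace(
    open("config.py").read(),
    search="TIMEOUT = 30",
    replace="TIMEOUT = 60",
)
```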
I think it'd be worthwhile to call out in a FAQ/comparison table specifically how something like an "AI powered IDE" such as Cursor/Void differs from just using an IDE + a full-featured agentic plugin (VS Codium + Cline).
I agree - having used Cline, I am not sure what advantages this would offer, but I would like to know (beyond things like "it's got an open source IDE" - Cline has that too, specifically because I can use it in my open source IDE).
Yep, Void is a VSCode fork, but we're definitely not wed to VSCode! Building our own IDE/browser-port is not out of the picture. We'll have to see where the next iteration of tool-use agents takes us, but we strongly feel writing typescript/rust/react is not the endgame when describing algorithms to a computer, and a text-based editor might not be ideal in 10 years, or even 2.
OpenAI chose to acquire Windsurf for $3B instead of building something like Void - very curious decision. Awesome project, will be closely following this.
I think it's worth mentioning that the Theia IDE is a fully open source VS Code-compatible IDE (not a fork of VS Code) that's actively adding AI features with a focus on transparency and hackability.
We considered Theia, and even building our own IDE, but obviously VSCode is just the most popular. Theia might be a good play if Microsoft gets more aggressive about VSCode forks, although it's not clear to us that people will be spending their time writing code in 1-2 years. Chances are definitely not 0 that we end up moving away from VSCode as things progress.
It's the most popular because the tech is decades old. You're all rushing to copy obsolete technology. Now we have 10 copies of an obsolete technology.
I mean I guess I should thank the 10 teams who forked VSCode for proving beyond all reasonable doubt that VSCode is architecturally obsolete. I was already trying to make that argument, but the 10 forks do it so much better.
>> The biggest players in AI code today are full IDEs, not just extensions
Are you sure? I have some expertise with my IDE and a wide range of extensions which solve problems for me; I've learnt shortcuts, troubleshooting, where and whom to ask for help - but now you're telling me that I'm better off leaving all that behind, and it's better for me? ;o
My 2c: I rarely need agent mode. As an older engineer, I usually know exactly what needs to be done and have no problem describing to the LLM what to do to solve what I'm aiming for. Agent mode seems more for novice developers who are unsure how tasks need to be broken down and the strategy by which they are then solved.
I’m a senior engineer and I find myself using agents all the time. Working on huge codebases or experimenting with different languages and technologies makes everybody “novice”.
Agent mode seems to be better at realizing all the places in the code base that need to be updated, particularly if the feature touches 5+ files, whereas plain edit mode starts to struggle with features that touch 2-3 files. For example: "every 60 ticks, predict which items should get cached based on user direction of travel, then fetch, transform and cache them. when new items need to be drawn, check the cache first and draw from there, otherwise fetch and transform on demand." This touches the core engine, user movement, file operations, graphics, etc., and agent mode seems to have no problem with it at all.
Personally, I’ve found agents to be a great “multitasking” tool.
Let’s say I make a few changes in the code that will require changes or additions to tests. I give the agent the test command I want it to run and the files to read, and let it cycle between running tests and modifying files.
While it’s doing that, I open Slack or do whatever else I need to do.
After a few minutes, I come back, review the agent’s changes, fix anything that needs to be fixed or give it further instructions, and move to the next thing.
Same here. It’s fine for me to use the ChatGPT web interface and switch between it and my IDE/editor.
Context switching is not the bottleneck. I actually like to go away from the IDE/keyboard to think through problems in a different environment (so a voice version of chatgpt that I can talk to via my smartwatch while walking and see some answers either on my smartglasses or via sound would be ideal… I don’t really need more screen (monitor) time)
Sorry to say but this workflow just isn't great unless you're working on something where AI models aren't that helpful -- obscure language/libraries/etc where they hallucinate or write non-idiomatic solutions if left to run much by themselves. In that case, you want the strong human review loop that comes from crafting the context via copy paste and inspecting the results before copying back.
For well trodden paths that AI is good at, you're wasting a ton of time copying context and lint/typechecking/test results and copying back edits. You could probably double your productivity by having an agentic coding workflow in the background doing stuff that's easy while you manually focus on harder problems, or just managing two agents that are working on easy code.
20yrs engineer here, all my life I've dreamed of having something that I could ask general questions about a codebase to and get back a cohesive, useful answer. And that future is now.
I would put it more generically: I love that one can now ask as many dumb questions as it takes about anything.
With humans there is a point where even the most patient teacher has to move on to other things. Learning is best when one is curious about something, and curiosity is more often specific. (When it's generic, one can just read the manual.)
At its most basic, agent mode is necessary for building the proper context. While I might know the solution at a high level, I need the agent to explore the code base to find the things I reference and bring them into context before writing code.
Agentic mode is also super helpful for getting LLMs from "99%" correct code to "100%" correct code. I'll ask them to do something to verify their work. This is often when the agent realizes it hallucinated a method name or used a directionally correct, but wrong column name.
I don't agree. I use agents all the time. I say exactly what the agent should do, but often changes need to be made in more than one place in the code base. Could I prompt it for every change, one at a time per file? Sure, but it is faster to prompt an agent for it.
"Novice mode" has always been true for the newcomer. When I was new, I really was at the mercy of:
1) Authority (whatever a prominent evangelist developer was peddling)
2) The book I was following as a guide
3) The tutorial I was following as a guide
4) The consensus of the crowd at the time
5) Whatever worked (SO, brute force, whatever library, whatever magic)
It took a long ass time before I got to throw all five of those things out (throw the map away). At the moment, #5 on that list is AI (whatever works). It's a Rite of Passage, and because so much of being a developer involves autodidacticism, this is a valley you must go through. Even so, it's pretty cool when you make it out of that valley (you can do whatever you want without any anxiety about is this the right path?). You are never fearful or lost in the valley(s) for the most part afterward.
Most people have not deployed enough critical code that was mostly written with AI. It's when that stuff breaks, and they have to debug it with AI, that's when they'll have to contend with the blood, sweat, and tears. That's when the person will swear that they'll never use AI again. The thing is, we can never not use AI ever again. So, this is the trial by fire where many will figure out the depth of the valley and emerge from it with all the lessons. I can only speculate, but I suspect the lessons will be something along the lines of "some things should use less AI than others".
I think it's a cool journey, best of luck to the AI-first crowd, you will learn lessons the rest of us are not brave enough to embark on. I already have a basket of lessons, so I travel differently through the valley (hint: My ship still has a helm).
> that's when they'll have to contend with the blood, sweat, and tears.
Or, most software will become immutable. You'll just replace it.
You'll throw away the mess, and let a newer LLM build a better version in a couple of days. You ask the LLM to write down the specs for the newer version based on the old code.
If that is true, then the crowd that is not brave enough to do AI-first will just be left behind.
Do we really want that? To be beholden to the hands of a few?
Hell, you can't even purchase a GPU with high enough VRAM these days for an acceptable amount of money, in part because of geopolitics. I wonder how many more restrictions are to come.
There's a lot of FOMO going around, those honing their programming skills will continue to thrive, and that's a guarantee. Don't become a vassal when you can be a king.
The scenario you paint sounds very implausible for non-trivial applications, but even if it ends up becoming the development paradigm, I doubt anyone will be "left behind" as such. People will have time to re-skill. The question is whether some will ever want to or would prefer to take up woodworking.
Whether one takes up woodworking or not depends on whether development was primarily for profit, with little to no intrinsic enjoyment of the role.
Coding and woodworking are similar from my perspective; they are both creative arts. I like coding in different languages, and woodworking is simply a physical manifestation of the same urge. A world where you only need agents is not a world where nerds will be employed. Traditional nerds can't stand out from the crowd anymore.
This is peak AI; it only goes downhill from here in terms of quality, and the AI-first workflows will be replaceable. Those offshored teams that we have suffered with for years will be the first replaced (the google-first programmers). And developers will continue, working around the edges. The difference will be that startups won't be able to use technology hoarding to stifle competition, unless they make themselves immune to the AI vacuums.
I can appreciate the comments further up about how AI can help unravel the mysteries of a legacy codebase. Being able to ask questions about code in quick succession will mean that we feel more confident. AI is lossy, hard to direct, yet always very confident. We have 10k-line functions in our legacy code that nest and nest. How confident are you letting AI refactor this code without oversight and shipping it to a customer? Thus far I'm not; maybe I don't know the best models and tools to use and how to apply them, but even if one of those logic branches gets hallucinated, I'm in for a very bumpy ride. Watching non-technical people at my org get frustrated and stuck with it in a loop is a lot more common than the successes, which seem to belong to the experienced engineers who use it as a tool, not a savior. But every situation is different.
If you think your company can be a differentiator in the market because it has access to the same AI tools as every other company? Well, we'll see about that. I believe there has to be more.
I'm an experienced engineer of 30+ years. Technology comes and goes; AI is just another tool in the chest. I use it primarily because I don't have to deal with ads. I also use it to be an electrical engineer, designing circuits in areas I am not familiar with. I can see the novice side of the coin very simply: it feels like you have superpowers because you just don't know enough about the subject to be aware of anything else. It's sped up the learning cycle considerably because of the conversational nature. After a few years of projects, I know how to ask better questions to get better results.
> If that is true, then the crowd that is not brave enough to do AI-first will just be left behind.
Not even; devoured might be more apt. If I'm manually moving through this valley and a flood is coming through, those who are sticking automatic propellers and navigation systems on their ships are going to be the ones that can surf the flood and come out of the valley. We don't know; this is literally the adventure. I'm personally on the side of a hybrid approach. It's fun as hell, best of luck to everyone.
It's in poor taste to bring up this example, but I'll mention it as softly as I can. There were some people that went down looking for the Titanic recently. It could have worked, you know what I mean? These are risks we all take.
Quoting the Admiral from the StarCraft: Brood War cinematic (I'm a learned person):
> It's in poor taste to bring up this example, but I'll mention it as softly as I can. There were some people that went down looking for the Titanic recently. It could have worked, you know what I mean?
Not sure if you drew the right conclusion from that one.
Considering that Agent Mode saves me a lot of hassle doing refactoring ("move the handler to file X and review imports", "check where this constant is used and replace it with <another> for <these cases>", etc.), I'd say you are missing the point...
I actually flip things - I do the breakdown myself in a SPEC.md file and then have the agent work through it. Markdown checklists work great, and the agent can usually update/expand them as it goes.
I think this perspective is better characterized as “solo” and not “old”. I don’t think your age is relevant here.
Senior engineers are not necessarily old but have the experience to delegate manageable tasks to peers including juniors and collaborate with stakeholders. They’re part of an organization by definition. They’re senior to their peers in terms of experience or knowledge, not age.
Agentic AIs slot into this pattern easily.
If you are a solo dev you may not find this valuable. If you are a senior then you probably do.
One benefit comes when working on multiple code bases, where each code base is larger than what the time spent in it lets you internalize, so there is still a knowledge gap. Agents don't guarantee the correctness of a search the way an old search field does, but they offer a much more expressive way to do searches and queries in a code base.
Now that I think about it, I might have only ever used agents for searching and answering questions, not for producing code. Perhaps I don't trust the AI to build a good enough structure, so while I'll use AI, it is one file at a time sort of interaction where I see every change it makes. I should probably try out one of these agent based models for a throw away project just to get more anecdotes to base my opinion on.
Coding agents are the future and it's anyone's game right now.
The main reason I think there is such a proliferation is it's not clear what the best interface to coding agents will be. Is it in Slack and Linear? Is it on the CLI? Is it a web interface with a code editor? Is it VS Code or Zed?
Just like everyone has their favored IDE, in a few years time, I think everyone will have their favored interaction pattern for coding agents.
Product managers might like Devin because they don't need to setup an environment. Software engineers might still prefer Cursor because they want to edit the code and run tests on their own.
Cursor has a concept of a shadow workspace and I think we're going to see this across all coding agents. You kick off an async task in whatever IDE you use and it presents the results of the agent in an easy to review way a bit later.
As for Void, I think being open source is valuable on its own. My understanding is Microsoft could enforce license restrictions at some point down the road to make Cursor difficult to use with certain extensions.
I've tried many of the AI coding IDEs; the best ones, like RooCode, are good simply because they don't gimp your context. The modern models are already more than capable enough for many coding tasks - you just need to leave them alone and let them utilize their full context window, and all will go well. When you hear about a bad experience with any of these IDEs, most of the time it's because the IDE is limiting context or mismanaging related functions.
We think terminal tools like Claude Code are a good way for research teams to experiment with tool use (obviously pure text), but definitely don't see the terminal as the endgame for these tools.
I know some folks like using the terminal, but if you like Claude Code you should consider plugging your API key into Void and using Claude there! Same exact model and provider and price, but with a UI around the tool calls, checkpoints, etc.
That doesn't really narrow it down much, YC has backed so many AI coding tools that they've started inbreeding. PearAI (YC Fall '24) is a fork of Continue (YC Summer '23).
One of the founders here - Void will always remain open source! There are plenty of examples of an open source alternative finding its own niche (eg Supabase, Mattermost) and we don't see this being any different.
I've been at many open source meetups with YC founders and can tell you that this is not the thinking at all. Rather, the emphasis is on finding a good carve-line between the open source offering and the (eventual) paid one, so that both sides are viable and can thrive.
Most common these days is to make the paid product be a hosted version of the open source software, but there are other ways too. Experienced founders emphasize to new startups how important it is to get this right and to keep your open source community happy.
No one I've heard is treating open source like a bait and switch; quite the opposite. What is sought is a win-win where each component (open source and paid) does better because of the other.
I think there’s a general misconception out there that open sourcing will cannibalize your hosted product business if you make it too easy to run. But in practice, there’s not a lot of overlap between people who want to self-host and people who want cloud. Most people who want cloud still want it even if they can self-host with a single command.
The weird thing is, the biggest reason I don't use Cursor much is that they just distribute this AppImage, which doesn't install or add itself to the Ubuntu app menu - it just sits there, and when I run it I get:
The setuid sandbox is not running as root. Common causes:
* An unprivileged process using ptrace on it, like a debugger.
* A parent process set prctl(PR_SET_NO_NEW_PRIVS, ...)
Failed to move to new namespace: PID namespaces supported, Network namespace supported, but failed: errno = Operation not permitted
I have to go Googling, then realize I have to run it with a special flag.
Often I'm lazy to do all of this and just use the Claude / ChatGPT web version and paste code back and forth to VS code.
The effort required to start Cursor is the reason I don't use it much. VS code is an actual, bona fide installed app with an icon that sits on my screen, I just click it to launch it. So much easier. Even if I have to write code manually.
AppImageLauncher improves the AppImage experience a lot, including making sure they get added to the menu. I'm not sure if it makes launching without the sandbox easier or not.
Not only did you mess up the formatting, but you pasted very lengthy code, generated by an LLM. Perhaps consider using a pastebin in the future, if at all.
Yup - honestly the space is still so open right now, everyone is trying, haha. It's got quite hard to keep track of different models and their strengths/weaknesses, much less the IDE and editor space! I have no idea which of these AI editors would suit me best, and a new one comes out like every day.
I'm still in vim with copilot and know I'm missing out. Anyway I'm also adding to the problem as I've got my own too (don't we all?!), at https://codeplusequalsai.com. Coded in vim 'cause I can't decide on an editor!
I wonder why most agentic patterns don't use multiple different retrieval strategies simultaneously, and why most of them don't use CodeGraph [1] during the discovery phase. Embeddings aren't enough; agent-induced function/class name search isn't enough.
There's so much happening in this space, but I still haven't seen what would be the killer feature for me: dual-mode operation in IDE and CLI.
In a project where I already have a lot of linting brought into the editor, I want to be able to reuse that linting in a headless mode: start something at the CLI, then hop into the IDE when it says it's done or needs help. I'd be able to see the conversation up to that point and the agent would be able to see my linting errors before I start using it in the IDE. For a large, existing codebase that will require a lot of guardrails for an agent to be successful, it's disheartening to imagine splitting customization efforts between separate CLI and IDE tools.
For me so far, cursor's still the state of the art. But it's hard to go all-in on it if I'll also have to go all-in on a CLI system in parallel. Do any of the tools that are coming out have the kind of dual-mode operation I'm interested in? There's so many it's hard to even evaluate them all.
I posted this the other day, but didn't get a response:
Does anyone think this model of "tool" + "curated model aggregator" + "open source" would be useful for other, non-developer fields? For instance, would an AI art tool with sculpting and drawing benefit from being open source?
I've talked with VCs that love open source developer tools, but they seem to hate on the idea of "open creative tools" for designers, illustrators, filmmakers, and other creatives. They say these folks don't benefit from open source. I don't quite get it, because Blender and Krita have millions of users. (ComfyUI is already kind of in that space, it's just not very user-friendly.)
Why do investors seem to want non-developer things to be closed source? Are they right?
I think it’s mostly a value capture thing. More money to be made hooking devs in then broke creatives and failing studios (no offense, it just seems like creatives are getting crushed right now). In one case you’re building for the tech ecosystem, in the other for the arts. VC will favor tech, higher multiples. Closed source is more protected from theft etc in many cases.
But as you point out there are great solutions so it’s clearly not a dead end path.
I mostly use Cursor for the monthly flat pricing which allows me unlimited (slow) calls to most LLMs (Gemini 2.5 Pro, Claude 3.7, etc) without worrying about spending anything more than $20/month.
Its agent is a lot worse than Cursor's in my experience so far. Even tab edits feel worse.
My understanding is that these are not custom models but a combination of prompting and steering. That makes Cursor's performance relative to others pretty surprising to me. Are they just making more requests? I wonder what the secret sauce is.
One thing I noticed is that there's no cost tracking, so it's very hard to predict how much you're spending. This is fine on tools like Cursor that are all inclusive, but is something that is really necessary if you're bringing your own API keys.
This is a great suggestion. We're actually storing the input/output costs of most models, but aren't computing cost estimates yet. Definitely something to add. My only hesitation is that token-based cost estimates may not be accurate (most models do not provide their tokenizers, so you have to eg. estimate the average number of characters per token in order to compute the cost, and this may vary per model).
It'd probably be useful to just show cost after the fact based on the usage returned from the API. Even if I don't know how much my first request will cost, if I know my last request cost x cents then I can probably have a good idea from there.
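Most OpenAI-compatible APIs return exact token counts on every response, so after-the-fact tracking needs no tokenizer at all. A sketch of the idea - the per-1M-token prices below are placeholder values you'd fill in per model and provider:

```python
from openai import OpenAI

client = OpenAI()

# $ per 1M input/output tokens -- placeholder values, set per model/provider
PRICES = {"gpt-4o": (2.50, 10.00)}

def chat_with_cost(model, messages):
    resp = client.chat.completions.create(model=model, messages=messages)
    in_price, out_price = PRICES[model]
    # usage comes back on the response itself, so the count is exact
    cost = (resp.usage.prompt_tokens * in_price
            + resp.usage.completion_tokens * out_price) / 1_000_000
    return resp.choices[0].message.content, cost

reply, cost = chat_with_cost("gpt-4o", [{"role": "user", "content": "hi"}])
print(f"last request cost ${cost:.4f}")
```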
I've just installed it and tried to have it create a hello world using gemma3:27b-it-qat through Ollama, but it refused, claiming it doesn't have access to my filesystem.
Then I opened an existing file and asked it to modify a function to return a fixed value and it did the same.
I'm an absolute newb in this space, so if I'm doing something stupid, I'd appreciate it if you helped me correct it. I already had the C/C++ extension complain that it can only be used in "proper VS Code" (I imported my settings from VS Code using the wizard), and when this didn't work either, it didn't spark joy, as Marie Kondo would say.
Please don't get me wrong, I gave this a try because I like the idea of having a proper local open source IDE where I can run my own models (even if it's slower) and have control over my data. I'm genuinely interested in making this work.
Thanks for writing! Can you try mentioning the file with "@"? Smaller models sometimes don't realize that they should look for files and folders, but "@" always gives the full context of whatever is in the file/folder directly to them.
Small OSS models are going to get better at this when there's more of a focus on tool-use, which we're expecting in the next iteration of models.
This is very cool and I'm always happy to see more competition in this space. That said, two suggestions:
- The logo looks like it was inspired directly from the Cursor logo and modified slightly. I would suggest changing it.
- It might be wise to brand yourself as your own thing, not just an "open source Cursor". I tend to have the expectation that "open source [X]" projects are worse than "[X]". Probably unfair, I know.
Thanks for the suggestions - these issues have been a bit painful for us, and we will probably fix them in the next major update to Void.
Believe it or not, the logo similarity was actually unintentional, though I imagine there was subconscious bias at play (we created ours trying to illustrate "a slice of the Void").
A minor counterpoint, I personally like the "open source Xyz" because I instantly know what the product is supposed to do. It's also very SEO friendly because you don't know the name of the open source version before you find it, so you can Kagi/Google/DDG "open source Cursor" and get it as a top result, instead of a sea of spammy slime.
> I personally like the "open source Xyz" because I instantly know what the product is supposed to do.
But that assumes that you're already familiar with the non-open-source software referenced. I've never used Cursor so I have no idea what it can or can't do. I'm pretty sure I would never have discovered Inkscape if it had consistently been described as an “open-source Illustrator” as I've simply never used Adobe software.
Void dev here! As others have mentioned, VSCode strongly limits the functionality that you can build as an extension. A few things we've built that aren't supported as an extension:
- the Accept|Reject UI and UX
- Cmd+K
- Control over the terminal and tabs
- Custom autocomplete
- Smaller things like ability to open/close the sidebar, onboarding, etc
It's been a lot harder to build an IDE than an extension, but we think having full control over the IDE (whether that's VSCode or something else we build in the future) will be important in the long run, especially when the next iteration of tool-use LLMs comes out (having native control over Git, the UI/UX around switching between iterations, etc).
> Smaller things like ability to open/close the sidebar
Are you sure about this one? I'm sure I have used an extension whose whole purpose was to automatically open or close the sidebar under certain conditions.
As an (ex) VSCode extension developer, VSCode really does lock down what you can do as an extension. It's well intentioned and likely led to the success of VSCode, but it's not great if you want to build entirely new UI interactions. For instance, something like the cmd-k inline generation UI in Cursor is basically impossible as a VSCode extension.
The restrictive extension ecosystem was a big part of VSCode's success. You can compare to Atom, which allowed extensions to do whatever they wanted: Atom ended up feeling exceptionally slow and bloated because extensions had full latitude to grind your IDE to a halt.
But since there seems to be a need for AI-powered forks of VS Code, it could make sense for them all to build off the same fork, rather than making their own.
Eclipse Theia can host VSCode extensions, but it also has its own extension mechanism that offers more customization, it could be a viable alternative: https://theia-ide.org/docs/extensions/
You're right that extensions do manage fine - the main differences right now are UX improvements (many of them are mentioned above). I can see the differences compounding at some point which is why we're focused on the full IDE side.
One of the big _disadvantages_ is that it prevents access to the VSCode-licensed plugins, such as the good C# LSP (seems EEE isn't completely dead). That's something to pay attention to if you're considering a fork and use an affected language.
Since these products supposedly make developers 1000x more productive it should be no problem to just re-implement those proprietary MS plugins from scratch. Right? Any volunteers...?
MS will be tuning Copilot to the point it’s the best agent for C#, for sure. It might take a little longer ofc. But Nadella mentioned to Zuck in a fireside chat that they are not happy with C# support in LLMs and that they are working on this.
Did you mean to say a debugger? That one has an open alternative (NetCoreDbg) alongside a C# extension fork which uses it (it's also what VS Codium would install). It's also what you'd use via DAP with Neovim, Emacs, etc.
Omnisharp is what the base C# extension used previously. It has been replaced by Roslyn LS (although can be switched to back still). You are talking about something you have no up-to-date knowledge of.
I wish all these companies the best and I understand why they’re forking, but personally I really don’t want my main IDE maintained by a startup, especially as a fork. I use Cursor, and I’ve run into a number of bugs at the IDE level that have nothing to do with the AI features. I imagine this is only going to get worse over time.
May I ask why did you decide against starting with (Eclipse) Theia instead of VSCode?
It's compatible but has better integration and modularity, and doing so might insulate you a bit from your rather large competitor controlling your destiny.
Or is the exit to be bought by Microsoft? By OpenAI? And thus to more closely integrate?
If you're open-source but derivative, can they not simply steal your ideas? Or will your value depend on having a lasting hold on your customers?
I'm really happy there are full-fledged IDE alternatives, but I find the hub-and-spoke model where VSCode/MS is the only decider of integration patterns is a real problem. LSP has been a race to the bottom, feature-wise, though it really simplified IDE support for small languages.
This is a good question. Because we're open source, we will always allow you to host models locally for free, or use your own API key. This makes monetization a bit difficult in the short term. As with many devtool companies, the long-term value comes from enterprise sales.
As a data scientist, my main gripe with all these AI-centric IDEs is that they don’t provide data centric tools for exploring complex data structures inherent to data science. AI cannot tell me about my data, only my code.
I’ll be sticking with VSCode until:
- Notebooks are first class objects. I develop Python packages but notebooks are essential for scratch work in data centric workflows
- I can explore at least 2D data structures interactively (including parquet). The Data Wrangler in VSCode is great for this
How is it that the open source Cursor 'alternative' doesn't have a Linux option (either via AppImage, as Cursor offers, or something like a Flatpak)? I understand that open source does not automatically mean Linux, but it is, like, weird, right?
Given that there's a dozen agentic coding IDEs, I only use Cursor because of the few features they have like auto-identification of the next cursor location (I find myself hitting tab-tab-tab-tab a lot, it speeds up repetitive edits). Are there any other IDEs that implement these QOL features, including Void (given it touts itself specifically as a Cursor alternative)?
I think QOL will shift away from your keyboard. Give Claude Code a try and you'll understand what I mean. Developer UX will shift away from traditional IDEs. At this point I could use Notepad for the type of manual work I do versus how I orchestrate Claude Code.
The reason I have never bothered with Claude Code (or even other agentic tools), is that I still code mostly by hand.
When I am using LLMs, I know exactly what the code should be and just am using it as a way to produce it faster (my Cursor rules are extremely extensive and focused on my personal architecture and code style, and I share them across all my personal projects), rather than producing a whole feature. When I try and use just the agent in Cursor, it always needs significant modifications and reorganization to meet my standards, even with the extensive rules I have set up.
Cursor appeals to me because those QOL features don't take away the actual code writing part, but instead augment it and get rid of some of the tedium.
A trajectory question: do we still have the debate about whether open-source software takes away SDE jobs or grows the pie to create more jobs? The booming OSS community of the past seems to have created multiple billion-dollar markets. On the other hand, we have a lot less growth than before now, and I was wondering if OSS has started suppressing the demand for SDEs.
Projects like this are great because open source versions need to figure out the right way to do things, rather than the hacky, closed, proprietary alternatives that pop up first and are just trying to consume as many users as possible to get a moat quickly.
In that case, a shitty, closed system is good, actually, because it's another thing your users will need to "give up" if they move to an alternative. By contrast, an open IDE like Void will hopefully make headway on an open interface between IDEs and LLM agents, in such a way that it can be adopted by neovim people like me, or anyone else for that matter.
I guess it's hard to switch from a working setup that you've invested time in.
Especially since you might not be familiar with the new one.
Personally, I'm trying out things in VS Code, just to see how they work. But when I need to work, I do it in Emacs, since I know it better.
Also, with VS Code, just while trying it out, simple things like cut & paste would stop working (choosing them from the menu, they would work, but trying to cut & paste with the key shortcuts and the mouse, wouldn't). You'd have to refresh the whole view or restart it, for cut & paste to become available again.
Emacs' configurability is hard to describe to anyone who hasn't immersed themselves in that sort of environment. There's a small portion of the program written in C, but the bulk of it is written in elisp. When you evaluate elisp code, you're not in some sandboxed extension system - you're at the same level as Emacs itself. This allows you to modify nearly any aspect of Emacs.
It'd be a security nightmare if it was more popular, but fortunately the community hovers around being big enough for serious work to be done but small enough that it's not worth writing malware for.
I don't know if it's a security nightmare any more than other editors that have "plugins" (or the like).
One advantage for Emacs is that it's both easy and common to read the code of the plugins you are using. I can't tell you the last time I looked at the source code of a plugin for VS Code or any other editor. The last time I looked at the code for a plugin in Emacs was today.
I don't think it's a security nightmare per-se. Most of the time, you're not installing a lot of packages (the built-in are extensive) and most of these are small and commonly used.
It's like saying the AUR is a security nightmare. You're just expected to be an adult and vet what you're using.
I'm not sure I agree with the number and size of packages people install (unless you're comparing them to, say, org-mode), but that's not really what I'm talking about.
Emacs runs all elisp code as if it's part of Emacs. Think about what Emacs is capable of, and compare that to what a browser allows its extensions to do. No widely used software works like that because it's way too easy to abuse. Emacs gets away with it because it's not widely used.
I don't know the first thing about VSCode but I'm willing to bet there are strict limits to what its plugins are allowed to do.
I don't know if that's changed since last I wrote an extension for a web browser, but the API is pretty open for the current context (tab) that it's executing in. As long as it's part of the API, the action is doable. Same with VSCode or Sublime. Sandboxed plugins would be pretty useless.
It's a shame vim is so stinky, because after 15 years of using it I now find myself using VSCode. I always liked vim because editing is efficient. Now I don't write as much as supervise a lot of boilerplate code.
Over the years I've gotten better with vim, added phpactor and other tooling, but frankly I don't have time to futz with it and it's not so polished. With VSCode I can just work. I don't love everything about it, but it works well enough with Copilot that I forget the benefits of vim.
I get your experience, but for me using vim is perfect for code exploration. The only needed plugins are fzf.vim and vinegar. The first for fuzzy navigation and the second for quickly browsing the current directory.
The LSP experience with VSCode may be superior, but if I truly needed that, I would get an IDE and have proper IntelliSense. The LSP in Vim and Emacs is more than enough for my basic requirements, which are auto-imports, basic linting, and autocomplete to avoid misspellings. VSCode lacks Vim's agility and Emacs's powerful text tooling, and does far worse on integration.
On a tangent, I get the feeling that the more senior you are, the less likely you are to end up using one of these VIDEs. If you do use any coding assistants at all, it will mostly be for the auto-complete feature - no 'agent mode' malarkey.
When browsing a GitHub repo, there's an option for "assistive chat" with copilot. -- I've found this a useful interface to get the LLM to answer quick questions about the repository without having to dig through myself.
Beyond autocomplete, I've found the LLM to be useful in some cases: sometimes you'll want to make edits which are quite automatic, but can't quite be covered by refactoring tools, and are a bit more involved than search/replace. LLMs can handle that quite well.
For unfamiliar domains, it's also useful to be able to ask an LLM to troubleshoot / identify problems in code.
Maybe it's just me, but the autocomplete is very distracting and something I avoid. Most of the time I'm fighting it, deleting or rejecting its suggestions, and it throws me out of flow.
From what I've seen, most senior/staff-level engineers work at big corps with limited contracts with providers like GitHub Copilot, which until recently only gave access to autocomplete.
I prefer the web-based interface. It feels like my choice to reference a tool. It's easy to run multiple chats at once, running prompts against multiple models.
That's very interesting. This is certainly what I was doing before Copilot. Now I let it autocomplete, but only when it makes sense. I guess I'm used to the keybinds, so I can undo if I don't like a suggestion.
Reading your comment, I thought there's space for an out-of-flow coding assistant: rather than deploying an entire IDE with extensions, the assistant could be just a floating window (I guess ChatGPT does that) that can dive in and out, or just suggest as you type along.
Early on, when Sonnet 3.5 was the best coding model, lots of people used them because of the rate limits on Anthropic's own API. So that's a plus. There's also the ease of use: one key for every model out there, and you get to choose providers for models like DeepSeek / Qwen / Llama if those suit your needs.
I subscribed to Void's mailing list long ago to be notified once the alpha opened, but I've never received anything. I forgot about it until today.
Anyway, I didn't know what your service was trying to do, so I clicked through to the homepage and then clicked Sources to see what else was there. It cited <https://extraakt.com/extraakts?source=reddit#:~:text=Open-so...>, but the hyperlink took me back to the HN thread, which defeats the purpose of having a source picker.
Mandatory reminder that "agentic coding" works way worse than just using the LLM directly, steering it as needed, filling the gaps, and so forth. The value is in the semantic capabilities of the LLM itself: wrapping it will make it more convenient to use, but always less powerful.
I beg to disagree, Salvatore... Have a go at VS Code with Agent mode turned on (you'll need a paid plan to use Claude and/or Gemini, I think). It gets me out of vim, so yeah, it's that good. :)
Tip: Write a SPEC.md file first. Ask the LLM to ask _you_ about what might be missing from the spec, and once you "agree" ask it to update the SPEC.md or create a TODO.md.
Then iterate on the code. Ask it to implement one of the features, hand-tune and ask it to check things against the SPEC.md and update both files as needed.
Works great for me, especially when refactoring--the SPEC.md grounds Claude enough for it not to go off-road. Gemini is a bit more random...
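A skeleton of the kind of SPEC.md I mean (illustrative; these sections are just what I find keeps the model grounded):

    # SPEC.md
    ## Goal
    One paragraph: what the tool does and for whom.
    ## Constraints
    - Code style: plain functions over classes; stdlib before dependencies
    - Don't touch files outside src/
    ## Features
    1. ... (one line each; mark DONE as they land)
    ## Open questions
    - Things the LLM should ask me about before writing code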
Interacting with the LLM directly is more work. What I mean is that a wrapper, in the best conditions, will not damage the quality of the LLM itself too much. In the chat, if you are a very experienced coder, you continuously steer the model away from suboptimal choices; with a fix here and a fix there, you avoid local minima, and after a few iterations you find your whole project is a lot better designed than it would otherwise be.
Oh, sure. I don't rely on the LLM to write all the code. I do "trust" it to refactor things into a shape I want, and to automate chores like moving chunks of code around or building a quick wrapper for a library.
One thing I particularly don't like about LLMs is that their Python code feels like Java cosplay (full of pointless classes and methods), so my SPEC.md usually has a section on code style right at the start :)
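Concretely, the style section exists to steer it away from patterns like this (a made-up before/after, not from any real session):

    import json

    # What LLMs tend to produce: a class with one method and no state worth keeping
    class ConfigLoader:
        def __init__(self, path):
            self.path = path

        def load(self):
            with open(self.path) as f:
                return json.load(f)

    # What the spec asks for instead: a plain function
    def load_config(path):
        with open(path) as f:
            return json.load(f)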
They always start as open source to bait users. How long until this one also turns into baitware? I hope it won't, since it's backed by Y Combinator and has an Apache 2 license.
(Edit: the parent comment was edited to add "I hope it won't since it's backed by Y Combinator and has an Apache 2 license." - that's a good redirection, and I probably wouldn't have posted a mod reply if the comment had had that originally.)
(Btw if your comment already has replies, it is good to add "Edit:" or something if you're changing it in a way that will alter the context of replies.)
---
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
They first need to substantially grow the user base, as we saw with OpenWebUI; only then do they make an enterprise offering and switch the license from one day to the next.
Yes, they've modified the licence to require preserving their branding. I guess it's an anti-fork measure: you'd have to infringe either on their licence or on their trademark.
The best reason I've seen mentioned by the founder in this thread is showing/hiding panels and the onboarding flow. Those are things you can't do with a plugin. I personally also like Cursor's diff view way better than Continue's, and maybe that's because a fork gives more control there.
If I move off Cursor, it's def not going to be to another vs-code derivative.
Zed has it right - build it from the ground up, otherwise, MS is going to kneecap you at some point.
Zed didn't build from the ground up though. I mean, they did for a lot of stuff, but crucially they decided to rely on the LSP ecosystem so most of the investment in improving Zed is also a direct investment in improving VSCode.
If you can't invest in yourself without making the same size investment in your competitor, you probably have no path to actually win out over that competitor.
Additionally, Zed is written in Rust and has robust hardware-accelerated rendering. This has a tangible feel that other editors do not. It feels just so smooth, unlike clunky heavyweight JetBrains products. And it feels solid and sturdy, unlike VS Code buffers, which feel like creaky webviews.
But it's a different take: Brokk is built to let humans supervise AI more effectively rather than being optimized for humans reading and writing code by hand. So it's not a VS Code fork; it's not really an IDE in the traditional sense at all.
1. Create a branch called TaskForLLM_123
2. Add a file with text instructions called Instructions_TaskForLLM_123.txt
3. Have a GitHub Action read the branch, perform the task, and then submit a PR (rough sketch below).
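A rough sketch of step 3 (the trigger syntax and gh CLI are real; run-llm-task is a hypothetical stand-in for whatever agent you script in):

    # .github/workflows/llm-task.yml -- illustrative only
    name: llm-task
    on:
      push:
        branches: ['TaskForLLM_*']
    jobs:
      run:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Perform the task from the instructions file
            run: ./run-llm-task "Instructions_${GITHUB_REF_NAME}.txt"  # hypothetical wrapper
          - name: Open a PR with the result
            run: gh pr create --fill
            env:
              GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}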
I’ve seen people do this with Claude Code to great success (not in a GH Action). Even multiple sessions concurrently. Token budget is the limit, obviously.
Watched your YouTube video. I love this - will try it out and give it to our team. This is effectively the "full mode" version of what I currently use Cursor for.
But I do stand by the point. We are seeing umpteen of these things launched every week now, all with the exact same goal in mind; monetizing a thin layer of abstraction between code repos and model providers, to lock enterprises in and sell out as quickly as possible. None of them are proposing anything new or unique above the half dozen open source extensions out there that have gained real community support and are pushing the capabilities forward. Anyone who actually uses agentic coding tools professionally knows that Windsurf is a joke compared to Cline, and that there is no good reason whatsoever for them to have forked. This just poisons the well further for folks who haven't used one yet.
Yes, the high-order bit is to avoid snark, so thanks about that. And it's clear that you know about this space and have good information and thoughts to contribute—great!
I would still push back on this:
> all with the exact same goal in mind
It seems to me that you're assuming too much about other people's intentions, jumping beyond what you can possibly know. When people do that to reduce things to a cynical endstate before they've gotten off the ground, that's not good for discussion or community. This is part of the reason why we have guidelines like these in https://news.ycombinator.com/newsguidelines.html:
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
"Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."
The time to sell a VSCode fork for $3B was a week ago. If someone wants to move off of VSCode, why would they move to a fork of it instead of to Zed, JetBrains, or a return to the terminal?
Next big sale is going to be something like "Chrome Fork + AI + integrated inter-app MCP". Brave is eh, Arc is being left to die on its own, and Firefox is... doing nothing.