I write a lot of tools that depend on the TypeScript compiler API, and they run in a lot of JS environments, including Node and the browser. The current CJS codebase is even a little tricky to load into environments that support standard JS modules, like browsers, so I've been _really_ looking forward to what Jake and others have said will be an upcoming standard-modules-based version.
Is that still happening, and how will the native compiler be distributed for us tools authors? I presume WASM? Will the compiler API be compatible? Transforms, the AST, LanguageService, Program, SourceFile, Checker, etc.?
I'm quite concerned that the migration path for tools could be extremely difficult.
[edit] To add to this as I think about it: I maintain libraries that build on top of the TS API, and are then in turn used by other libraries that still access the TS APIs. Things like framework static analysis, then used by various linters, compilers, etc. Some linters are integrated with eslint via typescript-eslint. So the dependency chain is somewhat deep and wide.
Is the path forward going to be that just the TS compiler has a JS interop layer and the rest stays the same, or are all TS ecosystem tools going to have to port to Go to run well?
If I understood correctly, they created a Node native module that allows synchronous communication over standard I/O between external processes.
So this Node module will make communication possible between the TypeScript compiler Go process, which will expose a "compiler API server", and a client-side JavaScript process.
They don’t think it will be possible to port all APIs and some/most of them will be different than today.
I have API use that falls into a few categories that aren't just LSP-ish cases:
- Transforms, which presumably there has to be some solution for, even if it's porting to Go.
- Linters, which integrate with typescript-eslint and need the type-checker.
- Codemods, which create and modify AST nodes and re-emit them.
- Static analyzers, which build up app-specific models of the code and rely on AST traversal and the type-checker.
- Analyzer libraries that offer tools to other libraries and apps that expose the TypeScript AST and functions that operate on AST nodes.
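For concreteness, the in-process style most of these tools rely on today looks something like this, using the current TypeScript compiler API (`createSourceFile`, `forEachChild`, and the `ts.is*` predicates):

```typescript
// Counting template expressions with the current, fully in-process API.
import ts from "typescript";

const source = ts.createSourceFile(
  "example.ts",
  "const s = `hello ${name}`; const t = 1;",
  ts.ScriptTarget.Latest,
  /*setParentNodes*/ true
);

let templates = 0;
function walk(node: ts.Node): void {
  if (ts.isTemplateExpression(node)) templates++; // a cheap in-process predicate call
  ts.forEachChild(node, walk);
}
walk(source);
console.log(templates); // 1
```

Every `walk` step here is a direct function call on a shared in-memory AST, which is exactly what becomes expensive if each step turns into an IPC round trip.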
Traversing the AST over IPC is going to be too chatty, so I presume there will have to be some sort of way to get a whole SourceFile in one call, but then I wonder about traversal. You'll need a visitor library on your side of the IPC at least, but that's simple. But then you also need all the predicates. You don't want to be calling ts.isTemplateExpression() on every node via IPC.
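To make the concern concrete, here's a sketch of what a client-side answer could look like: fetch one whole-file snapshot over IPC, then do all traversal and predicate checks locally. Everything here (the `NodeSnapshot` shape, the kind numbers, the helper names) is hypothetical, not a real or announced API:

```typescript
// Hypothetical serialized-AST shape, fetched once per file over IPC.
interface NodeSnapshot {
  kind: number;
  children: NodeSnapshot[];
}

// Hypothetical kind constants mirroring the server's enum.
const SyntaxKind = { TemplateExpression: 228, Identifier: 80 } as const;

// A local predicate -- no IPC round trip per node.
const isTemplateExpression = (n: NodeSnapshot) =>
  n.kind === SyntaxKind.TemplateExpression;

// A simple local visitor over the snapshot.
function forEachNode(root: NodeSnapshot, visit: (n: NodeSnapshot) => void): void {
  visit(root);
  for (const child of root.children) forEachNode(child, visit);
}

// Usage: count template expressions in one snapshot fetched in a single call.
const file: NodeSnapshot = {
  kind: 0,
  children: [
    { kind: 228, children: [] },
    { kind: 80, children: [{ kind: 228, children: [] }] },
  ],
};
let count = 0;
forEachNode(file, (n) => { if (isTemplateExpression(n)) count++; });
console.log(count); // 2
```

The open question is whether snapshots like this can stay faithful to the real AST (trivia, positions, parent pointers) without ballooning in size.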
And I do all this stuff in web workers too, so whatever this IPC is has to work there.
1. “We expect to have a more curated API that is informed by critical use-cases (e.g. linting, transforms, resolution behavior, language service embedding, etc.).”
2. “We also can imagine opportunities to optimize, use other underlying IPC strategies, and provide batch-style APIs to minimize call overhead.”
Anyway, I’ve used the compiler API a lot too, and I really enjoy how capable it is, making practically anything possible with the source code (EDIT: and hijacking the build process too). I hope we won’t lose too much.
Or do you mean that there's a use case for a compilation in the browser?
I've found that, as of 2025, Go's WASM backend isn't as good as LLVM's, and it has been very difficult for me to even reach parity with vanilla JS performance. There is supposedly a way to use a subset of Go with LLVM for faster WASM, but I haven't tried it (https://tinygo.org/).
I'm hoping that Microsoft might eventually use some of their WASM chops to improve Go's native WASM compiler. Their .NET WASM compiler is pretty darn good, especially if you enable AOT.
https://github.com/golang/go/issues/63904#issuecomment-22536...
It sounds like it would be a great fit for e.g. Lua though.
item := *(*int)(unsafe.Pointer(uintptr(start) + size*uintptr(i)))
A random example taken from the Internet. Back when language reference books were printed, or when we used ISO-standardized languages, what was on paper was the language.
We are only discussing semantics: whether these are hardcoded primitives or something made available via the standard library, especially in the case of blessed packages like unsafe, which isn't fully implemented in library code but is rather a set of magical types known to the compiler.
Hence the only thing you will see there is mostly documentation: https://github.com/golang/go/blob/master/src/unsafe/unsafe.g...
This is nothing new; since the 1960s there have been systems languages with some way to mark code unsafe. The C lineage of languages is the one that decided to ignore this approach.
I suppose that would be possible with fat-pointers that are reference+offset.
Well, for languages that use a GC. People who are writing WASM that exceeds JS in speed are typically doing it in Rust or C++.
I'm hoping that a later version makes this possible.
The few cases that performed significantly better than the JS version (like >2x speed) were integer-heavy math and tail-call-optimized recursive code; some cases were slower than the JS version.
What surprised me was that the JS version had performance similar to the x64 version compiled with -O3 in some of my benchmarks (like float64 performance).
This was a while ago though when WASM support had just landed in browsers, so probably things got better now.
Why not something like Rust? Most of the JS ecosystem that is moving toward faster tools seems to be going straight to Rust (Rolldown, Rspack (the webpack successor), SWC, OXC, Lightning CSS / Parcel, etc.), and one of the reasons given is that it has really great language constructs for parsers and traversing ASTs (I think largely due to the existence of `match`, but I'm not entirely sure).
Was any thought given to this? And if so, what were the deciding factors for Go versus something like Rust or another language entirely?
People say this like it's a bad thing. It's not, it's Go's primary strength.
I think they missed out by not going with Rust. It seems like the social factors won out; it was probably hard to quickly assemble a Rust team within MSFT. Again, though, that makes Go a practical choice. I don't see why people are so confused by it. Go is a widely used and solid choice for getting things done reliably and quickly these days.
I'm not arguing that they made a bad call. I think what they did was smart given the options in front of them and whatever budget they have. The world isn't kind to idealism, but ideally it could have been written in Rust, in my opinion.
What exactly would that buy and would the outcome matter much?
In my opinion: pragmatism > idealism.
Golang is generally very fast and simple as well; the only problem is memory allocation / garbage collector overhead, but the benefits outweigh the loss.
Dude, just read the article being discussed, this is addressed so you can just stop making shit up.
The audacity of people like you to just keep adding speculations upon speculations on a subject without even bothering to learn what is being discussed.
They absolutely address this in the linked article, so why are we even speculating here?
> Probably hard to quickly assemble a rust team within msft.
The same MSFT that is rewriting their Windows OS in rust as we speak? I think you should stop commenting when you don't know anything about the subject.
I wrote a lot of Go code as well as Java. When people say things like this, I'm not quite sure what exactly they are referring to. No one is forcing you to write multi-level-deep inheritance hierarchies in Java/C#, and Go itself is OOP. Structural typing has its issues as well. Where does this supposed inherent productivity boost lie?
Rust is really hard compared to Golang. Go's relative ease can increase outside contributions as well.
Golang is love, Golang is life.
____
Language choice is always a hot topic! We extensively evaluated many language options, both recently and in prior investigations. We also considered hybrid approaches where certain components could be written in a native language, while keeping core typechecking algorithms in JavaScript. We wrote multiple prototypes experimenting with different data representations in different languages, and did deep investigations into the approaches used by existing native TypeScript parsers like swc, oxc, and esbuild. To be clear, many languages would be suitable in a ground-up rewrite situation. Go did the best when considering multiple criteria that are particular to this situation, and it's worth explaining a few of them.
By far the most important aspect is that we need to keep the new codebase as compatible as possible, both in terms of semantics and in terms of code structure. We expect to maintain both codebases for quite some time going forward. Languages that allow for a structurally similar codebase offer a significant boon for anyone making code changes because we can easily port changes between the two codebases. In contrast, languages that require fundamental rethinking of memory management, mutation, data structuring, polymorphism, laziness, etc., might be a better fit for a ground-up rewrite, but we're undertaking this more as a port that maintains the existing behavior and critical optimizations we've built into the language. Idiomatic Go strongly resembles the existing coding patterns of the TypeScript codebase, which makes this porting effort much more tractable.
Go also offers excellent control of memory layout and allocation (both on an object and field level) without requiring that the entire codebase continually concern itself with memory management. While this implies a garbage collector, the downsides of a GC aren't particularly salient in our codebase. We don't have any strong latency constraints that would suffer from GC pauses/slowdowns. Batch compilations can effectively forego garbage collection entirely, since the process terminates at the end. In non-batch scenarios, most of our up-front allocations (ASTs, etc.) live for the entire life of the program, and we have strong domain information about when "logical" times to run the GC will be. Go's model therefore nets us a very big win in reducing codebase complexity, while paying very little actual runtime cost for garbage collection.
We also have an unusually large amount of graph processing, specifically traversing trees in both upward and downward walks involving polymorphic nodes. Go does an excellent job of making this ergonomic, especially in the context of needing to resemble the JavaScript version of the code.
Acknowledging some weak spots, Go's in-proc JS interop story is not as good as some of its alternatives. We have upcoming plans to mitigate this, and are committed to offering a performant and ergonomic JS API. We've been constrained in certain possible optimizations due to the current API model where consumers can access (or worse, modify) practically anything, and want to ensure that the new codebase keeps the door open for more freedom to change internal representations without having to worry about breaking all API users. Moving to a more intentional API design that also takes interop into account will let us move the ecosystem forward while still delivering these huge performance wins.
C# and TypeScript are Hejlsberg's children; C# is such an obvious pick that there must have been a monster problem with it that they didn't think could ever be fixed.
C# has all that stuff that the FAQ mentions about Go while also having an obvious political benefit. I'd hope the creator of said language who also made the decision not to use it would have an interesting opinion on the topic! I really hope we find out the real story.
As a C# developer I don't want to be offended but, like, I thought we were friends? What did we do wrong???
Transcript: "But I will say that I think Go definitely is much more low-level. I'd say it's the lowest level language we can get to and still have automatic garbage collection. It's the most native-first language we can get to and still have automatic GC. In contrast, C# is sort of bytecode-first, if you will. There are some ahead-of-time compilation options available, but they're not on all platforms and don't really have a decade or more of hardening. They weren't engineered that way to begin with. I think Go also has a little more expressiveness when it comes to data structure layout, inline structs, and so forth."
For anyone who can't watch the video, he mentions a few things (summarizing briefly just the linked time code, it's worth a watch):
- Go being the lowest level language that still has garbage collection
- Inline structs and other data structure expressiveness features
- Existing JS code is in a C-like functions-plus-data-structures style and not an OOP style; this is easier to translate directly to Go, while C# would require OOPifying it.
Sure, AOT is not as mature in C#, but is that reason enough to be a showstopper? It seems there are other reasons Anders doesn't want to address publicly. Maybe reasons as simple as "Go is 10 times easier to pick up than C#" and "language features don't matter when the project matters". Those would indeed hurt the image of C#, and Anders obviously doesn't want that.
But I don't see it as big drama.
The side-by-sides that show how Go code is closer to the current TS code (visually) than C# would be are pretty compelling. He made it pretty clear they're "porting" not rewriting.
Wouldn't these things be useful if you were making an actual compiler that would run TS? Since in this case the runtime is JS, I don't think any of these things would get much usage, unless they are used in the existing transpiler.
It's a fascinating language, but it lacks a flagship product.
I feel the same way about Haxe. Someone created an amazing language, but it lacks a big enough community.
Realistically, languages need two things for adoption: momentum and ease of use. Rust has more momentum than ease, but arguably it can solve problems that higher-level languages can't.
I'm half imagining a hackathon like format where teams are challenged to use niche languages. The foundations behind these languages can fund prizes.
And AFAIK Symmetry Investments is that dogfood startup.
What is this logic? "You worked on C# years ago so you must use C# for everything"?
"You must dictate C# to every team you lead forever, no matter what skills they have"?
"You must uphold a dogma that C# is the best language for everything, because you touched it last"?
Why aren't you using this logic to argue that they should use Delphi or TurboPascal because Anders Hejlsberg created those? Because there is no logic; the person who created hammers doesn't have to use hammers to solve every problem.
So it's not just that the lead architect of C# is involved in the TypeScript changes. It's also that this is under the same roof and the same sign hangs on the building outside for both languages.
If Ford made a car and powered it with a Chevy engine, wouldn't you be curious what was going on also?
It’s probably most common when an automaker introduces a new make and wants to save time and capital on developing and getting into production a new engine.
The Toyota Matrix and Pontiac Vibe used a lot of the same parts and shared an engine and drivetrain, if I'm not mistaken.
Anders Hejlsberg hasn't been the lead architect of C# for like 13 years. Mads Torgersen is:
https://dotnetcore.show/episode-104-c-sharp-with-mads-torger... - "I got hired by Microsoft 17 years ago to help work on C#. First, I worked with Anders Hejlsberg, who’s sort of the legendary creator and first lead designer of C#. And then when he and I had a little side project with others to do TypeScript, he stayed over there. And I got to take over as lead designer C#. So for the last, I don’t know, nearly a decade, that’s been my job at Microsoft to, to take care of the evolution of the C# programming language"
Years later, "why aren't you using YOUR LANGUAGE, huh? What's the matter, you don't like YOUR LANGUAGE?" is pushy and weird; he's a person with a job, not a religious cult leader.
> "If Ford made a car and powered it with a Chevy engine, wouldn't you be curious what was going on also?"
Like these? https://www.slashgear.com/1642034/fords-powered-by-non-ford-...
It's also not what anyone said.
> It's best not to use quotation marks to make it look like you're quoting someone when you're not. <https://news.ycombinator.com/item?id=21643562>
> "An indirect quote lets you capture or summarize what someone said or wrote without using their exact words. It helps to convey the tone or meaning of your source without quoting them directly." - https://www.grammarly.com/blog/punctuation-capitalization/qu...
I'm distilling and exaggerating multiple comments to convey the tone and meaning of the bit I want to focus on. Asking "why not C#?" has the implicit framing that "it should be C# by default and you have to justify why not", and calling out that bias to show it to be unreasonable is the intent.
Nope. None of those are even close to Ford + Chevy. (Ford + Mazda is well known of course).
I chose the analogy carefully.
Maybe top ten behind MSSQL, Powershell, Excel Formulae, DAX etc.
I do love F#, but its compiler is a rusty set of monkey bars. It's somehow single-pass, meaning the type checker will struggle if you don't reorder certain expressions, yet also dog slow, especially for `inline` definitions (which work more like templates or hygienic macros than .NET generics, and are far more powerful). File order matters, bafflingly! Newer .NET features like spans and ref structs are missing with no clear path to implementation. Doing moderately clever things can cause the compiler to throw weird, opaque internal errors. F# is built around immutability, but there's no integration with the modern .NET immutable collections.
It's clearly languishing and being kept alive by a skeleton crew, which is sad, because it deserves better, but I've used research prototypes less clunky than what ought to be a flagship.
- I don't think the compiler is actually that bad, and yes, inline definitions are going to be slower once you go down the "templating route". Spans and ref structs are there, and I think their design is actually more intuitive; the C# "ref struct" at first glance sounds like an oxymoron to me.
- Modern .NET immutable collections: in testing, these are significantly slower than some of the F# options, especially when you move away from the standard library and use some of the other collection libraries. The algorithms within the C# immutable libraries were not as optimal for some common collection types. They didn't feel modern the last time I used them, and I was forced to switch to the F# ones and/or others in the F# ecosystem to get the performance I needed. Immutable code felt MUCH more idiomatic in F#.
- "Doing moderately clever things can cause the compiler to throw weird, opaque, internal errors" - happened with init fields for me; can't recall another time.
I don't mind the file-order bit; I think OCaml and a few other languages also do this. Apps still scale OK, and when I was coding in it, it got me out of a few spaghetti-code issues as the code scaled up to about the 500,000+ LOC mark.
However, I do agree with you that it's being kept alive by a skeleton crew. I think the creators and tooling staff have moved on to the next big thing (AI, and specifically GitHub Copilot), which, the way things are moving, will potentially raise some interesting questions about all programming languages in general.
Huh? They're already implemented! It took years and they've still got some rough edges, yes, but they've been implemented for a few years now.
Agreed with the rest, though. As much as I love working with F#, I've jumped ship.
If it's the latter, I think the pitch of TS remains the same — it's a better way of writing JS, not the best language for all contexts.
If the TS team is getting a 10x improvement moving from TS to Go, you might imagine you could save about 10x on your server CPU, or that your backend would be 10x more responsive.
If you have dedicated teams for front end and back end anyhow, is a 10x slowdown really worth a shared codebase?
As you know full well, Delphi and Turbo Pascal don't have strong library ecosystems, don't have good support for non-Windows platforms, and don't have a large developer base to hire from, among other reasons. If Hejlsberg were asked why Delphi or Turbo Pascal weren't used, he might give one or more of those reasons. The question is why he didn't use C#, for which those reasons don't apply.
Compare that to Go. It's not even close. I see comments bickering about the size of executable files... almost no major product cares about that within an order of magnitude.
Go is a wild choice to write a compiler in. Literally in my top 10 things I never want to do. Everything else about it drove them to do it.
Go executables do not.
TSC is installed in too many places for that burden to be imposed all of a sudden. It is the same reason Java has had a complicated acceptance history too: it's fine in the places where it is pre-installed, but nowhere else.
Node/React/TypeScript developers do not want to install .NET all of a sudden. If you react poorly to that, pretend they decided to write it in Java and ask whether you think Node/React/TypeScript developers WANT to install Java.
SQL Server connections are one example where I do get .exe with the .pdb in the publish directory but the .exe won't run correctly without the "Microsoft.Data.SqlClient.SNI.dll" file.
Another example are any libraries that have "RequiresDynamicCodeAttribute" requirements.
C#: 945 kB Go: 2174 kB
Both are EXEs you just copy to the machine, no separate runtime needed, talks directly to the OS.
But in either case, binary sizes are smaller and scale better than what Go produces. The assumption that Go is good at compact binaries just does not hold up in reality. Obviously it's nice to not have to touch it at all, opting into Rust for distributing native SDKs; Go is completely unfit for this, as it has even more invasive VM interaction when you use it as a dynamically linked library. NativeAOT is "just OK" at this, and Go is "way more underwhelming than you think".
I think if you're in the domain of using SIMD, then besides a base RAM usage of 2-5 MB, you should not see a drastic difference unless you have a lot of allocation traffic. But I guess Rust solved most of this; I just wanted to note that specific memory and deployment requirements are usually solvable by changing build and GC settings.
That said, I must have misstated my opinion if it seems like I didn't think they have a good reason. This is Anders Hejlsberg. The guy is a genius; he definitely has a good reason. They just didn't say what it is in this blog post (but did elsewhere in a podcast video linked in the HN thread).
It obviously does because the larger open source world are huge users of Typescript. This isn't some business-only Excel / PowerBI type product.
To put it another way, I think a lot of people would get quite pissed if tsc were rewritten in C#, because of the obvious headaches that would cause users. Go is pretty much the perfect option from a user's point of view: it generates self-contained, statically linked binaries.
And there would be logistical problems. With Go, you just need to distribute the executable, but with C#, you also need a .NET runtime, and on any platform that isn't Windows, that almost certainly isn't already installed. And even if it is, you have to worry about whether the runtime is sufficiently up to date.
If they used C#, there is a chance the community might fork TypeScript, or switch to something else, and that might not be a gamble MS would want to take just to get more exposure for C#.
By who?
Nobody said anything about who likes what more, nor does that even matter in the context of the original claim that .NET doesn't have a good runtime outside of Windows.
Is this the ultimate reason? Go is fast enough without being overly difficult. I'm humbly open to being wrong.
While I'm here: any reason Microsoft isn't sponsoring a solid open-source game engine?
Even a bit of support for Godot's C# (helping them get it working on the web) would be great.
Even better would be a full C# engine with support for web assembly.
They did that. https://godotengine.org/article/introducing-csharp-godot/
At least some initial grant to get it started.
Getting C# working on the web would be amazing. It is already on the roadmap, but some sponsorship would help tremendously, for sure.
It is a huge company. They can do more than one thing. C#/.NET certainly isn't dead, but I'm not sure they really care if you do use it like they once did. It's there if you find it useful. If not, that's cool too.
I think Microsoft can find the money if they wanted to.
It is a powerful and robust language with a great standard library, but you just can't be comfortable with it. All that boilerplate; all those sealed override virtual public protected modifiers before each declaration; those curly braces everywhere. You are always inside classes that are inside namespaces, and even then you need to go deeper and have curly braces with properties and arrows in random places. Delegates and events are ugly and unintuitive, there are two sets of syntax for LINQ (and honestly for almost any somewhat new feature of the language), ref in out, you name it. It is hard to push something so inelegant.
Delegates and Events were a mistake, but that's a low-level .NET mistake that a lot of modern code can easily ignore, with Action<> and Func<> now reliably almost everywhere and WinForms easy to write off as "dead". (You can especially eliminate the need for the ugliness of Delegates and Events with System.Reactive.Linq.)
Records and Primary Constructors remove a ton of the boiler-plate of writing basic "DTOs" and/or dependency injection.
C# is pretty elegant, and a nicely evolving language. Microsoft isn't any longer trying to bet on C# as a "systems programming language" because too many people see JIT support and VMs as "not low level enough" (including apparently also Anders Hejlsberg), but that doesn't mean C# isn't "future proof".
Personally, I would like them to never touch the game dev side of the market.
People have been using Blazor WASM in Production for more than a year now. It's been stable since .NET 8.
I can see them doing this in the future, tbh. Given how large their Xbox gaming ecosystem is, this path makes a lot of sense, since they can cut costs while giving options to their studios or indie developers.
I would like MS to buy them out and FOSS the engine. Maybe if they split the ad business off into its own thing.
Unity feels like a bizarre, almost abusive business relationship. They can change the terms of service at will.
The licensing is confusing. Billy is a freelancer. He makes a small game for his friend's company. His friend's company raises a funding round.
Depending on how much money is raised, Unity is going to call Billy up and extort him to upgrade to a higher license tier.
I don't particularly like Godot, but every few months I try and learn it again.
The game engine landscape is like picking the least worst option.
All that said, to be fair, Unity provided a high-quality game engine for effectively nothing to the vast majority of its users for over a decade. It's time to pay the piper.
Also, some of the low-level improvements in C# have been done in collaboration with the Unity team, driven by their Burst use cases.
Cool. Can you tell us a bit more about the technical process of porting the TS code over to Go? Are you using any kind of automation or translation?
Personally, I've found Copilot to be surprisingly effective at translating Python code over to structurally similar Go code.
I don't think Go is a bad choice, though!
Then when you find the correct version, you have to install both the x86 and x64 versions because the first one you installed doesn't work.
Yeah, great ecosystem.
At least a Go binary runs 99.99999% of the time when you start it.
Memory management? Or a stricter type system?
"VS Code Go Language Extension Goes from Microsoft to Google"
https://visualstudiomagazine.com/articles/2020/06/10/go-goes...
What do you see as the future for use cases where the typescript compiler is embedded in other projects? (Eg. Deno, Jupyter kernels, etc.)
There’s some talk of an inter-process API, but only vague hand-waving about the technical details. What’s the vision?
In TS7 will you be able to embed the compiler? Or is that not supported?
So, how exactly is my app/whatever supposed to spin up a parallel process in the OS and then talk to it over IPC? How do you shut it down when the 'host' process dies?
Not vaguely. Not a hand-wavy "just launch it". How exactly do you do it?
How do you do it in environments where that capability (spawning arbitrary processes) is limited? eg. mobile.
How do you package it so that you distribute it in parallel? Will it conflict with other applications that do the same thing?
When you look at, for example, a Jupyter kernel, it is already a host process launched and managed by jupyter-lab or whatever, which talks via network chatter.
So now each kernel process has to manage another process, which it talks to via IPC?
...
Certainly, there are no obvious performance reasons to avoid IPC, but I think there are use cases where having the compiler embedded makes more sense.
Usually the very easiest way to do this is to launch the target as a subprocess and communicate over stdin/stdout. (Obviously, you can also negotiate things like shared memory buffers once you have a communication channel, but stdin/stdout is enough for a lot of stuff.)
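As a sketch of that stdin/stdout channel, here's a single JSON request/response from Node, with `cat` standing in as an echo server and a throwaway per-request spawn instead of the long-lived process a real client would keep:

```typescript
// One JSON request/response over a child process's stdio.
// `cat` echoes its input, so the "reply" equals the request.
import { spawnSync } from "node:child_process";

function requestOnce(msg: object): unknown {
  const result = spawnSync("cat", [], {
    input: JSON.stringify(msg) + "\n", // newline-delimited message
    encoding: "utf8",
  });
  return JSON.parse(result.stdout.trim());
}

console.log(requestOnce({ method: "check", file: "main.ts" }));
```

The message shape here is made up; the point is only that a pipe plus a framing convention (newline-delimited JSON, length prefixes, etc.) is all the transport you strictly need.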
> How do you shut it down when the 'host' process dies?
From the perspective of the parent process, you can go through some extra work to guarantee this if you want; every operating system has facilities for it. For example, in Linux, you can make use of PR_SET_PDEATHSIG. Actually using that facility properly is a bit trickier, but it does work.
However, since the child process, in this case, is aware that it is a child process, the best way to go about it would be to handle it cooperatively. If you're communicating over stdin/stdout, the child process's stdin will close when the parent process dies. This is portable across Windows and UNIX-likes. The child process can then exit.
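The cooperative-shutdown behavior is easy to see with `cat` again standing in for the server: when its stdin reaches end-of-file (which is what happens implicitly when the parent dies and the pipe closes), it exits on its own:

```typescript
// Simulate "parent went away" by handing the child an already-exhausted stdin.
import { spawnSync } from "node:child_process";

const result = spawnSync("cat", [], { input: "" }); // empty stdin = immediate EOF
console.log(result.status); // 0 -- the child exited cleanly on its own
```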
> How do you do it in environments where that capability (spawning arbitrary processes) is limited? eg. mobile.
On Android, there is nothing special to do here as far as I know. You should be able to bundle and spawn a native process just fine. Go binaries are no exception.
On iOS, it is true that apps are not allowed to spawn child processes, as far as I am aware, so you'd need a different strategy there. If you still want a native-code approach, though, it's more than doable. Since you're on iOS, you'll have some native code somewhere. You can compile Go code into a Clang-compatible static library archive using -buildmode=c-archive. There's a bit more nuance to getting something that will link properly on iOS, but it is supported by Go itself (Go supports iOS and Android in the toolchain and via gomobile). Once you have something that can be linked into the process space, the old IPC approach would continue to work, with the semantic caveat that it's not technically interprocess anymore. This approach can also be used in any other situation where you're shipping native code, so long as you can link C libraries.
If you're in an even more restrictive situation, like, I dunno, Cloudflare Pages Functions, you can use a WASM bundle. It comes at a performance hit, but given that the Go port of the TypeScript compiler is already roughly 3.5x faster than the TypeScript implementation, it probably will not be a huge issue compared to today's performance.
> How do you package it so that you distribute it in parallel? Will it conflict with other applications that do the same thing?
There are no particular complexities to distributing Go binaries. You need to ship a binary for each architecture/OS combination you want to support, but Go has relatively straightforward cross-compilation, so this is usually very easy to do. (Rather unusually, it is even capable of cross-compiling to macOS and iOS from non-Apple platforms, though I bet Zig can do this too.) You just include the binary in your build. If you are using some bindings, I would expect the bindings to take care of this by default, making your resulting binaries "just work" as needed.
It will not conflict with other applications that do the same thing.
> When you look at, for example, a jupyter kernel, it is already a host process launched and managed by jupyter-lab or whatever, which talks via network chatter.
> So now each kernel process has to manage another process, which it talks to via IPC?
Yes, that's right: you would have to have another process for each existing process that needs its own compiler instance, if going with the IPC approach. However, unless we're talking about an obscene number of processes, this is probably not going to be much of an issue. If anything, keeping it out-of-process might help improve matters if it's currently doing things synchronously that could be asynchronous.
Of course, even though this isn't really much of an issue, you could still avoid it by going with another approach if it really were a huge problem. For example, assuming the respective Jupyter kernel already needs Node.js in-process somehow, you could just as well have a version of tsc compiled into a Node-API module and do everything in-process.
> Certainly, there are no obvious performance reasons to avoid IPC, but I think there are use cases where having the compiler embedded makes more sense.
Except for browsers and edge runtimes, it should be possible to make an embedded version of the compiler if necessary. I'm not sure whether the TypeScript team will maintain such a version on their own; it remains to be seen exactly what approach they take for IPC.
I'm not a TypeScript compiler developer, but I hope these answers are helpful in some way anyway.
> It will not conflict with other applications that do the same thing.
It is possible not to conflict with existing parallel deployments, but depending on your IPC mechanism, it is by no means assured when you're not forking and are instead launching an external process.
For example, it could by default bind a specific default port. This would work in the 'naive' situation where the client doesn't specify a port and no parallel instances are running. ...but if two instances are running, they'll both try to use the same port. Arbitrary applications can connect to the same port. Maybe you want to share a single compiler service instance between client apps in some cases?
Not conflicting is not, by default, a property of parallel binary deployment with communication via IPC.
IPC is, by definition, intended to be accessible by other processes.
Jupyter kernels for example are launched with a specified port and a secret by cli argument if I recall correctly.
However, you'd have to rely on that mechanism being built into the typescript compiler service.
...ie. it's a bit complicated right?
Worth it for the speedup? I mean, sure. Obviously there is a reason people don't embed postgres. ...but they don't try to ship a copy of it alongside their apps either (usually).
I fail to see how starting another process under an OS like Linux or Windows can be conflicting. Don't share resources, and you're conflict-free.
> IPC is, by definition intended to be accessible by other processes
Yes, but you can limit the visibility of the IPC channel to a specific process, in the form of stdin/stdout pipe between processes, which is not shared by any other processes. This is enough of a channel to coordinate creation of a more efficient channel, e.g. a shmem region for high-bandwidth communication, or a Unix domain socket (under Linux, you can open a UDS completely outside of the filesystem tree), etc.
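A sketch of that private channel in Go, using `cat` as a stand-in for a compiler server (assumes a POSIX system; the request line is made up): the pipes are inherited by exactly one child, so there is no port or named resource to conflict over.

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

// roundTrip spawns a child wired up over private stdin/stdout pipes,
// sends one request line, and reads one response line back.
func roundTrip(msg string) (string, error) {
	cmd := exec.Command("cat") // stand-in for the compiler server binary
	stdin, err := cmd.StdinPipe()
	if err != nil {
		return "", err
	}
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return "", err
	}
	if err := cmd.Start(); err != nil {
		return "", err
	}
	fmt.Fprintln(stdin, msg)
	line, err := bufio.NewReader(stdout).ReadString('\n')
	stdin.Close() // EOF tells the child we're done
	cmd.Wait()
	return strings.TrimSpace(line), err
}

func main() {
	resp, err := roundTrip(`{"method":"check","file":"a.ts"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp)
}
```

Nothing else on the machine can see this channel, no matter how many parallel copies of the parent and child are running.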
A Unix shell is a thing that spawns and communicates with running processes all day long, and I've yet to hear about any conflicts arising from its normal use.
You can get a conflicting resource in a shell by typing 'npm start' twice in two different shells, and it'll fail with 'port in use'.
My point is that you can do non-conflicting IPC, but by default IPC is conflicting, because it is intended to be.
You cannot bind the same port, semaphore, or whatever if someone else is using it. That's the definition of having addressable IPC.
I don't think arguing otherwise is defensible or reasonable.
Worrying that a network service might bind the same port as another copy of the same service deployed on the same machine is an entirely reasonable concern.
I think we're getting into the weeds here with an arbitrary 'die on this hill' point about semantics that I really don't care about.
TLDR: If you ship an IPC binary, you have to pay attention to these concerns. Pretending otherwise means you're not doing it properly.
It's not an idle concern; it's a real concern that real actual application developers have to worry about, in real world situations.
I've had to worry about it.
I think it's not unfair to expect this to be more problematic than the current, very easy, embedded story; it's a concern that simply does not exist when you embed a library instead of communicating over IPC.
Sure, some IPC approaches can run into issues, such as using TCP connections over loopback. However, I'm describing an approach that should never conflict since the resources that are shared are inherited directly, and since the binary would be embedded in your application bundle and not shared with other programs on the system. A similar example would be language servers which often work this way: no need to worry about conflicts between different instances of language servers, different language servers, instances of different versions of the same language server, etc.
There's also some precedent for this approach: as far as I understand it, it's also what the Go-based ESBuild tool does[1], which is likewise popular in the Node.js ecosystem (it is used by Vite).
> For example, it could by default bind a specific default port. This would work in the 'naive' situation where the client doesn't specify a port and no parallel instances are running. ...but if two instances are running, they'll both try to use the same port. Arbitrary applications can connect to the same port. Maybe you want to share a single compiler service instance between client apps in some cases?
> Not conflicting is not a property of parallel binary deployment and communication via IPC by default.
> IPC is, by definition intended to be accessible by other processes.
Yes, although the set of processes the IPC mechanism is accessible to can be bound to just one process, and there are cross-platform mechanisms to achieve this on popular desktop OSes. I can't speak to why one would choose TCP over stdin/stdout, but I don't expect that tsc will pick a method of IPC that is flawed in this way, since that would not follow precedent anyway (e.g. tsserver already uses stdio[2]).
> Jupyter kernels for example are launched with a specified port and a secret by cli argument if I recall correctly.
> However, you'd have to rely on that mechanism being built into the typescript compiler service.
> ...ie. it's a bit complicated right?
> Worth it for the speedup? I mean, sure. Obviously there is a reason people don't embed postgres. ...but they don't try to ship a copy of it along side their apps either (usually).
Well, I honestly wouldn't go as far as to say it's complicated. There's a ton of precedent for how to solve this issue without any conflict. I can't speak to why Jupyter kernels use TCP for IPC instead of stdio; I'm sure they have reasons why it makes more sense in their case. For example, in some use cases it could be faster, or perhaps just simpler, to have multiple channels of communication, and doing this with multiple pipes to a subprocess is a little more complicated and less portable than stdio. Same for shared memory: you can always have a protocol to negotiate shared memory across some serial IPC mechanism, but you'll almost always need a couple of different shared-memory backends, and it adds some complexity. So that's one potential reason.
(edit: Another potential reason to use TCP sockets is, of course, if your "IPC" is going across the network sometimes. Maybe this is of interest for Jupyter, I don't know!)
That said, in this case, I think it's a non-issue. ESBuild and tsserver demonstrate sufficiently that communication over stdio is sufficient for these kinds of use cases.
And of course, even if the Jupyter kernel itself has to speak the TCP IPC protocols used by Jupyter, it can still subprocess a theoretical tsc and use stdio-based IPC. Not much complexity to speak of.
Also, unrelated, but it's funny you should mention postgres, because there have actually been several projects that deliver an "embeddable" subset of postgres. Of course, the reasons you would not necessarily want to embed a database engine are quite different from this case: here IPC is merely an implementation detail, whereas for a database the network protocol and centralized server are essentially the entire point.
[1]: https://github.com/evanw/esbuild/blob/main/cmd/esbuild/stdio...
[2]: https://github.com/microsoft/TypeScript/wiki/Standalone-Serv...
With tsc in Go, that's no longer true. Previously you only had to figure out how to run JS; now you have to figure out both how to manage a native process _and_ how to run the JS output.
This obviously matters less for situations where you have a clear separation between the build stage and the runtime stage. Most people complaining here seem to be talking about environments where compilation is tightly integrated with the execution of the compiled JS.
Porting to Go was the right decision, but part of me would've liked to see a different approach to solve the performance issue. Here I'm not thinking about the practicality, but simply about how cool it would've been if performance had instead been improved via:
- porting to OCaml. I contributed to Flow once upon a time, and a version of TypeScript in OCaml would've been huge in unifying the efforts here.
- porting to Rust. Having "official" TypeScript crates in rust would be huge for the Rust javascript-tooling ecosystem.
- a new runtime (or compiler!). I'm thinking here an optional, stricter version of TypeScript that forbids all the dynamic behaviours that make JavaScript hard to optimize. I'm also imagining an interpreter or compiler that can then use this stricter TypeScript to run faster or produce an efficient native binary, skipping JavaScript altogether and using types for optimization.
This last option would've been especially exciting since it is my opinion that Flow was hindered by the lack of dogfooding, at least when I was somewhat involved with the project. I hope this doesn't happen in the TypeScript project.
None of these are questions, I just wanted to share these fanciful perspectives. I do agree Go sounds like the right choice, and in any case I'm excited about the improvement in performance and memory usage. It really is the biggest gripe I have with TypeScript right now!
Rust and OCaml are _maybe_ prettier to look at, but for the average TypeScript developer Go is a much more understandable target IMO.
Lifetimes and ownership are not trivial topics to grasp, and they add overhead (as discussed here: https://github.com/microsoft/typescript-go/discussions/411) that not all contributors might grasp immediately.
(FWIW, it must have been a very well-thought-out rationale.)
Edit: watched the relevant clip from the GH discussion; it makes sense. Maybe push NativeAOT to be as good?
I am (positively) surprised Hejlsberg has not used this opportunity to push C#: a rarity in the software world where people never let go of their darlings. :)
And lightly edited transcript here: https://github.com/microsoft/typescript-go/discussions/411#d...
In a game engine, you probably aren't recreating every game object from frame to frame. But in a compiler, you're creating new objects for every file you parse. That's a huge amount of work for the GC.
Basically I'd be interested to know what the bottlenecks in tsc are, whether there's much low-hanging fruit, and if not why not.
So this might be a very different performance profile.
*edit* I had initially written "single-pass", but in the context of a compiler, that's ambiguous.
In a situation like a game engine I think 1.5x is reasonable, but TS has a huge amount of polymorphic data reading that defeats a lot of the optimizations in JS engines that get you to monomorphic property access speeds. If JS engines were better at monomorphizing access to common subtypes across different map shapes maybe it'd be closer, but no engine has implemented that or seems to have much appetite for doing so.
Also for command-line tools, the JIT warmup time can be pretty significant, adding a lot to overall command-to-result latency (and in some cases even wiping out the JIT performance entirely!)
I really wish JS VMs would invest in this. The DOM is full of large inheritance hierarchies, with lots of subtypes, so a lot of DOM code is megamorphic. You can do tricks like tearing methods off Element to use as plain functions instead of the usual virtual dispatch, but that's quite a pain.
None of these things say "this is a good way to build a large compiler suite that we're building for performance".
A few things mentioned in an interview:
- Cannot build native binaries from TypeScript.
- Cannot as easily take advantage of concurrency in TypeScript.
- Writing fast TypeScript requires you to write things in a way that isn't 'normal' idiomatic TypeScript. Easier to onboard new people onto a more idiomatic codebase.
- C++ with thousands of tiny objects and virtual function calls?
- JavaScript where data is stored in large Int32Arrays and operated on like a VM?
If you know anything about how JavaScript works, you know there is a lot of costly and challenging resource management.
(disclaimer: I am a biased Go fan)
In JavaScript, you can't even put 8M keys in a Hashmap; inserts take > 1 second per element:
Just going from ESLint to Biome is more than a 10x improvement... it's not just 1.5x because it's not just the runtime logic at play for build tools.
JS is 10x-100x slower than native languages (C++, Go, Rust, etc) if you write the code normally (i.e. don't go down the road of uglifying your JS code to the point where it's dramatically less pleasant to work with than the C++ code you're comparing to).
It's kind of annoying how even someone like Hejlsberg is throwing around words like "native" in such an ambiguous, sloppy, and prone-to-be-misleading way on a project like this.
"C++" isn't native. The object code that it gets compiled to, large parts of which are in the machine's native language, is.
Likewise "TypeScript" isn't non-native in any way that doesn't apply to any other language. The fact that tsc emits JS instead of the machine's native language is what makes TypeScript programs (like tsc itself) comparatively slow.
It's the compilers that are important here, not the languages. (The fact that the TypeScript team was committed to making the typescript-go compiler nearly line-for-line equivalent to the production TypeScript compiler, which is written in TypeScript itself, really highlights this.)
The question comes up and he quickly glosses over it, but by the sound of it he isn't impressed with the performance or support of AOT compiled C# on all targeted platforms.
[19:14] why not C#?
Dimitri: Was C# considered?
Anders: It was, but I will say that I think Go definitely is -- it's, I'd say, the lowest-level language we can get to and still have automatic garbage collection. It's the most native-first language we can get to and still have automatic GC. In C#, it's sort of bytecode first, if you will; there is some ahead-of-time compilation available, but it's not on all platforms and it doesn't have a decade or more of hardening. It was not geared that way to begin with. Additionally, I think Go has a little more expressiveness when it comes to data structure layout, inline structs, and so forth. For us, one additional thing is that our JavaScript codebase is written in a highly functional style -- we use very few classes; in fact, the core compiler doesn't use classes at all -- and that is actually a characteristic of Go as well. Go is based on functions and data structures, whereas C# is heavily OOP-oriented, and we would have had to switch to an OOP paradigm to move to C#. That transition would have involved more friction than switching to Go. Ultimately, that was the path of least resistance for us.
Dimitri: Great -- I mean, I have questions about that. I've struggled in the past a lot with Go in functional programming, but I'm glad to hear you say that those aren't struggles for you. That was one of my questions.
Anders: When I say functional programming here, I mean sort of functional in the plain sense that we're dealing with functions and data structures as opposed to objects. I'm not talking about pattern matching, higher-kinded types, and monads.
[12:34] why not Rust?
Anders: When you have a product that has been in use for more than a decade, with millions of programmers and, God knows how many millions of lines of code out there, you are going to be faced with the longest tail of incompatibilities you could imagine. So, from the get-go, we knew that the only way this was going to be meaningful was if we ported the existing code base. The existing code base makes certain assumptions -- specifically, it assumes that there is automatic garbage collection -- and that pretty much limited our choices. That heavily ruled out Rust. I mean, in Rust you have memory management, but it's not automatic; you can get reference counting or whatever you could, but then, in addition to that, there's the borrow checker and the rather stringent constraints it puts on you around ownership of data structures. In particular, it effectively outlaws cyclic data structures, and all of our data structures are heavily cyclic.
(https://www.reddit.com/r/golang/comments/1j8shzb/microsoft_r...)
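To make the cyclic-data-structure point concrete, here's an illustrative Go sketch (the node kinds are made up): AST nodes link both down to children and up to parents, so every parent/child pair is a cycle, which a tracing GC reclaims automatically but reference counting alone cannot.

```go
package main

import "fmt"

// Node is a toy AST node: children point up to their parent and the
// parent points down to its children, forming reference cycles.
type Node struct {
	Kind     string
	Parent   *Node
	Children []*Node
}

func newTree() *Node {
	root := &Node{Kind: "SourceFile"}
	id := &Node{Kind: "Identifier", Parent: root}
	root.Children = append(root.Children, id)
	// Under pure reference counting, root and id would keep each other
	// alive forever; Go's tracing GC frees the whole cycle once the last
	// outside reference to either node is dropped.
	return root
}

func main() {
	tree := newTree()
	fmt.Println(tree.Children[0].Parent == tree) // the cycle, in one line
}
```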
They could have used static classes in C#.
- C# Ahead of Time compiler doesn't target all the platforms they want.
- C# Ahead of Time compiler hasn't been stressed in production as many years as Go.
- The core TypeScript compiler doesn't use any classes; Go is functions and datastructures whereas C# is heavily OOP, so they would have to switch paradigms to use C#.
- Go has better control of low level memory layouts.
- Go was ultimately the path of least resistance.
https://github.com/microsoft/typescript-go/discussions/411#d...
For a daemon like an LSP I reckon C# would've worked.
Graph for the differences in Runtime, Runtime Trimmed, and AOT .NET.
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/n...
I doubt the ability to cross-compile tsc would have been a major factor. These artifacts are always produced on dedicated platforms via separate build stages before publishing and sign-off. Indeed, Go is better at native cross-compilation, whereas .NET NativeAOT can only do cross-arch, and limited cross-OS by tapping into the Zig toolchain.
The gc compiler considers it a first-class build target.
Like C#, the binary tends to be on the larger side[1], which makes it less than ideal for driving a marketing website, but that's not really a problem here. Installation of tools before use is already the norm.
[1] The tinygo compiler can produce very small WASM binaries, comparable to C, albeit with some caveats (surprisingly, the extra data gc produces isn't just noise).
I am sure either it is good enough and the team decided to choose Go anyway, or it is simply not important for this project.
> I doubt the ability to cross-compile TSC would have been a major factor.
I never said it was a major factor (I even said "I don't think it would be a deal breaker"), but it is a factor nonetheless. It definitely helps a lot during cross-platform debugging, since you don't need to set up a whole toolchain just to test a bug on another platform; instead you can simply build a binary on your development machine and send it to the other machine.
But the only reason I asked this is because I was curious really, no need to be so defensive.
Where do you think Go gets those chubby static linked executables from?
That people have to apply UPX on top.
He clearly knows all this, so the obvious inference is that the decision isn't really about features. The most likely problem is a lack of confidence in the .NET team, or some political problems/bad blood inside Microsoft. Perhaps he's tried to use it and been frustrated by bugs; the comment about "battle hardened" feels like where the actual rationale is hiding. We're not getting the full story here, that's clear enough.
I'm honestly surprised Microsoft's policies allowed this. Normally companies have rules that require dogfooding for exactly this reason. A project like this is not terribly urgent, and it has political heft within Microsoft. They could presumably have gotten the .NET team to fix bugs or make optimizations they need, at least a lot more easily than getting the Go team to do it. Yet they chose not to. Who would have any confidence in adopting .NET for performance-sensitive programs now? Even the father of .NET doesn't want to use it. Anyone who wants to challenge a decision to adopt it can just point at Microsoft's own actions as evidence.
It is especially jarring given that they are a first-party customer who would have no trouble getting necessary platforms supported or projects expedited (like NativeAOT-LLVM-WASM) in .NET. And the statements of Anders Hejlsberg himself, which contradict the facts about .NET as a platform, make this even more unfortunate.
There's an interesting contrast here with Java, where javac was ported to Java from C++ very early on in its lifecycle. And the Java AOT compiler (native image) is not only fully written in Java itself, everything from optimizations to code generation, but even the embedded runtime is written in Java too. Whereas in the .NET world Roslyn took quite a long time to come along, it wasn't until .NET 6, and of course MS rejected it from Windows more or less entirely for the same sorts of rationales as what Anders provides here.
It was introduced back then with .NET Framework 4.6 (C# 6) - a loong time ago (July 2015). The OSS .NET has started with Roslyn from the very beginning.
> And the Java AOT compiler (native image) is not only fully written in Java itself, everything from optimizations to code generation, but even the embedded runtime is written in Java too.
NativeAOT uses the same architecture. There is no C++ besides GC and pre-existing compiler back-end (both ILC and RyuJIT drive it during compilation process). Much like GraalVM's Native Image, the VM/host, type system facilities, virtual/interface dispatch and everything else it could possibly need is implemented in C# including the linker (reachability analysis/trimming, kind of like jlink) and optimizations (exact devirtualization, cctor interpreter, etc.).
In the end, it was the TypeScript team members who worked on this port, not Anders Hejlsberg himself, or at least that's my understanding. So we need to take this into account when judging what is being communicated.
no? https://github.com/microsoft/typescript-go/graphs/contributo...
First he mentions the no-classes thing. It is hard to see how that would matter even for automated porting because, like you said, he could just use static classes, and even a static using statement on the calling side.
Another of his reasons was that Go is good at processing complex graphs, but it is hard to imagine how Go would be better at that than C#. What language feature does Go have, that C# lacks, which supports this? I don't think anyone will be able to demonstrate one. That distinction makes sense for Go vs. Rust, but not for Go vs. C#.
As for the platform / AOT argument, I don't know as much about that, but I thought it was supposed to be possible now. If it isn't, it seems like it would be better for Microsoft to beef that up than to allow a vote of no confidence to be cast like this.
Cue rust devotees in 3, 2, ..
If you are a rust devotee, you can use https://github.com/FractalFir/rustc_codegen_clr to compile your rust code to the same .NET runtime as C#. The project is still in the works but support is said to be about 95% complete.
This is a big concern to me. Could you expand on what work is left to do for the native implementation of gsc? In particular, can you make an argument for why that last bit of work won't reduce these 10x figures we're seeing? I'm worried the marketing got ahead of the engineering.
Are there any insights on the platform decision?
- There is an esbuild process running in the background.
- If I look at the JavaScript returned to the browser, it is transpiled without any types present.
So even though the URLs in Vite dev mode look like they're pointing to "raw" TypeScript files, they're actually transpiled JavaScript, just not bundled.
I could be incorrect, of course, but it sure seems to me like Vite is using ESBuild on the Node.JS side and not tsc on the web browser side.
One thing I'm curious about: what about updating the original TypeScript-based compiler to target WASM and/or native code, without needing to run in a JavaScript VM?
Was that considered? What would (at a high level) the obstacles be to achieving similar performance to Golang?
Edit: Clarified to show that I indicate updating the original compiler.
JavaScript, like other dynamic languages, runs well with a JIT because the runtime can optimize for hotspots and common patterns (e.g. this method's first argument is generally an object with this shape, so write a fast path for that case). In theory you could write an AOT compiler for TypeScript that made some of those inferences at compile time based on type definitions, but
(a) nobody's done that
(b) it still wouldn't be as fast as native, or much faster than JIT
(c) it would be limited - any optimizations would die as soon as you used an inherently dynamic method like JSON.parse()
That is a good question.
One of the nice advantages of js is that it can run so many places. Will TypeScript still be able to enjoy that legacy going forward, or is native only what we should expect in 7+?
Will the questions and answers be posted anywhere outside of Discord after it's concluded?
Maybe I'm unique, but I prefer not to have the compiler tied up with a particular dependency-management/library system.
Simplicity is a feature, not a bug. Overly expressive languages become nightmares to work with and reason about (see: C++ templates)
Go's compilation times are also extremely fast compared to Rust, which is a non-negligible cost when iterating on large projects.
C and Rust both have predictable memory behaviour, Go does not.
(i.e., as opposed to reference counting, where if you have cyclic loops you need to manually go in and "break" the loop so memory gets reclaimed.)
It's actually pretty easy to do something like this in C, using something like an arena allocator, or, honestly, leaking memory. I actually wrote a little allocator yesterday that just dumps memory into a linked list; it's not very complicated: http://github.com/danieltuveson/dsalloc/
You allocate wherever you want, and when you're done with the big messy memory graph, you throw it all out at once.
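The same allocate-then-drop-everything pattern sketches naturally in Go too (a toy chunked arena; the names are illustrative, not from dsalloc):

```go
package main

import "fmt"

const chunkSize = 1024

// Obj is whatever you'd allocate into the arena.
type Obj struct{ Kind string }

// Arena hands out pointers from fixed-capacity chunks. Because append
// never reallocates within a chunk's capacity, returned pointers stay
// valid; dropping the Arena and those pointers frees everything at once
// via the GC, echoing the "throw it all out" approach above.
type Arena struct {
	cur []Obj // current chunk; full chunks stay alive via handed-out pointers
}

func (a *Arena) New(kind string) *Obj {
	if len(a.cur) == cap(a.cur) {
		a.cur = make([]Obj, 0, chunkSize) // start a fresh chunk
	}
	a.cur = append(a.cur, Obj{Kind: kind})
	return &a.cur[len(a.cur)-1]
}

func main() {
	var a Arena
	n := a.New("Identifier")
	fmt.Println(n.Kind)
}
```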
There are obviously a lot of other reasons to choose go over C, though (easier to learn, nicer tooling, memory safety, etc).
Really interesting news, and uniquely dismaying to me as someone who is fighting tooth and claw to keep JS language tooling in the JS ecosystem.
My question has to do with Ryan's statement:
> We also considered hybrid approaches where certain components could be written in a native language, while keeping core typechecking algorithms in JavaScript
I've experimented deeply in this area (maybe 15k hours invested in BABLR so far) and what I've found is that it's richly rewarding. Javascript is fast enough for what is needed, and its ability to cache on immutable data can make it lightning fast not through doing more work faster, but by making it possible to do less work. In other words, change the complexity class not the constant factor.
Is this a direction you investigated? What made you decide to try to move sideways instead of forwards?
Have you considered the man-years and energy you're making everyone waste? Just as an example, I wonder what the carbon footprint of ESLint has been over the years...
Now, it pales in comparison to Python, but still...
TS currently wastes tons of resources (most especially people's time) by not being able to share its data and infrastructure with other tools and ecosystems. There would be much bigger wins from tackling that systemic problem, but then you wouldn't be able to say something as glib as "TS is 10x faster". Only work that can be distilled to a metric gets done, because that's how you get a promotion when you work for a company like Microsoft.
Thank you Typescript team for chasing those promotions!
From a performance perspective, I'd expect C++ and Rust to be much easier targets too, since I've seen quite a few industrial Go services be rewritten in C++/Rust after they fail to meet runtime performance / operability targets.
Wasn't there a recent study from Google that came to the same conclusion? (They see improved productivity for Go with junior programmers that don't understand static typing, but then they can never actually stabilize the resulting codebase.)
One trade off is if the code for TS is no longer written in TS, that means the core team won’t be dogfooding TS day in and day out anymore, which might hurt devx in the long run. This is one of the failure modes that hurt Flow (written in OCaml), IMO. Curious how the team is thinking about this.
Does that mean more "support rotations" for TS compiler engineers on GitHub? Are there full-stack TS apps that the TS team owns that ownership can be spread around more? Will the TS team do more rotations onto other teams at MSFT?
Second, JavaScript already executes quickly. Aside from arithmetic operations, it has now reached performance parity with Java, and highly optimized JavaScript (typed arrays and an understanding of how data is accessed from arrays and objects in memory) can come within 1.5x the execution speed of C++. At this point, most of the slowness of JavaScript is related to things other than code execution, such as garbage collection, unnecessary framework code bloat, and poorly written code.
That being said, it isn't realistic to expect measurably faster execution times by replacing JavaScript with a WASM runtime. This is all the more true considering that many performance problems with JavaScript in the wild are human problems more than technology problems.
Third, WASM has nothing to do with JavaScript, according to its originators and maintainers. WASM was never created to compete with, replace, modify, or influence JavaScript. WASM was created as a language-agnostic Flash replacement in a sandbox. Since WASM executes in an agnostic sandbox, the cost of replacing an existing runtime is high: a JavaScript runtime is already available, whereas a WASM runtime is more akin to installing a desktop application on first run.
The rest of the 10x comes from multi-threading, which wasn't possible to do in a simple way in the JS compiler (efficient multithreading while writing idiomatic code is hard in JS).
JavaScript is very fast for single-threaded programs with monomorphic functions, but in the TypeScript compiler's case, the polymorphic functions and opportunity for parallelization mean that Go is substantially faster while keeping the same overall program structure.
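To make the monomorphic/polymorphic distinction concrete, here is a hypothetical TypeScript sketch (not code from the compiler itself): the same property load stays fast while every argument shares one hidden class, and gets slower once several object shapes flow through it.

```typescript
// Hypothetical sketch of call-site polymorphism. The function body is
// identical in both cases, but the engine's inline caches stay fast only
// while every argument shares the same hidden class (property layout).
interface Named {
  name: string;
}

function label(node: Named): string {
  return node.name; // property load: fast while the shape is stable
}

// Monomorphic: every object has the same shape { name }.
console.log(label({ name: "Identifier" }));

// Polymorphic: extra properties create new hidden classes, so the same
// property load now has to dispatch over several object shapes.
console.log(label({ name: "CallExpression", args: [] } as Named));
console.log(label({ name: "Literal", value: 42, raw: "42" } as Named));
```

A type checker is full of functions like `label` that are fed many differently-shaped AST nodes, which is why this cost shows up so heavily in the compiler's profile.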
What I do know is that some people complain about compile times in their code that can last up to 10 minutes. I had a personal application of more than 60k lines of code, and tsc would compile it in about 13 seconds on my very old computer. SWC would compile it in about 2.5 seconds. This tells me the far greater opportunity for performance improvement is not in modifying the compiler but in restructuring the application itself.
WTF.
As a result the amount of libraries that ship flow types has absolutely dwindled over the years, and now typescript has completely taken over.
Yet projects inevitably get to the stage where a more native representation wins out. I mean, I can't think of a time a high profile project written in a lower level representation got ported to a higher level language.
It makes me think I should be starting any project I have in the lowest level representation that allows me some ergonomics. Maybe more reason to lean into Zig? I don't mean for places where something like Rust would be appropriate. I mean for anything I would consider using a "good enough" scripting language.
It honestly has me questioning my default assumption to use JS runtimes on the server (e.g. Node, deno, bun). I mean, the benefit of using the same code on the server/client has rarely if ever been a significant contributor to project maintainability for me. And it isn't that hard these days to spin up a web server with simple routing, database connectivity, etc. in pretty much any language including Zig or Go. And with LLMs and language servers, there is decreasing utility in familiarity with a language to be productive.
It feels like the advantages of scripting languages are being eroded away. If I am planning a career "vibe coding" or prompt engineering my way into the future, I wonder how reasonable it would be to assume I'll be doing it to generate lower level code rather than scripts.
Prisma is currently being rewritten from Rust to TypeScript: https://www.prisma.io/blog/rust-to-typescript-update-boostin...
> Yet projects inevitably get to the stage where a more native representation wins out.
I would be careful about extrapolating the performance gains achieved by the Go TypeScript port to non-compiler use cases. A compiler is perhaps the worst use case for a language like JS, because it is both (as Anders Hejlsberg refers to it) an "embarrassingly parallel task" (because each source file can be parsed independently), but also requires the results of the parsing step to be aggregated and shared across multiple threads (which requires shared memory multithreading of AST objects). Over half of the performance gains can be attributed to being able to spin up a separate goroutine to parse each source file. Anders explains it perfectly here: https://www.youtube.com/watch?v=ZlGza4oIleY&t=2027s
We might eventually get shared memory multithreading (beyond Array Buffers) in JS via the Structs proposal [1], but that remains to be seen.
[1] https://github.com/tc39/proposal-structs?tab=readme-ov-file
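A small Node sketch (assuming Node 17+, where `structuredClone` is a global) of why worker-based parallelism is costly for a compiler today: handing an AST to another thread via `postMessage` uses the structured clone algorithm, so the other thread gets a deep copy rather than a shared reference, and mutations never propagate back.

```typescript
// Hypothetical AST fragment. postMessage() between worker threads uses
// the same structured-clone algorithm demonstrated here: the receiving
// thread gets a deep copy of the object graph, not a shared reference.
const ast = {
  kind: "SourceFile",
  statements: [{ kind: "ExpressionStatement", checked: false }],
};

const copy = structuredClone(ast);

console.log(copy === ast);              // false: a distinct object graph
copy.statements[0].checked = true;      // a "checker result" on the copy...
console.log(ast.statements[0].checked); // false: ...never reaches the original
```

Goroutines sidestep this entirely because they share one heap, which is what the Structs proposal would (partially) bring to JS.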
As for the "compilers are special" reasoning, I don't ascribe to it. I suppose because it implies the opposite: that something (other than a compiler) is especially suited to run well in a scripting language. But the former doesn't imply the latter in reality, so the case should be made independently. The Prisma case is one: you are already dealing with JavaScript objects, so it is wise to stay in JavaScript. The old cases where I would choose the scripting language (familiarity, speed of adding new features, ability to hire a team quickly) seem to be eroding in the face of LLMs.
WASM is used to generate the query plan, but query execution now happens entirely within TypeScript, whereas under the previous architecture both steps were handled by Rust. So in a very literal sense some of the Rust code is being rewritten in TypeScript.
> Basically, if the majority of your application is already in JavaScript and expects primarily to interact with other code written in JavaScript, it usually doesn't make sense to serialize your data, pass it to another runtime for some processing, then pass the result back.
My point was simply to refute the assertion that once software is written in a low level language, it will never be converted to a higher level language, as if low level languages are necessarily the terminal state for all software, which is what your original comment seemed to be suggesting. This feels like a bit of a "No true Scotsman" argument: https://en.wikipedia.org/wiki/No_true_Scotsman
> As for the "compilers are special" reasoning, I don't ascribe to it.
Compilers (and more specifically lexers and parsers) are special in the sense that they're incredibly well suited for languages with shared memory multithreading. Not every workload fits that profile.
> The old cases I would choose the scripting language (familiarity, speed of adding new features, ability to hire a team quickly) seem to be eroding in the face of LLMs.
I'm not an AI pessimist, but I'm also not an AI maximalist who is convinced that AI will completely eliminate the need for human code authoring and review, and as long as humans are required to write and review code, then those benefits still apply. In fact, one of the stated reasons for the Prisma rewrite was "skillset barriers". "Contributing to the query engine requires a combination of Rust and TypeScript proficiency, reducing the opportunity for community involvement." [1]
[1] https://www.prisma.io/blog/from-rust-to-typescript-a-new-cha...
That is why I am saying your evidence is a red herring. It is a case where a reasonable decision was made to rewrite in JavaScript/TypeScript but it has nothing to do with the merits of the language and everything to do with the environment that the entire system is running in. They even state the Rust code is fast (and undoubtedly faster than the JS version), just not fast enough to justify the IPC cost.
And it in no way applies to the point I am making, where I explicitly question "starting a new project" for example "my default assumption to use JS runtimes on the server". It's closer to a "Well, actually ..." than an attempt to clarify or provide a reasoned response.
The world is changing before our eyes. The coding LLMs we have already are good but the ones in the pipeline are better. The ones coming next year are likely to be even better. It is time to revisit our long held opinions. And in the case of "reads data from a OS socket/file-descriptor and writes data to a OS socket/file-descriptor", which is the case for a significant number of applications including web servers, I'm starting to doubt that choosing a scripting language for that task, as I once advocated, is a good plan given what I am seeing.
First of all, I would argue that software rewrites are a bad proxy metric for language quality in general. Language rewrites don't measure languages purely on a qualitative scale, but rather on a scale of how likely they are to be misused in the wrong problem domain.
Low level languages tend to have a higher barrier to entry, which as a result means they're less likely to be chosen on a whim during the first iteration of a project. This phenomenon is exhibited not just at the macroscopic level of language choice, but oftentimes when determining which data structures and techniques to use within a specific language. I've very seldom found myself accidentally reaching for a Uint8Array or a WeakRef in JS when a normal array or reference would suffice, and then having to rewrite my code, not because those solutions are superior, but because they're so much less ergonomic that I'm only likely to use them when I'm relatively certain they're required.
This results in obvious selection bias. If you were to survey JS developers and ask how often they've rewritten a normal reference in favor of a WeakRef vs the opposite migration, the results would be skewed because the cost of dereferencing WeakRefs is high enough that you're unlikely to use them hastily. The same is true to a certain extent in regards to language choice. Developers are less likely to spend time appeasing Rust's borrow checker when PHP/Ruby/JS would suffice, so if a scripting language is the best choice for the problem at hand, they're less likely to get it wrong during the first iteration and have to suffer through a massive rewrite (and then post about it on HN). I've seen plenty of examples of competent software developers saying they'd choose a scripting language in lieu of Go/Rust/Zig. Here's the founder of Hashicorp (who built his company on Go, and who's currently building a terminal in Zig), saying he'd choose PHP or Rails for a web server in 2025: https://www.youtube.com/watch?v=YQnz7L6x068&t=1821s
That is not my intention. Perhaps you are reading absolutes and chasing after black and white statements. When I say "it makes me think I should ..." I am not saying: "Everyone everywhere should always under any circumstances ...". It is a call to question the assumption, not to make emphatic universal decisions on any possible project that could ever be conceived. That would be a bad faith interpretation of my post. If that is what you are arguing against, consider if you really believe that is what I meant.
So my point stands: I am going to consider this more deeply rather than default assuming that an interpreted scripting language is suitable.
> Low level languages tend to have a higher barrier to entry,
I almost think you aren't reading my post at this point and are just arguing with a strawman you invented in your head. But I am assuming good faith on your part here, so once again I'll just repeat myself again and again: LLMs have already changed the barrier to entry for low-level languages and they will continue to do so.
The first comment I wrote in this thread was a response to the following quote: "Yet projects inevitably get to the stage where a more native representation wins out." Inevitable means impossible to evade. That's about as close to a black and white statement as possible. You're also completely ignoring the substance of my argument and focusing on the wording. My point is that language rewrites (like the TS rewrite that sparked this discussion) are a faulty indicator of scripting language quality.
> I almost think you aren't reading my post at this point and are just arguing with a strawman you invented in your head. But I am assuming good faith on your part here, so once again I'll just repeat myself again and again: LLMs have already changed the barrier to entry for low-level languages and they will continue to do so.
And I've already said that I disagree with this assertion. I'll just quote myself in case you haven't read through all my comments: "I'm not an AI pessimist, but I'm also not an AI maximalist who is convinced that AI will completely eliminate the need for human code authoring and review, and as long as humans are required to write and review code, then those benefits [of scripting languages] still apply." I was under the impression that I didn't have to keep restating my position.
I don't believe that AI has eroded the barriers of entry to the point where the average Ruby or PHP developer will enjoy passing around memory allocators in Zig while writing API endpoints. Neither of us can be 100% certain about what the future holds for AI, but as someone else pointed out, making technical decisions in the present based on AI speculation is a gamble.
Inevitable:
as is certain to happen; unavoidably.
informal
as one would expect; predictably.
"inevitably, the phone started to ring just as we sat down"
Which interpretation of the word is "good faith" considering the rest of my post? If I said "If you drink and drive you will inevitably get into an accident" - would you argue against that statement? Would you argue with Google and say "I have sat down before and the phone didn't ring"? It is Hacker News policy and just good internet etiquette to argue with good faith in mind. I find it hard to believe you could have read my entire post and come away with the belief of absolutism.
edit: Just to add to this, your interpretation assumes I think Django (the Python web application framework) will unavoidably be rewritten in a lower level language. And Ruby on Rails will unavoidably be rewritten. Do you believe that is what I was saying? Do you believe that I actually believe that?
> If I said "If you drink and drive you will inevitably get into an accident" - would you argue against that statement?
If we were having a discussion about automobile safety and you wrote several hundred words about why a specific type of accident isn't indicative of a larger trend, I wouldn't respond by cherry picking the first sentence of your comment, and quoting Google definitions about a phone ringing.
I used Google to point out that your argument, which hinged on your definition of what the word "inevitable" means, is the narrowest possible interpretation of my statement. An interpretation so narrow that it indicates you are arguing in bad faith, which I believe to be the case. You are accusing me of making an argument that I did not make by accusing me of not understanding what a word means. You are wrong on both counts, as demonstrated.
The only person thinking in black in white is the figment of me in your imagination. I've re-read the argument chain and I'm happy leaving my point where it is. I don't think your points, starting with your attempt at a counter example with Prisma, nor your exceptional compiler argument, nor any of the other points you have tried support your case.
My argument does not hinge upon the definition of the word inevitable. You originally said "I mean, I can't think of a time a high profile project written in a lower level representation got ported to a higher level language."
I gave a relatively thorough accounting of why you've observed this, and why it doesn't indicate what you believe it to indicate here: https://news.ycombinator.com/item?id=43339297
Instead of addressing the substance of the argument you focused on this introductory sentence: "I'd like to address your larger point which seems to be that all greenfield projects are necessarily best suited to low level languages."
Regardless of how narrowly or widely you want me to interpret your stance, my point is that the data you're using to form your opinion (rewrites from higher to lower level languages) does not support any variation of your argument. You "can't think of a time a high profile project written in a lower level representation got ported to a higher level language" because developers tend to be more hesitant about reaching for lower level languages (due to the higher barrier to entry), and therefore are less likely to misuse them in the wrong problem domain.
Making technical decisions based on hypothetical technologies that may solve your problems in "a year or so" is a gamble.
> And in the case of "reads data from a OS socket/file-descriptor and writes data to a OS socket/file-descriptor", which is the case for a significant number of applications including web servers, I'm starting to doubt that choosing a scripting language for that task, as I once advocated, is a good plan given what I am seeing.
Arguably Go is a scripting language designed for exactly that purpose.
I would not call Go a scripting language. Go programs are statically linked single binaries, not a textual representation that is loaded into an interpreter or VM. It has more in common with C than Bash. But to make sure we are clear (in case you want to dig in on calling Go a scripting language) I am talking about dynamic programming languages like Python, Ruby, JavaScript, PHP, Perl, etc. which generally do not compile to static binaries and instead load text files into an interpreter/VM. These dynamic scripted languages tend to have performance below static binaries (like Go, Rust, C/C++) and usually below byte code interpreted languages (like C# and Java).
1. As products mature, they may find useful scenarios involving runtime environments that don’t necessarily match the ones that were in mind back when the foundation was laid. If relevant parts are rewritten in a lower-level language like C or Rust, it becomes possible to reuse them across environments (in embedded land, in Web via WASM, etc.) without duplicate implementations while mostly preserving or even improving performance and unlocking new use cases and interesting integrations.
2. As products mature, they may find use cases that have drastically different performance requirements. TypeScript was not used for truly massive codebases, until it was, and then performance became a big issue.
Starting a product trying to get all of the above from the get go is rarely a good idea: a product that rots and has little adoption due to feature creep and lack of focus (with resulting bugs and/or slow progress) doesn’t stand a chance against a product that runs slower and in fewer environments but, crucially, 1) is released, 2) makes sound design decisions, and 3) functions sufficiently well for the purposes of its audience.
Whether LLMs are involved or not makes no meaningful difference: no matter how good your autocomplete is, other things equal the second instance still wins over the first—it still takes less time to reach the usefulness threshold and start gaining adoption. (And if you are making a religious argument about omniscient entities for which there is no meaningful difference between those two cases, which can instantly develop a bug-free product with infinite flexibility and perfect performance at whatever the level of abstraction required, coming any year, then you should double-check whether if they do arrive anyone would still be using them for this purpose. In a world where I, a hypothetical end user, can get X instantly conjured for me out of thin air by a genie, you, a hypothetical software developer, better have that genie conjure you some money lest your family goes hungry.)
Of course, LLMs may stay as "autocomplete" forever. Or for decades. But my intuition is telling me that in the next 2-3 years they are going to increase in capability, especially for coding, at a pace greater than the last 2 years. The evidence that I have (by actually using them) seems to point in that direction.
I'm perfectly capable of writing programs in Perl, Python, JavaScript, C++, PHP, Java. Each of those languages (and more actually) I have used professionally in the past. I am confident I could write a perfectly good app in Go, Rust, Elixir, C, Ruby, Swift, Scala, etc.
If you asked me 6 months ago "what would you choose to write a basic CRUD web app" I probably would have said TypeScript. What I am questioning now is: why? What would lead me to choose TypeScript? Do the reasons I would have chosen TypeScript continue to make sense today?
There are no genies here, only questioning of assumptions. And my new assumptions include the assumption that any coding I would do will involve a code assisting LLM. That opens up new possibilities for me. Given LLM assistance, why wouldn't I write my web app layer in Rust or Zig?
Your assumptions about the present and near future will guide your own decisions. If you don't share the same intuitions you will come to different conclusions.
Same reasons as with no LLM assistance. You would be choosing higher maintenance burden and slower development speed compared to your competitors, though. They will get it out faster, they will have fewer issues, and will be able to find people to support it more easily. Your product may run faster, but theirs will work and be out faster.
I show up and say "I have a C compiler". Does it matter at that point how good your assembly is? All of a sudden I can generate 10x the amount of assembly that you generate. And you are probably aghast at what crappy assembly my C compiler generates.
Now ask yourself: how often do you look at generated assembly?
Compilers don't care about writing maintainable assembly. They are a tool to generate assembly in high volumes. History has shown that people who use C compilers were able to get products to market faster compared to people who wrote using assembly.
So let's assume, for the sake of understanding my position, that LLMs will be like the compiler. I give it a high-level English description of the code I want and it generates a high volume of [programming language] as its output. My argument is that the programming language it outputs matters, and it would be better for it to output a language that compiles to low-level native binaries. In the same way I don't care about "maintainable assembly" coming out of a C compiler, I don't care about maintainable Python coming out of my LLM.
A well tested compiler is far more deterministic than an LLM, and can be largely treated as a black box because it won't randomly hallucinate output.
We have engineering practices that guard against humans making mistakes that break builds or production environments. It isn't like we are going to discard those practices. In fact, we'll double down on them. I would subject an LLM to a level of strict validation that any human engineer would find suffocating.
The reason we trust compilers as a black box is because we have created systems that allow us to do so. There is no reason I can see currently that we will be unable to do so for LLM output.
I might be wrong, time will tell. We're going to find out because some will try. And if it turns out to be as effective as C was compared to assembly then I want to be on that side of history as early as possible.
Exactly, which is why I would want humans and LLMs to write maintainable code, so that I can review and maintain it, which brings us back to the original question of which programming languages are the easiest to maintain...
I want maintainable systems you want maintainable code. We can just accept that difference. I believe maintainable systems can be achieved without focusing on code that humans find maintainable. In the future, I believe we will build systems on top of code primarily written by LLMs and the rubric of what constitutes good code will change accordingly.
edit: I would also add that your position is exactly the position of assembly programmers when C came around. They lamented the assembly the C compiler generated. "I want assembly I can read, understand and maintain" they demanded. They didn't get it.
You started off by comparing LLM output to compiler output, which I pointed out is a false equivalence because LLMs aren't as deterministic as compilers.
Then you switched to comparing LLMs to humans, which I'm fine with, but then LLMs must be expected to produce maintainable code just like humans.
Now you're going back to the original premise that LLM output is comparable to compiler output, thus completing the loop.
Perhaps it is impossible for you to imagine that LLMs can share some properties with compilers and other properties with humans? And that this specific blend of properties makes them unique? And that uniqueness means we have to take a nuanced approach to understanding their impact on designing and building systems?
So let's lay it out. LLMs are like compilers in that they take high-level instructions (in the form of English) and translate them into programming languages. Maybe "transpiler" would be a word you prefer? LLMs are like humans in that this translation of high-level instructions to programming languages is non-deterministic, and so it requires system-level controls to handle the imprecision.
I do not detect any conflict in these two ideas but perhaps you see things differently.
Yes, but determinism is the factor that allows me to treat compilers as a black box without verifying their output. LLMs do not share this specific property, which is why I have to verify their output, and easily verifiable software is what I call "maintainable".
Would you go back to writing assembly? Would you diligently work to make the compiler "more" deterministic. Would you engineer your systems around potential failures?
How do industries like the medical or aviation deal with imperfect humans? Are there lessons we can learn from those domains that may apply to writing code with non-deterministic LLMs?
I also just want to point out an irony here. I'm arguing in favor of languages like Go, Rust and Zig over the more traditional dynamic scripting languages like Python, PHP, Ruby and JavaScript. I almost can't believe I'm fighting the "unmaintainable" angle here. Do people really think a web server written in Go or Rust is unmaintainable? I'm defending my position as if they are, but come on. This is all a bit ridiculous.
We have a system in science for verifying shoddy human output, it's called peer review. And it's easier for your peers to review your code when it's maintainable. We're back in the loop.
Funny thing about this thread and black and white thinking. I feel a different kind of loop.
Things are not black and white. It will be less maintainable relatively speaking, proper tool for the job and all that. That’s why you will be left in the dust.
This sounds more like a "we're kinda stuck with Javascript here" situation. The team is making a compromise, can't have your cake and eat it too I guess.
Software never gets rewritten in a higher level language, but software is constantly replaced by alternatives. First example that comes to mind is Discord, an Electron app that immediately and permanently killed every other voice client on the market when it launched.
If we assume that coding assistants continue to improve as they have been and we also assume that they are able to generate lower level code on par with higher level code, then it seems the leverage shifts away from "easy to implement features" languages to "fast in most contexts" languages.
Only time will tell, of course. But I wonder if we will see a new wave of replacements from Electron based apps to LLM assisted native apps.
Do you mean voice clients like FaceTime, Zoom, Teams, and Slack?
Discord can run from a browser, making onboarding super easy. The installable app being in Electron makes for minimal (if any) difference between it and the website.
In summary, running in the web browser helps a lot, and Electron makes it very easy for them to keep the browser version first class.
As an added bonus, they can support Linux, Windows and macOS equally well.
I would say it helps: without Electron, serving all of the above with equal feature parity would have been too expensive or slow, and onboarding likely wouldn't have been as frictionless for all types of new users as it is.
What was that saying again? Premature optimisation is the root of all evil
A thread going into what Knuth meant by that quote that is usually shortened to "premature optimization is the root of all evil". Or, to rephrase it: don't tire yourself out climbing for the high fruit, but do not ignore the low-hanging fruit. But really I don't even see why "scripting languages" are the particular "high level" languages of choice. Compilers nowadays are good. No one is asking you to drop down to C or C++.
Mind you I'm sure there were similar attempts at a language with those goals, but they didn't have the backing of Google.
People are also highly unpredictable, so it is usually a matter of trial and error, very often their feedback may completely erase wide sets of assumptions you were building your product around.
It's borderline impossible to do on a mature product, but rewriting a mature product in something faster is not borderline impossible - it's just very hard.
Note that this doesn't apply if you are just programming something to an RFC where everything is predefined.
But the static languages have changed, a lot, for the better since then. I now find that when I'm greenfielding something, if I have even a clue how I want to structure it overall, that static languages end up being faster somewhere around a week into the development process. Dynamic languages are superficially easier to refactor, but the refactorings tend to take the form of creating functions that take more and more possible inputs and this corrodes the design over time. Static programs stay working the whole time, and I can easily transform the entire program to take some parameter differently or something and get assurance I'm not missing a code path.
I personally actively avoid dynamic languages for initial development now, for anything that is going to be over a week in size. The false economies are already biting by that point and it gets progressively and generally monotonically worse over time.
This comes from someone who was almost 100% dynamic scripting language in the first 15 years of my career. It's not from lack of understanding of dynamic scripting languages, used at scale.
And when you factor in LLMs being ridiculously good at scaffolding basic apps, the time to reach that turning point will continue to decrease. It takes me time to write out test harness boilerplate, or to make a nice dev/staging environment configuration. It is why many languages come with a `mylang create proj` command line tool to automate a basic project. But the custom scaffolding an LLM can provide will eventually beat any command line project creation tool we can imagine.
This is one of the driving realizations of my point. I've coded in a lot of dynamic languages and a lot of static languages and the distance between their developer experiences are shrinking drastically. I would expect a decent dynamic language expert to become productive in Go very quickly. Rust may be more difficult but again should be totally possible for any competent programmer. Then you add on top of that the fact they will be ramping up using an LLM that can explain the code they are looking at to them, that can provide suggestions on how to approach problems, that can actually write example code, etc.
And then there are all of the benefits of deploying statically compile binaries. Of managing memory layouts precisely. Of taking direct advantage of things like simd when appropriate.
The ergonomics of compiling your code for every combination of architecture and platform you plan to deploy to? It's not fun. I promise.
> my default assumption to use JS runtimes on the server
AWS Lambda has a minimum billing interval of 1ms. To do anything interesting you have to call other APIs which usually have a minimum latency of 5 to 30ms. You aren't buying much of anything in any scalable environment.
> there is decreasing utility in familiarity with a language to be productive.
I hope you aren't planning on making money from this code. Either way, have fun debugging that!
> the advantages of scripting languages are being eroded away.
As long as scripting languages have interfaces that let them access C libraries, either directly or through compiled modules, they will have strong advantages. Just having a REPL where you can test out ideas and check performance is massively powerful, and I hate not having it in any compiled project. Go has particularly bad ergonomics here: writing test cases is easy, but exploring ideas is not, due to its strictness down to even the code styling level.
I mean, on the one hand you are arguing for C FFI and on the other worrying about compiling for every combination of architecture. Those positions seem to be contradictory. Although I guess you're assuming that other people who write the C libraries for you did that work. I guess you better hope libraries exist for every possible performance issue you come across in your cross platform scripting library.
And why limit your runtime to AWS Lambda? That is a constraint you are placing on yourself. Nowadays with Docker you can have pretty much any Linux you want as an image. But why not just implement on top of cgroups from scratch? I guess we live in a world where that is unthinkable to many. Probably just better to pay AWS. But if you do use Docker, all of a sudden worrying about compiling for all of those architectures seems like less of an issue. And you can use ECS, so you can still pay AWS!
As for tooling issues, and there are definitely tooling issues with every language, it is a pick your poison kind of thing. I remember really liking Pascal tooling way back in the day. Smalltalk images have some nifty features. Who doesn't like Lisp, the language that taught us all REPL. Not sure I'd choose them for a project today though.
As LLMs get better, I just assume what constitutes "developer experience" is going to change. Will I even care about how unergonomic writing test cases in Go can be if I can just say "LLM, write a test that covers X, Y, Z case". As long as I can read the resultant output and verify it meets my expectations, I don't care how many characters of code or boilerplate that will force the LLM to generate.
edit: I misread your point about Go test cases but I'll leave my mistake standing. My overall point was the stuff I find annoying to do myself I can just farm out to the LLM. If the cost of writing an experiment is "LLM, give this a try" and if it works great and if not `git checkout`, then I will be ok with something less optimal.
The trade-off is that the team will have to start dealing with a lot of separate issues... How do tools like typescript-eslint talk to tsc now? How does this run in the playground? How are the binaries distributed? And they also lose the TS type system, which makes their Go version rely a little more on developer prowess.
This is an easy choice for one of the most fundamental tools underlying a whole ecosystem, maintained by Microsoft, and by one of the developers of C# itself, full-time.
Other businesses probably want to focus on actually making money by leading their domain and easing long-term maintenance.
Now we may have a new case: native but fast to add features using a code assist LLM.
If that new case is a true reflection of the near future (only time will tell) then it makes the case against the scripted solution. If (and only if) you could use a code assist LLM to match the feature efficiency of a scripting language while using a native language, it would seem reasonable to choose that as the starting point.
The adoption of AI Code Assistance I am sure will be driven similarly anecdotally, because who has the time or money to actually measure productivity techniques when you can just build a personal set of superstitions that work for you (personally) and sell it? Or put another way, what manager actually would spend money on basic science?
Just two years ago, a friend of mine described it as quite a hassle to get a RESTful backend running in Go. He got it working but it was more work than usual. Was he an outlier or have things been getting better in the framework department?
Go tends to have more boilerplate than other languages. So more typing work, less thinking work, less maintenance work once completed.
It won't happen tomorrow, but I am quite certain eventually it will be in a position where executables are generated directly, and we will enter into a new computation model.
Like you can nowadays still inspect the generated Assembly, and eventually fine tune it, our AI tools of the future might provide a similar approach.
Just like being familiar with assembly is extremely useful in certain circumstances (I spent a lot of time looking at assembly while working in the games industry. In fact, I was part of a small group that found a bug in the MS C++ compiler which we discovered by inspecting the output assembly) it will be extremely useful for programmers to be competent in the low level representations. At least for a good long while (probably years) we'll review almost all of the code generated before shipping it. But it won't be long until we just "trust" the output of the AI tooling.
And by "trust" I mean we will have engineering practices in place to validate the software before release. Unit testing, integration testing, functional testing, static analysis, etc.
At some point the volume of code generated by the AIs will be so much that it won't be practical to consider every single line of code. Just like the volume of assembly created by the C compiler is so much that for the most part we just assume it is fine. Only in special cases do we narrowly focus on a hot loop or some other part of the code.
This might happen slowly, over decades, or quickly over the next two years.
The JS runtimes are fine for the majority of use cases but the ecosystem is really the issue IMO.
> the benefit of using the same code on the server/client has rarely if ever been a significant contributor to project maintainability for me
I agree and now with OpenAPI this is even less of an argument.
However, it is important not to conflate "scripting language" and "dynamic language" and "interpreted". While there is some correlation there, it is not a necessary one.
Objective-C is an example of a fast AOT-compiled pretty dynamic language, and WebScript was an interpreted scripting language with pretty much identical syntax and semantics.[2]
What do I mean by fast? In my experience, Objective-C can be extremely fast [3], though it can also be used very much like a scripting language, and can also be used in ways that are as slow as or even slower than popular scripting languages. That range is very interesting.
So I don't actually think the tradeoff you describe between low-level unergonomic fast and high-level ergonomic slow is a necessary one, and one of the goals of Objective-S is to prove that point.[4]
So far, it's looking very good. Basically, the richer ways of connecting components appear to allow fairly simple "scripted" connections to achieve reasonably high performance [5]. However, I now have a very simple AOT compiler (no optimizations whatsoever!) and that gives another factor 2.6 [6].
Steve Vinoski wrote: "Does developer convenience really trump correctness, scalability, performance, separation of concerns, extensibility, and accidental complexity?"[7].
I am saying: how about we not have to choose?
And I'd much rather debug/modify semantically rich, high-level code that my LLM generated.
[1] https://blog.metaobject.com/2015/10/jitterdammerung.html
[2] https://blog.metaobject.com/2019/12/the-4-stages-of-objectiv...
[3] https://www.amazon.com/gp/product/0321842847/ref=as_li_tl?ie...
[5] https://blog.metaobject.com/2021/07/deleting-code-to-double-...
[6] https://dl.acm.org/doi/10.1145/3689492.3690052
[7] https://darkcoding.net/research/IEEE-Convenience_Over_Correc...
> And I'd much rather debug/modify semantically rich, high-level code that my LLM generated.
This I agree with. In fact, we may find that the natural fit for use with LLMs is a language not popular amongst humans. The main issue, in my opinion, is we end up with native code executables, complete control over memory layout and direct access to system calls. Those properties just happen to align with languages like Rust, Go, Zig, C/C++, etc. but they aren't limited to them.
1. https://github.com/dudykr/stc - Abandoned (https://github.com/swc-project/swc/issues/571#issuecomment-1...)
2. https://github.com/kaleidawave/ezno - In active development. Does not have the goal of 1:1 parity to tsc.
But as @keturakis pointed out (thanks!), Deno/Bun still rely on TSC, which I was not aware of.
> Note — Similar to other build tools, Bun does not typecheck the files. Use tsc (the official TypeScript CLI) if you're looking to catch static type errors.
A compiled managed language is a much better approach for userspace applications.
Pity that they didn't go with AOT compiled .NET, though.
Yeah. It seems to be unfashionable somewhat even within Microsoft.
(edit: it seems to be you and me and barely anyone else on HN advocating for C#)
If anyone should have picked C# it would be him.
But I think it's also an indication that Typescript may be bigger and more important for Microsoft than C#/.NET is at this time. It's definitely much more used than C# is according to this non-representative survey of Stack Overflow (https://survey.stackoverflow.co/2024/technology).
I happened to be doing a lot of C# and .NET dev when all this transition was happening, and it was very cool to be able to run .NET in Linux. C# is a powerful language with great and constantly evolving ideas in it.
But then all the stuff between the runtimes, API surfaces, Core vs Framework, etc all got extremely confusing and off-putting. It was necessary to bring all these ecosystems together, but I wonder if that kept people away for a bit? Not sure.
This is kind of weird, given the team.
OS kernels, firmware, GPGPU,....
If it is the ML-inspired type system, there are plenty of options among compiled managed languages; true, Go isn't really in that camp, but whatever.
There are also Swift, F# (Native AOT), Scala (Native, GraalVM, OpenJ9).
Leaking memory is sometimes not a huge issue. Missile allocation is real. Undefined behaviour, seg faults, data races, etc from edge cases slow down development.
The promise of rust isn't that it's super fast to learn but once you have you never deal with a swath of issues ever again.
And that's speaking from a deficit. Rust is an excellent language to do language development. It has arguably the best tooling for it in the ecosystem in my opinion and a vibrant community for it. Some of the most recent languages have foundations in rust. That is likely to continue going forward.
If it was a fresh compiler then the choice would be more difficult.
I was trying to push .NET as our possible language for somewhat high-performance executables. Seeing this means I'll stop trying to advocate for it, if even this team doesn't believe in it.
I like when Microsoft doesn't pretend that their technologies are the right answer for every problem.
But, of course, that is not unusual. There is no language in existence that is best suited to every project out there.
But I just said my point is not about performance at all! It is about the whole package. Performance of c# and go are both enough for my usecase, same for java and c obviously. They just told us that they don't think the whole package makes sense, and disowned the AOT compilation.
Which made me naturally think your point was, indeed, about performance. Although as it appears to be, I'm wrong, so it's fair enough.
This seems super petty to me. Like, if at the end of the day you get a binary that works on your OS and doesn’t require a runtime, why should you “love” that they picked one language over another? It’s exactly the same outcome for you as a user.
I mean, if you wanted to contribute to the project and you knew go better than rust, that would make sense. But sounds like you just don’t like rust because of… reasons, and you’re just glad to see rust “fail” for their use case.
ah, answers below: https://news.ycombinator.com/item?id=43333296
Not saying that in a judgemental way, I'm just genuinely surprised. What does this say about what Hejlsberg thinks of C# at the moment? I would assume one reason they don't pick C# is because it's deeply unpopular in the open source world. If Microsoft was so successful in making Typescript popular for open source work, why can't they do it for C#?
I have not opted to use C# for anything significant in the past decade or so. I am not 100% sure why, but there's always been something I'd rather use. Whether that's Go, Rust, Ruby or Haskell. I always enjoyed working in C#, I think it's a well designed and powerful language even if it never made the top of my list recently. I never considered that there might be something so fundamentally wrong with it that not even Hejlsberg himself would use it to build a Typescript compiler.
What's wrong with C#?
- C# is bytecode-first, Go targets native code. While C# does have AOT capabilities nowadays this is not as mature as Go's and not all platforms support it. Go also has somewhat better control over data layout. They wanted to get as low-level as possible while still having garbage collection.
- This is meant to be something of a 1:1 port rather than a rewrite, and the old code uses plain functions and data structures without an OOP style. This suits Go well while a C# port would have required more restructuring.
I'm not sure what's going on, I guess he's just not involved with the runtime side of .NET at all to actually know where the capability sits circa 2024/2025. But really, it's a terrible situation to be in. Especially just how worse langdev UX in Go is compared to C#, F# or Rust. No one would've batted an eye if either of those was used.
Can you explain why the DX in Go is "worse"? I've seen the exact opposite during my professional work.
> While C# does have AOT capabilities nowadays this is not as mature as Go's and not all platforms support it
https://learn.microsoft.com/en-us/dotnet/core/deploying/nati...
Only Android is missing from that list (marked as "Experimental"). We could argue about maturity but this is a bit subjective.
> Go also has somewhat better control over data layout
How? C# supports structs, ref structs (stack allocated only structures), explicit stack allocation (`stackalloc`), explicit struct field layouts through annotations, control over method local variable initialization, control over inlining, etc. Hell, C# even supports a somewhat limited version of borrow checking through the `scoped` keyword.
> This is meant to be something of a 1:1 port rather than a rewrite, and the old code uses plain functions and data structures without an OOP style.
C# has been consistently moving into that direction by taking more and more inspiration from F#.
The only compelling reason would be extensive use of structural typing, which is present in TS and Go but not in C#.
That's sort of the problem with C#. It couples the type (struct vs class) with allocation. C# started life by copying 1990s Java "everything-is-a-reference". So it's in a weird place where things were bolted on later to give more control, but it still needs to support the all-objects-are-refs style. C# is just not ergonomic if you need to care about data layout in memory.
Go uses a C-like model. Everything is a value type. Real pointers are in the language. Now you can write a function that inputs pointers and does not care whether they point to stack, heap, or static area. That function can be used for all 3 types, no fuss.
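To illustrate the point, here is a minimal Go sketch: the same function accepts a pointer to a local variable, a heap allocation, or a package-level variable, with no distinction at the call site or in the signature.

```go
package main

import "fmt"

// increment works on any *int, regardless of whether the pointee
// lives on the stack, the heap, or in static storage.
func increment(p *int) {
	*p++
}

var global = 10 // static storage

func main() {
	local := 1 // stack (unless escape analysis decides otherwise)
	heap := new(int)
	*heap = 100 // heap allocation

	increment(&local)
	increment(heap)
	increment(&global)

	fmt.Println(local, *heap, global) // 2 101 11
}
```

(The compiler's escape analysis may move `local` to the heap behind the scenes, but that is invisible to the code, which is exactly the ergonomic point being made.)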
Agree. Where things are allocated is a consumer decision.
> C# is just not ergonomic if you need to care about data layout in memory
I disagree. I work on a public high performance C# code and I don't usually face issues when dealing with memory allocations and data layout. You can perfectly use structs everywhere (value types) and pass references when needed (`ref`).
> Now you can write a function that inputs pointers and does not care whether they point to stack, heap, or static area.
You can do this perfectly fine in C#, it might not be what some folks consider "idiomatic OOP" but I could not care less about them.
The point is C++ sucks dude. There is no way that you can reasonably think that bolting a GC on to C++ is going to be a pleasurable experience. This whole conversation started with _language ergonomics_. I don’t care that it’ll save 0.5 milliseconds. I’d rather dig holes than write C++.
You’re absolutely delusional if you think C++ is enjoyable compared to any managed language or if you think AI is capable of replacing anything.
You’ve moved this conversation extremely far off topic and I won’t be replying again.
Cheers dude. Good luck with your chat bots and CVE’s from your raw pointers.
NativeAOT story itself is also interesting - I noted it in a sibling comment but .NET has much better base binary size and binary size scalability through stronger reachability analysis, metadata compression and pointer-rich binary sections dehydration at a small startup cost (it's still in the same ballpark). The compiler output is also better and so is whole program view driven devirtualization, something Go does not have. In the last 4 years, .NET's performance has improved more than Go's in the last 8. It is really good at text processing at both low and high level (only losing to Rust).
The most important part here is that TypeScript at Microsoft is a "first-party" customer. This means if they need additional compiler accommodations to improve their project experience from .NET, they could just raise it and they will be treated with priority.
This decision is technically and politically unsound at multiple levels at once. For example, they will need good WASM support. .NET's existing WASM support is considered "decent" and even that one is far from stellar, yet considered ahead of the Go one. All they needed was to allocate additional funding for the ongoing already working NativeAOT-LLVM-WASM prototype to very quickly get the full support of the target they needed. But alas.
Maybe it's time to stop eating everything Microsoft sales folks and evangelists spoon-feed you, and wake up to the fact that people paid by Microsoft to bang the drum for Microsoft products, telling you that .NET and C# are oh so good and the best at everything, may not actually be that credible.
Look at the hard facts. Every single product which Microsoft has built that actually matters (e.g. all their Azure CNCF stuff, Dapr, now this) is using non Microsoft languages and technologies.
You won't see Blazor being used by Microsoft or the 73rd reinvention of ASP.NET Core MVC Minimal APIs Razor Pages Hocus Pocus WCF XAML Enterprise (TM) for anything mission critical.
So that could be a fundamental reason why.
It's not lost on me that this is a widely used aphorism. The problem is that it's not true in any way shape or form.
According to whom?
--https://github.com/microsoft/typescript-go/discussions/411
I haven't looked at the tsc codebase. I do currently use Golang at my job and have used TypeScript at a previous job several years ago.
I'm surprised to hear that idiomatic Golang resembles the existing coding patterns of the tsc codebase. I've never felt that idiomatic code in Golang resembled idiomatic code in TypeScript. Notably, sum types are commonly called out as something especially useful in writing compilers, and when I've wanted them in Golang I've struggled to replace them.
Is there something special about the existing tsc codebase, or does the statement about idiomatic Golang resembling the existing codebase something you could say about most TypeScript codebases?
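On the sum-type struggle: the usual Go workaround is a "sealed" interface with an unexported marker method. A minimal sketch (the type names here are invented for illustration, not taken from the tsc codebase):

```go
package main

import "fmt"

// A sealed-interface emulation of a sum type: only types in this
// package can implement Node, because isNode is unexported.
type Node interface{ isNode() }

type NumLit struct{ Value float64 }
type StrLit struct{ Value string }
type BinExpr struct {
	Op          string
	Left, Right Node
}

func (NumLit) isNode()  {}
func (StrLit) isNode()  {}
func (BinExpr) isNode() {}

// describe switches over the variants. Unlike a true sum type,
// the compiler does not check the switch for exhaustiveness.
func describe(n Node) string {
	switch v := n.(type) {
	case NumLit:
		return fmt.Sprintf("number %v", v.Value)
	case StrLit:
		return fmt.Sprintf("string %q", v.Value)
	case BinExpr:
		return fmt.Sprintf("(%s %s %s)", describe(v.Left), v.Op, describe(v.Right))
	default:
		return "unknown" // silently reachable if a new variant is added
	}
}

func main() {
	e := BinExpr{Op: "+", Left: NumLit{1}, Right: NumLit{2}}
	fmt.Println(describe(e)) // (number 1 + number 2)
}
```

The catch is that default branch: adding a new variant falls through silently at runtime, where a real sum type would fail to compile.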
To be fair, they didn't actually say that. What they said was that idiomatic Go resembles their existing patterns. I'd imagine what they mean by that is that a port from their existing patterns to Go is much closer to a mechanical 1:1 process than a port to Rust or C#. Rust is the obvious choice for a fully greenfield implementation, but reorganizing around idiomatic Rust patterns would be much harder for most programs that are not already written in a compatible style. e.g. For Rust programs, the precise ownership and transfer of memory needs to be modelled, whereas Go and JS are both GC'd and don't require this.
For a codebase that relies heavily on exception handling, I can imagine a 1:1 port would require more thought, but compilers generally need to have pretty good error recovery so I wouldn't be surprised if tsc has bespoke error handling patterns that defers error handling and passes around errors as values a lot; that would map pretty well to Go.
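That errors-as-values pattern is natural in Go, since errors are ordinary values. A hedged sketch of what deferred, accumulated diagnostics might look like (the `Checker` and `Diagnostic` names are invented for illustration, not tsc's actual API):

```go
package main

import "fmt"

// Diagnostic is an error carried as a plain value rather than
// thrown. A checker accumulates diagnostics and keeps going,
// which is the usual error-recovery shape for compilers.
type Diagnostic struct {
	Pos     int
	Message string
}

type Checker struct {
	diags []Diagnostic
}

func (c *Checker) errorf(pos int, format string, args ...any) {
	c.diags = append(c.diags, Diagnostic{pos, fmt.Sprintf(format, args...)})
}

func (c *Checker) checkAssignment(pos int, want, got string) {
	if want != got {
		c.errorf(pos, "cannot assign %s to %s", got, want)
	}
}

func main() {
	var c Checker
	c.checkAssignment(10, "string", "number")
	c.checkAssignment(20, "number", "number") // ok, no diagnostic
	for _, d := range c.diags {
		fmt.Printf("pos %d: %s\n", d.Pos, d.Message)
	}
}
```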
Most TypeScript projects are very far away from compiler code, so that this wouldn't resemble typical TypeScript isn't too surprising. Compilers written in Go also don't tend to resemble typical Go either, in fairness.
TSC doesn't use many union types, it's mostly OOP-ish down-casting or chains of if-statements.
One reason for this is I think performance; most objects are tagged by bitsets in order to pack more info about the object without needing additional allocations. But TypeScript can't really (ergonomically) represent this in the type system, so that means you don't get any real useful unions.
A lot of the objects are also secretly mutable (for caching/performance) which can make precise union types not very useful, since they can be easily invalidated by those mutations.
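The bitset tagging described above translates to Go directly; a small sketch, with made-up flag names rather than tsc's actual ones:

```go
package main

import "fmt"

// NodeFlags packs several boolean facts about an AST node into a
// single word, avoiding extra fields or allocations per fact.
type NodeFlags uint32

const (
	FlagOptional NodeFlags = 1 << iota
	FlagExported
	FlagAsync
	FlagDeprecated
)

type Node struct {
	Kind  string
	Flags NodeFlags
}

func (n *Node) Has(f NodeFlags) bool { return n.Flags&f != 0 }

// Set mutates in place: cheap, but invisible to the type system,
// which is the caching/invalidation hazard mentioned above.
func (n *Node) Set(f NodeFlags) { n.Flags |= f }

func main() {
	fn := Node{Kind: "FunctionDeclaration"}
	fn.Set(FlagExported | FlagAsync)
	fmt.Println(fn.Has(FlagAsync), fn.Has(FlagDeprecated)) // true false
}
```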
though looking at that flood of loose ifs+returns, i kinda wish they used rust :)
Why _not_ use Go?
Because of its truly primitive type system, and because Microsoft already has a much better language in C#, which is both faster and can be more high-level or more low-level depending on your needs.
I am a complete nobody to argue with the likes of Hejlsberg, but it feels like AOT performance problems could be solved if tsc needed it, and tsc adoption of C# would also help push C#/.NET adoption. Once again, Microsoft proves that it's a bunch of unrelated companies at odds with each other.
That is the main reason they gave for why they chose Go. The parent asked "Why _not_ use Go?"
But there is probably some truth in what you say as well. Footguns are no doubt refreshing after being engrossed in Typescript (and C#) for decades. At some point you start to notice that your tests end up covering all the same cases as your advanced types, and you begin question why you are putting in so much work repeating yourself, which ultimately sees you want to look for better.
Which, I suppose, is why industry itself keeps ending up taking that to the extreme, cycling between static and dynamic typing over and over again.
At the extreme end of the spectrum that starts to become true. But the languages that fill that space are also unusable beyond very narrow tasks. This truth is not particularly relevant to what is seen in practice.
In the realm of languages people actually use on a normal basis, with their half-assed type systems, a few more advanced concepts sprinkled in here and there really don't do anything to reduce the need for testing as you still have to test around all the many other holes in the type system, which ends up incidentally covering those other cases as well.
In practice, the primary benefit of the type system in these real-world languages is as it relates to things like refactoring. That is incredibly powerful and not overlapped by tests. However, the returns are diminishing. As you get into increasingly advanced type concepts, there is less need/ability to refactor on those touch points.
Most seem to agree that a complete type system is way too much (especially for general purpose programming), and no type system is too little; that a half-assed type system is the right balance. However, exactly how much half-assery is the right amount of half-assery is where the debate begins. I posit that those who go in deep with thinking less half-assery is the way eventually come to appreciate more half-assery.
> I don't think the "industry" is a person
Nobody does.
See, I fundamentally disagree that those languages are "unusable beyond very narrow tasks", because I never stated that only a complete and absolutely proven type system can provide those proofs. In fact, even a relatively mid-tier (a little above average) type system like C#'s can already provide enormous benefits in this regard.

When you test something like raw JavaScript, you end up testing things that are about the shape of your objects; in C# you don't have to do this, because the type system dictates the shape. You also have to be very careful around possibly null objects and values, which in a language with "proper" nullable types (and support for them in the type system and static checkers) like C# can be vastly reduced (if you use the feature, naturally). C# is also a language that "brings the types into runtime" through reflection, so you will not see things that exist purely to assert shapes, like 'zod' or 'pydantic', in C# or other mid-tier typed languages.

C#'s type system also proves many things about the safety of your code. For example, you basically never need to test your usage of Spans; the type system and static analysis already rule out most problematic usages. You also never need to test whether your int is actually a float because some random place in your code set it so (as in JS), nor test against many other basic assumptions that even an extremely basic type system (even Go's) would give you.
This is to say that, basically, this doesn't hold true for relatively simple type systems. I'm also yet to see it holding true for more advanced ones. For example, Rust is a reasonably well-used language for a lot of low-level projects, and I never saw anyone testing (well-bounded, safe) Rust code for the basic shapes of types, nor for the conclusions the type system provides: testing whether the type system really caught the ownership transfer happening here, whether it is really safe to assume there's only one mutable reference to that object after you called that method, whether the destructor really runs at the end of the function's scope, or whether an overly complex associated-type result is actually what you meant it to be. (In fact, if you ever use those complicated types, it is precisely to get strong compile-time guarantees that a test could not cover entirely, and that you would not write unit tests for in the first place.)

So I don't think it is true that you need a powerful type system to see a reduction in the tests you would need in a completely dynamically typed language, nor that once you have really powerful type constructs you "start to notice that your tests end up covering all the same cases as your advanced types". I also don't think you need to go to the extreme of the spectrum to see those benefits; they appear gradually and increase gradually as you move toward that end (where you find genuinely uncommon things like dependent types, refinement types, or effect systems).
I also certainly don't agree that it matters whether "most people" think or don't think about powerful type systems and the languages using them; it matters more that the right people are using them, people who want the benefits, than the everyday masses (though that is another overly complex discussion). And while I can understand your feelings toward the low end of half-assed type systems, and even agree to a reasonable degree (with my own caveats), I don't think glorifying mediocre type systems is the way to go (as many people do, for some terrifying reason). It is enough to recognize that a half-assed type system usually gets the job done, and that's completely fine; it may even be faster to write with. That is different from arguing we should "pursue primitive type systems" just because we can do things well in them. Maybe I'm digressing too much; it's hard to respond to this comment satisfactorily.
>> I don't think the "industry" is a person
> Nobody does.
Yeah, this was not a very productive point of mine, sorry.
Then why do you think nobody uses them (outside of certain narrow tasks)? It is hard to deny the results.
The reality is that they are intractable. For the vast majority of programming problems, testing is good enough and far, far more practical. There is a very good reason why the languages people normally use (yes, including C# and Rust) prefer testing over types.
> See, when you test for something like raw JavaScript, you end up testing things that are even about the shape of your objects
Incidentally, but not explicitly. You also end up incidentally testing things like the shape even in languages that provide strict guarantees in the type system. That's the nature of testing.
I do agree that testing is not well understood by a lot of developers. There are for sure developers who think that explicitly testing for, say, the shape of data is a test that needs to be written. A lot of developers straight up don't know what makes for a useful test. We'd do well to help them better understand testing, but I'm not sure "don't even think about it, you've got a half-assed type system to lean on!" get us there. Quite the opposite.
> it matters more that the right people are using them
Well, they're not. And they are not going to without some fundamental breakthrough that changes the tractability of using languages with an advanced (on the full spectrum, not relative to Go) type system. The tradeoffs just aren't worth it in nearly every case. So we're stuck with half-assed type systems and relying on testing, for better or worse. Yes, that includes C# and Rust.
> I don't think glorifying mediocre type systems is the way to go (like many people usually do, for some terrifying reason).
Does it matter? Engineers don't make decisions based on some random emotional plea on HN. A keyboard cowboy might be swayed in the wrong direction by such, but then this boils down to being effectively equivalent to "If we don't talk about sex maybe teenage pregnancy will cease." Is that really the angle you want to go with?
> The reality is that they are intractable. For the vast majority of programming problems, testing is good enough and far, far more practical. There is a very good reason why the languages people normally use (yes, including C# and Rust) prefer testing over types.
Deny what results? Do you have some kind of formal demonstration that they are impossible to use outside of those "certain narrow tasks" (whatever those are)? Or do you have proof that NOBODY uses them for more than those "certain narrow tasks"? Otherwise this is more "I feel like it" than something I would even need to justify deeply.
Also, with Rust this is certainly false. Most people I've seen using it (myself included) don't overly test everything beyond the more complex behavior (which types can hardly prove correct), but the language eliminates the need for a whole suite of smaller tests that would be necessary in less powerful languages. It is literally regarded as one of the languages where "if it compiles, it works" (or "generally works") for a reason.
> Incidentally, but not explicitly. You also end up incidentally testing things like the shape even in languages that provide strict guarantees in the type system. That's the nature of testing.
Now I want some proof to it. Make an example test that "incidentally tests things like the shape" in C#, please. I've seen a good bunch of codebases in C# and I'm pretty sure I never saw something even remotely like this.
> I do agree that testing is not well understood by a lot of developers. There are for sure developers who think that explicitly testing for, say, the shape of data is a test that needs to be written. A lot of developers straight up don't know what makes for a useful test. We'd do well to help them better understand testing, but I'm not sure "don't even think about it, you've got a half-assed type system to lean on!" get us there. Quite the opposite.
Now this point is getting lost; you changed from:
> At some point you start to notice that your tests end up covering all the same cases as your advanced types, and you begin question why you are putting in so much work repeating yourself, which ultimately sees you want to look for better.
To "I know better than a lot of developers how to test", which don't make any sense to me. You either has the same baseline testing knowledge of this "lot of developers", and hence reach similar conclusions in regards to testing (what I quoted), or you have a better understanding of it than them (and your conclusions are merely based on your own perception of testing). I don't think those points are free to take, you would need to justify this a little bit more, and I'm sure """don't even think about it, you've got a half-assed type system to lean on!""" was not the core of my point, nor a faithful representation of what I said.
> Well, they're not. And they are not going to without some fundamental breakthrough that changes the tractability of using languages with an advanced (on the full spectrum, not relative to Go) type system. The tradeoffs just aren't worth it in nearly every case. So we're stuck with half-assed type systems and relying on testing, for better or worse. Yes, that includes C# and Rust.
I would not call Rust's type system 'half-assed' tho; it is very comparable to a bunch of ML languages. It is really a sophisticated type system, with HM type inference, generic associated types, traits and more powerful things. Comparing it to C# would be unreasonable. I may also be a little bit mean to C#: it has a "mid-tier, but sufficiently good type system" for many purposes. My main problems with it are the type inference (and the lack of some basic features), but it has had generics since early versions, and it has interfaces, classes, subtyping, recursive type constraints, extension methods, deterministic disposal via using, scoped local definitions and a bunch of small useful resources. It is surely mid-tier in many aspects, but not something trivial, and I don't think you can put them all in the same basket. Either way, I also don't think you got my point: I said it matters more that the right people are using them, precisely because those are the people who would make good use of those type systems. As a very simple example (and I consider Rust's type system a very powerful one in this context), it was interesting to see someone like Asahi Lina say that the Rust language and its features were useful for making a GPU driver, that she experienced fewer of the problems common in C (a language with a much smaller and simpler type system) and that it was having positive effects. Surely, most software is not written in Rust, but the software that is -- and this is what matters -- is being developed by the right people, who use it right. This is another point, as I stated earlier, but you responded to it, so I'm exploring the surroundings of what I meant here a bit more.
> Does it matter? Engineers don't make decisions based on some random emotional plea on HN. A keyboard cowboy might be swayed in the wrong direction by such, but then this boils down to being effectively equivalent to "If we don't talk about sex maybe teenage pregnancy will cease." Is that really the angle you want to go with?
It absolutely does matter. And I do believe we should talk less about mundane things, and that glorifying bad ways of living can have a terrible influence on teenage brains (and even on adults in many cases), but that is also another discussion. My point is that people don't argue emotionally as if they were arguing emotionally; they argue emotionally as if they were right. They are (generally) not saying "well, see, I really love Go and its simplicity, so because of my personal preferences I'm saying that other languages are bad"; they are saying "see, as we obviously value simplicity, and Go is simpler than language X, Go is better than language X" (which is the shape of the arguments I usually see -- not verbatim, but in the implications and the style of pointing things out), and this is much more dangerous than any "purely emotional plea". (Also, most software engineers are not masters of argumentation who can dissect something like this and find all the intricate problems and possible fallacies behind it; they will believe whatever is most believable at the moment, and that's generally it.)
The results of software written in languages with robust type systems. Having mathematical guarantees that your program is correct is a good place to be, but climbing the mountain to get there is, well...
> Do you have some kind of formal demonstration that they are impossible to use outside of those "certain narrow tasks" (unknown)? Or do you have proof that NOBODY use them for more than those "certain narrow tasks"?
See, now you're starting to understand exactly why these languages are intractable. But perhaps we can dumb things down to my mere mortal level: Why don't you use those languages for regular programming tasks?
> To "I know better than a lot of developers how to test"
How, exactly, did you reach that conclusion? You don't have to be good at something to recognize when people are bad at something.
> but it eliminates the need for a whole suit of smaller tests that would be necessary in less powerful languages
What kind of small tests are you envisioning?
Furthermore, even if we grant that statement as being true for the sake of discussion, there is still the problem that the primary intent of tests is to offer documentation. That the documentation is self-validating is the reason you're not writing it in Microsoft Word instead, but that is really a secondary benefit.
If you defer to the type system then you're moving some, but not all, of the documentation into the type system, fragmenting the information. Is that fair to other developers? In reality, you're going to want to write the tests anyway for the sake of consistency and completeness. A programmer needs to deliver something that works, of course, but also something that does a good job of communicating to other developers what is going on. Programming is decidedly not a solo activity.
Sure, in an ideal world you could document the entire program in the type system, but the languages people normally use simply don't have what it takes to enable that. They lean on testing instead. Worse is better, I suppose.
> Make an example test that "incidentally tests things like the shape" in C#, please.
We should probably talk about my fee structure first. I don't want you coming back crying that it was too much when I send you the bill for the work performed.
That said, under my professional duty to act in your interest, I expect you would be far better served thinking about how you might go about avoiding testing the shape given a random useful test. You don't really need my services here.
> I would not call Rust's type system 'half-assed' tho
It doesn't even have proper support for something basic like value constraints, let alone more advanced concepts. It has more features than Go, but I fail to see how that offers transcendence beyond half-assery. I'll grant you that it is less half-assed, just as I did earlier.
As for this: as I said, I really like having the type system as my documentation, and I don't know exactly what you mean by "is that fair to other developers". This is not only fair but very useful; in fact this is WHY people really like well-typed libraries in the TypeScript world. They make using the thing MUCH easier and more guided, to the point that you barely need to read the actual written documentation, as opposed to just exploring in the code editor. As a very good example, I literally make many useful libraries for my teammates and they love using them and the convenience they provide, and I find joy when they say "oh, that's amazing, your library can do this exact thing I was needing to do because you thought about it before". This makes their life easier, this makes my life easier, and even when they use the library a bit wrong it still works, because I made everything flow well and provided easier paths for gradual usage when needed. It is not only fair, it is good if you know how to do it well, and I rarely need to write tests for my own code beyond what I write for my libraries (if the libraries are right, the code that uses them has far fewer easy-to-make flaws, and thus I reduce the number of tests needed at the end of the day). As for whether programming is a "solo activity", that depends on many things. I certainly have hundreds of solo projects I love working on, and I also have projects developed with other people, and I love those as well. Programming for me is a form of expression, an art and a job at the same time.
> Sure, in an ideal world you could document the entire program in the type system, but the languages people normally use simply don't have what it takes to enable that. They lean on testing instead. Worse is better, I suppose.
As I said before, Rust does have that, and many modern languages have more and more of it. I understand that the world mostly uses less powerful languages, but this is not their fault; most languages have dozens of legacy projects behind them and it is really hard to let go (I mean, there are important programs written in COBOL nowadays, and I assume the language is even worse than your "worse is better", but people still need to use it). I'm not advocating abandoning those projects, nor saying that languages with worse type systems are terrible, but you should simply not say "it is better" just because people need to use it for whatever reasons, nor glorify their mediocrity. Mediocrity should be improved upon (and that's why even Go, a "simple language", is still gaining features from time to time; even Go is not a crystalized stone of specific directives that will never change, and some day it will have more and more features as time moves on; all languages are slowly evolving, even COBOL itself, so if being "worse" is a goal, I think most of them are not following it).
> We should probably talk about my fee structure first. I don't want you coming back crying that it was too much when I send you the bill for the work performed.
> That said, under my professional duty to act your interest, I expect you would far better served thinking about how you might go about avoiding testing the shape given a random useful test. You don't really need my services here.
Oh, sorry, I thought this was a discussion where people were really trying to reach the truth, not some sort of "pay me if you want to see my point" kind of thing. I'm more of a "talk is cheap, show me the code" guy, and I will certainly not pay someone to carry their own onus probandi. If this is how things will be, I think further discussion with you would be pointless.
> It doesn't even have proper support for something basic like value constraints, let alone more advanced concepts. It has more features than Go, but I fail to see how that offers transcendence beyond half-assery. I'll grant you that it is less half-assed, just as I did earlier.
And this is obviously an extreme form of exaggeration. I literally coded a basic working numeric system in Rust's type system, with mathematical operations, just for fun (and there are crates that do that); if that doesn't imply the language has a very powerful type system, I don't think anything would. Obviously, I'm not saying that Rust has THE MOST powerful type system -- I never once implied that -- but it is not "half-assery" in any way, and it is many times above what Go is able to do. It's not only "more features"; it is fundamentally more open to changes and advancements than Go is.
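Since this thread is about TypeScript, it's worth noting that the same kind of type-level numerics can be sketched there too. This is a toy illustration, not the Rust crates mentioned above; `Tuple` and `Add` are made-up helper names:

```typescript
// Type-level addition via tuple lengths: Tuple<N> builds an N-element tuple,
// and Add concatenates two tuples and reads the resulting length. All of the
// arithmetic happens in the type checker, before any code runs.
type Tuple<N extends number, T extends unknown[] = []> =
  T["length"] extends N ? T : Tuple<N, [...T, unknown]>;

type Add<A extends number, B extends number> =
  [...Tuple<A>, ...Tuple<B>]["length"] & number;

type Five = Add<2, 3>; // the literal type 5

// Only the literal 5 is assignable here; 4 or 6 would be compile errors.
const check: Five = 5;
console.log(check);
```

This is of course far clumsier than doing it in a type system designed for it, which is part of the point being argued.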
--------------- (part 2)
Because you didn't bother to read what I wrote, again, that's why. I suggested it is not fair to fragment the information. If you want to duplicate the information, go nuts. But that rounds us back to the very beginning where we opened with the topic of growing tired of repeating yourself...
> I thought this was a discussion where people was really trying to reach the truth, not some sort of "pay me if you want to see my point" kind of thing.
Yes, it is a discussion, not a make work project. If you want to deliver a point in that discussion, just do it. No need for stupid games.
> I generally don't care that much about them, because I don't find the need for them most of the time
Exactly. Same reason why nobody uses them. [I know, I know, you think this needs to be proven. But it really doesn't. That is silly.]
However, this means that you also recognize that there is a line where more typing is not worth it. So, where, exactly is that line? You say "here", but then I'll say "but no, you also need this". You'll say "that is really not that important", but I'll say "no, it is!" We could go on like that forever.
Eventually a sane person will arrive and simply say: "It depends." And maybe someday you too will understand that statement.
> And this is obviously an extreme form of exaggeration.
It is not. I will grant you that it is the lowest-hanging fruit for testing, so in practice it is probably not worth the effort, but it is a great indicator of how the type system is half-assed. If it truly believed in not half-assing it, value constraints would be there, as they are in languages with robust type systems.
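For what it's worth, the "value constraints" being discussed (e.g. "a number between 0 and 100") aren't first-class in TypeScript either; a common workaround is a branded type whose only constructor does the check at runtime. A sketch, with `Percent` as an invented example:

```typescript
// A branded type: structurally a number, but the phantom __brand field means
// plain numbers are not assignable to it without going through percent().
type Percent = number & { readonly __brand: "Percent" };

function percent(n: number): Percent {
  if (n < 0 || n > 100) throw new RangeError(`${n} is not a valid percent`);
  return n as Percent; // assertion is safe: we just validated the constraint
}

const p = percent(42);       // ok at runtime, typed as Percent
// const q: Percent = 150;   // rejected by the compiler: number is not Percent
console.log(p);
```

The check still happens at runtime, so this is exactly the kind of thing a refinement or dependent type system would instead prove statically.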
> For starters, things like "is the shape of this data correct"
When would a test like that ever be necessary? If the shape of your data is wrong somehow, the "documentation" tests won't be able to succeed either, so you implicitly find out that the shape is wrong anyway. There is no need to repeat yourself here. Not only is there no need for repetition, worse, tests like that usually end up making the test suite brittle and hard to manage.
> First, I don't think this is true [at all].
I gathered. This seems to be the source of contention around the testing topic when we cannot even agree what testing is. From your vantage point I can understand how you are unable to recognize the overlap. But it remains that if you write useful tests, you implicitly also end up testing what the type system covers.
The type system is still incredibly beneficial for other reasons, of course. To a point. But, again, the returns are diminishing.
It is harder, obviously, but it still tends to be a matter of understanding most of the time. But I can understand what you mean now: you are not saying it is "impossible" to do so, but that it is very hard -- harder than using testing (even with a lower guarantee level). If that is the case I can partially buy it, but then your point would not be as strong. I mean, we would need to make a language with a good type system that can prove things reasonably well and not be that hard to use. This is more of a call to action than a question of impossibility.
> See, now you're starting to understand exactly why these languages are intractable. But perhaps we can dumb things down to my mere mortal level: Why don't you use those languages for regular programming tasks?
Following my previous response: I generally don't care that much about them, because I don't find the need for them most of the time (this is part of why I said that mostly the right people matter, instead of everyone), but also because I find many of them designed in a way that is not ergonomic enough for me. That is more a design problem (which most languages have) than anything else. But also, for my everyday programming tasks I generally prefer languages that have at least powerful enough type systems -- Rust, Swift, F#, C# (because it is at the higher end of mid-tier), or even Kotlin (which I like very much; it doesn't have everything I want, but it is closer to Swift and has better compiler tooling) -- over languages with weaker type systems and resources (like Go or C). In this sense I pretty much live up to my own standards: I write tests for the things that matter and I use the type systems of those languages to prove things about my code. It works very well, and I very rarely experience problems with it; it is really satisfying for me, but I get that not everyone likes this way of coding.
> How, exactly, did you reach that conclusion? You don't have to be good at something to recognize when people are bad at something.
Wdym? You don't need to be good at something, but you surely need to [understand] it [better] than most people to realize those same people are worse at [understanding] and [applying] it than you.
Even knowing that you don't know something is a sign that you have a better understanding of that thing than others, but you cannot overlook to other people and say they are dumb if you consider yourself to be at the same level than them, this would be irrational to believe (because you have virtually the same knowledge and limitations).
So, if you say "most developers don't understand testing", I must get from your words that you at least know what a [better understanding of testing] looks like, or that you are at least more [aware] than other people of the limitations of their own testing (which implies privileged knowledge).
But if that is the case, then an affirmation like "We'd do well to help them better understand testing" would otherwise be pure insanity. If you believe "we" can help people understand something better, you must understand it [better] than them; there is no teacher who, ceteris paribus, knows less than his students about the specific matter he's teaching and the flaws the students have in it.
> What kind of small tests are you envisioning?
For starters, things like "is the shape of this data correct" (at the base level of any strongly typed language); things like "was this object deinitialized at the end of the scope as intended" (with destructors); things like "did I forget to call some method, make some state change or do something else" (with the typestate concept). And even things like "is this function being possibly misused": with type-system guarantees about mutability, nullability, aliasing and owning references, you can remove a whole class of specific tests, like "is this argument invalid", "is this object being mutated outside of this function and thus possibly in an invalid state at some point inside it", "can I be sure I can optimize this code with in-place mutations without breaking other parts of the software that depend on it", etc. Obviously this is kind of abstract, but that is because testing is usually a pursuit of turning those abstract concepts into something practical, like "when the user is logging in, is the returned object in a consistent state; are the services, managers and encryption tools being properly used" or "is the customer in a valid known state at this significantly more complex point in the code? Can I be sure it is not null, so I don't need to test against that?", etc.
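To make the first item concrete in TypeScript terms, here is a sketch of the kind of shape test a dynamically typed codebase might carry, and how a static type makes it redundant (`User` and `assertUserShape` are invented for illustration):

```typescript
// The static type: a User must have a numeric id and a string name.
interface User {
  id: number;
  name: string;
}

// The runtime shape test you would need without the type system. In plain
// JavaScript this kind of check (or a test exercising it) guards every
// boundary where a User is constructed or passed around.
function assertUserShape(u: any): void {
  if (typeof u.id !== "number" || typeof u.name !== "string") {
    throw new Error("user has the wrong shape");
  }
}

// With TypeScript, constructing a malformed User is a compile error, so
// tests only need to cover behavior, not shape:
const u: User = { id: 1, name: "Ada" };
// const bad: User = { id: "1", name: "Ada" }; // rejected by the compiler

assertUserShape(u); // passes trivially; the compiler already guaranteed it
console.log("shape ok");
```

Checks like `assertUserShape` remain useful at untyped boundaries (JSON from the network, for example), which is exactly where the static guarantee stops.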
> Furthermore, even if we grant that statement as being true for the sake of discussion, there is still the problem that the primary intent of tests is to offer documentation. That the documentation is self-validating is the reason you're not writing it in Microsoft Word instead, but that is really a secondary benefit.
First, I don't think this is true [at all]. There are many kinds of tests, and the [intent] behind them differs. For example, I could say that the primary intent of tests is to ensure that the given problem and the expressed code are aligned: to check that what I did is really doing what I intended, that I did not commit any logical errors when expressing the thing, and that I managed to express correctly what I intended to express (like whether, even if everything was correct from a purely computational standpoint, I effectively reached the state I was consciously trying to reach). I think a [side effect] of testing is that it turns out to be pretty good documentation for many problems, but not that documentation is the [intended] goal of it. Maybe if you are developing the tests with that goal in mind, but not as a unique objective truth.
And also, even if I accept this: type systems are extremely good ways of documenting the things you are doing. I have seen Haskell programmers say many times that they use the types of the functions they want to call, or of the things they want to make, as the way to find appropriate usages of that thing (i.e. if they need to convert a string to an int, they can search in their editor for [String -> Maybe Int] and they will find many useful functions -- probably the one they want -- and everything will be very clear to them in that sense). Good types lead the programmers using them to correct code, because they make it very hard (or sometimes even impossible) to express incorrect programs. Part of the reason I really like good type systems is that I am a very forgetful person; if I write something, the chances of me forgetting about it later down the line are very high. So I really like the sensation of coming back to a codebase and finding all the clues I left for myself (i.e. the types), discovering that to use this I need that other thing (or else it won't compile), that the function I'm calling can fail in specific ways signalled in the types (and now I remember: I need to do this and this), and how everything fits together like a good and very comfortable puzzle. This is my ideal documentation. Tests are also important, to remind me how I used this code in practice, but many times the types are really everything I need. This, obviously, is more anecdotal -- it is how I view things -- but I think many people would agree with me on this, and that it is not absurd at all as a conclusion.
-------------- I was bitten by the char limits here, so I'll put in parts (part 1)
Let's be real: You can absolutely write "Go-style" code in just about any language that might have been considered for this. But you wouldn't want to, as a more advanced type system enables entirely different idioms, and it is in bad faith to other developers (including future you) to stray too far from those idioms. If ignoring idioms doesn't sound like a bad idea on day one, you'll feel the hurt and regret soon enough...
Go was chosen because the idioms are generally in alignment with their needs and those idioms are wholly dependent on the shape of its type system.
I would say structural typing is very "esoteric" for most strongly typed languages actually, but this is not a problem.
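For readers who haven't seen it in practice, TypeScript's structural typing looks roughly like this (a minimal sketch; `Named` and `greet` are invented names):

```typescript
// Structural typing: any value with the right shape satisfies an interface.
// No explicit "implements Named" declaration is needed anywhere.
interface Named {
  name: string;
}

function greet(x: Named): string {
  return `hello, ${x.name}`;
}

// This object never mentions Named, but it matches the shape, so it is
// accepted. Most nominally typed languages would reject this; Go's
// interfaces behave similarly (structurally) for method sets.
const point = { name: "origin", x: 0, y: 0 };
console.log(greet(point));
```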
And proceeding, the implications of your response are very strange. See, your point is essentially "we should use Go, because it entails writing in only one idiom, and writing in languages that enable more idioms -- more powerful languages -- is bad faith to other developers". But Hejlsberg himself said he chose Go because of specific characteristics of the compiler that was already written, not because it is "the ideal one for every single prospect", while your point has implications that are absolutely more general. So I don't think he would agree with you that this was his reasoning for using Go (the "doesn't have other idioms" thing). I also don't think this whole "more idioms" thing even makes sense, but that is not needed to respond to this.
He did, but much more importantly Cavanaugh said that he chose Go because it has similar semantics and code structure. In other words, idiomatic Go is similar to how the original code was written. While I am sure Hejlsberg's input was icing on the cake, it was not the ultimate determinant. C# having the best compiler in the world on every front still wouldn't have ticked the boxes the guy in charge needed to tick.
> So I don't think he would agree with you that this was his reasoning for using go
He may not, but it also wasn't his choice in the end anyway, so it's a bit strange that you are leaning on his word.
Advanced type systems are guard rails to spot and avoid issues early on, but that role can be fulfilled by tests as well, and Typescript has a LOT of tests and use cases that will be flagged up. Does it need a strong type system for the internal tooling on top of that?
I'm not an authority on the matter and know nothing about the compiler's internals, but I'm confident in saying that the type system is good enough for this use case.
Edit: I have reached the bottom of the thread and still have not seen this visceral reaction mentioned by the OP.
There's more reactions here. I think devs have lost the plot, tbh.
Meanwhile, this decision was made or led by one of the few people that developed multiple popular programming languages. I trust his opinion / decision more than internet commenters, including my own.
What is weird is how much people talk about how other people react. Modern social media is weird
Holy Language Wars are a spectator sport as old as the internet itself. It's normal to comment on one side fighting another. What's weird is pretending not to see the fighting
But, advocates for language X need to make sure they read and understand the requirements and tradeoffs, which could probably have been communicated better.
Never been a big fan of MS, but I must say that TypeScript is well done imho. Thanks for it and all the hard work!
(I'm simply not aware of them but that also means I won't make any statements about these)
I would argue it needs editing, as it violates the HN guideline:
> use the original title, unless it is misleading or linkbait; don't editorialize.
Opened discussion [1]
- [0] https://github.com/microsoft/typescript-go/discussions/410
- [1] https://github.com/microsoft/typescript-go/discussions/467
https://github.com/microsoft/typescript-go/commits?after=dad...
../../../tmp/typescript-go/built/local/lib.dom.d.ts:27982:6 - error TS2300: Duplicate identifier 'KeyType'.
27982 type KeyType = "private" | "public" | "secret";
~~~~~~~
../../../tmp/typescript-go/built/local/lib.webworker.d.ts:9370:6 - 'KeyType' was also declared here.
9370 type KeyType = "private" | "public" | "secret";
~~~~~~~
Probably an easy fix. Running it on another portion results in a SIGSEGV with a bad/nil pointer dereference, which puts me in the camp of people questioning the choice of Go.
They would be still setting up the project, if it was Rust.
Why not find out what's going wrong and submit a bug report / merge request instead of immediately dismissing a choice made by one of the leading authorities in programming languages in the world?
- Native executable support on all major platforms
- He doesn't seem to believe that AOT-compiled C# can give the best possible performance on all major platforms
- Good control of the layout of data structures
- Had to have garbage collection
- Great concurrency support
- Simple, easy to approach, and great tooling
One other thing I forgot to mention: he talked about how the current compiler was mostly written as more or less pure functions operating on data structures, as opposed to being object-oriented, and said this fits very well with the Go way of doing things, making a 1:1 port much easier.
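A minimal sketch of what "pure functions operating on data structures" means in practice, written in TypeScript for familiarity. The node shape here is a made-up stand-in for the real compiler's AST; the point is that this style (plain data plus free functions, no classes or methods) maps almost mechanically onto Go structs and functions:

```typescript
// Plain data: a tagged union describing a tiny expression tree.
// No methods, no classes -- just a shape.
type Expr =
  | { kind: "num"; value: number }
  | { kind: "add"; left: Expr; right: Expr };

// A pure function over that data. In Go this becomes a struct with a
// kind field (or an interface) and a free function with a type switch.
function evaluate(e: Expr): number {
  switch (e.kind) {
    case "num":
      return e.value;
    case "add":
      return evaluate(e.left) + evaluate(e.right);
  }
}

// (2 + 3) as a data structure, then evaluated by the function:
const ast: Expr = {
  kind: "add",
  left: { kind: "num", value: 2 },
  right: { kind: "num", value: 3 },
};
console.log(evaluate(ast));
```

An object-oriented codebase would instead hang `evaluate` off each node class, and porting that to Go would mean redesigning around interfaces rather than translating line by line.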
I don't think it was the performance. C# is usually on par or faster than Go.
Could be the lack of maturity, but also I believe Go produces smaller binaries, which makes a lot of sense for a CLI.
I benchmarked HTML rendering and Dotnet was 2-3x faster than Go using either Templ or html/template.
Etc.
those hardcoded byte arrays are how everyone does templating everywhere right?
or are you talking about after they changed their "platform" test back to not do that, and it is substantially slower than Go
The link you provided is severely outdated, using data from Round 21 and .NET 6.
You can write pure functions operating on data structures in C#, it's maybe not as idiomatic as in Go, but it should not cause problems.
Based on interviews, it seems Hejlsberg cares a lot about keeping the code base as idiomatic and approachable as possible. So it's clearly a factor.
If doing a web server, on the other hand, these things wouldn't matter at all as you would be running a container anyway.
In my opinion, using C# for this use case isn't a practical choice on a greenfield project.
Very specific areas require reflection, which is not analyzable, with the main user being serialization -- and serialization happens to be completely solved.
Every tooling has its faults.
Strange choice to use Go for the compiler instead of C# or F#.
Now if they will have problems, they will depend on the Go team at Google to fix them.
https://devblogs.microsoft.com/go/
Just like they have their own Java distribution, after everything that caused C# to exist in the first place,
https://devblogs.microsoft.com/java/
Yes, the new DevDiv is not like the Microsoft of old.
But then the .NET team shouldn't be asking every now and then on social media why other languages get chosen outside the Windows ecosystem.
The opposite would be true as well, teams at Google using Typescript or C# would rely on Microsoft to fix any issues.
Or collaborate with Go team.
MS literally already has a whole team around Go. And if they didn't, Go is completely open source.
C# is open-source in name only.
Love it :D
Hopefully this will also reduce the memory footprint, because my VS Code IntelliSense keeps crashing unless I give it like 70% of my RAM. It's probably because of our fairly large graphql.ts file, which contains auto-generated GraphQL types.
This isn't a knock against Go or necessarily a promotion of Rust, just seems like a lot of duplicated effort. I don't know the timelines in place or where the community projects were vs. the internal MS project.
I kinda wonder, though, if in 5 or 10 years how many of these tools will still be crazy fast. Hopefully all of them! But I also would not be surprised if this new performance headroom is eaten away over time until things become only just bearable again (which is how I would describe the current performance of typescript).
Plus, using TS directly to do runtime validation of types will become a lot more viable without having to precompile anything. Not only serverside, we'll compile the whole thing to WASM and ship it to the client to do our runtime validation there.
> Bootstrapping is a fairly common practice when creating a programming language. Many compilers for many programming languages are bootstrapped, including compilers for ALGOL, BASIC, C, C#, Common Lisp, D, Eiffel, Elixir, Go, Haskell, Java, Modula-2, Nim, Oberon, OCaml, Pascal, PL/I, Python, Rust, Scala, Scheme, TypeScript, Vala, Zig and more.
Most of the Rust compiler is in Rust, that's correct, but it does by default use LLVM to do code generation, which is in C++.
For instance, this compiler for a pattern matching notation has parts of it implementation using the notation itself:
https://www.kylheku.com/cgit/txr/tree/stdlib/match.tl
Some pattern matching occurs in the function match-case-to-casequal. This is why it is preceded by a dummy implementation of non-triv-pat-p, a function needed by the pattern matching logic for classifying whether a pattern is trivial or not; it has to be defined so that the if-match and other macros in the following function can expand. The stub just says every pattern is nontrivial, a conservative guess.
non-triv-pat-p is later redefined. And it uses match-case! So the pattern matcher has bootstrapped this function: a fundamental pattern classification function in the pattern matcher is written using pattern matching. Because of the way the file is staged, with the stub initial implementation of that function, this is all bootstrapped in a single pass.
My most plausible guess would be that the compiler writers didn't want to dig into native code and performance work; writing a TS-to-Go translator looks like a more familiar task for them. The lack of any JS-version performance analysis in the announcements kinda confirms this.
Programming languages are tools. Nothing more.
It doesn't use type hints yet, and the difficulty there is that you'd need a sound type system in order to rely on the types. You may be able to use type hints to generate optimized and fallback functions, with type guards, but that doesn't exist yet and it sounds like the TypeScript team wants to move pretty quickly with this.
[1]: https://porffor.dev/
While I like faster TSC, I don't like that the TypeScript compiler needs to be written in another language to achieve speed; it kind of reminds everyone that TS isn't a good language for complicated CPU/IO tasks.
Given that the TypeScript team has resigned itself to the fact that JavaScript engines can't run the TypeScript compiler (TSC) sufficiently fast for the foreseeable future and is rewriting it entirely in Go, it is unlikely they will pursue AOT.
https://www.microsoft.com/en-us/research/publication/static-...
> immutable data structures --> "we are fully concurrent, because these are what I often call embarrassingly parallelizable problems"
The relationship of their performance gains to functional programming ideas is explained beginning at 8:14 https://youtu.be/pNlq-EVld70?feature=shared&t=522
Also, I get the sense from the video that it still outputs only JS. It would be nice if we could build TypeScript executables that didn't require that, even if it was just WASM, though that is more of a different backend than a different compiler.
Edit: C# was addressed: https://github.com/microsoft/typescript-go/discussions/411#d...
> The JS-based codebase will continue development into the 6.x series, and TypeScript 6.0 will introduce some deprecations and breaking changes to align with the upcoming native codebase.
> While some projects may be able to switch to TypeScript 7 upon release, others may depend on certain API features, legacy configurations, or other constraints that necessitate using TypeScript 6. Recognizing TypeScript’s critical role in the JS development ecosystem, we’ll still be maintaining the JS codebase in the 6.x line until TypeScript 7+ reaches sufficient maturity and adoption.
It sounds like the Python 2 -> 3 migration, or the .NET Framework 4 -> .NET 5 (.NET Core) migration.
I'm still in a multi-year project to upgrade past .NET Framework 4, so I can certainly empathize with anyone who gets stuck on TS 6 for an extended period of time.
My theory - that Go will always be the choice for things like this when ease, simplicity, and good (but not absolute) performance is the goal - continues to hold.
And if it's run-time, can we expect browsers to replace V8 with this Go library?
(I realize this is a noob/naive question - apologies)
So, to me, Hejlsberg's choice sounds pretty logical.
Then again, why Go? Why not...
Also, what’s up with 10x everywhere? Why not 9.5x or 11x?
https://devblogs.microsoft.com/typescript/announcing-typescr...
This only allows you to run your TypeScript in Node; it does not perform type checking, and I don't believe there are any plans to add it. This is from Node.js 23.9.0:
https://nodejs.org/api/typescript.html#type-stripping
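To make the "no checking" point concrete, here's a small sketch (the cast is just there to keep a type checker quiet; under type stripping the annotation is simply blanked out and never verified):

```typescript
// Type stripping removes the annotations without checking them, so an
// annotated "number" can happily hold a string at runtime:
const stripped: number = "oops" as unknown as number;
console.log(typeof stripped); // "string": nothing enforced the annotation
```

In other words, Node treats the types purely as syntax to delete, exactly like a transpile-only tool such as esbuild does.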
I don't believe Node has any plans for type checking TS.
> Modern editors like Visual Studio and Visual Studio Code have excellent performance.
Well I am not sure we are on the same page here. Still, fingers crossed.
See how many spaghetti types get churned through this faster transpiler.
Didn't expect Jevons paradox popping up for compilers.
If the Typescript team were to go with Rust or C# they would have to contend with async/await decoration and worry about starvation and monopolization.
Go frees the developer from worrying about these concerns.
There's a significant performance penalty for using javascript outside the browser.
I'm not aware of any JS runtime outside a browser that supports concurrency (other than concurrently awaiting IO), so you can't do parallel compilation in a single process.
It's generally also very difficult to make a JS program as fast as even a naive go program, and the performance tooling for go is dramatically more mature.
You haven't looked very hard, then; Node.js has supported worker threads for years. However, to uphold JavaScript's safety guarantees, they can only communicate via message passing or by sharing a special `SharedArrayBuffer` datatype, neither of which is well suited to sharing large immutable data structures.
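A minimal sketch of the two mechanisms, runnable in plain Node (which exposes `SharedArrayBuffer`, `Atomics`, and `structuredClone` globally): the only truly shared memory is a raw byte buffer, while everything else crossing `postMessage()` is deep-copied.

```typescript
// SharedArrayBuffer is the one way two JS threads can share memory;
// Atomics gives safe reads/writes into it.
const shared = new SharedArrayBuffer(8);
const view = new Int32Array(shared);
Atomics.store(view, 0, 42);          // visible to any worker holding `shared`
console.log(Atomics.load(view, 0));  // 42

// An ordinary object, by contrast, is deep-copied when posted to a worker.
// structuredClone is the same algorithm postMessage uses internally.
const original = { flags: 1 };
const copy = structuredClone(original);
copy.flags = 2;
console.log(original.flags); // still 1: no shared state, just a copy
```

Note the shared buffer is untyped bytes: there's no way to hand a worker a large immutable AST without either serializing it or hand-rolling an encoding into the buffer, which is exactly the poor fit described above.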
What is more important is that tsc does typechecking, which is a static analysis of sorts to ensure code correctness. But this has nothing to do with runtime performance, that's entirely in JS land and in JS transpilers / optimizers.
You seem to be referring to runtime performance of compiled code. The announcement is about compile times; it's about the performance of the compiler itself.
I've been revisiting my editing setup over the last 6 months and to my surprise I've time traveled back to 2012 and am once again really enjoying Sublime Text. It's still by far the most performant editor out there, on account of the custom UI toolkit and all the incredibly fast indexing/search/editing engines (everything's native).
Not sure how this announcement impacts VSCode's UI being powered by Electron, but having the indexing/search/editing engines implemented in Go should drastically improve my experience. The editor will never be as fast as Sublime but if they can make it fast enough to where I don't notice the indexing/search/editing lag in large projects/files, I'd probably switch back.
It has no bearing on this at all.
tl;dr — Rust would be great for a rewrite, but Go makes way more sense for a port. After the dust settles, I hope people focus on the outcomes, not the language choice.
I was very surprised to see that the TypeScript team didn’t choose Rust, not just because it seemed like an obvious technical choice but because the whole ecosystem is clearly converging on Rust _right now_ and has been for a while. I write Rust for my day job and I absolutely love Rust. TypeScript will always have such a special place in my heart but for years now, when I can use Rust.. I use Rust. But it makes a lot of sense to pick Go.
The key “reading between the lines” from the announcement is that they’re doing a port not a rewrite. That’s a very big difference on a complex project with 100-man-years poured into it.
Places where Go is a better fit than Rust when porting JavaScript:
- Go, like JavaScript and unlike Rust, is garbage collected. The TypeScript compiler relies on garbage collection in multiple places, and there are probably more places that rely on it without anyone realizing. It would be dangerous and very risky to attempt to unwind all of that. If it were a Rust rewrite, this problem would go away, but they're not doing a rewrite.
- Rust is so stupidly hard. I repeat, I love Rust. Love it. But damn. Sometimes it feels like the Rust language actively makes decisions that demolish the DX of the 99.99% use-case if there’s a 0.001% use-case that would be slightly more correct. Go is such a dream compared to Rust in this respect. I know people that more-or-less learned Go in a weekend and are writing it professionally daily. I also know people that have been writing Rust every day professionally for years and say they still feel like noobs. It’s undeniable what a difference this makes on productivity for some teams.
Places where Go is just as good a fit as Rust:
- Go and Rust both have great parallelism/concurrency support. Go supports both shared memory (with explicit synchronization) and message-passing concurrency (via goroutines and channels). In JavaScript, multi-threading requires message passing between workers, making Go's concurrency model a smoother fit for porting a JS-heavy codebase that assumes implicit shared state. Rust enforces strict ownership rules that disallow shared mutable state, or at least make it a lot harder (by design, admittedly).
- Go and Rust both have great tooling. Sure, there are so many Rust JavaScript tools, but esbuild definitively proves that Go tooling can work. Heck, the TypeScript project itself uses esbuild today.
- Go and Rust are both memory safe.
- Go and Rust have lots of “zero (or near zero) cost abstractions” in their language surface. The current TypeScript compiler codebase makes great use of TypeScript enums for bit fiddling and packing boolean flags into a single int32. It sucks to deal with (especially with a Node debugger attached to the TypeScript typechecker). While Go structs are not literally zero cost, they’re going to be SO MUCH nicer than JavaScript objects for a use-case like this that’s so common in the current codebase. I think Rust sorta wins when it comes to plentiful abstractions, but Go has more than enough to make a huge impact.
Places where Rust wins:
- the Rust type system. no contest. In fairness, Go doesn’t try to have a fancy type system. It makes up for a lot of the DX I complained about above. When you get an error that something won’t compile, but only when targeting Windows because Rust understands the difference in file permissions… wow. But clearly, what Go has is good enough.
- so many new tools (basically, all of them that are not also in JS) are being done in Rust now. The alignment on this would have been cool. But hey, maybe this will force the bindings to be high-quality which benefits lots of other languages too (Zig type emitter, anyone?!).
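The bit-flag pattern mentioned above can be sketched like this (the flag names here are made up for illustration; the real compiler's flag sets are much larger):

```typescript
// Packing several booleans into one int32 using an enum of powers of two.
enum NodeFlags {
  None     = 0,
  Exported = 1 << 0,
  Async    = 1 << 1,
  Static   = 1 << 2,
}

let flags = NodeFlags.Exported | NodeFlags.Async;

console.log((flags & NodeFlags.Async) !== 0);  // true: the flag is set
flags &= ~NodeFlags.Async;                     // clear one flag in place
console.log((flags & NodeFlags.Async) !== 0);  // false
```

In a debugger all you see for `flags` is an opaque integer like `3`, which is part of why this style is painful to work with in the current JS codebase; a Go port can keep the same representation while getting real integer types.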
By this time next week when the shock wears off, I just really hope what people focus on is that our TypeScript type checking is about to get 10 times faster. That’s such a big deal. I can’t even put it into words. I hope the TypeScript team is ready to be bombarded by people trying to use this TODAY despite them saying it’s just a preview, because there are some companies that are absolutely desperate to improve their editor perf and un-bottleneck their CI. I hope people recognize what a big move this is by the TypeScript team to set the project up for success for the next dozen years. Fully ejecting from being a self-hosted language is a BIG and unprecedented move!
Specifically if you race any non-trivial Go object (say, a hash table, or a string) then that's immediately UB. Internally what's happening is that these objects have internal consistency rules which you can easily break this way and they're not protected against that because the trivial way to do so is expensive. Writing a Go data race isn't as trivial as writing a use-after-free in C++ but it's not actually difficult to do by mistake.
In single threaded software this is no caveat at all, but most large software these days does have some threading involved.
In terms of concrete examples, this might allow remote code execution, arbitrary reads or writes of memory that you otherwise don't have access to, stuff like that.
Typescript is a Microsoft project, right? I’m surprised they didn’t choose C#.
Go doesn't seem to be memory safe, see https://www.reddit.com/r/rust/comments/wbejky/comment/ii7ak8... and https://go.dev/play/p/3PBAfWkSue3
Rust is memory safe. Go is memory safe. Python is memory safe. Typescript is memory safe. C++ is not memory safe. C is not memory safe.
At some point a next generation solver will make this not compile, and people will probably invent an even weirder edge case for that solver.
Whereas the Go example is just how Go works; that's not a bug, that's by design. Don't expect Go to give you thread safety; that's not what they promised.
The burden placed by Rust on the developer is to keep track of all possible mutability and readability states and commit to them upfront during development. (If I may summarize; it's been a long time since I wrote any Rust.) The rest it takes care of for you.
The question of which a developer prefers at a certain skill level, and which a manager of developers at a certain skill level prefers, is going to vary.
That said, most people still call Go memory safe even in spite of this being possible, because, well, https://go.dev/ref/mem
> While programmers should write Go programs without data races, there are limitations to what a Go implementation can do in response to a data race. An implementation may always react to a data race by reporting the race and terminating the program. Otherwise, each read of a single-word-sized or sub-word-sized memory location must observe a value actually written to that location (perhaps by a concurrent executing goroutine) and not yet overwritten. These implementation constraints make Go more like Java or JavaScript, in that most races have a limited number of outcomes, and less like C and C++, where the meaning of any program with a race is entirely undefined, and the compiler may do anything at all.
That last sentence is the most important part. Java in particular specifically defines that tears may happen in a similar fashion, see 17.6 and 17.7 of https://docs.oracle.com/javase/specs/jls/se8/html/jls-17.htm...
I believe that most JVMs implement dynamic dispatch in a similar manner to C++, that is, classes are on the heap, and have a vtable pointer inside of them. Whereas Go's interfaces can work like Rust's trait objects, where they're a pair of (data pointer, vtable pointer). So the behavior we see here with Go is unlikely to be possible in Java, because the tear wouldn't corrupt the vtable pointer, because it's inside what's pointed at by the initial pointer, rather than being right after it in memory.
These bugs do happen, but they have a more limited blast radius than ones in languages that are clearly unsafe, and so it feels wrong to lump Go in with them even though in some strict sense you may want to categorize it the other way.
On the other hand, in practice, I've wound up using Go in production quite a lot, and these bugs are exceedingly rare. And I don't mean concurrency bugs: Go's concurrency facilities kind of suck, so those are certainly not exceedingly rare, even if they're less common than I would have expected. However... not all Go concurrency bugs can possibly segfault. I'd argue most of them can't, at least not on most common platforms.
So how severely you treat this lapse is going to come down to taste. I see the appeal of Rust's iron-clad guarantees around limiting the blast radius, but of course everything comes with limitations. I believe that any discussion about the limitations of guarantees like these should have some emphasis on the real impact. e.g. It's easy enough to see that the issues with memory management in C and C++ are serious based on the security track record of programs written in C and C++. I think we've yet to fully understand how much Go's lack of safe concurrency will impact Go software in the long run.
I both want to agree with this, but also point to things like https://www.uber.com/en-CA/blog/data-race-patterns-in-go/, which found a bunch of bugs. They don't really contextualize it in terms of other kinds of bugs, so it's really hard to say from just this how rare they actually are. One of the insidious parts of non-segfaulting data race bugs is that you may not notice them until you do, so they're easy to under-report. Hence the checker used in the above study.
> not all Go concurrency bugs can possibly segfault. I'd argue most of them can't, at least not on most common platforms.
For sure, absolutely. And I do think that's meaningful and important.
> I think we've yet to fully understand how much Go's lack of safe concurrency will impact Go software in the long run.
Yep, and I do suspect it'll be closer to Java than to C.
I don't know if I'd evangelize for adopting Go on the scale that Uber has: I think Go works best for shared-nothing architectures and gets gradually less compelling as you dig into more complex concurrency. That said, since Uber is an early adopter, there is a decent chance that what they have learned will help future organizations avoid repeating some of the same issues, via improvements to tooling and the language.
[1]: https://go.dev/blog/loopvar-preview
[2]: https://go.dev/blog/synctest
[3]: https://github.com/mgechev/revive/blob/HEAD/RULES_DESCRIPTIO...
How can a segfault lead to attack or exploitation?
Edit: Answering my own question (from https://go.dev/ref/mem):
> Reads of memory locations larger than a single machine word are encouraged but not required to meet the same semantics as word-sized memory locations, observing a single allowed write w. For performance reasons, implementations may instead treat larger operations as a set of individual machine-word-sized operations in an unspecified order. This means that races on multiword data structures can lead to inconsistent values not corresponding to a single write. When the values depend on the consistency of internal (pointer, length) or (pointer, type) pairs, as can be the case for interface values, maps, slices, and strings in most Go implementations, such races can in turn lead to arbitrary memory corruption.
Is that exploitable? It depends. It's easier to assume that it is than hope that it isn't.
However, while it is a more serious category of issue, I have two reasons to suggest people don't over-index on it:
- Concurrency bugs that can not lead to segmentation faults are by no means safe, they can still lead to exploits of arbitrary severity. Ones that can are more dangerous since they can violate Go's own safety guarantees, but so can the "unsafe" package, so you need to put it into some perspective.
- Concurrency bugs that can segfault are likely to be less common. In my experience, it is not extremely common to re-assign shared map or interface values in Go. If you are sharing a value of map, slice, string or interface type and do plan on re-assigning it (thus causing the hazard in question), you can work around this problem trivially by adding a tiny bit of indirection: use an atomic pointer to the value instead, and re-assign that pointer. Making a new value each time is no big deal since all of the fat pointers in question are still relatively small (just 2-3 machine words), though it incurs more allocations and pointer indirections, so YMMV.
And of course I recommend using all applicable linters, the checklocks analyzer from gVisor, and careful encapsulation of shared memory where possible. Even better is to avoid it entirely if you can.
Of course, as much as I love Go, some types of program are going to need lots of hairy shared memory and mutations interweaving. And for that, Rust is the obvious best choice.
What? And how? And how would that help in Go which has a completely different garbage collection mechanism?
My development in regards to language:
- Javascript sucks I love Python.
- Python sucks I love Typescript.
Another commenter pointed out that compilers have very different performance characteristics to games, and I'll include web servers in that too.
tsc needs to start up fast and finish fast. There's not a ton of time to benefit from JIT.
Your server on the other hand will run for how long between deployments?
Right now you can use the --erasableSyntaxOnly flag to find any enums in your code and start porting them over to an alternative. This article lists alternatives if you're interested.
https://exploringjs.com/tackling-ts/ch_enum-alternatives.htm...
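For reference, the most common erasable replacement is a const object plus a derived union type; a sketch (the names here are illustrative):

```typescript
// `enum` emits runtime code, so --erasableSyntaxOnly rejects it. A const
// object with a derived union type is a drop-in replacement for most uses:
const LogLevel = {
  Debug: 0,
  Info: 1,
  Error: 2,
} as const;

// The union of the object's values: 0 | 1 | 2
type LogLevel = (typeof LogLevel)[keyof typeof LogLevel];

function log(level: LogLevel, msg: string): void {
  if (level >= LogLevel.Info) console.log(msg);
}

log(LogLevel.Error, "something broke"); // prints "something broke"
```

Because the object is plain JavaScript and the type is erased, this survives both type stripping and --erasableSyntaxOnly; you lose only enum-specific features like reverse mappings.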
As I see it, the next planned feature is macros in TS (joke, just because the 3rd book is about macros).
I'll give Typescript yet another go. I really like it and wish I could use it. It's just that any project I start, inevitably the sourcemap chain will go wrong and I lose the ability to run the debugger in any meaningful way.
I’ve been using F# full-time for 6 years now. And compiler/tooling gets painfully slow fast.
Still wouldn’t trade it for anything else though.
Do any other well-adopted tools in the ecosystem use Go?
esbuild is the most well-known/used project, probably beats all other native bundlers combined. I can't remember anything else off the top of my head.
sigh
That's really not what's stopping TS being built in to browsers. Have a look at the discussions around the types-as-comments proposal https://tc39.es/proposal-type-annotations/
"To meet those goals, we’ve begun work on a native port of the TypeScript compiler and tools. The native implementation will drastically improve editor startup, reduce most build times by 10x, and substantially reduce memory usage."
It's hard to tell if there will even be a runtime that somehow uses TS types to optimize even further (e.g. by proving that a function diverges) but to my knowledge they currently don't and I don't think there's any in the works (or if that's even possible while maintaining runtime soundness, considering you can "lie" to TS by casting to `unknown` and then back to any other type).
Just like if you said faster C++ that could mean the compiler runs faster, or the resulting machine code runs faster.
Just because the compile target is another human readable language doesn’t mean it ceases to be a typescript program.
I didn’t think this particular example was very ambiguous because a general 10x speed up in the resulting JS would be insane, and I have used typescript enough to wish the compiler was faster. Though if we’re being pedantic, which I enjoy doing sometimes, I would say it is ambiguous.
That still wouldn't make sense, in the same way that it wouldn't make sense to say "Python type hints found a way to automatically write more performant Python". With few exceptions, the TypeScript compiler doesn't have any runtime impact at all — it simply removes the type annotations, leaving behind valid JavaScript that already existed as source code. In fact, avoiding runtime impact is an explicit design goal of TypeScript [1].
They've even begun to chip away at the exceptions with the `erasableSyntaxOnly` flag [2], which disables features like enums that do emit code with runtime semantics.
[1] https://github.com/microsoft/TypeScript/wiki/TypeScript-Desi...
[2] https://www.typescriptlang.org/docs/handbook/release-notes/t...
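To make the erasure point concrete, a small sketch: the annotated function compiles to the same JavaScript minus the annotations, while `enum` (one of the exceptions mentioned above) leaves a real object behind at runtime.

```typescript
// Erased: the emitted JS is just `function add(a, b) { return a + b; }`.
function add(a: number, b: number): number {
  return a + b;
}

// Not erased: an enum compiles to a lookup object that exists at runtime,
// which is exactly the kind of emit erasableSyntaxOnly forbids.
enum Color { Red, Green }

console.log(add(1, 2));  // 3
console.log(Color[0]);   // "Red": a reverse mapping built at runtime
```

Nothing about `add` runs differently for having been written in TypeScript; the enum is the odd one out because it manufactures a value that never appeared in the source as plain JS.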
I get your point, but... this is exactly the premise of mypyc ;)
Sure, lots of build tools do this, but that's not Typescript.
With very few exceptions, Typescript is written so that removing the Typescript-specific things makes it equivalent to the Javascript it transpiles to.
https://betterstack.com/community/guides/scaling-nodejs/node....
If you don't know enough about TypeScript to understand that TypeScript is not a runtime, I'm not sure why you would care about TypeScript being faster (in either case).
Preact was "a faster React", for example.
I mean I think generally you’d want to click the link and read the article before commenting
- It's not ambiguous because they mean $X.
- It is ambiguous because it has many possible meanings.
- It is not ambiguous because it has many possible meanings
Yeah, that exists. AssemblyScript has an AOT compiler that generates binaries from statically typed code.
Typescript's type system is unsound so it probably will never be very useful for an optimizing compiler. That was never the point of TS however.
I don't think that is too far-fetched either, since TypeScript already has most of the type information.
It’s been a crazy couple of weeks for TS!!
If you have to invent things for something to be considered ambiguous, is it really ambiguous?
MS could conceivably have written a TypeScript compiler that emits native binaries, and that would have made the language 10x faster; why not?
Transpiling in itself also doesn't rule out producing more optimized code, especially if the source has more type information. The official TypeScript compiler just doesn't do any of that right now (e.g. it won't remove a branch handling the case where a variable is a number, even when the type information proves it can't be one). Heck, it doesn't even natively support producing minified output to improve runtime parsing (you can always bolt this on yourself). In both examples it's not that transpilation prevents the optimization; it's just not done (or possibly not worthwhile if TS only ever targets JS runtimes, since JS JITs are extraordinarily good these days).
Except in the case of Doom, which can run on anything.
If someone posted an article talking about the "handedness" of DNA or something, I wouldn't complain "oh, you confused me, I thought you were saying DNA has hands!"
I agree with pseudopersonal that the title should be changed. Technically it's not misleading, but not everyone uses or is familiar with TypeScript.
Mostly these days I’m only aware of C# when it inconveniences me.
The reasons stated on GitHub don't seem very convincing, imo.
- platform support
NativeAOT supports all platforms; the only one missing is Android, which is marked experimental, but since TypeScript would be treated as a "1st party" customer (they're both MS projects), that could easily be expedited. Even WASM is supported in NativeAOT via the LLVM toolchain, and it often benchmarks better than Go's WASM target, which doesn't use LLVM.
- Usage of functions and structs
C# supports this, and you even have better control over layout and performance in this regard. Functions can easily be ported as static methods on static classes. They could even have used F#, which is even closer to TypeScript, if they wanted a more direct port, as both languages compile to IL for NativeAOT.
There must be more reasons why they didn't choose C#, likely non-technical ones. A missed opportunity, imo.
However, in both native AOT and Go you actually have some parts of the runtime bundled in (e.g. garbage collector).
JavaScript is not slow because of GC or the JIT (the JVM is about twice as fast in benchmarks, and Go has a GC) but because JS as a language is not designed for performance. Despite all the work V8 does, it cannot perform enough analysis to recover the desired performance. The simplest example to explain is the lack of machine numbers (e.g. ints). JS doesn't have any representation for these, so V8 does a lot of work to try to figure out when a number can be represented as an int, but it won't catch all cases.
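A quick illustration of the missing machine-number representation:

```typescript
// Every JS number is an IEEE-754 double, so integer identity breaks
// past 2^53:
const big = 2 ** 53;          // 9007199254740992
const bigger = big + 1;       // rounds back down to the same double
console.log(big === bigger);  // true: there is no 64-bit int to hold it

// Bitwise operators are the one place ints leak through: operands are
// coerced to signed 32-bit, which engines like V8 exploit internally
// as "small integer" representations when they can prove it's safe.
console.log((2 ** 31) | 0);   // -2147483648: wrapped to int32
```

The engine's untagging of small integers is a heuristic it must constantly guard and deoptimize around, whereas a Go `int32` is just an `int32`.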
As for "working solution over language politics" you are entirely pulling that out of thin air. It's not supported by the article in any way. There is discussion at https://github.com/microsoft/typescript-go/discussions/411 that mentions different points.
I wonder if Typescript could introduce integer type(s) that a direct TS -> native code compiler (JIT or AOT) could use. Since TS becomes valid JS if all type annotations are removed, such numbers would just become normal JS numbers from the POV of a JS runtime which does not understand TS.
No, it is not. It is a continuation of an existing trend.
You may be interested in esbuild (https://github.com/evanw/esbuild), turborepo (https://github.com/vercel/turborepo), and biome-js (https://github.com/biomejs/biome), which are all native reimplementations of existing JS/TS projects. esbuild is written in Go, the others in Rust.
> reveals something deeper: Microsoft prioritized shipping a working solution over language politics
Its not that "deep". I don't see the politics either way, there are clearly successful projects using both Go and Rust. The only people who see "politics" are those who see people disagreeing, are unable to understand the substance of the disagreement and decide "ah, it's just politics".
So, I don't think the comment is AI-generated for this reason.
They're using en-dash which is even easier: option-hyphen.
This is the wrong way to do AI detection. For one, an LLM would have used the right dash. But at least find someone wasting our time with belabored or overwrought text that doesn't even engage with anything.
In the pre-Unicode days, people would use two hyphens (--) to simulate em dashes.
I'm not sure that this is particularly accurate for the Rust case. The goal of this project was to perform a 1:1 port from TypeScript to a faster language. The existing codebase assumes a garbage collector so Rust is not really a realistic option here. I would bet they picked GCed languages only.
From https://github.com/microsoft/typescript-go/discussions/411
> Idiomatic Go strongly resembles the existing coding patterns of the TypeScript codebase, which makes this porting effort much more tractable.
> We also have an unusually large amount of graph processing, specifically traversing trees in both upward and downward walks involving polymorphic nodes. Go does an excellent job of making this ergonomic, especially in the context of needing to resemble the JavaScript version of the code.
Personally, I'm a big believer in choosing the right language for the job. C# is a great language, and often is "good enough" for many jobs. (I've done it for 20 years.) That doesn't mean it's always the best choice for the job. Likewise, sometimes picking a "familiar language" for a target audience is better than picking a personal favorite.
But, the team posted their rationale for Go here: https://github.com/microsoft/typescript-go/discussions/411
I hope you really mean for "userspace tools / programs" which is what these dev-tools are, and not in the area of device drivers, since that is where "systems programming" is more relevant.
I don't know why one would choose JS or TS for "systems programming", but I'm assuming you're talking about user-space programs.
But really, those who know the difference between a compiled language and a VM-based language know the fundamental performance limitations of developer tools written in VM-based languages like JS or TS, and would avoid them for workloads they were not designed for.
I don't follow. If they had picked Rust over Go why couldn't you also argue that they are prioritising shipping a working solution over language politics. It seems like a meaningless statement.
There is already a growing number of native-code tools of the JS/TS ecosystem, like esbuild or swc.
Maybe we should expect attempts of native AOT compilation for TS itself, to run on the server side, much like C# has an AOC native-code compiler.
This should prevent most of the memory safety issues, though data races could still be tricky (e.g. Go is memory unsafe due to data races)
Also in this space is Gleam [2] which targets Erlang / OTP, if high concurrency and fault tolerance is your cup of tea.
[1]: https://reasonml.github.io/
[2]: https://gleam.run/
I think they went for Go mostly because of memory management, async and syntactic similarity to interpreted languages which makes total sense for a port.
Really is this a surprise to anyone? I don't think anyone thinks JS is suitable for 'systems programming'.
JavaScript is the language we have for the browser; there's no value in debating its merits when it's the only option. JavaScript on the server has only ever accrued benefits from being the same language as the browser's.
Why not C#? https://youtu.be/10qowKUW82U?t=1155
/s
All the bootcamp cargo-culting crew have pumped lies such as "the language doesn't matter" or "learn coding in 1 week for a SWE job with JS/TS", and it has caused an increase in low-quality software, with developers left asking afterwards how to bolt "performance" optimizations on.
What we have just seen is the TS team admitting that a limit has been reached, and *almost always* the solution is either porting to a compiled language or relying on new processors, in accordance with Moore's Law, to get performance for free.
Now the bootcampers are rediscovering why we need "static typing" and why a "compiled language" is more performant than a VM-based language.
All the time spent trying to optimize JITs for JavaScript engines, or alternative Python implementations (e.g., PyPy), and fruitless efforts like trying to get JVMs to start fast enough for use in cloud "lambda function" applications. Ugh...
This is how we got Graal, why would you call it "fruitless effort"?
For my specific example of JVMs on lambdas, I wasn't really thinking about GraalVM. I was more thinking of all the hacky, fiddly, things that people were doing to "warm up" their JVM-based lambdas. Like some of the stuff described in this article I just randomly grabbed from a web search: https://medium.com/@marcos.duarte242/keeping-your-aws-lambda...
The reality is that JVM languages were just the wrong tool for the job of writing short-lived applications.
Even though I wasn't really thinking about GraalVM, it might not be shocking that I don't really like it either- for the same kind of reason(s). Java was designed as a fairly dynamic language: you have runtime reflection, dynamic class loading (hot swapping), and various other (admittedly niche) features. So, Java code destined for GraalVM has to be written differently than Java code destined for a standard JVM runtime, which is an inverted way of saying that the nominal goal of GraalVM is technically impossible (you can't, generally, write a native compiler for the Java programming language). So, again, we're taking a language that was designed and optimized for specific runtime properties and we're forcing that square peg into the round hole of AOT compilation. You want native performance? Use a native language!
It feels like someone trying to design a hammer to also be a really shitty screwdriver. Why not just use a hammer sometimes and a screwdriver other times?
Every time I've said that languages like Python, JavaScript, and basically any other language where it's hard to avoid heap allocations, pointer chasing, and copious data copies are all slow, there are plenty of people who come out of the woodwork to inform me that it's all negligible.
To be a little bit fair to those people, I have been in many situations where someone says "my matlab/python code is too slow, I must rewrite it in C", and I've been able to get an order of magnitude improvement by rewriting the code in the same language. Hell, I've ported terrible Fortran code to python/numpy and gotten significant performance improvements. Of course, taking that well-written code and rewriting it in well-written C will probably give you a further order of magnitude improvement. Fast code in a slow language can beat slow code in a fast language, but it will obviously never beat fast code in a fast language.
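The "fast code in a slow language" point can be sketched in Python (illustrative only; the function names are mine): the same dot product written as a per-element interpreter loop versus pushed down into numpy's compiled loops. The vectorized version is typically one to two orders of magnitude faster on large inputs, for exactly the reasons discussed above: no per-element boxing, pointer chasing, or interpreter dispatch.

```python
import numpy as np

def dot_slow(a, b):
    """Naive per-element loop: each iteration boxes floats and goes
    through the interpreter, which is where slow languages lose time."""
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

def dot_fast(a, b):
    """Same computation handed to numpy's C loop in one call:
    contiguous arrays, no per-element interpreter overhead."""
    return float(np.dot(np.asarray(a), np.asarray(b)))

# Both give the same answer; on ~100k elements the numpy version
# is usually 10-100x faster when timed.
a = [0.5] * 100_000
b = [2.0] * 100_000
assert abs(dot_slow(a, b) - dot_fast(a, b)) < 1e-6
```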
I'm just a little bitter because of how many times I've been shushed in places like programming language subreddits and here when I've pointed out how inefficient some cool new library/framework/paradigm is. It feels like I'm either being gaslit or everyone else is in denial that things like excessive heap allocations really do still matter in 2025, and that JITs almost never help much with realistic workloads for a large percentage of applications.
Half of the perf gain is from moving to native code; the other half is from concurrency.
10x faster compilation, not runtime performance
Why is typescript not already a standard natively supported by browsers?!
This is an admission that these JavaScript-based languages (including TypeScript) are simply unsuitable for performance-sensitive situations, especially as the codebase scales.
As long as it is a compiled language with reasonable performance and proper memory management, Go is the unsurprising choice, but also the wise choice for this problem.
But this choice definitively shows (as the TS team has effectively admitted) how immature both JavaScript and TypeScript are in performance and scalability scenarios, and that they should be absolutely avoided for building systems that need them. Especially in the backend.
Just keep it in the frontend.
Anyway, JS is not immature in performance per se, but in this particular use case, a native language is faster. But they had to solve the problem first before they could decide what language was best for it.