I write a lot of tools that depend on the TypeScript compiler API, and they run in a lot of JS environments, including Node and the browser. The current CJS codebase is even a little tricky to load into environments that support standard JS modules, like browsers, so I've been _really_ looking forward to what Jake and others have said will be an upcoming standard-modules-based version.
Is that still happening, and how will the native compiler be distributed for us tools authors? I presume WASM? Will the compiler API be compatible? Transforms, the AST, LanguageService, Program, SourceFile, Checker, etc.?
I'm quite concerned that the migration path for tools could be extremely difficult.
[edit] To add to this as I think about it: I maintain libraries that build on top of the TS API, and are then in turn used by other libraries that still access the TS APIs. Things like framework static analysis, then used by various linters, compilers, etc. Some linters are integrated with eslint via typescript-eslint. So the dependency chain is somewhat deep and wide.
Is the path forward going to be that just the TS compiler has a JS interop layer and the rest stays the same, or are all TS ecosystem tools going to have to port to Go to run well?
If I understood correctly, they created a native Node module that allows synchronous communication over standard I/O between external processes.
So this Node module will make communication possible between the TypeScript compiler Go process, which will expose an “API server compiler”, and a client-side JavaScript process.
They don’t think it will be possible to port all APIs, and some or most of them will be different from today's.
I've found that as of 2025, Go's WASM generator isn't as good as LLVM's, and it has been very difficult for me to even get parity with vanilla JS performance. There is supposedly a way to use a subset of Go with LLVM for faster WASM, but I haven't tried it (https://tinygo.org/).
I'm hoping that Microsoft might eventually use some of their WASM chops to improve Go's native WASM compiler. Their .NET WASM compiler is pretty darn good, especially if you enable AOT.
https://github.com/golang/go/issues/63904#issuecomment-22536...
It sounds like it would be a great fit for e.g. Lua though.
Well, for languages that use a GC. People who are writing WASM that exceeds JS in speed are typically doing it in Rust or C++.
The few cases that performed significantly better than the JS version (like >2x speed) were integer-heavy math and tail-call optimized recursive code; some cases were slower than the JS version.
What surprised me was that the JS version had similar performance to the x64 version with -O3 in some of my benchmarks (like float64 performance).
This was a while ago though when WASM support had just landed in browsers, so probably things got better now.
Why not something like Rust? Most of the JS ecosystem that is moving toward faster tools seems to be going straight to Rust (Rolldown, rspack (the webpack successor), SWC, OXC, Lightning CSS / Parcel, etc.), and one of the reasons given is that it has really great language constructs for parsers and traversing ASTs (I think largely due to the existence of `match`, but I'm not entirely sure).
Was any thought given to this? And if so, what were the deciding factors for Go vs something like Rust or another language entirely?
People say this like it's a bad thing. It's not, it's Go's primary strength.
____
Language choice is always a hot topic! We extensively evaluated many language options, both recently and in prior investigations. We also considered hybrid approaches where certain components could be written in a native language, while keeping core typechecking algorithms in JavaScript. We wrote multiple prototypes experimenting with different data representations in different languages, and did deep investigations into the approaches used by existing native TypeScript parsers like swc, oxc, and esbuild. To be clear, many languages would be suitable in a ground-up rewrite situation. Go did the best when considering multiple criteria that are particular to this situation, and it's worth explaining a few of them.
By far the most important aspect is that we need to keep the new codebase as compatible as possible, both in terms of semantics and in terms of code structure. We expect to maintain both codebases for quite some time going forward. Languages that allow for a structurally similar codebase offer a significant boon for anyone making code changes because we can easily port changes between the two codebases. In contrast, languages that require fundamental rethinking of memory management, mutation, data structuring, polymorphism, laziness, etc., might be a better fit for a ground-up rewrite, but we're undertaking this more as a port that maintains the existing behavior and critical optimizations we've built into the language. Idiomatic Go strongly resembles the existing coding patterns of the TypeScript codebase, which makes this porting effort much more tractable.
Go also offers excellent control of memory layout and allocation (both on an object and field level) without requiring that the entire codebase continually concern itself with memory management. While this implies a garbage collector, the downsides of a GC aren't particularly salient in our codebase. We don't have any strong latency constraints that would suffer from GC pauses/slowdowns. Batch compilations can effectively forego garbage collection entirely, since the process terminates at the end. In non-batch scenarios, most of our up-front allocations (ASTs, etc.) live for the entire life of the program, and we have strong domain information about when "logical" times to run the GC will be. Go's model therefore nets us a very big win in reducing codebase complexity, while paying very little actual runtime cost for garbage collection.
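To make the memory-layout point concrete, here's a hedged sketch (the types are made up, not the actual typescript-go representation) of the kind of per-field control Go gives you: a hot node type can keep its position data inline in one allocation, while still referencing shared nodes through pointers.

```go
// Illustrative only -- not the real typescript-go data structures.
package main

// TextRange is embedded by value below, so reaching a node's position
// costs no extra allocation and no pointer chase.
type TextRange struct {
	Pos, End int
}

type Node struct {
	Kind   int16
	Flags  int16     // small fields can be packed tightly
	Loc    TextRange // inline struct: one contiguous allocation
	Parent *Node     // pointer: shared with other nodes, may be nil
}

func main() {
	n := Node{Kind: 1, Loc: TextRange{Pos: 0, End: 10}}
	_ = n
}
```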
We also have an unusually large amount of graph processing, specifically traversing trees in both upward and downward walks involving polymorphic nodes. Go does an excellent job of making this ergonomic, especially in the context of needing to resemble the JavaScript version of the code.
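As a rough illustration of what that looks like in practice (hypothetical node types, not the real typescript-go AST), the interface-plus-type-switch pattern maps closely to the discriminated unions the JS codebase uses:

```go
package main

import "fmt"

// Hypothetical node types -- not the real typescript-go AST.
type Node interface{ Children() []Node }

type Identifier struct{ Name string }
type BinaryExpr struct{ Left, Right Node }

func (Identifier) Children() []Node   { return nil }
func (b BinaryExpr) Children() []Node { return []Node{b.Left, b.Right} }

// walk visits every node top-down; the upward direction works the same
// way via Parent pointers in a real AST.
func walk(n Node, visit func(Node)) {
	visit(n)
	for _, c := range n.Children() {
		walk(c, visit)
	}
}

func main() {
	tree := BinaryExpr{Left: Identifier{"a"}, Right: Identifier{"b"}}
	walk(tree, func(n Node) {
		switch v := n.(type) {
		case Identifier:
			fmt.Println("identifier:", v.Name)
		case BinaryExpr:
			fmt.Println("binary expression")
		}
	})
}
```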
Acknowledging some weak spots, Go's in-proc JS interop story is not as good as some of its alternatives. We have upcoming plans to mitigate this, and are committed to offering a performant and ergonomic JS API. We've been constrained in certain possible optimizations due to the current API model where consumers can access (or worse, modify) practically anything, and want to ensure that the new codebase keeps the door open for more freedom to change internal representations without having to worry about breaking all API users. Moving to a more intentional API design that also takes interop into account will let us move the ecosystem forward while still delivering these huge performance wins.
C# and TypeScript are Hejlsberg's children; C# is such an obvious pick that there must have been a monster problem with it that they didn't think could ever be fixed.
C# has all that stuff that the FAQ mentions about Go while also having an obvious political benefit. I'd hope the creator of said language who also made the decision not to use it would have an interesting opinion on the topic! I really hope we find out the real story.
As a C# developer I don't want to be offended but, like, I thought we were friends? What did we do wrong???
Transcript: "But I will say that I think Go definitely is much more low-level. I'd say it's the lowest level language we can get to and still have automatic garbage collection. It's the most native-first language we can get to and still have automatic GC. In contrast, C# is sort of bytecode-first, if you will. There are some ahead-of-time compilation options available, but they're not on all platforms and don't really have a decade or more of hardening. They weren't engineered that way to begin with. I think Go also has a little more expressiveness when it comes to data structure layout, inline structs, and so forth."
Sure, AOT is not as mature in C#, but is that reason enough to be a showstopper? It seems there are other reasons Anders doesn't want to address publicly. Maybe reasons as simple as "Go is 10 times easier to pick up than C#" and "language features don't matter when the project matters". Those would indeed hurt the image of C#, and Anders obviously doesn't want that.
But I don't see it as big drama.
For anyone who can't watch the video, he mentions a few things (summarizing briefly just the linked time code, it's worth a watch):
- Go being the lowest level language that still has garbage collection
- Inline structs and other data structure expressiveness features
- Existing JS code is in a C-like functions-plus-data-structures style and not an OOP style; this is easier to translate directly to Go, while C# would require OOPifying it.
It's a fascinating language, but it lacks a flagship product.
I feel the same way about Haxe. Someone created an amazing language, but it lacks a big enough community.
Realistically languages need 2 things for adoption. Momentum and ease of use. Rust has more momentum than ease, but arguably can solve problems higher level languages can't.
I'm half imagining a hackathon like format where teams are challenged to use niche languages. The foundations behind these languages can fund prizes.
And AFAIK Symmetry Investments is that dogfood startup.
What is this logic? "You worked on C# years ago so you must use C# for everything"?
"You must dictate C# to every team you lead forever, no matter what skills they have"?
"You must uphold a dogma that C# is the best language for everything, because you touched it last"?
Why aren't you using this logic to argue that they should use Delphi or TurboPascal because Anders Hejlsberg created those? Because there is no logic; the person who created hammers doesn't have to use hammers to solve every problem.
So it's not just that the lead architect of C# is involved in the TypeScript changes. It's also that this is under the same roof and the same sign hangs on the building outside for both languages.
If Ford made a car and powered it with a Chevy engine, wouldn't you be curious what was going on also?
Maybe top ten, behind MSSQL, PowerShell, Excel formulae, DAX, etc.
I do love F#, but its compiler is a rusty set of monkey bars. It's somehow single pass, meaning the type checker will struggle if you don't reorder certain expressions - but also dog slow, especially for `inline` definitions (which work more like templates or hygienic macros than .net generics, and are far more powerful.) File order matters, bafflingly! Newer .net features like spans and ref structs are missing with no clear path to implementation. Doing moderately clever things can cause the compiler to throw weird, opaque, internal errors. F# is built around immutability but there's no integration with the modern .net immutable collections.
It's clearly languishing and being kept alive by a skeleton crew, which is sad, because it deserves better, but I've used research prototypes less clunky than what ought to be a flagship.
Huh? They're already implemented! It took years and they've still got some rough edges, yes, but they've been implemented for a few years now.
Agreed with the rest, though. As much as I love working with F#, I've jumped ship.
Anders Hejlsberg hasn't been the lead architect of C# for like 13 years. Mads Torgersen is:
https://dotnetcore.show/episode-104-c-sharp-with-mads-torger... - "I got hired by Microsoft 17 years ago to help work on C#. First, I worked with Anders Hejlsberg, who’s sort of the legendary creator and first lead designer of C#. And then when he and I had a little side project with others to do TypeScript, he stayed over there. And I got to take over as lead designer C#. So for the last, I don’t know, nearly a decade, that’s been my job at Microsoft to, to take care of the evolution of the C# programming language"
Years later, "why aren't you using YOUR LANGUAGE, huh? What's the matter, you don't like YOUR LANGUAGE?" is pushy and weird; he's a person with a job, not a religious cult leader.
> "If Ford made a car and powered it with a Chevy engine, wouldn't you be curious what was going on also?"
Like these? https://www.slashgear.com/1642034/fords-powered-by-non-ford-...
It's also not what anyone said.
> It's best not to use quotation marks to make it look like you're quoting someone when you're not. <https://news.ycombinator.com/item?id=21643562>
If it's the latter, I think the pitch of TS remains the same — it's a better way of writing JS, not the best language for all contexts.
If the TS team is getting a 10x improvement moving from TS to Go, you might imagine you could save about 10x on your server cpu. Or that your backend would be 10x more responsive.
If you have dedicated teams for front end and back end anyhow, is a 10x slowdown really worth a shared codebase?
As you know full well, Delphi and Turbo Pascal don't have strong library ecosystems, don't have good support for non-Windows platforms, and don't have a large developer base to hire from, among other reasons. If Hejlsberg was asked why Delphi or Turbo Pascal weren't used, he might give one or more of those reasons. The question is why he didn't use C#, for which those reasons don't apply.
That said, I must have misstated my opinion if it seems like I didn't think they have a good reason. This is Anders Hejlsberg. The guy is a genius; he definitely has a good reason. They just didn't say what it is in this blog post (but did elsewhere in a podcast video linked in the HN thread).
It obviously does because the larger open source world are huge users of Typescript. This isn't some business-only Excel / PowerBI type product.
To put it another way, I think a lot of people would get quite pissed if tsc was going to be rewritten in C# because of the obvious headaches that's going to cause to users. Go is pretty much the perfect option from a user's point of view - it generates self-contained statically linked binaries.
And there would be logistical problems. With Go, you just need to distribute the executable, but with C#, you also need a .NET runtime, and on any platform that isn't Windows that almost certainly isn't already installed. And even if it is, you have to worry about whether the runtime is sufficiently up to date.
If they used c# there is a chance the community might fork typescript, or switch to something else, and that might not be a gamble MS would want to take just to get more exposure for c#.
By who?
Go executables do not.
TSC is installed in too many places for that burden to be introduced all of a sudden. It's the same reason Java has had a complicated acceptance history: it's fine in the places where it is pre-installed, but nowhere else.
Node/React/TypeScript developers do not want to install .NET all of a sudden. If that sounds like an overreaction, pretend they decided to write it in Java and ask if you think Node/React/TypeScript developers WANT to install Java.
C#: 945 kB Go: 2174 kB
Both are EXEs you just copy to the machine, no separate runtime needed, talks directly to the OS.
Is this the ultimate reason? Go is fast enough without being overly difficult. I'm humbly open to being wrong.
While I'm here, any reason Microsoft isn't sponsoring a solid open source game engine?
Even a bit of support for Godot's C# (help them get it working on web) would be great.
Even better would be a full C# engine with support for web assembly.
They did that. https://godotengine.org/article/introducing-csharp-godot/
At least some initial grant to get it started.
Getting C# working on the web would be amazing. It is already on the roadmap, but some sponsorship would help tremendously for sure.
It is a huge company. They can do more than one thing. C#/.NET certainly isn't dead, but I'm not sure they really care if you do use it like they once did. It's there if you find it useful. If not, that's cool too.
I think Microsoft can find the money if they wanted to.
Personally, I would like them to never touch the game dev side of the market.
I can see them doing this in the future, tbh. Given how large their Xbox gaming ecosystem is, this path makes a lot of sense since they can cut costs while giving their studios and indie developers more options.
Cool. Can you tell us a bit more about the technical process of porting the TS code over to Go? Are you using any kind of automation or translation?
Personally, I've found Copilot to be surprisingly effective at translating Python code over to structurally similar Go code.
Then when you find the correct version, you have to install both the x86 and x64 versions, because the first one you installed doesn't work.
yeh, great ecosystem
at least a Go binary runs 99.99999% of the time when you start it.
I don't think Go is a bad choice, though!
Memory management? Or a stricter type system?
What do you see as the future for use cases where the typescript compiler is embedded in other projects? (Eg. Deno, Jupyter kernels, etc.)
There’s some talk of an inter-process API, but only vague hand-waving here about the technical details. What’s the vision?
In TS7 will you be able to embed the compiler? Or is that not supported?
So, how exactly is my app/whatever supposed to spin up a parallel process in the OS and then talk to it over IPC? How do you shut it down when the 'host' process dies?
Not vaguely. Not a hand-wavy 'just launch it'. How exactly do you do it?
How do you do it in environments where that capability (spawning arbitrary processes) is limited? eg. mobile.
How do you package it so that you distribute it in parallel? Will it conflict with other applications that do the same thing?
When you look at, for example, a jupyter kernel, it is already a host process launched and managed by jupyter-lab or whatever, which talks via network chatter.
So now each kernel process has to manage another process, which it talks to via IPC?
...
Certainly, there are no obvious performance reasons to avoid IPC, but I think there are use cases where having the compiler embedded makes more sense.
Usually the very easiest way to do this is to launch the target as a subprocess and communicate over stdin/stdout. (Obviously, you can also negotiate things like shared memory buffers once you have a communication channel, but stdin/stdout is enough for a lot of stuff.)
> How do you shut it down when the 'host' process dies?
From the perspective of the parent process, you can go through some extra work to guarantee this if you want; every operating system has facilities for it. For example, in Linux, you can make use of PR_SET_PDEATHSIG. Actually using that facility properly is a bit trickier, but it does work.
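As a concrete (hedged) sketch of both points -- spawning over stdio and PR_SET_PDEATHSIG -- here's what the parent side can look like in Go on Linux; the binary name, flag, and wire protocol are all made up, not a real tsc interface:

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"syscall"
)

func main() {
	// "tsgo --serve-api" is a hypothetical command, not a real CLI.
	cmd := exec.Command("tsgo", "--serve-api")
	// Linux-only: the kernel delivers SIGKILL to the child if we die.
	cmd.SysProcAttr = &syscall.SysProcAttr{Pdeathsig: syscall.SIGKILL}
	stdin, _ := cmd.StdinPipe()
	stdout, _ := cmd.StdoutPipe()
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// Made-up line-delimited JSON protocol over the private stdio pipes.
	fmt.Fprintln(stdin, `{"method":"check","file":"main.ts"}`)
	reply, _ := bufio.NewReader(stdout).ReadString('\n')
	fmt.Print(reply)
	stdin.Close() // closing stdin tells the child to shut down cooperatively
	cmd.Wait()
}
```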
However, since the child process, in this case, is aware that it is a child process, the best way to go about it would be to handle it cooperatively. If you're communicating over stdin/stdout, the child process's stdin will close when the parent process dies. This is portable across Windows and UNIX-likes. The child process can then exit.
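And the child's side of the same arrangement, again as a minimal sketch with an invented protocol: serve requests line by line and exit when stdin reaches EOF, which is exactly what happens when the parent goes away.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() { // Scan returns false at EOF: parent exited or closed the pipe
		// Echo a made-up response per request line.
		fmt.Printf("{\"ok\":true,\"echo\":%q}\n", sc.Text())
	}
	// stdin is closed: flush any state, then exit cooperatively.
}
```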
> How do you do it in environments where that capability (spawning arbitrary processes) is limited? eg. mobile.
On Android, there is nothing special to do here as far as I know. You should be able to bundle and spawn a native process just fine. Go binaries are no exception.
On iOS, it is true that apps are not allowed to spawn child processes, as far as I am aware. On iOS you'd need a different strategy. If you still want a native code approach, though, it's more than doable. Since you're on iOS, you'll have some native code somewhere. You can compile Go code into a Clang-compatible static library archive, using -buildmode=c-archive. There's a bit more nuance to getting something that will link properly on iOS, but it is supported by Go itself (Go supports iOS and Android in the toolchain and via gomobile). Once you have something that can be linked into the process space, the old IPC approach would continue to work, with the semantic caveat that it's not technically interprocess anymore. This approach can also be used in any other situation where you're running native code, so long as you can link C libraries.
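For the -buildmode=c-archive route, a hedged sketch of what the exported surface can look like (the function is hypothetical, not a real typescript-go API); building with `go build -buildmode=c-archive -o libtsc.a` produces a static library plus a generated header you can link from Objective-C or Swift:

```go
package main

import "C"

// CheckSource is a hypothetical entry point; the //export directive below
// makes it visible to C callers when built with -buildmode=c-archive.

//export CheckSource
func CheckSource(src *C.char) *C.char {
	// Imagine calling into the compiler here. The caller owns (and must
	// free) the returned C string.
	return C.CString("checked: " + C.GoString(src))
}

// main is required by the build mode but never runs as a program.
func main() {}
```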
If you're in an even more restrictive situation, like, I dunno, Cloudflare Pages Functions, you can use a WASM bundle. It comes at a performance hit, but given that the Go port of the TypeScript compiler is already roughly 3.5x faster than the TypeScript implementation, it probably will not be a huge issue compared to today's performance.
> How do you package it so that you distribute it in parallel? Will it conflict with other applications that do the same thing?
There are no particular complexities with distributing Go binaries. You need to ship a binary for each architecture and OS combination you want to support, but Go has relatively straightforward cross-compiling, so this is usually very easy to do. (Rather unusually, it is even capable of cross-compiling to macOS and iOS from non-Apple platforms. Though I bet Zig can do this, too.) You just include the binary in your build. If you are using some bindings, I would expect the bindings to take care of this by default, making your resulting binaries "just work" as needed.
It will not conflict with other applications that do the same thing.
> When you look at, for example, a jupyter kernel, it is already a host process launched and managed by jupyter-lab or whatever, which talks via network chatter.
> So now each kernel process has to manage another process, which it talks to via IPC?
Yes, that's right: you would have to have another process for each existing process that needs its own compiler instance, if going with the IPC approach. However, unless we're talking about an obscene number of processes, this is probably not going to be much of an issue. If anything, keeping it out-of-process might help improve matters if it's currently doing things synchronously that could be asynchronous.
Of course, even though this isn't really much of an issue, you could still avoid it by going with another approach if it really was a huge problem. For example, assuming the respective Jupyter kernel already needs Node.JS in-process somehow, you could just as well have a version of tsc compiled into a Node-API module, and do everything in-process.
> Certainly, there are no obvious performance reasons to avoid IPC, but I think there are use cases where having the compiler embedded makes more sense.
Except for browsers and edge runtimes, it should be possible to make an embedded version of the compiler if it is necessary. I'm not sure if the TypeScript team will maintain such a version on their own; it remains to be seen exactly what approach they take for IPC.
I'm not a TypeScript Compiler developer, but I hope these answers are helpful in some way anyways.
> It will not conflict with other applications that do the same thing.
It is possible not to conflict with existing parallel deployments, but depending on your IPC mechanism, it is by no means assured when you're not forking and are instead launching an external process.
For example, it could by default bind a specific default port. This would work in the 'naive' situation where the client doesn't specify a port and no parallel instances are running. ...but if two instances are running, they'll both try to use the same port. Arbitrary applications can connect to the same port. Maybe you want to share a single compiler service instance between client apps in some cases?
Not conflicting is not a property of parallel binary deployment and communication via IPC by default.
IPC is, by definition, intended to be accessible by other processes.
Jupyter kernels for example are launched with a specified port and a secret by cli argument if I recall correctly.
However, you'd have to rely on that mechanism being built into the typescript compiler service.
...i.e. it's a bit complicated, right?
Worth it for the speedup? I mean, sure. Obviously there is a reason people don't embed postgres. ...but they don't try to ship a copy of it along side their apps either (usually).
I fail to see how starting another process under an OS like Linux or Windows can be conflicting. Don't share resources, and you're conflict-free.
> IPC is, by definition, intended to be accessible by other processes
Yes, but you can limit the visibility of the IPC channel to a specific process, in the form of stdin/stdout pipe between processes, which is not shared by any other processes. This is enough of a channel to coordinate creation of a more efficient channel, e.g. a shmem region for high-bandwidth communication, or a Unix domain socket (under Linux, you can open a UDS completely outside of the filesystem tree), etc.
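For instance, a hedged sketch of that bootstrap pattern in Go on Linux: the child listens on an abstract-namespace Unix socket (the leading '@' means no filesystem entry) and advertises the address over its private stdout pipe, so only the parent learns where to connect.

```go
package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// '@' prefix = Linux abstract socket namespace: nothing on the filesystem.
	addr := fmt.Sprintf("@tsc-ipc-%d", os.Getpid())
	ln, err := net.Listen("unix", addr)
	if err != nil {
		panic(err)
	}
	fmt.Println(addr) // read only by the parent, via the inherited stdout pipe
	conn, err := ln.Accept()
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	// ... speak the higher-bandwidth protocol over conn ...
}
```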
A Unix shell is a thing that spawns and communicates with running processes all day long, and I'm yet to hear about any conflicts arising from its normal use.
You can get a conflicting resource in a shell by typing 'npm start' twice in two different shells, and it'll fail with 'port in use'.
My point is that you can do non-conflicting IPC, but by default IPC is conflicting, because it is intended to be.
You cannot bind the same port, semaphore, whatever if someone else is using it. That's the definition of having addressable IPC.
I don't think arguing otherwise is defensible or reasonable.
Having a concern that a network service might bind the same port as another copy of the same network service deployed on the same target by another host is an entirely reasonable concern.
I think we're getting off into the woods here with an arbitrary 'die on this hill' point about semantics which I really don't care about.
TLDR: If you ship an IPC binary, you have to pay attention to these concerns. Pretending otherwise means you're not doing it properly.
It's not an idle concern; it's a real concern that real actual application developers have to worry about, in real world situations.
I've had to worry about it.
I think it's not unfair to think it's going to be more problematic than the current, very easy, embedded story, and it is a concern that simply does not exist when you embed a library instead of communicating using IPC.
Sure, some IPC approaches can run into issues, such as using TCP connections over loopback. However, I'm describing an approach that should never conflict since the resources that are shared are inherited directly, and since the binary would be embedded in your application bundle and not shared with other programs on the system. A similar example would be language servers which often work this way: no need to worry about conflicts between different instances of language servers, different language servers, instances of different versions of the same language server, etc.
There's also some precedent for this approach: as far as I understand it, it's what the Go-based ESBuild tool does[1], which is also popular in the Node.JS ecosystem (it is used by Vite).
> For example, it could by default bind a specific default port. This would work in the 'naive' situation where the client doesn't specify a port and no parallel instances are running. ...but if two instances are running, they'll both try to use the same port. Arbitrary applications can connect to the same port. Maybe you want to share a single compiler service instance between client apps in some cases?
> Not conflicting is not a property of parallel binary deployment and communication via IPC by default.
> IPC is, by definition, intended to be accessible by other processes.
Yes, although the set of processes which the IPC mechanism is designed to be accessible by can be bound to just one process, and there are cross-platform mechanisms to achieve this on popular desktop OSes. I cannot speak to why one would choose TCP over stdin/stdout, but I don't expect that tsc will pick a method of IPC that is flawed in this way, since it would not follow precedent anyway. (e.g. tsserver already uses stdio[2].)
> Jupyter kernels for example are launched with a specified port and a secret by cli argument if I recall correctly.
> However, you'd have to rely on that mechanism being built into the typescript compiler service.
> ...i.e. it's a bit complicated, right?
> Worth it for the speedup? I mean, sure. Obviously there is a reason people don't embed postgres. ...but they don't try to ship a copy of it along side their apps either (usually).
Well, I wouldn't honestly go as far as to say it's complicated. There's a ton of precedent for how to solve this issue without any conflict. I cannot speak to why Jupyter kernels use TCP for IPC instead of stdio; I'm very sure they have reasons why it makes more sense in their case. For example, in some use cases it could be faster, or perhaps just simpler, to have multiple channels of communication, and doing this with multiple pipes to a subprocess is a little more complicated and less portable than stdio. Same for shared memory: you can always have a protocol to negotiate shared memory across some serial IPC mechanism, but you'll almost always need a couple different shared memory backends, and it adds some complexity. So that's one potential reason.
(edit: Another potential reason to use TCP sockets is, of course, if your "IPC" is going across the network sometimes. Maybe this is of interest for Jupyter, I don't know!)
That said, in this case, I think it's a non-issue. ESBuild and tsserver demonstrate sufficiently that communication over stdio is sufficient for these kinds of use cases.
And of course, even if the Jupyter kernel itself has to speak the TCP IPC protocols used by Jupyter, it can still subprocess a theoretical tsc and use stdio-based IPC. Not much complexity to speak of.
Also, unrelated, but it's funny you should say that about postgres, because there have actually been several different projects that deliver an "embeddable" subset of postgres. Of course, the reasoning for why you would not necessarily want to embed a database engine is quite a lot different from this, since in this case IPC is merely an implementation detail, whereas in the database case the network protocol and centralized servers are essentially the entire point of the whole thing.
[1]: https://github.com/evanw/esbuild/blob/main/cmd/esbuild/stdio...
[2]: https://github.com/microsoft/TypeScript/wiki/Standalone-Serv...
With a TSC in Go, it's no longer true. Previously you only had to figure out how to run JS, now you have to figure out both how to manage a native process _and_ run the JS output.
This obviously matters less for situations where you have a clear separation between the build stage and the runtime stage. Most people complaining here seem to be talking about environments where compilation is tightly integrated with the execution of the compiled JS.
Porting to Go was the right decision, but part of me would've liked to see a different approach to solve the performance issue. Here I'm not thinking about the practicality, but simply about how cool it would've been if performance had instead been improved via:
- porting to OCaml. I contributed to Flow once upon a time, and a version of TypeScript in OCaml would've been huge in unifying the efforts here.
- porting to Rust. Having "official" TypeScript crates in rust would be huge for the Rust javascript-tooling ecosystem.
- a new runtime (or compiler!). I'm thinking here an optional, stricter version of TypeScript that forbids all the dynamic behaviours that make JavaScript hard to optimize. I'm also imagining an interpreter or compiler that can then use this stricter TypeScript to run faster or produce an efficient native binary, skipping JavaScript altogether and using types for optimization.
This last option would've been especially exciting since it is my opinion that Flow was hindered by the lack of dogfooding, at least when I was somewhat involved with the project. I hope this doesn't happen in the TypeScript project.
None of these are questions, just wanted to share these fanciful perspectives. I do agree Go sounds like the right choice, and in any case I'm excited about the improvement in performance and memory usage. It really is the biggest gripe I have with TypeScript right now!
Rust and OCaml are _maybe_ prettier to look at, but for the average TypeScript developer Go is a much more understandable target IMO.
Lifetimes and ownership are not trivial topics to grasp, and they add overhead (as discussed here: https://github.com/microsoft/typescript-go/discussions/411) that not all contributors might grasp immediately.
(FWIW, it must have been a very well thought out rationale.)
Edit: watched the relevant clip from the GH discussion - makes sense. Maybe push NativeAOT to be as good?
I am (positively) surprised Hejlsberg has not used this opportunity to push C#: a rarity in the software world where people never let go of their darlings. :)
And lightly edited transcript here: https://github.com/microsoft/typescript-go/discussions/411#d...
In a game engine, you probably aren't recreating every game object from frame to frame. But in a compiler, you're creating new objects for every file you parse. That's a huge amount of work for the GC.
Basically I'd be interested to know what the bottlenecks in tsc are, whether there's much low-hanging fruit, and if not why not.
So this might be a very different performance profile.
*edit* I had initially written "single-pass", but in the context of a compiler, that's ambiguous.
In a situation like a game engine I think 1.5x is reasonable, but TS has a huge amount of polymorphic data reading that defeats a lot of the optimizations in JS engines that get you to monomorphic property access speeds. If JS engines were better at monomorphizing access to common subtypes across different map shapes maybe it'd be closer, but no engine has implemented that or seems to have much appetite for doing so.
Also for command-line tools, the JIT warmup time can be pretty significant, adding a lot to overall command-to-result latency (and in some cases even wiping out the JIT performance entirely!)
I really wish JS VMs would invest in this. The DOM is full of large inheritance hierarchies, with lots of subtypes, so a lot of DOM code is megamorphic. You can do tricks like tearing off methods from Element to use as functions, instead of as virtual methods as usual, but that's quite a pain.
None of these things say "this is a good way to build a large compiler suite that we're building for performance".
A few things mentioned in an interview:
- Cannot build native binaries from TypeScript
- Cannot as easily take advantage of concurrency in TypeScript
- Writing fast TypeScript requires you to write things in a way that isn't 'normal' idiomatic TypeScript. Easier to onboard new people onto a more idiomatic codebase.
- C++ with thousands of tiny objects and virtual function calls?
- JavaScript where data is stored in large Int32Arrays and operated on like a VM?
If you know anything about how JavaScript works, you know there is a lot of costly and challenging resource management.
(disclaimer: I am a biased Go fan)
In JavaScript, you can't even put 8M keys in a hashmap; inserts take > 1 second per element.
Just going from ESLint to Biome is more than a 10x improvement... it's not just 1.5x because it's not just the runtime logic at play for build tools.
JS is 10x-100x slower than native languages (C++, Go, Rust, etc) if you write the code normally (i.e. don't go down the road of uglifying your JS code to the point where it's dramatically less pleasant to work with than the C++ code you're comparing to).
It's kind of annoying how even someone like Hejlsberg is throwing around words like "native" in such an ambiguous, sloppy, and prone-to-be-misleading way on a project like this.
"C++" isn't native. The object code that it gets compiled to, large parts of which are in the machine's native language, is.
Likewise "TypeScript" isn't non-native in any way that doesn't apply to any other language. The fact that tsc emits JS instead of the machine's native language is what makes TypeScript programs (like tsc itself) comparatively slow.
It's the compilers that are the important here, not the languages. (The fact that the TypeScript team was committed to making sure the typescript-go compiler is the same (nearly line-for-line equivalent) to the production version of the TypeScript compiler written in itself really highlights this.)
The question comes up and he quickly glosses over it, but by the sound of it he isn't impressed with the performance or support of AOT compiled C# on all targeted platforms.
[19:14] why not C#?
Dimitri: Was C# considered?
Anders: It was, but I will say that I think Go definitely is -- it's, I'd say, the lowest-level language we can get to and still have automatic garbage collection. It's the most native-first language we can get to and still have automatic GC. In C#, it's sort of bytecode first, if you will; there is some ahead-of-time compilation available, but it's not on all platforms and it doesn't have a decade or more of hardening. It was not geared that way to begin with. Additionally, I think Go has a little more expressiveness when it comes to data structure layout, inline structs, and so forth. For us, one additional thing is that our JavaScript codebase is written in a highly functional style -- we use very few classes; in fact, the core compiler doesn't use classes at all -- and that is actually a characteristic of Go as well. Go is based on functions and data structures, whereas C# is heavily OOP-oriented, and we would have had to switch to an OOP paradigm to move to C#. That transition would have involved more friction than switching to Go. Ultimately, that was the path of least resistance for us.
Dimitri: Great -- I mean, I have questions about that. I've struggled in the past a lot with Go in functional programming, but I'm glad to hear you say that those aren't struggles for you. That was one of my questions.
Anders: When I say functional programming here, I mean sort of functional in the plain sense that we're dealing with functions and data structures as opposed to objects. I'm not talking about pattern matching, higher-kinded types, and monads.
[12:34] why not Rust?
Anders: When you have a product that has been in use for more than a decade, with millions of programmers and God knows how many millions of lines of code out there, you are going to be faced with the longest tail of incompatibilities you could imagine. So, from the get-go, we knew that the only way this was going to be meaningful was if we ported the existing code base. The existing code base makes certain assumptions -- specifically, it assumes that there is automatic garbage collection -- and that pretty much limited our choices. That heavily ruled out Rust. I mean, in Rust you have memory management, but it's not automatic; you can get reference counting or whatever, but then, in addition to that, there's the borrow checker and the rather stringent constraints it puts on you around ownership of data structures. In particular, it effectively outlaws cyclic data structures, and all of our data structures are heavily cyclic.
(https://www.reddit.com/r/golang/comments/1j8shzb/microsoft_r...)
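To illustrate the cyclic-structures point (a toy example, not the actual compiler's node types): parent and child point at each other, which a tracing GC reclaims without ceremony, but which naive reference counting leaks and the borrow checker fights you on.

```go
package main

import "fmt"

// Toy AST node -- not the real typescript-go type.
type Node struct {
	Kind     string
	Parent   *Node   // child -> parent edge...
	Children []*Node // ...and parent -> child edges: a cycle
}

func addChild(parent *Node, kind string) *Node {
	child := &Node{Kind: kind, Parent: parent}
	parent.Children = append(parent.Children, child)
	return child
}

func main() {
	root := &Node{Kind: "SourceFile"}
	id := addChild(root, "Identifier")
	fmt.Println(id.Parent.Kind) // upward walk through the cycle just works
}
```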
They could have used static classes in C#.
- C# Ahead of Time compiler doesn't target all the platforms they want.
- C# Ahead of Time compiler hasn't been stressed in production as many years as Go.
- The core TypeScript compiler doesn't use any classes; Go is functions and data structures, whereas C# is heavily OOP, so they would have to switch paradigms to use C#.
- Go has better control of low level memory layouts.
- Go was ultimately the path of least resistance.
https://github.com/microsoft/typescript-go/discussions/411#d...
For a daemon like an LSP I reckon C# would've worked.
Graph for the differences in Runtime, Runtime Trimmed, and AOT .NET.
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/n...
Where do you think Go gets those chubby statically linked executables from? So chubby that people have to apply UPX on top.
I doubt the ability to cross-compile TSC would have been a major factor. These artifacts are always produced on dedicated platforms via separate build stages before publishing and sign-off. Indeed, Go is better at native cross-compilation, whereas .NET NativeAOT can only do cross-arch, and limited cross-OS by tapping into the Zig toolchain.
I am sure it is good enough that the team decided to choose Go either way OR it is not important for this project.
> I doubt the ability to cross-compile TSC would have been a major factor.
I never said it was a major factor (I even said "I don't think it would be a deal breaker"), but it is a factor nonetheless. It definitely helps a lot during cross-platform debugging, since you don't need to set up a whole toolchain just to test a bug on another platform; instead you can simply build a binary on your development machine and send it to the other machine.
But the only reason I asked this is because I was curious really, no need to be so defensive.
He clearly knows all this, so the obvious inference is that the decision isn't really about features. The most likely problem is a lack of confidence in the .NET team, or some political problems/bad blood inside Microsoft. Perhaps he's tried to use it and been frustrated by bugs; the comment about "battle hardened" feels like where the actual rationale is hiding. We're not getting the full story here, that's clear enough.
I'm honestly surprised Microsoft's policies allowed this. Normally companies have rules that require dogfooding for exactly this reason. Such a project is not terribly urgent, and it has political heft within Microsoft. They could presumably have got the .NET team to fix bugs or make optimizations they need, at least a lot more easily than getting the Go team to do it. Yet they chose not to. Who would have any confidence in adoption of .NET for performance-sensitive programs now? Even the father of .NET doesn't want to use it. Anyone who wants to challenge a decision to adopt it can just point at Microsoft's own actions as evidence.
First he mentions the no-classes thing. It is hard to see how that would matter even for automated porting because, like you said, he could just use static classes, and even use a static using statement on the calling side.
Another of his reasons was that Go is good at processing complex graphs, but it is hard to imagine how Go would be better at that than C#. What language feature that Go has, but C# lacks, supports that? I don't think anyone will be able to demonstrate one. This distinction makes sense for Go vs Rust, but not for Go vs C#.
As for the platform / AOT argument, I don't know as much about that, but I thought it was supposed to be possible now. If it isn't, it seems like it would be better for Microsoft to beef that up than to allow a vote of no confidence to be cast like this.
It is especially jarring given that they are a first-party customer who would have no trouble in getting necessary platforms supported or projects expedited (like NativeAOT-LLVM-WASM) in .NET. And the statements of Anders Hejlsberg himself which contradict the facts about .NET as a platform make this even more unfortunate.
There's an interesting contrast here with Java, where javac was ported to Java from C++ very early on in its lifecycle. And the Java AOT compiler (native image) is not only fully written in Java itself, everything from optimizations to code generation, but even the embedded runtime is written in Java too. Whereas in the .NET world Roslyn took quite a long time to come along, it wasn't until .NET 6, and of course MS rejected it from Windows more or less entirely for the same sorts of rationales as what Anders provides here.
It was introduced back then with .NET Framework 4.6 (C# 6) - a loong time ago (July 2015). The OSS .NET has started with Roslyn from the very beginning.
> And the Java AOT compiler (native image) is not only fully written in Java itself, everything from optimizations to code generation, but even the embedded runtime is written in Java too.
NativeAOT uses the same architecture. There is no C++ besides GC and pre-existing compiler back-end (both ILC and RyuJIT drive it during compilation process). Much like GraalVM's Native Image, the VM/host, type system facilities, virtual/interface dispatch and everything else it could possibly need is implemented in C# including the linker (reachability analysis/trimming, kind of like jlink) and optimizations (exact devirtualization, cctor interpreter, etc.).
In the end, it is the TypeScript team members who worked on this port, not Anders Hejlsberg himself, which is my understanding. So we need to take this into account when judging what is being communicated.
no? https://github.com/microsoft/typescript-go/graphs/contributo...
Cue rust devotees in 3, 2, ..
If you are a rust devotee, you can use https://github.com/FractalFir/rustc_codegen_clr to compile your rust code to the same .NET runtime as C#. The project is still in the works but support is said to be about 95% complete.
Are there any insights on the platform decision?
- There is an esbuild process running in the background.
- If I look at the JavaScript returned to the browser, it is transpiled without any types present.
So even though the URLs in Vite dev mode look like they're pointing to "raw" TypeScript files, they're actually transpiled JavaScript, just not bundled.
I could be incorrect, of course, but it sure seems to me like Vite is using ESBuild on the Node.JS side and not tsc on the web browser side.
This is a big concern to me. Could you expand on what work is left to do for the native implementation of tsc? In particular, can you make an argument for why that last bit of work won't reduce these 10x figures we're seeing? I'm worried the marketing got ahead of the engineering.
One thing I'm curious about: What about updating the original Typescript-based compiler to target WASM and/or native code, without needing to run in a Javascript VM?
Was that considered? What would (at a high level) the obstacles be to achieving similar performance to Golang?
Edit: Clarified to show that I indicate updating the original compiler.
JavaScript, like other dynamic languages, runs well with a JIT because the runtime can optimize for hotspots and common patterns (e.g. this method's first argument is generally an object with this shape, so write a fast path for that case). In theory you could write an AOT compiler for TypeScript that made some of those inferences at compile time based on type definitions, but
(a) nobody's done that
(b) it still wouldn't be as fast as native, or much faster than JIT
(c) it would be limited - any optimizations would die as soon as you used an inherently dynamic method like JSON.parse()
That is a good question.
Will the questions and answers be posted anywhere outside of Discord after it's concluded?
One of the nice advantages of js is that it can run so many places. Will TypeScript still be able to enjoy that legacy going forward, or is native only what we should expect in 7+?
Simplicity is a feature, not a bug. Overly expressive languages become nightmares to work with and reason about (see: C++ templates)
Go's compilation times are also extremely fast compared to Rust, which is a non-negligible cost when iterating on large projects.
C and Rust both have predictable memory behaviour, Go does not.
(I.e., as opposed to reference counting, where, if you have cyclic loops, you need to manually go in and "break" the loop so memory gets reclaimed.)
It's actually pretty easy to do something like this in C, just using something like an arena allocator, or, honestly, leaking memory. I actually wrote a little allocator yesterday that just dumps memory into a linked list; it's not very complicated: http://github.com/danieltuveson/dsalloc/
You allocate wherever you want, and when you're done with the big messy memory graph, you throw it all out at once.
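For anyone who hasn't seen the pattern, here's a toy bump-arena sketch (in Go for consistency with the rest of the thread; the linked dsalloc is C, and this is just the shape of the idea, not production code):

```go
package main

import "fmt"

// Arena hands out chunks of one backing buffer; freeing is O(1) for
// everything at once.
type Arena struct {
	buf []byte
	off int
}

func (a *Arena) Alloc(n int) []byte {
	if a.off+n > len(a.buf) {
		a.buf = append(a.buf[:a.off], make([]byte, n)...) // grow as needed
	}
	p := a.buf[a.off : a.off+n]
	a.off += n
	return p
}

// Reset "throws it all out at once": every prior allocation dies together.
func (a *Arena) Reset() { a.off = 0 }

func main() {
	var a Arena
	b := a.Alloc(12)
	copy(b, "hello, arena")
	fmt.Println(string(b))
	a.Reset()
}
```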
There are obviously a lot of other reasons to choose Go over C, though (easier to learn, nicer tooling, memory safety, etc.).
Really interesting news, and uniquely dismaying to me as someone who is fighting tooth and claw to keep JS language tooling in the JS ecosystem.
My question has to do with Ryan's statement:
> We also considered hybrid approaches where certain components could be written in a native language, while keeping core typechecking algorithms in JavaScript
I've experimented deeply in this area (maybe 15k hours invested in BABLR so far) and what I've found is that it's richly rewarding. Javascript is fast enough for what is needed, and its ability to cache on immutable data can make it lightning fast not through doing more work faster, but by making it possible to do less work. In other words, change the complexity class not the constant factor.
Is this a direction you investigated? What made you decide to try to move sideways instead of forwards?
Have you considered the man-years and energy you're making everyone waste? Just as an example, I wonder what the carbon footprint of ESLint has been over the years...
Now, it pales in comparison to Python, but still...
TS currently wastes tons of resources (most especially peoples' time) by not being able to share its data and infrastructure with other tools and ecosystems, but while there would be much bigger wins from tackling the systemic problem, you wouldn't be able to say something as glib as "TS is 10x faster". Only the work that can be distilled to a metric is done now, because that's how to get a promotion when you work for a company like Microsoft
Thank you Typescript team for chasing those promotions!
From a performance perspective, I'd expect C++ and Rust to be much easier targets too, since I've seen quite a few industrial Go services be rewritten in C++/Rust after they fail to meet runtime performance / operability targets.
Wasn't there a recent study from Google that came to the same conclusion? (They see improved productivity for Go with junior programmers that don't understand static typing, but then they can never actually stabilize the resulting codebase.)
One trade off is if the code for TS is no longer written in TS, that means the core team won’t be dogfooding TS day in and day out anymore, which might hurt devx in the long run. This is one of the failure modes that hurt Flow (written in OCaml), IMO. Curious how the team is thinking about this.
Does that mean more "support rotations" for TS compiler engineers on GitHub? Are there full-stack TS apps the TS team owns whose ownership can be spread around more? Will the TS team do more rotations onto other teams at MSFT?
Second, JavaScript already executes quickly. Aside from arithmetic operations, it has now reached performance parity with Java, and highly optimized JavaScript (typed arrays and an understanding of data access from arrays and objects in memory) can come within 1.5x of C++'s execution speed. At this point, all the slowness of JavaScript is related to things other than code execution, such as garbage collection, unnecessary framework code bloat, and poorly written code.
That being said, it isn't realistic to expect significantly faster execution times from replacing JavaScript with a WASM runtime. This is more true after considering that many performance problems with JavaScript in the wild are human problems more than technology problems.
Third, WASM has nothing to do with JavaScript, according to its originators and maintainers. WASM was never created to compete with, replace, modify, or influence JavaScript. WASM was created as a language-ubiquitous Flash replacement in a sandbox. Since WASM executes in an agnostic sandbox, the cost of switching is high: an existing JS runtime is already available, while a WASM runtime is more akin to installing a desktop application on first run.
The rest of the 10x comes from multi-threading, which wasn't possible to do in a simple way in the JS compiler (efficient multithreading while writing idiomatic code is hard in JS).
JavaScript is very fast for single-threaded programs with monomorphic functions, but in the TypeScript compiler's case, the polymorphic functions and opportunity for parallelization mean that Go is substantially faster while keeping the same overall program structure.
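A minimal sketch of that structure (ParseFile and SourceFile are stand-ins, not the typescript-go API): one goroutine per file, after which the resulting ASTs are shared freely in memory, which is the part JS workers can't do cheaply.

```go
package main

import (
	"fmt"
	"sync"
)

type SourceFile struct{ Name string } // imagine an AST hanging off this

func ParseFile(name string) *SourceFile { return &SourceFile{Name: name} }

// parseAll parses every file concurrently; the results land in shared
// memory with no serialization step.
func parseAll(names []string) []*SourceFile {
	asts := make([]*SourceFile, len(names))
	var wg sync.WaitGroup
	for i, name := range names {
		wg.Add(1)
		go func(i int, name string) {
			defer wg.Done()
			asts[i] = ParseFile(name) // files parse independently
		}(i, name)
	}
	wg.Wait()
	return asts // the checker can now read these from any goroutine
}

func main() {
	for _, f := range parseAll([]string{"a.ts", "b.ts"}) {
		fmt.Println("parsed", f.Name)
	}
}
```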
What I do know is that some people complain about long compile times in their code that can last up to 10 minutes. I had a personal application that was greater than 60k lines of code and the tsc compiler would compile it in about 13 seconds on my super old computer. SWC would compile it in about 2.5 seconds. This tells me the far greater opportunity for performance improvement is not in modifying the compiler but in modifying the application instance.
WTF.
As a result the amount of libraries that ship flow types has absolutely dwindled over the years, and now typescript has completely taken over.
Yet projects inevitably get to the stage where a more native representation wins out. I mean, I can't think of a time a high profile project written in a lower level representation got ported to a higher level language.
It makes me think I should be starting any project I have in the lowest level representation that allows me some ergonomics. Maybe more reason to lean into Zig? I don't mean for places where something like Rust would be appropriate. I mean for anything I would consider using a "good enough" scripting language.
It honestly has me questioning my default assumption to use JS runtimes on the server (e.g. Node, deno, bun). I mean, the benefit of using the same code on the server/client has rarely if ever been a significant contributor to project maintainability for me. And it isn't that hard these days to spin up a web server with simple routing, database connectivity, etc. in pretty much any language including Zig or Go. And with LLMs and language servers, there is decreasing utility in familiarity with a language to be productive.
It feels like the advantages of scripting languages are being eroded away. If I am planning a career "vibe coding" or prompt engineering my way into the future, I wonder how reasonable it would be to assume I'll be doing it to generate lower level code rather than scripts.
Prisma is currently being rewritten from Rust to TypeScript: https://www.prisma.io/blog/rust-to-typescript-update-boostin...
> Yet projects inevitably get to the stage where a more native representation wins out.
I would be careful about extrapolating the performance gains achieved by the Go TypeScript port to non-compiler use cases. A compiler is perhaps the worst use case for a language like JS, because it is both (as Anders Hejlsberg refers to it) an "embarrassingly parallel task" (because each source file can be parsed independently) and also requires the results of the parsing step to be aggregated and shared across multiple threads (which requires shared-memory multithreading of AST objects). Over half of the performance gains can be attributed to being able to spin up a separate goroutine to parse each source file. Anders explains it perfectly here: https://www.youtube.com/watch?v=ZlGza4oIleY&t=2027s
We might eventually get shared memory multithreading (beyond Array Buffers) in JS via the Structs proposal [1], but that remains to be seen.
[1] https://github.com/tc39/proposal-structs?tab=readme-ov-file
As for the "compilers are special" reasoning, I don't ascribe to it. I suppose because it implies the opposite: something (other than a compiler) is especially suited to run well in a scripting language. But the former doesn't imply the later in reality and so the case should be made independently. The Prisma case is one: you are already dealing with JavaScript objects so it is wise to stay in JavaScript. The old cases I would choose the scripting language (familiarity, speed of adding new features, ability to hire a team quickly) seem to be eroding in the face of LLMs.
WASM is used to generate the query plan, but query execution now happens entirely within TypeScript, whereas under the previous architecture both steps were handled by Rust. So in a very literal sense some of the Rust code is being rewritten in TypeScript.
> Basically, if the majority of your application is already in JavaScript and expects primarily to interact with other code written in JavaScript, it usually doesn't make sense to serialize your data, pass it to another runtime for some processing, then pass the result back.
My point was simply to refute the assertion that once software is written in a low level language, it will never be converted to a higher level language, as if low level languages are necessarily the terminal state for all software, which is what your original comment seemed to be suggesting. This feels like a bit of a "No true Scotsman" argument: https://en.wikipedia.org/wiki/No_true_Scotsman
> As for the "compilers are special" reasoning, I don't subscribe to it.
Compilers (and more specifically lexers and parsers) are special in the sense that they're incredibly well suited for languages with shared memory multithreading. Not every workload fits that profile.
> The old cases where I would choose the scripting language (familiarity, speed of adding new features, ability to hire a team quickly) seem to be eroding in the face of LLMs.
I'm not an AI pessimist, but I'm also not an AI maximalist who is convinced that AI will completely eliminate the need for human code authoring and review, and as long as humans are required to write and review code, then those benefits still apply. In fact, one of the stated reasons for the Prisma rewrite was "skillset barriers". "Contributing to the query engine requires a combination of Rust and TypeScript proficiency, reducing the opportunity for community involvement." [1]
[1] https://www.prisma.io/blog/from-rust-to-typescript-a-new-cha...
That is why I am saying your evidence is a red herring. It is a case where a reasonable decision was made to rewrite in JavaScript/TypeScript but it has nothing to do with the merits of the language and everything to do with the environment that the entire system is running in. They even state the Rust code is fast (and undoubtedly faster than the JS version), just not fast enough to justify the IPC cost.
And it in no way applies to the point I am making, where I explicitly question "starting a new project" for example "my default assumption to use JS runtimes on the server". It's closer to a "Well, actually ..." than an attempt to clarify or provide a reasoned response.
The world is changing before our eyes. The coding LLMs we have already are good but the ones in the pipeline are better. The ones coming next year are likely to be even better. It is time to revisit our long held opinions. And in the case of "reads data from a OS socket/file-descriptor and writes data to a OS socket/file-descriptor", which is the case for a significant number of applications including web servers, I'm starting to doubt that choosing a scripting language for that task, as I once advocated, is a good plan given what I am seeing.
1. As products mature, they may find useful scenarios involving runtime environments that don’t necessarily match the ones that were in mind back when the foundation was laid. If relevant parts are rewritten in a lower-level language like C or Rust, it becomes possible to reuse them across environments (in embedded land, in Web via WASM, etc.) without duplicate implementations while mostly preserving or even improving performance and unlocking new use cases and interesting integrations.
2. As products mature, they may find use cases that have drastically different performance requirements. TypeScript was not used for truly massive codebases, until it was, and then performance became a big issue.
Starting a product trying to get all of the above from the get go is rarely a good idea: a product that rots and has little adoption due to feature creep and lack of focus (with resulting bugs and/or slow progress) doesn’t stand a chance against a product that runs slower and in fewer environments but, crucially, 1) is released, 2) makes sound design decisions, and 3) functions sufficiently well for the purposes of its audience.
Whether LLMs are involved or not makes no meaningful difference: no matter how good your autocomplete is, other things equal, the second instance still wins over the first, because it takes less time to reach the usefulness threshold and start gaining adoption. (And if you are making a religious argument about omniscient entities coming any year now, for which there is no meaningful difference between those two cases and which can instantly develop a bug-free product with infinite flexibility and perfect performance at whatever level of abstraction is required, then you should double-check whether, if they do arrive, anyone would still be using them for this purpose. In a world where I, a hypothetical end user, can get X instantly conjured for me out of thin air by a genie, you, a hypothetical software developer, had better have that genie conjure you some money lest your family go hungry.)
Of course, LLMs may stay as "autocomplete" forever. Or for decades. But my intuition is telling me that in the next 2-3 years they are going to increase in capability, especially for coding, at a pace greater than the last 2 years. The evidence that I have (by actually using them) seems to point in that direction.
I'm perfectly capable of writing programs in Perl, Python, JavaScript, C++, PHP, Java. Each of those languages (and more actually) I have used professionally in the past. I am confident I could write a perfectly good app in Go, Rust, Elixir, C, Ruby, Swift, Scala, etc.
If you asked me 6 months ago "what would you choose to write a basic CRUD web app" I probably would have said TypeScript. What I am questioning now is: why? What would lead me to choose TypeScript? Do the reasons I would have chosen TypeScript continue to make sense today?
There are no genies here, only questioning of assumptions. And my new assumptions include the assumption that any coding I would do will involve a code assisting LLM. That opens up new possibilities for me. Given LLM assistance, why wouldn't I write my web app layer in Rust or Zig?
Your assumptions about the present and near future will guide your own decisions. If you don't share the same intuitions you will come to different conclusions.
Making technical decisions based on hypothetical technologies that may solve your problems in "a year or so" is a gamble.
> And in the case of "reads data from a OS socket/file-descriptor and writes data to a OS socket/file-descriptor", which is the case for a significant number of applications including web servers, I'm starting to doubt that choosing a scripting language for that task, as I once advocated, is a good plan given what I am seeing.
Arguably Go is a scripting language designed for exactly that purpose.
I would not call Go a scripting language. Go programs are statically linked single binaries, not a textual representation that is loaded into an interpreter or VM. It has more in common with C than Bash. But to make sure we are clear (in case you want to dig in on calling Go a scripting language) I am talking about dynamic programming languages like Python, Ruby, JavaScript, PHP, Perl, etc. which generally do not compile to static binaries and instead load text files into an interpreter/VM. These dynamic scripted languages tend to have performance below static binaries (like Go, Rust, C/C++) and usually below byte code interpreted languages (like C# and Java).
First of all, I would argue that software rewrites are a bad proxy metric for language quality in general. Language rewrites don't measure languages purely on a qualitative scale, but rather on a scale of how likely they are to be misused in the wrong problem domain.
Low level languages tend to have a higher barrier to entry, which as a result means they're less likely to be chosen on a whim during the first iteration of a project. This phenomenon is exhibited not just at the macroscopic level of language choice, but often when determining which data structures and techniques to use within a specific language. I've very seldom found myself accidentally reaching for a Uint8Array or a WeakRef in JS when a normal array or reference would suffice, and then having to rewrite my code, not because those solutions are superior, but because they're so much less ergonomic that I'm only likely to use them when I'm relatively certain they're required.
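To make that ergonomics gap concrete, a tiny illustration (my example):

    // Plain reference: store it, read it, done.
    const cache = new Map<string, object>();
    cache.set("k", { big: "object" });
    const v1 = cache.get("k");

    // WeakRef: every read must handle the chance that the target has
    // already been garbage collected since it was stored.
    const weakCache = new Map<string, WeakRef<object>>();
    weakCache.set("k", new WeakRef({ big: "object" }));
    const v2 = weakCache.get("k")?.deref(); // object | undefined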
This results in obvious selection bias. If you were to survey JS developers and ask how often they've rewritten a normal reference in favor of a WeakRef vs the opposite migration, the results would be skewed because the cost of dereferencing WeakRefs is high enough that you're unlikely to use them hastily. The same is true to a certain extent in regards to language choice. Developers are less likely to spend time appeasing Rust's borrow checker when PHP/Ruby/JS would suffice, so if a scripting language is the best choice for the problem at hand, they're less likely to get it wrong during the first iteration and have to suffer through a massive rewrite (and then post about it on HN). I've seen plenty of examples of competent software developers saying they'd choose a scripting language in lieu of Go/Rust/Zig. Here's the founder of Hashicorp (who built his company on Go, and who's currently building a terminal in Zig), saying he'd choose PHP or Rails for a web server in 2025: https://www.youtube.com/watch?v=YQnz7L6x068&t=1821s
That is not my intention. Perhaps you are reading absolutes and chasing after black and white statements. When I say "it makes me think I should ..." I am not saying: "Everyone everywhere should always under any circumstances ...". It is a call to question the assumption, not to make emphatic universal decisions on any possible project that could ever be conceived. That would be a bad faith interpretation of my post. If that is what you are arguing against, consider if you really believe that is what I meant.
So my point stands: I am going to consider this more deeply rather than default assuming that an interpreted scripting language is suitable.
> Low level languages tend to have a higher barrier to entry,
I almost think you aren't reading my post at this point and are just arguing with a strawman you invented in your head. But I am assuming good faith on your part here, so once again I'll just repeat myself again and again: LLMs have already changed the barrier to entry for low-level languages and they will continue to do so.
The first comment I wrote in this thread was a response to the following quote: "Yet projects inevitably get to the stage where a more native representation wins out." Inevitable means impossible to evade. That's about as close to a black and white statement as possible. You're also completely ignoring the substance of my argument and focusing on the wording. My point is that language rewrites (like the TS rewrite that sparked this discussion) are a faulty indicator of scripting language quality.
> I almost think you aren't reading my post at this point and are just arguing with a strawman you invented in your head. But I am assuming good faith on your part here, so once again I'll just repeat myself again and again: LLMs have already changed the barrier to entry for low-level languages and they will continue to do so.
And I've already said that I disagree with this assertion. I'll just quote myself in case you haven't read through all my comments: "I'm not an AI pessimist, but I'm also not an AI maximalist who is convinced that AI will completely eliminate the need for human code authoring and review, and as long as humans are required to write and review code, then those benefits [of scripting languages] still apply." I was under the impression that I didn't have to keep restating my position.
I don't believe that AI has eroded the barriers of entry to the point where the average Ruby or PHP developer will enjoy passing around memory allocators in Zig while writing API endpoints. Neither of us can be 100% certain about what the future holds for AI, but as someone else pointed out, making technical decisions in the present based on AI speculation is a gamble.
Inevitably:

1. as is certain to happen; unavoidably.
2. (informal) as one would expect; predictably. "inevitably, the phone started to ring just as we sat down"

Which interpretation of the word is "good faith" considering the rest of my post? If I said "If you drink and drive you will inevitably get into an accident", would you argue against that statement? Would you argue with Google and say "I have sat down before and the phone didn't ring"?

It is Hacker News policy and just good internet etiquette to argue with good faith in mind. I find it hard to believe you could have read my entire post and come away with the belief of absolutism.
edit: Just to add to this, your interpretation assumes I think Django (the Python web application framework) will unavoidably be rewritten in a lower level language. And Ruby on Rails will unavoidably be rewritten. Do you believe that is what I was saying? Do you believe that I actually believe that?
> If I said "If you drink and drive you will inevitably get into an accident" - would you argue against that statement?
If we were having a discussion about automobile safety and you wrote several hundred words about why a specific type of accident isn't indicative of a larger trend, I wouldn't respond by cherry picking the first sentence of your comment, and quoting Google definitions about a phone ringing.
I used Google to point out that your argument, which hinged on your definition of what the word "inevitable" means, is the narrowest possible interpretation of my statement. An interpretation so narrow that it indicates you are arguing in bad faith, which I believe to be the case. You are accusing me of making an argument that I did not make by accusing me of not understanding what a word means. You are wrong on both counts, as demonstrated.
The only person thinking in black and white is the figment of me in your imagination. I've re-read the argument chain and I'm happy leaving my point where it is. I don't think your points, starting with your attempted counterexample with Prisma, nor your exceptional-compiler argument, nor any of the other points you have tried, support your case.
My argument does not hinge upon the definition of the word inevitable. You originally said "I mean, I can't think of a time a high profile project written in a lower level representation got ported to a higher level language."
I gave a relatively thorough accounting of why you've observed this, and why it doesn't indicate what you believe it to indicate here: https://news.ycombinator.com/item?id=43339297
Instead of addressing the substance of the argument you focused on this introductory sentence: "I'd like to address your larger point which seems to be that all greenfield projects are necessarily best suited to low level languages."
Regardless of how narrowly or widely you want me to interpret your stance, my point is that the data you're using to form your opinion (rewrites from higher to lower level languages) does not support any variation of your argument. You "can't think of a time a high profile project written in a lower level representation got ported to a higher level language" because developers tend to be more hesitant about reaching for lower level languages (due to the higher barrier to entry), and therefore are less likely to misuse them in the wrong problem domain.
This sounds more like a "we're kinda stuck with Javascript here" situation. The team is making a compromise, can't have your cake and eat it too I guess.
What was that saying again? Premature optimisation is the root of all evil
A thread going into what Knuth meant by that quote, which is usually shortened to "premature optimization is the root of all evil". Or, to rephrase it: don't tire yourself out climbing for the high fruit, but do not ignore the low-hanging fruit. But really, I don't even see why "scripting languages" are the particular "high level" languages of choice. Compilers nowadays are good. No one is asking you to drop down to C or C++.
Mind you I'm sure there were similar attempts at a language with those goals, but they didn't have the backing of Google.
Just two years ago, a friend of mine described it as quite a hassle to get a RESTful backend running in Go. He got it working but it was more work than usual. Was he an outlier or have things been getting better in the framework department?
Go tends to have more boilerplate than other languages. So more typing work, less thinking work, less maintenance work once completed.
Software never gets rewritten in a higher level language, but software is constantly replaced by alternatives. First example that comes to mind is Discord, an Electron app that immediately and permanently killed every other voice client on the market when it launched.
If we assume that coding assistants continue to improve as they have been and we also assume that they are able to generate lower level code on par with higher level code, then it seems the leverage shifts away from "easy to implement features" languages to "fast in most contexts" languages.
Only time will tell, of course. But I wonder if we will see a new wave of replacements from Electron based apps to LLM assisted native apps.
Do you mean voice clients like FaceTime, Zoom, Teams, and Slack?
Discord can run from a browser, making onboarding super easy. The installable app being in Electron makes for minimal (if any) difference between it and the website.
In summary, running in the web browser helps a lot, and Electron makes it very easy for them to keep the browser version first class.
As an added bonus, they can support Linux, Windows and macOS equally well.
I would say it helps as without Electron, serving all the above with equal feature parity just would have been too expensive or slow and perhaps it just wouldn’t have been as frictionless for all types of new users like it is.
Now we may have a new case: native but fast to add features using a code assist LLM.
If that new case is a true reflection of the near future (only time will tell) then it makes the case against the scripted solution. If (and only if) you could use a code assist LLM to match the feature efficiency of a scripting language while using a native language, it would seem reasonable to choose that as the starting point.
The adoption of AI Code Assistance I am sure will be driven similarly anecdotally, because who has the time or money to actually measure productivity techniques when you can just build a personal set of superstitions that work for you (personally) and sell it? Or put another way, what manager actually would spend money on basic science?
The ergonomics of compiling your code for every combination of architecture and platform you plan to deploy to? It's not fun. I promise.
> my default assumption to use JS runtimes on the server
AWS Lambda has a minimum billing interval of 1ms. To do anything interesting you have to call other APIs which usually have a minimum latency of 5 to 30ms. You aren't buying much of anything in any scalable environment.
> there is decreasing utility in familiarity with a language to be productive.
I hope you aren't planning on making money from this code. Either way, have fun debugging that!
> the advantages of scripting languages are being eroded away.
As long as scripting languages have interfaces that let them access C libraries, either directly or through compiled modules, they will have strong advantages. Just having a CLI where you can test out ideas and check performance is massively powerful, and I hate not having it in any compiled project. Go has particularly bad ergonomics here: writing test cases is easy, but exploring ideas is not, due to its strictness down to even the code-styling level.
I mean, on the one hand you are arguing for C FFI and on the other worrying about compiling for every combination of architecture. Those positions seem to be contradictory. Although I guess you're assuming that other people who write the C libraries for you did that work. I guess you better hope libraries exist for every possible performance issue you come across in your cross platform scripting library.
And why limit your runtime to AWS Lambda? That is a constraint you are placing on yourself. Nowadays with docker you can have pretty much any Linux you want as an image. But why not just implement on top of cgroups from scratch? I guess we live a world where that is unthinkable to many. Probably just better to pay AWS. But if you do use docker, all of a sudden worrying about compiling for all of those architectures seems like less of an issue. And you can use ECS, so you can still pay AWS!
As for tooling issues, and there are definitely tooling issues with every language, it is a pick your poison kind of thing. I remember really liking Pascal tooling way back in the day. Smalltalk images have some nifty features. Who doesn't like Lisp, the language that taught us all REPL. Not sure I'd choose them for a project today though.
As LLMs get better, I just assume what constitutes "developer experience" is going to change. Will I even care about how unergonomic writing test cases in Go can be if I can just say "LLM, write a test that covers X, Y, Z case". As long as I can read the resultant output and verify it meets my expectations, I don't care how many characters of code or boilerplate that will force the LLM to generate.
edit: I misread your point about Go test cases but I'll leave my mistake standing. My overall point was the stuff I find annoying to do myself I can just farm out to the LLM. If the cost of writing an experiment is "LLM, give this a try" and if it works great and if not `git checkout`, then I will be ok with something less optimal.
The JS runtimes are fine for the majority of use cases but the ecosystem is really the issue IMO.
> the benefit of using the same code on the server/client has rarely if ever been a significant contributor to project maintainability for me
I agree and now with OpenAPI this is even less of an argument.
The trade-off is that the team will have to start dealing with a lot of separate issues... How do tools like ESLint TS talk to TSC now? How do you run this in the playground? How do you distribute the binaries? And they also lose out on the TS type system, which makes their Go version rely a little more on developer prowess.
This is an easy choice for one of the most fundamental tools underlying a whole ecosystem, maintained by Microsoft and one of the developers of C# itself, full-time.
Other businesses probably want to focus on actually making money by leading their domain and easing long-term maintenance.
1. https://github.com/dudykr/stc - Abandoned (https://github.com/swc-project/swc/issues/571#issuecomment-1...)
2. https://github.com/kaleidawave/ezno - In active development. Does not have the goal of 1:1 parity to tsc.
A compiled managed language is a much better approach for userspace applications.
Pity that they didn't go with AOT compiled .NET, though.
Yeah. It seems to be unfashionable somewhat even within Microsoft.
(edit: it seems to be you and me and barely anyone else on HN advocating for C#)
If anyone should have picked C# it would be him.
But I think it's also an indication that Typescript may be bigger and more important for Microsoft than C#/.NET is at this time. It's definitely much more used than C# is according to this non-representative survey of Stack Overflow (https://survey.stackoverflow.co/2024/technology).
I happened to be doing a lot of C# and .NET dev when all this transition was happening, and it was very cool to be able to run .NET in Linux. C# is a powerful language with great and constantly evolving ideas in it.
But then all the stuff between the runtimes, API surfaces, Core vs Framework, etc all got extremely confusing and off-putting. It was necessary to bring all these ecosystems together, but I wonder if that kept people away for a bit? Not sure.
This is kind of weird, given the team.
OS kernels, firmware, GPGPU,....
If it is the ML-inspired type system, there are plenty of options among compiled managed languages; true, Go isn't really in that camp, but whatever.
There are also Swift, F# (Native AOT), Scala (Native, GraalVM, OpenJ9).
If it was a fresh compiler then the choice would be more difficult.
I was trying to push .NET as our possible language for somewhat high-performance executables. Seeing this means I'll stop trying to advocate for it, if even this team doesn't believe in it.
I like when Microsoft doesn't pretend that their technologies are the right answer for every problem.
But, of course, that is not unusual. There is no language in existence that is best suited to every project out there.
But I just said my point is not about performance at all! It is about the whole package. The performance of C# and Go are both enough for my use case, same for Java and C obviously. They just told us that they don't think the whole package makes sense, and disowned AOT compilation.
Which made me naturally think your point was, indeed, about performance. Although as it appears to be, I'm wrong, so it's fair enough.
This seems super petty to me. Like, if at the end of the day you get a binary that works on your OS and doesn’t require a runtime, why should you “love” that they picked one language over another? It’s exactly the same outcome for you as a user.
I mean, if you wanted to contribute to the project and you knew go better than rust, that would make sense. But sounds like you just don’t like rust because of… reasons, and you’re just glad to see rust “fail” for their use case.
ah, answers below: https://news.ycombinator.com/item?id=43333296
According to whom?
Not saying that in a judgemental way, I'm just genuinely surprised. What does this say about what Hejlsberg thinks of C# at the moment? I would assume one reason they don't pick C# is because it's deeply unpopular in the open source world. If Microsoft was so successful in making Typescript popular for open source work, why can't they do it for C#?
I have not opted to use C# for anything significant in the past decade or so. I am not 100% sure why, but there's always been something I'd rather use. Whether that's Go, Rust, Ruby or Haskell. I always enjoyed working in C#, I think it's a well designed and powerful language even if it never made the top of my list recently. I never considered that there might be something so fundamentally wrong with it that not even Hejlsberg himself would use it to build a Typescript compiler.
What's wrong with C#?
- C# is bytecode-first, Go targets native code. While C# does have AOT capabilities nowadays this is not as mature as Go's and not all platforms support it. Go also has somewhat better control over data layout. They wanted to get as low-level as possible while still having garbage collection.
- This is meant to be something of a 1:1 port rather than a rewrite, and the old code uses plain functions and data structures without an OOP style. This suits Go well while a C# port would have required more restructuring.
I'm not sure what's going on; I guess he's just not involved with the runtime side of .NET at all to actually know where the capability sits circa 2024/2025. But really, it's a terrible situation to be in, especially given how much worse the langdev UX in Go is compared to C#, F#, or Rust. No one would've batted an eye if any of those had been used.
Can you explain why the DX in Go is "worse"? I've seen the exact opposite during my professional work.
> While C# does have AOT capabilities nowadays this is not as mature as Go's and not all platforms support it
https://learn.microsoft.com/en-us/dotnet/core/deploying/nati...
Only Android is missing from that list (marked as "Experimental"). We could argue about maturity but this is a bit subjective.
> Go also has somewhat better control over data layout
How? C# supports structs, ref structs (stack allocated only structures), explicit stack allocation (`stackalloc`), explicit struct field layouts through annotations, control over method local variable initialization, control over inlining, etc. Hell, C# even supports a somewhat limited version of borrow checking through the `scoped` keyword.
> This is meant to be something of a 1:1 port rather than a rewrite, and the old code uses plain functions and data structures without an OOP style.
C# has been consistently moving into that direction by taking more and more inspiration from F#.
The only reasonable argument would be the extensive use of structural typing, which is present in TS and Go but not in C#.
The point is C++ sucks dude. There is no way that you can reasonably think that bolting a GC on to C++ is going to be a pleasurable experience. This whole conversation started with _language ergonomics_. I don’t care that it’ll save 0.5 milliseconds. I’d rather dig holes than write C++.
NativeAOT story itself is also interesting - I noted it in a sibling comment but .NET has much better base binary size and binary size scalability through stronger reachability analysis, metadata compression and pointer-rich binary sections dehydration at a small startup cost (it's still in the same ballpark). The compiler output is also better and so is whole program view driven devirtualization, something Go does not have. In the last 4 years, .NET's performance has improved more than Go's in the last 8. It is really good at text processing at both low and high level (only losing to Rust).
The most important part here is that TypeScript at Microsoft is a "first-party" customer. This means if they need additional compiler accommodations to improve their project experience from .NET, they could just raise it and they will be treated with priority.
This decision is technically and politically unsound at multiple levels at once. For example, they will need good WASM support. .NET's existing WASM support is considered "decent" and even that one is far from stellar, yet considered ahead of the Go one. All they needed was to allocate additional funding for the ongoing already working NativeAOT-LLVM-WASM prototype to very quickly get the full support of the target they needed. But alas.
Maybe it's time to stop eating everything that Microsoft sales folks/evangelists spoon feed you and wake up to the fact that only because people paid by Microsoft to roll the drum about Microsoft products telling you that .NET and C# is oh so good and the best in everything, maybe it's not actually that credible?
Look at the hard facts. Every single product which Microsoft has built that actually matters (e.g. all their Azure CNCF stuff, Dapr, now this) is using non Microsoft languages and technologies.
You won't see Blazor being used by Microsoft or the 73rd reinvention of ASP.NET Core MVC Minimal APIs Razor Pages Hocus Pocus WCF XAML Enterprise (TM) for anything mission critical.
So that could be a fundamental reason why.
It's not lost on me that this is a widely used aphorism. The problem is that it's not true in any way, shape, or form.
--https://github.com/microsoft/typescript-go/discussions/411
I haven't looked at the tsc codebase. I do currently use Golang at my job and have used TypeScript at a previous job several years ago.
I'm surprised to hear that idiomatic Golang resembles the existing coding patterns of the tsc codebase. I've never felt that idiomatic code in Golang resembled idiomatic code in TypeScript. Notably, sum types are commonly called out as something especially useful in writing compilers, and when I've wanted them in Golang I've struggled to replace them.
Is there something special about the existing tsc codebase, or does the statement about idiomatic Golang resembling the existing codebase something you could say about most TypeScript codebases?
To be fair, they didn't actually say that. What they said was that idiomatic Go resembles their existing patterns. I'd imagine what they mean by that is that a port from their existing patterns to Go is much closer to a mechanical 1:1 process than a port to Rust or C#. Rust is the obvious choice for a fully greenfield implementation, but reorganizing around idiomatic Rust patterns would be much harder for most programs that are not already written in a compatible style. e.g. For Rust programs, the precise ownership and transfer of memory needs to be modelled, whereas Go and JS are both GC'd and don't require this.
For a codebase that relies heavily on exception handling, I can imagine a 1:1 port would require more thought, but compilers generally need to have pretty good error recovery so I wouldn't be surprised if tsc has bespoke error handling patterns that defers error handling and passes around errors as values a lot; that would map pretty well to Go.
Most TypeScript projects are very far away from compiler code, so that this wouldn't resemble typical TypeScript isn't too surprising. Compilers written in Go also don't tend to resemble typical Go either, in fairness.
TSC doesn't use many union types; it's mostly OOP-ish down-casting or chains of if-statements.
One reason for this is, I think, performance; most objects are tagged by bitsets in order to pack more info about the object without needing additional allocations. But TypeScript can't really (ergonomically) represent this in the type system, so you don't get any really useful unions.
A lot of the objects are also secretly mutable (for caching/performance) which can make precise union types not very useful, since they can be easily invalidated by those mutations.
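For anyone who hasn't seen the pattern, here is a hedged sketch of that kind of bitset tagging; the names are invented for illustration, not tsc's actual internals:

    // Hypothetical flags in the style described above. Packing booleans
    // into one int32 avoids extra allocations, but the type system only
    // sees "a number", so union narrowing can't help you here.
    const enum NodeFlags {
      None      = 0,
      Optional  = 1 << 0,
      Rest      = 1 << 1,
      Synthetic = 1 << 2,
    }

    interface Node {
      flags: NodeFlags;
    }

    function isOptional(node: Node): boolean {
      return (node.flags & NodeFlags.Optional) !== 0;
    }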
though looking at that flood of loose ifs+returns, i kinda wish they used rust :)
    ../../../tmp/typescript-go/built/local/lib.dom.d.ts:27982:6 - error TS2300: Duplicate identifier 'KeyType'.

    27982 type KeyType = "private" | "public" | "secret";
          ~~~~~~~

    ../../../tmp/typescript-go/built/local/lib.webworker.d.ts:9370:6 - 'KeyType' was also declared here.

    9370 type KeyType = "private" | "public" | "secret";
         ~~~~~~~

Probably an easy fix.

Running it on another portion results in a SIGSEGV with a bad/nil pointer dereference, which puts me in the camp of people questioning the choice of Go.
Why not find out what's going wrong and submit a bug report / merge request instead of immediately dismissing a choice made by one of the leading authorities in programming languages in the world?
They would be still setting up the project, if it was Rust.
Strange choice to use Go for the compiler instead of C# or F#.
Now if they will have problems, they will depend on the Go team at Google to fix them.
The opposite would be true as well, teams at Google using Typescript or C# would rely on Microsoft to fix any issues.
Or collaborate with Go team.
Why _not_ use Go?
Because of its truly primitive type system, and because Microsoft already has a much better language — C#, which is both faster and can be more high level and more low-level at the same time, depending on your needs.
I am a complete nobody to argue with the likes of Hejlsberg, but it feels like AOT performance problems could be solved if tsc needed it, and tsc adoption of C# would also help push C#/.NET adoption. Once again, Microsoft proves that it's a bunch of unrelated companies at odds with each other.
Advanced type systems are guard rails to spot and avoid issues early on, but that role can be fulfilled by tests as well, and Typescript has a LOT of tests and use cases that will be flagged up. Does it need a strong type system for the internal tooling on top of that?
I'm not an authority on the matter and know nothing about the compiler's internals, but I'm confident in saying that the type system is good enough for this use case.
That is the main reason they gave for why they chose Go. The parent asked "Why _not_ use Go?"
Let's be real: You can absolutely write "Go-style" code in just about any language that might have been considered for this. But you wouldn't want to, as a more advanced type system enables entirely different idioms, and it is in bad faith to other developers (including future you) to stray too far from those idioms. If ignoring idioms doesn't sound like a bad idea on day one, you'll feel the hurt and regret soon enough...
Go was chosen because the idioms are generally in alignment with their needs and those idioms are wholly dependent on the shape of its type system.
But there is probably some truth in what you say as well. Footguns are no doubt refreshing after being engrossed in TypeScript (and C#) for decades. At some point you start to notice that your tests end up covering all the same cases as your advanced types, and you begin to question why you are putting in so much work repeating yourself, which ultimately makes you want to look for something better.
Which, I suppose, is why industry itself keeps ending up taking that to the extreme, cycling between static and dynamic typing over and over again.
At the extreme end of the spectrum that starts to become true. But the languages that fill that space are also unusable beyond very narrow tasks. This truth is not particularly relevant to what is seen in practice.
In the realm of languages people actually use on a normal basis, with their half-assed type systems, a few more advanced concepts sprinkled in here and there really don't do anything to reduce the need for testing as you still have to test around all the many other holes in the type system, which ends up incidentally covering those other cases as well.
In practice, the primary benefit of the type system in these real-world languages is as it relates to things like refactoring. That is incredibly powerful and not overlapped by tests. However, the returns are diminishing. As you get into increasingly advanced type concepts, there is less need/ability to refactor on those touch points.
Most seem to agree that a complete type system is way too much (especially for general purpose programming), and no type system is too little; that a half-assed type system is the right balance. However, exactly how much half-assery is the right amount of half-assery is where the debate begins. I posit that those who go in deep with thinking less half-assery is the way eventually come to appreciate more half-assery.
> I don't think the "industry" is a person
Nobody does.
See, I fundamentally disagree that those languages are "unusable beyond very narrow tasks", because I never stated that only a complete and absolutely proven type system can provide those proofs. In fact, even a relatively mid-tier (a little above average) type system like C#'s can already provide enormous benefits in this regard.

When you test something like raw JavaScript, you end up testing things that are even about the shape of your objects; in C# you don't have to do this, because the type system dictates the shape. You also have to be very careful around possibly-null objects and values, which in a language with "proper" nullable types (and support for them in the type system and static checkers) like C# can be vastly reduced (if you use the feature, naturally). C# is also a language that "brings the types into runtime" through reflection, so you won't see libraries whose whole job is asserting shapes, like 'zod' or 'pydantic', in C# or other mid-tier typed languages.

C#'s type system also proves many things about the safety of your code: for example, you basically never need to test your usage of Spans, since the type system and static analysis already rule out most problematic usages. You also never need to test whether your int is actually a float because some random place in your code set it so (like in JS), nor against many other basic assumptions that even an extremely basic type system would give you (even Go's).
This is to say that, basically, the claim doesn't hold true for relatively simple type systems. I'm also yet to see it holding true for more advanced ones. For example, Rust is a relatively widely used language for a lot of low-level projects, and I have never seen someone testing (well-bounded, safe) Rust code for the basic shapes of types, nor for the conclusions the type system provides. Nobody writes tests for whether the type system really caught that ownership transfer, or whether it is really safe to assume there's only one mutable reference to that object after you called that method, or whether the destructor really runs at the end of the function's scope, or even whether an overly complex associated-type result was actually what you meant (in fact, if you ever use those complicated types, it is precisely to get compile-time guarantees that a test could not cover entirely, and that you would not write unit tests for in the first place).

So I don't think it is true that you need a powerful type system to see a reduction in the tests you would otherwise need in a completely dynamically typed language, nor that, once you have really powerful type constructs, you "start to notice that your tests end up covering all the same cases as your advanced types". I also don't think you need to go to the extreme end of the spectrum to see those benefits; they appear gradually and increase gradually as you move toward it (where you end up with genuinely uncommon things like dependent typing, refinement types, or effect systems).
I also don't agree that it matters what "most people" think about powerful type systems and the languages using them; it matters more that the right people are using them, people who want to benefit from this, than the everyday masses (that is another overly complex discussion, though). And while I can understand the feelings you have toward the "low end of half-assery type systems", and even agree to a certain reasonable degree (naturally, with my own considerations), I don't think glorifying mediocre type systems is the way to go (as many people usually do, for some terrifying reason). It is enough to recognize that a half-assed type system usually gets the job done, and that's it, completely fine and okay; it may even be faster to write. We shouldn't "pursue primitive type systems" just because we can do things well in them. Maybe I'm digressing too much; it's hard to respond to this comment in a satisfactory manner.
>> I don't think the "industry" is a person
> Nobody does.
Yeah, this was not a very productive point of mine, sorry.
Then why do you think nobody uses them (outside of certain narrow tasks)? It is hard to deny the results.
The reality is that they are intractable. For the vast majority of programming problems, testing is good enough and far, far more practical. There is a very good reason why the languages people normally use (yes, including C# and Rust) prefer testing over types.
> See, when you test for something like raw JavaScript, you end up testing things that are even about the shape of your objects
Incidentally, but not explicitly. You also end up incidentally testing things like the shape even in languages that provide strict guarantees in the type system. That's the nature of testing.
I do agree that testing is not well understood by a lot of developers. There are for sure developers who think that explicitly testing for, say, the shape of data is a test that needs to be written. A lot of developers straight up don't know what makes for a useful test. We'd do well to help them better understand testing, but I'm not sure "don't even think about it, you've got a half-assed type system to lean on!" get us there. Quite the opposite.
> it matters more that the right people are using them
Well, they're not. And they are not going to without some fundamental breakthrough that changes the tractability of using languages with an advanced (on the full spectrum, not relative to Go) type system. The tradeoffs just aren't worth it in nearly every case. So we're stuck with half-assed type systems and relying on testing, for better or worse. Yes, that includes C# and Rust.
> I don't think glorifying mediocre type systems is the way to go (like many people usually do, for some terrifying reason).
Does it matter? Engineers don't make decisions based on some random emotional plea on HN. A keyboard cowboy might be swayed in the wrong direction by such, but then this boils down to being effectively equivalent to "If we don't talk about sex maybe teenage pregnancy will cease." Is that really the angle you want to go with?
Edit: I have reached the bottom of the thread and still have not seen this visceral reaction mentioned by the OP.
There are more reactions here. I think devs have lost the plot, tbh.
Meanwhile, this decision was made or led by one of the few people that developed multiple popular programming languages. I trust his opinion / decision more than internet commenters, including my own.
What is weird is how much people talk about how other people react. Modern social media is weird.
Holy Language Wars are a spectator sport as old as the internet itself. It's normal to comment on one side fighting another. What's weird is pretending not to see the fighting.
But, advocates for language X need to make sure they read and understand the requirements and tradeoffs, which could probably have been communicated better.
So, to me, Hejlsberg's choice sounds pretty logical.
After all, why Go? Why not...
Never been a big fan of MS, but I must say that TypeScript is well done, IMHO. Thanks for it and all the hard work!
I would argue it needs editing, as it violates the HN guideline:
> use the original title, unless it is misleading or linkbait; don't editorialize.
(I'm simply not aware of them but that also means I won't make any statements about these)
Opened discussion [1]
- [0] https://github.com/microsoft/typescript-go/discussions/410
- [1] https://github.com/microsoft/typescript-go/discussions/467
https://github.com/microsoft/typescript-go/commits?after=dad...
- Native executable support on all major platforms
- He doesn't seem to believe that AOT-compiled C# can give the best possible performance on all major platforms
- Good control of the layout of data structures
- Had to have garbage collection
- Great concurrency support
- Simple, easy to approach, and great tooling
One other thing I forgot to mention was that he talked about how the current compiler is mostly written as more or less pure functions operating on data structures, as opposed to being object-oriented, and that this fits very well with the Go way of doing things, making a 1:1 port much easier.
I don't think it was the performance. C# is usually on par or faster than Go.
Could be the lack of maturity but also that I believe Go produces smaller binaries which makes a lot of sense for a CLI.
I benchmarked HTML rendering and Dotnet was 2-3x faster than Go using either Templ or html/template.
Etc.
Those hardcoded byte arrays are how everyone does templating everywhere, right?
Or are you talking about after they changed their "platform" test back to not do that, and it is substantially slower than Go?
You can write pure functions operating on data structures in C#, it's maybe not as idiomatic as in Go, but it should not cause problems.
Based on interviews, it seems Hejlsberg cares a lot about keeping the code base as idiomatic and approachable as possible. So it's clearly a factor.
If doing a web server, on the other hand, these things wouldn't matter at all as you would be running a container anyway.
Love it :D
This isn't a knock against Go or necessarily a promotion of Rust, just seems like a lot of duplicated effort. I don't know the timelines in place or where the community projects were vs. the internal MS project.
> Bootstrapping is a fairly common practice when creating a programming language. Many compilers for many programming languages are bootstrapped, including compilers for ALGOL, BASIC, C, C#, Common Lisp, D, Eiffel, Elixir, Go, Haskell, Java, Modula-2, Nim, Oberon, OCaml, Pascal, PL/I, Python, Rust, Scala, Scheme, TypeScript, Vala, Zig and more.
Most of the Rust compiler is in Rust, that's correct, but it does by default use LLVM to do code generation, which is in C++.
For instance, this compiler for a pattern-matching notation has parts of its implementation using the notation itself:
https://www.kylheku.com/cgit/txr/tree/stdlib/match.tl
Some pattern matching occurs in the function match-case-to-casequal. This is why it is preceded by a dummy implementation of non-triv-pat-p, a function needed by the pattern-matching logic for classifying whether a pattern is trivial or not; it has to be defined so that the if-match and other macros in the following function can expand. The stub just says every pattern is nontrivial, a conservative guess.
non-triv-pat-p is later redefined. And it uses match-case! So the pattern matcher has bootstrapped this function: a fundamental pattern-classification function in the pattern matcher is written using pattern matching. Because of the way the file is staged, with the stub initial implementation of that function, this is all bootstrapped in a single pass.
Hopefully this would also reduce the memory footprint, because my VS Code IntelliSense keeps crashing unless I give it like 70% of my RAM. It's probably because of our fairly large graphql.ts file, which contains auto-generated GraphQL types.
My most plausible guess would be that compiler writers don't want to dig into native code and performance, writing a TS to Go translator looks like a more familiar task for them. Lack of JS version performance analysis anywhere in the announcements kinda confirms this.
I kinda wonder, though, if in 5 or 10 years how many of these tools will still be crazy fast. Hopefully all of them! But I also would not be surprised if this new performance headroom is eaten away over time until things become only just bearable again (which is how I would describe the current performance of typescript).
Plus, using TS directly to do runtime validation of types will become a lot more viable without having to precompile anything. Not only serverside, we'll compile the whole thing to WASM and ship it to the client to do our runtime validation there.
> immutable data structures --> "we are fully concurrent, because these are what I often call embarrassingly parallelizable problems"
The relationship of their performance gains to functional programming ideas is explained beginning at 8:14 https://youtu.be/pNlq-EVld70?feature=shared&t=522
What is more important is that tsc does typechecking, which is a static analysis of sorts to ensure code correctness. But this has nothing to do with runtime performance, that's entirely in JS land and in JS transpilers / optimizers.
There's a significant performance penalty for using javascript outside the browser.
I'm not aware of any JS runtime outside a browser that supports concurrency (other than concurrently awaiting IO), so you can't do parallel compilation in a single process.
It's generally also very difficult to make a JS program as fast as even a naive go program, and the performance tooling for go is dramatically more mature.
You seem to be referring to runtime performance of compiled code. The announcement is about compile times; it's about the performance of the compiler itself.
It doesn't use type hints yet, and the difficulty there is that you'd need a sound type system in order to rely on the types. You may be able to use type hints to generate optimized and fallback functions, with type guards, but that doesn't exist yet and it sounds like the TypeScript team wants to move pretty quickly with this.
[1]: https://porffor.dev/
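To sketch what "optimized plus fallback functions, with type guards" could look like as hand-written output (to be clear, nothing emits this today, per the above):

    // Hand-written illustration of the guarded fast-path idea.
    function addFast(a: number, b: number): number {
      return a + b; // specialized path: types assumed from hints
    }

    function add(a: unknown, b: unknown): unknown {
      if (typeof a === "number" && typeof b === "number") {
        return addFast(a, b); // guard passed, take the fast path
      }
      return (a as any) + (b as any); // generic JS fallback semantics
    }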
While I like faster TSC, I don't like that the TypeScript compiler needs to be written in another language to achieve speed; it kind of reminds everyone that TS isn't a good language for complicated CPU/IO tasks.
Given that the TypeScript team has resigned itself to the fact that JavaScript engines can't run the TypeScript compiler (TSC) sufficiently fast for the foreseeable future and is rewriting it entirely in Go, it is unlikely they will seek to do AOT.
https://www.microsoft.com/en-us/research/publication/static-...
> The JS-based codebase will continue development into the 6.x series, and TypeScript 6.0 will introduce some deprecations and breaking changes to align with the upcoming native codebase.
> While some projects may be able to switch to TypeScript 7 upon release, others may depend on certain API features, legacy configurations, or other constraints that necessitate using TypeScript 6. Recognizing TypeScript’s critical role in the JS development ecosystem, we’ll still be maintaining the JS codebase in the 6.x line until TypeScript 7+ reaches sufficient maturity and adoption.
It sounds like the Python 2 -> 3 migration, or the .Net Framework 4 -> .Net 5 (.Net Core) migration.
I'm still in a multi-year project to upgrade past .Net Framework 4; so I can certainly empathize with anyone who gets stuck on TS 6 for an extended period of time.
My theory - that Go will always be the choice for things like this when ease, simplicity, and good (but not absolute) performance is the goal - continues to hold.
Also, what’s up with 10x everywhere? Why not 9.5x or 11x?
And if it's run-time, can we expect browsers to replace V8 with this Go library?
(I realize this is a noob/naive question - apologies)
https://devblogs.microsoft.com/typescript/announcing-typescr...
This will only allow you to run your TypeScript in Node, but it does not perform type checking, and I don't believe there are any plans for it to. This is from Node.js 23.9.0:
https://nodejs.org/api/typescript.html#type-stripping
I don't believe Node has any plans for type checking TS.
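Concretely, type stripping means a file like this (my example) runs directly, with the annotations deleted and never verified:

    // hello.ts: runnable as `node hello.ts` on recent Node versions.
    // Node erases the annotations; nothing checks them, so passing the
    // "wrong" type still runs without complaint.
    function greet(name: string): string {
      return `hello, ${name}`;
    }
    console.log(greet("world"));
    console.log(greet(42 as unknown as string)); // prints "hello, 42"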
If the Typescript team were to go with Rust or C# they would have to contend with async/await decoration and worry about starvation and monopolization.
Go frees the developer from worrying about these concerns.
> Modern editors like Visual Studio and Visual Studio Code have excellent performance.
Well I am not sure we are on the same page here. Still, fingers crossed.
I've been revisiting my editing setup over the last 6 months and to my surprise I've time traveled back to 2012 and am once again really enjoying Sublime Text. It's still by far the most performant editor out there, on account of the custom UI toolkit and all the incredibly fast indexing/search/editing engines (everything's native).
Not sure how this announcement impacts VSCode's UI being powered by Electron, but having the indexing/search/editing engines implemented in Go should drastically improve my experience. The editor will never be as fast as Sublime but if they can make it fast enough to where I don't notice the indexing/search/editing lag in large projects/files, I'd probably switch back.
It has no bearing on this at all.
My development in regards to languages:
- JavaScript sucks, I love Python.
- Python sucks, I love TypeScript.
Right now you can make use of the --erasableSyntaxOnly flag to find any enums in your code and start porting over to an alternative. This article lists alternatives if you're interested:
https://exploringjs.com/tackling-ts/ch_enum-alternatives.htm...
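For instance, the replacement usually reached for is a plain const object plus a derived union type; a sketch (my example, not from the article):

    // An enum replacement that survives --erasableSyntaxOnly: the value
    // side is a plain object, and the type side is derived from it.
    const LogLevel = {
      Debug: "debug",
      Info: "info",
      Error: "error",
    } as const;

    type LogLevel = (typeof LogLevel)[keyof typeof LogLevel];
    // i.e. "debug" | "info" | "error"

    function log(level: LogLevel, msg: string): void {
      console.log(`[${level}] ${msg}`);
    }

    log(LogLevel.Info, "ported"); // call sites look just like an enum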
tl;dr — Rust would be great for a rewrite, but Go makes way more sense for a port. After the dust settles, I hope people focus on the outcomes, not the language choice.
I was very surprised to see that the TypeScript team didn’t choose Rust, not just because it seemed like an obvious technical choice but because the whole ecosystem is clearly converging on Rust _right now_ and has been for a while. I write Rust for my day job and I absolutely love Rust. TypeScript will always have such a special place in my heart but for years now, when I can use Rust.. I use Rust. But it makes a lot of sense to pick Go.
The key “reading between the lines” from the announcement is that they’re doing a port not a rewrite. That’s a very big difference on a complex project with 100-man-years poured into it.
Places where Go is a better fit than Rust when porting JavaScript:
- Go, like JavaScript and unlike Rust, is garbage collected. The TypeScript compiler relies on garbage collection in multiple places, and there are probably more places that do than anyone realizes. It would be dangerous and very risky to attempt to unwind all of that. If it were a Rust rewrite, this problem goes away, but they're not doing a rewrite.
- Rust is so stupidly hard. I repeat, I love Rust. Love it. But damn. Sometimes it feels like the Rust language actively makes decisions that demolish the DX of the 99.99% use-case if there’s a 0.001% use-case that would be slightly more correct. Go is such a dream compared to Rust in this respect. I know people that more-or-less learned Go in a weekend and are writing it professionally daily. I also know people that have been writing Rust every day professionally for years and say they still feel like noobs. It’s undeniable what a difference this makes on productivity for some teams.
Places where Go is just as good a fit as Rust:
- Go and Rust both have great parallelism/concurrency support. Go supports both shared memory (with explicit synchronization) and message-passing concurrency (via goroutines & channels). In JavaScript, multi-threading requires IPC with WebWorkers (see the sketch after this list), making Go's concurrency model a smoother fit for porting a JS-heavy codebase that assumes implicit shared state. Rust enforces strict ownership rules that disallow shared state, or at least make it a lot harder (by design, admittedly).
- Go and Rust both have great tooling. Sure, plenty of the JavaScript ecosystem’s tooling is written in Rust, but esbuild definitively proves that Go tooling can work. Heck, the TypeScript project itself uses esbuild today.
- Go and Rust are both memory safe.
- Go and Rust have lots of “zero (or near zero) cost abstractions” in their language surface. The current TypeScript compiler codebase makes great use of TypeScript enums for bit fiddling and packing boolean flags into a single int32, which sucks to deal with (especially with a Node debugger attached to the TypeScript typechecker). While Go structs aren’t literally zero cost, they’re going to be SO MUCH nicer than JavaScript objects for a use case this common in the current codebase; see the flag-packing sketch below. I think Rust sorta wins when it comes to plentiful abstractions, but Go has more than enough to make a huge impact.
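To make the concurrency point above concrete, here is a minimal sketch (standard library only, nothing from the actual port) of the two styles Go supports: message passing over a channel, and shared memory guarded by an explicit mutex.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	// Style 1: message passing. Workers send results over a channel;
	// no shared mutable state, so no locks are needed.
	results := make(chan int, 3)
	for i := 1; i <= 3; i++ {
		go func(n int) { results <- n * n }(i)
	}
	sum := 0
	for i := 0; i < 3; i++ {
		sum += <-results
	}

	// Style 2: shared memory with explicit synchronization. Workers
	// mutate one counter, serialized by a mutex.
	var (
		mu    sync.Mutex
		wg    sync.WaitGroup
		count int
	)
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			count++
			mu.Unlock()
		}()
	}
	wg.Wait()

	fmt.Println(sum, count) // 14 3
}
```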
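And for the flag-packing point: in Go, the int32-of-booleans trick gets a named type and typed constants, which debuggers and readers handle far better than raw JavaScript numbers. The flag names below are invented for illustration, not taken from the real compiler.

```go
package main

import "fmt"

// NodeFlags packs boolean properties into a single word, the same trick
// the current compiler plays with TypeScript enums. Hypothetical names.
type NodeFlags uint32

const (
	FlagExported NodeFlags = 1 << iota // 1
	FlagAmbient                        // 2
	FlagOptional                       // 4
)

func (f NodeFlags) Has(flag NodeFlags) bool { return f&flag != 0 }

func main() {
	f := FlagExported | FlagOptional
	fmt.Println(f.Has(FlagExported), f.Has(FlagAmbient)) // true false
}
```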
Places where Rust wins:
- the Rust type system. no contest. In fairness, Go doesn’t try to have a fancy type system. It makes up for a lot of the DX I complained about above. When you get an error that something won’t compile, but only when targeting Windows because Rust understands the difference in file permissions… wow. But clearly, what Go has is good enough.
- so many new tools (basically, all of them that are not also in JS) are being done in Rust now. The alignment on this would have been cool. But hey, maybe this will force the bindings to be high-quality which benefits lots of other languages too (Zig type emitter, anyone?!).
By this time next week when the shock wears off, I just really hope what people focus on is that our TypeScript type checking is about to get 10 times faster. That’s such a big deal. I can’t even put it into words. I hope the TypeScript team is ready to be bombarded by people trying to use this TODAY despite them saying it’s just a preview, because there are some companies that are absolutely desperate to improve their editor perf and un-bottleneck their CI. I hope people recognize what a big move this is by the TypeScript team to set the project up for success for the next dozen years. Fully ejecting from being a self-hosted language is a BIG and unprecedented move!
Specifically, if you race any non-trivial Go object (say, a hash table, or a string), that's immediately undefined behavior. Internally, these objects have consistency invariants that you can easily break this way, and they aren't protected against that because the obvious way to protect them is expensive. Writing a Go data race isn't as trivial as writing a use-after-free in C++, but it's not actually difficult to do by mistake.
In single-threaded software this is no caveat at all, but most large software these days does have some threading involved.
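A minimal sketch of the kind of race meant here, standard library only. Two goroutines write to one map with no synchronization; `go run -race` flags it, and even without the race detector the runtime will often abort with "fatal error: concurrent map writes", though none of this is guaranteed, which is exactly the point.

```go
package main

import "sync"

func main() {
	m := map[int]int{}
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Unsynchronized writes to a shared map break its internal
			// invariants: a data race with undefined results.
			for j := 0; j < 1000; j++ {
				m[j] = j
			}
		}()
	}
	wg.Wait()
}
```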
Typescript is a Microsoft project, right? I’m surprised they didn’t choose C#.
Go doesn't seem to be memory safe, see https://www.reddit.com/r/rust/comments/wbejky/comment/ii7ak8... and https://go.dev/play/p/3PBAfWkSue3
Rust is memory safe. Go is memory safe. Python is memory safe. Typescript is memory safe. C++ is not memory safe. C is not memory safe.
At some point a next generation solver will make this not compile, and people will probably invent an even weirder edge case for that solver.
Whereas the Go example is just how Go works. That's not a bug, it's by design; don't expect Go to give you thread safety, because that's not what they promised.
The burden Rust places on the developer is to keep track of all possible mutability and borrow states and commit to them upfront during development (if I may summarize; it's been a long time since I wrote any Rust). The rest it takes care of for you.
The question of which a developer prefers at a certain skill level, and which a manager of developers at a certain skill level prefers, is going to vary.
That said, most people still call Go memory safe even in spite of this being possible, because, well, https://go.dev/ref/mem
> While programmers should write Go programs without data races, there are limitations to what a Go implementation can do in response to a data race. An implementation may always react to a data race by reporting the race and terminating the program. Otherwise, each read of a single-word-sized or sub-word-sized memory location must observe a value actually written to that location (perhaps by a concurrent executing goroutine) and not yet overwritten. These implementation constraints make Go more like Java or JavaScript, in that most races have a limited number of outcomes, and less like C and C++, where the meaning of any program with a race is entirely undefined, and the compiler may do anything at all.
That last sentence is the most important part. Java in particular specifically defines that tears may happen in a similar fashion, see 17.6 and 17.7 of https://docs.oracle.com/javase/specs/jls/se8/html/jls-17.htm...
I believe that most JVMs implement dynamic dispatch in a manner similar to C++: objects live on the heap and carry a vtable pointer inside them. Go's interfaces, by contrast, can work like Rust's trait objects: a pair of (data pointer, vtable pointer). So the behavior we see here with Go is unlikely to be possible in Java, because a tear couldn't corrupt the vtable pointer; in Java it lives inside the object the reference points at, rather than alongside the reference itself in memory.
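To illustrate the mechanics, here is a hedged sketch of the classic demonstration (the same shape as the go.dev/play link upthread, though not a verbatim copy). A Go interface value is two words, an itab (method table) pointer plus a data pointer, and an unsynchronized write can be observed half-done:

```go
package main

type I interface{ M() }

type A struct{ x, y int }

func (a *A) M() { _ = a.x + a.y } // assumes the layout of an A

type B struct{ x int }

func (b *B) M() { _ = b.x } // assumes the layout of a B

func main() {
	var v I = &A{}
	go func() {
		for {
			v = &A{} // writes (itab, data): two words, not atomic
		}
	}()
	go func() {
		for {
			v = &B{}
		}
	}()
	for {
		// A torn read can pair A's itab with a *B data pointer, so
		// (*A).M runs against B's smaller layout: memory unsafety.
		v.M()
	}
}
```

It loops forever by design; the tear is probabilistic, so it can take many iterations to observe.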
These bugs do happen, but they have a more limited blast radius than ones in languages that are clearly unsafe, and so it feels wrong to lump Go in with them even though in some strict sense you may want to categorize it the other way.
On the other hand, though, in practice I've wound up using Go in production quite a lot, and these bugs are exceedingly rare. And I don't mean concurrency bugs: Go's concurrency facilities kind of suck, so those are certainly not rare, even if they're less common than I would have expected. However... not all Go concurrency bugs can possibly segfault. I'd argue most of them can't, at least not on most common platforms.
So how severely you treat this lapse is going to come down to taste. I see the appeal of Rust's iron-clad guarantees around limiting the blast radius, but of course everything comes with limitations, and I believe any discussion of the limitations of guarantees like these should put some emphasis on the real impact. For example, it's easy to see that the memory-management issues in C and C++ are serious from the security track record of programs written in them; I think we've yet to fully understand how much Go's lack of safe concurrency will affect Go software in the long run.
I want to both agree with this and point to things like https://www.uber.com/en-CA/blog/data-race-patterns-in-go/, which found a bunch of bugs. They don't really contextualize them in terms of other kinds of bugs, so it's hard to say from this alone how rare they actually are. One of the insidious parts of non-segfaulting data race bugs is that you may not notice them until you do, so they're easy to under-report; hence the checker used in the above study.
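One of the patterns that report calls out, sketched minimally (standard-library-only illustration code, not taken from the report itself): capturing the loop variable in a goroutine.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	for _, name := range []string{"a", "b", "c"} {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Before Go 1.22 all three goroutines shared one `name`
			// variable, racing with the loop's own writes and typically
			// printing "c" three times. Go 1.22 scopes `name` per
			// iteration, which fixes exactly this pattern.
			fmt.Println(name)
		}()
	}
	wg.Wait()
}
```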
> not all Go concurrency bugs can possibly segfault. I'd argue most of them can't, at least not on most common platforms.
For sure, absolutely. And I do think that's meaningful and important.
> I think we've yet to fully understand how much Go's lack of safe concurrency will affect Go software in the long run.
Yep, and I do suspect it'll be closer to Java than to C.
I don't know if I'd evangelize adopting Go on the scale that Uber has: I think Go works best for shared-nothing architectures and gets gradually less compelling as you dig into more complex concurrency. That said, since Uber is an early adopter, there is a decent chance that what they have learned will help future organizations avoid repeating some of the same issues, via improvements to tooling and the language [1][2][3].
[1]: https://go.dev/blog/loopvar-preview
[2]: https://go.dev/blog/synctest
[3]: https://github.com/mgechev/revive/blob/HEAD/RULES_DESCRIPTIO...
What? And how? And how would that help in Go which has a completely different garbage collection mechanism?
I'll give Typescript yet another go. I really like it and wish I could use it. It's just that any project I start, inevitably the sourcemap chain will go wrong and I lose the ability to run the debugger in any meaningful way.
I’ve been using F# full-time for 6 years now. And compiler/tooling gets painfully slow fast.
Still wouldn’t trade it for anything else though.
Do any other well-adopted tools in the ecosystem use Go?
esbuild is the most well-known/used project, probably beats all other native bundlers combined. I can't remember anything else off the top of my head.
Another commenter pointed out that compilers have very different performance characteristics to games, and I'll include web servers in that too.
tsc needs to start up fast and finish fast. There's not a ton of time to benefit from JIT.
Your server on the other hand will run for how long between deployments?
That's really not what's stopping TS being built in to browsers. Have a look at the discussions around the types-as-comments proposal https://tc39.es/proposal-type-annotations/
sigh
"To meet those goals, we’ve begun work on a native port of the TypeScript compiler and tools. The native implementation will drastically improve editor startup, reduce most build times by 10x, and substantially reduce memory usage."
It's hard to tell whether there will ever be a runtime that uses TS types to optimize further (e.g. by proving that a function diverges), but to my knowledge none do today and I don't think any are in the works. It may not even be possible while maintaining runtime soundness, considering you can "lie" to TS by casting to `unknown` and then back to any other type.
Just like if you said faster C++ that could mean the compiler runs faster, or the resulting machine code runs faster.
Just because the compile target is another human readable language doesn’t mean it ceases to be a typescript program.
I didn’t think this particular example was very ambiguous because a general 10x speed up in the resulting JS would be insane, and I have used typescript enough to wish the compiler was faster. Though if we’re being pedantic, which I enjoy doing sometimes, I would say it is ambiguous.
Sure, lots of build tools do this, but that's not Typescript.
With very few exceptions, Typescript is written so that removing the Typescript-specific things makes it equivalent to the Javascript it transpiles to.
That still wouldn't make sense, in the same way that it wouldn't make sense to say "Python type hints found a way to automatically write more performant Python". With few exceptions, the TypeScript compiler doesn't have any runtime impact at all — it simply removes the type annotations, leaving behind valid JavaScript that already existed as source code. In fact, avoiding runtime impact is an explicit design goal of TypeScript [1].
They've even begun to chip away at the exceptions with the `erasableSyntaxOnly` flag [2], which disables features like enums that do emit code with runtime semantics.
[1] https://github.com/microsoft/TypeScript/wiki/TypeScript-Desi...
[2] https://www.typescriptlang.org/docs/handbook/release-notes/t...
I get your point, but... this is exactly the premise of mypyc ;)
https://betterstack.com/community/guides/scaling-nodejs/node....
If you don't know enough about TypeScript to understand that TypeScript is not a runtime, I'm not sure why you would care about TypeScript being faster (in either case).
Preact was "a faster React", for example.
I mean I think generally you’d want to click the link and read the article before commenting
- It's not ambiguous because they mean $X.
- It is ambiguous because it has many possible meanings.
- It is not ambiguous because it has many possible meanings.
Yeah, that exists. AssemblyScript has an AOT compiler that generates binaries from statically typed code.
Typescript's type system is unsound so it probably will never be very useful for an optimizing compiler. That was never the point of TS however.
I don't think that is too far-fetched either, since TypeScript already has most of the type information.
It’s been a crazy couple of weeks for TS!!
If you have to invent things for something to be considered ambiguous, is it really ambiguous?
It's entirely possible that MS could have written a TypeScript compiler that emits native binaries and made the language 10x faster; why not?
Transpiling in itself also doesn't rule out producing more optimized code, especially when the source carries extra type information. The official TypeScript compiler just doesn't do any of that today: for example, it won't remove a branch that handles a variable being a number even when the type information proves it can never have been set to one. Heck, it doesn't even natively support emitting minified output to improve runtime parsing, though you can always bolt that on yourself. In both examples it's not that transpilation prevents the optimization; it's just not done (and possibly not worthwhile if TS only ever targets JS runtimes, since JS JITs are extraordinarily good these days).
Except in the case of Doom, which can run on anything.
If someone posted an article talking about the "handedness" of DNA or something, I wouldn't complain "oh, you confused me, I thought you were saying DNA has hands!"
I agree with pseudopersonal that the title should be changed. Technically it's not misleading, but not everyone uses or is familiar with TypeScript.
Mostly these days I’m only aware of C# when it inconveniences me.
However, in both native AOT and Go you actually have some parts of the runtime bundled in (e.g. garbage collector).
Why not C#? https://youtu.be/10qowKUW82U?t=1155
/s
Every time I've said that languages like Python, JavaScript, and basically any other language where it's hard to avoid heap allocations, pointer chasing, and copious data copies are all slow, there are plenty of people who come out of the woodwork to inform me that it's all negligible.
To be a little bit fair to those people, I have been in many situations where people go "my Matlab/Python code is too slow, I must rewrite it in C", and I've been able to get an order-of-magnitude improvement by rewriting the code in the same language. Hell, I've ported terrible Fortran code to Python/numpy and gotten a significant performance improvement. Of course, taking that well-written code and rewriting it in well-written C will probably give you a further order of magnitude. Fast code in a slow language can beat slow code in a fast language, but it will obviously never beat fast code in a fast language.
I'm just a little bitter because of how many times I've been shushed in places like programming language subreddits and here when I've pointed out how inefficient some cool new library/framework/paradigm is. It feels like I'm either being gaslit or everyone else is in denial that things like excessive heap allocations really do still matter in 2025, and that JITs almost never help much with realistic workloads for a large percentage of applications.
All the bootcamp cargo-culting crew have pumped lies like "the language doesn't matter" or "learn coding in 1 week for a SWE job with JS/TS", which has caused an increase in low-quality software, with developers then asking how to bolt on "performance" optimizations after the fact.
What we have just seen is the TS team admitting that a limit has been reached, and that *almost always* the solution is either porting to a compiled language or relying on new computers with faster processors, in accordance with Moore's Law, to get performance for free.
Now the bootcampers are rediscovering why we need "static typing" and why a "compiled language" is more performant than a VM-based language.
All the time spent trying to optimize JITs for JavaScript engines, or alternative Python implementations (e.g., PyPy), and fruitless efforts like trying to get JVMs to start fast enough for use in cloud "lambda function" applications. Ugh...
This is how we got Graal, why would you call it "fruitless effort"?
Javascript is not slow because of GC or JIT (the JVM is about twice as fast in benchmarks; Go has a GC) but because JS as a language is not designed for performance. Despite all the work that V8 does it cannot perform enough analysis to recover desirable performance. The simplest example to explain is the lack of machine numbers (e.g. ints). JS doesn't have any representation for this so V8 does a lot of work to try to figure out when a number can be represented as an int, but it won't catch all cases.
As for "working solution over language politics" you are entirely pulling that out of thin air. It's not supported by the article in any way. There is discussion at https://github.com/microsoft/typescript-go/discussions/411 that mentions different points.
I wonder if Typescript could introduce integer type(s) that a direct TS -> native code compiler (JIT or AOT) could use. Since TS becomes valid JS if all type annotations are removed, such numbers would just become normal JS numbers from the POV of a JS runtime which does not understand TS.
No, it is not. It is a continuation of an existing trend.
You may be interested in esbuild (https://github.com/evanw/esbuild), turborepo (https://github.com/vercel/turborepo), and biome-js (https://github.com/biomejs/biome), which are all native reimplementations of existing JS/TS projects. esbuild is written in Go, the others in Rust.
> reveals something deeper: Microsoft prioritized shipping a working solution over language politics
It's not that "deep". I don't see the politics either way; there are clearly successful projects using both Go and Rust. The only people who see "politics" are those who see people disagreeing, are unable to understand the substance of the disagreement, and decide "ah, it's just politics".
So, I don't think the comment is AI-generated for this reason.
They're using en-dash which is even easier: option-hyphen.
This is the wrong way to do AI detection. For one, an LLM would have used the right dash. And at least find someone wasting our time with belabored or overwrought text that doesn't even engage with anything.
In the pre-Unicode days, people would use two hyphens (--) to simulate em dashes.
I'm not sure that this is particularly accurate for the Rust case. The goal of this project was to perform a 1:1 port from TypeScript to a faster language, and the existing codebase assumes a garbage collector, so Rust was never really a realistic option here. I would bet they considered GCed languages only.
From https://github.com/microsoft/typescript-go/discussions/411
> Idiomatic Go strongly resembles the existing coding patterns of the TypeScript codebase, which makes this porting effort much more tractable.
> We also have an unusually large amount of graph processing, specifically traversing trees in both upward and downward walks involving polymorphic nodes. Go does an excellent job of making this ergonomic, especially in the context of needing to resemble the JavaScript version of the code.
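As a rough illustration of that last point, here is a sketch with entirely hypothetical node types (the real compiler's AST is far richer): polymorphic nodes become an interface with a small method set, and upward/downward walks stay plain recursive functions, structurally close to the JavaScript original.

```go
package main

import "fmt"

// Node is a hypothetical polymorphic AST node, invented for illustration.
type Node interface {
	Parent() Node
	Children() []Node
}

type baseNode struct {
	parent   Node
	children []Node
}

func (n *baseNode) Parent() Node     { return n.parent }
func (n *baseNode) Children() []Node { return n.children }

// walkDown visits a node and all of its descendants.
func walkDown(n Node, visit func(Node)) {
	visit(n)
	for _, c := range n.Children() {
		walkDown(c, visit)
	}
}

// walkUp visits a node and all of its ancestors.
func walkUp(n Node, visit func(Node)) {
	for ; n != nil; n = n.Parent() {
		visit(n)
	}
}

func main() {
	root := &baseNode{}
	child := &baseNode{parent: root}
	root.children = []Node{child}
	walkDown(root, func(n Node) { fmt.Println("down") })
	walkUp(child, func(n Node) { fmt.Println("up") })
}
```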
Personally, I'm a big believer in choosing the right language for the job. C# is a great language, and often is "good enough" for many jobs. (I've done it for 20 years.) That doesn't mean it's always the best choice for the job. Likewise, sometimes picking a "familiar language" for a target audience is better than picking a personal favorite.
But, the team posted their rationale for Go here: https://github.com/microsoft/typescript-go/discussions/411
I don't follow. If they had picked Rust over Go why couldn't you also argue that they are prioritising shipping a working solution over language politics. It seems like a meaningless statement.
There is already a growing number of native-code tools of the JS/TS ecosystem, like esbuild or swc.
Maybe we should expect attempts at native AOT compilation for TS itself, to run on the server side, much like C# has an AOT native-code compiler.
Really is this a surprise to anyone? I don't think anyone thinks JS is suitable for 'systems programming'.
Javascript is the language we have for the browser; there's no value in debating its merits when it's the only option. Javascript on the server has only ever accrued benefits from being the same language as the browser.
I hope you really mean for "userspace tools / programs" which is what these dev-tools are, and not in the area of device drivers, since that is where "systems programming" is more relevant.
I don't know why one would choose JS or TS for "systems programming", but I'm assuming you're talking about user-space programs.
But really, those who know the difference between a compiled language and a VM-based language understand the fundamental performance limitations of developer tools written in VM-based languages like JS or TS, and would avoid them, since those languages were not designed for this use case.
I think they went for Go mostly because of memory management, async and syntactic similarity to interpreted languages which makes total sense for a port.
This should prevent most of the memory safety issues, though data races could still be tricky: as discussed elsewhere in the thread, a data race in Go can itself break memory safety.
Also in this space is Gleam [2] which targets Erlang / OTP, if high concurrency and fault tolerance is your cup of tea.
[1]: https://reasonml.github.io/
[2]: https://gleam.run/
Half of the perf gain is from moving to native code; the other half is from concurrency.
10x faster compilation, not runtime performance
This is an admission that these JavaScript-based languages (including TypeScript) are just completely unsuitable for performance-critical situations, especially as the codebase scales.
As long as it is a compiled language with reasonable performance and proper memory management, Go is the unsurprising, but wise, choice to solve this problem.
But this choice definitively shows (and as admitted by the TS team) how immature both JavaScript and TypeScript are in performance and scalability scenarios; they should absolutely be avoided for building systems that need those properties, especially in the backend.
Just keep it in the frontend.
Anyway, JS is not immature in performance per se; it's just that for this particular use case a native language is faster. But they had to solve the problem first before they could decide which language was best for it.
Why is TypeScript not already a standard natively supported by browsers?!
Also, I get the sense from the video that it still outputs only JS. It would be nice if we could build TypeScript executables that didn't require that, even if it was just WASM, though that's more a different backend than a different compiler.
Edit: C# was addressed: https://github.com/microsoft/typescript-go/discussions/411#d...