All this stuff has been around for about five years now, and was mostly working five years ago. The APIs should have settled down long ago. Despite this, there are frequent "refactorings" which cause breaking changes to APIs. (I'm tempted to term this "refuckering".)
Some of this API churn is just renaming types or enum values for consistency, or adding new parameters to functions. Some changes are major, such as turning an event loop inside out. Code using the API must be fixed to compensate.
Because of all the breaking changes, the related crates (Wgpu, the interface to Vulkan and the other graphics back ends; Winit, the interface to the operating system's window manager; and Egui, which handles 2D dialog boxes and menus) must advance in lockstep. There's not much coordination between the various development groups on this. Wgpu and Winit both think they're in charge, and that the others should adapt to them. Egui tries to cope. Users of the stack suffer in silence.
When there's a bug, there's no going back to an older version. The refuckering prevents that. Changes due to API breaks are embedded in code that uses these APIs.
I'm currently chasing what ought to be a simple bug in Egui, and I've been stuck for over a month. The unit tests won't run on some target platforms where they used to work, and bug reports are ignored while new features are being added. (Users keep demanding more features in Egui, and Egui is growing towards web-browser levels of layout complexity.)
Most users are giving up. In the last year, three 3D rendering libraries and two major 3D game projects have been abandoned. There are about two first-rate 3D games in Rust, Tiny Glade and Hydrofoil Generation, and both avoid this graphics stack.
The "Stability by Design" article is helpful in that it makes it clear what's gone wrong in Rust 3D land.
It's also why we see so few original projects in Rust (compared to Go, etc.), and so many rewrite-in-Rust projects: a rewrite of `ls` or `grep` is a project whose requirements are engraved in stone.
Creating an entirely new project requires more flexibility, as the requirements are only fully specified once some user feedback is in.
It would not be wise to choose Rust for something of an exploratory nature; the large-scale refactors that an original project inevitably requires are going to be particularly painful.
I'm glad the article was helpful though!
However, I find that inexperienced people tend to coalesce around some languages rather than others.
> When there's a bug, there's no going back to an older version.
This practice was developed by the industry for a reason. It would be really silly to hear of people running AI on Windows XP.
> Most users are giving up.
Rust is pretty new; users shouldn't rush to switch stacks because of hype. It takes time for languages to mature, and even longer for the ecosystem.
Is it fair to assume that every individual library/API can somewhat easily create a brand-new world with every release (because downstream code won't compile until the types are "re-aligned"), yet nobody bothers to check whether the new release works with any other library/API?
I think the problem is partly cultural, specific to an ecosystem, but also fundamental. It takes a lot of type craft, care, and creativity to design future-proof function/method signatures that are "open" to extension without breakage.
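As one illustration in Rust (all names here are invented for the example): instead of a growing list of positional parameters, take an options struct and mark it `#[non_exhaustive]`, so adding a field later is not a breaking change for downstream crates.

```rust
/// Hypothetical options struct; the fields are placeholders.
/// `#[non_exhaustive]` forbids struct literals outside this crate,
/// so new fields can be added later without breaking callers.
#[non_exhaustive]
#[derive(Debug, Clone, Default)]
pub struct RenderOptions {
    pub vsync: bool,
    pub msaa_samples: u32,
}

/// The signature stays stable even as the set of options grows.
pub fn render(opts: &RenderOptions) {
    // rendering logic elided
    let _ = opts;
}

// Downstream usage: start from Default and set only what you need.
pub fn caller() {
    let mut opts = RenderOptions::default();
    opts.vsync = true;
    render(&opts);
}
```

The trade-off is that downstream code has to go through `Default` (or a builder) instead of a plain struct literal.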
When I made the comment above, I assumed a library had already passed that filter.
It used to be, but the compiler changed.
I understand it's an extreme example (Zig is both niche and pre-1.0), but it was the first thing that came to mind as a counterexample to the idea of an API that never changes.
It's not just an LLM thing: when I looked up tutorials and docs, most of them were also using code that didn't work anymore. And the library I needed only worked with a specific version, so I had to upgrade... but not too far!
Boring, "painful" languages, by contrast, were quite productive for game jams. Except the time when adding a member to a class broke the Emscripten build on the last day, and it took me the entire day to track it down (because why on earth would it be that!)
But woe unto the user who first starts using your library after a decade of that "evolution" and is faced with a dozen functions that all have similar but increasingly long names and do very similar things with subtle but likely important differences. (I guess a culture of "the longest function name is probably the newest and the one you want" will emerge eventually.)
Personally, I like when a library's API represents the best way the author knows to tackle a given problem today without also containing an accumulated pile of how they thought the problem should have been tackled years ago before they knew better.
If I want the old solutions, that's what versioning is for. I'll use the old version.
And you'll miss all the new stuff for the parts where you don't want the old solution. And you'll keep all the old bugs.
> faced with a dozen functions that all have similar but increasingly long names and do very similar things with subtle but likely important differences.
Unless all the old versions are marked as old/deprecated and can be hidden from your view. Then you only care about the old stuff if you used it before and don't want to change.
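Rust, for what it's worth, has this built in: `#[deprecated]` makes the compiler warn anyone still calling the old item, and `#[doc(hidden)]` keeps it out of the rendered docs. A small sketch, with invented function names:

```rust
use std::io;
use std::net::{SocketAddr, TcpStream};
use std::time::Duration;

/// Old entry point: still works, but warns at compile time and is
/// hidden from the generated documentation.
#[deprecated(since = "2.0.0", note = "use `connect_with_timeout` instead")]
#[doc(hidden)]
pub fn connect(addr: &str) -> io::Result<TcpStream> {
    connect_with_timeout(addr, Duration::from_secs(30))
}

/// New entry point with the extra parameter.
pub fn connect_with_timeout(addr: &str, timeout: Duration) -> io::Result<TcpStream> {
    let addr: SocketAddr = addr
        .parse()
        .map_err(|e| io::Error::new(io::ErrorKind::InvalidInput, e))?;
    TcpStream::connect_timeout(&addr, timeout)
}
```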
I find this to be extremely "it depends": this is a normal interaction amongst software engineers, if you have mature engineers.
It's only not normal in those ecosystems (everyone knows which ones those are) which appear not to have had adults in the room during the design phase.
If you've only been developing for the last decade or so, you may think it's normal that stuff breaks all the time when software is upgraded. If you've been developing since the 90s, you'd know that "upgrades bring breakage" is a new phenomenon, and that the field (prior to 2014 or thereabouts) is filled with engineering compromises made purely to avoid breakage.
It's why, if I have an application from 10 years ago in React, I won't even try to build it again, but if I have an application from 1995 in C, it'll build just fine, dependencies included[1] (with pages of warnings, of course).
[1] C dependency graphs are small and shallow, because of the high friction in adding third-party libraries.
This seems a little nuts, to be honest. It feels like you're just pushing failures from easy-to-diagnose issues with the function signature to hard-to-diagnose issues with the function body. Basically, the function body now has to support any combination of hash parameters that the function has ever taken over its entire history. Is this information documented? Do you have test coverage for every possible parameter combo?
Of course, normal rules apply like, "Don't pollute your program with a proliferation of booleans."
"Am I missing a key, is the value in the hashmap nil, or was there an upstream error or error in this function that is presenting as nil?"
https://clojure.org/guides/destructuring#_associative_destru...
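To make the missing-key vs nil question concrete, here's a small sketch in Rust rather than Clojure (names invented), where the type of the lookup forces the cases apart; an upstream error would be a fourth variant or a `Result` wrapper:

```rust
use std::collections::HashMap;

/// Invented example: a typed view of "what did the caller actually pass?"
#[derive(Debug, PartialEq)]
enum Field<'a> {
    Missing,        // the key was never supplied
    Null,           // the key was supplied, but explicitly as "no value"
    Value(&'a str), // the key was supplied with a real value
}

fn lookup<'a>(params: &'a HashMap<String, Option<String>>, key: &str) -> Field<'a> {
    match params.get(key) {
        None => Field::Missing,
        Some(None) => Field::Null,
        Some(Some(v)) => Field::Value(v.as_str()),
    }
}

fn main() {
    let mut params = HashMap::new();
    params.insert("name".to_string(), Some("egui".to_string()));
    params.insert("retries".to_string(), None);

    assert_eq!(lookup(&params, "name"), Field::Value("egui"));
    assert_eq!(lookup(&params, "retries"), Field::Null);
    assert_eq!(lookup(&params, "timeout"), Field::Missing);
}
```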
Some extra reading if you're curious: https://softwareengineering.stackexchange.com/questions/2723...
I'm not convinced, though, that the Clojure graph represents something I'd view positively. Notably, the Scala codebase gets smaller at one point, which looks fantastic to me. Nobody wants the world changing under their feet every five seconds, but if code just accumulates without refactors, the end product often becomes not only unusable but also difficult to replace.
Personally, I'd much prefer more frequent refactors in a changing codebase (versus pure code addition), but with strict adherence to semver. That way I can stick on v3 or whatever if I'm worried about v4 breaking things for me, while the project itself doesn't need to worry about stagnating or being stuck with design decisions that aren't relevant anymore.
I'm always happy when I see a library like pyarrow that has high version numbers, because I take it as a sign that they're most likely actually following semver, as opposed to libraries that stay on v0 for 10+ years.
Say you have a function with a specific signature, and it does its stuff.
Then later, you improve the stuff, and the signature has to change in a breaking way.
Do not do that.
Instead, create a new function (with a new signature) and move the whole implementation into it.
Then rewrite the old function to call your new function.
This way, you keep one piece of "production code": the new function. And you keep one piece of interface-to-legacy code: the old function (which is nothing but a compatibility gateway to the new function, and can be safely forgotten).
Old users see no breakage, new users get the features, you maintain a single implementation, and your only extra burden is a small compatibility layer.
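A minimal Rust sketch of that pattern (all names invented; the same shape works in most languages):

```rust
/// Invented config type for illustration.
#[derive(Debug, Default)]
pub struct Config {
    pub values: Vec<(String, String)>,
}

/// New function: the single "production" implementation lives here.
pub fn load_config_with_defaults(text: &str, defaults: &str) -> Config {
    let mut cfg = Config::default();
    for line in defaults.lines().chain(text.lines()) {
        if let Some((key, value)) = line.split_once('=') {
            cfg.values.push((key.trim().to_string(), value.trim().to_string()));
        }
    }
    cfg
}

/// Old function: unchanged signature, now just a thin compatibility
/// gateway to the new one, so existing callers keep compiling.
pub fn load_config(text: &str) -> Config {
    load_config_with_defaults(text, "")
}
```

If the library follows semver, the old function can additionally be marked `#[deprecated]` so new users get nudged toward the new one without anything breaking.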
I previously worked in PHP, Perl CGI, Java, and Python, on web tools mostly based on MySQL and other SQL database flavours.
I worked in a Clojure-only shop for a while and they taught me the ways; after that you don't go back. Everything can quickly click into place. It's daunting to start, and the learning curve is shallow: it takes a long time to get anywhere. But as a curiosity it was fun, then I started to hate how everything else was done, and now I've sold my soul to the Clojure devil.
You add the code, and rather than change it if needed, you just leave it there and add more code.
You could also argue that Scala is much safer, so changes to the code are not scary and it's easier to be stable even under code changes.
But you're right: that would be particularly useful information.
Well, that's just slander as far as I'm concerned. Of course we non-Clojure programmers believe in backwards compatibility too. What a crazy thing to suggest that we don't.
You could instead consider:
* How many major version releases / rewrites happen in this language? (This might be a sign of ecosystem instability.)
* How much new code is replacing old code? (This might imply the language needs more bugfixes.)
I really liked those charts, I wonder how you can generate them, whether there's a tool out there that you can just feed a Git repo into or something.
The retention charts show you how much new code is replacing old code, and you can see the releases/rewrites as the code gets replaced.
This sentence bothered me way more than it should've, for some reason.
The outcome is the same, statically or dynamically typed. In both cases one needs to refactor when there are breaking changes.
No. In statically typed languages, failures are usually caught in CI. In dynamically typed languages, they end up in production - https://github.com/pypa/setuptools/issues/4519
From a CI/CD perspective, you should make sure that on updates, things won't break. As others suggest, a maintainable project would have test suites.
Except if you aim to have a program that you will never update again. Write the code once, compile it, and archive it. When you decide to keep that program available to potential clients, be prepared to back up dependencies, the OS it runs on, and everything that makes it operable. If there is a breaking change in the ecosystem of that program, it will break it.
Also, your statement is only partially correct. Breaking changes in dependencies end up in production only if you don't have tests. I know this is news to many people using static types, but many Ruby shops, for example, have test coverage in excess of 90%, and at the very least I never approve a PR without happy-path tests.
Which are opt-in in dynamically typed languages.
You get the same functionality in statically typed languages and it's not opt-in, AND the developer doesn't have to do the work of type-checking (the compiler does it).
That's true. However, you have now replaced the work of a compiler with testing.
On top of that, sometimes mocking in tests also hides the kind of string-level breaking change you don't yet know about.
Off the top of my head, I saw this happen with a base64 string padded vs unpadded, or emojis making their way through when they did not before, etc.
So yeah, the compiler tells you which pieces of your jigsaw apparently fit together, but tests show you whether you get the right picture on the jigsaw at the end (or at least in some regions of it).