Then I moved into backend development, where I was doing all Java, Scala, and Python. It was... dare I say... easy! Sure, these kinds of languages bring with them other problems, but I loved batteries-included standard libraries, build systems that could automatically fetch dependencies -- and oh my, such huge communities with open-source libraries for nearly anything I could imagine needing. Even if most of the build systems (maven, sbt, gradle, pip, etc.) have lots of rough edges, at least they exist.
Fast forward 12 years, and I find myself getting back into Xfce. Ugh. C is such a pain in the ass. I keep reinventing wheels, because even if there's a third-party library, most of the time it's not packaged on many of the distros/OSes our users use. Memory leaks, NULL pointer dereferences, use-after-free, data races, terrible concurrency primitives, no tuples, no generics, primitive type system... I hate it.
I've been using Rust for other projects, and despite it being an objectively more difficult language to learn and use, I'm still much more productive in Rust than in C.
--> src/main.rs:45:34
|
45 | actions.append(&mut func(opt.selected));
| ---- ^^^^^^^^^^^^ expected `&str`, found `String`
| |
| arguments to this function are incorrect
|
help: consider borrowing here
|
45 | actions.append(&mut func(&opt.selected));
|
I even had to cheat a little to get that far, because my editor used rust-analyzer to flag the error before I had the chance to build the code.

Also, I highly recommend getting into the habit of running `cargo clippy` regularly. It's a wonderful tool for catching non-idiomatic code. I learned a lot from its suggestions on how I could improve my work.
When I say Rust is harder to use (even after learning it decently well), what I mean is that it's still easier to write a pile of C code and get it to compile than it is to write a pile of Rust code and get it to compile.
The important difference is that the easier-written C code will have a bunch more bugs in it than the Rust code will. I think that's what I mean when I say Rust is harder to use, but I'm more productive in it: I have to do so much less debugging when writing Rust, and writing and debugging C code is more difficult and takes up more time than writing the Rust code (and doing whatever less debugging is necessary there).
> Also, I highly recommend getting into the habit of running `cargo clippy` regularly. It's a wonderful tool for catching non-idiomatic code.
That's a great tip, and I usually forget to do so. On a couple of my personal projects, I have a CI step that fails the build if there are any clippy messages, but I don't use it for most of my personal projects. I do have a `cargo fmt --check` in my pre-commit hooks, but I should add clippy to that as well.
As someone who is more familiar with Rust than C: only if you grok the C build system(s). For me, getting C to build at all (esp. if I want to split it up into multiple files or use any kind of external library) is much more difficult than doing the same in Rust.
Regarding Clippy, you can also crank it up with `cargo clippy -- -Wclippy::pedantic`. Some of the advice at that level gets a little suspect. Don't just blindly follow it. It offers some nice suggestions though, like:
warning: long literal lacking separators
--> src/main.rs:94:22
|
94 | if num > 1000000000000 {
| ^^^^^^^^^^^^^ help: consider: `1_000_000_000_000`
|
that you don't get by default.

Rust doesn’t magically make the vast majority of bugs go away. Most bugs are entirely portable!
You might say that the C and Rust code will have the same number of logic errors, but I'm not convinced that's the case either. Sure, if you just directly translate the C to Rust, maybe. But if you rewrite the C program in Rust while making good use of Rust's type system, it's likely you'll have fewer logic errors in the Rust code as well.
Rust has other nice features that will help avoid bugs you might write in a C program, like most Result-returning functions in the stdlib being marked #[must_use], or match expressions being exhaustive, to name a couple things.
So for example today I dealt with a synchronization issue. This turned out not to be a code bug but a human misunderstanding of a protocol specification, which was not possible to encode into a type system of any sort. The day before it was a constraint network specification error. In both cases the code was entirely irrelevant to the problem.
Literally all I deal with are human problems.
My point is Rust doesn't help with these at all, however clever you get. It is no different to C, but C will give you a superset of vulnerabilities on top of that.
Fundamentally Rust solves no problems I have. Because the problems that matter are human ones. We are too obsessed with the microscopic problems of programming languages and type systems and not concentrating on making quality software which is far more than just "Rust makes all my problems go away" because it doesn't. It kills a small class of problems which aren't relevant to a lot of domains.
(incidentally the problems above are implemented in a subset of c++)
I run into those things nearly daily, so... ok then.
You can also have that hooked up to the editor, just like `cargo check` errors. I find this to be quite useful, because I have a hard time getting into habits, especially for things that I'm not forced to do in some way. It's important that those Clippy lints are shown as soft warnings instead of hard errors though, as otherwise they'd be too distracting at times.
* In Rust, you will have to deal with a lot of unnecessary errors. The language is designed to make its users create a host of auxiliary entities: results, options, futures, tasks and so on. Instead of dealing with the "interesting" domain objects, the user of the language is mired in the "intricate interplay" between objects she doesn't care about. This is, in general, a woe of languages with extensive type systems, but in Rust it's a woe on a whole new level. Every program becomes a Sisyphean struggle to wrangle through all those unnecessary objects to finally get to write the actual code. Interestingly though, there's a tendency in a lot of programmers to like solving these useless problems instead of dealing with the objectives of their program (often because those objectives are boring or because programmers don't understand them, or because they have no influence over them).
What it needs to say is something along the lines of "a function f is defined with type X, but is given an argument of type Y": maybe the function should be defined differently, maybe the argument needs to change -- it's up to the programmer to decide.
I buy a fruit mixer from Amazon.com ; I send it back along with a note: expected a 230VAC mixer, found a 110VAC mixer.
For example with C++ the language offers enough functionality that you can create abstractions at any level, from low level bit manipulation to high level features such as automatic memory management, high level data objects etc.
With C you can never escape the low level details. Cursed to crawl.
Back in 1994/95, I wrote an API, in C, that was a communication interface. We had to use C, because it was the only language that had binary/link compatibility between compilers (the ones that we used).
We designed what I call "false object pattern." It used a C struct to simulate a dynamic object, complete with function pointers (that could be replaced, in implementation), and that gave us a "sorta/kinda" vtable.
Worked a charm. They were still using it, 25 years later.
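For anyone who hasn't seen the idiom, here's a minimal sketch of that kind of struct-with-function-pointers "object" (the names are invented for illustration, not the original API):

    #include <stddef.h>

    /* A "false object": a struct carrying its own function pointers,
       giving a sorta/kinda vtable that each implementation fills in. */
    typedef struct Channel Channel;
    struct Channel {
        void *impl;   /* implementation-private state */
        int  (*send)(Channel *self, const void *buf, size_t len);
        int  (*recv)(Channel *self, void *buf, size_t len);
        void (*close)(Channel *self);
    };

    /* Callers only see the struct layout, which is what makes it
       binary/link compatible across compilers. */
    static int send_hello(Channel *ch)
    {
        return ch->send(ch, "hello", 5);
    }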
That said, I don't really miss working at that level. I have been writing almost exclusively in Swift, since 2014.
You were not alone in this. It is the basis of glib's GObject which is at the bottom of the stack for all of GTK and GNOME.
You don't have to think about exceptions, overloaded operators, copy constructors, move semantics etc.
You'll also still need to think about when to copy and move ownership, only without a type system to help you tell which is which, and good luck ensuring resources are disposed correctly (and only once) when you can't even represent scoped objects. `goto` is still the best way to deal with destructors, and it still takes a lot of boilerplate.
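For readers who haven't written much C, the goto-based cleanup idiom being referred to usually looks something like this (a hypothetical example, not any particular project's code):

    #include <stdio.h>
    #include <stdlib.h>

    /* Every early exit jumps to a single cleanup point, so each
       resource is released exactly once, in reverse order. */
    int process_file(const char *path)
    {
        int rc = -1;
        FILE *f = NULL;
        char *buf = NULL;

        f = fopen(path, "rb");
        if (!f)
            goto out;

        buf = malloc(4096);
        if (!buf)
            goto out;

        if (fread(buf, 1, 4096, f) == 0)
            goto out;

        rc = 0; /* success */

    out:
        free(buf);      /* free(NULL) is a no-op */
        if (f)
            fclose(f);
        return rc;
    }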
The beauty of C is that it allows you to pick your level of complexity.
You're also still very much free to write either language purely, and "glue" them together easily using Cython.
zig seems like someone wanted something between C and "the good parts" of C++, with the generations of cruft scrubbed out
rust seems like someone wanted a haskell-flavoured replacement for C++, and memory-safety
i would expect "zig for C++" to look more like D or Carbon than rust. and i'd expect "rust for C" to have memory safety and regions, and probably steal a few ocaml features
comptime is a better version of C++ templates.
Not everyone likes RAII by itself. Allocating and deallocating things one at a time is not always efficient. That is not the only way to use RAII but it's the most prevalent way.
I like strong, featureful type systems and functional programming; Zig doesn't really fit the bill for me there. Rust is missing a few things I want (like higher-kinded types; GATs don't go far enough for me), but it's incredible how they've managed to build so many zero- and low-cost abstractions and make Rust feel like quite a high-level language sometimes.
Yeah, it has new features, but you're stuck working on a C89 codebase, good luck!
I don't know a great answer to that. I almost feel like languages should cut and run at some point and become a new thing.
The problem is that I want a language where things are safe by default. Many of the newer stuff added in C++ makes things safe, perhaps even to the level of Rust's guarantees -- but that's only if you use only these new things, and never -- even by accident -- use any of the older patterns.
I'd rather just learn a language without all that baggage.
I suffered writing those for many years. I finally simply learned not to do them anymore. Sort of like there's a grain of sand on the bottom of my foot and the skin just sort of entombed it in a callus.
This all just comes off incredibly arrogant, if I'm being honest.
In this instance Walter is correct - the mistakes he listed are very rarely made by experienced C programmers, just as ballet dancers rarely trip over their own feet walking down a pavement.
The problem of those errors being commonplace in those that are barely five years in to C coding and still have another five to go before hitting the ten year mark still exists, of course.
But it's a fair point that given enough practice and pain those mistakes go away.
Do you know how fucking obnoxious it is when 200 people like you come into every thread to tell 10 C or Javascript developers that they can't be trusted with the languages and environments they've been using for decades? There are MILLIONS of successful projects across those two languages, far more than Rust or Typescript. Get a fucking grip.
I realize a lot of people don't want to use it; and that's fine, don't use it.
I also like that C forces me to do stuff myself. It doesn't hide the magic and complexity. Also, my typical experience is that if you have to write your standard data structures on your own, you not only learn much more, but you also quickly see possible performance improvements for your specific use case that would otherwise have been hidden below several layers of library abstractions.
This has put me in a strange situation: everyone around me is always trying to use the latest feature of the newest C++ version, while I increasingly try to get rid of C++ features. A typical example I have encountered several times now is people using elaborate setups with std::string_view to avoid string copying, while exactly the same functionality could have been achieved with less code, using just a simple raw const char* pointer.
Do `#include <gc.h>` then just use `GC_malloc()` instead of `malloc()` and never free. And add `-lgc` to linking. It's already there on most systems these days, lots of things use it.
You can add some efficiency by `GC_free()` in cases where you're really really sure, but it's entirely optional, and adds a lot of danger. Using `GC_malloc_atomic()` also adds efficiency, especially for large objects, if you know for sure there will be no pointers in that object (e.g. a string, buffer, image etc).
There are weak pointers if you need them. And you can add finalizers for those rare cases where you need to close a file or network connection or something when an object is GCd, rather than knowing programmatically when to do it.
But simply using `GC_malloc()` instead of `malloc()` gets you a long long way.
You can also build Boehm GC as a full transparent `malloc()` replacement, and replacing `operator new()` in C++ too.
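A minimal sketch of what that looks like in practice (assuming the usual libgc header and `-lgc` at link time):

    #include <gc.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        GC_INIT(); /* optional on many platforms, but harmless */

        /* Collected allocation: never freed explicitly. */
        char **lines = GC_malloc(100 * sizeof *lines);

        /* "Atomic" allocation: the collector won't scan it for pointers,
           which is cheaper for plain byte buffers and strings. */
        char *buf = GC_malloc_atomic(64);
        strcpy(buf, "hello");
        lines[0] = buf;

        printf("%s\n", lines[0]);
        return 0;
    }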
> Do `#include <gc.h>` then just use `GC_malloc()` instead of `malloc()` and never free.
Even more liberating (and dangerous!): do not even malloc, just use variable-length arrays:
void f(float *y, float *x, int n)
{
float t[n]; // temporary array, destroyed at the end of scope
...
}
This style forces you to alloc the memory at the outermost scope where it is visible, which is a nice thing in itself (even if you use malloc).

In practice, stack sizes used to be quite limited and system-dependent. A modern Linux system will give you several megabytes of stack by default (128MB in my case, just checked on my Linux Mint 22 Wilma). You can check it using "ulimit -a", and you can change it for your child processes using "ulimit -s SIZE_IN_KB". This is useful for your personal usage, but may pose problems when distributing your program, as you'll need to set up the environment where your program runs, which may be difficult or impossible. There's no ergonomic way to do that from inside your C program, that I know of.
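For completeness: POSIX does let you query the limit (and request that the soft limit be raised up to the hard limit) from inside the program via getrlimit/setrlimit; whether that actually helps the main thread's stack is platform-dependent, so treat this as a best-effort sketch:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_STACK, &rl) == 0)
            printf("stack soft limit: %llu bytes\n",
                   (unsigned long long)rl.rlim_cur);

        rl.rlim_cur = rl.rlim_max;  /* request soft limit = hard limit */
        if (setrlimit(RLIMIT_STACK, &rl) != 0)
            perror("setrlimit");
        return 0;
    }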
I think the only other language that has a similar property is Zig.
> Odin is a manual memory management based language. This means that Odin programmers must manage their own memory, allocations, and tracking. To aid with memory management, Odin has huge support for custom allocators, especially through the implicit context system.
https://odin-lang.org/docs/overview/#implicit-context-system
Not that I actually think this is a good idea (I think the explicitly style of Zig is better), but it is an idea nonetheless.
The most inconvenient aspect for me is manual memory management, but it’s not too bad as long as you’re not dealing with text or complex data structures.
I never liked that you have to choose between this and C++ though. C could use some automation, but that's C++ in "C with classes" mode. The sad thing is, you can't convince other people to use this mode, so all you have is either raw C interfaces which you have to wrap yourself, or C++ interfaces which require galaxy brain to fully grasp.
I remember growing really tired of "add member - add initializer - add finalizer - sweep and recheck finalizers" loop. Or calculating lifetime orders in your mind. If you ask which single word my mind associates with C, it will be "routine".
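To make that "routine" concrete, here's a toy example (hypothetical struct) of the three places every new member has to be threaded through:

    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        char *title;
        char *body;              /* 1. add the member...              */
    } Doc;

    Doc *doc_new(void)
    {
        Doc *d = calloc(1, sizeof *d);
        if (!d) return NULL;
        d->title = strdup("");
        d->body  = strdup("");   /* 2. ...add the initializer...      */
        return d;
    }

    void doc_free(Doc *d)
    {
        if (!d) return;
        free(d->title);
        free(d->body);           /* 3. ...and remember the finalizer. */
        free(d);
    }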
C++ would be amazing if its culture wasn't so obsessed with needless complexity. We had a local joke back then: every C++ programmer writes heaps of C++ code to pretend that the final page of code is not C++.
In my experience, whether it's software architecture or programming language design, it's easy to make things complicated, but it takes vision and discipline to keep them simple.
C++ can avoid string copies by passing `const string&` instead of by value. Presumably you're also passing around a subset of the string, and you're doing bounds and null checks, e.g.
const char* Buf = "Hello World";
print_hello(Buf, 6);
string_view is just a char* + len, which is what you should be passing around anyway.

Funnily enough, the problem with string_view is actually C APIs, and this problem exists in C. Here's a perfect example (I'm using fopen, but pretty much every C API has this problem):
#include <stdio.h>

FILE* open_file_from_substr(const char* start, int len)
{
    // fopen can't take a length: it reads the path up to the NUL
    // terminator, so the whole remaining buffer is treated as one name.
    (void)len;
    return fopen(start, "r");
}

void open_files(void)
{
    const char* buf = "file1.txt file2.txt file3.txt";
    for (int i = 0; i < 30; i += 10) // each name is 9 chars plus a separator
    {
        open_file_from_substr(buf + i, 9); // nope.
    }
}
> When I develop methods in pure C, I always enjoy that I can concentrate 100% on algorithmic aspects instead of architectural decisions which I only have to decide on because of the complexity of the language

I agree this is true when you develop _methods_, but I think this falls apart when you design programs. I find that you spend as much time thinking about memory management and pointer safety as you do algorithmic aspects, and not in a good way. Meanwhile, with C++, Go and Rust, I think about lifetimes, ownership and data flow.
Then I try actually going through the motions of writing a production-grade application in C and I realise why I left it behind all those years ago. There's just so much stuff one has to do on one's own, with no support from the computer. So many things that one has to get just right for it to work across edge cases and in the face of adversarial users.
If I had to pick up a low-level language today, it'd likely be Ada. Similar to C, but with much more help from the compiler with all sorts of things.
Related-- I'm curious what percentage of Rust newbies "fighting the borrow checker" is due to the compiler being insufficiently sophisticated vs. the newbie not realizing they're trying to get Rust to compile a memory error.
This was made all the worse by the fact that I frequently, eventually, succeeded in "winning". I would write unnecessary and unprofiled "micro-optimizations" that I was confident were safe and would remain safe in Rust, that I'd never dare try to maintain in C++.
Eventually I mellowed out and started .clone()ing when I would deep copy in C++. Thus ended my fight with the borrow checker.
This must have been a very very long time ago, with optimizing compilers you don't really know even if they will emit any instructions.
I wouldn't dare guess what a compiler does to a RISC target.
(But yes, this was back in the early-to-mid 2000s I think. Whether that is a long time ago I don't know.)
[1]: https://entropicthoughts.com/python-programmers-experience
Just let your C(++) compiler generate assembly on an ARM-64 platform, like Apple Silicon or iOS. Fasten your seat belt.
C source files for demoscene and games were glorified macro assemblers full of inline assembly.
Is that not the problem rust was created to solve?
There's a big cloud of hype at the bleeding edge, but if you dare to look beyond that cloud, there are many boring and well matured technologies doing fine.
So, now, after a long time, Ada is starting to catch on???
When Ada was first announced, back then, my favorite language was PL/I, mostly on CP67/CMS, i.e., IBM's first effort at interactive computing with a virtual machine on an IBM 360 instruction set. Wrote a little code to illustrate digital Fourier calculations, digital filtering, and power spectral estimation (statistics from the book by Blackman and Tukey). Showed the work to a Navy guy at the JHU/APL and, thus, got "sole source" on a bid for some such software. Later wrote some more PL/I to have 'compatible' replacements for three of the routines in the IBM SSP (scientific subroutine package) -- converted 2 from O(n^2) to O(n log(n)) and the third got better numerical accuracy from some Ford and Fulkerson work. Then wrote some code for the first fleet scheduling at FedEx -- the BOD had been worried that the scheduling would be too difficult, some equity funding was at stake, and my code satisfied the BOD, opened the funding, and saved FedEx. Later wrote some code that saved a big part of IBM's AI software YES/L1. Gee, liked PL/I!
When I started on the FedEx code, was still at Georgetown (teaching computing in the business school and working in the computer center) and in my apartment. So, called the local IBM office and ordered the PL/I Reference, Program Guide, and Execution Logic manuals. Soon they arrived, for free, via a local IBM sales rep highly curious why someone would want those manuals -- sign of something big?
Now? Microsoft's .NET. On Windows, why not??
Money and hardware requirements.
Finally there is a mature open source compiler, and our machines are light years beyond those beefy workstations required for Ada compilers in the 1980's.
Whereas C was clearly designed to be a practical language, with feedback from implementing an operating system in C, Ada lacked that kind of practical experience. And it shows.
I don't know anything about modern day Ada, but I can see why it didn't catch on in the Unix world.
I'm curious about this list, because it definitely doesn't seem that way these days. It'd be interesting to see how many of these are still possible now.
Otherwise I use Go if a GC is acceptable and I want a simple language or Rust if I really need performance and safety.
def route = fn (request) {
if (request.method == GET ||
request.method == HEAD) do
locale = "en"
slash = if Str.ends_with?(request.url, "/") do "" else "/" end
path_html = "./pages#{request.url}#{slash}index.#{locale}.html"
if File.exists?(path_html) do
show_html(path_html, request.url)
else
path_md = "./pages#{request.url}#{slash}index.#{locale}.md"
if File.exists?(path_md) do
show_md(path_md, request.url)
else
path_md = "./pages#{request.url}.#{locale}.md"
if File.exists?(path_md) do
show_md(path_md, request.url)
end
end
end
end
}
[1] https://git.kmx.io/kc3-lang/kc3/_tree/master/httpd/page/app/...

Yes it is unsafe and you can do absurd things. But it also doesn't get in the way of just doing what you want to do.
With C, a segmentation fault is not always easy to pinpoint.
However, the tooling for C is better: with some of the IDEs out there you can set breakpoints, walk through the code in a debugger, and spot more errors at compile time.
There is a debugger included with Perl, but after trying to use it a few times I have given up on it.
Give me C and Visual Studio when I need debugging.
On the positive side, shooting yourself in the foot with C is a common occurrence.
I have never had a segmentation fault in Perl. Nor have I had any problems managing the memory, the garbage collector appears to work well. (at least for my needs)
During Unix's early days, AT&T was still under this decree, meaning that it would not sell Unix like how competitors sold their operating systems. However, AT&T licensed Unix, including its source code, to universities for a nominal fee that covered the cost of media and distribution. UC Berkeley was one of the universities that purchased a Unix license, and researchers there started making additions to AT&T Unix which were distributed under the name Berkeley Software Distribution (this is where BSD came from). There is also a famous book known as The Lions' Book (https://en.wikipedia.org/wiki/A_Commentary_on_the_UNIX_Opera...) that those with access to a Unix license could read to study Unix. Bootleg copies of this book were widely circulated. The fact that university students, researchers, and professors could get access to an operating system (source code included) helped fuel the adoption of Unix, and by extension C.
When the Bell System was broken up in 1984, AT&T still retained Bell Labs and Unix. The breakup of the Bell System also meant that AT&T was no longer subject to the 1956 consent decree, and thus AT&T started marketing and selling Unix as a commercial product. Licensing fees skyrocketed, which led to an effort by BSD developers to replace AT&T code with open-source code, culminating with 4.3BSD Net/2, which is the ancestor of modern BSDs (FreeBSD, NetBSD, OpenBSD). The mid-1980s also saw the Minix and GNU projects. Finally, a certain undergraduate student named Linus Torvalds started work on his kernel in the early 1990s when he was frustrated with how Minix did not take full advantage of his Intel 386 hardware.
Had AT&T never been subject to the 1956 consent decree, it's likely that Unix might not have been widely adopted since AT&T probably wouldn't have granted generous licensing terms to universities.
It's difficult because I do believe there's an aesthetic appeal in doing certain one-off projects in C: compiled size, speed of compilation, the sense of accomplishment, etc. but a lot of it is just tedious grunt work.
When I simplify and think in terms of streams, it starts getting nice and tidy.
Then, I decided to move to Common Lisp and started gaining less and less money
Then, I decided to move to C and got Nerd Sniped "
Well, at least he seems happier xD
C is cool though
A proper virtual machine is extremely difficult to break out of (but it can still happen [1]). Containers are a lot easier to break out of. If virtual machines were more efficient in either CPU or RAM, I would want to use them more, but as it is, it's the worst of both worlds.
[1] https://www.zerodayinitiative.com/advisories/ZDI-23-982/
I looked at C++, but it seems that despite being more feature-rich, it also cannot auto-generate functions/methods for working with structures?
Also returning errors with dynamically allocated strings (and freeing them) makes functions bloated.
Also Gnome infrastructure (GObject, GTK and friends) requires writing so much code that I feel sorry for people writing Gnome.
Also, how do you install dependencies in C? How do you lock a specific version (or range of versions) of a dependency with specific build options, for example?
If you try to write the same complicated mess in C as you would in any other language it's going to hurt.
Not having a package manager can be a blessing, depends on your perspective.
Just pick the right projects and the language shines.
C++'s stdlib contains a lot of convenient features, writing them myself and pretending they aren't there is very difficult.
Disabling exceptions is possible, but will come back to bite you the second you want to pull in external code.
You also lose some of the flexibility of C, unions become more complicated, struct offsets/C style polymorphism isn't even possible if I remember correctly.
I love the idea though :)
I've never understood the motivation behind writing something in C++, but avoiding the standard library. Sure, it's possible to do, but to me, they are inseparable. The basic data types and algorithms provided by the standard library are major reasons to choose the language. They are relatively lightweight and memory-efficient. They are easy to include and link into your program. They are well understood by other C++ programmers--no training required. Throughout my career, I've had to work in places where they had a "No Standard Library" rule, but that just meant they implemented their own, and in all cases the custom library was worse. (Also, none of the companies could articulate a reason for why they chose to re-implement the standard library poorly--It was always blamed on some graybeard who left the company decades ago.)
Choosing C++ without the standard library seems like going skiing, but deliberately using only one ski.
You can productively use C++ as C-with-classes (and templates, and namespaces, etc.) without depending on the library. That leaves you no worse off than rolling your own support code in plain C.
Plenty of code bases also predate it, when I started coding C++ in 1995 most people were still rolling their own.
Your brain works a certain way, but you're forced to evolve into the nightmare half-done complex stacks we run these days, and it's just not the same job any more.
I am fast becoming a Zig zealot.
What I've discovered is that while it does regularize some of the syntax of C, the really noticeable thing about Zig is that it feels like C with all the stuff I (and everyone else) always end up building on my own built into the language: various allocators, error types, some basic safety guardrails, and so forth.
You can get clever with it if you want -- comptime is very, very powerful -- but it doesn't have strong opinions about how clever you should be. And as with C, you end up using most of the language most of the time.
I don't know if this is the actual criterion for feature inclusion and exclusion among the Zig devs, but it feels something like "Is this in C, or do C hackers regularly create this because C doesn't have it?" Allocators? Yes. Error unions? Yes. Pattern matching facilities? Not so much. ADTs? Uh, maybe really stupid ones? Generics, eh . . . sometimes people hack that together when it feels really necessary, but mostly they don't.
Something like this, it seems to me, results in features Zig has, features Zig will never have, and features that are enabled by comptime. And it's keeping the language small, elegant, and practical. I'm a big time C fan, and I love it.
For a C programmer, learning and becoming productive in Zig should be a much easier proposition than doing the same for Rust. You're not going to get the same safety guarantees you'd get with Rust, but the world is full of trade offs, and this is just one of them.
Rust is double expensive in this case. You have to memorize the borrow checker and be responsible for all the potential undefined behavior with unsafe code.
But I am not a super human systems programmer. Perhaps if I was the calculus would change. But personally when I have to drop down below a GC language, it is pretty close to the hardware.
Zig simply solves more of my personal pain points... but if rust matures in ways that help those I'll consider it again.
Correct me if I am wrong, but Rust at least has a borrow checker, while in C (and Zig) one has to do the borrow checking in their head. If you read the documentation for C libraries, some of them mention things like "caller must free this memory" and others don't specify anything and you have to go to the source code to find out who is responsible for freeing the memory.
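Right, and that convention usually lives only in header comments, something like these hypothetical declarations:

    /* Returns a newly allocated, NUL-terminated string.
       Caller must free() the result. Returns NULL on error. */
    char *config_get_value(const char *key);

    /* Returns a pointer into an internal static buffer.
       Caller must NOT free it; it is overwritten by the next call. */
    const char *last_error_message(void);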
As I have always bought into Dennis Ritchie's loop programming concepts, iterator invalidating hasn't been a problem.
Zig has defer which makes it trivial to place next to allocation, and it is released when it goes out of scope.
As building a linked list, dealing with bit fields, ARM peripherals, etc. all require disabling the Rust borrow checker rules, you don't benefit from them at all in those cases, IMHO.
It is horses for courses, and the Rust project admits they chose a very specific horse.
C is what it is, and people who did assembly on a PDP7 probably know where a lot of that is from.
I personally prefer zig to c... but I will use c when it makes the task easier.
I wanted to do this on Linux, because my main laptop is a Linux machine after my children confiscated my Windows laptop to play Minecraft with the only decent GPU in the house.
And I just couldn't get past the tooling. I could not get through to anything that felt like a build setup that I'd be able to replicate on my own.
On Windows, using Visual Studio, it's not that bad. It's a little annoying compared to a .NET project, and there are a lot more settings to worry about, but at the end of the day VS makes the two not very different from each other.
I actually didn't understand that until I tried to write C++ on Linux. I thought C++ on Windows was worlds different than C#. But now I've seen the light.
I honestly don't know how people do C++ development on Linux. Make, CMake, all of that stuff, is so bad.
IDK, maybe someone will come along and tell me, "oh, no, do this and you'll have no problems". I hope so. But without that, what a disgusting waste of time C and C++ is on Linux.
I think it's a pretty normal pattern I've seen (and been though) of learning-oriented development rather than thoughtful engineering.
But personally, AI coding has pushed me full circle back to ruby. Who wants to mentally interpret generated C code which could have optimisations and could also have fancy looking bugs. Why would anyone want to try disambiguating those when they could just read ruby like English?
Because they're implementing Ruby, for example?
This happened to me too. I’m using Python in a project right now purely because it’s easier for the AI to generate and easier for me to verify. AI coding saves me a lot of time, but the code is such low quality there’s no way I’d ever trust it to generate C.
Given that low quality code is perhaps the biggest time-sink relating to our work, I'm struggling to reconcile these statements?
Also there’s often a spectrum of importance even within a project, eg maybe some internal tools aren’t so important vs a user facing thing. Complexity also varies: AI is pretty good at simple CRUD endpoints, and it’s a lot faster than me at writing HTML/CSS UI’s (ie the layout and styling, without the logic).
If you can isolate the AI code to code that doesn’t need to be high quality, and write the code that doesn’t yourself, it can be a big win. Or if you use AI for an MVP that will be incrementally replaced by higher quality code if the MVP succeeds, can be quite valuable since it allows you to test ideas quicker.
I personally find it to be a big win, even though I also spend a lot of time fighting the AI. But I wouldn’t want to build on top of AI code without cleaning it up myself.
There are also some tasks I’ve learned to just do myself: eg I do not let the AI decide my data model/database schema. Data is too important to leave it up to an AI to decide. Also outside of simple CRUD operations, it generates quite inefficient database querying so if it’s on a critical path, perhaps write the queries yourself.
The way he writes about his work in this article, I think he's a true master. Very impressive to see people with such passion and skill.
Could he have jumped right into C and had amazing results, if not for the journey learning Lisp and changing how he thought of programming?
Maybe learning Lisp is how to learn to program. Then other languages become better by virtue of how someone structures the logic.
a bit LOL, isn't it?
also the part about terraform, ansible and the other stuff.
Your work is genius! I hope KC3 can be adopted widely, there is great potential.
Archived at https://archive.is/zIZ8S
I refuse to touch anything else, but I keep an eye on the new languages that are being worked on, Zig for example
Has it been fuzzed? Have you had someone who is very good at finding bugs in C code look at it carefully? It is understandable if the answer to one or both is "no". But we should be careful about the claims we make about code.
Just a few off the top:
- Rust is a much more complex language than C
- Rust has a much, much slower compiler than pretty much any language out there
- Rust takes most people far longer to "feel" productive
- Rust applications are sometimes (often?) slower than comparable C applications
- Rust applications are sometimes (often?) larger than comparable C applications
You may not value these things, or you may value other things more.
That's completely fine, but please don't pretend as if Rust makes zero trade offs in exchange for the safety that people seem to value so much.
Rust evangelism is probably the worst part of Rust. Shallow comments stating Rust’s superiority read to me like somebody who wants to tell me about Jesus.
If you already dislike this, I ask you to read C-evangelism with respect to the recent Linux drama about Rust in Linux.
but Rust evangelism is on another level
I've never seen a good way to make a CPU that's good for "not C" languages. Those are usually by people who are aggressively uninterested in being fast and so insist on semantics that simply wouldn't get faster if done in hardware. Like the way most Haskell programs execute is just bad and based on bad ideas.
But in both cases, modern CPUs are mostly network- then I/O- then memory-bound. Most C programs aren't written to respect that very well.
There's nothing special or magic about C code, and, if anything, C has moved further and further away from its "portable assembler" moniker over time. And compilers can emit very similar machine instructions for the same type of algorithm regardless of whether you're writing C, Rust, Go, Zig, etc.
Consider, for example, that clang/LLVM doesn't even really compile C. The C is first translated into LLVM's IR, which is then used to emit machine instructions.
But if you're using it in the sense of "C is a privileged language in terms of its connection to hardware architecture, " well, C isn't, and that statement is patently false. There's not a major difference between C, C++, Rust, Zig--even going as far afield as bytecode languages like Java and C#, or fully interpreted stuff like Python or Perl, especially as far as computer architects are concerned.
(And in the sense of "this is the language that architects care most about for tuning performance," I think that's actually C++, simply because that tends to be the language for the proprietary HPC software that pays the big bucks for compiler support.)
But I don't think this carries much weight anymore, might have been true way back in the days.
C gives you more control, which means it's possible to go faster if you know exactly what you're doing.
For me Rust isn't really competing against unchecked C. It's competing against Java and boy does the JVM suck outside of server deployments. C gets disqualified from the beginning, so what you're complaining about falls on deaf ears.
I'm personally suffering the consequences of "fast" C code every day. There are days where 30 minutes of my time are being wasted on waiting for antivirus software. Things that ought to take 2 seconds take 2 minutes. What's crazy is that in a world filled with C programs, you can't say with a good conscience that antivirus software is unnecessary.
Also, integrating 3rd party code has always been one of the worst parts of writing a C or C++ program. This 3p library uses Autoconf/Automake, that one uses CMake, the other one just ships with a Visual Studio .sln file... I want to integrate them all into my own code base with one build system. That is going to be a few hours or days of sitting there and figuring out which .c and .h files need to be considered, where they are, what build flags and -Ddefines are needed, how the build configuration translates into the right build flags and so on.
On more modern languages, that whole drama is done with pip install or cargo install.
Feature wise, yes. C forces you to keep a lot of irreducible complexity in your head.
> Rust has a much, much slower compiler than pretty much any language out there
True. But it doesn't matter much in my opinion. A decent PC should be able to grind through any Rust project in a few seconds.
> Rust applications are sometimes
Sometimes is a weasel word. C is sometimes slower than Java.
> Rust takes most people far longer to "feel" productive
C takes me more time to feel productive. I have to write code, then unit test, then property tests, then run valgrind, check ubsan is on. Make more tests. Do property testing, then fuzz testing.
Or I can write same stuff in Rust and run tests. Run miri and bigger test suite if I'm using unsafe. Maybe fuzz test.
That is demonstrably false, unless your definition of "decent PC" is something that costs $4000.
I love Rust, but saying misleading (at best) things about build times is not a way to evangelize.
Have you tried looking around and noticing nobody else does that and it's, like, fine?
Faster would obviously be better, but it's not big enough of a deal to cancel out all the advantages compared to C.
So … make && make check ?
https://stackoverflow.com/questions/32127524/how-to-install-...
> I am disappointed with how poorly Rust's build scales, even with the incremental test-utf-8 benchmark which shouldn't be affected that much by adding unrelated files. (...)
> I decided to not port the rest of quick-lint-js to Rust. But... if build times improve significantly, I will change my mind!
Look you're picking a memory unsafe language versus a safe one. Whatever meager gains you save on compilation times (and the link shows the difference is meager if you aren't on a MacOS, which I'm not) will be obliterated by losses in figuring out which UB nasal demon was accidentally released.
This is like that argument that dynamic types save time, because you can catch error in tests. But then have to write more tests to compensate, so you lose time overall.
* Rust is vastly easier to get started with as a new programmer than C or C++. The quality and availability of documentation, tutorials, tooling, ease of installation, ease of dependency management, ease of writing tests, etc. Learning C basically requires learning make / cmake / meson on top of the language, and maybe Valgrind and the various sanitizers too. C's "simplicity" is not always helpful to someone getting started.
* The Rust compiler isn't particularly slow. LLVM is slow. Monomorphization hurts the language, but any other language that made the same tradeoff would see the same problems. The compiler has also gotten much much faster in the last few years and switching linkers or compiler backends makes a huge difference.
* Orgs that have studied and tracked this don't find Rust to be less productive. Within a couple of months programmers tend to be just as if not more productive than they were previously with other languages. The ramp-up is probably slower than, say, Go, but it's not Scala / Haskell. And again, the tooling & built-in test framework really helps with productivity.
* Rust applications are very rarely slower than comparable C applications
* Rust applications do tend to be larger than comparable C applications, but largely because of static vs. dynamic linking and larger debuginfo.
Neither of our opinions make someone else's opinion false.
- Rust may have felt easier for you or some, but certainly not everyone or even most. It might be worth it, but it's not an easy on ramp for many.
- Excuses for slow compile times don't make compile times faster.
- That's why I said "feel." There are warm fuzzy and cold prickly human things in here. Studies that pretend at measuring something we all know cannot be measured are summarily dismissed.
- More excuses do not make a statement false. Rust compile times are some of the slowest I have seen in >25 years of development.
Again, the trade-offs work for many people and orgs. That's great!
That doesn't make them disappear or become, "false."
It's precisely this tone and attitude (that is so prevalent in the community) that keeps so many of us away.
(And yes, I was considering if I should shout in capslock ;) )
I have seen so many fresh starts in Rust that went great during week 1 and 2 and then they collided with the lifetime annotations and then things very quickly got very messy. Let's store a texture pointer created from an OpenGL context based on in-memory data into a HashMap...
impl<'tex, 'gl, 'data, 'key> GlyphCache<'tex, 'gl, 'data, 'key> {
Yay? And then your hashmap .or_insert_with fails due to lifetime checks so you need a match on the hashmap entry and now you're doing the key search twice and performance is significantly worse than in C.
Or you need to add a library. In C that's #include and a -l linker flag. In Rust, you now need to work through this:
https://doc.rust-lang.org/cargo/reference/manifest.html
to get a valid Cargo.toml. And make sure you don't name it cargo.toml, or stuff will randomly break.
This is just bizarre to me, the claim that dependency management is easier in C projects than in Rust. It is incredibly rare that adding a dependency to a C project is just an #include and -l flag away. What decent-sized project doesn't use autotools or cmake or meson or whatever? Adding a dependency to any of those build systems is more work than adding a single, short line to Cargo.toml.
And even if you are just using a hand-crafted makefile (no thank you, for any kind of non-trivial, cross-platform project), how do you know that dependency is present on the system? You're basically just ignoring that problem and forcing your users to deal with it.
On the other hand, C definitely goes too far in to the opposite extreme. I am very tired of reinventing wheels in C because integrating third-party dependencies is even more annoying than writing and maintaining my own versions of common routines.
Both aspects are something I think many developers grow to appreciate eventually.
- compile times
- compile times
- compile times
Not a problem for small utilities, but once you start pulling dependencies... pain is felt.
If it's something I'm actively developing, the compile is incremental, so it doesn't take that long.
What does often take longer than I'd like is linking. I need to look into those tricks where you build all the infrequently-changing bits (like third-party dependent crates) into a shared library, and then linking is very quick. For debug builds, this could speed up my development cycle quite a bit.
You can argue that Rust generics are a trivial example of increased complexity vs the C language and I'd kinda agree: except the language would be cumbersome to use without them but with all the undefined C behavior defined. Complexity can't disappear, it can be moved around.
But who cares?
The fact that C chooses not to nail everything down makes it a simpler and more flexible language, which is why it's sometimes preferred.
It makes the C semantics you are coding against more complex. Lots of unlisted or handwaved things in the spec become problems you need to keep in mind far more often than you would with better definitions.
And implementation wise, probably there's something to do with LLVM.
You don't need to wait for long compile times in Haskell if you don't want to, there are interpreters and REPLs available as well.
You don't need to wait for long compile times in C++ if you don't want to, most folks use binary libraries, not every project is compiled from scratch, there are incremental compilers and linkers, REPLs like ROOT, managed versions with JIT like C++/CLI, and if using modern tooling like Visual C++ or Live++, hot code reloading.
- Project leadership being at the whims of the moderators
- Language complexity
- Openly embracing 3rd party libraries and ecosystems for pretty much anything
- Having to rely on esoteric design choices to wrestle the compiler into using specific optimizations
- The community embracing absurd design complexity like implementing features via extension traits in code sections separated from both where the feature is going to be used and where either the structs and traits are implemented
- A community of zealots
I think the upsides easily outcompete the downsides, but I'd really wish it'd resolve some of these issues...
The difference is C also lets you ignore the inherent complexity, and that's where bugs and vulnerabilities come from.
In C you can ignore whatever you feel like, and that bothers some people so much that they have to stop everyone else from doing it.
The moment you start building something that's not exposed to the internet and where hacking it has no implications, C beats it due to simplicity and speed of development.
I don't disagree that Rust might technically be a better option for a new project, but it's still a fairly fast moving language with an ecosystem that hasn't completely settled down. Many are increasingly turned off by the fast changing developer environments and ecosystems, and C provides you with a language and libraries that has already been around for decades and aren't likely to change much.
There are also so many programming concepts and ideas in Rust, which are all fine and useful in their own right, but they are a distraction if you don't need them. Some might say that you could just not use them, but they sneak up on you in third party libraries, code snippets, examples and suggestions from others.
Personally I find C a more cosy language, which is great for just enjoying programming for a bit.
It's not just about security, it's about reliability too. If my program crashes because of a use-after-free or null pointer dereference, I'm going to be pissed off even if there aren't security implications.
I prefer Rust to C for all sorts of projects, even those that will never sit in front of a network.
Also, no: that's only true for some kinds of programs. Rust, c++, and go all have a much easier ecosystem for things like data structures and more complex libraries that make writing many programs much easier than in C.
The only place I find C still useful over one of the other three is embedded, mostly because of the ecosystem, and rust is catching up there also.
(This is somewhat ironic, because I teach a class in C. It remains a useful language when you want someone to quickly see the relationship between the line of code they wrote and the resulting assembly, but it's also fraught - undefined behavior lurks in many places and adds a lot of pain. I will one day switch the class to rust, but I inherited the C version and it takes a while.)
So many people have implemented those data structures though, and they are available freely and openly, you can choose to your liking, i.e. ohash, or uthash, or khash, etc. and that is only for a hash table.
Those complex libraries are out there, too, for C, obviously.
The reason for why it is not in the standard library is obvious enough: there are many ways to implement those data structures, and there is no one size that fits all.
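To give a flavour of what's out there, uthash (one of the header-only options mentioned above) just embeds its handle in your own struct. A rough sketch:

    #include <stdio.h>
    #include <stdlib.h>
    #include "uthash.h"

    struct user {
        int id;                /* the key */
        char name[32];
        UT_hash_handle hh;     /* makes this struct hashable */
    };

    struct user *users = NULL; /* the whole table is just a pointer */

    void add_user(int id, const char *name)
    {
        struct user *u = malloc(sizeof *u);
        u->id = id;
        snprintf(u->name, sizeof u->name, "%s", name);
        HASH_ADD_INT(users, id, u);
    }

    struct user *find_user(int id)
    {
        struct user *u;
        HASH_FIND_INT(users, &id, u);
        return u;              /* NULL if absent */
    }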
Obviously, all of these languages are capable of doing anything the others can. Turing complete is Turing complete. But compare the experience of writing a multithreaded program that has, as part of it, an embedded HTTP server that provides statistics as it runs. It's painful in C, fairly annoying in C++ unless you match well to some existing framework, pretty straightforward in Rust, and kinda trivial in Go.
One comment talked about not using a (faster) B-tree instead of an AVL tree in C, because of the complexity (thus maintenance burden and risk of mistakes) it would add to the code.
They were happy to use a B-Tree in Rust though
Rust's safety features help prevent a large class of bugs. Security issues are only one kind of bug.
> C beats it due to simplicity and speed of development
C being faster to develop than Rust is a ludicrous claim.
Rust is a complex language that is safe.
C is a simple language that is unsafe.
There are always compromises and it always depends on the project. Of course importing a dependency is faster in Rust.
But the best language ever imho is Golang. Its simple and safe with the compromise being the GC.
Why is this important? C is the lingua franca of digital infrastructure. Whether that's due to merit or inertia is left as an exercise for the reader. I sure hope your new project isn't meant to supplant that legacy infrastructure, 'cause if it needs to run on legacy hardware, Rust won't work.
This is an incredibly annoying constraint when you're starting a new project, and Rust won't let you because you can't target the platform you need to target. For example, I spent hours building a Rust async runtime for Zephyr, only to discover it can't run on half the platforms Zephyr supports because Rust doesn't ship support for those platforms.
Are what cargo, rustc, etc. are expected to run on. You probably meant target.
> i686-unknown-none
Is admittedly a missing target. `x86_64-unknown-none` specifies stuff like `extern "C"`'s ABI (per https://doc.rust-lang.org/rustc/platform-support/x86_64-unkn... ) which is a lot less universal/appropriate for i686, where AFAIK everyone chooses their own different incompatible ABIs - which might be the reason it's not provided? Usually you want to pick an i686-unknown-* target that aligns more closely with your own needs (e.g. your desired object/library/binary file format, abi, bootloader, ...?)
C:\local>rustup target list | findstr i686
i686-linux-android
i686-pc-windows-gnu
i686-pc-windows-gnullvm
i686-pc-windows-msvc (installed)
i686-unknown-freebsd
i686-unknown-linux-gnu
i686-unknown-linux-musl
i686-unknown-uefi
If, truly, none of them are appropriate for your needs, that's when it's time to use a custom target (per https://doc.rust-lang.org/rustc/targets/custom.html ) and `build-std` (per https://doc.rust-lang.org/cargo/reference/unstable.html#buil... .) Using a toolchain file to pin your nightly rustc version might be appropriate (per https://rust-lang.github.io/rustup/overrides.html#the-toolch... .)

The last time I played with custom targets was on https://github.com/MaulingMonkey/rust-opendingux-test/tree/m... , using the old `xargo` instead of `build-std`. Notes.md details modifications made to make things work.
Python isn’t simple, it’s a very complex language. And Mojo aims to be a superset of Python - if it’s simple, that’s only because it’s incomplete.