Parallel ./configure (tavianator.com)
101 points by brooke2k 8 hours ago | 17 comments
iforgotpassword 1 hour ago
The other issue is that people seem to just copy configure/autotools scripts over from older or other projects, either because they are lazy or because they don't understand them well enough to write their own. The result is that even with relatively modern code bases that only target something like x86, arm and maybe mips, and only gcc/clang, you still get checks for the size of an int, or which header is needed for printf, or whether long long exists... And then the code base never checks the generated macros in a single place, uses int64_t everywhere, and never checks for stdint.h in the configure script...
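To illustrate the mismatch (a hypothetical sketch; the macro and header names are just the usual autoconf conventions, not from any particular project): the copied script dutifully defines HAVE_STDINT_H, but that check only matters if the code actually guards on it, which it rarely does:

  /* config.h, as generated by the copied configure script */
  #define HAVE_STDINT_H 1

  /* The guard the check is meant to feed -- which real code rarely bothers with: */
  #ifdef HAVE_STDINT_H
  #include <stdint.h>
  #else
  typedef long long int64_t;     /* hypothetical fallback */
  #endif

  int64_t answer(void) { return 42; }
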
codys 3 hours ago
I did something like the system described in this article a few years back. [1]

Instead of splitting the "configure" and "make" steps though, I chose to instead fold much of the "configure" step into the "make".

To clarify, this article describes a system where `./configure` runs a bunch of compilations in parallel, then `make` does stuff depending on those compilations.

If one is willing to restrict what the configure step can detect/do to writing header files (rather than affecting variables examined/used in a Makefile), then `./configure` can simply generate a `Makefile` (or in my case, a ninja file), and the "run the compiler to see what defines to set" work and the "run the compiler to build the executable" work both happen in a single `make` or `ninja` invocation.

The simple way here results in _almost_ the same behavior: all the "configure"-like stuff running and then all the "build" stuff running. But if one is a bit more careful/clever and doesn't depend on the entire "config.h" for every "<real source>.c" compilation, then one can start to interleave the work perceived as "configuration" with that seen as "build". (I did not get that fancy)

[1]: https://github.com/codyps/cninja/tree/master/config_h

tavianator 3 hours ago
Nice! I used to do something similar; I don't remember exactly why I had to switch, but the two-step process did become necessary at some point.

Just from a quick peek at that repo, nowadays you can write

#if __has_attribute(cold)

and avoid the configure test entirely. Probably wasn't a thing 10 years ago though :)
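For anyone curious, a minimal sketch of that pattern (ATTR_COLD is just an illustrative macro name, not taken from either project):

  /* Feature-test in the preprocessor instead of in ./configure. */
  #ifndef __has_attribute
  #  define __has_attribute(x) 0   /* older compilers: pretend "no" */
  #endif

  #if __has_attribute(cold)
  #  define ATTR_COLD __attribute__((cold))
  #else
  #  define ATTR_COLD
  #endif

  ATTR_COLD void report_fatal_error(const char *msg);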

o11c 2 hours ago
The problem is that the various `__has_foo` aren't actually reliable in practice - they don't tell you if the attribute, builtin, include, etc. actually works the way it's supposed to without bugs, or if it includes a particular feature (accepts a new optional argument, or allows new values for an existing argument, etc.).
codys 2 hours ago
yep. C's really come a long way with the special operators for checking if attributes exist, if builtins exist, if headers exist, etc.

That covers a very large part of what is needed, so fewer and fewer things need to end up in configure scripts. I think most of what's left is checking for the existence of items (types, functions) and their shape, as you were doing :). I can dream about getting a nice special operator to check for fields/functions, which would let us remove even more from configure time, but I suspect we won't get one because that requires type resolution and none of the existing special operators do that.
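A rough sketch of those operators in action (the fallback bodies here are illustrative, not from any particular project). Notably there's still no __has_field-style operator, so struct-member checks stay in configure:

  #ifdef __has_include
  #  if __has_include(<threads.h>)
  #    include <threads.h>            /* C11 threads header exists */
  #  endif
  #endif

  #ifdef __has_builtin
  #  if __has_builtin(__builtin_popcount)
  #    define popcount32(x) ((unsigned)__builtin_popcount(x))
  #  endif
  #endif
  #ifndef popcount32
  static inline unsigned popcount32(unsigned x) {
      unsigned n = 0;
      while (x) { x &= x - 1; n++; }  /* portable fallback */
      return n;
  }
  #endif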

mikepurvis 2 hours ago
You still need a configure step for the "where are my deps" part of it, though both autotools and CMake would be way faster if all they were doing was finding, and not any testing.
throwaway81523 2 hours ago
GNU Parallel seems like another convenient approach.
creatonez 5 hours ago
Noticed an easter egg in this article. The text below "I'm sorry, but in the year 2025, this is ridiculous:" is animated entirely without JavaScript or .gif files. It's pure CSS.

This is how it was done: https://github.com/tavianator/tavianator.com/blob/cf0e4ef26d...

o11c 5 hours ago
Unfortunately it forgets to HTML-escape the <wchar.h> etc.
tavianator 3 hours ago
Whoops! Forgot to do that when I switched from a ``` block to raw html
epistasis 7 hours ago
I've spent a fair amount of time over the past decades making autotools work on my projects, and I've never felt like it was a good use of time.

It's likely that C will continue to be used by everyone for decades to come, but I know that I'll personally never start a new project in C again.

I'm still glad that there's some sort of push to make autotools suck less for legacy projects.

monkeyelite 6 hours ago
You can use make without configure. If needed, you can also write your own configure instead of using autotools.

Creating a makefile takes about 10 lines and is the lowest-friction way for me to start programming in any environment. Familiarity is part of that.

viraptor 5 hours ago
It's a bit of a balance once you get bigger dependencies. A generic autoconf is annoying to write, but rarely an issue when packaging for a distro. Most issues I've had to fix in nixpkgs were for custom builds unfortunately.

But if you don't plan to distribute things widely (or have no deps).. Whatever, just do what works for you.

edoceo 5 hours ago
Write your own configure? For an internal project, where much is under domain control, sure. But for the 1000s of projects trying to be multi-platform and/or support flavours/versions - oh gosh.
monkeyelite 5 hours ago
It depends on how much platform specific stuff you are trying to use. Also in 2025 most packages are tailored for the operating system by packagers - not the original authors.

Autotools is going to check every config from the past 50 years.

tidwall 6 hours ago
I've stopped using autotools for new projects. Just a Makefile, and the -j flag for concurrency.
psyclobe 4 hours ago
cmake ftw
JCWasmx86 2 hours ago
Or meson, which is a serious alternative to cmake (even better than cmake imho)
aldanor 4 hours ago
You mean cargo build
yjftsjthsd-h 4 hours ago
... can cargo build things that aren't rust? If yes, that's really cool. If no, then it's not really in the same problem domain.
kouteiheika 2 hours ago
No it can't.

It can build a Rust program (build.rs) which builds things that aren't Rust, but that's an entirely different use case (building non-Rust library to use inside of Rust programs).

malkia 3 hours ago
cmake uses a configure, or configure-like, step too!
ahartmetz 51 minutes ago
Same concept, but completely different implementation.
gorgoiler 4 hours ago
On the topic* of having 24 cores and wanting to put them to work: when I were a lad the promise was that pure functional programming would trivially allow for parallel execution of functions. Has this future ever materialized in a modern language / runtime?

  x = 2 + 2
  y = 2 * 2
  z = f(x, y)
  print(z)
…where x and y evaluate in parallel without me having to do anything. Clojure, perhaps?

*And superficially off the topic of this thread, but possibly not.

gdwatson 3 hours ago
Superscalar processors (which include all mainstream ones these days) do this within a single core, provided there are no data dependencies between the assignment statements. They have multiple arithmetic logic units, and they can start a second operation while the first is executing.

But yeah, I agree that we were promised a lot more automatic multithreading than we got. History has proven that we should be wary of any promises that depend on a Sufficiently Smart Compiler.

lazide 3 hours ago
Eh, in this case not splitting them up to compute them in parallel is the smartest thing to do. Locking overhead alone is going to dwarf every other cost involved in that computation.
gdwatson 3 hours ago
Yeah, I think the dream was more like, “The compiler looks at a map or filter operation and figures out whether it’s worth the overhead to parallelize it automatically.” And that turns out to be pretty hard, with potentially painful (and nondeterministic!) consequences for failure.

Maybe it would have been easier if CPU performance didn’t end up outstripping memory performance so much, or if cache coherency between cores weren’t so difficult.

eptcyka 1 hour ago
Spawning threads or using a thread pool implicitly would be pretty bad - it would be difficult to reason about performance if the compiler was to make these choices for you.
lazide 2 hours ago
I think it has shaken out the way it has because compile-time optimizations to this extent require knowing runtime constraints/data at compile time. For non-trivial situations that's impossible, as the code will be run with too many different types of input data, too many different cache sizes, etc.

The CPU has better visibility into the actual runtime situation, so can do runtime optimization better.

In some ways, it’s like a bytecode/JVM type situation.

snackbroken 1 hour ago
Bend[1] and Vine[2] are two experimental programming languages that take similar approaches to automatically parallelizing programs: interaction nets[3]. IIUC, they basically turn the whole program into one big dependency graph, then the runtime figures out what can run in parallel and distributes the work to however many threads you can throw at it. It's also my understanding that they are currently both quite slow, which makes sense as the focus has been on making `write embarrassingly parallelizable program -> get highly parallelized execution` work at all until recently. Time will tell if they can manage enough optimizations that the approach lets you get reasonably performing parallel functional programs 'for free'.

[1] https://github.com/HigherOrderCO/Bend [2] https://github.com/VineLang/vine [3] https://en.wikipedia.org/wiki/Interaction_nets

fweimer 1 hour ago
There have been experimental parallel graph reduction machines. Excel has a parallel evaluator these days.

Oddly enough, functional programming seems to be a poor fit for this because the fanout tends to be fairly low: individual operations have few inputs, and single-linked lists and trees are more common than arrays.

inejge 2 hours ago
> …where x and y evaluate in parallel without me having to do anything.

I understand that yours is a very simple example, but a) such things are already parallelized even on a single thread thanks to all the internal CPU parallelism, b) one should always be mindful of Amdahl's law, c) truly parallel solutions to various problems tend to be structurally different from serial ones in unpredictable ways, so there's no single transformation, not even a single family of transformations.

chubot 4 hours ago
That looks more like a SIMD problem than a multi-core problem

You want bigger units of work for multiple cores, otherwise the coordination overhead will outweigh the work the application is doing

I think the Erlang runtime is probably the best use of functional programming and multiple cores. Since Erlang processes are shared nothing, I think they will scale to 64 or 128 cores just fine

Whereas the GC will be a bottleneck in most languages with shared memory ... you will stop scaling before using all your cores

But I don't think Erlang is as fine-grained as your example ...

Some related threads:

https://news.ycombinator.com/item?id=40130079

https://news.ycombinator.com/item?id=31176264

AFAIU Erlang is not that fast an interpreter; I thought the Pony language was doing something similar (shared nothing?) with compiled code, but I haven't heard about it in a while

speed_spread 4 hours ago
I believe it's not the language preventing it but the nature of parallel computing. The overhead of splitting up things and then reuniting them again is high enough to make trivial cases not worth it. OTOH we now have pretty good compiler autovectorization which does a lot of parallel magic if you set things right. But it's not handled at the language level either.
colechristensen 3 hours ago
There have been Fortran compilers doing auto-parallelization for decades, and I think Nvidia released a compiler that will take your code and do its best to run it on a GPU.

This works best for scientific computing things that run through very big loops where there is very little interaction between iterations.

deepsun 4 hours ago
Sure, Tensorflow and Pytorch, here ya go :)
fmajid 6 hours ago
And on macOS, the notarization checks for all the conftest binaries generated by configure add even more latency. Apple reneged on their former promise to give an opt-out for this.
fishgoesblub 6 hours ago
Very nice! I always get annoyed when my fancy 16 thread CPU is left barely used as one thread is burning away with the rest sitting and waiting. Bookmarking this for later to play around with whatever projects I use that still use configure.

Also, I was surprised when the animated text at the top of the article wasn't a gif, but actual text. So cool!

andreyv 2 hours ago
Autoconf can use cache files [1], which can greatly speed up repeated configures. With a cache, each test is run at most once.

[1] https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/a...

redleader55 5 hours ago
Why do we even need to run most of the things in ./configure? Why not just have a file in /etc, updated when you install various packages, which ./configure can read to learn various facts about the environment? Obviously it would still allow setting various things with parameters and create a Makefile, just much faster.
o11c 5 hours ago
Keep in mind that the build intentionally depends on environment variables, people often install non-packaged dependencies in bad ways, and cross-compiling is a thing, so it's not that simple.
wolfgang42 4 hours ago
Some relevant discussion/pointers to other notes on this sort of proposal can be found here: https://utcc.utoronto.ca/~cks/space/blog/sysadmin/AutoconfVa...

(The conclusion I distilled out of reading that at the time, I think, was that this is actually sort of happening, but slowly, and autoconf is likely to stick around for a while, if only as a compatibility layer during the transition.)

pabs3 3 hours ago
Not every OS is going to have such a file, and you also don't know if it matches the actual system ./configure runs on.
SuperV1234 5 hours ago
CMake also needs this, badly...
BobbyTables2 5 hours ago
I was really hoping he worked some autoreconf/macro magic to transform existing configure.ac files into a parallelized result.

Nice writeup though.

psyclobe 4 hours ago
(Luckily?) With C++ your build will nearly always take longer than the configuration step.
bitbasher 4 hours ago
rust: "hold my leggings"
LoganDark 4 hours ago
Since when? I far more often run into CMake taking ages than Cargo.
blibble 6 hours ago
is this really a big deal given you run ./configure once?

it's like systemd trading off non-determinism for boot speed, when it takes 5 minutes to get through the POST

Aurornis 6 hours ago
> is this really a big deal given you run ./configure once

I end up running it dozens of times when changing versions, checking out different branches, chasing dependencies.

It’s a big deal.

> it's like systemd trading off non-determinism for boot speed, when it takes 5 minutes to get through the POST

5 minute POST time is a bad analogy. systemd is used in many places, from desktops (that POST quickly) to embedded systems where boot time is critical.

If deterministic boot is important then you would specify it explicitly. Relying on emergent behavior for consistent boot order is bad design.

The number of systems that have 5 minute POST times and need deterministic boot is an edge case of an edge case.

Twirrim 3 hours ago
>chasing dependencies.

This aspect of configure, in particular, drives me nuts. Obviously I'd like it to be faster, but it's not the end of the world. I forget what I was trying to build the other week, but I had to make 18 separate runs of configure to find all the things I was missing. When I dug into things it looked like it could probably have done it in 2 runs, each presenting a batch of things that were missing. Instead I got stuck with "configure, install missing package" over and over again.

blibble 6 hours ago
> to embedded systems where boot time is critical.

if it's critical on an embedded system then you're not running systemd at all

> The number of systems that have 5 minute POST times and need deterministic boot is an edge case of an edge case.

desktop machines are the edge case, there's a LOT more servers running Linux than people using Linux desktops

> Relying on emergent behavior for consistent boot order is bad design.

tell that to the distro authors who 10 years in can't tell the difference between network-online.target, network-pre.target, network.target

MindSpunk 6 hours ago
And a very large number of those Linux servers are running Linux VMs, which don't POST, use systemd, and have their boot time dominated by the guest OS. Those servers are probably hosting dozens of VMs too. Boot time makes a lot of difference here.
blibble 6 hours ago
seabios/tianocore still takes longer than /etc/rc on a BSD

amdahl's law's a bitch

0x457 6 hours ago
> from desktops (that POST quickly)

I take it you don't run DDR5?

mschuster91 6 hours ago
> I end up running it dozens of times when changing versions, checking out different branches, chasing dependencies.

Yeah... but none of that is going to change stuff like the size of a data type, the endianness of the architecture you're running on, or the features / build configuration of some library the project depends on.

Parallelization is a bandaid (although a sorely needed one!) IMHO. C/C++ libraries desperately need to develop some sort of standard that doesn't require a full gcc run for each tiny test. I'd envision something like nodejs's package.json, just with more specific information about the build details themselves. And stuff like datatype sizes should be provided by gcc/llvm in a fast-parseable way so that autotools can pick it up.

o11c 5 hours ago
There is the `-C` option of course. It's supposedly good for the standard tests that waste all the time, but not so much for the ad-hoc tests various projects use, which have an unfortunate chance of being buggy or varying across time.

... I wonder if it's possible to manually seed a cache file with only known-safe test results and let it still perform the unsafe tests? Be sure to copy the cache file to a temporary name ...

---

I've thought about rewriting `./configure` in C (I did it in Python once but Python's portability turned out to be poor - Python2 was bug-free but killed; Python3 was unfixably buggy for a decade or so). Still have a stub shell script that reads HOSTCC etc. then quickly builds and executes `./configure.bin`.

LegionMammal978 6 hours ago
If you do a lot of bisecting, or bootstrapping, or building compatibility matrices, or really anything that needs you to compile lots of old versions, the repeated ./configure steps really start feeling like a drag.
kazinator 5 hours ago
In a "reasonably well-behaved program", if you have the artifacts from a current configure, like a "config.h" header, they are compatible with older commits, even if configurations changed, as long as the configuration changes were additive: introducing some new test, along with a new symbol in "config.h".

It's possible to skip some of the ./configure steps. Especially for someone who knows the program very well.

LegionMammal978 3 hours ago
Perhaps you can get away with that for small, young, or self-contained projects. But for medium-to-large projects running more than a few years, the (different versions of) external or vendored dependencies tend to come and go, and they all have their own configurations. Long-running projects are also prone to internal reorganizations and overhauls to the build system. (Go back far enough, and you're having to wrangle patchsets for every few months' worth of versions since -fpermissive is no longer permissive enough to get it to build.)
asah 4 hours ago
For postgresql development, you run configure over and over...
csdvrx 6 hours ago
> it's like systemd trading off non-determinism for boot speed, when it takes 5 minutes to get through the POST

That's a bad analogy: if a given deterministic service ordering is needed for a service to start correctly (say because it doesn't start with the systemd unit), it means the non-deterministic systemd service units are not properly encoding the dependency tree in their Before= and After= directives.

When done properly, both solutions should work the same. However, the solution that properly encodes the dependency graph (instead of just projecting it onto a 1-dimensional sequence of numbers) is better, because it gives you both more speed and more flexibility: you can see the branches any leaf depends on, remove leaves as needed, then cull the useless branches. You could add determinism on top if you want, but why bother?

It's like using the dependencies of linux packages, and leaving the job of resolving them to package managers (apt, pacman...): you can then remove the useless packages which are no longer required.

Compare that to doing a `make install` of everything to /usr/local in a specific order, as specified by a script: when done properly, both solutions will work, but one solution is clearly better than the other as it encodes more finely the existing dependencies instead of projecting them to a sequence.

You can add determinism if you want to follow a sequence (ex: `apt-get install make` before adding gcc, then add cuda...), or you can use a meta-package like build-essential, but being restricted to a sequence gains you nothing.

blibble 6 hours ago
I don't think it is a bad analogy

given how complicated the boot process is ([1]), and it occurs once a month, I'd rather it was as deterministic as possible

vs. shaving 1% off the boot time

[1]: distros continue to ship subtly broken unit files, because the model is too complicated

Aurornis 6 hours ago
Most systems do not have 5 minute POST times. That’s an extreme outlier.

Linux runs all over, including embedded systems where boot time is important.

Optimizing for edge cases on outliers isn’t a priority. If you need specific boot ordering, configure it that way. It doesn’t make sense for the entire Linux world to sacrifice boot speed.

timcobb 5 hours ago
I don't even think my Pentium 166 took 5 minutes to POST. Did computers ever take that long to POST??
yjftsjthsd-h 4 hours ago
Old machines probably didn't, no, but I have absolutely seen machines (Enterprise™ Servers) that took longer than that to get to the bootloader. IIRC it was mostly a combination of hardware RAID controllers and RAM... something. Testing?
lazide 3 hours ago
It takes a while to enumerate a couple TB worth of RAM DIMMs and 20+ disks.
yjftsjthsd-h 3 hours ago
Yeah, it was somewhat understandable. I also suspect the firmware was... let's say underoptimized, but I agree that the task is truly not trivial.
lazide 2 hours ago
One thing I ran across when trying to figure this out previously - while some firmware is undoubtedly dumb, a decent amount of the time it was doing a lot more than typical PC firmware does.

For instance, the slow RAM-check POST I was experiencing was because it was also doing a quick single-pass memory test. Consumer firmware goes ‘meh, whatever’.

Disk spin-up: it was also staggering the disk power-ups so that it didn't kill the PSU - not a concern if you have 3-4 drives, but definitely a concern if you have 20.

Also, the raid controller was running basic SMART tests and the like. Which consumer stuff typically doesn’t.

Now, how much of this is worthwhile depends on the use case, of course. In ‘farm of cheap PCs’ type cloud hosting environments, most of these conditions get handled by software, and it doesn't matter much if any single box is half broken.

If you have one big box serving a bunch of key infra, and reboot it periodically as part of ‘scheduled maintenance’ (aka old school on prem), then it does.

BobbyTables2 5 hours ago
Look at enterprise servers.

Completing POST in under 2 minutes is not guaranteed.

Especially the 4 socket beasts with lots of DIMMs.

Twirrim 3 hours ago
Physical servers do. It's always astounding to me how long it takes to initialise all that hardware.
kcexn 4 hours ago
Oh? What's an example of a common way for unit files to be subtly broken?
moralestapia 5 hours ago
>The purpose of a ./configure script is basically to run the compiler a bunch of times and check which runs succeeded.

Wait is this true? (!)

gdwatson 3 hours ago
Historically, different Unixes varied a lot more than they do today. Say you want your program to use the C library function foo on platforms where it’s available and the function bar where it isn’t: You can write both versions and choose between them based on a C preprocessor macro, and the program will use the best option available for the platform where it was compiled.

But now the user has to set the preprocessor macro appropriately when he builds your program. Nobody wants to give the user a pop quiz on the intricacies of his C library every time he goes to install new software. So instead the developer writes a shell script that tries to compile a trivial program that uses function foo. If the script succeeds, it defines the preprocessor macro FOO_AVAILABLE, and the program will use foo; if it fails, it doesn’t define that macro, and the program will fall back to bar.

That shell script grew into configure. A configure script for an old and widely ported piece of software can check for a lot of platform features.
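As a concrete (if simplified) sketch of that pattern, with foo/bar as the placeholder names from above and FOO_AVAILABLE as the macro the configure script would define:

  /* configure tries to compile & link something like:
         int main(void) { return foo(); }
     and defines FOO_AVAILABLE (usually via config.h) if that works. */

  int foo(void);   /* the nicer platform-specific function, if present */
  int bar(void);   /* the portable fallback                            */

  #ifdef FOO_AVAILABLE
  int do_thing(void) { return foo(); }
  #else
  int do_thing(void) { return bar(); }
  #endif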

klysm 4 hours ago
The closer and deeper you look into the C toolchains the more grossed out you’ll be
acuozzo 3 hours ago
Hands have to get dirty somewhere. "As deep as The Worker's City lay underground, so high above towered the City of Metropolis."

The choices are:

1. Restrict the freedom of CPU designers to some approximation of the PDP11. No funky DSP chips. No crazy vector processors.

2. Restrict the freedom of OS designers to some approximation of Unix. No bespoke realtime OSes. No research OSes.

3. Insist programmers use a new programming language for these chips and OSes. (This was the case prior to C and Unix.)

4. Insist programmers write in assembly and/or machine code. Perhaps a macro-assembler is acceptable here, but this is inching toward C.

The cost of this flexibility is gross tooling to make it manageable. Can it be done without years and years of accrued M4 and sh? Perhaps, but that's just CMake and CMake is nowhere near as capable as Autotools & friends are when working with legacy platforms.

Am4TIfIsER0ppos 5 hours ago
Yes.
klysm 4 hours ago
autotools is a complete disaster. It’s mind boggling to think that everything we build is usually on top of this arcane system
malkia 3 hours ago
"./configure" has always been the wrong thing for a very long long time. Also slow...