- SN DBS - https://www.snsystems.com/ - Used by many game developers to distribute mostly compilation (but also shader compiles or custom jobs).
- IncrediBuild - https://www.incredibuild.com/
- FASTBuild - https://www.fastbuild.org/
- icecream - https://github.com/icecc/icecream
- Goma - https://chromium.googlesource.com/infra/goma/client/
- Bazel / Buck / the like, with various RBE back ends - https://bazel.build/remote/rbe
- distcc - https://www.distcc.org/
- ElectricAccelerator - https://docs.cloudbees.com/docs/cloudbees-build-acceleration/11.0/
- and many others...
I've mostly had experience with IncrediBuild in the past, currently SN-DBS, but colleagues are looking into FASTBuild. Though my personal favorite is Bazel.

However, this is something that icecream aimed to solve, and in my case it really did. I can't remember the actual numbers, but it provided a major gain and was super easy to set up.
So I guess it would be more interesting to see how nocc compares to icecream instead of distcc.
why don't you say what they meant to say, but better and in fewer words?
* fix that silly "got another sha256" bug; it really shouldn't be hard
* optimal job count should be infinite from the perspective of the client's `make`, but the daemon should only send jobs to the servers based on how many jobs they are willing to perform in parallel (see the first sketch after this list). Possibly split out the "upload blobs" part from the "actually run the build" part?
* you should detect compiler version differences, which is essential for reliability. This is nontrivial (wrapper scripts, internal headers), but good solutions exist if you're willing to hard-code which compilers are supported (a fingerprinting sketch follows below).
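To ground the scheduling point, here is a minimal Go sketch (not nocc's actual code) of a client-side daemon that accepts an effectively unlimited queue from `make` but starts only as many workers per server as that server agreed to run in parallel. Server names and capacities are made up:

```go
package main

import (
	"fmt"
	"sync"
)

// Job is what the local daemon receives from each wrapped compiler
// invocation spawned by make.
type Job struct{ SourceFile string }

// Server describes a remote build box and how many jobs it agreed to
// run at once.
type Server struct {
	Addr     string
	Parallel int
}

func main() {
	servers := []Server{
		{Addr: "build1:9000", Parallel: 16}, // made-up hosts and capacities
		{Addr: "build2:9000", Parallel: 32},
	}

	// From the client make's perspective the queue is effectively infinite;
	// backpressure exists only toward the servers.
	jobs := make(chan Job, 4096)

	var wg sync.WaitGroup
	for _, srv := range servers {
		// One worker per slot the server offered, so the number of
		// in-flight jobs on a server never exceeds its advertised parallelism.
		for i := 0; i < srv.Parallel; i++ {
			wg.Add(1)
			go func(srv Server) {
				defer wg.Done()
				for job := range jobs {
					// A real daemon would upload missing blobs first, then
					// run the remote compile (the split suggested above).
					fmt.Printf("compiling %s on %s\n", job.SourceFile, srv.Addr)
				}
			}(srv)
		}
	}

	jobs <- Job{SourceFile: "main.cpp"} // normally fed by make, unbounded
	close(jobs)
	wg.Wait()
}
```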
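And for the version-detection bullet, a hedged sketch of one possible approach: fingerprint each compiler by hashing its `-v` output together with the binary itself, and have client and servers compare fingerprints before dispatching any job. This is an illustration, not how nocc (or any tool named above) actually does it:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
	"os/exec"
)

// compilerFingerprint hashes `<compiler> -v` output together with the
// compiler binary itself, producing a value both sides can compare.
func compilerFingerprint(path string) (string, error) {
	h := sha256.New()

	// -v prints the version string and configure flags (to stderr for gcc);
	// CombinedOutput captures it either way.
	out, err := exec.Command(path, "-v").CombinedOutput()
	if err != nil {
		return "", err
	}
	h.Write(out)

	// Hashing the binary too helps catch wrapper scripts and patched builds
	// that report the same version string.
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", h.Sum(nil)), nil
}

func main() {
	fp, err := compilerFingerprint("/usr/bin/gcc")
	if err != nil {
		fmt.Fprintln(os.Stderr, "fingerprint failed:", err)
		os.Exit(1)
	}
	fmt.Println(fp) // refuse remote jobs when client and server disagree
}
```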
They use KPHP, which generates many huge files, similar to Meta's PHP compiler or my B::C Perl compiler in use at cPanel. In my case the outstanding work was to avoid compilation at all: split the output into many modules/libs and only compile those that changed. The main advantage is instant startup and lower memory use. With many shared libs the startup advantage shrinks, but you can still use static libs and a modern fast linker. And avoid -g and use -Os.
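As a toy illustration of the "only compile those that changed" part: a manifest of source hashes is enough to skip untouched modules. The file names and the rebuild step here are hypothetical:

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
	"os"
)

func hashFile(path string) string {
	data, err := os.ReadFile(path)
	if err != nil {
		return "" // treat unreadable/missing files as changed
	}
	return fmt.Sprintf("%x", sha256.Sum256(data))
}

func main() {
	modules := []string{"module_a.c", "module_b.c"} // hypothetical sources

	// Load the hashes recorded by the previous build, if any.
	manifest := map[string]string{}
	if data, err := os.ReadFile("build-manifest.json"); err == nil {
		json.Unmarshal(data, &manifest)
	}

	for _, src := range modules {
		h := hashFile(src)
		if manifest[src] == h {
			continue // unchanged: skip compilation entirely
		}
		fmt.Println("rebuilding", src) // e.g. gcc -Os -c src into its lib
		manifest[src] = h
	}

	data, _ := json.MarshalIndent(manifest, "", "  ")
	os.WriteFile("build-manifest.json", data, 0o644)
}
```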
`icecream ccache` (like `distcc ccache`) only handles the "multiple clients, single server" case.
`ccache icecream ccache` (like `ccache distcc ccache`) just means you're wasting even more disk space.
Now, if you configure `ccache` to use remote storage, and you can make it actually work, then that combination will compete with `nocc` (though admittedly it follows a different model). But that's a lot more moving parts.
Now, maybe remote caching is reliable nowadays, despite the reputation it inevitably acquired (it wasn't designed for this). But it's still more moving parts, and the magic is in ccache, not icecream, anyway.
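For reference, the ccache-to-icecream chain discussed above is normally wired through ccache's `CCACHE_PREFIX` setting (a real ccache feature: on a cache miss, it prepends the given command to the compiler invocation). A minimal sketch, assuming `ccache` and `icecc` are installed; the Redis URL in the commented remote-storage line is a placeholder:

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// ccache checks its local cache first; only on a miss does it run
	// `icecc gcc -c main.c -o main.o`, which distributes the compile.
	cmd := exec.Command("ccache", "gcc", "-c", "main.c", "-o", "main.o")
	cmd.Env = append(os.Environ(),
		"CCACHE_PREFIX=icecc",
		// The competing "remote storage" setup (ccache 4.4+), placeholder URL:
		// "CCACHE_REMOTE_STORAGE=redis://cache-host:6379",
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```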
Then again, this might all be asking for trouble: what if your colleague has a different version installed? Best is to distribute these artifacts (precompiled and/or in source form) some other way.
You send all header files to 1,000 boxes. Same as nocc.
Oh, and if you really want, you can make Bazel ship your /usr/bin/gcc and co., too. It is just slow, so nobody would like that. Or maybe in 2025 it no longer feels slow (though it's still wasteful)?
Though it greatly simplifies things with a little massaging (e.g. handling all those `#line`/`#pragma` directives, and making them deterministic using some path mapping, etc.).
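The path-mapping part can be as small as one flag: GCC and Clang's `-ffile-prefix-map` remaps the build directory to a stable token, so `__FILE__`, debug info, and friends don't embed machine-specific absolute paths. A sketch; everything besides the flag itself is illustrative:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cwd, err := os.Getwd()
	if err != nil {
		os.Exit(1)
	}
	args := []string{
		"-c", "main.c", "-o", "main.o",
		// Objects now record "/proj/main.c" regardless of which box
		// (or which checkout directory) compiled them.
		fmt.Sprintf("-ffile-prefix-map=%s=/proj", cwd),
	}
	cmd := exec.Command("gcc", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```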
"colleagues" does not make much sense for an open-source project built by people all around the world with various environments and where the tool should accomodate all of them