This is not a fast way to sort on GPU. The fastest known sorting algorithm on CUDA is Onesweep, which uses a lot of sophisticated techniques to take advantage of GPU-style parallelism and work around its limitations.
Linebender is working (slowly) on adapting these ideas to GPUs more portably. There's a wiki page here with some resources:
The second one is Thomas Smith's independent reimplementation of Onesweep. For the official version, see https://github.com/NVIDIA/cccl . The Onesweep implementation is in cub/cub/agent/agent_radix_sort_onesweep.cuh .
As the other posts have said, this isn't the right algorithm. Onesweep and its kin are cool but intimidating. The magic is easier to understand if one looks into the core algorithm: radix sort.
It turns out that radix sort can be implemented in a way that is very straightforward to parallelize. It's a beautiful and elegant approach and worth knowing about!
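To make that concrete, here's a minimal serial sketch of the usual least-significant-digit approach (my own illustration, not anyone's production code). The point is that each pass is just a digit histogram, an exclusive prefix sum, and a stable scatter - and each of those phases has a well-known GPU-parallel counterpart (privatized histograms with atomics, a parallel scan, and a scatter kernel):

    #include <cstddef>
    #include <cstdint>
    #include <utility>

    // One 8-bit pass of LSD radix sort, written serially but structured as the
    // three phases that map onto GPU kernels:
    //   1) histogram of digit counts   -> per-block (privatized) histograms + atomics
    //   2) exclusive prefix sum        -> a parallel scan (CUB/Thrust provide one)
    //   3) stable scatter              -> a kernel writing each key to its scanned offset
    void radix_pass(const uint32_t *in, uint32_t *out, size_t n, int shift) {
        size_t count[256] = {0};

        // Phase 1: count how many keys have each value of the current 8-bit digit.
        for (size_t i = 0; i < n; ++i)
            count[(in[i] >> shift) & 0xFF]++;

        // Phase 2: exclusive prefix sum turns counts into starting offsets.
        size_t sum = 0;
        for (int b = 0; b < 256; ++b) {
            size_t c = count[b];
            count[b] = sum;
            sum += c;
        }

        // Phase 3: stable scatter; keys with equal digits keep their relative
        // order, which is what lets the later (more significant) passes work.
        for (size_t i = 0; i < n; ++i)
            out[count[(in[i] >> shift) & 0xFF]++] = in[i];
    }

    // Four passes over 8-bit digits sort 32-bit keys; the buffers ping-pong,
    // and after an even number of passes the sorted data is back in `keys`.
    void radix_sort(uint32_t *keys, uint32_t *tmp, size_t n) {
        for (int shift = 0; shift < 32; shift += 8) {
            radix_pass(keys, tmp, n, shift);
            std::swap(keys, tmp);
        }
    }

GPU radix sorts like the one in CUB are, roughly speaking, heavily optimized versions of this same three-phase structure, with the phases fused and wider digits per pass.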
I've looked at using GPU sorting from Unity before, but I couldn't get past the bottleneck of having to move the data to the GPU and then back again. There's some overhead to using e.g. compute shaders as well, but not nearly as much.
Kind of a fun toy problem to play around with. I noticed you had thread coarsening as an option - there is often some gain to be had there. This is also a fun thing to explore with Nsight - the things hurting your performance aren't always obvious, and it's a pretty good profiler, so it might be worth some time. (I wrote about a fun interaction between thread coarsening and automatic loop unrolling that Nsight helped me track down here: https://www.spenceruresk.com/loop-unrolling-gone-bad-e81f66f...)
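In case it helps anyone reading along, thread coarsening is basically this (a hypothetical CUDA sketch - the kernel name and COARSEN factor are made up for illustration): each thread processes COARSEN elements instead of one, strided by the total thread count so neighbouring threads still touch neighbouring addresses and loads stay coalesced:

    // Each thread scales COARSEN elements instead of one.
    constexpr int COARSEN = 4;

    __global__ void scale_coarsened(float *out, const float *in, float a, int n) {
        int stride = gridDim.x * blockDim.x;              // total launched threads
        int idx    = blockIdx.x * blockDim.x + threadIdx.x;
        #pragma unroll
        for (int i = 0; i < COARSEN; ++i, idx += stride) {
            if (idx < n) out[idx] = a * in[idx];
        }
    }

    // Launch with 1/COARSEN as many blocks as the one-element-per-thread version:
    //   int threads = 256;
    //   int blocks  = (n + threads * COARSEN - 1) / (threads * COARSEN);
    //   scale_coarsened<<<blocks, threads>>>(d_out, d_in, 2.0f, n);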
You may also want to look at other sorting algorithms - common CPU sorting algorithms are hard to map efficiently onto GPU hardware - a sorting network like bitonic sort does more total work (and you have to pad the input to a power of 2), but it often runs much faster on parallel hardware.
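For a concrete picture, the whole bitonic network boils down to one tiny compare-and-swap kernel launched O(log^2 n) times from the host (a textbook-style sketch, assuming the input is already padded to a power of two):

    // One compare-and-exchange step of a bitonic sorting network.
    __global__ void bitonic_step(float *data, int j, int k) {
        unsigned i   = blockIdx.x * blockDim.x + threadIdx.x;
        unsigned ixj = i ^ j;                        // partner index for this step
        if (ixj > i) {
            bool ascending = ((i & k) == 0);         // direction of this subsequence
            if ((ascending  && data[i] > data[ixj]) ||
                (!ascending && data[i] < data[ixj])) {
                float tmp = data[i];
                data[i]   = data[ixj];
                data[ixj] = tmp;
            }
        }
    }

    // Host side (assumes n is a power of two >= 256 so the launch divides evenly):
    //   for (int k = 2; k <= n; k <<= 1)
    //       for (int j = k >> 1; j > 0; j >>= 1)
    //           bitonic_step<<<n / 256, 256>>>(d_data, j, k);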
I had a fairly naive implementation that would sort 10M elements in around 10ms on an H100. I'm sure with more work these sorts can get quite a bit faster, but the inputs need to be fairly large to make up for the kernel launch overhead.
For a more convenient way to use GPUs for algorithms like this, the Futhark language [1] can be very valuable. It is a very high-level language that compiles to GPU code, and the generated code can be wrapped up as Python libraries. Their website has an example merge sort implementation [2].
Lololol then why are you recommending it like you know anything about it? I will never understand this kind of comment on hn - like you don't get why hyping something up that you don't actually understand is bad?
It's great since it works easily with the CUDA driver API, unlike CUB, which is mostly tied to the runtime API. It also has Onesweep, but I haven't been able to make that one work.
Love your notes explaining some of the GPU thread-indexing concepts!
Shameless plug for my own little post going a bit into the performance benefits of "vectorized" sorting, even vs. programmable L1 cache:
https://winwang.blog/posts/bitonic-sort/
The TL;DR is the implementer was able to use GPUs to get merge sort speedups that exceeded CPUs once the number of elements being sorted was >10,000,000.
thrust::sort comes from Thrust, an NVIDIA C++ parallel-algorithms library that ships with the CUDA toolkit. The article's author started out implementing a merge sort in CUDA, but once it was slower than the CPU, they tried thrust::sort and were able to get a faster result in some cases. The author did not yet try a parallel merge sort.
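For anyone who hasn't seen it, calling it is about this simple (a minimal sketch, not the article's code; integer keys let Thrust take its fast radix-sort path):

    #include <thrust/host_vector.h>
    #include <thrust/device_vector.h>
    #include <thrust/sort.h>
    #include <thrust/copy.h>
    #include <cstdlib>

    int main() {
        thrust::host_vector<int> h(10000000);
        for (size_t i = 0; i < h.size(); ++i) h[i] = std::rand();

        thrust::device_vector<int> d = h;             // host -> device copy
        thrust::sort(d.begin(), d.end());             // runs on the GPU
        thrust::copy(d.begin(), d.end(), h.begin());  // device -> host copy
        return 0;
    }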
I would be curious whether anyone knows which database engines take advantage of GPUs and see actual sort/query performance boosts, and on what dataset sizes. My impression is that a few engines have tried it, but the payoff is small enough that it hasn't been adopted industry-wide.
Modern database engines are bandwidth bound. Moving data to and from GPUs is expensive and slow by database-engine standards, so any performance gain from the GPU's higher memory bandwidth is usually lost to the data-transfer overhead and much lower effective I/O bandwidth.
Every database co-processor, not just GPUs, has had the same issue.
1. Parallel radix sort (CPU or GPU) can also be pretty fast. For input distributions without too much pattern and keys that are not too large, radix sort is probably the fastest if you can use it.
2. For typical applications, memory transfer speed matters more than sorting performance on the GPU. If most of your work is done on the CPU, transferring the data to the GPU may take more time than sorting the array (a rough way to measure that split is sketched below). Not sure if unified memory (Apple M-series chips and AMD's new APUs) can remedy this, though.
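Here's the kind of measurement I mean - put CUDA events around the host-to-device copy and the sort itself (a rough sketch; the split will vary a lot with PCIe generation and GPU):

    #include <thrust/device_ptr.h>
    #include <thrust/sort.h>
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    int main() {
        const size_t n = 1 << 24;                   // ~16M keys
        std::vector<unsigned> h(n);
        for (auto &x : h) x = std::rand();

        unsigned *d = nullptr;
        cudaMalloc(&d, n * sizeof(unsigned));

        cudaEvent_t t0, t1, t2;
        cudaEventCreate(&t0); cudaEventCreate(&t1); cudaEventCreate(&t2);

        cudaEventRecord(t0);
        cudaMemcpy(d, h.data(), n * sizeof(unsigned), cudaMemcpyHostToDevice);
        cudaEventRecord(t1);
        thrust::sort(thrust::device_ptr<unsigned>(d),
                     thrust::device_ptr<unsigned>(d + n));
        cudaEventRecord(t2);
        cudaEventSynchronize(t2);

        float copy_ms = 0, sort_ms = 0;
        cudaEventElapsedTime(&copy_ms, t0, t1);     // time spent just moving data over
        cudaEventElapsedTime(&sort_ms, t1, t2);     // time spent actually sorting
        std::printf("H2D copy: %.2f ms   GPU sort: %.2f ms\n", copy_ms, sort_ms);

        cudaFree(d);
        return 0;
    }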
Not a database engine, but I'm processing distributed dataframes (~400TB). We're seeing a perf/price ratio around 3x~4x, though it's only a limited sample of workloads/datasets. (Note: we are not sorting, yet)
As far as I can tell, it's less that the payoff is small in absolute terms, and more that it's small considering the immaturity of GPU programming, the scarcity of people who can do it, and the availability of GPUs (esp. on-prem).