He did this by hiking a fixed route and, at fixed intervals, scaring the birds so they would fly up and could be counted.
The total count was submitted to some office which used it to estimate the population.
One year he had to travel abroad when the counting had to be done, so he recruited a friend and explained in detail how to do it.
However, when the day of the count arrived, his friend forgot; it was a huge hassle anyway, so he just submitted a number he figured was about right, and that was that.
Then one day the following year, the local newspaper ran a front-page headline stating "record increase in ptarmigan population".
The reason it was big news was that the population estimate was used to set the hunting quotas, something his friend had not considered...
I once worked on a reservation system for some pretty big ski resorts.
We were running late, working nights, and one of the last things we had to finish was the official statistics report on the number of guest nights etc. that gets published by the government.
Let's just say that the statistics that year had little to do with reality.
I’m the author of this post. Happy to answer any questions, and I'd love to get feedback.
The code for all of my posts can be found at https://github.com/samwho/visualisations and is MIT licensed, so you’re welcome to use it :)
Another interesting direction you can take reservoir sampling: instead of drawing a random number for each item (to see whether it replaces an existing item, and which one), you generate a number from a geometric distribution telling you how many items you can safely skip before the next replacement.
That's especially interesting if you can skip many items cheaply, e.g. because you can fast-forward your tape drive (but you don't know up front how long your tape is), or because you can send almost your whole system to sleep during skips.
For n items to sample from, this system does about O(k * log (n/k)) samples and skips.
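A minimal Python sketch of this skip-based variant (in the spirit of Vitter's Algorithm L; the function name and structure are mine, just for illustration):

```python
import math
import random

def reservoir_sample_skipping(stream, k):
    """Reservoir sample of size k, drawing skip lengths instead of a coin per item."""
    it = iter(stream)
    reservoir = []
    for item in it:                      # Fill the reservoir with the first k items.
        reservoir.append(item)
        if len(reservoir) == k:
            break
    if len(reservoir) < k:
        return reservoir                 # The stream was shorter than k.

    end = object()                       # Sentinel marking stream exhaustion.
    w = math.exp(math.log(random.random()) / k)
    while True:
        # Number of items we can safely skip before the next replacement.
        skip = math.floor(math.log(random.random()) / math.log(1.0 - w))
        for _ in range(skip):            # With a tape drive this would be a fast-forward.
            if next(it, end) is end:
                return reservoir
        item = next(it, end)
        if item is end:
            return reservoir
        reservoir[random.randrange(k)] = item      # Replace a uniformly random slot.
        w *= math.exp(math.log(random.random()) / k)
```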
Conceptually, I prefer the version of reservoir sampling that has you generate a fixed random 'priority' for each card as it arrives, and then keep the top k items by priority around. That brings me to another interesting related algorithmic problem: selecting the top k items out of a stream of elements of unknown length in O(n) time and O(k) space. Naive approaches to reaching O(k) space will give you O(n log k) time, e.g. if you keep a min-heap around.
What you can do instead is keep an unordered buffer of capacity up to 2k. As each item arrives, you add it to the buffer. When the buffer is full, you prune it to the top k elements in O(k) with e.g. randomised quickselect or median-of-medians. You do that O(2k) work once every k elements over n elements total, giving you the required O(n) = O(n * 2k / k) runtime.
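A rough sketch of that buffer-and-prune idea (names are mine; the prune uses a randomised quickselect so each prune is expected O(k)):

```python
import random

def _top_k_unordered(items, k):
    """Return the k largest items (unordered) in expected O(len(items)) time."""
    if len(items) <= k:
        return list(items)
    pivot = random.choice(items)
    greater = [x for x in items if x > pivot]
    if len(greater) >= k:
        return _top_k_unordered(greater, k)
    equal = [x for x in items if x == pivot]
    if len(greater) + len(equal) >= k:
        return greater + equal[:k - len(greater)]
    less = [x for x in items if x < pivot]
    return greater + equal + _top_k_unordered(less, k - len(greater) - len(equal))

def top_k_of_stream(stream, k):
    """Top k of a stream of unknown length in O(n) time and O(k) space."""
    buffer = []
    for item in stream:
        buffer.append(item)
        if len(buffer) >= 2 * k:
            buffer = _top_k_unordered(buffer, k)   # Prune back down to k items.
    return _top_k_unordered(buffer, k)
```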
Another related topic is rendezvous hashing: https://en.wikipedia.org/wiki/Rendezvous_hashing
Tangentially related: https://www.keithschwarz.com/darts-dice-coins/ is a great write-up on the alias method for sampling from a discrete random distribution.
Btw, a slightly related question:
Suppose you have a really long text file: how would you randomly sample a line, such that all lines in the text file have exactly the same probability? Ideally, you want to do this without spending O(size of file) time preprocessing.
(I don't think this is a good interview question, but it is an interesting question.)
One way: sample uniformly random character positions until you hit a newline. That's the newline at the end of your line.
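A sketch of that rejection-sampling idea (assumes the file is seekable and every line, including the last, ends in a newline):

```python
import os
import random

def random_line(path):
    """Pick a uniformly random line by rejection-sampling byte offsets."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        while True:
            pos = random.randrange(size)
            f.seek(pos)
            if f.read(1) != b"\n":
                continue  # Not a newline; reject and resample.
            # Found the newline that ends our sampled line; walk back to its start.
            start = pos
            while start > 0:
                f.seek(start - 1)
                if f.read(1) == b"\n":
                    break
                start -= 1
            f.seek(start)
            return f.read(pos - start + 1).decode()
```

Each line owns exactly one newline byte, so every trial hits each line with probability 1/size, which is why the sample comes out uniform regardless of line lengths; the expected number of trials is the average line length.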
https://claude.ai/public/artifacts/62d0d742-3316-421b-9a7b-d... has a 'very static' visualisation of sorting algorithms. Basically, we have a 2d plane, and we colour a pixel (x, y) black iff the sorting algorithm compares x with y when it runs. It's a resurrection (with AI) of an older project I was coding up manually at https://github.com/matthiasgoergens/static-sorting-visualisa...
I'm also working on making https://cs.stackexchange.com/q/56643/50292 with its answer https://cs.stackexchange.com/a/171695/50292 more accessible. It's a little algorithmic problem I've been working on: 'simulate' a heap in O(n) time. I'm also developing a new, really simple implementation of soft heaps. And on my write-up for the solution to https://github.com/matthiasgoergens/TwoTimePad/blob/master/d...
> I actually read that post on the alias method just the other day and was blown away. I think I’d like to try making a post on it. Wouldn’t be able to add anything that link hasn’t already said, but I think I can make it more accessible.
If memory serves, they don't say much about how you can efficiently support changes to your discrete probability distribution.
I'd mostly just appreciate a beta tester / beta reader.
https://github.com/tmoertel/practice/blob/master/libraries%2...
It is limited to integer weights only to make it easy to verify that the algorithm implements the requested distribution exactly. (See the test file in the same directory.)
Each floating-point number is also a rational number, so you could convert to integer weights and then restrict back to floating point afterwards.
[0] https://github.com/hmusgrave/zalias It's nothing special, just an Array-of-Struct-of-Array implementation so that biases and aliases are always in the same cache line.
I find it easier to reason about reservoir sampling in an alternative formulation: the article talks about flipping a random (biased) coin for each arrival. Instead we can re-interpret reservoir sampling as assigning a random priority to each item, and then keeping the items with the top k priority.
It's fairly easy to see in this reformulation whether specific combinations of algorithms would compose: you only need to think about whether they would still select the top k items by priority.
But depending on what you need, you might also just do (random priority + weight * category) or so. Or you just keep separate reservoirs for high importance items and for everything else.
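A minimal sketch of the priority formulation (keep the k items with the largest random priorities; the names here are mine):

```python
import heapq
import random

def priority_reservoir(stream, k):
    """Uniform reservoir sample of size k via random priorities.

    Keeping the (priority, item) pairs around also makes it easy to merge
    samples from separate streams or intervals: just take the top k pairs overall."""
    heap = []  # Min-heap of (priority, tiebreaker, item).
    for i, item in enumerate(stream):
        entry = (random.random(), i, item)
        if len(heap) < k:
            heapq.heappush(heap, entry)
        elif entry[0] > heap[0][0]:
            heapq.heapreplace(heap, entry)
    return [item for _, _, item in heap]
```

This heap-based version is the naive O(n log k) approach mentioned above; the 2k-buffer pruning trick gets it down to O(n) if that matters.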
The dogs on the playing cards were commissioned just for this post. They’re all made by the wonderful https://www.andycarolan.com/.
The colour palette is the Wong palette that I learned about from https://davidmathlogic.com/colorblind/.
Oh, and you can pet the dogs. :)
Thank you for using a colour-blind-friendly palette; as someone with deuteranopia, I appreciate it :)
I view all of my posts using the various colour blindness filters in the Chrome dev tools during development, to make sure I'm not using any ambiguous pairings. I'm glad that effort made you feel welcome and able to enjoy the content fully.
Doe's bandana is cool, your dogs must worship you for your commitment to them!
My only suggestion is a way to slow down or ^S the log to read the funny messages, since they were flying by so fast I could only get a glimpse, even with reservoir sampling.
something something "needs more emojis"! ;)
One thing that threw me for a bit is when it switched from the intro, picking 3 cards at random from a deck of 10 or 436,234, to picking just one card. It seems as if it almost needs a section heading before "Now let me throw you a curveball: what if I were to show you 1 card at a time, and you had to pick 1 at random?" indicating that we're now switching to a simplifying assumption: we're holding only 1 card, not 3, but we also don't know the size of the deck.
However, I'm not sure I understand the statistical soundness of this approach. I get that every log during a given period has the same chance to be included, but doesn't that mean that logs that happen during "slow periods" are disproportionately overrepresented in overall metrics?
For example, if I want to optimize my code, and I need to know which endpoints are using the most time across the entire fleet to optimize my total costs (CPU-seconds or whatever), this would be an inappropriate method to use, since endpoints that get bursty traffic would be disproportionately underrepresented compared to endpoints that get steady, constant traffic. So I'd end up wasting my time working on endpoints that don't actually get a lot of traffic.
Or if I'm trying to plan capacity for different services, and I want to know how many nodes to be running for each service, services that get bursty traffic would be underrepresented as well, correct?
What are the use-cases that reservoir sampling is good for? What kind of statistical analysis can you do on the data that's returned by it?
Yes, of course.
You can fix this problem, however. There are (at least) two ways:
You can do an alternative interpretation and implementation of reservoir sampling: for each item you generate and store a random priority as it comes into the system. For each interval (eg each second) you keep the top k items by priority. If you want to aggregate multiple intervals, you keep the top k (or less) items over the intervals.
This automatically treats all items the same, whether they arrived during busy or quiet periods.
An alternative view of the same approach doesn't store any priorities, but stores the number of dropped items each interval. You can then do some arithmetic to tell you how to combine samples from different intervals; very similar to what's in the article.
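A sketch of the first way, assuming each interval's reservoir keeps its (priority, item) pairs:

```python
import heapq

def merge_interval_samples(interval_samples, k):
    """Combine per-interval reservoirs into one sample of (at most) k items.

    Each element of interval_samples is a list of (priority, item) pairs;
    keeping the top k by priority across all intervals treats items from
    quiet and busy periods exactly the same."""
    all_pairs = (pair for sample in interval_samples for pair in sample)
    top = heapq.nlargest(k, all_pairs, key=lambda pair: pair[0])
    return [item for _, item in top]
```

This works because any item in the global top k by priority is necessarily in its own interval's top k, so the merged result is exactly the sample you would have gotten from one big reservoir.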
> What are the use-cases that reservoir sampling is good for? What kind of statistical analysis can you do on the data that's returned by it?
Anything you can do on any unbiased sample? Or are you talking about the specific variant in the article where you do reservoir sampling afresh each second?
The use-case I chose in the post focuses more on protecting some centralised service while making sure that when you do throw things away, you're not doing it in a way that creates blind spots (e.g. you pick a rate limit of N per minute, your traffic is inherently bursty around the top of the minute, and you never see logs for anything in the tail end of the minute).
A fun recent use-case you might have seen was in https://onemillionchessboards.com. Nolen uses reservoir sampling to maintain a list of boards with recent activity. I believe he is in the process of doing a technical write-up that'll go into more detail.
Reminds me a bit about https://distill.pub/
Was very sad when they announced their hiatus. Made me nervous about the viability of this sort of content.
You may also enjoy https://pudding.cool.
An advanced extension to this is that there are algorithms which calculate the number of records to skip rather than doing a trial per record. Here's a good write-up of them: https://richardstartin.github.io/posts/reservoir-sampling
A light transport estimator is trying to figure out how much light flows through a scene (https://en.wikipedia.org/wiki/Radiance). For that it has to integrate the radiance across all the possible paths light could take, while maintaining the conservation of energy (https://en.wikipedia.org/wiki/Rendering_equation).
In all but the most trivial cases this integral of the rendering equation has no tractable closed-form solution, so it is solved stochastically. The very basic idea is the Monte Carlo method (https://en.wikipedia.org/wiki/Monte_Carlo_method): randomly sample as many paths as you can and average them (see the toy sketch after the list below). From there, more sophisticated sampling strategies were developed over the last decades:
- Importance Sampling (IS)
- Multiple Importance Sampling (MIS)
- Sample Importance Resampling (SIR)
- Resampled Importance Sampling (RIS)
- Weighted Reservoir Sampling (WRS)
- And finally combining RIS and WRS into ReSTIR
For an in-depth read, see: https://agraphicsguynotes.com/posts/understanding_the_math_b...
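Stripped of all the rendering machinery, the basic Monte Carlo idea above is just "average random samples of the integrand". A toy 1-D analogue (not a renderer):

```python
import random

def monte_carlo_integral(f, a, b, n):
    """Estimate the integral of f over [a, b] by averaging n uniform samples."""
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# E.g. estimating the integral of x^2 over [0, 1] (true value: 1/3).
print(monte_carlo_integral(lambda x: x * x, 0.0, 1.0, 100_000))
```

Importance sampling and its descendants are all about choosing the sampling distribution (and the corresponding weights) so that this kind of estimate converges with far fewer samples.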
On a practical level though, this would be the last thing I would use for log collection. I understand that when there is a spike, something has to be dropped. What should this something be?
I don't see the point of being "fair" about what is dropped.
I would use fairness as a last resort, after trying other things:
- Drop lower-priority logs: if your log messages have levels (debug, info, warning, error), prioritize higher-severity events, discarding the verbose/debug ones first.
- Contextual grouping: treat a sequence of logs as parts of an activity. For a successful activity, maybe record only the start and end events (or key state changes) and leave out repetitive in-between logs.
- Aggregation and summarization: instead of storing every log line during a spike, aggregate similar or redundant messages into a summarized entry. This not only reduces volume but also highlights trends.
Reservoir sampling can handle all of that.
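For the severity point in particular, a weighted reservoir sample (e.g. the Efraimidis–Spirakis scheme, where each item gets priority u^(1/w)) keeps higher-severity logs with correspondingly higher probability. A sketch with made-up weights:

```python
import heapq
import random

# Hypothetical severity weights; higher weight => more likely to survive sampling.
WEIGHTS = {"debug": 1, "info": 2, "warning": 8, "error": 32}

def weighted_log_reservoir(records, k):
    """Weighted reservoir sample of log records, keyed on their 'level' field."""
    heap = []  # Min-heap of (priority, tiebreaker, record).
    for i, record in enumerate(records):
        weight = WEIGHTS.get(record["level"], 1)
        priority = random.random() ** (1.0 / weight)
        entry = (priority, i, record)
        if len(heap) < k:
            heapq.heappush(heap, entry)
        elif entry[0] > heap[0][0]:
            heapq.heapreplace(heap, entry)
    return [record for _, _, record in heap]
```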
> All in all, Algorithm R was known to Knuth and Waterman by 1975, and to a wider audience by 1981, when the second edition of The Art of Computer Programming volume 2 was published.
There's also a distributed version, easy to do with map-reduce.
Or the very simple algorithm: generate a random number paired with each item in the stream and keep the top N ordered by that random number.
I discuss these issues more here: https://blog.moertel.com/posts/2024-08-23-sampling-with-sql....
We use a variation of this sort of thing at $WORK to solve a related problem, where you want to estimate some percentile from a running stream, with the constraints that the percentile you want to choose changes from time to time but is generally static for a trillion or more iterations (and the underlying data is quasi-stationary). If you back the process by a splay tree, you can get amortized O(1) percentile estimates (higher error bars for a given RAM consumption than a number of other techniques, but very fast).
You can also play with the replacement probability to, e.g., have a "data half-life" (denominated either in time or discrete counts) and bias the estimate toward recent events, which is more suitable for some problems.
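One way the half-life idea could look (a sketch under my own assumptions, not their implementation; the half-life here is counted in items and must be large enough relative to k for the replacement probability to stay below 1):

```python
import random

def decayed_reservoir(stream, k, half_life_items):
    """Reservoir biased toward recent items: each arrival replaces a random
    slot with a fixed probability, so a stored item's chance of surviving
    decays exponentially with the chosen half-life (measured in arrivals)."""
    replace_prob = k * (1.0 - 0.5 ** (1.0 / half_life_items))
    assert replace_prob <= 1.0, "half-life too short for this reservoir size"
    reservoir = []
    for item in stream:
        if len(reservoir) < k:
            reservoir.append(item)
        elif random.random() < replace_prob:
            reservoir[random.randrange(k)] = item
    return reservoir
```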
Observability: easily one of the more underestimated fields in computing.
It's interesting that people seem to think that sampling mathematics somehow applies to modems or RF but not to the data they are looking at. Things like aliasing absolutely matter for observability/telemetry.
I like the visualizations in this article, really good explanation.
I knew it from before my interview, from a Turbo Pascal program I had seen that sampled DAT tape backups of patient records from a hospital system. These samples were used for studies. That was a textbook example of its utility.