Oct 8, 2007
If you don't find programming algorithms interesting, this post is not for you.
Reservoir Sampling is an algorithm for sampling elements from a stream of data. Imagine you are given a really large stream of data elements, for example the queries hitting a search engine over the course of a month, or the purchases made at a large retailer during the holiday season.
Your goal is to efficiently return a random sample of 1,000 elements evenly distributed from the original stream. How would you do it?
The right answer is to generate 1,000 random integers between 0 and N - 1, then retrieve the elements at those indices, and you have your answer. If you need the sampled elements to be unique, just throw away indices you've already generated and draw again.
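For concreteness, here is a minimal sketch of that approach in Python, assuming the data is already materialized as an indexable list with at least 1,000 elements; the function name and parameters are my own illustration:

```python
import random

def sample_with_known_n(data, k=1000):
    """Pick k distinct random indices in [0, N - 1] and return those elements.

    Assumes len(data) >= k, otherwise the loop below cannot terminate.
    """
    chosen = set()
    while len(chosen) < k:
        # Duplicate indices are simply thrown away by the set.
        chosen.add(random.randint(0, len(data) - 1))
    return [data[i] for i in chosen]
```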
So, let me make the problem harder. You don't know N (the size of the stream) in advance and you can't index directly into it. You can count it, but that requires making two passes over the data. You can do better. There are heuristics you might try: for example, guess the length and hope to undershoot. Any such trick either needs more than one pass or doesn't produce an evenly distributed sample.
A relatively easy and correct solution is to assign a random number to every element as you see it in the stream, and then always keep the 1,000 elements with the largest random numbers. This is similar to how a SQL query with ORDER BY RAND() works. This strategy works well, and only requires additionally storing the randomly generated number for each element.
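Here is a minimal sketch of that simple solution, using a min-heap so that only the 1,000 elements with the largest random numbers are retained; the function name and k parameter are my own illustration:

```python
import heapq
import random

def random_tag_sample(stream, k=1000):
    """Tag each element with a uniform random number and keep the k largest tags."""
    heap = []  # min-heap of (tag, position, element); position breaks ties between equal tags
    for position, element in enumerate(stream):
        tag = random.random()
        entry = (tag, position, element)
        if len(heap) < k:
            heapq.heappush(heap, entry)
        elif tag > heap[0][0]:
            heapq.heapreplace(heap, entry)  # evict the element with the current smallest tag
    return [element for _, _, element in heap]
```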
Another, more complex option is reservoir sampling. First, you want to make a reservoir (array) of 1,000 elements and fill it with the first 1,000 elements in your stream. That way if you have exactly 1,000 elements, the algorithm works. This is the base case.
Next, you want to process the i'th element (starting with i = 1,001) so that at the end of that step, the 1,000 elements in your reservoir are a random sample of all the elements you've seen so far. How can you do this? Start with i = 1,001: the 1,001st element should end up in the reservoir with probability 1,000/1,001. So generate a random number between 0 and 1, and if it is less than 1,000/1,001, insert the new element into the reservoir, replacing an element chosen uniformly at random from the 1,000 already there.

I've shown that this produces a 1,000/1,001 chance of keeping the 1,001st element, but what about the elements already in the reservoir? Take the 2nd element as an example (the argument is identical for any of the first 1,000). It is in the reservoir after step 1,000 with probability 1. The probability that it gets removed in step 1,001 is the probability that the new element is selected (1,000/1,001) times the probability that the 2nd element is the one chosen for replacement (1/1,000), which is 1/1,001. So, the probability that the 2nd element survives this round is:

1 - 1/1,001 = 1,000/1,001

This is the probability we want.

This can be extended for the i'th round: insert the i'th element with probability 1,000/i, and if it is inserted, have it replace a uniformly random element in the reservoir.

It is pretty easy to prove that this works for all values of i by induction. The i'th element itself ends up in the reservoir with probability 1,000/i by construction. For an element already in the reservoir, the probability that it is replaced in round i is (1,000/i) * (1/1,000) = 1/i. This is the probability that a given element will be removed in that round given that it has made it to the reservoir so far. The probability that it isn't removed is the inverse, (i - 1)/i. The final probability that it survives to the i'th round is then its probability of being in the reservoir after round i - 1 (which by induction is 1,000/(i - 1)) times its probability of surviving round i:

1,000/(i - 1) * (i - 1)/i = 1,000/i

This is the probability we want.
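Putting those steps together, here is a minimal sketch of the algorithm described above (the function name and the k parameter are mine):

```python
import random

def reservoir_sample(stream, k=1000):
    """Return a uniform random sample of k elements from a stream of unknown length."""
    reservoir = []
    for i, element in enumerate(stream, start=1):
        if i <= k:
            reservoir.append(element)  # base case: fill the reservoir with the first k elements
        elif random.random() < k / i:
            # The i'th element is kept with probability k/i and
            # replaces a uniformly random element already in the reservoir.
            reservoir[random.randrange(k)] = element
    return reservoir
```

If the stream turns out to have fewer than 1,000 elements, the reservoir simply ends up holding all of them.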
Now take the same problem above but add an extra challenge: how would you sample from a weighted distribution, where each element in the stream has a weight associated with it? This is sorta tricky. Pavlos S. Efraimidis and Paul G. Spirakis figured out the solution in 2005 in a paper titled Weighted Random Sampling with a Reservoir. It works similarly to the assign-a-random-number solution above.
As you process the stream, assign each item a "key". For each item i in the stream, let its key be k_i = u_i^(1/w_i), where u_i is a uniform random number between 0 and 1 and w_i is the weight of item i. Now, simply keep the 1,000 items with the largest keys at all times, just as in the simple solution above.
To see how this works, let's start with non-weighted elements (i.e. weight = 1). Then k_i = u_i^(1/1) = u_i, so the key is just the uniform random number itself, and keeping the top 1,000 keys is exactly the simple solution described earlier.

Now, how does it work with weights? The probability of choosing item i over item j is the probability that k_i > k_j, which works out to w_i / (w_i + w_j). If item i has, say, twice the weight of item j, it is twice as likely to win a spot in the reservoir, which is exactly what a weighted sample should do.
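In code, the key-based scheme might look like the following sketch; I'm assuming the stream yields (element, weight) pairs with strictly positive weights, and the names are my own:

```python
import heapq
import random

def weighted_reservoir_sample(weighted_stream, k=1000):
    """Keep the k items with the largest keys u**(1/w) from a stream of (element, weight) pairs."""
    heap = []  # min-heap of (key, position, element); position breaks ties between equal keys
    for position, (element, weight) in enumerate(weighted_stream):
        key = random.random() ** (1.0 / weight)
        entry = (key, position, element)
        if len(heap) < k:
            heapq.heappush(heap, entry)
        elif key > heap[0][0]:
            heapq.heapreplace(heap, entry)  # evict the item with the smallest key
    return [element for _, _, element in heap]
```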
This is the problem that got me researching the weighted sample above. In both of the above algorithms, I can process the stream in O(N) time where N is the length of the stream, in other words: in a single pass. If I want to break up the problem across, say, 10 machines and solve it close to 10 times faster, how can I do that?
The answer is to have each of the 10 machines take roughly 1/10th of the input to process and generate their own reservoir sample from their subset of the data using the weighted variation above. Then, a final process must take the 10 output reservoirs and merge them.
The trick is that the final process must use the original "key" weights computed in the first pass. For example, if one of your 10 machines processed only 10 items for its size-10 sample, and the other nine machines each processed 1 million items, you would expect the machine with only 10 items to have smaller keys and hence be less likely to be selected in the final output. If you recompute keys in the final process, then all of the input items would be treated equally when they shouldn't be.
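As a sketch of that merge step, assume each machine emits its reservoir as (key, element) pairs so that the keys from the first pass survive to the final process (the function name is mine):

```python
import heapq

def merge_reservoirs(reservoirs, k=1000):
    """Merge per-machine reservoirs of (key, element) pairs into one final sample.

    The keys computed in the first pass are reused, NOT regenerated, so machines
    that saw more (or heavier) items keep their advantage in the final selection.
    """
    all_pairs = (pair for reservoir in reservoirs for pair in reservoir)
    top = heapq.nlargest(k, all_pairs, key=lambda pair: pair[0])
    return [element for _, element in top]
```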