Estimating the number of distinct elements in a large dataset is a common task. While there are straightforward approaches, they can be memory-intensive and slow for massive datasets. Today I’m going to take a look at the F0 Estimator introduced in the paper Distinct Elements in Streams: An Algorithm for the (Text) Book. As usual, this will be an implementation in F#.
First things first, what is the goal? In some cases the data being processed is so large that calculating an exact count of distinct values is impractical. An alternative is to estimate the count instead. The key is to provide an estimate that is accurate enough, while also being performant (across whatever axis matters). The goal here is not to go through the proofs, but to understand the implementation.
With that said, I need to implement the paper’s algorithm in F#. One nice aspect of this paper is that the algorithm is pretty simple; that is also the point of the paper. I’ll get into performance later, but it is enjoyable to see benefits from a solution that isn’t too hard to understand. At a high level, the algorithm iterates through the data, keeping track of items it has seen (using a Set). This is the first place where probabilities come into play: a value is only added to the set randomly, based on a variable sampling rate (p). Also, in order to save space, the set has a limited capacity (thresh). This inevitably leads to the set filling up. When that happens, space needs to be freed for further processing, which is done by removing each item with a 50% probability. At the same time, the sampling rate (p) mentioned earlier is halved. The resulting dynamic is that as processing progresses, it becomes less likely that a new item will be added to the set. Once processing is complete, the estimate is calculated as: set size / p. Accuracy and space used are driven by the epsilon and delta parameters. Lower values result in higher accuracy but more space being used; higher values result in reduced accuracy but space savings. And that’s pretty much it. Below is what the F# code looks like.
let f0Estimator (A: string[]) (epsilon: float) (delta: float) =
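    // NOTE: the body below is a sketch reconstructed from the description above,
    // following the paper's algorithm as I understand it; the exact listing in the
    // original post may differ (e.g. how randomness and thresh are expressed).
    let rand = System.Random()
    let m = float A.Length
    // Maximum set size, driven by the epsilon and delta accuracy parameters
    let thresh = int (ceil ((12.0 / (epsilon ** 2.0)) * (log ((8.0 * m) / delta) / log 2.0)))
    let mutable p = 1.0
    let mutable X = Set.empty<string>
    for a in A do
        // Re-admit the current item with probability p (dropping any previous copy first)
        X <- Set.remove a X
        if rand.NextDouble() < p then
            X <- Set.add a X
        // When the set fills up, keep each item with 50% probability and halve p
        if Set.count X = thresh then
            X <- X |> Set.filter (fun _ -> rand.NextDouble() < 0.5)
            p <- p / 2.0
    // Final estimate: set size / p
    float (Set.count X) / p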
So, what do the results look like? For the following analysis I used a 1GB text file that has 27M lines, 184M words, and 132,876 unique words. There are two main things I care about: accuracy and performance. I’ll look at estimation accuracy first. Below are the results with several different epsilon and delta values. I also include thresh, which is the maximum size of the set being used (and which directly impacts memory usage). Depending on your tolerance level, the results are reasonable. Looking at epsilon and delta values from 1 down to 0.1, you can see the memory tradeoff. A lower thresh uses less memory, but it also shows a larger difference from the actual count and more variation between runs. Using more memory (a higher thresh) results in higher accuracy as well as more consistent results.
Actual : 132876
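As a side note, the thresh values reported alongside results like these follow directly from the epsilon/delta formula used in the estimator sketch above. Here is a minimal helper for exploring that relationship; threshFor is my name for it (not from the post), and m is assumed to be the total number of words processed (roughly 184M for this file):

// Sketch: derive thresh (maximum set size) from epsilon, delta, and the input size m.
// The constants mirror the formula used in f0Estimator above.
let threshFor (epsilon: float) (delta: float) (m: float) =
    int (ceil ((12.0 / (epsilon ** 2.0)) * (log ((8.0 * m) / delta) / log 2.0)))

// e.g. comparing threshFor 1.0 1.0 184_000_000.0 with threshFor 0.1 0.1 184_000_000.0
// shows how quickly the memory budget grows as accuracy is tightened.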
Next, what does the performance look like? I used BenchmarkDotNet and again looked at several different settings. For comparison, I use a naive “countDistinct” and a more efficient “countDistinctWithHashSet” (see below). One thing that quickly becomes obvious is that memory allocations are drastically reduced when using estimated counting. This is expected, and one of the primary goals. The estimator allocates several orders of magnitude less than the naive version, and at least halves the allocations of the HashSet benchmark. This savings in memory allocations is a compelling reason to use estimated counting. As you might imagine, the amount of allocation corresponds to the epsilon and delta settings: higher levels of accuracy require more allocations. The second takeaway is execution performance; it’s approximately twice as fast as both of the counts used as control groups. These results are a nice demonstration, and a concrete view, of the trade-offs, and of why one would want to use estimated counting.
// Naive
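// (sketch: the exact implementations in the original post may differ)
let countDistinct (A: string[]) =
    A |> Array.distinct |> Array.length

// More efficient: a single mutable HashSet
let countDistinctWithHashSet (A: string[]) =
    let set = System.Collections.Generic.HashSet<string>()
    for a in A do
        set.Add a |> ignore
    set.Count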
# Performance:
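For completeness, here is roughly how such a BenchmarkDotNet comparison can be wired up in F#. This is a sketch rather than the post’s actual benchmark class; the type name, member names, input file, and the epsilon/delta values passed to the estimator are all illustrative:

open BenchmarkDotNet.Attributes
open BenchmarkDotNet.Running

[<MemoryDiagnoser>]
type DistinctCountBenchmarks() =
    // Hypothetical input file; the post used a 1GB text file
    let words = System.IO.File.ReadAllText("words.txt").Split(' ')

    [<Benchmark(Baseline = true)>]
    member _.Naive() = countDistinct words

    [<Benchmark>]
    member _.HashSet() = countDistinctWithHashSet words

    [<Benchmark>]
    member _.Estimated() = f0Estimator words 0.5 0.5

// BenchmarkRunner.Run<DistinctCountBenchmarks>() |> ignore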
There is one additional aspect to tease out. For comparison purposes, everything so far has been based on an array. But considering the idea is to use a method like this on very large data, there is value in seeing how it applies when processing a stream. Below is an adaptation that performs an estimated distinct word count on a stream. The estimation logic is the same as above. The major difference is that because the data size isn’t known beforehand, thresh (the maximum set size) needs to be provided explicitly. A suitable value can be calculated manually based on the use case, the anticipated data size, and the desired memory profile.
let streamF0Estimator (thresh: int) (stream: Stream) =
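    // Sketch of the streaming variant's body, based on the description above;
    // reading line-by-line and splitting on spaces are my assumptions about
    // how the post tokenizes words.
    let rand = System.Random()
    let mutable p = 1.0
    let mutable X = Set.empty<string>
    use reader = new System.IO.StreamReader(stream)
    let mutable line = reader.ReadLine()
    while not (isNull line) do
        for word in line.Split([| ' ' |], System.StringSplitOptions.RemoveEmptyEntries) do
            X <- Set.remove word X
            if rand.NextDouble() < p then
                X <- Set.add word X
            // thresh is supplied by the caller instead of derived from epsilon/delta
            if Set.count X = thresh then
                X <- X |> Set.filter (fun _ -> rand.NextDouble() < 0.5)
                p <- p / 2.0
        line <- reader.ReadLine()
    // Estimate: set size / p
    float (Set.count X) / p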
So what does all this mean? First, if you’re willing to use estimates when counting distinct values, there are some good ways to get performance benefits in both processing time and memory. In reality though, this isn’t new. What is unique in this case is that the estimation algorithm is pretty easy to implement and understand. That can go a long way when you’re concerned with long term maintenance. If nothing else, there is value in investigating and understanding alternate approaches. That’s all I have for now. Until next time.
References:
Distinct Elements in Streams: An Algorithm for the (Text) Book (original source: https://arxiv.org/pdf/2301.10191)