Occasionally the need arises in an F# project to perform benchmarking. BenchmarkDotNet is a powerful tool made exactly for this purpose. Today’s post provides an introductory look into the process.
Although `#time` and `Stopwatch` are useful for quick-and-dirty checks, BenchmarkDotNet allows a more comprehensive look at performance characteristics. This post uses sorting as a case study to show a sample of what can be done. Before getting started, ensure you have the .NET Core 2.2 SDK for your platform. After that, create a console F# project and install the BenchmarkDotNet package.
```shell
dotnet new console --language F# --name BenchmarkSort
cd BenchmarkSort
dotnet add package BenchmarkDotNet
```
First, the initial setup. One note here: I decided to use a complex type `Foo` for my sorting benchmark. I could have used `int`, but .NET has highly optimized methods for sorting native types like `int`. To level the playing field a bit, I wanted to take this out of the equation.
```fsharp
module Program
```
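The original listing is truncated here; below is a minimal sketch of the setup, assuming `Foo` is a simple record wrapping an `int` (the field name `Bar` and the `makeList` helper are my own, not necessarily the post's exact code):

```fsharp
// A simple record type; records get structural comparison for free,
// so sorting orders Foos by Bar without any extra code.
type Foo = { Bar: int }

// Helper to build a random list of Foos (fixed seed for repeatability).
let makeList (size: int) =
    let rng = System.Random(42)
    List.init size (fun _ -> { Bar = rng.Next() })
```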
Time to create the test functions. The comparison targets will be .NET's built-in `List.sort`, then a hand-written `QuickSort`, and `BubbleSort`.
```fsharp
let listSort (l: Foo list) = List.sort l
```
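The hand-written sorts are missing from the capture; implementations along these lines would fit (these are my own sketches, with `Foo` repeated so the snippet stands alone):

```fsharp
type Foo = { Bar: int }

// Naive quicksort: take the head as the pivot, partition the rest,
// and recurse on each side.
let rec quickSort (l: Foo list) =
    match l with
    | [] -> []
    | pivot :: rest ->
        let smaller, larger = List.partition (fun x -> x.Bar < pivot.Bar) rest
        quickSort smaller @ (pivot :: quickSort larger)

// Bubble sort over a list: swap adjacent out-of-order pairs in a pass,
// and repeat passes until one completes with no swaps.
let bubbleSort (l: Foo list) =
    let rec pass acc swapped = function
        | a :: b :: rest when a.Bar > b.Bar -> pass (b :: acc) true (a :: rest)
        | a :: rest -> pass (a :: acc) swapped rest
        | [] -> List.rev acc, swapped
    let rec loop xs =
        match pass [] false xs with
        | result, false -> result
        | result, true -> loop result
    loop l
```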
Now it is time to set up the benchmarking methods. First, I make a type `SortComparison`. I have attached a `MemoryDiagnoser` attribute so that I'll get GC statistics back from the benchmarking run. The sorting methods will be tested against different list sizes (10, 1000, and 10000). This is defined in `ListSize`, where the `Params` attribute defines what BenchmarkDotNet should use for parameterization during the tests. Next, it is time to define what will be compared. To do this, there are member functions marked with the `Benchmark` attribute. That's all there is to setting up the tests.
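The benchmark type itself did not survive the page capture; here is a sketch matching the description above (a `MemoryDiagnoser`, a `Params`-driven `ListSize` of 10/1000/10000, and one `Benchmark` member per sort). The member names, the mutable `data` field, and the `GlobalSetup` step are my assumptions:

```fsharp
open BenchmarkDotNet.Attributes

[<MemoryDiagnoser>]
type SortComparison () =
    let mutable data: Foo list = []

    // BenchmarkDotNet runs every benchmark once per value listed here.
    [<Params(10, 1000, 10000)>]
    member val ListSize = 0 with get, set

    // Rebuild the input list for the current ListSize before measuring.
    [<GlobalSetup>]
    member this.Setup () =
        data <- makeList this.ListSize

    [<Benchmark>]
    member _.BuiltIn () = listSort data

    [<Benchmark>]
    member _.QuickSort () = quickSort data

    [<Benchmark>]
    member _.BubbleSort () = bubbleSort data
```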
In Main, all that is needed is a simple runner.
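The runner code is also missing; a minimal entry point using `BenchmarkRunner` would look something like this:

```fsharp
open BenchmarkDotNet.Running

[<EntryPoint>]
let main _ =
    // Runs every [<Benchmark>] member of SortComparison, for every
    // ListSize parameter value, and prints the summary table.
    BenchmarkRunner.Run<SortComparison>() |> ignore
    0
```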
Once everything is together, the benchmarks just need to run.
```shell
~/Benchmark(master)$ dotnet run -c release
```
Time to look at the results. The benchmark spews a ton of data, but I’ll just focus on the final results here.
The test results aren't too surprising. .NET's built-in sort is more efficient for large lists, although QuickSort holds its own as long as the list isn't too large. Both are faster than BubbleSort. With the GC stats, we can also see where additional GCs start to hinder some of the algorithms.
This is great; now it's time to make it a bit more advanced. Multiple benchmarks can be placed and run in the same file. Here I add `FakeComparison` along with a selector for choosing which benchmark to run when the application starts. This is helpful when you want to keep different sets of benchmarking tests.
```fsharp
type FakeComparison () =
    [<Benchmark>]
    member _.Fake () = System.Threading.Thread.Sleep 1
```
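The selector itself is gone from the capture. BenchmarkDotNet's `BenchmarkSwitcher` provides exactly this behavior; a sketch of an entry point using it (replacing the single-type runner shown earlier, and assuming the `SortComparison` and `FakeComparison` types are in scope):

```fsharp
open BenchmarkDotNet.Running

[<EntryPoint>]
let main argv =
    // BenchmarkSwitcher prompts on the console for which benchmark type
    // to run when more than one is registered (or accepts a --filter arg).
    BenchmarkSwitcher
        .FromTypes([| typeof<SortComparison>; typeof<FakeComparison> |])
        .Run(argv)
    |> ignore
    0
```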
Now, when running, a prompt is provided.
```shell
~/Benchmark(master)$ dotnet run -c release
```
There is one more aspect of reporting, and that is the final results. What I've shown has been part of the console output, but there is more. A `BenchmarkDotNet.Artifacts` directory contains a detailed run log. It also contains specially formatted results: CSV, HTML, and GitHub markdown. All of these are very useful for more advanced reporting, or for simply dropping into a repo.
This provides the basis to explore BenchmarkDotNet in your next performance comparison endeavor. Be sure to check out the BenchmarkDotNet site for additional documentation. Until next time.