Commit

add benchmarks to readme
ibraheemdev committed Jul 8, 2024
1 parent e36a87c commit 3f36cf4
Showing 10 changed files with 1,370 additions and 2 deletions.
32 changes: 32 additions & 0 deletions BENCHMARKS.md
@@ -0,0 +1,32 @@
# Benchmarks

*As always, benchmarks should be taken with a grain of salt. Always measure for your workload.*

Below are the benchmark results from the [`conc-map-bench`](https://github.com/xacrimon/conc-map-bench) benchmarking harness under varying workloads. All benchmarks were run on a Ryzen 3700X (16 threads) with [`ahash`](https://github.com/tkaitchuck/aHash) and the [`mimalloc`](https://github.com/microsoft/mimalloc) allocator.
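
To reproduce a comparable setup in your own crate, the allocator and hasher can be configured roughly as follows. This is a sketch only: the `#[global_allocator]` attribute for `mimalloc` is standard, while the `builder().hasher(..)` call is an assumption about `papaya`'s builder API; the `conc-map-bench` harness wires all of this up for you.

```rust
// Sketch only: mimalloc as the global allocator and ahash as the map's
// hasher. The `builder().hasher(..)` call is assumed, not taken verbatim
// from the papaya docs.
use mimalloc::MiMalloc;

#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;

fn main() {
    let map: papaya::HashMap<u64, u64, ahash::RandomState> = papaya::HashMap::builder()
        .hasher(ahash::RandomState::default())
        .build();

    map.pin().insert(42, 7);
    assert_eq!(map.pin().get(&42), Some(&7));
}
```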

### Read Heavy

| | |
|:-------------------------:|:-------------------------:|
| ![](assets/ReadHeavy.ahash.throughput.svg) | ![](assets/ReadHeavy.ahash.latency.svg) |

### Exchange

| | |
|:-------------------------:|:-------------------------:|
| ![](assets/Exchange.ahash.throughput.svg) | ![](assets/Exchange.ahash.latency.svg) |

### Rapid Grow

| | |
|:-------------------------:|:-------------------------:|
| ![](assets/RapidGrow.ahash.throughput.svg) | ![](assets/RapidGrow.ahash.latency.svg) |

# Discussion

`papaya` is optimized for read-heavy workloads and outperforms all competitors in the read-heavy benchmark. It falls short in update- and write-heavy workloads due to allocator pressure, which is expected. However, an important guarantee of `papaya` is that reads *never* block under any circumstances. This is crucial for providing consistent read latency regardless of write concurrency.
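
As a rough illustration of that access pattern, here is a minimal sketch using `papaya`'s pinning API: one thread keeps inserting (and therefore resizing) while another reads. The non-blocking behavior of `get` is a property of the library itself; the code below only demonstrates the usage.

```rust
// A minimal sketch of the access pattern described above: one thread writes
// (triggering resizes) while another reads concurrently.
use std::sync::Arc;
use std::thread;

use papaya::HashMap;

fn main() {
    let map = Arc::new(HashMap::new());

    let writer = {
        let map = Arc::clone(&map);
        thread::spawn(move || {
            // Keep inserting to force the table to grow and resize.
            for i in 0..100_000u64 {
                map.pin().insert(i, i);
            }
        })
    };

    let reader = {
        let map = Arc::clone(&map);
        thread::spawn(move || {
            for i in 0..100_000u64 {
                // Reads proceed even while a resize is in progress; a key
                // that has not been inserted yet simply returns `None`.
                let _ = map.pin().get(&i);
            }
        })
    };

    writer.join().unwrap();
    reader.join().unwrap();
}
```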

Additionally, `papaya` performs much better in terms of latency distribution due to incremental resizing and the lack of bucket locks. Comparing histograms of `insert` latency between `papaya` and `dashmap`, we see that `papaya` keeps tail latency orders of magnitude lower. Some tail latency is unavoidable due to the large allocations necessary to resize a hash-table, but the distribution is much more consistent (notice the scale of the y-axis).

![](assets/papaya-hist.png)
![](assets/dashmap-hist.png)
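
Incremental resizing itself is controlled through [`ResizeMode`](https://docs.rs/papaya/latest/papaya/enum.ResizeMode.html). Below is a hedged sketch that assumes the builder-style constructor from the `papaya` docs; the chunk size is purely illustrative.

```rust
// Sketch only: configure incremental resizing via the builder. The
// `ResizeMode::Incremental` chunk size below is illustrative, not a
// recommendation.
use papaya::{HashMap, ResizeMode};

fn main() {
    let map: HashMap<u64, u64> = HashMap::builder()
        .resize_mode(ResizeMode::Incremental(1024))
        .build();

    map.pin().insert(1, 1);
    assert_eq!(map.pin().get(&1), Some(&1));
}
```
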
4 changes: 2 additions & 2 deletions README.md
@@ -5,7 +5,7 @@ A fast and ergonomic concurrent hash-table that features:
- An ergonomic lock-free API — no more deadlocks!
- Powerful atomic operations.
- Seamless usage in async contexts.
- Extremely fast and scalable reads (see [benchmarks]).
- Extremely scalable low-latency reads (see [performance](#performance)).
- Predictable latency across all operations.
- Efficient memory usage, with garbage collection powered by [`seize`].

@@ -190,6 +190,6 @@ The `Guard` trait supports both local and owned guards. Note the `'guard` lifetime

`papaya` also aims to provide predictable, consistent latency across all operations. Most operations are lock-free, and those that aren't only block under rare and constrained conditions. `papaya` also features [incremental resizing]. Predictable latency is an important part of performance that doesn't often show up in benchmarks, but has significant implications for real-world usage.

[benchmarks]: TODO
[benchmarks]: ./BENCHMARKS.md
[`seize`]: https://docs.rs/seize/latest
[incremental resizing]: https://docs.rs/papaya/latest/papaya/enum.ResizeMode.html
232 changes: 232 additions & 0 deletions assets/Exchange.ahash.latency.svg