I'm comparing the Contains method of 3 .NET HashSets, using an N param ranging from 10,000 Contains calls to 100,000 calls (my Params increment by 1,000, so 91 params in total). I then graph the results and notice large spikes in the graph. I've run this multiple times and see large spikes every time, along with some smaller ones. There should never be spikes (a point that is higher while the next point is lower) in the graph, because for every N param I'm doing the same work as the prior N plus a little more. Looking at the .log file, the WorkloadWarmup mean times look fine; it is only the WorkloadActual times that show the spike.
Below is the part of the log file that shows the spike. It happens when calling Contains on the C5 HashSet at N=41,000, but a similar spike has occurred for the other 2 HashSets I'm testing at slightly different N values. Something seems to happen between the warmup and actual phases that causes this.
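For reference, a minimal sketch of the benchmark shape described above (class, field, and method names here are hypothetical; the actual source is in the attached project, and the C5 and third HashSet benchmarks are omitted for brevity):

```csharp
using System.Collections.Generic;
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class ContainsBenchmark
{
    // 10,000 .. 100,000 in steps of 1,000 => 91 parameter values
    public static IEnumerable<int> NValues =>
        Enumerable.Range(0, 91).Select(i => 10_000 + i * 1_000);

    [ParamsSource(nameof(NValues))]
    public int N;

    private HashSet<int> set;

    [GlobalSetup]
    public void Setup()
    {
        // Populate the set once per parameter value, outside the measured code.
        set = new HashSet<int>(Enumerable.Range(0, N));
    }

    [Benchmark]
    public int DotNet_Contains()
    {
        int found = 0;
        for (int i = 0; i < N; i++)
            if (set.Contains(i)) found++;
        return found; // consume the result so the loop isn't optimized away
    }
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<ContainsBenchmark>();
}
```

Each N does strictly more Contains calls than the previous one, which is why the per-param means should grow monotonically rather than spike.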
@Motvin thank you very much for the report! Could you share the source code of your benchmark? Do you observe this problem each time or only sometimes?
I've observed the problem, to some degree, every time I do a benchmark run with 91 or so N params. The severity of the spikes in my graphs differs each run, but the graphs seem to start out with smooth lines, and the spikiness begins maybe 40% of the way through my test runs.
I included an Excel file in my solution folder with graphs that show the spikes.
I first thought this was something I was doing wrong, and maybe it still is, but looking at the .log file I see the WorkloadWarmup times seem normal and only the WorkloadActual times are elevated. I'm wondering whether something happens in the Actual Run phase of BenchmarkDotNet that changes the CPU caches significantly. The things I'm testing can be very sensitive to CPU cache changes.
Attached below is my zipped project folder with just the basic code that shows the problem. I removed the BenchmarkDotNet NuGet package files because they are rather large, but I'm using the latest version (0.11.4, released Feb. 15, 2019?).
// **************************
// Benchmark: MinMaxIntRangeContains10_000to100_000.C5_Contains: RyuJitX64(Jit=RyuJit, Platform=X64) [N=41000]
// *** Execute ***
// Launch: 1 / 1
// Execute: E:\Proj\FastHashSet\HashSetBench\bin\Release\c96c584c-7da8-41d1-b399-10ff677c388d.exe --benchmarkName "HashSetBench.MinMaxIntRangeContains10_000to100_000.C5_Contains(N: 41000)" --job "RyuJitX64" --benchmarkId 95 in
// BeforeAnythingElse
// Benchmark Process Environment Information:
// Runtime=.NET Framework 4.7.2 (CLR 4.0.30319.42000), 64bit RyuJIT-v4.7.3362.0
// GC=Concurrent Workstation
// Job: RyuJitX64(Jit=RyuJit, Platform=X64)
OverheadJitting 1: 1 op, 203913.14 ns, 203.9131 us/op
WorkloadJitting 1: 1 op, 1102405.40 ns, 1.1024 ms/op
OverheadJitting 2: 16 op, 180363.39 ns, 11.2727 us/op
WorkloadJitting 2: 16 op, 13302838.71 ns, 831.4274 us/op
WorkloadPilot 1: 16 op, 20696629.34 ns, 1.2935 ms/op
WorkloadPilot 2: 32 op, 41250574.89 ns, 1.2891 ms/op
WorkloadPilot 3: 64 op, 82138206.56 ns, 1.2834 ms/op
WorkloadPilot 4: 128 op, 164356482.28 ns, 1.2840 ms/op
WorkloadPilot 5: 256 op, 328456687.85 ns, 1.2830 ms/op
WorkloadPilot 6: 512 op, 657090691.48 ns, 1.2834 ms/op
OverheadWarmup 1: 512 op, 1662.34 ns, 3.2467 ns/op
OverheadWarmup 2: 512 op, 831.17 ns, 1.6234 ns/op
OverheadWarmup 3: 512 op, 831.17 ns, 1.6234 ns/op
OverheadWarmup 4: 512 op, 831.17 ns, 1.6234 ns/op
OverheadWarmup 5: 512 op, 831.17 ns, 1.6234 ns/op
OverheadWarmup 6: 512 op, 831.17 ns, 1.6234 ns/op
OverheadActual 1: 512 op, 831.17 ns, 1.6234 ns/op
OverheadActual 2: 512 op, 831.17 ns, 1.6234 ns/op
OverheadActual 3: 512 op, 831.17 ns, 1.6234 ns/op
OverheadActual 4: 512 op, 1108.22 ns, 2.1645 ns/op
OverheadActual 5: 512 op, 831.17 ns, 1.6234 ns/op
OverheadActual 6: 512 op, 831.17 ns, 1.6234 ns/op
OverheadActual 7: 512 op, 831.17 ns, 1.6234 ns/op
OverheadActual 8: 512 op, 831.17 ns, 1.6234 ns/op
OverheadActual 9: 512 op, 831.17 ns, 1.6234 ns/op
OverheadActual 10: 512 op, 831.17 ns, 1.6234 ns/op
OverheadActual 11: 512 op, 831.17 ns, 1.6234 ns/op
OverheadActual 12: 512 op, 831.17 ns, 1.6234 ns/op
OverheadActual 13: 512 op, 831.17 ns, 1.6234 ns/op
OverheadActual 14: 512 op, 831.17 ns, 1.6234 ns/op
OverheadActual 15: 512 op, 831.17 ns, 1.6234 ns/op
WorkloadWarmup 1: 512 op, 414978749.81 ns, 810.5054 us/op
WorkloadWarmup 2: 512 op, 414891754.26 ns, 810.3355 us/op
WorkloadWarmup 3: 512 op, 414943840.77 ns, 810.4372 us/op
WorkloadWarmup 4: 512 op, 414920291.02 ns, 810.3912 us/op
WorkloadWarmup 5: 512 op, 414833018.41 ns, 810.2207 us/op
WorkloadWarmup 6: 512 op, 415077935.82 ns, 810.6991 us/op
WorkloadWarmup 7: 512 op, 414927217.42 ns, 810.4047 us/op
// BeforeActualRun
WorkloadActual 1: 512 op, 604296028.68 ns, 1.1803 ms/op
WorkloadActual 2: 512 op, 603948323.53 ns, 1.1796 ms/op
WorkloadActual 3: 512 op, 603905933.98 ns, 1.1795 ms/op
WorkloadActual 4: 512 op, 603851908.08 ns, 1.1794 ms/op
WorkloadActual 5: 512 op, 604080202.14 ns, 1.1798 ms/op
WorkloadActual 6: 512 op, 604648720.83 ns, 1.1810 ms/op
WorkloadActual 7: 512 op, 604043353.71 ns, 1.1798 ms/op
WorkloadActual 8: 512 op, 604067734.62 ns, 1.1798 ms/op
WorkloadActual 9: 512 op, 604017310.45 ns, 1.1797 ms/op
WorkloadActual 10: 512 op, 603970488.01 ns, 1.1796 ms/op
WorkloadActual 11: 512 op, 603755492.63 ns, 1.1792 ms/op
WorkloadActual 12: 512 op, 603723354.15 ns, 1.1791 ms/op
WorkloadActual 13: 512 op, 603893743.52 ns, 1.1795 ms/op
WorkloadActual 14: 512 op, 604155284.29 ns, 1.1800 ms/op
WorkloadActual 15: 512 op, 603900669.92 ns, 1.1795 ms/op
// AfterActualRun
WorkloadResult 1: 512 op, 604295179.04 ns, 1.1803 ms/op
WorkloadResult 2: 512 op, 603947473.90 ns, 1.1796 ms/op
WorkloadResult 3: 512 op, 603905084.35 ns, 1.1795 ms/op
WorkloadResult 4: 512 op, 603851058.45 ns, 1.1794 ms/op
WorkloadResult 5: 512 op, 604079352.50 ns, 1.1798 ms/op
WorkloadResult 6: 512 op, 604042504.07 ns, 1.1798 ms/op
WorkloadResult 7: 512 op, 604066884.99 ns, 1.1798 ms/op
WorkloadResult 8: 512 op, 604016460.81 ns, 1.1797 ms/op
WorkloadResult 9: 512 op, 603969638.37 ns, 1.1796 ms/op
WorkloadResult 10: 512 op, 603754643.00 ns, 1.1792 ms/op
WorkloadResult 11: 512 op, 603722504.51 ns, 1.1791 ms/op
WorkloadResult 12: 512 op, 603892893.89 ns, 1.1795 ms/op
WorkloadResult 13: 512 op, 604154434.65 ns, 1.1800 ms/op
WorkloadResult 14: 512 op, 603899820.28 ns, 1.1795 ms/op
GC: 0 0 0 0 0
// AfterAll
Mean = 1.1796 ms, StdErr = 0.0001 ms (0.01%); N = 14, StdDev = 0.0003 ms
Min = 1.1791 ms, Q1 = 1.1795 ms, Median = 1.1796 ms, Q3 = 1.1798 ms, Max = 1.1803 ms
IQR = 0.0003 ms, LowerFence = 1.1790 ms, UpperFence = 1.1803 ms
ConfidenceInterval = [1.1793 ms; 1.1800 ms] (CI 99.9%), Margin = 0.0003 ms (0.03% of Mean)
Skewness = 0.28, Kurtosis = 2.41, MValue = 2