### `concurrency`

**Description:** Number of worker threads that concurrently execute benchmark transactions against the database. This parameter controls the level of parallelism during the benchmark execution phase. Increasing this value simulates more concurrent client accesses and a higher workload intensity.

**Default value:** `1`

### `run_for_sec`

**Description:** Duration of the benchmark execution phase (in seconds). This parameter defines how long the benchmark runs and submits transactions to the database.

**Default value:** `60`

### `ramp_for_sec`

**Description:** Duration of the ramp-up period before the benchmark measurement phase begins (in seconds). During this warm-up period, the system executes transactions but does not record performance metrics, which allows the system to reach a steady state before benchmark results are collected.

**Default value:** `0`
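For reference, these common parameters are typically set in the benchmark's configuration file. The sketch below assumes a TOML-style file with a `[common]` section; the section name and file layout are assumptions, so verify them against the sample configuration files shipped with the benchmark tools.

```toml
# Common benchmark parameters (defaults shown).
# The [common] section name is an assumption; check the sample
# configuration files for the exact layout.
[common]
concurrency = 1    # worker threads executing benchmark transactions
run_for_sec = 60   # measured benchmark duration in seconds
ramp_for_sec = 0   # warm-up period that is excluded from the results
```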
## Workload-specific parameters
The following sections describe the parameters available for each workload.
### `num_accounts`

**Description:** Number of bank accounts to create for the benchmark workload. This parameter determines the size of the dataset and affects the working-set size.

**Default value:** `100000`

### `load_concurrency`

**Description:** Number of parallel threads used to load the initial benchmark data into the database. This parameter controls how quickly the data-loading phase completes; increasing it can significantly reduce loading time for large datasets. This parameter is separate from the `concurrency` parameter used during benchmark execution.

**Default value:** `1`

### `load_batch_size`

**Description:** Number of accounts to insert within a single transaction during the initial data-loading phase. Larger batch sizes can improve loading performance by reducing the number of transactions, but may increase the execution time of each transaction.

**Default value:** `1`
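A minimal sketch of how the bank-account workload parameters above might be set, assuming the workload is configured in its own TOML section; the `[test_config]` section name is an assumption, so check the workload's sample configuration file for the exact name.

```toml
# Bank-account workload parameters (defaults shown).
# The [test_config] section name is an assumption.
[test_config]
num_accounts = 100000   # dataset size (number of bank accounts)
load_concurrency = 1    # threads used only during data loading
load_batch_size = 1     # accounts inserted per loading transaction
# For example, with num_accounts = 100000 and load_batch_size = 100,
# loading the data takes roughly 1,000 insert transactions.
```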
### `num_warehouses`

**Description:** Number of warehouses to create for the TPC-C benchmark workload. This value is the scale factor that determines the dataset size. Increasing this value creates a larger working set and supports enterprise-scale testing.

**Default value:** `1`

### `rate_payment`

**Description:** Percentage of Payment transactions in the transaction mix, with the remainder being New-Order transactions. For example, a value of `50` means that 50% of transactions are Payment transactions and 50% are New-Order transactions.

**Default value:** `50`

### `load_concurrency`

**Description:** Number of parallel threads used to load the initial benchmark data into the database. This parameter controls how quickly the data-loading phase completes; increasing it can significantly reduce loading time, especially for larger numbers of warehouses. This parameter is separate from the `concurrency` parameter used during benchmark execution.

**Default value:** `1`
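A minimal sketch of the TPC-C workload parameters in the same TOML form; the `[tpcc_config]` section name is an assumption, so verify it against the TPC-C sample configuration file.

```toml
# TPC-C workload parameters (defaults shown).
# The [tpcc_config] section name is an assumption.
[tpcc_config]
num_warehouses = 1    # scale factor: the dataset grows with the warehouse count
rate_payment = 50     # 50% Payment transactions, 50% New-Order transactions
load_concurrency = 1  # threads used only during data loading
```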
### `record_count`

**Description:** Number of records to create for the YCSB benchmark workload. This parameter determines the size of the dataset and affects the working-set size during benchmark execution.

**Default value:** `1000`

### `payload_size`

**Description:** Size of the payload data (in bytes) for each record. This parameter controls the amount of data stored per record and affects database storage, memory usage, and I/O characteristics.

**Default value:** `1000`

### `ops_per_tx`

**Description:** Number of read or write operations to execute within a single transaction. This parameter affects transaction size and execution time; higher values create longer-running transactions.

**Default value:** `2`

### `workload`

**Description:** YCSB workload type that defines the operation mix: **A** (50% reads, 50% read-modify-write operations), **C** (100% reads), or **F** (100% read-modify-write operations). Note that workload A in this benchmark uses read-modify-write operations instead of blind writes because ScalarDL prohibits blind writes. Each workload type simulates a different application access pattern.

**Default value:** `A`

### `load_concurrency`

**Description:** Number of parallel threads used to load the initial benchmark data into the database. This parameter controls how quickly the data-loading phase completes; increasing it can significantly reduce loading time for large datasets. This parameter is separate from the `concurrency` parameter used during benchmark execution.

**Default value:** `1`

### `load_batch_size`

**Description:** Number of records to insert within a single transaction during the initial data-loading phase. Larger batch sizes can improve loading performance by reducing the number of transactions, but may increase the execution time of each transaction.

**Default value:** `1`
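A minimal sketch of the YCSB workload parameters in the same TOML form; the `[ycsb_config]` section name is an assumption, so verify it against the YCSB sample configuration file.

```toml
# YCSB workload parameters (defaults shown).
# The [ycsb_config] section name is an assumption.
[ycsb_config]
record_count = 1000   # number of records in the dataset
payload_size = 1000   # payload bytes stored per record
ops_per_tx = 2        # operations executed in each transaction
workload = "A"        # A: 50% reads / 50% read-modify-writes; C: 100% reads; F: 100% read-modify-writes
load_concurrency = 1  # threads used only during data loading
load_batch_size = 1   # records inserted per loading transaction
```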