to run workflows in parallel.

Create a configuration file from a template:

```bash
plantcv-run-workflow --template my_config.json
```

*class* **plantcv.parallel.WorkflowConfig**
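
The same template can also be written from Python via the `WorkflowConfig` class. A minimal sketch, assuming a
`save_config` method that serializes the configuration (not shown in this excerpt):

```python
# Minimal sketch: generate a configuration template from Python instead of
# the command line. Assumes WorkflowConfig exposes a save_config method;
# intended to be equivalent to `plantcv-run-workflow --template my_config.json`.
from plantcv.parallel import WorkflowConfig

config = WorkflowConfig()
config.save_config(config_file="my_config.json")
```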

After defining the cluster, parameters are used to define the size of and request resources for the cluster
environment. These settings are defined in the `cluster_config` parameter. We define the following parameters by
default (a combined sketch follows the list):

* **n_workers**: (int, required, default = 1): the number of workers/slots to request from the cluster. Because we
generally use 1 CPU per image analysis workflow, this is effectively the maximum number of concurrently running
workflows.

* **cores**: (int, required, default = 1): the number of compute cores per workflow. This should be left as 1 unless a
workflow is designed to use multiple CPUs/cores/threads.

* **memory**: (str, required, default = "1GB"): the amount of memory/RAM used per workflow. Can be set as a number plus
units (KB, MB, GB, etc.).

* **disk**: (str, required, default = "1GB"): the amount of disk space used per workflow. Can be set as a number plus
units (KB, MB, GB, etc.).

* **log_directory**: (str, optional, default = `None`): directory where worker logs are stored. Can be set to a path or
an environment variable.

* **local_directory**: (str, optional, default = `None`): Dask working directory location. Can be set to a path or
an environment variable.

* **job_extra_directives**: (dict, optional, default = `None`): extra parameters sent to the scheduler. Specified as a
dictionary of key-value pairs (e.g. `{"getenv": "true"}`).
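
Putting the parameters together, a hedged sketch of a complete `cluster_config` (assuming a `cluster` attribute that
names the cluster type and `HTCondorCluster` as one of the supported types; all values are illustrative):

```python
# Hypothetical sketch: request 16 single-core workers, each with 2GB of RAM
# and 2GB of disk, and pass one extra directive to the scheduler. Key names
# follow the parameter list above; values and the cluster type name are
# illustrative assumptions, not taken from this excerpt.
from plantcv.parallel import WorkflowConfig

config = WorkflowConfig()
config.cluster = "HTCondorCluster"  # assumed cluster type name
config.cluster_config = {
    "n_workers": 16,                # max concurrently running workflows
    "cores": 1,                     # compute cores per workflow
    "memory": "2GB",                # RAM per workflow
    "disk": "2GB",                  # disk space per workflow
    "log_directory": "$HOME/logs",  # path or environment variable
    "local_directory": "/tmp",      # Dask working directory
    "job_extra_directives": {"getenv": "true"},  # extra scheduler parameters
}
```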

!!! note
    `n_workers` is the only parameter used by `LocalCluster`; all others are currently ignored. `n_workers`, `cores`,
    `memory`, and `disk` are required by the other cluster types. All other parameters are optional. Additional
    parameters defined in the [dask-jobqueue API](https://jobqueue.dask.org/en/latest/api.html) can be supplied.
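
As an illustration of the note above, scheduler-specific options can ride along as extra keys. A hedged sketch for
SLURM, assuming `SLURMCluster` is a supported cluster type and that extra keys are passed through to the
corresponding `dask_jobqueue` class:

```python
# Hypothetical sketch: `queue` and `walltime` are not in the default list
# above, but both are accepted by dask_jobqueue.SLURMCluster, so they can be
# supplied as additional cluster_config keys. Values are illustrative.
from plantcv.parallel import WorkflowConfig

config = WorkflowConfig()
config.cluster = "SLURMCluster"  # assumed cluster type name
config.cluster_config = {
    "n_workers": 8,
    "cores": 1,
    "memory": "1GB",
    "disk": "1GB",
    "queue": "normal",       # SLURM partition to submit jobs to
    "walltime": "01:00:00",  # per-worker wall-clock limit
}
```
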
### Example