This should take less than 5 minutes to run. The output will be saved in `outputs/optimization/fixed-BigBiGAN-NAME/DATE/`, with the final checkpoint in `latest.pth`.
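
If you want to inspect the resulting checkpoint, here is a minimal sketch, assuming `latest.pth` is a standard `torch.save` artifact (the exact keys depend on the training script):

```python
# Minimal checkpoint inspection; assumes latest.pth was written with torch.save.
import torch

ckpt = torch.load(
    "outputs/optimization/fixed-BigBiGAN-NAME/DATE/latest.pth",
    map_location="cpu",  # no GPU needed just to inspect the file
)
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))  # e.g. model weights, optimizer state, step count
```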
**Segmentation with precomputed generations**
```bash
data_gen.save_size=1000000 \
data_gen.kwargs.batch_size=1 \
data_gen.kwargs.generation_batch_size=128
```
This will generate 1 million image-label pairs and save them to `YOUR_OUTPUT_DIR/images`. Note that `YOUR_OUTPUT_DIR` should be an _absolute path_, not a relative one, because Hydra changes the working directory. You may also want to tune the `generation_batch_size` to maximize GPU utilization on your machine. It takes around 3-4 hours to generate 1 million images on a single V100 GPU.
Once you have generated data, you can train a segmentation model:
```bash
name=NAME \
data_gen=saved \
data_gen.data.root="YOUR_OUTPUT_DIR_FROM_ABOVE"
```
It takes around 3 hours on a single GPU to complete 18000 iterations, by which point the model has converged (in fact, you can probably get away with fewer steps; I would guess around 5000).
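
For reference, here is a hypothetical sketch of what consuming such a directory of image-label pairs could look like in PyTorch. The real loading logic lives behind the `data_gen=saved` config, and the actual filenames and format are determined by the generation step; the paired-PNG naming scheme below is an assumption for illustration only:

```python
# Hypothetical loader for precomputed (image, mask) pairs. The file layout
# assumed here (000001_image.png / 000001_mask.png) is an ASSUMPTION, not
# this repo's actual format.
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset


class PrecomputedPairs(Dataset):
    """Loads generated (image, mask) pairs from a directory."""

    def __init__(self, root: str):
        self.image_paths = sorted(Path(root).glob("*_image.png"))

    def __len__(self) -> int:
        return len(self.image_paths)

    def __getitem__(self, idx: int):
        image_path = self.image_paths[idx]
        mask_path = image_path.with_name(image_path.name.replace("_image", "_mask"))
        image = Image.open(image_path).convert("RGB")
        mask = Image.open(mask_path).convert("L")  # single-channel label map
        return image, mask
```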