Commit e8ed663 (1 parent: dd53a38)

Fix bad link in docs, again

1 file changed (+1 -1 lines)


SelfplayTraining.md (+1 -1)
@@ -42,7 +42,7 @@ You may need to play with learning rates, batch sizes, and the balance between t
 
 Example instructions to start up these things (assuming you have appropriate machines set up), with some base directory $BASEDIR to hold the all the models and training data generated with a few hundred GB of disk space. The below commands assume you're running from the root of the repo and that you can run bash scripts.
 * **Selfplay engine:** `cpp/katago selfplay -output-dir $BASEDIR/selfplay -models-dir $BASEDIR/models -config cpp/configs/training/SELFPLAYCONFIG.cfg >> log.txt 2>&1 & disown`
-* Some example configs for different numbers of GPUs are: cpp/configs/training/selfplay*.cfg. See [cpp/configs/training/README.md](cpp/configs/training/selfplay/README.md) for some notes about what the configs are. You may want to copy and edit them depending on your specs - for example to change the sizes of various tables depending on how much memory you have, or to specify gpu indices if you're doing things like putting some mix of training, gatekeeper, and self-play on the same machines or GPUs instead of on separate ones. Note that the number of game threads in these configs is very large, probably far larger than the number of cores on your machine. This is intentional, as each thread only currently runs synchronously with respect to neural net queries, so a large number of parallel games is needed to take advantage of batching.
+* Some example configs for different numbers of GPUs are: cpp/configs/training/selfplay*.cfg. See [cpp/configs/training/README.md](cpp/configs/training/README.md) for some notes about what the configs are. You may want to copy and edit them depending on your specs - for example to change the sizes of various tables depending on how much memory you have, or to specify gpu indices if you're doing things like putting some mix of training, gatekeeper, and self-play on the same machines or GPUs instead of on separate ones. Note that the number of game threads in these configs is very large, probably far larger than the number of cores on your machine. This is intentional, as each thread only currently runs synchronously with respect to neural net queries, so a large number of parallel games is needed to take advantage of batching.
 * Take a look at the generated `log.txt` for any errors and/or for running stats on started games and occasional neural net query stats.
 * Edit the config to change the number of playouts used or other parameters, or to set a cap on the number of games generated after which selfplay should terminate.
 * If `models-dir` is empty, selfplay will use a random number generator instead to produce data, so selfplay is the **starting point** of setting up the full closed loop.
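The launch sequence the changed section describes can be sketched as the shell snippet below. This is a minimal sketch, not part of the commit: the `$BASEDIR` value is illustrative, and the `katago` invocation is guarded so it only runs if you actually have a compiled binary at `cpp/katago` (in a real setup you would run it unconditionally, as the doc shows).

```shell
# Base directory for all generated models and training data
# (illustrative path; needs a few hundred GB of free disk space).
BASEDIR="$HOME/katago-training"
mkdir -p "$BASEDIR/selfplay" "$BASEDIR/models"

# Launch the selfplay engine in the background, appending both stdout
# and stderr to log.txt. With an empty models-dir, selfplay bootstraps
# from random play, making it the starting point of the closed loop.
if [ -x cpp/katago ]; then
  cpp/katago selfplay \
    -output-dir "$BASEDIR/selfplay" \
    -models-dir "$BASEDIR/models" \
    -config cpp/configs/training/SELFPLAYCONFIG.cfg >> log.txt 2>&1 & disown
fi
```

Afterwards, checking `log.txt` (e.g. with `tail -f log.txt`) shows startup errors, running stats on started games, and occasional neural net query stats.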
