`configs/instaboost/README.md` (+1 -1)
@@ -32,7 +32,7 @@ InstaBoost have been already integrated in the data pipeline, thus all you need
 ## Results and Models

-- All models were trained on `coco_2017_train` and tested on `coco_2017_val` for conveinience of evaluation and comparison. In the paper, the results are obtained from `test-dev`.
+- All models were trained on `coco_2017_train` and tested on `coco_2017_val` for convenience of evaluation and comparison. In the paper, the results are obtained from `test-dev`.
 - To balance accuracy and training time when using InstaBoost, models released in this page are all trained for 48 Epochs. Other training and testing configs strictly follow the original framework.
 - For results and models in MMDetection V1.x, please refer to [Instaboost](https://github.com/GothicAi/Instaboost).
`configs/scnet/README.md` (+1 -1)
@@ -48,4 +48,4 @@ The results on COCO 2017val are shown in the below table. (results on test-dev a
 ### Notes

 - Training hyper-parameters are identical to those of [HTC](https://github.com/open-mmlab/mmdetection/tree/master/configs/htc).
-- TTA means Test Time Augmentation, which applies horizonal flip and multi-scale testing. Refer to [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/scnet/scnet_r50_fpn_1x_coco.py).
+- TTA means Test Time Augmentation, which applies horizontal flip and multi-scale testing. Refer to [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/scnet/scnet_r50_fpn_1x_coco.py).
`docs/3_exist_data_new_model.md` (+1 -1)
@@ -1,6 +1,6 @@
 # 3: Train with customized models and standard datasets

-In this note, you will know how to train, test and inference your own customized models under standard datasets. We use the cityscapes dataset to train a customized Cascade Mask R-CNN R50 model as an example to demonstrate the whole process, which using [`AugFPN`](https://github.com/Gus-Guo/AugFPN) to replace the defalut`FPN` as neck, and add `Rotate` or `Translate` as training-time auto augmentation.
+In this note, you will learn how to train, test, and run inference with your own customized models on standard datasets. We use the cityscapes dataset to train a customized Cascade Mask R-CNN R50 model as an example to demonstrate the whole process, which uses [`AugFPN`](https://github.com/Gus-Guo/AugFPN) to replace the default `FPN` as the neck and adds `Rotate` or `Translate` as training-time auto augmentation.
`docs/faq.md` (+2 -2)
@@ -47,7 +47,7 @@ We list some common troubles faced by many users and their corresponding solutio
 2. You may also need to check the compatibility between the `setuptools`, `Cython`, and `PyTorch` in your environment.

 - "Segmentation fault".

-1. Check you GCC version and use GCC 5.4. This usually caused by the incompatibility between PyTorch and the environment (e.g., GCC < 4.9 for PyTorch). We also recommand the users to avoid using GCC 5.5 because many feedbacks report that GCC 5.5 will cause "segmentation fault" and simply changing it to GCC 5.4 could solve the problem.
+1. Check your GCC version and use GCC 5.4. This is usually caused by incompatibility between PyTorch and the environment (e.g., GCC < 4.9 for PyTorch). We also recommend avoiding GCC 5.5, because much feedback reports that GCC 5.5 causes "segmentation fault" and simply changing to GCC 5.4 could solve the problem.

 2. Check whether PyTorch is correctly installed and could use CUDA op, e.g. type the following command in your terminal.
@@ -73,7 +73,7 @@ We list some common troubles faced by many users and their corresponding solutio
 1. Check if the dataset annotations are valid: zero-size bounding boxes will cause the regression loss to be Nan due to the commonly used transformation for box regression. Some small size (width or height are smaller than 1) boxes will also cause this problem after data augmentation (e.g., instaboost). So check the data and try to filter out those zero-size boxes and skip some risky augmentations on the small-size boxes when you face the problem.
 2. Reduce the learning rate: the learning rate might be too large due to some reasons, e.g., change of batch size. You can rescale them to the value that could stably train the model.
 3. Extend the warmup iterations: some models are sensitive to the learning rate at the start of the training. You can extend the warmup iterations, e.g., change the `warmup_iters` from 500 to 1000 or 2000.
-4. Add gradient clipping: some models requires gradient clipping to stablize the training process. The default of `grad_clip` is `None`, you can add gradient clippint to avoid gradients that are too large, i.e., set`optimizer_config=dict(_delete_=True, grad_clip=dict(max_norm=35, norm_type=2))`in your config file. If your config does not inherits from any basic config that contains `optimizer_config=dict(grad_clip=None)`, you can simply add `optimizer_config=dict(grad_clip=dict(max_norm=35, norm_type=2))`.
+4. Add gradient clipping: some models require gradient clipping to stabilize the training process. The default of `grad_clip` is `None`; you can add gradient clipping to avoid gradients that are too large, i.e., set `optimizer_config=dict(_delete_=True, grad_clip=dict(max_norm=35, norm_type=2))` in your config file. If your config does not inherit from any basic config that contains `optimizer_config=dict(grad_clip=None)`, you can simply add `optimizer_config=dict(grad_clip=dict(max_norm=35, norm_type=2))`.

 - "GPU out of memory"

 1. There are some scenarios when there are large amount of ground truth boxes, which may cause OOM during target assignment. You can set `gpu_assign_thr=N` in the config of assigner thus the assigner will calculate box overlaps through CPU when there are more than N GT boxes.
 2. Set `with_cp=True` in the backbone. This uses the sublinear strategy in PyTorch to reduce GPU memory cost in the backbone.
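The first NaN-loss point above, filtering out degenerate boxes, can be sketched as a short helper. The function name and the `[x1, y1, x2, y2]` box layout are illustrative assumptions, not part of MMDetection:

```python
def filter_degenerate_boxes(boxes, min_size=1.0):
    """Drop boxes whose width or height falls below min_size pixels.

    Zero- or near-zero-size boxes make the usual log(w)/log(h) box
    regression targets blow up to inf/NaN, which is one cause of the
    NaN losses described above. Boxes are [x1, y1, x2, y2].
    """
    return [b for b in boxes
            if (b[2] - b[0]) >= min_size and (b[3] - b[1]) >= min_size]

# Example: the second box has zero width and is removed.
clean = filter_degenerate_boxes([[0, 0, 10, 10], [5, 5, 5, 9]])
```

In a real pipeline this kind of check would run once over the annotation file before training, rather than inside the data loader.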
`docs/robustness_benchmarking.md` (+1 -1)
@@ -34,7 +34,7 @@ pip install imagecorruptions
 ```

 Compared to imagenet-c a few changes had to be made to handle images of arbitrary size and greyscale images.
-We also modfied the 'motion blur' and 'snow' corruptions to remove dependency from a linux specific library,
+We also modified the 'motion blur' and 'snow' corruptions to remove the dependency on a Linux-specific library,
 which would have to be installed separately otherwise. For details please refer to the [imagecorruptions repository](https://github.com/bethgelab/imagecorruptions).
`docs/tutorials/config.md` (+2 -2)
@@ -297,7 +297,7 @@ test_pipeline = [
         std=[58.395, 57.12, 57.375],
         to_rgb=True),
     dict(
-        type='Pad',  # Padding config to pad images divisable by 32.
+        type='Pad',  # Padding config to pad images divisible by 32.
         size_divisor=32),
     dict(
         type='ImageToTensor',  # convert image to tensor
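The `Pad` step above rounds each spatial dimension up to the next multiple of `size_divisor`; a minimal sketch of that arithmetic (the helper name is illustrative, not the MMDetection implementation):

```python
import math

def pad_to_divisor(height, width, size_divisor=32):
    """Return the (height, width) a Pad-style step would produce: each
    dimension rounded up to the next multiple of size_divisor."""
    return (math.ceil(height / size_divisor) * size_divisor,
            math.ceil(width / size_divisor) * size_divisor)

# An 800x1333 input is padded to 800x1344 (1344 = 42 * 32).
padded = pad_to_divisor(800, 1333)
```

Such padding lets FPN-style backbones downsample the image several times by 2 without fractional feature-map sizes.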
@@ -387,7 +387,7 @@ evaluation = dict( # The config to build the evaluation hook, refer to https://
     metric=['bbox', 'segm'])  # Metrics used during evaluation
 optimizer = dict(  # Config used to build optimizer, support all the optimizers in PyTorch whose arguments are also the same as those in PyTorch
     type='SGD',  # Type of optimizers, refer to https://github.com/open-mmlab/mmdetection/blob/master/mmdet/core/optimizer/default_constructor.py#L13 for more details
-    lr=0.02,  # Learning rate of optimizers, see detail usages of the parameters in the documentaion of PyTorch
+    lr=0.02,  # Learning rate of optimizers, see detail usages of the parameters in the documentation of PyTorch
     momentum=0.9,  # Momentum
     weight_decay=0.0001)  # Weight decay of SGD
 optimizer_config = dict(  # Config used to build the optimizer hook, refer to https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/optimizer.py#L8 for implementation details.
`docs/tutorials/customize_dataset.md` (+2 -2)
@@ -45,7 +45,7 @@ The annotation json files in COCO format has the following necessary keys:
 There are three necessary keys in the json file:

-- `images`: contains a list of images with their informations like `file_name`, `height`, `width`, and `id`.
+- `images`: contains a list of images with their information like `file_name`, `height`, `width`, and `id`.
 - `annotations`: contains the list of instance annotations.
 - `categories`: contains the list of categories names and their ID.
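The three necessary keys above fit together in a tiny COCO-format file; a minimal illustrative example (all ids, names, and values are made up):

```python
import json

# Smallest COCO-style annotation structure: one image, one instance,
# one category, wired together by image_id and category_id.
tiny_coco = {
    "images": [
        {"id": 1, "file_name": "000001.jpg", "height": 480, "width": 640},
    ],
    "annotations": [
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [100, 120, 50, 80],  # COCO boxes are [x, y, width, height]
         "area": 50 * 80, "iscrowd": 0},
    ],
    "categories": [
        {"id": 1, "name": "person"},
    ],
}

text = json.dumps(tiny_coco)  # what the annotation .json file would contain
```

Note that COCO boxes use `[x, y, width, height]`, not the `[x1, y1, x2, y2]` corner format used elsewhere.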
@@ -157,7 +157,7 @@ We use this way to support CityScapes dataset. The script is in [cityscapes.py](
 **Note**

 1. For instance segmentation datasets, **MMDetection only supports evaluating mask AP of dataset in COCO format for now**.
-2. It is recommanded to convert the data offline before training, thus you can still use `CocoDataset` and only need to modify the path of annotations and the training classes.
+2. It is recommended to convert the data offline before training, so that you can still use `CocoDataset` and only need to modify the path of annotations and the training classes.