The comparison platforms differ, so the results cannot be compared directly #3
Comments
I have the same concern.
Thanks. We follow the recent advances (e.g., FasterSeg, DF1-Seg, BiSeNetV2, and STDC) and utilize "1080Ti+TensorRT" to measure the inference speeds of LPS-Net-S/M/L on the Cityscapes (Table 5), CamVid (Table 6), and BDD100K (Table 7) datasets. The data precision is FP32 (as stated in "Measure the Latency" of this repository and Section 4.2 of the paper). DDRNet measures its inference speed on a 2080Ti, which is more advanced than a 1080Ti. To address your concern, we additionally evaluated the inference speed of DDRNet on the Cityscapes dataset with 1080Ti+TensorRT. DDRNet-23-slim achieves 115.2 FPS (1080Ti+TensorRT), which is slightly faster than its reported 101.6 FPS (2080Ti+PyTorch), but still slower than LPS-Net-L (151.8 FPS).
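For reference, the FPS numbers above come from timing repeated single-image inference. A minimal sketch of such a timing loop is below; the `infer` callable is a hypothetical stand-in for a TensorRT execution-context call, and the warm-up/run counts are illustrative assumptions, not the paper's exact protocol.

```python
import time

def measure_fps(infer, n_warmup=10, n_runs=100):
    """Return average frames per second of a single-image inference callable.

    Warm-up iterations are run first and excluded from timing, as is
    common benchmarking practice; only the timed loop contributes to FPS.
    """
    for _ in range(n_warmup):
        infer()
    start = time.perf_counter()
    for _ in range(n_runs):
        infer()
    elapsed = time.perf_counter() - start
    return n_runs / elapsed

# Usage with a placeholder workload standing in for a real model call.
fps = measure_fps(lambda: sum(range(1000)))
print(f"{fps:.1f} FPS")
```

Note that FPS measured this way still depends on GPU, runtime (TensorRT vs. PyTorch), and numeric precision, which is why the comparison above re-measures DDRNet under the same 1080Ti+TensorRT+FP32 setting.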
Thanks for your reply!
The results in this paper were measured on a 1080Ti with TensorRT, and the paper does not state what precision TensorRT used, while DDRNet and others were run in PyTorch, so they cannot be compared directly.