
The comparison platforms are different, so the results cannot be compared directly #3

Open
frangkyy opened this issue Sep 2, 2022 · 3 comments

Comments

@frangkyy

frangkyy commented Sep 2, 2022

The results in this paper are measured on a 1080Ti with TensorRT, and the paper does not state which precision TensorRT uses, while DDRNet and the other baselines are run in PyTorch, so the numbers cannot be compared directly.

@RolandGao

I have the same concern

@YihengZhang-CV
Owner

YihengZhang-CV commented Oct 5, 2022

Thanks.

We follow recent works (e.g., FasterSeg, DF1-Seg, BiSeNetV2, and STDC) and use "1080Ti + TensorRT" to measure the inference speeds of LPS-Net-S/M/L on the Cityscapes (Table 5), CamVid (Table 6), and BDD100K (Table 7) datasets. The precision is FP32 (as stated in the "Measure the Latency" section of this repository and in Section 4.2 of the paper).
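For reference, this kind of measurement follows the usual export-then-benchmark flow. Below is a minimal sketch; the placeholder network, file name, and input resolution are assumptions for illustration, not the exact script from "Measure the Latency":

```python
import torch
import torch.nn as nn

# Placeholder network standing in for LPS-Net; substitute the real model
# definition from this repository when reproducing the reported numbers.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(32, 19, 1),  # 19 Cityscapes classes
).eval().cuda()

# The input resolution here is an assumption; use the resolution reported in the paper.
dummy = torch.randn(1, 3, 1024, 2048, device="cuda")

# Export to ONNX so a TensorRT engine can be built from the network.
torch.onnx.export(model, dummy, "model.onnx", opset_version=11)

# The exported model can then be benchmarked with NVIDIA's trtexec tool,
# which builds the engine in FP32 unless a lower precision is requested:
#   trtexec --onnx=model.onnx
```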

DDRNet measures its inference speeds on a 2080Ti, which is a more powerful GPU than the 1080Ti. To address this concern, we additionally evaluate the inference speed of DDRNet on the Cityscapes dataset with 1080Ti + TensorRT. DDRNet-23-slim achieves 115.2 FPS (1080Ti + TensorRT), which is slightly faster than its reported 101.6 FPS (2080Ti + PyTorch), but still slower than LPS-Net-L (151.8 FPS).
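For completeness, PyTorch-side numbers such as the 101.6 FPS reported for DDRNet are typically obtained with a warm-up phase and explicit CUDA synchronization. A minimal sketch of such a measurement, assuming the model and input shape are supplied by the caller:

```python
import time
import torch

@torch.no_grad()
def measure_fps(model, input_shape=(1, 3, 1024, 2048), warmup=50, iters=200):
    """Average PyTorch inference FPS with warm-up and CUDA synchronization."""
    model = model.eval().cuda()
    x = torch.randn(*input_shape, device="cuda")
    for _ in range(warmup):       # warm-up so one-time setup costs are excluded
        model(x)
    torch.cuda.synchronize()      # make sure all warm-up kernels have finished
    start = time.time()
    for _ in range(iters):
        model(x)
    torch.cuda.synchronize()      # wait for all kernels before stopping the clock
    return iters / (time.time() - start)
```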

@RolandGao

Thanks for your reply!
