diff --git a/ICG/dlr_icg_video_presentation.png b/ICG/dlr_icg_video_presentation.png
new file mode 100644
index 0000000..4494569
Binary files /dev/null and b/ICG/dlr_icg_video_presentation.png differ
diff --git a/ICG/readme.md b/ICG/readme.md
index 9c71a18..f4c9a43 100644
--- a/ICG/readme.md
+++ b/ICG/readme.md
@@ -4,12 +4,20 @@ Iterative Corresponding Geometry: Fusing Region and Depth for Highly Efficient 3D Tracking of Textureless Objects
 Manuel Stoiber, Martin Sundermeyer, Rudolph Triebel
 Conference on Computer Vision and Pattern Recognition (CVPR) 2022
-[Paper](https://arxiv.org/abs/2203.05334)
+[Paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Stoiber_Iterative_Corresponding_Geometry_Fusing_Region_and_Depth_for_Highly_Efficient_CVPR_2022_paper.pdf), [supplementary](https://openaccess.thecvf.com/content/CVPR2022/supplemental/Stoiber_Iterative_Corresponding_Geometry_CVPR_2022_supplemental.pdf)
 
 ## Abstract
 Tracking objects in 3D space and predicting their 6DoF pose is an essential task in computer vision. State-of-the-art approaches often rely on object texture to tackle this problem. However, while they achieve impressive results, many objects do not contain sufficient texture, violating the main underlying assumption. In the following, we thus propose ICG, a novel probabilistic tracker that fuses region and depth information and only requires the object geometry. Our method deploys correspondence lines and points to iteratively refine the pose. We also implement robust occlusion handling to improve performance in real-world settings. Experiments on the YCB-Video, OPT, and Choi datasets demonstrate that, even for textured objects, our approach outperforms the current state of the art with respect to accuracy and robustness. At the same time, ICG shows fast convergence and outstanding efficiency, requiring only 1.3 ms per frame on a single CPU core. Finally, we analyze the influence of individual components and discuss our performance compared to deep learning-based methods. The source code of our tracker is publicly available.
 
 ## Videos
+
+<p align="center">
+  <img src="dlr_icg_video_presentation.png">
+  <br>
+  Presentation CVPR 2022
+</p>
+
@@ -197,6 +205,8 @@ If you find our work useful, please cite us with:
   author = {Stoiber, Manuel and Sundermeyer, Martin and Triebel, Rudolph},
   title = {Iterative Corresponding Geometry: Fusing Region and Depth for Highly Efficient 3D Tracking of Textureless Objects},
   booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
-  year = {2022}
+  month = {June},
+  year = {2022},
+  pages = {6855-6865}
 }
 ```
diff --git a/RBGT/readme.md b/RBGT/readme.md
index caf0888..c1046e5 100644
--- a/RBGT/readme.md
+++ b/RBGT/readme.md
@@ -66,7 +66,8 @@ If you find our work useful, please cite us with:
   title = {A Sparse Gaussian Approach to Region-Based 6DoF Object Tracking},
   booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
   month = {November},
-  year = {2020}
+  year = {2020},
+  pages = {666-682}
 }
 ```
diff --git a/SRT3D/readme.md b/SRT3D/readme.md
index 24339e7..9a8a662 100644
--- a/SRT3D/readme.md
+++ b/SRT3D/readme.md
@@ -5,7 +5,7 @@ ### Paper
 SRT3D: A Sparse Region-Based 3D Object Tracking Approach for the Real World
 Manuel Stoiber, Martin Pfanne, Klaus H. Strobl, Rudolph Triebel, and Alin Albu-Schäffer
-International Journal of Computer Vision: [Paper](https://arxiv.org/abs/2110.12715)
+International Journal of Computer Vision (IJCV) 2022: [Paper](https://link.springer.com/article/10.1007/s11263-022-01579-8)
 
 ### Abstract
 Region-based methods have become increasingly popular for model-based, monocular 3D tracking of texture-less objects in cluttered scenes. However, while they achieve state-of-the-art results, most methods are computationally expensive, requiring significant resources to run in real-time. In the following, we build on our previous work and develop SRT3D, a sparse region-based approach to 3D object tracking that bridges this gap in efficiency. Our method considers image information sparsely along so-called correspondence lines that model the probability of the object's contour location. We thereby improve on the current state of the art and introduce smoothed step functions that consider a defined global and local uncertainty. For the resulting probabilistic formulation, a thorough analysis is provided. Finally, we use a pre-rendered sparse viewpoint model to create a joint posterior probability for the object pose. The function is maximized using second-order Newton optimization with Tikhonov regularization. During the pose estimation, we differentiate between global and local optimization, using a novel approximation for the first-order derivative employed in the Newton method. In multiple experiments, we demonstrate that the resulting algorithm improves the current state of the art both in terms of runtime and quality, performing particularly well for noisy and cluttered images encountered in the real world.
@@ -40,7 +40,11 @@ If you find our work useful, please cite us with:
   author = {Stoiber, Manuel and Pfanne, Martin and Strobl, Klaus H. and Triebel, Rudolph and Albu-Schaeffer, Alin},
   journal = {International Journal of Computer Vision},
   title = {SRT3D: A Sparse Region-Based 3D Object Tracking Approach for the Real World},
-  year = {2022}
+  month = {April},
+  year = {2022},
+  volume = {130},
+  number = {4},
+  pages = {1008-1030}
 }
 ```
diff --git a/readme.md b/readme.md
index 1f03cd3..bc33329 100644
--- a/readme.md
+++ b/readme.md
@@ -2,7 +2,7 @@ Tracking a rigid object in 3D space and determining its 6DoF pose is an essential task in computer vision.
 Its application ranges from augmented reality to robotic perception. Given consecutive image frames and a 3D model of the object, the goal is to robustly estimate both the rotation and translation of a known object relative to the camera. While the problem has been thoroughly studied, many challenges such as partial occlusions, appearance changes, motion blur, background clutter, and real-time requirements still exist.
-In this repository, we will continuously publish algorithms and code of our ongoing research on 3D object tracking. The folders for the different algorithms include everything necessary to reproduce results presented in our papers and to support full reusability in different projects and applications.
+In this repository, we will continuously publish algorithms and code of our ongoing research on 3D object tracking. The folders for the different algorithms include everything necessary to reproduce results presented in our papers. Note that the code for each new paper also includes an updated version of previous work. If you want to use our tracker in your own project or application, please use the code from the latest publication. Currently, the latest version of our code can be found in the folder [__ICG__](https://github.com/DLR-RM/3DObjectTracking/tree/master/ICG).
 
 The algorithms corresponding to the following papers are included:
 * [__RBGT__](https://github.com/DLR-RM/3DObjectTracking/tree/master/RBGT)