Commit
docs(readmes): update information and links
Manuel Stoiber committed Jul 15, 2022
1 parent c1fbf92 commit 540ab4a
Showing 5 changed files with 21 additions and 6 deletions.
Binary file added ICG/dlr_icg_video_presentation.png
14 changes: 12 additions & 2 deletions ICG/readme.md
@@ -4,12 +4,20 @@
Iterative Corresponding Geometry: Fusing Region and Depth for Highly Efficient 3D Tracking of Textureless Objects
Manuel Stoiber, Martin Sundermeyer, Rudolph Triebel
Conference on Computer Vision and Pattern Recognition (CVPR) 2022
- [Paper](https://arxiv.org/abs/2203.05334)
+ [Paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Stoiber_Iterative_Corresponding_Geometry_Fusing_Region_and_Depth_for_Highly_Efficient_CVPR_2022_paper.pdf), [supplementary](https://openaccess.thecvf.com/content/CVPR2022/supplemental/Stoiber_Iterative_Corresponding_Geometry_CVPR_2022_supplemental.pdf)

## Abstract
Tracking objects in 3D space and predicting their 6DoF pose is an essential task in computer vision. State-of-the-art approaches often rely on object texture to tackle this problem. However, while they achieve impressive results, many objects do not contain sufficient texture, violating the main underlying assumption. In the following, we thus propose ICG, a novel probabilistic tracker that fuses region and depth information and only requires the object geometry. Our method deploys correspondence lines and points to iteratively refine the pose. We also implement robust occlusion handling to improve performance in real-world settings. Experiments on the YCB-Video, OPT, and Choi datasets demonstrate that, even for textured objects, our approach outperforms the current state of the art with respect to accuracy and robustness. At the same time, ICG shows fast convergence and outstanding efficiency, requiring only 1.3 ms per frame on a single CPU core. Finally, we analyze the influence of individual components and discuss our performance compared to deep learning-based methods. The source code of our tracker is publicly available.
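The fusion described in the abstract can be pictured as a damped Newton-style update that combines the gradients and Hessians contributed by the region and depth modalities. The sketch below is illustrative only and not the tracker's actual implementation: the interfaces (`grad_region`, `hess_region`, `grad_depth`, `hess_depth` given as 6-vectors and 6x6 matrices) are assumptions for the example.

```python
import numpy as np

def fused_pose_update(pose, grad_region, hess_region, grad_depth, hess_depth,
                      damping=1e-3):
    """One fused Newton-style step on a 6-DoF pose parameter vector.

    Illustrative sketch: in ICG, the region term stems from correspondence
    lines and the depth term from correspondence points; here both are
    assumed to be precomputed gradient/Hessian pairs.
    """
    grad = grad_region + grad_depth  # fuse the two modalities
    hess = hess_region + hess_depth
    # Damped Newton step: solve (H + damping * I) * delta = g
    delta = np.linalg.solve(hess + damping * np.eye(6), grad)
    return pose - delta
```

Iterating such updates over consecutive frames would refine the pose, with the damping term keeping the step well-conditioned when one modality is uninformative.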

## Videos
<a href="https://www.youtube.com/watch?v=eYd_3TnJIaE">
<p align="center">
<img src="dlr_icg_video_presentation.png" height=300>
<br>
<em>Presentation CVPR 2022</em>
</p>
</a>

<a href="https://youtu.be/qMr1RHCsnDk?t=10">
<p align="center">
<img src="dlr_icg_video_real-world.png" height=300>
@@ -197,6 +205,8 @@ If you find our work useful, please cite us with:
author = {Stoiber, Manuel and Sundermeyer, Martin and Triebel, Rudolph},
title = {Iterative Corresponding Geometry: Fusing Region and Depth for Highly Efficient 3D Tracking of Textureless Objects},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
- year = {2022}
+ month = {June},
+ year = {2022},
+ pages = {6855-6865}
}
```
3 changes: 2 additions & 1 deletion RBGT/readme.md
@@ -66,7 +66,8 @@ If you find our work useful, please cite us with:
title = {A Sparse Gaussian Approach to Region-Based 6DoF Object Tracking},
booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
month = {November},
- year = {2020}
+ year = {2020},
+ page = {666–682}
}
```

8 changes: 6 additions & 2 deletions SRT3D/readme.md
@@ -5,7 +5,7 @@
### Paper
SRT3D: A Sparse Region-Based 3D Object Tracking Approach for the Real World
Manuel Stoiber, Martin Pfanne, Klaus H. Strobl, Rudolph Triebel, and Alin Albu-Schäffer
- International Journal of Computer Vision: [Paper](https://arxiv.org/abs/2110.12715)
+ International Journal of Computer Vision (IJCV) 2022: [Paper](https://link.springer.com/article/10.1007/s11263-022-01579-8)

### Abstract
Region-based methods have become increasingly popular for model-based, monocular 3D tracking of texture-less objects in cluttered scenes. However, while they achieve state-of-the-art results, most methods are computationally expensive, requiring significant resources to run in real-time. In the following, we build on our previous work and develop SRT3D, a sparse region-based approach to 3D object tracking that bridges this gap in efficiency. Our method considers image information sparsely along so-called correspondence lines that model the probability of the object's contour location. We thereby improve on the current state of the art and introduce smoothed step functions that consider a defined global and local uncertainty. For the resulting probabilistic formulation, a thorough analysis is provided. Finally, we use a pre-rendered sparse viewpoint model to create a joint posterior probability for the object pose. The function is maximized using second-order Newton optimization with Tikhonov regularization. During the pose estimation, we differentiate between global and local optimization, using a novel approximation for the first-order derivative employed in the Newton method. In multiple experiments, we demonstrate that the resulting algorithm improves the current state of the art both in terms of runtime and quality, performing particularly well for noisy and cluttered images encountered in the real world.
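The smoothed step functions mentioned in the abstract can be illustrated with a simple tanh-based stand-in. This is an assumption for illustration only, not the paper's actual functions; the parameter names `s` (local, slope-like uncertainty) and `alpha` (global amplitude) are hypothetical.

```python
import numpy as np

def smoothed_step(x, s=1.0, alpha=0.4):
    """Tanh-based smoothed step: illustrative stand-in for SRT3D's
    smoothed step functions. Maps a signed distance x from the
    estimated contour to a foreground probability in (0.5 - alpha,
    0.5 + alpha); `s` smooths the transition, `alpha` bounds it.
    """
    return 0.5 + alpha * np.tanh(x / (2.0 * s))
```

A point far inside the contour then receives a foreground probability near `0.5 + alpha`, one far outside near `0.5 - alpha`, and the background probability follows as one minus the foreground value, so the function never commits fully to either side — which is the role of the global uncertainty.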
@@ -40,7 +40,11 @@ If you find our work useful, please cite us with:
author = {Stoiber, Manuel and Pfanne, Martin and Strobl, Klaus H. and Triebel, Rudolph and Albu-Schaeffer, Alin},
journal = {International Journal of Computer Vision},
title = {SRT3D: A Sparse Region-Based 3D Object Tracking Approach for the Real World},
- year = {2022}
+ month = {April},
+ year = {2022},
+ volume = {130},
+ number = {4},
+ pages = {1008-1030}
}
```

2 changes: 1 addition & 1 deletion readme.md
@@ -2,7 +2,7 @@

Tracking a rigid object in 3D space and determining its 6DoF pose is an essential task in computer vision. Its application ranges from augmented reality to robotic perception. Given consecutive image frames and a 3D model of the object, the goal is to robustly estimate both the rotation and translation of a known object relative to the camera. While the problem has been thoroughly studied, many challenges such as partial occlusions, appearance changes, motion blur, background clutter, and real-time requirements still exist.

- In this repository, we will continuously publish algorithms and code of our ongoing research on 3D object tracking. The folders for the different algorithms include everything necessary to reproduce results presented in our papers and to support full reusability in different projects and applications.
+ In this repository, we will continuously publish algorithms and code of our ongoing research on 3D object tracking. The folders for the different algorithms include everything necessary to reproduce results presented in our papers. Note that the code for each new paper also includes an updated version of previous work. If you want to use our tracker in your own project or application, please use the code from the latest publication. Currently, the latest version of our code can be found in the folder [__ICG__](https://github.com/DLR-RM/3DObjectTracking/tree/master/ICG).

The algorithms corresponding to the following papers are included:
* [__RBGT__](https://github.com/DLR-RM/3DObjectTracking/tree/master/RBGT)
