
Commit 2d4354c

first commit (0 parents)

19 files changed (+203, -0 lines)

LICENSE

Lines changed: 21 additions & 0 deletions

MIT License

Copyright (c) 2024 DeepDuke

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md

Lines changed: 182 additions & 0 deletions

# MOE-Dataset: A Dense LiDAR Moving Event Dataset

<!-- <div align="center">

![motion label structure](./assets/img/MOE_logo.png)

</div> -->

<div align="center">
<img src="./assets/img/MOE_logo.png" width="600" height="400">
</div>

<div align="center">

### [Project Page](https://sites.google.com/view/moe-dataset) | [Paper] | [Competition](https://codalab.lisn.upsaclay.fr/competitions/18028) | [Dataset]()

</div>

## Dataset

### Sequences Format

We collect multiple sequences from simulators and a real urban campus, covering both indoor and outdoor scenes with a high density of moving objects. An overview of these sequences is shown below. (There are far more moving/non-moving points than the animations suggest: we lower the point cloud opacity to make the visualizations easier to read.)

<div align="center">

| 00 | 01 | 02 | 03 | 04 |
| :-------------: | :-------------: | :-------------: | :-------------: | :-------------: |
| <img src="./assets/gif/seq_00.gif" width="150" height="90"> | <img src="./assets/gif/seq_01.gif" width="150" height="90"> | <img src="./assets/gif/seq_02.gif" width="150" height="90"> | <img src="./assets/gif/seq_03.gif" width="150" height="90"> | <img src="./assets/gif/seq_04.gif" width="150" height="90"> |

</div>

<div align="center">

| 05 | 06 | 07 | 08 | 09 |
| :-------------: | :-------------: | :-------------: | :-------------: | :-------------: |
| <img src="./assets/gif/seq_05.gif" width="150" height="90"> | <img src="./assets/gif/seq_06.gif" width="150" height="90"> | <img src="./assets/gif/seq_07.gif" width="150" height="90"> | <img src="./assets/gif/seq_08.gif" width="150" height="90"> | <img src="./assets/gif/seq_09.gif" width="150" height="90"> |

</div>

The folder structure of the whole dataset is as follows:

```bash
MOE-Dataset
├── 00
│   ├── gt_poses.txt
│   ├── label/
│   └── pcd/
├── 01
│   ├── gt_poses.txt
│   ├── label/
│   └── pcd/
├── 02
│   ├── gt_poses.txt
│   ├── label/
│   └── pcd/
├── 03
│   ├── gt_poses.txt
│   ├── label/
│   └── pcd/
...
├── 08
│   ├── gt_poses.txt
│   └── pcd/
└── 09
    ├── gt_poses.txt
    └── pcd/
```

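For reference, here is a minimal Python sketch that enumerates the scans of one sequence under this layout (the helper name and paths are illustrative, not part of any dataset tooling; sequences without a `label/` folder simply yield `None` labels):

```python
from pathlib import Path

def list_frames(root, seq="00"):
    """Collect (pcd_path, label_path) pairs for one sequence; label_path is None if missing."""
    seq_dir = Path(root) / seq
    frames = []
    for pcd_path in sorted((seq_dir / "pcd").glob("*.pcd")):
        label_path = seq_dir / "label" / (pcd_path.stem + ".txt")
        frames.append((pcd_path, label_path if label_path.exists() else None))
    return frames

frames = list_frames("MOE-Dataset", "00")
print(len(frames), "scans found")
```
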
#### Pointcloud

Each scan is provided as `xxxxxx.pcd` with `x, y, z` information. We provide the raw `*.pcd` files for easier visualization and loading, although the binary format adopted by [SemanticKITTI](https://github.com/PRBonn/semantic-kitti-api) would be more storage-efficient. The point cloud files are easy to read with a library such as [Open3D](https://github.com/isl-org/Open3D).

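For example, a single scan can be loaded with Open3D roughly like this (the file path is illustrative):

```python
import numpy as np
import open3d as o3d

# Read one scan and view it as an (N, 3) array of x, y, z coordinates.
pcd = o3d.io.read_point_cloud("MOE-Dataset/00/pcd/000000.pcd")
points = np.asarray(pcd.points)
print(points.shape)  # (N, 3)

# Optional: visualize the scan.
o3d.visualization.draw_geometries([pcd])
```
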
#### Label

In the label file `xxxxxx.txt`, each row is the hierarchical motion label of the corresponding point in the matching pcd file, which is defined as:

<div align="center">

<img src="./assets/img/motion-label-structure.jpg" width="500" height="300">

</div>

Thus, each row in the label file `xxxxxx.txt` is

```python
Moveable_ID Moving_Status Class_ID
```

A movable point has `Moveable_ID` set to `1` (otherwise `0`), and a moving point has `Moving_Status` set to `1` (otherwise `0`). Note that the semantic `Class_ID` is only provided for moving points; non-moving points are assigned `-1`.

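A minimal sketch for parsing one label file with NumPy, following the three-column definition above (the file path is illustrative):

```python
import numpy as np

# Each row: Moveable_ID Moving_Status Class_ID
labels = np.loadtxt("MOE-Dataset/00/label/000000.txt", dtype=np.int64)
labels = labels.reshape(-1, 3)        # guard against single-row files

moveable = labels[:, 0] == 1          # points that can move
moving = labels[:, 1] == 1            # points that are actually moving
class_id = labels[:, 2]               # semantic class, -1 for non-moving points
print(moving.sum(), "moving points out of", len(labels))
```
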
#### Ground truth Pose

For each sequence, the ground truth poses of all scans are stored in `gt_poses.txt`. Each row of `gt_poses.txt` is the pose of the corresponding scan in order; for example, the first row is the pose of `000000.pcd` in that sequence. We adopt the KITTI format to represent a pose, i.e., the first 3 rows of the `4x4` transformation matrix written as one line.

```python
# Each row in gt_poses.txt
R11 R12 R13 x R21 R22 R23 y R31 R32 R33 z

# The corresponding 4x4 transformation matrix
T =
[
    R11 R12 R13 x
    R21 R22 R23 y
    R31 R32 R33 z
    0   0   0   1
]
```

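A minimal sketch of how these rows could be expanded into `4x4` matrices with NumPy (the path and helper name are illustrative):

```python
import numpy as np

def load_poses(path):
    """Read KITTI-format poses: each row holds the first 3 rows of a 4x4 matrix."""
    rows = np.loadtxt(path).reshape(-1, 3, 4)    # (num_scans, 3, 4)
    poses = np.tile(np.eye(4), (rows.shape[0], 1, 1))
    poses[:, :3, :4] = rows
    return poses                                 # (num_scans, 4, 4)

poses = load_poses("MOE-Dataset/00/gt_poses.txt")
print(poses.shape)   # one 4x4 transform per scan; row i corresponds to scan i
```
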
### Download

We put the zipped sequence files in the releases of this repository. Please download and unzip them. Note: the pcd files of some sequences are split across several zip archives because of GitHub's 2 GB single-file limit. Please merge the unzipped files back into the same sequence folder.

## Benchmark

We evaluate several SOTA algorithms on sequences `00, 01, 02` to set up a reference benchmark. We test 3 offline non-learning methods - [Removert](https://github.com/gisbi-kim/removert), [ERASOR](https://github.com/LimHyungTae/ERASOR), and [Octomap](https://github.com/Kin-Zhang/octomap/tree/feat/benchmark); 3 online non-learning methods - [Dynablox](https://github.com/ethz-asl/dynablox), [DOD](https://github.com/UTS-RI/dynamic_object_detection/), and [M-detector](https://github.com/hku-mars/M-detector); and two representatives of different branches of learning-based methods - [MotionBEV](https://github.com/xiekkki/motionbev) and [InsMOS](https://github.com/nubot-nudt/InsMOS). For the non-learning methods, we tuned the parameters on each sequence. For the learning-based methods, we test models trained on the larger-scale SemanticKITTI dataset to validate their generalization ability. Current learning-based methods tend to overfit to a single scene structure (such as urban highways) and are not robust on the differently structured, high-density scenes in this dataset.

<div align="center">

| Erasor on Seq 00 | MotionBEV on Seq 01 | Insmos on Seq 02 |
| ----- | ----- | ----- |
| <img src="./assets/gif/Erasor.gif" width="300" height="180"> | <img src="./assets/gif/MotionBEV.gif" width="300" height="180"> | <img src="./assets/gif/Insmos.gif" width="300" height="180"> |

</div>

We compute the mean IoU over sequences `{00, 01, 02}` for these algorithms, treating moving points as positive and non-moving points as negative.

```
IoU = TP / (TP + FP + FN)
```

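A minimal sketch of this metric computed from per-point boolean moving masks (the function name is illustrative):

```python
import numpy as np

def moving_iou(pred_moving, gt_moving):
    """IoU of the moving (positive) class from two boolean arrays of equal length."""
    pred = np.asarray(pred_moving, dtype=bool)
    gt = np.asarray(gt_moving, dtype=bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    total = tp + fp + fn
    return tp / float(total) if total > 0 else 0.0

# Example: the ground truth moving flag is column 1 of the label file (1 = moving).
# iou = moving_iou(my_predictions, labels[:, 1] == 1)
```
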
<div align="center">

| Seq | [Removert](https://github.com/gisbi-kim/removert) | [ERASOR](https://github.com/LimHyungTae/ERASOR) | [Octomap](https://github.com/Kin-Zhang/octomap/tree/feat/benchmark) | [Dynablox](https://github.com/ethz-asl/dynablox) | [DOD](https://github.com/UTS-RI/dynamic_object_detection/) | [M-detector](https://github.com/hku-mars/M-detector) | [MotionBEV](https://github.com/xiekkki/motionbev) | [InsMOS](https://github.com/nubot-nudt/InsMOS) |
| :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: |
| 00 | 0.297 | 0.378 | 0.328 | 0.320 | **0.786** | 0.305 | 0.002 | 0.495 |
| 01 | 0.028 | 0.028 | 0.031 | 0.195 | 0.142 | 0.174 | 0.055 | **0.282** |
| 02 | 0.421 | 0.627 | 0.652 | 0.492 | 0.595 | 0.044 | 0.069 | 0.379 |
| mIoU | 0.249 | 0.344 | 0.337 | 0.336 | **0.508** | 0.174 | 0.042 | 0.385 |

</div>

## Competition and Leaderboard

<div align="center">

<img src="./assets/img/Competition.png" width="500" height="230">

</div>

To support the community, we use sequences `{05, 06, 07, 08, 09}` to host a Moving Event Detection with LiDAR competition on [CodaLab](https://codalab.lisn.upsaclay.fr/). The dataset therefore provides motion labels only for sequences `{00, 01, 02, 03, 04}`; for sequences `{05, 06, 07, 08, 09}` we publish only the point cloud files and ground truth poses.

The competition is hosted at https://codalab.lisn.upsaclay.fr/competitions/18028. If you are interested, please feel free to take part!

## Issues

If you have any questions about this dataset, please feel free to open an issue for discussion. Thanks!

## Acknowledgement

We want to thank the following open-source projects and datasets.

- [Removert](https://github.com/gisbi-kim/removert)
- [ERASOR](https://github.com/LimHyungTae/ERASOR)
- [An improved version of Octomap](https://github.com/Kin-Zhang/octomap/tree/feat/benchmark)
- [Dynablox](https://github.com/ethz-asl/dynablox)
- [DOD (dynamic_object_detection)](https://github.com/UTS-RI/dynamic_object_detection/)
- [DynamicMap_Benchmark](https://github.com/KTH-RPL/DynamicMap_Benchmark)
- [M-detector](https://github.com/hku-mars/M-detector)
- [MotionBEV](https://github.com/xiekkki/motionbev)
- [InsMOS](https://github.com/nubot-nudt/InsMOS)
- [SemanticKITTI Dataset](https://github.com/PRBonn/semantic-kitti-api)

## Citation

The citation format is coming soon!

If you find this dataset useful, please feel free to give us a star! Thanks!

assets/gif/Erasor.gif (4.41 MB)
assets/gif/Insmos.gif (2.56 MB)
assets/gif/MotionBEV.gif (6.18 MB)
assets/gif/seq_00.gif (3.21 MB)
assets/gif/seq_01.gif (10.2 MB)
assets/gif/seq_02.gif (6.37 MB)
assets/gif/seq_03.gif (4.98 MB)
assets/gif/seq_04.gif (10.4 MB)
