<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="description"
content="Few-shot Detection w/o Fine-tuning for Autonomous Exploration">
<meta name="keywords" content="Few-shot detection, Online, Robotic exploration">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Collaborative Multi-Object Tracking with Conformal Uncertainty Propagation</title>
<link rel="icon" type="image/png" href="./static/images/ai4ce.png">
<link href="https://fonts.googleapis.com/css?family=Google+Sans|Noto+Sans|Castoro"
rel="stylesheet">
<link rel="stylesheet" href="./static/css/bulma.min.css">
<link rel="stylesheet" href="./static/css/bulma-carousel.min.css">
<link rel="stylesheet" href="./static/css/bulma-slider.min.css">
<link rel="stylesheet" href="./static/css/fontawesome.all.min.css">
<link rel="stylesheet"
href="https://cdn.jsdelivr.net/gh/jpswalsh/academicons@1/css/academicons.min.css">
<link rel="stylesheet" href="./static/css/index.css">
<link rel="icon" href="./static/images/ai4ce.png">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<script defer src="./static/js/fontawesome.all.min.js"></script>
<script src="./static/js/bulma-carousel.min.js"></script>
<script src="./static/js/bulma-slider.min.js"></script>
<script src="./static/js/index.js"></script>
</head>
<body>
<nav class="navbar" role="navigation" aria-label="main navigation">
<div class="navbar-brand">
<a role="button" class="navbar-burger" aria-label="menu" aria-expanded="false">
<span aria-hidden="true"></span>
<span aria-hidden="true"></span>
<span aria-hidden="true"></span>
</a>
</div>
<!-- <div class="navbar-menu">
<div class="navbar-start" style="flex-grow: 1; justify-content: center;">
<a class="navbar-item" href="https://jaraxxus-me.github.io/">
<span class="icon">
<i class="fas fa-home"></i>
</span>
</a>
<div class="navbar-item has-dropdown is-hoverable">
<a class="navbar-link">
More Research
</a>
<div class="navbar-dropdown">
<a class="navbar-item" href="https://ieeexplore.ieee.org/document/9561564">
ADTrack - ICRA 2021
</a>
<a class="navbar-item" href="https://openaccess.thecvf.com/content/ICCV2021/papers/Cao_HiFT_Hierarchical_Feature_Transformer_for_Aerial_Tracking_ICCV_2021_paper.pdf">
HiFT - ICCV 2021
</a>
</div>
</div>
</div> -->
</div>
</nav>
<section class="hero">
<div class="hero-body">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column has-text-centered">
<!-- <h1 class="title is-1 publication-title"><img src="./static/images/drone.svg" width="120">AirDet </h1> -->
<h1 class="title is-2 publication-title">Collaborative Multi-Object Tracking with Conformal Uncertainty Propagation</h1>
<div class="column is-full_width">
<h2 class="title is-4">IEEE Robotics and Automation Letters</h2>
</div>
<div class="is-size-5 publication-authors">
<span class="author-block">
<a href="https://scholar.google.com/citations?hl=en&user=EWyaceAAAAAJ">Sanbao Su</a><sup>1</sup>,</span>
<span class="author-block">
<a href="https://scholar.google.com/citations?hl=en&user=StIsMNgAAAAJ">Songyang Han</a><sup>1</sup>,
</span>
<span class="author-block">
<a href="https://scholar.google.com/citations?hl=en&user=i_aajNoAAAAJ&view_op=list_works&sortby=pubdate">Yiming Li</a><sup>2</sup>,</span>
<span class="author-block">
<a href="https://www.linkedin.com/in/zhili-zhang-a63a02201/">Zhili Zhang</a><sup>1</sup>,
</span>
<span class="author-block">
<a href="https://scholar.google.com/citations?user=YeG8ZM0AAAAJ&hl=en">Chen Feng</a><sup>2</sup>
</span>
<span class="author-block">
<a href="https://scholar.google.com/citations?hl=en&user=7hR0r_EAAAAJ">Caiwen Ding </a><sup>1</sup>
</span>
<span class="author-block">
<a href="https://scholar.google.com/citations?hl=en&user=fH2YF6YAAAAJ">Fei Miao</a><sup>1</sup>
</span>
</div>
<div class="is-size-5 publication-authors">
<span class="author-block"><sup>1</sup>Department of Computer Science and Engineering, University of Connecticut</span>
</div>
<div class="is-size-5 publication-authors">
<span class="author-block"><sup>2</sup>Tandon School of Engineering, New York University</span>
</div>
<div class="column has-text-centered">
<div class="publication-links">
<!-- PDF Link. -->
<span class="link-block">
<a href="https://arxiv.org/pdf/2303.14346.pdf"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fas fa-file-pdf"></i>
</span>
<span>Paper</span>
</a>
</span>
<span class="link-block">
<a href="https://arxiv.org/abs/2303.14346"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="ai ai-arxiv"></i>
</span>
<span>arXiv</span>
</a>
</span>
<!-- Code Link. -->
<span class="link-block">
<a href="https://github.com/susanbao/mot_cup"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fab fa-github"></i>
</span>
<span>Code</span>
</a>
</span>
<!-- Video Link. -->
<span class="link-block">
<a href="https://www.youtube.com/watch?v=OJXmIbx-Y5M"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fab fa-youtube"></i>
</span>
<span>Video</span>
</a>
</span>
</div>
</div>
</div>
</div>
</div>
</div>
</section>
<section class="section">
<div class="container is-max-desktop">
<!-- Paper video. -->
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h2 class="title is-3">Video</h2>
<div class="publication-video">
<iframe src="https://www.youtube.com/embed/OJXmIbx-Y5M"
frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
</div>
</div>
</div>
<!--/ Paper video. -->
</div>
</section>
<section class="hero teaser">
<div class="container is-max-desktop">
<div class="hero-body">
<img src="static\images\teaser.png" class="center"/>
<!-- <video id="teaser" autoplay muted loop height="100%">
<source src="./static/images/SUBT_video.mp4"
type="video/mp4">
</video> -->
<!-- <img class="rounded" src="./media/nice-slam/teaser.png" > -->
<br><br><br>
<!-- <h2 class="subtitle has-text-centered">
</h2> -->
<!-- <h2 class="subtitle has-text-centered">
(The <span style="color:#000000;">black</span> / <span style="color:#ff0000;">red</span> lines are the ground truth / predicted camera trajectory)
</h2> -->
<h2 class="is-size-6 has-text-centered"> Difference in data association for multi-object tracking (MOT) with and without considering uncertainty. Ground truth bounding boxes are in <span style="color:#00CC66;">green</span>, detected bounding boxes in <span style="color:#FF8000;">orange</span>, and tracklets' bounding boxes in <span style="color:#ff0000;">green</span>, labeled with object IDs. Shadow ellipses indicate detected bounding box uncertainty. SORT, which doesn't consider uncertainty, is on the left side of the figure, while our MOT-CUP framework, which incorporates uncertainty, is on the right side. At time t-1, both MOT algorithms output tracklet ID 186. However, at time t, SORT fails to associate the low-quality detected object with tracklet 186 due to a large IoU distance. Thus, SORT removes the tracklet. In contrast, our MOT-CUP framework quantifies the uncertainty of COD with a larger shadow ellipse to represent the uncertainty of the bounding box for tracklet 186, and successfully associates the low-quality detected object by considering the COD uncertainty.</h2>
</div>
</div>
</section>
<section class="section">
<div class="container is-max-desktop">
<!-- Abstract. -->
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h2 class="title is-3">Abstract</h2>
<div class="content has-text-justified">
<p>
Object detection and multiple object tracking (MOT) are essential components of self-driving systems. Accurate detection and uncertainty quantification are both critical for onboard modules, such as perception, prediction, and planning, to improve the safety and robustness of autonomous vehicles. Collaborative object detection (COD) has been proposed to improve detection accuracy and reduce uncertainty by leveraging the viewpoints of multiple agents. However, little attention has been paid to how to leverage the uncertainty quantification from COD to enhance MOT performance. In this paper, as a first attempt, we design an uncertainty propagation framework, called MOT-CUP, to address this challenge. Our framework first quantifies the uncertainty of COD through direct modeling and conformal prediction, and then propagates this uncertainty information through the motion prediction and association steps. MOT-CUP is designed to work with different collaborative object detectors and baseline MOT algorithms. We evaluate MOT-CUP on V2X-Sim, a comprehensive collaborative perception dataset, and demonstrate a 2% improvement in accuracy and a 2.67X reduction in uncertainty compared to baselines such as SORT and ByteTrack. MOT-CUP demonstrates the importance of uncertainty quantification in both COD and MOT, and provides the first attempt to improve accuracy and reduce uncertainty in MOT based on COD through uncertainty propagation.
</p>
</div>
</div>
</div>
<!--/ Abstract. -->
</div>
</section>
<section class="section">
<div class="container is-max-desktop">
<!-- Contribution. -->
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h2 class="title is-3">Contribution</h2>
<div class="content has-text-justified">
<ul>
<li>
To the best of our knowledge, our MOT-CUP framework is the <strong>first attempt to leverage quantified uncertainty from collaborative object detection to improve MOT performance.</strong>
The framework can be applied to <strong>different object detection models and MOT algorithms.</strong>
</li>
<li>
In the collaborative object detection stage, we employ <strong>direct modeling and conformal prediction techniques to rigorously quantify the uncertainty</strong> (see the sketch after this list).
</li>
<li>
For MOT, we further improve the baseline MOT algorithms by designing two novel methods that <strong>effectively leverage uncertainty information in both the Kalman Filter and the association step.</strong>
</li>
</ul>
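<p>
The sketch below illustrates the generic split conformal prediction recipe that the second bullet refers to, applied to a single box coordinate. The normalized-residual nonconformity score and all variable names are our assumptions for illustration; the paper's exact formulation may differ.
</p>
<pre><code># Minimal split-conformal sketch for one box-coordinate regressor.
# Assumed score: |error| / predicted_std (a common normalized residual).
import numpy as np

def conformal_quantile(cal_errors, cal_stds, alpha=0.1):
    """Calibrate on held-out data: return q_hat such that intervals
    mu +/- q_hat * std cover at least 1 - alpha of new points."""
    scores = np.abs(cal_errors) / cal_stds      # normalized residuals
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))     # finite-sample rank
    return np.sort(scores)[min(k, n) - 1]

rng = np.random.default_rng(0)
cal_err = rng.normal(0.0, 2.0, size=500)  # calibration residuals (pixels)
cal_std = np.full(500, 2.0)               # detector's predicted std (DM output)

q_hat = conformal_quantile(cal_err, cal_std, alpha=0.1)
mu, std = 120.0, 2.0                      # new coordinate and its predicted std
print(f"90% interval: [{mu - q_hat * std:.1f}, {mu + q_hat * std:.1f}]")
</code></pre>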
</div>
</div>
</div>
<!--/ Contribution. -->
</div>
</section>
<section class="section">
<div class="container is-max-desktop">
<!-- Method. -->
<div class="columns is-centered has-text-centered">
<div class="column is-full_width">
<hr>
<h2 class="title is-3">Method</h2>
<br>
<img src="static\images\main.png" class="center"/>
<div class="content has-text-justified">
<br>
<p>
Overview of our MOT-CUP framework. The <span style="color:#ff0000;">red</span> color highlights the novelties and important techniques in our MOT-CUP framework. In the collaborative object detection (COD) stage, we rigorously compute the uncertainty quantification (UQ) of each object detection via direct modeling (DM) and conformal prediction (CP). In the motion prediction stage of MOT, we adopt a Standard Deviation-based Kalman Filter (SDKF), which leverages the UQ results to predict the locations of the objects in the next time step with higher precision. In the association step, we first apply the baseline association method, and then associate the remaining unmatched detections and tracklets using a Negative Log Likelihood similarity metric, called NLLAI.
</p>
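<p>
To make the SDKF idea above concrete, here is a minimal one-dimensional sketch based on our reading of the description, not the released code: the measurement noise fed into the Kalman update is taken from the detection's quantified standard deviation, so low-confidence detections pull the state estimate less.
</p>
<pre><code># Minimal 1-D sketch of a Standard Deviation-based Kalman Filter (SDKF)
# update: the measurement variance R comes from the detection's quantified
# std dev (the UQ output) instead of a fixed constant. Illustrative only.

def sdkf_update(x, P, z, det_std):
    """One scalar Kalman update with per-detection measurement noise."""
    R = det_std ** 2         # measurement variance from quantified std
    K = P / (P + R)          # Kalman gain: small when the detection is noisy
    x_new = x + K * (z - x)  # uncertain detections move the state less
    P_new = (1.0 - K) * P
    return x_new, P_new

x, P = 100.0, 4.0            # predicted state and its variance
print(sdkf_update(x, P, z=110.0, det_std=1.0))   # confident: large correction
print(sdkf_update(x, P, z=110.0, det_std=10.0))  # uncertain: small correction
</code></pre>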
</div>
</div>
</div>
<hr>
<!-- Qualitative Results. -->
<div class="columns is-centered has-text-centered">
<div class="column is-full_width">
<h2 class="title is-3">Qualitative Results</h2>
</div>
</div>
<p>
 
</p>
<!-- <h3 class="title is-4">Attention of Detection Head</h3> -->
<div class="column is-full_width">
<img src="static\images\exp.png" class="center"/>
<div class="content has-text-justified">
<br>
<p>
Visualization of the results of the detection, the original SORT, and our MOT-CUP framework over three consecutive frames. The collaborative object detector used here is Upper-bound. In this visualization, <span style="color:#00CC66;">green</span> boxes are ground truth bounding boxes, <span style="color:#FF8000;">orange</span> boxes are detected bounding boxes, and <span style="color:#ff0000;">red</span> boxes are tracklets' bounding boxes output by MOT. The numbers beside the red boxes indicate object IDs. We observe that our MOT-CUP outperforms the original SORT algorithm in tracking object 332, as indicated by the red arrow. Furthermore, our MOT-CUP improves localization accuracy compared with the object detector alone, e.g., for object 332 in frame 60. Overall, our results demonstrate the importance of considering uncertainty in MOT.
</p>
</div>
</div>
</div>
</section>
<section class="section" id="BibTeX">
<div class="container is-max-desktop content">
<h2 class="title">BibTeX</h2>
<pre><code>@article{Su2023mot_cup,
  author  = {Su, Sanbao and Han, Songyang and Li, Yiming and Zhang, Zhili and Feng, Chen and Ding, Caiwen and Miao, Fei},
  title   = {Collaborative Multi-Object Tracking with Conformal Uncertainty Propagation},
  journal = {IEEE Robotics and Automation Letters},
  year    = {2023}
}</code></pre>
</div>
</section>
<!-- <section class="section" id="Acknowledgements">
<div class="container is-max-desktop content">
<h2 class="title">Acknowledgements</h2>
The work was done when Bowen Li and Pranay Reddy were interns at The Robotics Institute, CMU. The authors would like to thank all members of the Team Explorer for providing data collected from the DARPA Subterranean Challenge. Our code is built upon <a href="https://github.com/fanq15/FewX">FewX</a>, for which we sincerely express our gratitute to the authors.
</div>
</section> -->
<footer class="footer">
<div class="container">
<div class="content has-text-centered">
</div>
<div class="columns is-centered">
<div class="column is-8">
<div class="content">
<p>
This website is licensed under a <a rel="license"
href="http://creativecommons.org/licenses/by-sa/4.0/">Creative
Commons Attribution-ShareAlike 4.0 International License</a>.
This webpage template is from <a href="https://github.com/nerfies/nerfies.github.io">Nerfies</a>.
We sincerely thank <a href="https://keunhong.com/">Keunhong Park</a> for developing and open-sourcing this template.
</p>
</div>
</div>
</div>
</div>
</footer>
</body>
</html>