
Commit

Added the poster and video
GloryyrolG committed Sep 25, 2023
1 parent df4e30a commit 6cdb758
Showing 29 changed files with 79 additions and 24 deletions.
1 change: 1 addition & 0 deletions Gemfile.lock
@@ -65,6 +65,7 @@ GEM
webrick (1.8.1)

PLATFORMS
ruby
x86_64-linux

DEPENDENCIES
Binary file added _site/assets/media/mhentropy_pdf.pdf
Binary file not shown.
Binary file added _site/assets/media/mhentropy_poster.pdf
Binary file not shown.
Binary file added _site/assets/media/mhentropy_slides.pdf
Binary file not shown.
51 changes: 39 additions & 12 deletions _site/index.html
Expand Up @@ -6,7 +6,7 @@
<meta content="width=device-width, initial-scale=1" name="viewport" />
<!-- <link href="assets/media/favicon.ico" rel="shortcut icon" /> -->
<title>
MHEntropy: Multiple Hypotheses Meet Entropy for Pose and Shape Recovery
MHEntropy: Entropy Meets Multiple Hypotheses for Pose and Shape Recovery
</title>
<meta content="MHEntropy" property="og:title" />
<!-- <meta content="We present an approach for 3D global human mesh recovery from monocular videos recorded with dynamic cameras. Our approach is robust to severe and long-term occlusions and tracks human bodies even when they go outside the camera's field of view. To achieve this, we first propose a deep generative motion infiller, which autoregressively infills the body motions of occluded humans based on visible motions. Additionally, in contrast to prior work, our approach reconstructs human meshes in consistent global coordinates even with dynamic cameras. Since the joint reconstruction of human motions and camera poses is underconstrained, we propose a global trajectory predictor that generates global human trajectories based on local body movements. Using the predicted trajectories as anchors, we present a global optimization framework that refines the predicted trajectories and optimizes the camera poses to match the video evidence such as 2D keypoints. Experiments on challenging indoor and in-the-wild datasets with dynamic cameras demonstrate that the proposed approach outperforms prior methods significantly in terms of motion infilling and global mesh recovery." name="description" property="og:description" /> -->
@@ -25,22 +25,23 @@
</div>
<div class="n-title">
<h1>
MHEntropy: Multiple Hypotheses Meet Entropy for Pose and Shape Recovery
MHEntropy: <u>Entropy</u> Meets <u>M</u>ultiple <u>H</u>ypotheses for Pose and Shape Recovery
<!-- MHEntropy: Entropy Meets Multiple Hypotheses for Pose and Shape Recovery -->
</h1>
</div>
<div class="n-byline">
<div class="byline">
<ul class="authors">
<li>
<a href="" target="_blank">Rongyu Chen*</a>
<a href="https://gloryyrolg.github.io/" target="_blank">Rongyu CHEN*</a>
<!-- <sup>1, 2</sup> -->
</li>
<li>
<a href="" target="_blank">Linlin Yang*</a>
<a href="https://www.mu4yang.com/" target="_blank">Linlin YANG*</a>
<!-- <sup>1</sup> -->
</li>
<li>
<a href="" target="_blank">Angela Yao†</a>
<a href="https://www.comp.nus.edu.sg/~ayao/" target="_blank">Angela YAO</a>
<!-- <sup>1</sup> -->
</li>
</ul>
Expand All @@ -49,30 +50,37 @@ <h1>
<!-- <sup>
1
</sup> -->
National University of Singapore
<a href="https://cvml.comp.nus.edu.sg/" target="_blank">
Computer Vision & Machine Learning Group, School of Computing, National University of Singapore
</a>
</li>
</ul>
<ul class="authors venue">
<li>
ICCV 2023
IEEE/CVF International Conference on Computer Vision (ICCV) 2023
</li>
</ul>
<ul class="authors links">
<li>
<a href="" target="_blank">
<a href="assets/media/mhentropy_pdf.pdf" target="_blank">
<button class="btn"><i class="fa fa-file-pdf"></i> Paper</button>
</a>
</li>
<li>
<a href="" target="_blank">
<!-- <li>
<a href="https://www.youtube.com/watch?v=0riX3iJeVyM" target="_blank">
<button class="btn"><i class="fab fa-youtube fa-w-18"></i> Video</button>
</a>
</li>
</li> -->
<li>
<a href="https://github.com/gloryyrolg/MHEntropy" target="_blank">
<button class="btn"><i class="fab fa-github"></i> Code</button>
</a>
</li>
<li>
<a href="assets/media/mhentropy_slides.pdf" target="_blank">
<button class="btn"><i class="fa fa-file-pdf"></i> Slides</button>
</a>
</li>
</ul>
</div>
</div>
@@ -91,13 +99,32 @@ <h1>
</div>
</div>

<h2 id="abstract">
Abstract
</h2>
<p>
For monocular RGB-based 3D pose and shape estimation, multiple solutions are often feasible due to factors like occlusions and truncations. This work presents a multi-hypothesis probabilistic framework that optimizes the Kullback–Leibler divergence (KLD) between the data and model distributions. Our formulation reveals a connection between the pose entropy and the diversity of the multiple hypotheses that has been neglected by previous works. For a comprehensive evaluation, besides the best hypothesis (BH) metric, we factor in visibility when evaluating diversity. Additionally, our framework is label-friendly: it can be learned from only partial 2D keypoints, such as visible keypoints. Experiments on both ambiguous and real-world benchmarks demonstrate that our method outperforms other state-of-the-art multi-hypothesis methods.
</p>
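The entropy-diversity connection mentioned in the abstract follows from a standard identity. Writing the model distribution as \(q_\theta\) and a target density as \(p\), the reverse KLD decomposes as (an illustrative identity, not necessarily the paper's exact objective):

```latex
\mathrm{KL}(q_\theta \,\|\, p)
  = \mathbb{E}_{q_\theta}\!\left[\log q_\theta\right] - \mathbb{E}_{q_\theta}\!\left[\log p\right]
  = -H(q_\theta) - \mathbb{E}_{q_\theta}\!\left[\log p\right]
```

Minimizing the KLD therefore maximizes the model entropy \(H(q_\theta)\) alongside the data-fit term, and higher entropy of the predicted pose distribution corresponds to more diverse hypotheses.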

<h2>
Poster
</h2>
<embed src="assets/media/mhentropy_poster.pdf" width="100%" height="550" />

<h2>
Presentation
</h2>
<div class="videoWrapper shadow">
<iframe width="705" height="397" src="https://www.youtube.com/embed/0riX3iJeVyM" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>

<h2 id="citation">
Citation
</h2>
<pre class="bibtex">
<code>
@inproceedings{chenyang2023MHEntropy,
title={MHEntropy: Multiple Hypotheses Meet Entropy for Pose and Shape Recovery},
title={ {MHEntropy}: Entropy Meets Multiple Hypotheses for Pose and Shape Recovery},
author={Chen, Rongyu and Yang, Linlin and Yao, Angela},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
year={2023}
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
Binary file removed assets/media/WalkingAround_Joe_glamr_vs_hybrik.mp4
Binary file not shown.
Binary file removed assets/media/basketball_glamr_vs_hybrik.mp4
Binary file not shown.
Binary file removed assets/media/favicon.ico
Binary file not shown.
Binary file removed assets/media/glamr_overview.png
Binary file not shown.
Binary file removed assets/media/glamr_res1.mp4
Binary file not shown.
Binary file removed assets/media/glamr_res2.mp4
Binary file not shown.
Binary file removed assets/media/glamr_res3.mp4
Binary file not shown.
Binary file removed assets/media/glamr_sample.mp4
Binary file not shown.
Binary file removed assets/media/glamr_teaser.mp4
Binary file not shown.
Binary file added assets/media/mhentropy_pdf.pdf
Binary file not shown.
Binary file added assets/media/mhentropy_poster.pdf
Binary file not shown.
Binary file added assets/media/mhentropy_slides.pdf
Binary file not shown.
Binary file removed assets/media/running_glamr_vs_hybrik.mp4
Binary file not shown.
51 changes: 39 additions & 12 deletions index.html
@@ -14,7 +14,7 @@
<meta content="width=device-width, initial-scale=1" name="viewport" />
<!-- <link href="assets/media/favicon.ico" rel="shortcut icon" /> -->
<title>
MHEntropy: Multiple Hypotheses Meet Entropy for Pose and Shape Recovery
MHEntropy: Entropy Meets Multiple Hypotheses for Pose and Shape Recovery
</title>
<meta content="MHEntropy" property="og:title" />
<!-- <meta content="We present an approach for 3D global human mesh recovery from monocular videos recorded with dynamic cameras. Our approach is robust to severe and long-term occlusions and tracks human bodies even when they go outside the camera's field of view. To achieve this, we first propose a deep generative motion infiller, which autoregressively infills the body motions of occluded humans based on visible motions. Additionally, in contrast to prior work, our approach reconstructs human meshes in consistent global coordinates even with dynamic cameras. Since the joint reconstruction of human motions and camera poses is underconstrained, we propose a global trajectory predictor that generates global human trajectories based on local body movements. Using the predicted trajectories as anchors, we present a global optimization framework that refines the predicted trajectories and optimizes the camera poses to match the video evidence such as 2D keypoints. Experiments on challenging indoor and in-the-wild datasets with dynamic cameras demonstrate that the proposed approach outperforms prior methods significantly in terms of motion infilling and global mesh recovery." name="description" property="og:description" /> -->
@@ -33,22 +33,23 @@
</div>
<div class="n-title">
<h1>
MHEntropy: Multiple Hypotheses Meet Entropy for Pose and Shape Recovery
MHEntropy: <u>Entropy</u> Meets <u>M</u>ultiple <u>H</u>ypotheses for Pose and Shape Recovery
<!-- MHEntropy: Entropy Meets Multiple Hypotheses for Pose and Shape Recovery -->
</h1>
</div>
<div class="n-byline">
<div class="byline">
<ul class="authors">
<li>
<a href="" target="_blank">Rongyu Chen*</a>
<a href="https://gloryyrolg.github.io/" target="_blank">Rongyu CHEN*</a>
<!-- <sup>1, 2</sup> -->
</li>
<li>
<a href="" target="_blank">Linlin Yang*</a>
<a href="https://www.mu4yang.com/" target="_blank">Linlin YANG*</a>
<!-- <sup>1</sup> -->
</li>
<li>
<a href="" target="_blank">Angela Yao†</a>
<a href="https://www.comp.nus.edu.sg/~ayao/" target="_blank">Angela YAO</a>
<!-- <sup>1</sup> -->
</li>
</ul>
@@ -57,30 +58,37 @@ <h1>
<!-- <sup>
1
</sup> -->
National University of Singapore
<a href="https://cvml.comp.nus.edu.sg/" target="_blank">
Computer Vision & Machine Learning Group, School of Computing, National University of Singapore
</a>
</li>
</ul>
<ul class="authors venue">
<li>
ICCV 2023
IEEE/CVF International Conference on Computer Vision (ICCV) 2023
</li>
</ul>
<ul class="authors links">
<li>
<a href="" target="_blank">
<a href="assets/media/mhentropy_pdf.pdf" target="_blank">
<button class="btn"><i class="fa fa-file-pdf"></i> Paper</button>
</a>
</li>
<li>
<a href="" target="_blank">
<!-- <li>
<a href="https://www.youtube.com/watch?v=0riX3iJeVyM" target="_blank">
<button class="btn"><i class="fab fa-youtube fa-w-18"></i> Video</button>
</a>
</li>
</li> -->
<li>
<a href="https://github.com/gloryyrolg/MHEntropy" target="_blank">
<button class="btn"><i class="fab fa-github"></i> Code</button>
</a>
</li>
<li>
<a href="assets/media/mhentropy_slides.pdf" target="_blank">
<button class="btn"><i class="fa fa-file-pdf"></i> Slides</button>
</a>
</li>
</ul>
</div>
</div>
@@ -99,13 +107,32 @@ <h1>
</div>
</div>

<h2 id="abstract">
Abstract
</h2>
<p>
For monocular RGB-based 3D pose and shape estimation, multiple solutions are often feasible due to factors like occlusions and truncations. This work presents a multi-hypothesis probabilistic framework that optimizes the Kullback–Leibler divergence (KLD) between the data and model distributions. Our formulation reveals a connection between the pose entropy and the diversity of the multiple hypotheses that has been neglected by previous works. For a comprehensive evaluation, besides the best hypothesis (BH) metric, we factor in visibility when evaluating diversity. Additionally, our framework is label-friendly: it can be learned from only partial 2D keypoints, such as visible keypoints. Experiments on both ambiguous and real-world benchmarks demonstrate that our method outperforms other state-of-the-art multi-hypothesis methods.
</p>

<h2>
Poster
</h2>
<embed src="assets/media/mhentropy_poster.pdf" width="100%" height="550" />

<h2>
Presentation
</h2>
<div class="videoWrapper shadow">
<iframe width="705" height="397" src="https://www.youtube.com/embed/0riX3iJeVyM" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>

<h2 id="citation">
Citation
</h2>
<pre class="bibtex">
<code>
@inproceedings{chenyang2023MHEntropy,
title={{MHEntropy}: Multiple Hypotheses Meet Entropy for Pose and Shape Recovery},
title={ {MHEntropy}: Entropy Meets Multiple Hypotheses for Pose and Shape Recovery},
author={Chen, Rongyu and Yang, Linlin and Yao, Angela},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
year={2023}
