
Commit

update demo videos
aiden-ygu committed May 21, 2024
1 parent f7322a8 commit eead340
Showing 10 changed files with 35 additions and 17 deletions.
Binary file added image_assets/video_demo/CT-Abdomen_short.mp4
Binary file added image_assets/video_demo/CT-COVID.mp4
Binary file added image_assets/video_demo/CT-nodule.mp4
Binary file added image_assets/video_demo/Cat.mp4
Binary file removed image_assets/video_demo/Demo-example3.mp4
Binary file removed image_assets/video_demo/Demo-upload3.mp4
Binary file added image_assets/video_demo/MRI-Brain-T1Gd.mp4
Binary file added image_assets/video_demo/Pathology_all_cells.mp4
Binary file added image_assets/video_demo/Pathology_prompts.mp4
index.html: 52 changes (35 additions, 17 deletions)
@@ -233,43 +233,61 @@ <h2 class="title is-2 publication-title" style="font-family: 'Google Lexend', sa
 <!-- sections for segmentation, detection, recognition -->
 <section class="section" id="segmentation" style="display: flex; justify-content: space-between; align-items: center; margin: 5%;">
 <!-- caption on the left, videos on the right -->
-<div class="text-section" style="width: 40%;">
+<div class="text-section" style="width: 30%;">
 <h1 style="font-size: 2em;">Everything</h1>
 <p>On image <a href="#" style="color: #6366f1;"><strong>segmentation</strong></a>, we showed that BiomedParse is broadly applicable, outperforming state-of-the-art methods on 102,855 test image-mask-label triples across 9 imaging modalities.</p>
 </div>
-<div class="video-section" style="width: 60%;">
+<div class="video-section" style="width: 35%;">
 <video autoplay loop muted>
-<source src="./image_assets/video_demo/Demo-example3.mp4" type="video/mp4">
+<source src="./image_assets/video_demo/MRI-Brain-T1Gd.mp4" type="video/mp4">
 Your browser does not support the video tag.
 </video>
 </div>
+<div class="video-section" style="width: 35%;">
+<video autoplay loop muted>
+<source src="./image_assets/video_demo/Pathology_prompts.mp4" type="video/mp4">
+Your browser does not support the video tag.
+</video>
+</div>
 </section>

-<section class="section" id="detection" style="display: flex; justify-content: space-between; align-items: center; margin: 5%;">
+<section class="section" id="detection" style="display: flex; flex-wrap: wrap; justify-content: space-between; align-items: center; margin: 5%;">
 <!-- caption on the left, videos on the right -->
-<div class="text-section" style="width: 40%;">
+<div class="text-section" style="width: 30%;">
 <h1 style="font-size: 2em;">Everywhere</h1>
 <p>BiomedParse can also identify invalid user inputs describing objects that do not exist in the image. On object <a href="#" style="color: #6366f1;"><strong>detection</strong></a>, which aims to locate a specific object of interest, BiomedParse again attained state-of-the-art performance, especially on objects with irregular shapes.</p>
 </div>
-<div class="video-section" style="width: 60%;">
+<div class="video-section" style="width: 35%;">
 <video autoplay loop muted>
-<source src="./image_assets/video_demo/Demo-example3.mp4" type="video/mp4">
+<source src="./image_assets/video_demo/CT-Abdomen_short.mp4" type="video/mp4">
 Your browser does not support the video tag.
 </video>
 </div>
+<div class="video-section" style="width: 35%;">
+<video autoplay loop muted>
+<source src="./image_assets/video_demo/Pathology_all_cells.mp4" type="video/mp4">
+Your browser does not support the video tag.
+</video>
+</div>
 </section>

 <section class="section" id="recognition" style="display: flex; justify-content: space-between; align-items: center; margin: 5%;">
 <!-- caption on the left, videos on the right -->
-<div class="text-section" style="width: 40%;">
+<div class="text-section" style="width: 30%;">
 <h1 style="font-size: 2em;">All at Once</h1>
-<p>On object <a href="#" style="color: #6366f1;""><strong>recognition</strong></a>, which aims to identify all objects in a given image along with their semantic types, we showed that \ourmethod can simultaneously segment and label all biomedical objects in an image without any user-provided input</p>
+<p>On object <a href="#" style="color: #6366f1;"><strong>recognition</strong></a>, which aims to identify all objects in a given image along with their semantic types, we showed that BiomedParse can simultaneously segment and label all biomedical objects in an image without any user-provided input.</p>
 </div>
-<div class="video-section" style="width: 60%;">
-<video autoplay loop muted>
-<source src="./image_assets/video_demo/Demo-example3.mp4" type="video/mp4">
-Your browser does not support the video tag.
-</video>
+<div class="video-section" style="width: 35%;">
+<video autoplay loop muted>
+<source src="./image_assets/video_demo/Cat.mp4" type="video/mp4">
+Your browser does not support the video tag.
+</video>
+</div>
+<div class="video-section" style="width: 35%;">
+<video autoplay loop muted>
+<source src="./image_assets/video_demo/CT-nodule.mp4" type="video/mp4">
+Your browser does not support the video tag.
+</video>
 </div>
 </section>

@@ -605,11 +623,11 @@ <h2 class="title is-3">Related Work</h2>
 </ol>
 <li><a href="https://arxiv.org/abs/2203.11926" style="color: #000000; text-decoration: underline;">Focal Modulation Networks</a> by Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan and Jianfeng Gao.</li>
 </ol>
-<li><a href="https://arxiv.org/pdf/2007.15779" style="color: #000000; text-decoration: underline;">Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing</a> by Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, Hoifung Poon.</li>
+<li><a href="https://arxiv.org/pdf/2007.15779" style="color: #6366f1; text-decoration: underline;">Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing</a> by Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, Hoifung Poon.</li>
 </ol>
-<li><a href="https://arxiv.org/pdf/2007.15779" style="color: #000000; text-decoration: underline;">BiomedCLIP: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs</a> by Sheng Zhang, Yanbo Xu, Naoto Usuyama, Hanwen Xu, Jaspreet Bagga, Robert Tinn, Sam Preston, Rajesh Rao, Mu Wei, Naveen Valluri, Cliff Wong, Andrea Tupini, Yu Wang, Matt Mazzola, Swadheen Shukla, Lars Liden, Jianfeng Gao, Matthew P. Lungren, Tristan Naumann, Sheng Wang, Hoifung Poon.</li>
+<li><a href="https://arxiv.org/abs/2303.00915" style="color: #6366f1; text-decoration: underline;">BiomedCLIP: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs</a> by Sheng Zhang, Yanbo Xu, Naoto Usuyama, Hanwen Xu, Jaspreet Bagga, Robert Tinn, Sam Preston, Rajesh Rao, Mu Wei, Naveen Valluri, Cliff Wong, Andrea Tupini, Yu Wang, Matt Mazzola, Swadheen Shukla, Lars Liden, Jianfeng Gao, Matthew P. Lungren, Tristan Naumann, Sheng Wang, Hoifung Poon.</li>
 </ol>
-<li><a href="https://arxiv.org/abs/2310.10765" style="color: #000000; text-decoration: underline;">BiomedJourney: Counterfactual Biomedical Image Generation by Instruction-Learning from Multimodal Patient Journeys</a> by Yu Gu, Jianwei Yang, Naoto Usuyama, Chunyuan Li, Sheng Zhang, Matthew P. Lungren, Jianfeng Gao, Hoifung Poon.</li>
+<li><a href="https://arxiv.org/abs/2310.10765" style="color: #6366f1; text-decoration: underline;">BiomedJourney: Counterfactual Biomedical Image Generation by Instruction-Learning from Multimodal Patient Journeys</a> by Yu Gu, Jianwei Yang, Naoto Usuyama, Chunyuan Li, Sheng Zhang, Matthew P. Lungren, Jianfeng Gao, Hoifung Poon.</li>
 </div>
 </div>
 </div>
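Every demo section in this commit converges on the same layout: a 30%-wide caption column followed by two 35%-wide autoplaying clips in one flex row. A minimal standalone sketch of that pattern follows; the `id`, file paths, and the `playsinline` attribute are illustrative additions, not taken from the site's actual markup:

```html
<!-- Sketch of the one-caption, two-video flex row used across the demo sections.
     Widths sum to 100% with the gaps absorbed by justify-content: space-between. -->
<section class="section" id="demo-row"
         style="display: flex; flex-wrap: wrap; justify-content: space-between; align-items: center; margin: 5%;">
  <div class="text-section" style="width: 30%;">
    <h1 style="font-size: 2em;">Caption</h1>
    <p>Short description of the capability being demonstrated.</p>
  </div>
  <!-- `muted` is required for autoplay in most browsers; `playsinline`
       (an assumption, not in the original markup) keeps iOS from going fullscreen. -->
  <div class="video-section" style="width: 35%;">
    <video autoplay loop muted playsinline>
      <source src="./image_assets/video_demo/clip-a.mp4" type="video/mp4">
      Your browser does not support the video tag.
    </video>
  </div>
  <div class="video-section" style="width: 35%;">
    <video autoplay loop muted playsinline>
      <source src="./image_assets/video_demo/clip-b.mp4" type="video/mp4">
      Your browser does not support the video tag.
    </video>
  </div>
</section>
```

The `flex-wrap: wrap` lets the two clips drop below the caption on narrow viewports instead of overflowing the row.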
