Commit 4e776e5
add: 1 taslp paper
jindongwang committed Jul 27, 2023
1 parent 23ffd13 commit 4e776e5
Showing 3 changed files with 16 additions and 2 deletions.
9 changes: 9 additions & 0 deletions _bibliography/pubs.bib
@@ -19,6 +19,15 @@ @inproceedings{zhu2023improving
year={2023},
}

@article{zhu2023boosting,
title={Boosting Cross-Domain Speech Recognition with Self-Supervision},
author={Zhu, Han and Cheng, Gaofeng and Wang, Jindong and Hou, Wenxin and Zhang, Pengyuan and Yan, Yonghong},
journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP)},
year={2023},
arxiv={https://arxiv.org/abs/2206.09783}
}

@inproceedings{qin2023generalizable,
title={Generalizable Low-Resource Activity Recognition with Diverse and Discriminative Representation Learning},
author={Qin, Xin and Wang, Jindong and Ma, Shuo and Lu, Wang and Zhu, Yongchun and Xie, Xing and Chen, Yiqiang},
7 changes: 7 additions & 0 deletions _news/taslp23.md
@@ -0,0 +1,7 @@
---
layout: post
date: 2023-07-27
inline: true
---

Paper *Boosting Cross-Domain Speech Recognition with Self-Supervision* has been accepted by TASLP! [[paper](https://arxiv.org/abs/2206.09783)]
2 changes: 0 additions & 2 deletions _pages/publications.md
@@ -17,12 +17,10 @@ nav: true
- PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization. Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, Yue Zhang. [[arxiv](https://arxiv.org/abs/2306.05087)] [[code](https://github.com/WeOpenML/PandaLM)]
- Selective Mixup Helps with Distribution Shifts, But Not (Only) because of Mixup. Damien Teney, Jindong Wang, Ehsan Abbasnejad. [[arxiv](https://arxiv.org/abs/2305.16817)]
- Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations. Hao Chen, Ankit Shah, Jindong Wang, Ran Tao, Yidong Wang, Xing Xie, Masashi Sugiyama, Rita Singh, Bhiksha Raj. [[arxiv](https://arxiv.org/abs/2305.12715)]
- Exploring Vision-Language Models for Imbalanced Learning. Yidong Wang, Zhuohao Yu, Jindong Wang, Qiang Heng, Hao Chen, Wei Ye, Rui Xie, Xing Xie, Shikun Zhang. [[arxiv](https://arxiv.org/abs/2304.01457)] [[code](https://github.com/Imbalance-VLM/Imbalance-VLM)]
- An Embarrassingly Simple Baseline for Imbalanced Semi-Supervised Learning. Hao Chen, Yue Fan, Yidong Wang, Jindong Wang, Bernt Schiele, Xing Xie, Marios Savvides, Bhiksha Raj. [[arxiv](https://arxiv.org/abs/2211.11086)]
- FIXED: Frustratingly Easy Domain Generalization with Mixup. Wang Lu, Jindong Wang, Han Yu, Lei Huang, Xiang Zhang, Yiqiang Chen, Xing Xie. [[arxiv](https://arxiv.org/abs/2211.05228)]
- Conv-Adapter: Exploring Parameter Efficient Transfer Learning for ConvNets. Hao Chen, Ran Tao, Han Zhang, Yidong Wang, Wei Ye, Jindong Wang, Guosheng Hu, and Marios Savvides. [[arxiv](https://arxiv.org/abs/2208.07463)]
- Equivariant Disentangled Transformation for Domain Generalization under Combination Shift. Yivan Zhang, Jindong Wang, Xing Xie, and Masashi Sugiyama. [[arxiv](https://arxiv.org/abs/2208.02011)]
- Boosting Cross-Domain Speech Recognition with Self-Supervision. Han Zhu, Gaofeng Cheng, Jindong Wang, Wenxin Hou, Pengyuan Zhang, and Yonghong Yan. [[arxiv](https://arxiv.org/abs/2206.09783)]
- Learning Invariant Representations across Domains and Tasks. Jindong Wang, Wenjie Feng, Chang Liu, Chaohui Yu, Mingxuan Du, Renjun Xu, Tao Qin, and Tie-Yan Liu. [[arxiv](https://arxiv.org/abs/2103.05114)]
- Learning to match distributions for domain adaptation. Chaohui Yu, Jindong Wang, Chang Liu, Tao Qin, Renjun Xu, Wenjie Feng, Yiqiang Chen, and Tie-Yan Liu. [[arxiv](https://arxiv.org/abs/2007.10791)]

