Add news and course information
jindongwang committed Nov 27, 2023
1 parent ce206e0 commit d4cd968
Showing 7 changed files with 41 additions and 5 deletions.
20 changes: 20 additions & 0 deletions _bibliography/pubs.bib
@@ -1,6 +1,26 @@
---
---
@inproceedings{wang2024fixed,
title={FIXED: Frustratingly Easy Domain Generalization with Mixup},
author={Lu, Wang and Wang, Jindong and Yu, Han and Huang, Lei and Zhang, Xiang and Chen, Yiqiang and Xie, Xing},
booktitle={Conference on Parsimony and Learning (CPAL)},
year={2024},
corr={true},
abbr={CPAL},
arxiv={https://arxiv.org/abs/2211.05228},
code={https://github.com/jindongwang/transferlearning/tree/master/code/DeepDG}
}

@article{yu2024upnet,
title={UP-Net: An Uncertainty-Driven Prototypical Network for Few-Shot Fault Diagnosis},
author={Yu, Ge and Wang, Jindong and Liu, Jinhai and Zhang, Xi and Chen, Yiqiang and Xie, Xing},
journal={IEEE Transactions on Neural Networks and Learning Systems (TNNLS)},
year={2024},
abbr={TNNLS}
}


@inproceedings{wang2024optimization,
title={Optimization-Free Test-Time Adaptation for Cross-Person Activity Recognition},
author={Wang, Shuoyuan and Wang, Jindong and Xi, Huajun and Zhang, Bob and Zhang, Lei and Wei, Hongxin},
7 changes: 7 additions & 0 deletions _news/cpal23.md
@@ -0,0 +1,7 @@
---
layout: post
date: 2023-11-21
inline: true
---

Our paper "FIXED: Frustratingly Easy Domain Generalization with Mixup" has been accepted at the Conference on Parsimony and Learning (CPAL) 2023! [[arxiv](https://arxiv.org/abs/2211.05228)]
7 changes: 7 additions & 0 deletions _news/tnnls23.md
@@ -0,0 +1,7 @@
---
layout: post
date: 2023-11-26
inline: true
---

Our paper "UP-Net: An Uncertainty-Driven Prototypical Network for Few-Shot Fault Diagnosis" has been accepted at IEEE TNNLS!
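These `_news` files follow the usual Jekyll convention: a YAML front-matter block between `---` fences, then the post body. As a minimal sketch of how that structure can be split apart, here is a hypothetical stdlib-only Python helper (illustration only; the site relies on Jekyll itself to parse these files):

```python
# Minimal front-matter splitter for Jekyll-style files such as those in _news/.
# Hypothetical helper for illustration; Jekyll performs this parsing on the real site.

def split_front_matter(text):
    """Return (front_matter_dict, body) for a file with a leading '---' block."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}, text  # no front matter: whole text is the body
    meta = {}
    i = 1
    while i < len(lines) and lines[i].strip() != "---":
        key, _, value = lines[i].partition(":")  # split "key: value" pairs
        meta[key.strip()] = value.strip()
        i += 1
    body = "\n".join(lines[i + 1:]).strip()  # everything after the closing '---'
    return meta, body

sample = """---
layout: post
date: 2023-11-26
inline: true
---

Our paper "UP-Net" has been accepted at IEEE TNNLS!"""

meta, body = split_front_matter(sample)
print(meta["inline"])  # -> true
print(body)
```

Entries with `inline: true` are typically rendered by the theme as one-line items in the homepage news list rather than as standalone posts.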
6 changes: 3 additions & 3 deletions _pages/about.md
@@ -17,11 +17,11 @@ social: true # includes social icons at the bottom of the page
Senior Researcher, Microsoft Research Asia<br>
Building 2, No. 5 Danling Street, Haidian District, Beijing, China<br>
jindongwang [at] outlook.com, jindong.wang [at] microsoft.com<br>
[Google scholar](https://scholar.google.com/citations?user=hBZ_tKsAAAAJ) | [DBLP](https://dblp.org/pid/19/2969-1.html) | [Github](https://github.com/jindongwang) || [Twitter](https://twitter.com/jd92wang) | [Zhihu](https://www.zhihu.com/people/jindongwang) | [Wechat](http://jd92.wang/assets/img/wechat_public_account.jpg) | [Bilibili](https://space.bilibili.com/477087194) || [CV](https://go.jd92.wang/cv) [CV (Chinese)](https://go.jd92.wang/cvchinese)
[Google scholar](https://scholar.google.com/citations?user=hBZ_tKsAAAAJ) | [DBLP](https://dblp.org/pid/19/2969-1.html) | [Github](https://github.com/jindongwang) || [Twitter/X](https://twitter.com/jd92wang) | [Zhihu](https://www.zhihu.com/people/jindongwang) | [Wechat](http://jd92.wang/assets/img/wechat_public_account.jpg) | [Bilibili](https://space.bilibili.com/477087194) || [CV](https://go.jd92.wang/cv) [CV (Chinese)](https://go.jd92.wang/cvchinese)

Dr. Jindong Wang is currently a Senior Researcher at Microsoft Research Asia. He obtained his Ph.D from Institute of Computing Technology, Chinese Academy of Sciences in 2019. He visited Qiang Yang’s group at Hong Kong University of Science and Technology in 2018. His research interest includes robust machine learning, transfer learning, semi-supervised learning, and federated learning. He has published over 50 papers with 6900 citations at leading conferences and journals such as ICLR, NeurIPS, TKDE, TASLP etc. He has 6 highly cited papers in [Google Scholar metrics](https://www.aminer.cn/ai2000?domain_ids=5dc122672ebaa6faa962c2a4). His paper "FedHealth" received the best application paper award at IJCAI FL workshop and it is the most cited paper among all federated learning for healthcare papers. He also received other awards including best paper award at ICCSE'18 and the prestigous excellent Ph.D thesis award (only 1 at ICT each year). In 2022 and 2023, he was selected as one of the [AI 2000 Most Influential Scholars](https://www.aminer.cn/ai2000?domain_ids=5dc122672ebaa6faa962c2a4) by AMiner between 2012-2022. He serves as the senior program committee member of IJCAI and AAAI, and PC members for top conferences like ICML, NeurIPS, ICLR, CVPR etc. He opensourced several projects to help build a better community, such as transferlearning, torchSSL, USB, personalizedFL, and robustlearn, which received over 12K stars on Github. He published a textbook [Introduction to Transfer Learning](http://jd92.wang/tlbook) to help starters quickly learn transfer learning. He gave tutorials at [IJCAI'22](https://dgresearch.github.io/), [WSDM'23](https://dgresearch.github.io/), and [KDD'23](https://mltrust.github.io/).
Dr. Jindong Wang is currently a Senior Researcher at Microsoft Research Asia. He obtained his Ph.D. from the Institute of Computing Technology, Chinese Academy of Sciences in 2019. He visited Qiang Yang's group at Hong Kong University of Science and Technology in 2018. His research interests include robust machine learning, transfer learning, semi-supervised learning, and federated learning. His recent interest is large language models. He has published over 50 papers with 7000 citations at leading conferences and journals such as ICLR, NeurIPS, TKDE, and TASLP. He has 6 highly cited papers according to [Google Scholar metrics](https://scholar.google.com/citations?view_op=top_venues). He received the best paper award at ICCSE'18 and at the IJCAI'19 federated learning workshop, as well as the prestigious excellent Ph.D. thesis award (only 1 awarded at ICT each year). In 2022 and 2023, he was selected as one of the [AI 2000 Most Influential Scholars](https://www.aminer.cn/ai2000?domain_ids=5dc122672ebaa6faa962c2a4) by AMiner (2013-2023). He serves as a senior program committee member of IJCAI and AAAI, and as a reviewer for top conferences and journals such as ICML, NeurIPS, ICLR, CVPR, TPAMI, and AIJ. He open-sourced several projects to help build a better community, such as transferlearning, torchSSL, USB, personalizedFL, and robustlearn, which have received over 12K stars on GitHub. He published a textbook, [Introduction to Transfer Learning](http://jd92.wang/tlbook), to help newcomers quickly learn transfer learning. He gave tutorials at [IJCAI'22](https://dgresearch.github.io/), [WSDM'23](https://dgresearch.github.io/), and [KDD'23](https://mltrust.github.io/).

**Research interest:** robust machine learning, out-of-distribution / domain generalization, transfer learning, semi-supervised learning, federated learning, and related applications such as activity recognition and computer vision. These days, I'm particularly interested in Large Language Models (LLMs) [evaluation](https://llm-eval.github.io/) and [robustness enhancement](https://llm-enhance.github.io/). See this [page](https://jd92.wang/research/) for more details. *Interested in [internship](https://zhuanlan.zhihu.com/p/102558267) or collaboration? Contact me.*
**Research interest:** robust machine learning, out-of-distribution / domain generalization, transfer learning, semi-supervised learning, federated learning, and related applications such as activity recognition and computer vision. These days, I'm particularly interested in Large Language Models (LLMs) [evaluation](https://llm-eval.github.io/) and [enhancement](https://llm-enhance.github.io/). See this [page](https://jd92.wang/research/) for more details. *Interested in [internship](https://zhuanlan.zhihu.com/p/102558267) or collaboration? Contact me.*

**Announcement:** I'm experimenting with a new form of research collaboration. You can click [here](https://forms.office.com/r/32Fs6uAjT6) if you are interested!

1 change: 0 additions & 1 deletion _pages/publications.md
@@ -25,7 +25,6 @@ nav: true
- Selective Mixup Helps with Distribution Shifts, But Not (Only) because of Mixup. Damien Teney, Jindong Wang, Ehsan Abbasnejad. [[arxiv](https://arxiv.org/abs/2305.16817)]
- Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations. Hao Chen, Ankit Shah, Jindong Wang, Ran Tao, Yidong Wang, Xing Xie, Masashi Sugiyama, Rita Singh, Bhiksha Raj. [[arxiv](https://arxiv.org/abs/2305.12715)]
- An Embarrassingly Simple Baseline for Imbalanced Semi-Supervised Learning. Hao Chen, Yue Fan, Yidong Wang, Jindong Wang, Bernt Schiele, Xing Xie, Marios Savvides, Bhiksha Raj. [[arxiv](https://arxiv.org/abs/2211.11086)]
- FIXED: Frustratingly Easy Domain Generalization with Mixup. Wang Lu, Jindong Wang, Han Yu, Lei Huang, Xiang Zhang, Yiqiang Chen, Xing Xie. [[arxiv](https://arxiv.org/abs/2211.05228)]
- Conv-Adapter: Exploring Parameter Efficient Transfer Learning for ConvNets. Hao Chen, Ran Tao, Han Zhang, Yidong Wang, Wei Ye, Jindong Wang, Guosheng Hu, and Marios Savvides. [[arxiv](https://arxiv.org/abs/2208.07463)]
- Equivariant Disentangled Transformation for Domain Generalization under Combination Shift. Yivan Zhang, Jindong Wang, Xing Xie, and Masashi Sugiyama. [[arxiv](https://arxiv.org/abs/2208.02011)]
- Learning Invariant Representations across Domains and Tasks. Jindong Wang, Wenjie Feng, Chang Liu, Chaohui Yu, Mingxuan Du, Renjun Xu, Tao Qin, and Tie-Yan Liu. [[arxiv](https://arxiv.org/abs/2103.05114)]
4 changes: 3 additions & 1 deletion _pages/research.md
@@ -41,7 +41,9 @@ Open source:
##### Out-of-distribution (Domain) generalization and adaptation for distribution shift

- **[UbiComp'24]** [Optimization-Free Test-Time Adaptation for Cross-Person Activity Recognition](https://arxiv.org/abs/2310.18562). Shuoyuan Wang, Jindong Wang, HuaJun Xi, Bob Zhang, Lei Zhang, Hongxin Wei.
- **[NeurIPS'23]** Generating and Distilling Discrete Adversarial Examples from Large-Scale Models. Andy Zhou, Jindong Wang, Yu-Xiong Wang, Haohan Wang.
- **[CPAL'24]** [Optimization-Free Test-Time Adaptation for Cross-Person Activity Recognition](https://arxiv.org/abs/2310.18562). Shuoyuan Wang, Jindong Wang, Huajun Xi, Bob Zhang, Lei Zhang, and Hongxin Wei.
- **[TNNLS'24]** UP-Net: An Uncertainty-Driven Prototypical Network for Few-Shot Fault Diagnosis. Ge Yu, Jindong Wang, Jinhai Liu, Xi Zhang, Yiqiang Chen, and Xing Xie.
- **[NeurIPS'23]** [Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models](https://arxiv.org/abs/2311.01441). Andy Zhou, Jindong Wang, Yu-Xiong Wang, Haohan Wang.
- **[ICCV'23]** Improving Generalization of Adversarial Training via Robust Critical Fine-Tuning. Kaijie Zhu, Xixu Hu, Jindong Wang, Xing Xie, Ge Yang.
- **[ICLR'23]** [Out-of-distribution Representation Learning for Time Series Classification](https://arxiv.org/abs/2209.07027). Wang Lu, Jindong Wang, Xinwei Sun, Yiqiang Chen, and Xing Xie.
- **[KDD'23]** [Domain-Specific Risk Minimization for Out-of-Distribution Generalization](https://arxiv.org/pdf/2208.08661.pdf). YiFan Zhang, Jindong Wang, Jian Liang, Zhang Zhang, Baosheng Yu, Liang Wang, Xing Xie, and Dacheng Tao.
1 change: 1 addition & 0 deletions _pages/talks.md
@@ -29,6 +29,7 @@ nav: true

#### Invited Course

- Invited course: **Transfer learning and large language models**, at City University of Hong Kong. 2023.
- Invited course: **Transfer learning**, at Institute of Computing Technology, CAS. 2023.
- Invited course: **Transfer learning**, at Tsinghua University. Dec. 2019. (THU's advanced machine learning course for EE graduates) [[Class photo](http://jd92.wang/image/img_thu.png)]