Commit

Update about.md
jindongwang committed May 14, 2024
1 parent df3ddd6 commit cedbf4e
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions _pages/about.md
@@ -19,9 +19,9 @@ Building 2, No. 5 Danling Street, Haidian District, Beijing, China<br>
jindongwang [at] outlook.com, jindong.wang [at] microsoft.com<br>
[Google scholar](https://scholar.google.com/citations?&user=hBZ_tKsAAAAJ&view_op=list_works&sortby=pubdate) | [DBLP](https://dblp.org/pid/19/2969-1.html) | [Github](https://github.com/jindongwang) || [Twitter/X](https://twitter.com/jd92wang) | [Zhihu](https://www.zhihu.com/people/jindongwang) | [Wechat](http://jd92.wang/assets/img/wechat_public_account.jpg) | [Bilibili](https://space.bilibili.com/477087194) || [CV](https://go.jd92.wang/cv) [CV (Chinese)](https://go.jd92.wang/cvchinese)

-Dr. Jindong Wang is currently a Senior Researcher at Microsoft Research Asia. He obtained his Ph.D. from the Institute of Computing Technology, Chinese Academy of Sciences, in 2019. In 2018, he visited Prof. Qiang Yang's group at the Hong Kong University of Science and Technology. His research interests include robust machine learning, transfer learning, semi-supervised learning, and federated learning; his recent focus is large language models. He has published over 50 papers with 9,000+ citations at leading conferences and journals such as ICLR, NeurIPS, TPAMI, and IJCV. His research has been covered by [Forbes](https://www.forbes.com/sites/lanceeliot/2023/11/11/the-answer-to-why-emotionally-worded-prompts-can-goose-generative-ai-into-better-answers-and-how-to-spur-a-decidedly-positive-rise-out-of-ai/?sh=38038fb137e5) and other international media. He has several Google Scholar highly cited papers, Hugging Face featured papers, and Paper Digest most influential papers. He received the best paper award at ICCSE'18 and the IJCAI'19 federated learning workshop, as well as the prestigious excellent Ph.D. thesis award (only one awarded at ICT each year). In 2023, he was selected by Stanford University as one of the [World's 2% Scientists](https://ecebm.com/2023/10/04/stanford-university-names-worlds-top-2-scientists-2023/) and one of the [AI Most Influential Scholars](https://www.aminer.cn/ai2000?domain_ids=5dc122672ebaa6faa962c2a4) by AMiner. He serves as an associate editor of IEEE Transactions on Neural Networks and Learning Systems (TNNLS), a guest editor for ACM Transactions on Intelligent Systems and Technology (TIST), a senior program committee member for IJCAI and AAAI, and a reviewer for top conferences and journals such as ICML, NeurIPS, ICLR, CVPR, TPAMI, and AIJ.
+Dr. Jindong Wang is currently a Senior Researcher at Microsoft Research Asia. He obtained his Ph.D. from the Institute of Computing Technology, Chinese Academy of Sciences, in 2019. In 2018, he visited Prof. Qiang Yang's group at the Hong Kong University of Science and Technology. His research interests include robust machine learning, transfer learning, semi-supervised learning, and federated learning; his recent focus is large language models. He has published over 50 papers with 10,000+ citations at leading conferences and journals such as ICLR, NeurIPS, TPAMI, and IJCV. His research has been covered by [Forbes](https://www.forbes.com/sites/lanceeliot/2023/11/11/the-answer-to-why-emotionally-worded-prompts-can-goose-generative-ai-into-better-answers-and-how-to-spur-a-decidedly-positive-rise-out-of-ai/?sh=38038fb137e5) and other international media. He has several Google Scholar highly cited papers, Hugging Face featured papers, and Paper Digest most influential papers. He received the best paper award at ICCSE'18 and the IJCAI'19 federated learning workshop, as well as the prestigious excellent Ph.D. thesis award (only one awarded at ICT each year). In 2023, he was selected by Stanford University as one of the [World's 2% Scientists](https://ecebm.com/2023/10/04/stanford-university-names-worlds-top-2-scientists-2023/) and one of the [AI Most Influential Scholars](https://www.aminer.cn/ai2000?domain_ids=5dc122672ebaa6faa962c2a4) by AMiner. He serves as an associate editor of IEEE Transactions on Neural Networks and Learning Systems (TNNLS), a guest editor for ACM Transactions on Intelligent Systems and Technology (TIST), an area chair for NeurIPS, KDD, and ACM MM, a senior program committee member for IJCAI and AAAI, and a reviewer for top conferences and journals such as ICML, NeurIPS, ICLR, CVPR, TPAMI, and AIJ.
 He leads several impactful open-source projects, including [transferlearning](https://github.com/jindongwang/transferlearning), [PromptBench](https://github.com/microsoft/promptbench), [torchSSL](https://github.com/torchssl/torchssl), [USB](https://github.com/microsoft/Semi-superised-learning), [personalizedFL](https://github.com/microsoft/PersonalizedFL), and [robustlearn](https://github.com/microsoft/robustlearn), which together have received over 16K stars on GitHub. He published the textbook [Introduction to Transfer Learning](http://jd92.wang/tlbook) to help newcomers quickly learn transfer learning, and has given tutorials at [IJCAI'22](https://dgresearch.github.io/), [WSDM'23](https://dgresearch.github.io/), [KDD'23](https://mltrust.github.io/), and [AAAI'24](https://ood-timeseries.github.io/).

**Research interest:** (See this [page](https://jd92.wang/research/) for more details)
- Machine learning: robust machine learning, OOD / domain generalization, transfer learning, semi-supervised learning, federated learning, and related applications.
-- Large language models: LLM [evaluation](https://llm-eval.github.io/) and [enhancement](https://llm-enhance.github.io/).
+- Large language models: LLM [evaluation](https://llm-eval.github.io/), [enhancement](https://llm-enhance.github.io/), and AI for social sciences.
- *Interested in [internship](https://zhuanlan.zhihu.com/p/102558267) or collaboration? Contact me.* I'm experimenting with a new form of research collaboration; click [here](https://forms.office.com/r/32Fs6uAjT6) if you are interested!
