Qiming Bao is a Ph.D. candidate at the Strong AI Lab, NAOInstitute, University of Auckland, New Zealand, supervised by Professor Michael Witbrock. His research interests include natural language processing and reasoning. He has over five years of research and development experience and has published several papers in top AI/NLP/reasoning venues, including ACL, AAAI/EAAI, IJCAI, ICLR, EACL, LLM@IJCAI, AGI@ICLR, and IJCLR-NeSy. His method AMR-LDA (GPT-4 + AMR-LDA prompt augmentation) achieved the #1 ranking on ReClor, one of the most challenging logical reasoning reading comprehension leaderboards, and his group was the first in the world to score above 90% on the hidden test set. Two of his logical reasoning datasets, PARARULE-Plus and AbductionRules, have been included in LogiTorch, ReasoningNLP, Prompt4ReasoningPapers and OpenAI/Evals. Qiming has given public guest talks and made academic visits to Microsoft Research Asia, Samsung AI Center Cambridge UK, the IEEE Vehicular Technology Society, the ZJU-NLP Group at Zhejiang University, The University of Melbourne, the Institute of Automation at the Chinese Academy of Sciences, Shenzhen MSU-BIT University, the University of Massachusetts Amherst and Penn State University on his main research topic, "Natural Language Processing and Reasoning".
Qiming is an AI researcher and engineer at Xtracta in Auckland, New Zealand, where he has used PEFT adapters for continual training of the large multimodal models InternVL2 and Qwen2-VL for intelligent document processing. He investigated and implemented alternative attention mechanisms to extend the effective sequence length of multimodal document processing models such as LayoutLMv3 and ERNIE-LayoutX. He replicated the multi-task, multimodal pre-training code for LayoutLMv3, which Microsoft did not open-source, including masked language modelling, masked image modelling, and word-patch alignment. He integrated DeepSpeed and adapters into ERNIE-LayoutX and LayoutLMv3, reducing training costs, shrinking model size, and easing deployment to production. He successfully applied for Research & Development Tax Incentive (RDTI) grants from Callaghan Innovation (New Zealand's innovation agency) for both 2022 and 2023, each providing a tax credit equal to 15% of eligible R&D expenditure that can be used to reduce the company's income tax payable. Before this role, he worked as a research and development engineer at AIIT, Peking University, where he focused on automatic abstract generation and GPT-2-based dialogue chatbot development. Qiming also has extensive teaching experience, having worked as a teaching assistant for three years. He earned a Bachelor of Science (Honours) in Computer Science (First Class) from the University of Auckland and completed a scholarship-funded Summer Research Internship with Precision Driven Health and Orion Health. He was one of ten students selected for the Precision Driven Health-funded Summer Research Program, where he worked on developing a medical chatbot based on deep learning and knowledge graphs.
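As an illustration of the adapter-based approach above, here is a minimal sketch of LoRA-style parameter-efficient fine-tuning with Hugging Face's peft library; the base checkpoint ("gpt2") and all hyperparameters are illustrative placeholders, not the actual configuration used for InternVL2 or Qwen2-VL at Xtracta.

```python
# Minimal LoRA (PEFT) fine-tuning sketch. "gpt2" stands in for a large
# multimodal model; the hyperparameters below are illustrative only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")

config = LoraConfig(
    r=16,                       # rank of the low-rank update matrices
    lora_alpha=32,              # scaling factor for the adapter update
    target_modules=["c_attn"],  # GPT-2's fused attention projection layer
    lora_dropout=0.05,
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the small adapter weights are trainable
```

Because only the adapter weights are updated, checkpoints stay small and the frozen base model can be shared across tasks, which is what makes this style of training attractive for continual training and production deployment.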
- [09 December 2024] Our paper (Qiming Bao, Juho Leinonen, Alex Peng, Wanjun Zhong, Timothy Pistotti, Alice Huang, Paul Denny, Michael Witbrock and Jiamou Liu) "Exploring Iterative Enhancement for Improving Learnersourced Multiple-Choice Question Explanations with Large Language Models" has been accepted by AAAI/EAAI-25 [Paper link] [Source code].
- [22 August 2024] Our paper (Qiming Bao, Gaël Gendron, Alex Peng, Wanjun Zhong, Neset Tan, Yang Chen, Michael Witbrock, Jiamou Liu) "Assessing and Enhancing the Robustness of Large Language Models with Task Structure Variations for Logical Reasoning" has been accepted by ICONIP-24 [Paper link] [Source code].
- [16 May 2024] Our paper (Qiming Bao, Alex Peng, Zhenyun Deng, Wanjun Zhong, Gaël Gendron, Timothy Pistotti, Neşet Tan, Nathan Young, Yang Chen, Yonghua Zhu, Michael Witbrock and Jiamou Liu) "Abstract Meaning Representation-Based Logic-Driven Data Augmentation for Logical Reasoning" has been accepted for publication in the Findings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL-24) [#1 on the ReClor Leaderboard] [Paper link] [Source code].
- [17 April 2024] Our paper (Gaël Gendron, Qiming Bao, Michael Witbrock, Gillian Dobbie) "Large Language Models Are Not Strong Abstract Reasoners" has been accepted by IJCAI 2024 [Paper link] [Source code and evaluation platform].
- [05 March 2024] Our paper (Qiming Bao, Juho Leinonen, Alex Peng, Wanjun Zhong, Timothy Pistotti, Alice Huang, Paul Denny, Michael Witbrock and Jiamou Liu) "Exploring Iterative Enhancement for Improving Learnersourced Multiple-Choice Question Explanations with Large Language Models" has been accepted by AGI@ICLR 2024 [Paper link] [Source code].
- [05 March 2024] Our paper (Gaël Gendron, Qiming Bao, Michael Witbrock, Gillian Dobbie) "Large Language Models Are Not Strong Abstract Reasoners Yet" has been accepted by AGI@ICLR 2024 [Paper link] [Source code and evaluation platform].
- [01 February 2024] Our paper (Zhongsheng Wang, Jiamou Liu, Qiming Bao, Hongfei Rong, Jingfeng Zhang) "ChatLogic: Integrating Logic Programming with Large Language Models for Multi-step Reasoning" has been accepted by NucLeaR@AAAI 2024 [Paper link] [Source code].
- [24 June 2023] Our paper (Qiming Bao, Gaël Gendron, Alex Peng, Wanjun Zhong, Neset Tan, Yang Chen, Michael Witbrock, Jiamou Liu) "A Systematic Evaluation of Large Language Models on Out-of-Distribution Logical Reasoning Tasks" has been accepted by LLM@IJCAI'23 [Paper link] [Source code].
- [24 June 2023] Our paper (Qiming Bao, Alex Peng, Zhenyun Deng, Wanjun Zhong, Gaël Gendron, Timothy Pistotti, Neşet Tan, Nathan Young, Yang Chen, Yonghua Zhu, Michael Witbrock and Jiamou Liu) "Enhancing Logical Reasoning of Large Language Models through Logic-Driven Data Augmentation" has been accepted by LLM@IJCAI'23 [#1 on the ReClor Leaderboard] [Paper link] [Source code].
- [10 April 2023] Our paper (Qianqian Qi, Qiming Bao*, Alex Yuxuan Peng, Jiamou Liu, Michael Witbrock) "A Dynamic Prompt-tuning Method for Data Augmentation with Associated Knowledge" has been accepted by the ICLR-23 Tiny Papers track [Paper link].
- [01 March 2023] Our paper (Neset Tan, Alex Peng, Joshua Bensemann, Qiming Bao, Tim Hartill, Mark Gahegan, and Michael Witbrock) "Input-length-shortening and text generation via attention values" has been accepted by AAAI-EMC^2-23 [Paper link].
- [23 January 2023] Our paper (Neset Ozkan Tan, Trung Nguyen, Josh Bensemann, Alex Peng, Qiming Bao, Yang Chen, Mark Gahegan and Michael Witbrock) "Multi2Claim: Generating Scientific Claims from Multi-Choice Questions for Scientific Fact-Checking" has been accepted for publication at EACL-23 [Paper link].
- [16 July 2022] Our paper (Qiming Bao, Alex Peng, Tim Hartill, Neset Tan, Zhenyun Deng, Michael Witbrock, Jiamou Liu) "Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation" has been accepted for presentation at the 2nd International Joint Conference on Learning & Reasoning and the 16th International Workshop on Neural-Symbolic Learning and Reasoning (IJCLR-NeSy-22) [Paper link] [Source code and dataset] [Presentation recording].
- [24 February 2022] Our paper (Nathan Young, Qiming Bao, Joshua Ljudo Bensemann, Michael J. Witbrock) "AbductionRules: Training Transformers to Explain Unexpected Inputs" has been accepted for publication in the Findings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL-22) [Paper link] [Source code].
- [13 November 2021] Our paper (Lin Ni, Qiming Bao, Xiaoxuan Li, Qianqian Qi, Paul Denny, Jim Warren, Michael Witbrock and Jiamou Liu) "DeepQR: Neural-based Quality Ratings for Learnersourced Multiple-Choice Questions" has been accepted for publication as a long paper at the Twelfth AAAI Symposium on Educational Advances in Artificial Intelligence (AAAI/EAAI-22) [Paper link].
- [04 February 2020] Our paper (Qiming Bao, Lin Ni, Jiamou Liu) "HHH: An Online Medical Chatbot System based on Knowledge Graph and Hierarchical Bi-Directional Attention" has been accepted for publication as a long paper at Australasian Computer Science Week (ACSW-20) [Paper link] [Source code] [Presentation slides] [Recording].
- [04 October 2023] Our AMR-LDA prompt augmentation with GPT-4 achieves #1 on the ReClor: A Logical Reasoning Reading Comprehension Leaderboard [Leaderboard] [Paper link] [Source code].
- [13 April 2023] Our two pull requests to OpenAI/Evals have been merged [A Group of More Challenging Logical Reasoning Datasets] [A Larger Deep Multi-Step Deductive Reasoning Dataset].
- [18 January 2023] We tested GPT-3, ChatGPT-3.5 and GPT-4, and added the cases they failed on, covering multi-step reasoning, logical equivalence, and logical reasoning reading comprehension, to the spreadsheet created by Gary Marcus and Ernest Davis at NYU [Tweet link for Multi-Step Reasoning and Logical Equivalence] [Tweet link for Logical Reasoning Reading Comprehension] [Tweet link for GPT-4 fails on Logical Reasoning Reading Comprehension] [Spreadsheet link] [Submit your cases].
- [07 December 2022] Qiming Bao achieved #2 (submission name: AMR-LDA-Ensemble (AMR-LDA-Deberta-v2-xxlarge(Ense))) and #4 (submission name: AMR-LDA (DeBERTa-v2-xxlarge-AMR-LDA-Cont)) on the ReClor: A Logical Reasoning Reading Comprehension Leaderboard [Leaderboard] [Paper] [Source code] [Model weights].
- [17 November 2022] Our PARARULE-Plus (multi-step deductive reasoning) and AbductionRules (abductive reasoning) datasets have been included in LogiTorch.ai.
- [09 January 2022] Qiming is one of the contributors to "SimBiber", an open-source tool for simplifying BibTeX references [GitHub link].