GDST Basic Research Fund (PI: Dr. Bo Han, Department of Computer Science, Hong Kong Baptist University)
Project Award Information
Award Number: GDST BRF 2022A1515011652
Title: Trustworthy Deep Reasoning with Human-level Constraints
Principal Investigator (PI): Dr. Bo Han, Department of Computer Science, Hong Kong Baptist University
Project Summary
The success of deep learning is largely due to large-scale, high-quality data, such as the ImageNet and SQuAD datasets. In practice, such datasets cannot be easily acquired. For example, in the computer-aided diagnosis domain, medical imaging datasets tend to be small-scale and low-quality, which in turn limits the development of deep learning. This urges us to rethink the bottleneck factors of deep learning: data quantity and quality. Namely, how can deep learning handle small-scale, low-quality data well? Existing works in robust deep learning (RDL) can partially address this issue (i.e., low-quality data). Specifically, RDL tends to be data-driven and unconscious, which belongs to system-1 deep learning. Given large-scale data, RDL can mechanically exhibit robust perception to a certain degree. However, with small-scale data, RDL is still far from human-level AI. The exploration of RDL is suitable as initial research but too restrictive for many real-world applications (e.g., medical imaging analysis). To address the above issues simultaneously, we need system-2 deep learning, which is data-saving and conscious. This proposal takes a preliminary step toward system-2 deep learning, providing a solution for small-scale, low-quality data. At a high level, our system-2 intelligent system should behave in a more human-like manner: trustworthy perception and logical reasoning. To unify these two aspects, we propose a trustworthy deep reasoning (TDR) framework, where human-level constraints (e.g., logic or causality) are embedded into the trustworthy learning procedure. The benefit of such embedding is to reduce the estimation error caused by small-scale data. The goal of this project is to develop models, algorithms, and a prototype system for trustworthy deep reasoning from small-scale, low-quality (i.e., noisy) data, and to deploy the system in real-world settings.
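To make the idea of embedding constraints concrete, below is a minimal, hypothetical sketch in Python/PyTorch (not the project's actual algorithm): a standard classification loss on possibly noisy labels is augmented with a differentiable penalty that encodes a prior constraint. The specific constraint used here (predictions should be invariant under a label-preserving horizontal flip), as well as the function names and the weight lambda_c, are illustrative assumptions; a logic- or causality-based penalty would occupy the same slot.

# Minimal illustrative sketch: robust task loss + constraint penalty.
# The constraint below is a stand-in assumption, not the project's method.
import torch
import torch.nn as nn
import torch.nn.functional as F

def constrained_loss(model, x, y, lambda_c=1.0):
    """Cross-entropy on (possibly noisy) labels plus a constraint penalty."""
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)  # a more robust loss could be swapped in

    # Hypothetical "human-level constraint": the model's belief should be
    # invariant to a label-preserving transformation (horizontal flip).
    x_flipped = torch.flip(x, dims=[3])          # flip along the width axis (NCHW)
    logits_flipped = model(x_flipped)
    constraint_penalty = F.kl_div(
        F.log_softmax(logits_flipped, dim=-1),
        F.softmax(logits, dim=-1).detach(),
        reduction="batchmean",
    )
    return task_loss + lambda_c * constraint_penalty

# Usage sketch on toy data.
if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.randn(8, 3, 32, 32)      # small batch of "images"
    y = torch.randint(0, 10, (8,))     # possibly noisy labels
    loss = constrained_loss(model, x, y, lambda_c=0.5)
    loss.backward()
    optimizer.step()

In this sketch, lambda_c controls the trade-off between fitting the small, noisy dataset and honoring the prior constraint, which is the role the proposal assigns to human-level constraints in reducing estimation error.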
Research Publications
Label-noise learning through the lens of causality (ICML'23)
Graph neural network based knowledge graph reasoning (KDD'23)
Robust LLM reasoning in chain-of-thought prompting with noisy rationales (NeurIPS'24)
Discovery of the hidden world with large language models (NeurIPS'24)
Unveiling causal reasoning in large language models (NeurIPS'24)
One-shot subgraph reasoning on large-scale knowledge graphs (ICLR'24)
A robust method to discover causal or anticausal relation (ICLR'25)
Eliciting causal abilities in large language models for reasoning tasks (AAAI'25)
Software
Label-noise learning through the lens of causality, [code]
Graph neural network based knowledge graph reasoning, [code]
Robust LLM reasoning in chain-of-thought prompting with noisy rationales, [code]
Discovery of the hidden world with large language models, [code]
Unveiling causal reasoning in large language models, [code]
One-shot subgraph reasoning on large-scale knowledge graphs, [code]
A robust method to discover causal or anticausal relation, [code]
Eliciting causal abilities in large language models for reasoning tasks, [code]
Collaborators
Universities: Carnegie Mellon University, The University of Sydney, The University of Melbourne, The University of Tokyo, MBZUAI, Hong Kong University of Science and Technology, The Chinese University of Hong Kong, Tsinghua University, Shanghai Jiao Tong University, Wuhan University
Industry: JD Explore Academy
Acknowledgement
This material is based upon work supported by the GDST under Grant No. 2022A1515011652. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the GDST.