RIKEN Collaborative Research Fund


    Project Award Information

  • Title: New Directions in Trustworthy Machine Learning

  • Principal Investigator (PI): Dr. Bo Han, Department of Computer Science, Hong Kong Baptist University & RIKEN Center for Advanced Intelligence Project

    Project Summary

    Trustworthy machine learning (TML) under imperfect data has recently attracted considerable attention in the data-centric fields of machine learning (ML) and artificial intelligence (AI). Specifically, there are three main types of imperfect data, each posing its own challenges for ML: i) label-level imperfection, namely noisy labels; ii) feature-level imperfection, namely adversarial examples; and iii) distribution-level imperfection, namely out-of-distribution data. In this collaborative project, we systematically investigate these three types of imperfect data in the age of deep learning and propose solutions with academic partners at RIKEN AIP. More importantly, this project aims to explore new directions in trustworthy machine learning, such as trustworthy foundation models, trustworthy federated learning, and trustworthy causal learning.
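
    As a rough illustration of these three imperfections, the minimal PyTorch sketch below (not the project's code; the toy classifier, noise rate, and perturbation size are illustrative assumptions) simulates i) symmetric label noise, ii) a one-step FGSM adversarial perturbation, and iii) out-of-distribution scoring via maximum softmax probability:

        # Minimal sketch (illustrative only): the three kinds of imperfect data
        # on a toy linear classifier. All names and constants are assumptions.
        import torch
        import torch.nn.functional as F

        torch.manual_seed(0)
        model = torch.nn.Linear(20, 10)      # toy 10-class classifier on 20-dim inputs
        x = torch.randn(32, 20)              # a batch of in-distribution inputs
        y = torch.randint(0, 10, (32,))      # clean labels

        # i) Label-level imperfection: flip each label to a random class w.p. 0.2.
        flip = torch.rand(32) < 0.2
        y_noisy = torch.where(flip, torch.randint(0, 10, (32,)), y)

        # ii) Feature-level imperfection: one FGSM step along the loss gradient sign.
        x_adv = x.clone().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        x_adv = (x_adv + 0.1 * x_adv.grad.sign()).detach()

        # iii) Distribution-level imperfection: a low maximum softmax probability
        #      (MSP) suggests an out-of-distribution input.
        x_ood = 5.0 * torch.randn(32, 20)    # inputs far from the training distribution
        with torch.no_grad():
            msp_in = F.softmax(model(x), dim=1).max(dim=1).values
            msp_ood = F.softmax(model(x_ood), dim=1).max(dim=1).values
        print(f"mean MSP  in-dist: {msp_in.mean():.3f}  OOD: {msp_ood.mean():.3f}")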

    Research Publications

  • Forgetting May Make Learning with Noisy Labels More Robust (ICML'20)

  • Confidence Scores Make Instance-Dependent Label-Noise Learning Possible (ICML'21, Long Oral)

  • Maximum Mean Discrepancy Test Is Aware of Adversarial Attacks (ICML'21)

  • Adversarial Training with Complementary Labels (NeurIPS'22, Spotlight)

  • Collaborate to Improve Adversarial Robustness (NeurIPS'22)

  • Learning to Discover Novel Classes Given Very Limited Data (ICLR'22, Spotlight)

  • Exploiting Class Activation Value for Partial-Label Learning (ICLR'22)

  • Diversity-Enhancing Generative Network for Few-Shot Hypothesis Adaptation (ICML'23)

  • Diversified Outlier Exposure for Out-of-Distribution Detection (NeurIPS'23)

  • What If the Input Is Expanded in OOD Detection? (NeurIPS'24)

  • Self-Calibrated Tuning of Vision-Language Models for Out-of-Distribution Detection (NeurIPS'24)

  • Balancing Similarity and Complementarity for Unimodal and Multimodal Federated Learning (ICML'24)

  • Accurate Forgetting for Heterogeneous Federated Continual Learning (ICLR'24)

  • Decoupling the Class Label and the Target Concept in Machine Unlearning (arXiv'24)

  • Towards Effective Evaluations and Comparison for LLM Unlearning Methods (ICLR'25)

    Software

  • Forgetting May Make Learning with Noisy Labels More Robust, [code]

  • Confidence Scores Make Instance-Dependent Label-Noise Learning Possible, [code]

  • Maximum Mean Discrepancy Test Is Aware of Adversarial Attacks, [code]

  • Adversarial Training with Complementary Labels, [code]

  • Collaborate to Improve Adversarial Robustness, [code]

  • Learning to Discover Novel Classes Given Very Limited Data, [code]

  • Exploiting Class Activation Value for Partial-Label Learning, [code]

  • Diversity-Enhancing Generative Network for Few-Shot Hypothesis Adaptation, [code]

  • Diversified Outlier Exposure for Out-of-Distribution Detection, [code]

  • What If the Input Is Expanded in OOD Detection?, [code]

  • Self-Calibrated Tuning of Vision-Language Models for Out-of-Distribution Detection, [code]

  • Balancing Similarity and Complementarity for Unimodal and Multimodal Federated Learning, [code]

  • Accurate Forgetting for Heterogeneous Federated Continual Learning, [code]

  • Decoupling the Class Label and the Target Concept in Machine Unlearning, [code]

  • Towards Effective Evaluations and Comparison for LLM Unlearning Methods, [code]

    Collaborators

  • Institute: RIKEN Center for Advanced Intelligence Project

    Acknowledgement

    This material is based upon work supported by the RIKEN AIP. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the RIKEN AIP.