Bo Han


NSFC Young Scientists Fund

(PI: Dr. Bo Han, Department of Computer Science, Hong Kong Baptist University)

    Project Award Information

  • Award Number: NSFC YSF 62006202

  • Title: Research on Automated Trustworthy Machine Learning

  • Principal Investigator (PI): Dr. Bo Han, Department of Computer Science, Hong Kong Baptist University

    Project Summary

    Machine learning has been widely applied across domains, and the design of classical learning algorithms rests on two underlying assumptions: a static environment (i.e., free of noisy labels and adversarial examples) and modeling by experts. However, these assumptions hinder the deployment of learning algorithms in real-world dynamic environments and non-expert scenarios. Therefore, this project proposes the “Automated Trustworthy Machine Learning” paradigm, which aims to improve the reliability, robustness, and efficiency of learning algorithms. First, we propose instance-dependent label-noise learning algorithms built on “confidence scores” and “universal probability”, which address the more challenging instance-dependent label noise beyond class-conditional label noise. Second, we propose novel adversarial learning algorithms from the perspectives of “optimization” and “unlabeled data”, which break the performance bottleneck of standard adversarial training. Lastly, we formulate the synergistic interaction between trustworthy learning (e.g., label-noise learning and adversarial learning) and automated learning as a bi-level programming problem. Meanwhile, this project preliminarily discusses how to provide theoretical guarantees for automated trustworthy machine learning, and how to extend the proposed algorithms to multi-label/multi-instance, distributed multi-model, and learning-and-reasoning scenarios. This project also aims to apply the proposed algorithms to more real-world problems.
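
    As a minimal sketch of the third thrust (the notation below is ours, not the project's official formulation): let λ denote the automated-learning choices (hyperparameters or architecture) and w the model weights. The synergy can then be written as a bi-level program, where the outer level tunes λ against validation performance and the inner level runs a trustworthy learner under λ:

      min_{λ ∈ Λ}  L_val( w*(λ) )                        (outer: automated learning)
      s.t.  w*(λ) = argmin_w  L_trust(w; λ, D_train)     (inner: trustworthy learning)

    Here L_trust stands for a robust training objective, e.g., a transition-matrix-corrected loss for label-noise learning or an adversarial training loss; the concrete losses and search spaces are thrust-specific.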

    Research Publications

    The following papers focus on instance-dependent label-noise learning (a toy sketch of transition-matrix loss correction follows the list):
  • Confidence Scores Make Instance-Dependent Label-Noise Learning Possible (ICML'21, Long Oral)

  • Instance-Dependent Label-Noise Learning under a Structural Causal Model (NeurIPS'21)

  • Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model (AAAI'21)

  • Exploiting Class Activation Value for Partial-Label Learning (ICLR'22)

  • Fair Classification with Instance-Dependent Label Noise (CLeaR'22)

  • Learning with Mixed Closed-Set and Open-Set Noisy Labels (PAMI'22)

  • A Holistic View of Label Noise Transition Matrix in Deep Learning and Beyond (ICLR'23, Spotlight)

  • Latent Class-Conditional Noise Model (PAMI'23)

  • A Parametrical Model for Instance-Dependent Label Noise (PAMI'23)
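
    To make the transition-matrix theme above concrete, the following is a minimal PyTorch-style sketch of forward loss correction with an instance-dependent transition matrix T(x). It illustrates the general technique only; the function name forward_corrected_loss and the assumption that per-instance matrices are already estimated (e.g., from confidence scores) are ours, not the released code of any paper listed above.

      import torch
      import torch.nn.functional as F

      def forward_corrected_loss(logits, noisy_labels, T):
          # logits:       (B, C) classifier scores over the *clean* classes
          # noisy_labels: (B,)   observed, possibly corrupted labels
          # T:            (B, C, C) per-instance transition matrices, assumed
          #               estimated elsewhere; T[i, j, k] = P(noisy=k | clean=j, x_i)
          clean_posterior = F.softmax(logits, dim=1)  # model's estimate of P(clean | x)
          # Mix through T(x): P(noisy=k | x) = sum_j P(clean=j | x) * T[i, j, k]
          noisy_posterior = torch.bmm(clean_posterior.unsqueeze(1), T).squeeze(1)
          # Maximum likelihood on the observed noisy labels
          return F.nll_loss(torch.log(noisy_posterior + 1e-12), noisy_labels)

    Minimizing this loss fits the noisy observations while keeping the softmax output an estimate of the clean posterior; when every T(x) is the identity, it reduces to standard cross-entropy.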

    The following papers focus on adversarial robustness (a sketch of the standard PGD training baseline follows the list):
  • Probabilistic Margins for Instance Reweighting in Adversarial Training (NeurIPS'21)

  • Maximum Mean Discrepancy Is Aware of Adversarial Attacks (ICML'21)

  • Learning Diverse-Structured Networks for Adversarial Robustness (ICML'21)

  • Geometry-Aware Instance-Reweighted Adversarial Training (ICLR'21, Oral)

  • Adversarial Robustness through the Lens of Causality (ICLR'22)

  • Understanding and Improving Graph Injection Attack by Promoting Unnoticeability (ICLR'22)

  • Bilateral Dependency Optimization against Model-Inversion Attacks (KDD'22)

  • On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation (ICML'23)
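
    For context on the adversarial-training papers, here is a sketch of the standard PGD min-max baseline that several of them reweight or restructure. It assumes inputs scaled to [0, 1] and an L_inf threat model; the hyperparameters are illustrative and the function names are ours.

      import torch
      import torch.nn.functional as F

      def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
          # Inner maximization: find a worst-case perturbation in the eps-ball.
          x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
          for _ in range(steps):
              x_adv.requires_grad_(True)
              loss = F.cross_entropy(model(x_adv), y)
              grad = torch.autograd.grad(loss, x_adv)[0]
              # Ascend the loss, then project back into the eps-ball and [0, 1].
              x_adv = (x_adv + alpha * grad.sign()).detach()
              x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
          return x_adv

      def adversarial_training_step(model, optimizer, x, y):
          # Outer minimization: update the weights on the adversarial examples.
          model.eval()                  # freeze batch-norm statistics during the attack
          x_adv = pgd_attack(model, x, y)
          model.train()
          optimizer.zero_grad()
          loss = F.cross_entropy(model(x_adv), y)
          loss.backward()
          optimizer.step()
          return loss.item()

    Instance-reweighted variants (e.g., geometry-aware or probabilistic-margin reweighting) replace the uniform cross-entropy over x_adv with per-example weights; the outer/inner split itself is unchanged.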

    The following papers focus on automated trustworthy learning:
  • Searching to Exploit Memorization Effect in Learning from Noisy Labels (ICML'20)

  • Efficient Two-Stage Evolutionary Architecture Search (ECCV'22)

  • Efficient Neural Architecture Search with Local Intrinsic Dimension (AAAI'23)

    Software

  • Confidence Scores Make Instance-Dependent Label-Noise Learning Possible, [code]

  • Instance-Dependent Label-Noise Learning under a Structural Causal Model, [code]

  • Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model, [code]

  • Exploiting Class Activation Value for Partial-Label Learning, [code]

  • Fair Classification with Instance-Dependent Label Noise, [code]

  • A Holistic View of Label Noise Transition Matrix in Deep Learning and Beyond, [code]

  • Latent Class-Conditional Noise Model, [code]

  • A Parametrical Model for Instance-Dependent Label Noise, [code]

  • Probabilistic Margins for Instance Reweighting in Adversarial Training, [code]

  • Maximum Mean Discrepancy Is Aware of Adversarial Attacks, [code]

  • Learning Diverse-Structured Networks for Adversarial Robustness, [code]

  • Geometry-Aware Instance-Reweighted Adversarial Training, [code]

  • Adversarial Robustness through the Lens of Causality, [code]

  • Understanding and Improving Graph Injection Attack by Promoting Unnoticeability, [code]

  • Bilateral Dependency Optimization against Model-Inversion Attacks, [code]

  • On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation, [code]

  • Searching to Exploit Memorization Effect in Learning from Noisy Labels, [code]

  • Efficient Two-Stage Evolutionary Architecture Search, [code]

  • Efficient Neural Architecture Search with Local Intrinsic Dimension, [code]

    Education

  • UG Course: COMP3057 (2021 Autumn, 2022 Autumn), COMP4015 (2020 Autumn)

  • PG Course: COMP7250 (2021 Spring, 2022 Spring), COMP7160 (2021 Autumn, 2022 Autumn), COMP7180 (2022 Autumn)

  • Tutorial: IJCAI'21 Learning with Noisy Supervision, ACML'21 Learning under Noisy Supervision, CIKM'22 Learning and Mining with Noisy Labels

  • Undergraduate Research Programme (UGRP): Yifeng Chen, Xinyue Hu

  • Summer Undergraduate Research Fellowship: Yifeng Chen, Xinyue Hu

    Collaborators

  • University: Carnegie Mellon University, The University of Texas at Austin, The University of Sydney, The University of Melbourne, The University of Tokyo, Mohamed bin Zayed University of Artificial Intelligence, The Chinese University of Hong Kong, Hong Kong University of Science and Technology, Tsinghua University

  • Institute: RIKEN Center for Advanced Intelligence Project, Max Planck Institute for Intelligent Systems

  • Industry: Microsoft Research, Alibaba Research

    Acknowledgement

    This material is based upon work supported by the NSFC under Grant No. 62006202. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSFC.