Bo Han

Assistant Professor in Machine Learning @ Department of Computer Science
Hong Kong Baptist University Faculty of Science

BAIHO Visiting Scientist @ Imperfect Information Learning Team
RIKEN Center for Advanced Intelligence Project

[Google Scholar] [Github] [Group Website] [Research Blog]
E-mail: bhanml@comp.hkbu.edu.hk (for general academic work) & bo.han@a.riken.jp

TMLR Group is always looking for highly self-motivated PhD/RA/Visiting students and Postdoc researchers. Please read this document for recruiting information, browse this link and this link for the group's research, and check this link and CSRankings for department information. Meanwhile, TMLR Group is happy to host remote research trainees. Owing to the large number of emails we receive, we cannot respond to each one individually. Thanks!


News

See more news here.


Research

    My research interests lie in machine learning, deep learning, and foundation models. My long-term goal is to develop trustworthy intelligent systems (e.g., trustworthy foundation models) that can learn and reason, federatively and automatically, from massive volumes of complex (e.g., weakly supervised, self-supervised, out-of-distribution, causal, fair, and privacy-preserving) data (e.g., labels, examples, preferences, domains, similarities, graphs, demonstrations, and prompts). Recently, I have been developing core machine learning methodology, and I am actively applying our fundamental research to interdisciplinary domains.


Selected Projects

  • RGC Early CAREER Scheme (PI): Trustworthy Deep Learning from Open-set Corrupted Data [Link] [Website]

  • NSFC General Program (PI): The Research on Trustworthy Federated Learning in Imperfect Environments [Link] [Website]

  • NSFC Young Scientists Fund (PI): The Research on the Automated Trustworthy Machine Learning [Link] [Website]

  • GDST Basic Research Fund (PI): Trustworthy Graph Representation Learning under Out-of-distribution Data [Website]

  • GDST Basic Research Fund (PI): Trustworthy Deep Reasoning with Human-level Constraints [Website]

  • RIKEN Collaborative Research Fund (PI): New Directions in Trustworthy Machine Learning [Website]

  • RIKEN BAIHO Award (PI): Development of Robust Deep Learning Technologies for Heavily Noisy Data [Link] [Website]


Research Highlights

(* indicates advisees/co-advisees; updated over time; see the full list here)
  • On the Learnability of Out-of-distribution Detection.
    Z. Fang, S. Li, F. Liu, B. Han, and J. Lu.
    Journal of Machine Learning Research (JMLR), 2024, [PDF].

  • Combating Exacerbated Heterogeneity for Robust Models in Federated Learning.
    J. Zhu*, J. Yao, T. Liu, Q. Yao, J. Xu, and B. Han.
    In Proceedings of 11th International Conference on Learning Representations (ICLR'23), [PDF] [Code] [Poster].

  • Latent Class-Conditional Noise Model.
    J. Yao, B. Han, Z. Zhou, Y. Zhang, and I.W. Tsang.
    IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023, [PDF] [Code].

  • Watermarking for Out-of-distribution Detection.
    Q. Wang*, F. Liu, Y. Zhang, J. Zhang, C. Gong, T. Liu, and B. Han.
    In Advances in Neural Information Processing Systems 35 (NeurIPS'22), [PDF] [Code] [Poster] [Spotlight].

  • Is Out-of-distribution Detection Learnable?
    Z. Fang, S. Li, J. Lu, J. Dong, B. Han, and F. Liu.
    In Advances in Neural Information Processing Systems 35 (NeurIPS'22), [PDF] [Poster] [Oral, Outstanding Paper Award].

  • CausalAdv: Adversarial Robustness through the Lens of Causality.
    Y. Zhang*, M. Gong, T. Liu, G. Niu, X. Tian, B. Han, B. Schölkopf, and K. Zhang.
    In Proceedings of 10th International Conference on Learning Representations (ICLR'22), [PDF] [Code] [Poster].

  • Confidence Scores Make Instance-dependent Label-noise Learning Possible.
    A. Berthon*, B. Han, G. Niu, T. Liu, and M. Sugiyama.
    In Proceedings of 38th International Conference on Machine Learning (ICML'21), [PDF] [Code] [Poster] [Long Oral].

  • How does Disagreement Help Generalization against Label Corruption?
    X. Yu*, B. Han, J. Yao, G. Niu, I.W. Tsang, and M. Sugiyama.
    In Proceedings of 36th International Conference on Machine Learning (ICML'19), [PDF] [Code] [Poster] [Long Oral].

  • Towards Robust ResNet: A Small Step but A Giant Leap.
    J. Zhang*, B. Han, L. Wynter, B. Low, and M. Kankanhalli.
    In Proceedings of 28th International Joint Conference on Artificial Intelligence (IJCAI'19), [PDF] [Code] [Poster].

  • Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels.
    B. Han, Q. Yao, X. Yu, G. Niu, M. Xu, W. Hu, I.W. Tsang, and M. Sugiyama.
    In Advances in Neural Information Processing Systems 31 (NeurIPS'18), [PDF] [Code] [Poster] [Most Influential Paper].

  • Masking: A New Perspective of Noisy Supervision.
    B. Han, J. Yao, G. Niu, M. Zhou, I.W. Tsang, Y. Zhang, and M. Sugiyama.
    In Advances in Neural Information Processing Systems 31 (NeurIPS'18), [PDF] [Code] [Poster].


Selected Publications

(* indicates advisees/co-advisees; see the full list here)
  • Machine Learning with Noisy Labels: From Theory to Heuristics.
    M. Sugiyama, T. Liu, B. Han, N. Lu, and G. Niu.
    Adaptive Computation and Machine Learning series, The MIT Press, 2024, [PDF].

  • Trustworthy Machine Learning under Imperfect Data.
    B. Han and T. Liu.
    Computer Science series, Springer Nature, 2024, [PDF].

  • A Survey of Label-noise Representation Learning: Past, Present and Future.
    B. Han, Q. Yao, T. Liu, G. Niu, I.W. Tsang, J.T. Kwok, and M. Sugiyama.
    arXiv preprint arXiv:2011.04406, 2020, [PDF].
    (this draft is continually updated; any comments and suggestions are welcome)

  • DeepInception: Hypnotize Large Language Model to Be Jailbreaker.
    X. Li*, Z. Zhou*, J. Zhu*, J. Yao, T. Liu, and B. Han.
    arXiv preprint arXiv:2311.03191, [PDF] [Code] [Project] [Blog] [News] [DeepTech].

  • Do CLIPs Always Generalize Better than ImageNet Models?
    Q. Wang*, Y. Lin, Y. Chen*, L. Schmidt, B. Han, and T. Zhang.
    arXiv preprint arXiv:2403.11497, [PDF] [Code].

  • Envisioning Outlier Exposure by Large Language Models for Out-of-Distribution Detection.
    C. Cao*, Z. Zhong, Z. Zhou*, Y. Liu, T. Liu, and B. Han.
    In Proceedings of 41st International Conference on Machine Learning (ICML'24), [PDF] [Code] [Poster] [Blog].

  • MOKD: Cross-domain Finetuning for Few-shot Classification via Maximizing Optimized Kernel Dependence.
    H. Tian*, F. Liu, T. Liu, B. Du, Y.M. Cheung, and B. Han.
    In Proceedings of 41st International Conference on Machine Learning (ICML'24), [PDF] [Code] [Poster].

  • NoiseDiffusion: Correcting Noise for Image Interpolation with Diffusion Models beyond Spherical Linear Interpolation.
    P. Zheng*, Y. Zhang*, Z. Fang, T. Liu, D. Lian, and B. Han.
    In Proceedings of 12th International Conference on Learning Representations (ICLR'24), [PDF] [Code] [Poster] [Blog] [Spotlight].

  • Robust Training of Federated Models with Extremely Label Deficiency.
    Y. Zhang*, Z. Yang*, X. Tian, N. Wang, T. Liu, and B. Han.
    In Proceedings of 12th International Conference on Learning Representations (ICLR'24), [PDF] [Code] [Poster].

  • On the Learnability of Out-of-distribution Detection.
    Z. Fang, S. Li, F. Liu, B. Han, and J. Lu.
    Journal of Machine Learning Research (JMLR), 2024, [PDF].

  • Searching to Exploit Memorization Effect in Deep Learning with Noisy Labels.
    H. Yang, Q. Yao, B. Han, and J.T. Kwok.
    IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024, [PDF] [Code].

  • A Time-consistency Curriculum for Learning from Instance-dependent Noisy Labels.
    S. Wu, T. Zhou, Y. Du, J. Yu, B. Han, and T. Liu.
    IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024, [PDF] [Code].

  • Does Confusion Really Hurt Novel Class Discovery?
    H. Chi*, W. Yang, F. Liu, L. Lan, and B. Han.
    International Journal of Computer Vision (IJCV), 2024, [PDF] [Code].

  • USN: A Robust Imitation Learning Method against Diverse Action Noise.
    X. Yu*, B. Han, and I.W. Tsang.
    Journal of Artificial Intelligence Research (JAIR), 2024, [PDF] [Code].

  • Exploit CAM by itself: Complementary Learning System for Weakly Supervised Semantic Segmentation.
    W. Yang, J. Mai*, F. Zhang, T. Liu, and B. Han.
    Transactions on Machine Learning Research (TMLR), 2024, [PDF] [Code].

  • Learning to Augment Distributions for Out-of-distribution Detection.
    Q. Wang*, Z. Fang, Y. Zhang, F. Liu, Y. Li, and B. Han.
    In Advances in Neural Information Processing Systems 36 (NeurIPS'23), [PDF] [Code] [Poster].

  • FedFed: Feature Distillation against Data Heterogeneity in Federated Learning.
    Z. Yang*, Y. Zhang, Y. Zheng, X. Tian, H. Peng, T. Liu, and B. Han.
    In Advances in Neural Information Processing Systems 36 (NeurIPS'23), [PDF] [Code] [Poster].

  • On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation.
    Z. Zhou*, C. Zhou*, X. Li*, J. Yao, Q. Yao, and B. Han.
    In Proceedings of 40th International Conference on Machine Learning (ICML'23), [PDF] [Code] [Poster].

  • Detecting Out-of-distribution Data through In-distribution Class Prior.
    X. Jiang*, F. Liu, Z. Fang, H. Chen, T. Liu, F. Zheng, and B. Han.
    In Proceedings of 40th International Conference on Machine Learning (ICML'23), [PDF] [Code] [Poster].

  • Combating Exacerbated Heterogeneity for Robust Models in Federated Learning.
    J. Zhu*, J. Yao, T. Liu, Q. Yao, J. Xu, and B. Han.
    In Proceedings of 11th International Conference on Learning Representations (ICLR'23), [PDF] [Code] [Poster].

  • A Holistic View of Label Noise Transition Matrix in Deep Learning and Beyond.
    Y. Lin*, R. Pi*, W. Zhang, X. Xia, J. Gao, X. Zhou, T. Liu, and B. Han.
    In Proceedings of 11th International Conference on Learning Representations (ICLR'23), [PDF] [Code] [Poster] [Spotlight].

  • AdaProp: Learning Adaptive Propagation for Graph Neural Network based Knowledge Graph Reasoning.
    Y. Zhang, Z. Zhou*, Q. Yao, X. Chu, and B. Han.
    In Proceedings of 29th ACM Conference on Knowledge Discovery and Data Mining (KDD'23), [PDF] [Code].

  • Learning from Noisy Pairwise Similarity and Unlabeled Data.
    S. Wu, T. Liu, B. Han, J. Yun, G. Niu, and M. Sugiyama.
    Journal of Machine Learning Research (JMLR), 2023, [PDF] [Code].

  • Latent Class-Conditional Noise Model.
    J. Yao, B. Han, Z. Zhou, Y. Zhang, and I.W. Tsang.
    IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023, [PDF] [Code].

  • KRADA: Known-region-aware Domain Alignment for Open-set Domain Adaptation in Semantic Segmentation.
    C. Zhou*, F. Liu, C. Gong, R. Zeng, T. Liu, W.K. Cheung, and B. Han.
    Transactions on Machine Learning Research (TMLR), 2023, [PDF] [Code].

  • Server-Client Collaborative Distillation for Federated Reinforcement Learning.
    W. Mai*, J. Yao, C. Gong, Y. Zhang, Y.M. Cheung, and B. Han.
    ACM Transactions on Knowledge Discovery from Data (TKDD), 2023, [PDF] [Code].

  • Plantorganelle Hunter is An Effective Deep-learning-based Method for Plant Organelle Phenotyping in Electron Microscopy.
    X. Feng, Z. Yu, ..., F. Liu, B. Han, B. Zechmann, Y. He, and F. Liu.
    Nature Plants, 2023, [PDF] [Code].

  • Watermarking for Out-of-distribution Detection.
    Q. Wang*, F. Liu, Y. Zhang, J. Zhang, C. Gong, T. Liu, and B. Han.
    In Advances in Neural Information Processing Systems 35 (NeurIPS'22), [PDF] [Code] [Poster] [Spotlight].

  • Is Out-of-distribution Detection Learnable?
    Z. Fang, S. Li, J. Lu, J. Dong, B. Han, and F. Liu.
    In Advances in Neural Information Processing Systems 35 (NeurIPS'22), [PDF] [Poster] [Oral, Outstanding Paper Award].

  • Virtual Homogeneity Learning: Defending against Data Heterogeneity in Federated Learning.
    Z. Tang*, Y. Zhang, S. Shi, X. He, B. Han, and X. Chu.
    In Proceedings of 39th International Conference on Machine Learning (ICML'22), [PDF] [Code] [Poster].

  • Contrastive Learning with Boosted Memorization.
    Z. Zhou*, J. Yao, Y. Wang, B. Han, and Y. Zhang.
    In Proceedings of 39th International Conference on Machine Learning (ICML'22), [PDF] [Code] [Poster].

  • Understanding and Improving Graph Injection Attack by Promoting Unnoticeability.
    Y. Chen*, H. Yang, Y. Zhang, K. Ma, T. Liu, B. Han, and J. Cheng.
    In Proceedings of 10th International Conference on Learning Representations (ICLR'22), [PDF] [Code] [Poster].

  • CausalAdv: Adversarial Robustness through the Lens of Causality.
    Y. Zhang*, M. Gong, T. Liu, G. Niu, X. Tian, B. Han, B. Schölkopf, and K. Zhang.
    In Proceedings of 10th International Conference on Learning Representations (ICLR'22), [PDF] [Code] [Poster].

  • Robust Weight Perturbation for Adversarial Training.
    C. Yu, B. Han, M. Gong, L. Shen, S. Ge, B. Du, and T. Liu.
    In Proceedings of 31st International Joint Conference on Artificial Intelligence (IJCAI'22), [PDF] [Code].

  • Bilateral Dependency Optimization: Defending Against Model-inversion Attacks.
    X. Peng*, F. Liu, J. Zhang, J. Ye, L. Lan, T. Liu, and B. Han.
    In Proceedings of 28th ACM Conference on Knowledge Discovery and Data Mining (KDD'22), [PDF] [Code] [Poster].

  • Device-Cloud Collaborative Recommendation via Meta Controller.
    J. Yao, F. Wang, X. Ding, S. Chen, B. Han, J. Zhou, and H. Yang.
    In Proceedings of 28th ACM Conference on Knowledge Discovery and Data Mining (KDD'22), [PDF] [Poster].

  • Fair Classification with Instance-dependent Label Noise.
    S. Wu, M. Gong, B. Han, Y. Liu, and T. Liu.
    In Proceedings of 1st Conference on Causal Learning and Reasoning (CLeaR'22), [PDF] [Code].

  • Low-rank Tensor Learning with Nonconvex Overlapped Nuclear Norm Regularization.
    Q. Yao, Y. Wang, B. Han, and J.T. Kwok.
    Journal of Machine Learning Research (JMLR), 2022, [PDF] [Code].

  • Learning with Mixed Open-set and Closed-set Noisy Labels.
    X. Xia, B. Han, N. Wang, J. Deng, J. Li, Y. Mao, and T. Liu.
    IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022, [PDF] [Code].

  • NoiLIn: Improving Adversarial Training and Correcting Stereotype of Noisy Labels.
    J. Zhang, X. Xu, B. Han, T. Liu, L. Cui, G. Niu, and M. Sugiyama.
    Transactions on Machine Learning Research (TMLR), 2022, [PDF] [Code].

  • TOHAN: A One-step Approach towards Few-shot Hypothesis Adaptation.
    H. Chi*, F. Liu, W. Yang, L. Lan, T. Liu, B. Han, W.K. Cheung, and J.T. Kwok.
    In Advances in Neural Information Processing Systems 34 (NeurIPS'21), [PDF] [Code] [Poster] [Spotlight].

  • Instance-dependent Label-noise Learning under a Structural Causal Model.
    Y. Yao, T. Liu, M. Gong, B. Han, G. Niu, and K. Zhang.
    In Advances in Neural Information Processing Systems 34 (NeurIPS'21), [PDF] [Code] [Poster].

  • Confidence Scores Make Instance-dependent Label-noise Learning Possible.
    A. Berthon*, B. Han, G. Niu, T. Liu, and M. Sugiyama.
    In Proceedings of 38th International Conference on Machine Learning (ICML'21), [PDF] [Code] [Poster] [Long Oral].

  • Maximum Mean Discrepancy is Aware of Adversarial Attacks.
    R. Gao*, F. Liu, J. Zhang, B. Han, T. Liu, G. Niu, and M. Sugiyama.
    In Proceedings of 38th International Conference on Machine Learning (ICML'21), [PDF] [Code] [Poster].

  • Geometry-aware Instance-reweighted Adversarial Training.
    J. Zhang, J. Zhu*, G. Niu, B. Han, M. Sugiyama, and M. Kankanhalli.
    In Proceedings of 9th International Conference on Learning Representations (ICLR'21), [PDF] [Code] [Poster] [Oral].

  • Robust Early-learning: Hindering the Memorization of Noisy Labels.
    X. Xia, T. Liu, B. Han, C. Gong, N. Wang, Z. Ge, and Y. Chang.
    In Proceedings of 9th International Conference on Learning Representations (ICLR'21), [PDF] [Code] [Poster].

  • Learning with Group Noise.
    Q. Wang*, J. Yao, C. Gong, T. Liu, M. Gong, H. Yang, and B. Han.
    In Proceedings of 35th AAAI Conference on Artificial Intelligence (AAAI'21), [PDF] [Code].

  • Device-Cloud Collaborative Learning for Recommendation.
    J. Yao, F. Wang, K. Jia, B. Han, J. Zhou, and H. Yang.
    In Proceedings of 27th ACM Conference on Knowledge Discovery and Data Mining (KDD'21), [PDF] [Poster].

  • Provably Consistent Partial-Label Learning.
    L. Feng, J. Lv, B. Han, M. Xu, G. Niu, X. Geng, B. An, and M. Sugiyama.
    In Advances in Neural Information Processing Systems 33 (NeurIPS'20), [PDF] [Code] [Poster].

  • Reducing Estimation Error for Transition Matrix in Label-noise Learning.
    Y. Yao, T. Liu, B. Han, M. Gong, J. Deng, G. Niu, and M. Sugiyama.
    In Advances in Neural Information Processing Systems 33 (NeurIPS'20), [PDF] [Code] [Poster].

  • SIGUA: Forgetting May Make Learning with Noisy Labels More Robust.
    B. Han, G. Niu, X. Yu, Q. Yao, M. Xu, I.W. Tsang, and M. Sugiyama.
    In Proceedings of 37th International Conference on Machine Learning (ICML'20), [PDF] [Code] [Poster].

  • Variational Imitation Learning from Diverse-quality Demonstrations.
    V. Tangkaratt, B. Han, M. Khan, and M. Sugiyama.
    In Proceedings of 37th International Conference on Machine Learning (ICML'20), [PDF] [Code] [Poster].

  • Attacks Which Do Not Kill Training Make Adversarial Learning Stronger.
    J. Zhang*, X. Xu, B. Han, G. Niu, L. Cui, M. Sugiyama, and M. Kankanhalli.
    In Proceedings of 37th International Conference on Machine Learning (ICML'20), [PDF] [Code] [Poster].

  • Searching to Exploit Memorization Effect in Learning from Noisy Labels.
    Q. Yao, H. Yang, B. Han, G. Niu, and J.T. Kwok.
    In Proceedings of 37th International Conference on Machine Learning (ICML'20), [PDF] [Code] [Poster].

  • A Bi-level Formulation for Label Noise Learning with Spectral Cluster Discovery.
    Y. Luo, B. Han, and C. Gong.
    In Proceedings of 29th International Joint Conference on Artificial Intelligence (IJCAI'20), [PDF] [Code].

  • Are Anchor Points Really Indispensable in Label-noise Learning?
    X. Xia, T. Liu, N. Wang, B. Han, C. Gong, G. Niu, and M. Sugiyama.
    In Advances in Neural Information Processing Systems 32 (NeurIPS'19), [PDF] [Code] [Poster].

  • How does Disagreement Help Generalization against Label Corruption?
    X. Yu*, B. Han, J. Yao, G. Niu, I.W. Tsang, and M. Sugiyama.
    In Proceedings of 36th International Conference on Machine Learning (ICML'19), [PDF] [Code] [Poster] [Long Oral].

  • Efficient Nonconvex Regularized Tensor Completion with Structure-aware Proximal Iterations.
    Q. Yao, J.T. Kwok, and B. Han.
    In Proceedings of 36th International Conference on Machine Learning (ICML'19), [PDF] [Code] [Poster].

  • Towards Robust ResNet: A Small Step but A Giant Leap.
    J. Zhang*, B. Han, L. Wynter, B. Low, and M. Kankanhalli.
    In Proceedings of 28th International Joint Conference on Artificial Intelligence (IJCAI'19), [PDF] [Code] [Poster].

  • Privacy-preserving Stochastic Gradual Learning.
    B. Han, I.W. Tsang, X. Xiao, L. Chen, S.-F. Fung, and C. Yu.
    IEEE Transactions on Knowledge and Data Engineering (TKDE), 2019, [PDF].

  • Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels.
    B. Han, Q. Yao, X. Yu, G. Niu, M. Xu, W. Hu, I.W. Tsang, and M. Sugiyama.
    In Advances in Neural Information Processing Systems 31 (NeurIPS'18), [PDF] [Code] [Poster] [Most Influential Paper].

  • Masking: A New Perspective of Noisy Supervision.
    B. Han, J. Yao, G. Niu, M. Zhou, I.W. Tsang, Y. Zhang, and M. Sugiyama.
    In Advances in Neural Information Processing Systems 31 (NeurIPS'18), [PDF] [Code] [Poster].

  • Millionaire: A Hint-guided Approach for Crowdsourcing.
    B. Han, Q. Yao, Y. Pan, I.W. Tsang, X. Xiao, Q. Yang, and M. Sugiyama.
    Machine Learning Journal (MLJ), 108(5): 831–858, 2018, [PDF] [Slides].

  • Stagewise Learning for Noisy k-ary Preferences.
    Y. Pan, B. Han, and I.W. Tsang.
    Machine Learning Journal (MLJ), 107(8): 1333–1361, 2018, [PDF].

  • Robust Plackett-Luce Model for k-ary Crowdsourced Preferences.
    B. Han, Y. Pan, and I.W. Tsang.
    Machine Learning Journal (MLJ), 107(4): 675–702, 2017, [PDF].


Biography

    Bo Han is currently an Assistant Professor in Machine Learning and the Director of the Trustworthy Machine Learning and Reasoning Group at Hong Kong Baptist University, and a BAIHO Visiting Scientist with the Imperfect Information Learning Team at the RIKEN Center for Advanced Intelligence Project (RIKEN AIP), where his research focuses on machine learning, deep learning, foundation models, and their applications. He was a Visiting Research Scholar at MBZUAI MLD (2024), hosted by Prof. Kun Zhang; a Visiting Faculty Researcher at Microsoft Research (2022) and Alibaba DAMO Academy (2021); and a Postdoctoral Fellow at RIKEN AIP (2019-2020), working with Prof. Masashi Sugiyama. He received his Ph.D. in Computer Science from the University of Technology Sydney (2015-2019), primarily advised by Prof. Ivor W. Tsang. He has co-authored three machine learning monographs: Machine Learning with Noisy Labels (MIT Press), Trustworthy Machine Learning under Imperfect Data (Springer Nature), and Trustworthy Machine Learning from Data to Models (Foundations and Trends). He has served as a Senior Area Chair of NeurIPS and an Area Chair of NeurIPS, ICML, and ICLR, as an Associate Editor of IEEE TPAMI, MLJ, and JAIR, and as an Editorial Board Member of JMLR and MLJ. He received the Outstanding Paper Award at NeurIPS, the Most Influential Paper recognition at NeurIPS, Notable Area Chair recognition at NeurIPS, Outstanding Area Chair recognition at ICLR, and an Outstanding Associate Editor award at IEEE TNNLS. He has also received the RGC Early CAREER Scheme, the NSFC General Program, an IJCAI Early Career Spotlight, the RIKEN BAIHO Award, the Dean's Award for Outstanding Achievement, the Microsoft Research StarTrack Program, and Faculty Research Awards from ByteDance, Baidu, Alibaba, and Tencent.


Acknowledgement

    TMLR Group gratefully acknowledges current and past support from UGC and RGC of Hong Kong, NSFC, GDST, RIKEN AIP, CCF, CAAI, GRG, HKBU, HKBU RC, HKBU CSD, and industry research labs (Microsoft, Google, NVIDIA, ByteDance, Baidu, Alibaba, Tencent).