Bo Han
Home
News
TMLR Group is always looking for highly self-motivated PhD/RA/Visiting students and Postdoc researchers to work on trustworthy learning and reasoning algorithms, theories, and systems. Various fellowships are available for outstanding applicants, such as the HKPFS Fellowship, the NVIDIA-HKBU Fellowship, and multiple government- and industry-funded fellowships.
TMLR Group is jointly looking for visiting scholars for Asian Trustworthy Machine Learning (ATML) Fellowships [Class 2023].
TMLR Group releases several papers that represent our research flavors: [Co-teaching], [OOD Theory], [Watermarking], [SFAT], [GRA], [DeepInception].
TMLR Group delivers several talks that represent our group's research thrusts: [Video 1], [Video 2], [Video 3].
Sep 2024: TMLR Group will co-organize IEEE International Conference on Data Science and Advanced Analytics 2025.
Aug 2024: TMLR Group will co-organize International Symposium on Trustworthy Learning 2025.
Jul 2024: I will serve as Session Chair for the Oral Labels and Oral Robustness and Safety sessions at ICML'24.
Jun 2024: I will serve as Associate Editor for IEEE TPAMI.
Jun 2024: TMLR Group is honored to receive Dean's Award for Outstanding Achievement 2024.
Jun 2024: TMLR Group's research is honored to be highlighted by Faculty of Science [News].
May 2024: TMLR Group is honored to receive IJCAI Early Career Spotlight 2024 [Paper] [Slides] [Poster].
May 2024: TMLR Group is honored to receive ByteDance Faculty Research Award 2024 (from ByteDance AI Lab).
Apr 2024: TMLR Group is honored to receive Tencent WeChat Faculty Research Award 2024 (from Tencent WeChat) [News].
Mar 2024: I will serve as Senior Area Chair and Workshop Co-Chair for NeurIPS'24.
Mar 2024: TMLR Group will co-author an ML monograph invited by Foundations and Trends® in Privacy and Security.
Dec 2023: I am honored to be recognized as a NeurIPS'23 Notable Area Chair.
Dec 2023: I regularly serve as Area Chair for NeurIPS, ICML, ICLR, UAI and AISTATS (see Professional Service).
Nov 2023: TMLR Group will co-organize International Workshop on Weakly Supervised Learning 2023.
Oct 2023: I am honored to be selected among the World’s Top 2% Scientists 2023 [News].
Oct 2023: TMLR Group will co-organize HKBU-RIKEN AIP Joint Workshop on AI and ML [News].
Sep 2023: I am honored to be recognized as an IEEE TNNLS Outstanding Associate Editor.
Sep 2023: TMLR Group is honored to receive CCF-Baidu Faculty Research Award 2023 (from Baidu Research) [News].
Sep 2023: TMLR Group is honored to host the Distinguished Lecture Series delivered by Prof. Chengqi Zhang [Video].
Aug 2023: TMLR Group is honored to receive NSFC General Program 2023 [News].
Aug 2023: TMLR Group will co-author an ML monograph accepted by Springer Nature.
Aug 2023: TMLR Group's collaborative work with Alibaba has been deployed on its online advertising platform [News].
Aug 2023: I will serve as Action Editor for MLJ and JAIR and Area Chair for AISTATS'24.
Jul 2023: TMLR Group has co-authored an interdisciplinary article published in Nature Plants.
May 2023: TMLR Group is honored to host the Distinguished Seminar Series delivered by Prof. Kun Zhang.
May 2023: TMLR Group is honored to host the Distinguished Lecture Series delivered by Prof. Kilian Weinberger [Video].
Mar 2023: TMLR Group is honored to host the Distinguished Lecture Series delivered by Prof. Tuomas Sandholm [Video].
Feb 2023: I will serve as Area Chair for NeurIPS'23.
Jan 2023: TMLR Group's collaborative work with Tencent achieves state-of-the-art results on the GOOD benchmark.
Dec 2022: I will serve as Area Chair for ICML'23 and UAI'23 and Publication & Publicity Co-Chair for ACML'23.
Dec 2022: I will serve as Session Chair of Featured Papers for NeurIPS'22.
Nov 2022: TMLR Group is honored to jointly receive the NeurIPS'22 Outstanding Paper Award [Paper].
Aug 2022: TMLR Group will jointly organize Online Asian Machine Learning School 2022.
Aug 2022: I will join the Editorial Board of JMLR and MLJ and serve as Area Chair for ICLR'23.
Aug 2022: I will serve as Action Editor for TMLR and Associate Editor for IEEE TNNLS.
May 2022: TMLR Group is honored to jointly receive the NeurIPS'18 Most Influential Paper recognition [Paper].
May 2022: TMLR Group is honored to receive Tencent AI Faculty Research Award 2022 (from Tencent AI Lab) [News].
Feb 2022: RIKEN Team will author an ML monograph accepted by MIT Press.
Feb 2022: TMLR Group is honored to host the Distinguished Lecture Series delivered by Prof. Masashi Sugiyama [Video].
Feb 2022: TMLR Group is honored to receive Alibaba Faculty Research Award 2022 (from Alibaba Research).
Jan 2022: TMLR Group is honored to receive NVIDIA Collaborative Research Award 2022 (from NVIDIA Research).
Jan 2022: RIKEN Team will organize TrustML Young Scientist Seminars 2022.
Oct 2021: TMLR Group is honored to receive Microsoft Research StarTrack Program 2021 (with AI for Science team).
May 2021: I am honored to be recognized as an ICLR'21 Outstanding Area Chair.
Jun 2020: TMLR Group is honored to receive the RGC Early Career Scheme 2020.
Mar 2020: I am honored to receive RIKEN BAIHO Award 2019.
See more news here.
Research
My research interests lie in machine learning, deep learning, and foundation models. My long-term goal is to develop trustworthy intelligent systems (e.g., trustworthy foundation models) that can learn and reason federatively and automatically from massive volumes of complex (e.g., weakly supervised, self-supervised, out-of-distribution, causal, fair, and privacy-preserving) data (e.g., labels, examples, preferences, domains, similarities, graphs, demonstrations, and prompts). Recently, I have focused on developing core machine learning methodology, and I am actively applying our fundamental research to interdisciplinary domains.
Selected Projects
RGC Early Career Scheme (PI): Trustworthy Deep Learning from Open-set Corrupted Data [Link] [Website]
NSFC General Program (PI): The Research on Trustworthy Federated Learning in Imperfect Environments [Link] [Website]
NSFC Young Scientists Fund (PI): The Research on the Automated Trustworthy Machine Learning [Link] [Website]
GDST Basic Research Fund (PI): Trustworthy Graph Representation Learning under Out-of-distribution Data [Website]
GDST Basic Research Fund (PI): Trustworthy Deep Reasoning with Human-level Constraints [Website]
RIKEN Collaborative Research Fund (PI): New Directions in Trustworthy Machine Learning [Website]
RIKEN BAIHO Award (PI): Development of Robust Deep Learning Technologies for Heavily Noisy Data [Link] [Website]
Research Highlights
(* indicates advisees/co-advisees; continually updated; see the full list here)
Benchmarking the Reasoning Robustness against Noisy Rationales in Chain-of-thought Prompting.
Z. Zhou*, R. Tao*, J. Zhu*, Y. Luo*, Z. Wang, and B. Han.
In Advances in Neural Information Processing Systems 37 (NeurIPS'24), [PDF] [Code] [Poster].
On the Learnability of Out-of-distribution Detection.
Z. Fang, S. Li, F. Liu, B. Han, and J. Lu.
Journal of Machine Learning Research (JMLR), 2024, [PDF].
Combating Exacerbated Heterogeneity for Robust Models in Federated Learning.
J. Zhu*, J. Yao, T. Liu, Q. Yao, J. Xu, and B. Han.
In Proceedings of 11th International Conference on Learning Representations (ICLR'23), [PDF] [Code] [Poster].
Latent Class-Conditional Noise Model.
J. Yao, B. Han, Z. Zhou, Y. Zhang, and I.W. Tsang.
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023, [PDF] [Code].
Watermarking for Out-of-distribution Detection.
Q. Wang*, F. Liu, Y. Zhang, J. Zhang, C. Gong, T. Liu, and B. Han.
In Advances in Neural Information Processing Systems 35 (NeurIPS'22), [PDF] [Code] [Poster] [Spotlight].
Is Out-of-distribution Detection Learnable?
Z. Fang, S. Li, J. Lu, J. Dong, B. Han, and F. Liu.
In Advances in Neural Information Processing Systems 35 (NeurIPS'22), [PDF] [Poster] [Oral, Outstanding Paper Award].
CausalAdv: Adversarial Robustness through the Lens of Causality.
Y. Zhang*, M. Gong, T. Liu, G. Niu, X. Tian, B. Han, B. Schölkopf, and K. Zhang.
In Proceedings of 10th International Conference on Learning Representations (ICLR'22), [PDF] [Code] [Poster].
Confidence Scores Make Instance-dependent Label-noise Learning Possible.
A. Berthon*, B. Han, G. Niu, T. Liu, and M. Sugiyama.
In Proceedings of 38th International Conference on Machine Learning (ICML'21), [PDF] [Code] [Poster] [Long Oral].
How does Disagreement Help Generalization against Label Corruption?
X. Yu*, B. Han, J. Yao, G. Niu, I.W. Tsang, and M. Sugiyama.
In Proceedings of 36th International Conference on Machine Learning (ICML'19), [PDF] [Code] [Poster] [Long Oral].
Towards Robust ResNet: A Small Step but A Giant Leap.
J. Zhang*, B. Han, L. Wynter, B. Low, and M. Kankanhalli.
In Proceedings of 28th International Joint Conference on Artificial Intelligence (IJCAI'19), [PDF] [Code] [Poster].
Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels.
B. Han, Q. Yao, X. Yu, G. Niu, M. Xu, W. Hu, I.W. Tsang, and M. Sugiyama.
In Advances in Neural Information Processing Systems 31 (NeurIPS'18), [PDF] [Code] [Poster] [Most Influential Paper].
Masking: A New Perspective of Noisy Supervision.
B. Han, J. Yao, G. Niu, M. Zhou, I.W. Tsang, Y. Zhang, and M. Sugiyama.
In Advances in Neural Information Processing Systems 31 (NeurIPS'18), [PDF] [Code] [Poster].
Selected Publications
(* indicates advisees/co-advisees; see the full list here)
Machine Learning with Noisy Labels: From Theory to Heuristics.
M. Sugiyama, T. Liu, B. Han, N. Lu, and G. Niu.
Adaptive Computation and Machine Learning series, The MIT Press, 2024, [PDF].
Trustworthy Machine Learning under Imperfect Data.
B. Han and T. Liu.
Computer Science series, Springer Nature, 2024, [PDF].
A Survey of Label-noise Representation Learning: Past, Present and Future.
B. Han, Q. Yao, T. Liu, G. Niu, I.W. Tsang, J.T. Kwok, and M. Sugiyama.
arXiv preprint arXiv:2011.04406, 2020, [PDF].
(this draft is continually updated; any comments and suggestions are welcome)
DeepInception: Hypnotize Large Language Model to Be Jailbreaker.
X. Li*, Z. Zhou*, J. Zhu*, J. Yao, T. Liu, and B. Han.
arXiv preprint arXiv:2311.03191, [PDF] [Code] [Project] [Blog] [News] [DeepTech].
Benchmarking the Reasoning Robustness against Noisy Rationales in Chain-of-thought Prompting.
Z. Zhou*, R. Tao*, J. Zhu*, Y. Luo*, Z. Wang, and B. Han.
In Advances in Neural Information Processing Systems 37 (NeurIPS'24), [PDF] [Code] [Poster].
Pseudo-Private Data Guided Model Inversion Attacks.
X. Peng*, B. Han, F. Liu, T. Liu, and M. Zhou.
In Advances in Neural Information Processing Systems 37 (NeurIPS'24), [PDF] [Code] [Poster].
Do CLIP Models Always Generalize Better than ImageNet Models?
Q. Wang*, Y. Lin, Y. Chen*, L. Schmidt, B. Han, and T. Zhang.
In Advances in Neural Information Processing Systems 37 (NeurIPS'24), [PDF] [Code] [Poster].
Envisioning Outlier Exposure by Large Language Models for Out-of-Distribution Detection.
C. Cao*, Z. Zhong, Z. Zhou*, Y. Liu, T. Liu, and B. Han.
In Proceedings of 41st International Conference on Machine Learning (ICML'24), [PDF] [Code] [Poster] [Blog].
MOKD: Cross-domain Finetuning for Few-shot Classification via Maximizing Optimized Kernel Dependence.
H. Tian*, F. Liu, T. Liu, B. Du, Y.M. Cheung, and B. Han.
In Proceedings of 41st International Conference on Machine Learning (ICML'24), [PDF] [Code] [Poster].
NoiseDiffusion: Correcting Noise for Image Interpolation with Diffusion Models beyond Spherical Linear Interpolation.
P. Zheng*, Y. Zhang*, Z. Fang, T. Liu, D. Lian, and B. Han.
In Proceedings of 12th International Conference on Learning Representations (ICLR'24), [PDF] [Code] [Poster] [Blog] [Spotlight].
Robust Training of Federated Models with Extremely Label Deficiency.
Y. Zhang*, Z. Yang*, X. Tian, N. Wang, T. Liu, and B. Han.
In Proceedings of 12th International Conference on Learning Representations (ICLR'24), [PDF] [Code] [Poster].
Trustworthy Machine Learning under Imperfect Data.
B. Han (with TMLR Group members).
In Proceedings of 33rd International Joint Conference on Artificial Intelligence (IJCAI'24), [PDF] [Slides] [Poster] [EC Spotlight].
On the Learnability of Out-of-distribution Detection.
Z. Fang, S. Li, F. Liu, B. Han, and J. Lu.
Journal of Machine Learning Research (JMLR), 2024, [PDF].
Searching to Exploit Memorization Effect in Deep Learning with Noisy Labels.
H. Yang, Q. Yao, B. Han, and J.T. Kwok.
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024, [PDF] [Code].
A Time-consistency Curriculum for Learning from Instance-dependent Noisy Labels.
S. Wu, T. Zhou, Y. Du, J. Yu, B. Han, and T. Liu.
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024, [PDF] [Code].
Does Confusion Really Hurt Novel Class Discovery?
H. Chi*, W. Yang, F. Liu, L. Lan, and B. Han.
International Journal of Computer Vision (IJCV), 2024, [PDF] [Code].
USN: A Robust Imitation Learning Method against Diverse Action Noise.
X. Yu*, B. Han, and I.W. Tsang.
Journal of Artificial Intelligence Research (JAIR), 2024, [PDF] [Code].
Exploit CAM by itself: Complementary Learning System for Weakly Supervised Semantic Segmentation.
W. Yang, J. Mai*, F. Zhang, T. Liu, and B. Han.
Transactions on Machine Learning Research (TMLR), 2024, [PDF] [Code].
Learning to Augment Distributions for Out-of-distribution Detection.
Q. Wang*, Z. Fang, Y. Zhang, F. Liu, Y. Li, and B. Han.
In Advances in Neural Information Processing Systems 36 (NeurIPS'23), [PDF] [Code] [Poster].
FedFed: Feature Distillation against Data Heterogeneity in Federated Learning.
Z. Yang*, Y. Zhang, Y. Zheng, X. Tian, H. Peng, T. Liu, and B. Han.
In Advances in Neural Information Processing Systems 36 (NeurIPS'23), [PDF] [Code] [Poster].
On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation.
Z. Zhou*, C. Zhou*, X. Li*, J. Yao, Q. Yao, and B. Han.
In Proceedings of 40th International Conference on Machine Learning (ICML'23), [PDF] [Code] [Poster].
Detecting Out-of-distribution Data through In-distribution Class Prior.
X. Jiang*, F. Liu, Z. Fang, H. Chen, T. Liu, F. Zheng, and B. Han.
In Proceedings of 40th International Conference on Machine Learning (ICML'23), [PDF] [Code] [Poster].
Combating Exacerbated Heterogeneity for Robust Models in Federated Learning.
J. Zhu*, J. Yao, T. Liu, Q. Yao, J. Xu, and B. Han.
In Proceedings of 11th International Conference on Learning Representations (ICLR'23), [PDF] [Code] [Poster].
A Holistic View of Label Noise Transition Matrix in Deep Learning and Beyond.
Y. Lin*, R. Pi*, W. Zhang, X. Xia, J. Gao, X. Zhou, T. Liu, and B. Han.
In Proceedings of 11th International Conference on Learning Representations (ICLR'23), [PDF] [Code] [Poster] [Spotlight].
AdaProp: Learning Adaptive Propagation for Graph Neural Network based Knowledge Graph Reasoning.
Y. Zhang, Z. Zhou*, Q. Yao, X. Chu, and B. Han.
In Proceedings of 29th ACM Conference on Knowledge Discovery and Data Mining (KDD'23), [PDF] [Code].
Learning from Noisy Pairwise Similarity and Unlabeled Data.
S. Wu, T. Liu, B. Han, J. Yun, G. Niu, and M. Sugiyama.
Journal of Machine Learning Research (JMLR), 2023, [PDF] [Code].
Latent Class-Conditional Noise Model.
J. Yao, B. Han, Z. Zhou, Y. Zhang, and I.W. Tsang.
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023, [PDF] [Code].
KRADA: Known-region-aware Domain Alignment for Open-set Domain Adaptation in Semantic Segmentation.
C. Zhou*, F. Liu, C. Gong, R. Zeng, T. Liu, W.K. Cheung, and B. Han.
Transactions on Machine Learning Research (TMLR), 2023, [PDF] [Code].
Server-Client Collaborative Distillation for Federated Reinforcement Learning.
W. Mai*, J. Yao, C. Gong, Y. Zhang, Y.M. Cheung, and B. Han.
ACM Transactions on Knowledge Discovery from Data (TKDD), 2023, [PDF] [Code].
Plantorganelle Hunter is An Effective Deep-learning-based Method for Plant Organelle Phenotyping in Electron Microscopy.
X. Feng, Z. Yu, ..., F. Liu, B. Han, B. Zechmann, Y. He, and F. Liu.
Nature Plants, 2023, [PDF] [Code].
Watermarking for Out-of-distribution Detection.
Q. Wang*, F. Liu, Y. Zhang, J. Zhang, C. Gong, T. Liu, and B. Han.
In Advances in Neural Information Processing Systems 35 (NeurIPS'22), [PDF] [Code] [Poster] [Spotlight].
Is Out-of-distribution Detection Learnable?
Z. Fang, S. Li, J. Lu, J. Dong, B. Han, and F. Liu.
In Advances in Neural Information Processing Systems 35 (NeurIPS'22), [PDF] [Poster] [Oral, Outstanding Paper Award].
Virtual Homogeneity Learning: Defending against Data Heterogeneity in Federated Learning.
Z. Tang*, Y. Zhang, S. Shi, X. He, B. Han, and X. Chu.
In Proceedings of 39th International Conference on Machine Learning (ICML'22), [PDF] [Code] [Poster].
Contrastive Learning with Boosted Memorization.
Z. Zhou*, J. Yao, Y. Wang, B. Han, and Y. Zhang.
In Proceedings of 39th International Conference on Machine Learning (ICML'22), [PDF] [Code] [Poster].
Understanding and Improving Graph Injection Attack by Promoting Unnoticeability.
Y. Chen*, H. Yang, Y. Zhang, K. Ma, T. Liu, B. Han, and J. Cheng.
In Proceedings of 10th International Conference on Learning Representations (ICLR'22), [PDF] [Code] [Poster].
CausalAdv: Adversarial Robustness through the Lens of Causality.
Y. Zhang*, M. Gong, T. Liu, G. Niu, X. Tian, B. Han, B. Schölkopf, and K. Zhang.
In Proceedings of 10th International Conference on Learning Representations (ICLR'22), [PDF] [Code] [Poster].
Robust Weight Perturbation for Adversarial Training.
C. Yu, B. Han, M. Gong, L. Shen, S. Ge, B. Du, and T. Liu.
In Proceedings of 31st International Joint Conference on Artificial Intelligence (IJCAI'22), [PDF] [Code].
Bilateral Dependency Optimization: Defending Against Model-inversion Attacks.
X. Peng*, F. Liu, J. Zhang, J. Ye, L. Lan, T. Liu, and B. Han.
In Proceedings of 28th ACM Conference on Knowledge Discovery and Data Mining (KDD'22), [PDF] [Code] [Poster].
Device-Cloud Collaborative Recommendation via Meta Controller.
J. Yao, F. Wang, X. Ding, S. Chen, B. Han, J. Zhou, and H. Yang.
In Proceedings of 28th ACM Conference on Knowledge Discovery and Data Mining (KDD'22), [PDF] [Poster].
Fair Classification with Instance-dependent Label Noise.
S. Wu, M. Gong, B. Han, Y. Liu, and T. Liu.
In Proceedings of 1st Conference on Causal Learning and Reasoning (CLeaR'22), [PDF] [Code].
Low-rank Tensor Learning with Nonconvex Overlapped Nuclear Norm Regularization.
Q. Yao, Y. Wang, B. Han, and J.T. Kwok.
Journal of Machine Learning Research (JMLR), 2022, [PDF] [Code].
Learning with Mixed Open-set and Closed-set Noisy Labels.
X. Xia, B. Han, N. Wang, J. Deng, J. Li, Y. Mao, and T. Liu.
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022, [PDF] [Code].
NoiLIn: Improving Adversarial Training and Correcting Stereotype of Noisy Labels.
J. Zhang, X. Xu, B. Han, T. Liu, L. Cui, G. Niu, and M. Sugiyama.
Transactions on Machine Learning Research (TMLR), 2022, [PDF] [Code].
TOHAN: A One-step Approach towards Few-shot Hypothesis Adaptation.
H. Chi*, F. Liu, W. Yang, L. Lan, T. Liu, B. Han, W.K. Cheung, and J.T. Kwok.
In Advances in Neural Information Processing Systems 34 (NeurIPS'21), [PDF] [Code] [Poster] [Spotlight].
Instance-dependent Label-noise Learning under a Structural Causal Model.
Y. Yao, T. Liu, M. Gong, B. Han, G. Niu, and K. Zhang.
In Advances in Neural Information Processing Systems 34 (NeurIPS'21), [PDF] [Code] [Poster].
Confidence Scores Make Instance-dependent Label-noise Learning Possible.
A. Berthon*, B. Han, G. Niu, T. Liu, and M. Sugiyama.
In Proceedings of 38th International Conference on Machine Learning (ICML'21), [PDF] [Code] [Poster] [Long Oral].
Maximum Mean Discrepancy is Aware of Adversarial Attacks.
R. Gao*, F. Liu, J. Zhang, B. Han, T. Liu, G. Niu, and M. Sugiyama.
In Proceedings of 38th International Conference on Machine Learning (ICML'21), [PDF] [Code] [Poster].
Geometry-aware Instance-reweighted Adversarial Training.
J. Zhang, J. Zhu*, G. Niu, B. Han, M. Sugiyama, and M. Kankanhalli.
In Proceedings of 9th International Conference on Learning Representations (ICLR'21), [PDF] [Code] [Poster] [Oral].
Robust Early-learning: Hindering the Memorization of Noisy Labels.
X. Xia, T. Liu, B. Han, C. Gong, N. Wang, Z. Ge, and Y. Chang.
In Proceedings of 9th International Conference on Learning Representations (ICLR'21), [PDF] [Code] [Poster].
Learning with Group Noise.
Q. Wang*, J. Yao, C. Gong, T. Liu, M. Gong, H. Yang, and B. Han.
In Proceedings of 35th AAAI Conference on Artificial Intelligence (AAAI'21), [PDF] [Code].
Device-Cloud Collaborative Learning for Recommendation.
J. Yao, F. Wang, K. Jia, B. Han, J. Zhou, and H. Yang.
In Proceedings of 27th ACM Conference on Knowledge Discovery and Data Mining (KDD'21), [PDF] [Poster].
Provably Consistent Partial-Label Learning.
L. Feng, J. Lv, B. Han, M. Xu, G. Niu, X. Geng, B. An, and M. Sugiyama.
In Advances in Neural Information Processing Systems 33 (NeurIPS'20), [PDF] [Code] [Poster].
Reducing Estimation Error for Transition Matrix in Label-noise Learning.
Y. Yao, T. Liu, B. Han, M. Gong, J. Deng, G. Niu, and M. Sugiyama.
In Advances in Neural Information Processing Systems 33 (NeurIPS'20), [PDF] [Code] [Poster].
SIGUA: Forgetting May Make Learning with Noisy Labels More Robust.
B. Han, G. Niu, X. Yu, Q. Yao, M. Xu, I.W. Tsang, and M. Sugiyama.
In Proceedings of 37th International Conference on Machine Learning (ICML'20), [PDF] [Code] [Poster].
Variational Imitation Learning from Diverse-quality Demonstrations.
V. Tangkaratt, B. Han, M. Khan, and M. Sugiyama.
In Proceedings of 37th International Conference on Machine Learning (ICML'20), [PDF] [Code] [Poster].
Attacks Which Do Not Kill Training Make Adversarial Learning Stronger.
J. Zhang*, X. Xu, B. Han, G. Niu, L. Cui, M. Sugiyama, and M. Kankanhalli.
In Proceedings of 37th International Conference on Machine Learning (ICML'20), [PDF] [Code] [Poster].
Searching to Exploit Memorization Effect in Learning from Noisy Labels.
Q. Yao, H. Yang, B. Han, G. Niu, and J.T. Kwok.
In Proceedings of 37th International Conference on Machine Learning (ICML'20), [PDF] [Code] [Poster].
A Bi-level Formulation for Label Noise Learning with Spectral Cluster Discovery.
Y. Luo, B. Han, and C. Gong.
In Proceedings of 29th International Joint Conference on Artificial Intelligence (IJCAI'20), [PDF] [Code].
Are Anchor Points Really Indispensable in Label-noise Learning?
X. Xia, T. Liu, N. Wang, B. Han, C. Gong, G. Niu, and M. Sugiyama.
In Advances in Neural Information Processing Systems 32 (NeurIPS'19), [PDF] [Code] [Poster].
How does Disagreement Help Generalization against Label Corruption?
X. Yu*, B. Han, J. Yao, G. Niu, I.W. Tsang, and M. Sugiyama.
In Proceedings of 36th International Conference on Machine Learning (ICML'19), [PDF] [Code] [Poster] [Long Oral].
Efficient Nonconvex Regularized Tensor Completion with Structure-aware Proximal Iterations.
Q. Yao, J.T. Kwok, and B. Han.
In Proceedings of 36th International Conference on Machine Learning (ICML'19), [PDF] [Code] [Poster].
Towards Robust ResNet: A Small Step but A Giant Leap.
J. Zhang*, B. Han, L. Wynter, B. Low, and M. Kankanhalli.
In Proceedings of 28th International Joint Conference on Artificial Intelligence (IJCAI'19), [PDF] [Code] [Poster].
Privacy-preserving Stochastic Gradual Learning.
B. Han, I.W. Tsang, X. Xiao, L. Chen, S.-F. Fung, and C. Yu.
IEEE Transactions on Knowledge and Data Engineering (TKDE), 2019, [PDF].
Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels.
B. Han, Q. Yao, X. Yu, G. Niu, M. Xu, W. Hu, I.W. Tsang, and M. Sugiyama.
In Advances in Neural Information Processing Systems 31 (NeurIPS'18), [PDF] [Code] [Poster] [Most Influential Paper].
Masking: A New Perspective of Noisy Supervision.
B. Han, J. Yao, G. Niu, M. Zhou, I.W. Tsang, Y. Zhang, and M. Sugiyama.
In Advances in Neural Information Processing Systems 31 (NeurIPS'18), [PDF] [Code] [Poster].
Millionaire: A Hint-guided Approach for Crowdsourcing.
B. Han, Q. Yao, Y. Pan, I.W. Tsang, X. Xiao, Q. Yang, and M. Sugiyama.
Machine Learning Journal (MLJ), 108(5): 831–858, 2018, [PDF] [Slides].
Stagewise Learning for Noisy k-ary Preferences.
Y. Pan, B. Han, and I.W. Tsang.
Machine Learning Journal (MLJ), 107(8): 1333–1361, 2018, [PDF].
Robust Plackett-Luce Model for k-ary Crowdsourced Preferences.
B. Han, Y. Pan, and I.W. Tsang.
Machine Learning Journal (MLJ), 107(4): 675–702, 2017, [PDF].
Biography
Bo Han is currently an Assistant Professor in Machine Learning and the Director of the Trustworthy Machine Learning and Reasoning (TMLR) Group at Hong Kong Baptist University, and a BAIHO Visiting Scientist with the Imperfect Information Learning Team at the RIKEN Center for Advanced Intelligence Project (RIKEN AIP). His research focuses on machine learning, deep learning, foundation models, and their applications. He was a Visiting Research Scholar at MBZUAI MLD (2024), hosted by Prof. Kun Zhang, a Visiting Faculty Researcher at Microsoft Research (2022) and Alibaba DAMO Academy (2021), and a Postdoctoral Fellow at RIKEN AIP (2019-2020), working with Prof. Masashi Sugiyama. He received his Ph.D. degree in Computer Science from the University of Technology Sydney (2015-2019), primarily advised by Prof. Ivor W. Tsang. He has co-authored three machine learning monographs: Machine Learning with Noisy Labels (MIT Press), Trustworthy Machine Learning under Imperfect Data (Springer Nature), and Trustworthy Machine Learning from Data to Models (Foundations and Trends). He has served as a Senior Area Chair of NeurIPS and as an Area Chair of NeurIPS, ICML, and ICLR; he has also served as an Associate Editor of IEEE TPAMI, MLJ, and JAIR, and as an Editorial Board Member of JMLR and MLJ. He received the Outstanding Paper Award at NeurIPS, the Most Influential Paper recognition at NeurIPS, the Notable Area Chair recognition at NeurIPS, the Outstanding Area Chair recognition at ICLR, and the Outstanding Associate Editor recognition at IEEE TNNLS. He also received the RGC Early Career Scheme, the NSFC General Program, the IJCAI Early Career Spotlight, the RIKEN BAIHO Award, the Dean's Award for Outstanding Achievement, the Microsoft Research StarTrack Program, and Faculty Research Awards from ByteDance, Baidu, Alibaba, and Tencent.
Acknowledgement
The TMLR Group is or has been gratefully supported by the UGC and RGC of Hong Kong, NSFC, GDST, RIKEN AIP, CCF, CAAI, GRG, HKBU, HKBU RC, HKBU CSD, and industry research labs (Microsoft, Google, NVIDIA, ByteDance, Baidu, Alibaba, and Tencent).