Bo Han

Postdoctoral Fellow (with Masashi Sugiyama)
Imperfect Information Learning Team
RIKEN Center for Advanced Intelligence Project

E-mail: bo.han@riken.jp
[Google Scholar] [GitHub]

I will join the Department of Computer Science at Hong Kong Baptist University as an assistant professor.
I will be affiliated with the Imperfect Information Learning Team at RIKEN-AIP as a visiting scientist.
I will hire PhD/MPhil/Visiting students starting in Spring 2020. Please read this document for more information.


News

  • Call for applications to the HKBU Hong Kong PhD Fellowship Scheme (HKPFS).

  • Oct 2019: I will give a talk "Robust Deep Learning: Challenges and New Directions" at the
    workshop "Theory towards Brains, Machines and Minds", Center for Brain Science, RIKEN.

  • Sep 2019: I will serve as a PC member for AISTATS'20.

  • Sep 2019: I will give a talk "Robust Deep Learning: Challenges and New Directions" at the
    International Research Center for Neurointelligence, The University of Tokyo.

  • Sep 2019: I will co-organize the ACML 2019 Challenge on AutoWSL
    (with Quanming Yao, Wei-Wei Tu, Isabelle Guyon, and Qiang Yang).

  • Sep 2019: Our paper "T-Revision" has been accepted to NeurIPS'19. Congrats to all co-authors!

  • Aug 2019: I will co-organize the ACML 2019 Workshop on Weakly-supervised Learning
    (with Gang Niu, Quanming Yao, Giorgio Patrini, Aditya Menon, Clayton Scott, and Masashi Sugiyama).

  • Aug 2019: I will co-organize the ACML 2019 tutorial "Towards Noisy Supervision" (with Ivor W. Tsang).

  • Aug 2019: I will serve as a PC member for AAAI'20.

  • Jul 2019: I will serve as a PC member for ICLR'20.


Research

    My research interests lie in machine learning, deep learning, and artificial intelligence. My long-term goal is to develop intelligent systems that can automatically learn from a massive volume of complex (e.g., weakly-supervised, adversarial, and private) data (e.g., single-/multi-label, ranking, domain, similarity, graph, and demonstration). Recently, I have focused on developing core machine learning methodology, and I am also actively applying this fundamental research to the healthcare domain (e.g., electronic health records analysis and medical image understanding).
    My current research centers around four major themes:
  • Weakly-supervised Machine Learning: How can we train complex models robustly using weakly-supervised information?

  • Security, Privacy and Robustness in Machine Learning: How can we preserve security, privacy, and robustness when training complex models?

  • Automated Machine Learning: How can we reason about intelligent systems without human intervention?

  • Interdisciplinary Problems: How can we apply the above fundamental research to the healthcare domain?


Selected Publications

  • Are Anchor Points Really Indispensable in Label-noise Learning?
    X. Xia, T. Liu, N. Wang, B. Han, C. Gong, G. Niu, and M. Sugiyama.
    In Advances in Neural Information Processing Systems (NeurIPS'19), [PDF] [Code] [Poster].

  • How does Disagreement Help Generalization against Label Corruption?
    X. Yu, B. Han, J. Yao, G. Niu, I.W. Tsang, and M. Sugiyama.
    In Proceedings of the 36th International Conference on Machine Learning (ICML'19), [PDF] [Code] [Slides] [Poster].

  • Towards Robust ResNet: A Small Step but A Giant Leap.
    J. Zhang, B. Han, L. Wynter, B. Low, and M. Kankanhalli.
    In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI'19), [PDF] [Code] [Poster].

  • Efficient Nonconvex Regularized Tensor Completion with Structure-aware Proximal Iterations.
    Q. Yao, J.T. Kwok, and B. Han.
    In Proceedings of the 36th International Conference on Machine Learning (ICML'19), [PDF] [Code].

  • Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels.
    B. Han*, Q. Yao*, X. Yu, G. Niu, M. Xu, W. Hu, I.W. Tsang, and M. Sugiyama.
    In Advances in Neural Information Processing Systems (NeurIPS'18), [PDF] [Code] [Poster].

  • Masking: A New Perspective of Noisy Supervision.
    B. Han*, J. Yao*, G. Niu, M. Zhou, I.W. Tsang, Y. Zhang, and M. Sugiyama.
    In Advances in Neural Information Processing Systems (NeurIPS'18), [PDF] [Code] [Poster].

  • Millionaire: A Hint-guided Approach for Crowdsourcing.
    B. Han*, Q. Yao*, Y. Pan, I.W. Tsang, X. Xiao, Q. Yang, and M. Sugiyama.
    Machine Learning Journal (MLJ), 108(5): 831–858, 2019, [PDF] [Slides].

  • Stagewise Learning for Noisy k-ary Preferences.
    Y. Pan, B. Han, and I.W. Tsang.
    Machine Learning Journal (MLJ), 107: 1333–1361, 2018, [PDF].

  • Robust Plackett-Luce Model for k-ary Crowdsourced Preferences.
    B. Han*, Y. Pan*, and I.W. Tsang.
    Machine Learning Journal (MLJ), 107(4): 675–702, 2018, [PDF].


Brief Biography

    Bo Han is a postdoctoral fellow at the RIKEN Center for Advanced Intelligence Project (RIKEN-AIP), advised by Masashi Sugiyama. He received his Ph.D. degree in Computer Science from the University of Technology Sydney (2015-2019), advised by Ivor W. Tsang and Ling Chen. During 2018-2019, he was a research intern with the AI Residency Program at RIKEN-AIP, working on robust deep learning projects with Masashi Sugiyama, Gang Niu, and Mingyuan Zhou. His current research interests lie in machine learning, deep learning, and artificial intelligence. His long-term goal is to develop intelligent systems that can automatically learn from a massive volume of complex (e.g., weakly-supervised, adversarial, and private) data (e.g., single-/multi-label, ranking, domain, similarity, graph, and demonstration). He has served as a program committee member for NeurIPS, ICML, ICLR, AISTATS, UAI, AAAI, and ACML. He received the UTS Research Publication Award in 2017 and 2018.


Sponsors

RIKEN-AIP
International Research Center for Neurointelligence