Bo Han


Codes and Data from TMLR Group (Reproducible Research)

  • Masking: A New Perspective of Noisy Supervision, [code].

  • Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels, [code]; a minimal sketch of the idea appears after this list.

  • How does Disagreement Help Generalization against Label Corruption? [code].

  • Efficient Nonconvex Regularized Tensor Completion with Structure-aware Proximal Iterations, [code].

  • Towards Robust ResNet: A Small Step but A Giant Leap, [code].

  • Are Anchor Points Really Indispensable in Label-noise Learning? [code].

  • SIGUA: Forgetting May Make Learning with Noisy Labels More Robust, [code].

  • Variational Imitation Learning with Diverse-quality Demonstrations, [code].

  • Friendly Adversarial Training, [code].

  • Searching to Exploit Memorization Effect in Learning from Noisy Labels, [code].

  • Learning with Multiple Complementary Labels, [code].

  • Provably Consistent Partial-Label Learning, [code].

  • Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning, [code]; a transition-matrix sketch appears at the end of this page.

  • Part-dependent Label Noise: Towards Instance-dependent Label Noise, [code].

  • Learning with Group Noise, [code].

  • Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model, [code].

  • Geometry-aware Instance-reweighted Adversarial Training, [code].

  • Robust Early-learning: Hindering the Memorization of Noisy Labels, [code].

  • Confidence Scores Make Instance-dependent Label-noise Learning Possible, [code].

  • Maximum Mean Discrepancy Test is Aware of Adversarial Attacks, [code].

  • Learning Diverse-Structured Networks for Adversarial Robustness, [code].

  • Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels, [code].

  • Provably End-to-end Label-noise Learning without Anchor Points, [code].

  • Probabilistic Margins for Instance Reweighting in Adversarial Training, [code].

  • Instance-dependent Label-noise Learning under a Structural Causal Model, [code].

  • Adversarial Robustness Through the Lens of Causality, [code].

  • Exploiting Class Activation Value for Partial-Label Learning, [code].

  • Understanding and Improving Graph Injection Attack by Promoting Unnoticeability, [code].

  • Reliable Adversarial Distillation with Unreliable Teachers, [code].

  • Rethinking Class-Prior Estimation for Positive-Unlabeled Learning, [code].

  • Sample Selection with Uncertainty of Losses for Learning with Noisy Labels, [code].

  • Contrastive Learning with Boosted Memorization, [code].

  • Fast and Reliable Evaluation of Adversarial Robustness, [code].

  • Virtual Homogeneity Learning, [code].

  • Modeling Adversarial Noise for Adversarial Defense, [code].

  • Improving Adversarial Robustness via Natural and Adversarial Mutual Information, [code].

  • Understanding Robust Overfitting of Adversarial Training and Beyond, [code].

  • Estimating Instance-dependent Label-noise Transition Matrix using DNNs, [code].

  • NoiLIn: Improving Adversarial Training and Correcting Stereotype of Noisy Labels, [code].

  • Fair Classification with Instance-dependent Label Noise, [code].
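
Below is a minimal PyTorch-style sketch of the Co-teaching idea referenced above: two networks each rank a mini-batch by per-sample loss, keep the small-loss (likely clean) fraction, and train on the peer's selection rather than their own. The model, optimizer, and schedule names are illustrative assumptions for exposition, not the released code linked above.

```python
# A minimal Co-teaching sketch (assumed setup, not the authors' released code):
# two networks cross-train on each other's small-loss samples.
import torch
import torch.nn.functional as F

def keep_ratio(epoch, noise_rate, warmup_epochs=10):
    # R(t) = 1 - min(t / T_k, 1) * tau: gradually keep fewer samples,
    # since deep networks tend to memorize noisy labels later in training.
    return 1.0 - min(epoch / warmup_epochs, 1.0) * noise_rate

def coteaching_step(net_f, net_g, opt_f, opt_g, x, y, ratio):
    # Rank the mini-batch by per-sample loss without building a graph.
    with torch.no_grad():
        loss_f = F.cross_entropy(net_f(x), y, reduction="none")
        loss_g = F.cross_entropy(net_g(x), y, reduction="none")

    k = max(1, int(ratio * x.size(0)))   # number of small-loss samples kept
    idx_f = torch.argsort(loss_f)[:k]    # samples net_f believes are clean
    idx_g = torch.argsort(loss_g)[:k]    # samples net_g believes are clean

    # Cross-update: each network learns from its peer's selection, so the
    # two networks' different error patterns are not self-reinforced.
    opt_f.zero_grad()
    F.cross_entropy(net_f(x[idx_g]), y[idx_g]).backward()
    opt_f.step()

    opt_g.zero_grad()
    F.cross_entropy(net_g(x[idx_f]), y[idx_f]).backward()
    opt_g.step()
```

The cross-update, rather than each network training on its own small-loss picks, is the key design choice that distinguishes Co-teaching from single-network self-paced selection.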

At the Trustworthy Machine Learning and Reasoning Group, Department of Computer Science, HKBU, we are developing core machine learning methodology that can learn and reason from massive volumes of complex data automatically and federatively. Please check this page, which hosts the program code used in our published papers.

Codes and Data from RIKEN Team (Reproducible Research)

At the Imperfect Information Learning Team, Center for Advanced Intelligence Project (AIP), RIKEN, we are developing reliable and robust machine learning methods and algorithms that can cope with various forms of imperfect supervision, such as weak supervision and noisy supervision. Please check this page, which hosts the program code used in our published papers.
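
Several entries above (Are Anchor Points Really Indispensable in Label-noise Learning?, Dual T, Provably End-to-end Label-noise Learning without Anchor Points, and the instance-dependent works) revolve around the label-noise transition matrix T, where T[i][j] = P(noisy label j | clean label i). As a hedged illustration of the common backbone these papers refine, here is a minimal forward loss-correction sketch; the symmetric-noise T below is a placeholder assumption, not an estimate produced by any of the methods listed.

```python
# A minimal forward loss-correction sketch with a transition matrix T,
# where T[i, j] = P(noisy label j | clean label i). The symmetric-noise
# T is a placeholder; the papers above study how to estimate T from data.
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, noisy_labels, T):
    clean_posterior = F.softmax(logits, dim=1)  # model's estimate of P(clean y | x)
    noisy_posterior = clean_posterior @ T       # implied P(noisy y | x)
    return F.nll_loss(torch.log(noisy_posterior + 1e-12), noisy_labels)

# Placeholder T: symmetric label noise with rate 0.2 over 10 classes.
num_classes, noise_rate = 10, 0.2
T = torch.full((num_classes, num_classes), noise_rate / (num_classes - 1))
T.fill_diagonal_(1.0 - noise_rate)
```

When T is correctly specified, minimizing this corrected loss is consistent with training on clean labels, which is why much of the work above focuses on estimating T reliably: without anchor points, with reduced estimation error, or per instance.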