Jingfeng Zhang


Home


Jingfeng Zhang

Jingfeng Zhang (张 景锋)

Apr. 2023 onwards, Research Scientist @ RIKEN Center for Advanced Intelligence Project
 

Jan. 2021 - Mar. 2023, Postdoctoral Researcher @ Imperfect Information Learning Team
RIKEN Center for Advanced Intelligence Project
Supervised by Prof. Masashi Sugiyama

Aug. 2016 - Dec. 2020, Doctor of Philosophy @ School of Computing
National University of Singapore
Supervised by Prof. Mohan Kankanhalli

Sept. 2012 - Jul. 2016, Bachelor of Engineering @ Taishan College
Shandong University

[Google Scholar] [Github]
E-mail: jingfeng.zhang@riken.jp / jingfeng.zhang9660@gmail.com

I will soon serve as a Lecturer in the Faculty of Science at the University of Auckland, New Zealand.

TrustML YSS is inviting speakers from across the globe; contact me if you are interested!


News

  • March 2023: I am invited to give an onsite talk at the EPFL-CIS and RIKEN-AIP Joint Workshop on Machine Learning and Artificial Intelligence 2023.

  • Feb 2023: I am invited to give an onsite talk at the Rising Stars in AI Symposium 2023 at KAUST.

  • Jan. - Feb. 2023: I get married and enjoy a great Chinese New Year break ^^.

  • December 2022: I will serve as a reviewer for CVPR 2023 and KDD 2023, and as an associate editor for IEEE Transactions on Artificial Intelligence.

  • November 2022: Traveling to NeurIPS 2022. See you in New Orleans, US!

  • November 2022: I have given a talk at MIRAI 2.0; we welcome the Swedish researchers to visit RIKEN AIP. [News]

  • October 2022: I will travel to the Kansai area and give invited talks at Kyoto University, Osaka University, RIKEN NRT@Nara and RIKEN Sci & Technol Hub@Kobe.

  • September 2022: Two papers were accepted at NeurIPS 2022!

  • September 2022: I have given invited talks at Ludwig Maximilian University of Munich, the Technical University of Munich, and the Technical University of Berlin.

  • September 2022: I am invited to serve as a reviewer for AISTATS 2023.

  • September 2022: I have served on the Ph.D. dissertation defense committee at the University of Luxembourg. Congratulations to Dr. Salah Ghamizi for passing his Ph.D. defense.

  • September 2022: I have given a research talk at the EPFL-CIS & RIKEN-AIP Joint Workshop on Machine Learning, featuring rising stars in the field from Switzerland and Japan. [EPFL's Video] [RIKEN's Video]

  • September 2022: I have given a research talk at MBZUAI. [Video]

  • August 2022: I will serve as a reviewer for AAAI 2023 and ICLR 2023.

  • August 2022: I have given an invited talk at the Visual Informatics Group at the University of Texas at Austin.

  • August 2022: I have given an invited talk at CCF Jinan Trusted Artificial Intelligence International Academic Forum. [link]

  • July 2022: The research proposal "Evaluating and Enhancing the Reliability of Semi-Supervised Learning" has been accepted at RIKEN-Kyushu Univ Science & Technology Hub Collaborative Research Program FY2022, and I serve as the co-leader.

  • June 2022: I am invited to join the thesis defense Jury and will visit the University of Luxembourg in September.

  • June 2022: I gave an online talk at the University of Manchester titled "Pioneer Studies on Adversarial Robustness of Deep Image Denoisers and Non-parametric Two-Sample Tests."

  • May 2022: I am invited to give an online talk at the EPFL CIS – RIKEN AIP workshop in September. [AIP Web] [EPFL-CIS Web]

  • May 2022: I am invited to give an onsite talk at Information-Based Induction Sciences and Machine Learning (IBISML) titled "Evaluating and Enhancing Reliabilities of AI-Powered Tools."

  • May 2022: I will serve as a reviewer for the Conference on Neural Information Processing Systems (NeurIPS 2022).

  • April 2022: I gave an online talk at Shanghai Jiao Tong University titled "Adversarial Robustness from Basic Science to some Applications."

  • March 2022: I am invited to serve as a reviewer for the European Conference on Computer Vision (ECCV 2022).

  • March 2022: My research proposal "Adversarial robustness meets imperfect training set" has been accepted in the Grants-in-Aid for Scientific Research-KAKENHI, Early-Career Scientists, JSPS.

  • March 2022: I am honored to receive the RIKEN Ohbu award (research incentive award).

  • March 2022: I will give an invited talk at AAAI22' Workshop on Adversarial Machine Learning and Beyond. [Video]

  • February 2022: I am invited to serve as a reviewer for Transactions on Machine Learning Research (TMLR).

  • February 2022: I will be serving as a reviewer for ACM SIGKDD 2022.

See more news here.


Research

    I am a machine learning researcher interested in trustworthy machine learning. My long-term goal is to develop safe, trustworthy, reliable, and extensible machine learning (ML) technologies. Researching adversarial training (AT) is the first step towards this long-term goal.
    [Background] It is widely observed that the outputs of naturally occurring systems are mostly smooth with respect to their inputs, and ML systems should follow this principle. However, many existing ML systems neglect it, which causes trustworthiness issues. One example is small adversarial noise breaking an ML model’s predictions, even if the model is trained with standard isotropic smoothing techniques such as random noise injection or random data augmentation. Another example is crafted noise in the training set manipulating the ML learning process, which yields an untrusted (poisoned) model.
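To make the first example concrete, here is a small numerical sketch of why a perturbation that is tiny per coordinate can still break a prediction. The fixed linear model, the input, and the budget eps are all hypothetical illustrations, not results from the papers below:

```python
import numpy as np

# Hypothetical toy setting: a fixed linear model w and an
# FGSM-style worst-case perturbation of an input x.
rng = np.random.default_rng(0)
w = rng.normal(size=50)   # model weights
x = rng.normal(size=50)   # a natural input
y = np.sign(w.dot(x))     # treat the model's own label as "correct"

def margin(x_):
    # y * w^T x: positive means a confident, correct prediction.
    return y * w.dot(x_)

# The gradient of the margin w.r.t. x is y * w, so stepping each
# coordinate by -eps * sign(y * w) hurts the margin as much as
# possible under an L-infinity budget of eps.
eps = 0.1
x_adv = x - eps * np.sign(y * w)

# Each coordinate moves by only eps, yet the margin drops by exactly
# eps * ||w||_1, a quantity that grows with the input dimension.
print(margin(x), margin(x_adv))
```

Isotropic random noise of the same magnitude would shift the margin by only about eps * ||w||_2 on average, which is why smoothing with random noise alone does not confer adversarial robustness.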

    What is AT? AT is a popular robust learning algorithm. Unlike standard training (ST), which learns directly from natural data, AT learns from adversarial data that lie within a bounded distance of their natural counterparts. At each training epoch, the adversarial data are generated in the direction that reduces the model’s probability of correct classification. AT pursues two goals: (1) correctly classify the data (as in ST) and (2) thicken the decision boundary so that no data lie near it.
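The recipe above is a min-max problem: an inner maximization crafts adversarial data within a bounded L-infinity distance of the natural data, and an outer minimization fits the model to them. A minimal sketch on a toy NumPy logistic-regression model follows; the synthetic data, the eps budget, and the step sizes are all illustrative assumptions, not the author's exact setup:

```python
import numpy as np

# Minimal adversarial-training (min-max) sketch on toy separable data.
rng = np.random.default_rng(1)
n, d = 200, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
Y = np.sign(X @ w_true)                    # labels in {-1, +1}

def sigmoid(z):
    return 0.5 * (1.0 + np.tanh(0.5 * z))  # numerically stable

def loss_and_grad(w, X, Y):
    # Mean logistic loss and its gradient w.r.t. the weights w.
    z = Y * (X @ w)
    p = sigmoid(-z)                        # per-example mistake probability
    loss = np.mean(np.logaddexp(0.0, -z))
    grad = -(X * (Y * p)[:, None]).mean(axis=0)
    return loss, grad

def pgd_attack(w, X, Y, eps=0.1, alpha=0.05, steps=5):
    # Inner maximization: ascend the loss w.r.t. the inputs, then
    # project back into the L-infinity ball of radius eps around X.
    X_adv = X.copy()
    for _ in range(steps):
        z = Y * (X_adv @ w)
        grad_x = -(Y * sigmoid(-z))[:, None] * w[None, :]
        X_adv = X_adv + alpha * np.sign(grad_x)        # ascent step
        X_adv = X + np.clip(X_adv - X, -eps, eps)      # projection
    return X_adv

# Outer minimization: gradient descent on the adversarial data.
w = np.zeros(d)
for _ in range(100):
    X_adv = pgd_attack(w, X, Y)
    _, g = loss_and_grad(w, X_adv, Y)
    w -= 0.5 * g

clean_acc = np.mean(np.sign(X @ w) == Y)
adv_loss, _ = loss_and_grad(w, pgd_attack(w, X, Y), Y)
print(f"clean accuracy: {clean_acc:.2f}, adversarial loss: {adv_loss:.3f}")
```

Replacing `pgd_attack` with the identity map recovers standard training; the only difference in AT is that each update is computed on the worst-case points inside the eps-ball, which is what thickens the decision boundary.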

    My research develops effective robust learning algorithms, discovers new knowledge, applies the developed techniques to enhance reliability, and further evaluates the vulnerabilities of learning algorithms.
  • To evaluate the vulnerabilities of learning algorithms, we develop training-phase attacks.
    Know the enemy and know yourself: Towards poisoning attacks against adversarial training in 2022. (under review)
    Model and method: Training-time attack for cooperative multi-agent reinforcement learning in 2022. (under review)
  • Besides, I am also interested in privacy protection.
    Turn poisoning into a strength: preventing unauthorized exploitation by poisoning voice in 2022. (under review)
    Bilateral dependency optimization: defending against model-inversion attacks in 2021. (accepted at KDD 2022!)


Publications

(* indicates equal contributions)
    1. GAT: Guided Adversarial Training with Pareto-optimal Auxiliary Tasks.
      S. Ghamizi, J. Zhang, M. Cordy, M. Papadakis, M. Sugiyama, Y. Le Traon.
      The 40th International Conference on Machine Learning (ICML 2023), [PDF] [Code] [Poster].

    2. Synergy-of-Experts: Collaborate to Improve Adversarial Robustness.
      S. Cui*, J. Zhang*, J. Liang, B. Han, M. Sugiyama, C. Zhang.
      36th Annual Conference on Neural Information Processing Systems (NeurIPS 2022), [PDF] [Code] [Poster].

    3. Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks.
      J. Zhou*, J. Zhou*, J. Zhang, T. Liu, G. Niu, B. Han, M. Sugiyama.
      36th Annual Conference on Neural Information Processing Systems (NeurIPS 2022), [PDF] [Code] [Poster].

    4. NoiLin: Improving Adversarial Training and Correcting Stereotype of Noisy Labels.
      J. Zhang*, X. Xu*, B. Han, T. Liu, G. Niu, L. Cui, M. Sugiyama.
      Transactions on Machine Learning Research (TMLR 2022), [PDF] [Code].

    5. Bilateral Dependency Optimization: Defending Against Model-inversion Attacks.
      X. Peng*, F. Liu*, J. Zhang, L. Lan, J. Ye, T. Liu, B. Han.
      28th ACM SIGKDD conference on Knowledge Discovery and Data Mining (KDD 2022), [PDF] [Code] [Poster].

    6. Adversarial Attacks and Defense For Non-parametric Two Sample Tests.
      X. Xu*, J. Zhang*, F. Liu, M. Sugiyama, and M. Kankanhalli.
      The 39th International Conference on Machine Learning (ICML 2022), [PDF] [Code] [Poster].

    7. Towards Adversarially Robust Image Denoising.
      H. Yan, J. Zhang, J. Feng, M. Sugiyama, and V. Y. F. Tan.
      The 31st International Joint Conference on Artificial Intelligence (IJCAI 2022), [PDF] [Code] [Poster].

    8. Decision Boundary-aware Data Augmentation for Adversarial Training.
      C. Chen*, J. Zhang*, X. Xu, L. Lyu, C. Chen, T. Hu, G. Chen.
      IEEE Transactions on Dependable and Secure Computing (IEEE TDSC 2022), [PDF] [Code].

    9. Reliable Adversarial Distillation with Unreliable Teachers.
      J. Zhu, J. Yao, B. Han, J. Zhang, T. Liu, G. Niu, J. Zhou, J. Xu, H. Yang.
      In Proceedings of 10th International Conference on Learning Representations (ICLR 2022), [PDF] [Code] [Poster].

    10. CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection.
      H. Yan, J. Zhang, G. Niu, J. Feng, V. Y. F. Tan, M. Sugiyama.
      In Proceedings of 38th International Conference on Machine Learning (ICML 2021), [PDF] [Code] [Poster].

    11. Maximum Mean Discrepancy is Aware of Adversarial Attacks.
      R. Gao*, F. Liu*, J. Zhang*, B. Han, T. Liu, G. Niu, and M. Sugiyama.
      In Proceedings of 38th International Conference on Machine Learning (ICML 2021), [PDF] [Code] [Poster].

    12. Learning Diverse-Structured Networks for Adversarial Robustness.
      X. Du*, J. Zhang*, B. Han, T. Liu, Y. Rong, G. Niu, J. Huang and M. Sugiyama.
      In Proceedings of 38th International Conference on Machine Learning (ICML 2021), [PDF] [Code] [Poster].

    13. Geometry-aware Instance-reweighted Adversarial Training.
      J. Zhang, J. Zhu, G. Niu, B. Han, M. Sugiyama, and M. Kankanhalli.
      In Proceedings of 9th International Conference on Learning Representations (ICLR 2021, Oral), [PDF] [Code] [Poster].

    14. Attacks Which Do Not Kill Training Make Adversarial Learning Stronger.
      J. Zhang*, X. Xu*, B. Han, G. Niu, L. Cui, M. Sugiyama, and M. Kankanhalli.
      In Proceedings of 37th International Conference on Machine Learning (ICML 2020), [PDF] [Code] [Poster].

    15. Towards Robust ResNet: A Small Step but A Giant Leap.
      J. Zhang, B. Han, L. Wynter, B. Low, and M. Kankanhalli.
      In Proceedings of 28th International Joint Conference on Artificial Intelligence (IJCAI 2019), [PDF] [Code] [Poster].


Professional Service

    Reviewer for prestigious conferences

    ICLR'21-23, ICML'20-22, NeurIPS'21-22, CVPR'22-23, IJCAI'20-22, AAAI'22-23, AISTATS'22-23, ACM Multimedia'21, ACM SIGKDD'22-23, ACML'21-23, etc.

    Reviewer for prestigious journals

    MLJ, TMLR, TNNLS, PR, Neural Networks, etc.

    Research advisor

    (Students who closely work or worked with me.)
    [09/2020 - Present] Hanshu Yan PhD Student@NUS (with Dr. Vincent Tan)
    [04/2019 - Present] Xilie Xu Undergrad@SDU → PhD Student@NUS (with Prof. Mohan Kankanhalli)
    [03/2020 - Present] Jianing Zhu Undergrad@SCU → PhD Student@HKBU (with Dr. Bo Han)
    [04/2021 - Present] Bo Song Master Student@SDU (with Prof. Lei Liu)
    [04/2021 - Present] Ting Zhou Master Student@SDU (with Prof. Lei Liu)
    [07/2020 - 3/2021] Xuefeng Du (now PhD Student@UW-Madison) RA@HKBU (with Dr. Bo Han)
    [07/2020 - 3/2021] Ruize Gao (now PhD Student@NUS) RA@HKBU (with Dr. Bo Han)


Funding

    RIKEN-Kyushu Univ Science & Technology Hub Collaborative Research Program, Japan [FY2022]


Brief Biography

    Jingfeng Zhang is currently a research scientist at RIKEN-AIP in Tokyo, Japan.

    During his undergraduate studies, he spent two months at the University of Cambridge and three months at the University of California, Los Angeles. He then went to Singapore for his Ph.D., where he had a happy life with many lovely friends and supportive mentors. He likes Malaysian durian!

    During his Ph.D. study, he had two valuable internship experiences. In 2018, he interned at IBM Singapore, advised by Laura Wynter, working on the "robust ResNet" project. In 2019, he interned with the Imperfect Information Learning Team at RIKEN-AIP, advised by Bo Han, Gang Niu, and Masashi Sugiyama, working on the "adversarial robustness" project.