Bingjie Yan
Institute of Computing Technology, Chinese Academy of Sciences

I am a final-year master's student majoring in Computer Science @Institute of Computing Technology, Chinese Academy of Sciences, under the supervision of Prof. Yiqiang Chen and Prof. Xinlong Jiang. Before that, I received my B.Eng. degree in Software Engineering from @Hainan University.

Research Interests:
AI4Medical & Healthcare: Medical Image Analysis, Radiology and Biomedical Imaging, Computational Biology.
Foundation Model: Vision-Language Model, Collaboration of Large and Small Models, Privacy & Security of LLMs.
Federated Learning: Trustworthy, Privacy-preserving, Heterogeneity, and FL Application in Medical & Healthcare.
Optimization: Distributed Optimization, Online Convex Optimization, Long-term Constraints.

Please feel free to send me an email if you have any questions. I would be glad to connect with you.

I am looking for a Ph.D. position starting in Fall 2025. Please contact me if you are interested!


Education
  • Institute of Computing Technology, Chinese Academy of Sciences
Master's Student
    Sep. 2022 - present
  • Hainan University
B.Eng. in Software Engineering
    Sep. 2018 - Jul. 2022
Honors & Awards
  • National Scholarship (top 3%)
    2024
  • Outstanding Graduate of Hainan University
    2022
Services
  • IEEE Hainan University Branch
    President
    Mar. 2021 - Jun. 2022
  • Association of Robotics and Artificial Intelligence
    Co-Founder & Vice President
    Jul. 2020 - Jun. 2022
  • Cyberspace Security Association @Hainan University
    Vice President
    Sep. 2020 - Jun. 2021
Experience
  • Hong Kong Baptist University
    Research Assistant
    Jun. 2024 - Jul. 2024
  • FedML Inc.
    Remote Research Intern
    Jun. 2022 - Sep. 2022
News
2024
Our paper "Survey on Knowledge Distillation for Large Language Models: Methods, Evaluation, and Application" is accepted by ACM TIST (JCR-Q1)!
Sep 06
Our paper "EyeGraphGPT: Knowledge Graph Enhanced Multimodal Large Language Model for Ophthalmic Report Generation" is accepted by IEEE BIBM'24 (CCF-B). Congrats to Zhirui!
Aug 21
Our paper "Buffalo: Biomedical Vision-Language Understanding with Cross-Modal Prototype and Federated Foundation Model Collaboration" is accepted by ACM CIKM'24 (CCF-B, CORE-A). Thanks to the co-authors!
Jul 16
Our paper "Correlation-Driven Multi-Modality Graph Decomposition for Cross-Subject Emotion Recognition" is accepted by ACM MM'24 (CCF-A, CORE-A*). Congrats to Wuliang!
Jul 16
Our paper "PrivFusion: Privacy-Preserving Model Fusion via Decentralized Federated Graph Matching" is accepted by TKDE (CCF-A, JCR-Q1, CORE-A*). Congrats to Qian Chen!
Jun 26
Selected Publications
Buffalo: Biomedical Vision-Language Understanding with Cross-Modal Prototype and Federated Foundation Model Collaboration

Bingjie Yan, Qian Chen, Yiqiang Chen, Xinlong Jiang, Wuliang Huang, Bingyu Wang, Zhirui Wang, Chenlong Gao, Teng Zhang (corresponding author)

ACM CIKM'24, CCF-B, CORE-A (Acceptance Rate: 22.7%) 2024 Oral

Federated learning (FL) enables collaborative learning across multiple biomedical data silos with multimodal foundation models while preserving privacy. Due to the heterogeneity in data processing and collection methodologies across diverse medical institutions and the varying medical inspections patients undergo, modal heterogeneity exists in practical scenarios, where severe modal heterogeneity may even prevent model training. With privacy considerations, data transfer cannot be permitted, restricting knowledge exchange among different clients. To tackle these issues, we propose a cross-modal prototype imputation method for vision-language understanding (Buffalo) with only a slight increase in communication cost, which can improve the performance of fine-tuning general foundation models for downstream biomedical tasks. We conducted extensive experiments on medical report generation and biomedical visual question-answering tasks. The results demonstrate that Buffalo can fully utilize data from all clients to improve model generalization compared to other modal imputation methods in three modal heterogeneity scenarios, approaching or even surpassing the performance in the ideal scenario without missing modalities.

Model Trip: Enhancing Privacy and Fairness in Model Fusion across Multi-Federations for Trustworthy Global Healthcare

Qian Chen, Yiqiang Chen, Bingjie Yan, Xinlong Jiang, Xiaojin Zhang, Yan Kang, Teng Zhang, Wuliang Huang, Chenlong Gao, Lixin Fan, Qiang Yang (corresponding author)

ICDE'24, CCF-A 2024 Oral

Federated Learning has emerged as a revolutionary innovation in the evolving landscape of global healthcare, fostering collaboration among institutions and facilitating collaborative data analysis. As practical applications continue to proliferate, numerous federations have formed in different regions. The optimization and sustainable development of federation-pretrained models have emerged as new challenges. These challenges primarily encompass privacy, population shift and data dependency, which may lead to severe consequences such as the leakage of sensitive information within models and training samples, unfair model performance and resource burdens. To tackle these issues, we propose FairFusion, a cross-federation model fusion approach that enhances privacy and fairness. FairFusion operates across federations within a Model Trip paradigm, integrating knowledge from diverse federations to continually enhance model performance. Through federated model fusion, multi-objective quantification and optimization, FairFusion obtains trustworthy solutions that excel in utility, privacy and fairness. We conduct comprehensive experiments on three public real-world healthcare datasets. The results demonstrate that FairFusion achieves outstanding model fusion performance in terms of utility and fairness across various model structures and subgroups with sensitive attributes while guaranteeing model privacy.

FedEYE: A Scalable and Flexible End-to-end Federated Learning Platform for Ophthalmology

Bingjie Yan*, Danmin Cao*, Xinlong Jiang, Yiqiang Chen, Weiwei Dai, Fan Dong, Wuliang Huang, Teng Zhang, Chenlong Gao, Qian Chen, Zhen Yan, Zhirui Wang (* equal contribution, corresponding author)

Patterns (Cell Press, JCR-Q1, IF=6.7) 2024

Federated learning (FL) enables training machine learning models on decentralized medical data while preserving privacy. Despite growing research on FL algorithms and systems, building real-world FL applications requires extensive expertise, posing barriers for medical researchers. FedEYE, an end-to-end FL platform tailored for ophthalmologists without programming skills, is developed here to easily create federated projects on tasks like image classification. The platform provides rich capabilities, scalability, flexible deployment, and separation of concerns. With user-friendly interfaces and comprehension of underlying mechanisms, FedEYE strives to democratize FL for ophthalmology.
