Welcome to our dedicated resource page aimed at helping researchers explore the use of Foundation Models (FM) in medical imaging (MI). Our goal is to provide a starting point for those interested in understanding and contributing to this rapidly evolving field. This is an organically evolving resource, so please get in touch if you have suggestions for additions or modifications.
🔑 Key Themes
- Foundation Models: Large deep learning models pre-trained on broad data at scale, which can then be fine-tuned (or otherwise adapted) for specific downstream tasks.
- “On the Opportunities and Risks of Foundation Models” by Bommasani et al. (2021): This paper discusses the transformative potential of foundation models across various domains, including their capabilities and limitations. It highlights the need for careful consideration of ethical, societal, and technical challenges associated with deploying such models.
- “The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources” by Longpre et al. (2023): This paper presents a curated list of over 250 tools to aid responsible development of foundation models across text, vision, and speech. It identifies gaps like insufficient tools for ethical data sourcing, lack of reproducibility in evaluations, an English-centric focus, and the need for system-level assessments of impact.
- Self-Supervised Learning: Utilizing unlabeled medical images to pre-train models, which is especially valuable given the limited availability of labeled medical data.
- “Masked Autoencoders Are Scalable Vision Learners” by He et al. (2022): Introduces Masked Autoencoders (MAE) for self-supervised learning in vision tasks, showing that reconstructing masked portions of images can serve as effective pre-training.
- “A Simple Framework for Contrastive Learning of Visual Representations” by Chen et al. (2020): Presents SimCLR, a framework for contrastive learning that learns representations by maximizing agreement between differently augmented views of the same data.
- Ethical and Regulatory Considerations: Ensuring patient privacy and data security when using large datasets, and complying with healthcare regulations like HIPAA and GDPR.
- “Ethics and Governance of Artificial Intelligence for Health” by the World Health Organization (2021): This comprehensive report explores the ethical and governance issues associated with the use of AI in health. It provides guidance on how to maximize the benefits of AI while minimizing risks, covering topics such as transparency, accountability, equity, and inclusiveness in the deployment of AI technologies in healthcare.
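As a toy illustration of the contrastive objective behind SimCLR-style pre-training (learning representations by maximizing agreement between two augmented views of the same image), here is a minimal NumPy sketch of the NT-Xent loss. The function name and the choice of NumPy over a deep learning framework are our own illustrative assumptions, not code from the paper.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss, as in SimCLR.

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    Each sample's positive is its counterpart view; all other 2N - 2
    embeddings in the batch act as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z.T / temperature                        # scaled cosine similarities
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # index of each sample's positive pair (view i <-> view i + N)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of the positive against all remaining candidates
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

In practice the loss is computed on projection-head outputs within each training batch; this sketch only shows the objective itself, e.g. perfectly aligned views (`z1 == z2`) yield a lower loss than unrelated embeddings.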
📖 Recommended Literature
Foundation Models improve Fairness
- “How Fair are Medical Imaging Foundation Models?” by Khan, Afzal, Mirza, and Fang, published in PMLR (2023).
- “Demographic Bias in Misdiagnosis by Computational Pathology Models” by Vaidya, A., Chen, R.J., Williamson, D.F.K., et al., published in Nature Medicine (2024).
Bias in Foundation Models
- “Risk of Bias in Chest Radiography Deep Learning Foundation Models” by Ben Glocker et al., published in Radiology: Artificial Intelligence (2023).
Benchmarks
- “FairMedFM: Fairness Benchmarking for Medical Imaging Foundation Models” by Jin, Ruinan, Zikang Xu, Yuan Zhong, Qiongsong Yao, Qi Dou, S. Kevin Zhou, and Xiaoxiao Li.
Generative Models improve Fairness
- “Generative Models Improve Fairness of Medical Classifiers Under Distribution Shifts” by Ktena et al., published in Nature Medicine (2024).
Balanced datasets
- “Classes Are Not Equal: An Empirical Study on Image Recognition Fairness” by Jiequan Cui, Beier Zhu, Xin Wen, Xiaojuan Qi, Bei Yu, and Hanwang Zhang, published in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2023).
🤖 Models
Retinal Foundation Models
- REMEDIS (BiT with SimCLR) (closed) 2022
- RetFound (ViT with MAE) (open) 2023
Chest X-Ray
- REMEDIS (BiT with SimCLR) (open) 2022
Pathology
- REMEDIS (BiT with SimCLR) (open) 2022
🤝 Community and Collaboration
Conferences and Workshops:
We hope these resources are helpful in your research journey. Foundation models have the potential to revolutionize medical imaging by enabling more accurate diagnostics and personalized treatment plans. We look forward to collaborating and growing together in this exciting field.