Master Assignment
Visual Prompt Tuning for Generalized Medical Foundation Models
Type: Master CS
Period: TBD
Student: (Unassigned)
If you are interested, please contact:
Medical foundation models are being released continuously, promising generalizability across multiple domains and imaging modalities. For specialized tasks, however, their performance benefits from fine-tuning and, more recently, from prompt tuning. In this project, you will:
- Fine-tune medical foundation models for a downstream task of your choice (e.g. visual segmentation/classification, multimodal…)
- Implement appropriate prompt tuning for the same medical foundation model and downstream task.
- Compare the two approaches in terms of efficiency and performance across medical datasets, paying particular attention to distribution shifts.
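To give a flavor of the second step: in visual prompt tuning (VPT-style), a small set of learnable prompt tokens is prepended to the patch-token sequence of a frozen transformer backbone, so only the prompts and a task head are updated during training. The sketch below is a minimal toy illustration in PyTorch, not the implementation expected in the project; the class and parameter names (`PromptTunedEncoder`, `num_prompts`) are hypothetical.

```python
import torch
import torch.nn as nn

class PromptTunedEncoder(nn.Module):
    """Toy visual prompt tuning: learnable prompt tokens are prepended to the
    patch-token sequence of a frozen transformer encoder (shallow VPT-style)."""

    def __init__(self, backbone: nn.Module, embed_dim: int, num_prompts: int = 8):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False          # backbone stays frozen
        # the only trainable parameters: prompt embeddings + a task head
        self.prompts = nn.Parameter(torch.randn(num_prompts, embed_dim) * 0.02)
        self.head = nn.Linear(embed_dim, 2)  # e.g. binary lesion classification

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (batch, seq_len, embed_dim), as produced by a patch embedder
        b = patch_tokens.shape[0]
        prompts = self.prompts.unsqueeze(0).expand(b, -1, -1)
        x = torch.cat([prompts, patch_tokens], dim=1)   # prepend prompt tokens
        x = self.backbone(x)
        return self.head(x.mean(dim=1))                 # pool tokens, classify

# usage sketch with a tiny stand-in backbone
dim = 32
layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
model = PromptTunedEncoder(nn.TransformerEncoder(layer, num_layers=2), embed_dim=dim)
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
# only 'prompts' and the head reach the optimizer; the backbone is untouched
```

Compared with full fine-tuning, the number of updated parameters here is tiny, which is exactly the efficiency axis the third step asks you to measure.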
REFERENCES:
- Prompt-based Adaptation in Large-scale Vision Models: A Survey
- CVPR 2024: Foundation Models + Visual Prompting Are About to Disrupt Computer Vision
- Promise: Prompt-Driven 3d Medical Image Segmentation Using Pretrained Image Foundation Models
- Visual Prompt Tuning
- Awesome-Medical-Efficient-Fine-Tuning
- Facing the Elephant in the Room: Visual Prompt Tuning or Full Finetuning?
- Hulu-Med: A Transparent Generalist Model towards Holistic Medical Vision-Language Understanding
- MedSegBench: A comprehensive benchmark for medical image segmentation in diverse data modalities
- OpenMIBOOD: Open Medical Imaging Benchmarks for Out-Of-Distribution Detection
