[Feature] Any plans to fine-tune the OpenGVLab/Mini-InternVL2-4B-DA-Medical VLM with the UCSC-VLAA/MedTrinity-25M dataset?
Motivation
MedTrinity-25M is currently the largest publicly available multimodal medical dataset.
It provides exceptionally rich annotations, supporting a comprehensive range of multimodal tasks such as captioning and report generation, as well as vision-centric tasks like classification and segmentation. The dataset could be used for large-scale pre-training or fine-tuning of multimodal medical AI models, contributing to the development of future foundation models in the medical domain.
Related resources
Homepage: https://github.com/yunfeixie233/MedTrinity-25M
Paper: https://arxiv.org/abs/2408.02900
GitHub repo: https://github.com/UCSC-VLAA/MedTrinity-25M
Dataset: https://huggingface.co/datasets/UCSC-VLAA/MedTrinity-25M
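For context, here is a minimal sketch of how the dataset could be pulled from the Hugging Face Hub to inspect its fields before planning a fine-tuning run. The split name and record fields are assumptions (typical of Hugging Face image-text datasets), not confirmed by this issue; check the dataset card for the actual configs and fields.

```python
# Sketch only: "train" split and record field names are assumptions, not taken
# from this issue or the dataset card; verify against the dataset card first.
from datasets import load_dataset

# Stream so the full ~25M samples are not downloaded up front.
ds = load_dataset("UCSC-VLAA/MedTrinity-25M", split="train", streaming=True)

# Peek at a few records to see which fields are available for building
# image-text pairs for fine-tuning.
for i, sample in enumerate(ds):
    print(sample.keys())
    if i >= 2:
        break
```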
Additional context
No response