
[Feature] Any plans to fine-tune the OpenGVLab/Mini-InternVL2-4B-DA-Medical VLM with the UCSC-VLAA/MedTrinity-25M dataset? #881

Open

satheeshkola-532 opened this issue on Jan 27, 2025

Motivation

MedTrinity-25M is currently the largest publicly available multimodal medical dataset.
It provides rich annotations that support a comprehensive range of multimodal tasks, such as captioning and report generation, as well as vision-centric tasks like classification and segmentation. The dataset could be used for large-scale pre-training of multimodal medical AI models, contributing to the development of future foundation models in the medical domain.
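For reference, a minimal sketch of how the dataset could be streamed from Hugging Face as the input side of a fine-tuning pipeline. The config name and sample schema below are assumptions; please check the dataset card (linked under Related resources) for the actual configs and fields.

```python
# Minimal sketch: stream MedTrinity-25M samples for a fine-tuning pipeline.
# ASSUMPTION: the "25M_demo" config name and the sample schema are guesses;
# consult https://huggingface.co/datasets/UCSC-VLAA/MedTrinity-25M for the
# real config names and fields before use.
from itertools import islice

from datasets import load_dataset

# Streaming avoids downloading all ~25M image-text pairs up front.
ds = load_dataset("UCSC-VLAA/MedTrinity-25M", "25M_demo",
                  split="train", streaming=True)

# Inspect a few samples to discover the actual schema.
for sample in islice(ds, 3):
    print(sample.keys())
```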

Related resources

Homepage: https://github.com/yunfeixie233/MedTrinity-25M
Paper: https://arxiv.org/abs/2408.02900
GitHub repo: https://github.com/UCSC-VLAA/MedTrinity-25M
Dataset: https://huggingface.co/datasets/UCSC-VLAA/MedTrinity-25M

Additional context

No response
