Add Vision / VLM models to environments and GRPO trainer #409
Conversation
Commit messages (truncated by the page):
- …ment and grpo trainer, mainly the get lgps function
- …utting TODO where I think I need to pass vision data
- … need to check how it goes with text only and need to test it with Qwen 2.5 VL
- …e with pixel values after
- …o improve prompt so some pass + need to fix for text only
- …e on the reward calculation of the env for Japanese
Sounds good, I'll work on that!
Hello @willccbb,

For point 2, I managed to clean up the problematic dependencies, handle lazy imports, and reorganize the code into utils/image_utils.py and utils/processing_utils.py. Is the current pattern OK?

For point 1, I tried to find a task challenging enough to demonstrate the relevance of the training, but not too expensive to run. I used an OCR environment I set up on Prime Hub (ocr-vl): https://app.primeintellect.ai/dashboard/environments/ulrick-bl/ocr-vl

I trained Qwen 2.5 VL 3B on the "hi" (Hindi) scope, since the model doesn't perform very well on this task. The reward is mainly based on format and CER. Note that there are some issues in the dataset the env is built on, such as samples where the screenshot fails because of a popup; my small training setup was very sensitive to these.

I trained with the following setup: `args = vf.grpo_defaults(run_name="ocr-vl")`

I would say the training is stable and started off very well. The first slowdown in reward progression was due to a series of poor-quality images in the data, like the ones I showed earlier. Nevertheless, we can observe the model improving and maintaining stable training performance on the task, which highlights the relevance of the implementation. If needed, I can spend some time cleaning the dataset and retraining. Let's keep in touch if there's anything else to adjust, test, or adapt.
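The CER-based reward mentioned above can be sketched as follows. This is a minimal illustration, not the env's actual reward code: the `cer` helper, the `ocr_reward` combination, and the weights are all hypothetical.

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: Levenshtein edit distance / reference length."""
    m, n = len(reference), len(hypothesis)
    prev = list(range(n + 1))  # distances for the empty reference prefix
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n] / max(m, 1)

def ocr_reward(reference: str, hypothesis: str, format_ok: bool,
               w_format: float = 0.2, w_cer: float = 0.8) -> float:
    """Combine a format check with (1 - CER); hypothetical weighting."""
    accuracy = max(0.0, 1.0 - cer(reference, hypothesis))
    return w_format * float(format_ok) + w_cer * accuracy
```

A CER of 0 means a perfect transcription; values above 1 (more edits than reference characters) are clipped to zero accuracy before weighting.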
+1 @willccbb, any idea when this PR might be merged or available?


Description
This Pull Request introduces support for Vision-Language Models (VLMs) in the environments and the GRPO trainer. It works by tracking pixel values and image grids as the base inputs, and by converting images to Base64 to comply with the vLLM/OpenAI chat format. The implementation works with both standard text tokenizers and multimodal/mixin processors. It also adds image and answer logging in the WandB table to simplify data analysis during training.
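The Base64 conversion described above can be sketched like this. A minimal sketch, not the PR's actual code: the helper names are hypothetical, but the data-URL shape matches the OpenAI-style `image_url` content parts that vLLM accepts.

```python
import base64

def image_bytes_to_data_url(image_bytes: bytes, mime: str = "image/png") -> str:
    """Encode raw image bytes as a data URL for OpenAI-style chat content."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"

def image_message(text: str, image_bytes: bytes) -> dict:
    """Build a user message mixing a text part and an image_url part."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url",
             "image_url": {"url": image_bytes_to_data_url(image_bytes)}},
        ],
    }
```

Encoding as a data URL keeps the request self-contained, at the cost of a roughly 4/3 payload size increase over the raw bytes.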
The motivation for adding VLM support is strategic: I believe Vision-Language environments are critical for advancing AGI and Reinforcement Learning (RL) research. This feature was necessary to begin testing several promising, high-value environments.
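The tokenizer/processor compatibility mentioned in the description could be routed roughly as follows. This is a hedged sketch under assumed names: `prepare_inputs` and the `image_processor` duck-typing check are illustrations, not the PR's implementation.

```python
def prepare_inputs(processor, text, images=None):
    """Route text-only vs. multimodal inputs through one entry point.

    `processor` is either a plain tokenizer (text only) or a multimodal
    mixin processor exposing an `image_processor` attribute; this routing
    heuristic is an assumption, not the PR's actual code.
    """
    if images and hasattr(processor, "image_processor"):
        # Multimodal processors typically return pixel_values (and, for
        # Qwen 2.5 VL, image_grid_thw) alongside input_ids.
        return processor(text=text, images=images, return_tensors="pt")
    return processor(text, return_tensors="pt")
```

Duck-typing on the processor keeps text-only models on the unchanged tokenizer path, so existing environments are unaffected.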
Type of Change
Testing
uv run pytest
It was end-to-end tested with 3 Prime-RL envs:
OCR VL with Qwen 2.5 VL 3B and 7B: https://app.primeintellect.ai/dashboard/environments/ulrick-bl/ocr-vl (single-turn image)
Rebus VL Thinking with Qwen 2.5 VL 7B: https://app.primeintellect.ai/dashboard/environments/ulrick-bl/rebus-vl-thinking (single-turn image)
Semantix with Qwen 2.5 0.5B: https://app.primeintellect.ai/dashboard/environments/ulrick-bl/semantic (multi-turn text)
Test Coverage
Checklist