
Conversation

@Lu-Yang666

No description provided.

@gemini-code-assist
Contributor

Summary of Changes

Hello @Lu-Yang666, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the reinforcement learning framework by integrating support for Process Reward Models (PRM). It introduces new configuration options, a specialized workflow for combining PRM-derived rewards with standard rewards, and a standalone service for efficient PRM inference. The changes are exemplified by a new training script for the GSM8K dataset, aiming to leverage the granular feedback of PRMs to improve model performance on complex reasoning tasks.

Highlights

  • New PRM Configuration: Introduced PRMRewardHyperparameters and PRMConfig dataclasses to define and manage parameters specific to Process Reward Model (PRM) integration within the training framework.
  • PRM-Enhanced Workflow: Implemented a new PRMRLVRWorkflow that incorporates PRM scores into the reward calculation, featuring a reward shaping mechanism and a clipping strategy to refine the learning signal.
  • External PRM Scoring Service: Added a FastAPI service (prm_service.py) designed to host and provide scores from a PRM model (Qwen2.5-Math-PRM-7B), enabling decoupled and scalable PRM inference (see the sketch after this list).
  • GSM8K PRM Training Example: Provided a new example script (gsm8k_dapo_prm.py) demonstrating how to set up and run Dapo training on the GSM8K dataset with the newly integrated PRM functionality.
  • Cache Directory Update: Modified the default local cache directory path in cli_args.py and launcher.py to a new, more specific location.
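To make the service highlight concrete, below is a minimal, hypothetical sketch of what prm_service.py could look like, pieced together from the details visible in this review: the /score route and the {"text": ...} / {"reward": ...} payloads used by the workflow's client call, and the commented-out scoring code quoted further down. The Hugging Face model ID, the <extra_0> step separator, and returning the first step's positive-class probability are assumptions mirrored from that commented-out code, not necessarily the PR's exact implementation.

import torch
import torch.nn.functional as F
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoModel, AutoTokenizer

PRM_PATH = "Qwen/Qwen2.5-Math-PRM-7B"  # assumed model path

app = FastAPI()
tokenizer = AutoTokenizer.from_pretrained(PRM_PATH, trust_remote_code=True)
model = AutoModel.from_pretrained(
    PRM_PATH, torch_dtype=torch.bfloat16, trust_remote_code=True
).eval()


class ScoreRequest(BaseModel):
    text: str


@app.post("/score")
def score(req: ScoreRequest) -> dict:
    input_ids = tokenizer.encode(req.text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model(input_ids=input_ids)
    # <extra_0> marks step boundaries; keep probabilities only at those positions.
    step_sep_id = tokenizer.encode("<extra_0>")[0]
    token_masks = input_ids == step_sep_id
    probabilities = F.softmax(outputs[0], dim=-1) * token_masks.unsqueeze(-1)
    sample = probabilities[0]
    # Aggregation copied from the commented-out code in the workflow:
    # take the positive-class probability at the first step separator.
    reward = sample[sample != 0].view(-1, 2)[:, 1][0].item()
    return {"reward": reward}

Such a service could be started with, e.g., uvicorn prm_service:app --port 8001, which matches the http://localhost:8001/score address used by the workflow (or whatever URL ends up in the config, as discussed in the review comments below).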


@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces support for Process-based Reward Models (PRM) by adding new configurations, a dedicated workflow, an example training script, and a PRM scoring service. The changes are a good step towards incorporating PRM into the training loop.

My review focuses on improving code quality, maintainability, and portability. I've identified several critical issues, including a bug in the reward calculation and syntax errors in shell scripts. I've also pointed out multiple instances of hardcoded, user-specific paths and other values that should be made configurable to make the code more portable and easier to use in different environments. Additionally, there's some dead code and debug prints that should be cleaned up.

Please address the critical and high-severity comments to ensure the new functionality is robust and correct.

# probabilities = F.softmax(prm_outputs[0], dim=-1)* token_masks.unsqueeze(-1)
# sample = probabilities[0]
# prm_reward = sample[sample != 0].view(-1, 2)[:, 1][0].item()
resp = requests.post("http://localhost:8001/score", json={"text": conversation_str})
Contributor

Severity: high

The URL for the PRM scoring service is hardcoded. This makes the example script inflexible and difficult to run if the service is on a different host or port. It would be better to make this configurable, for example, by reading it from an environment variable or from the experiment configuration. You will need to add import os for the suggestion to work.

Suggested change
resp = requests.post("http://localhost:8001/score", json={"text": conversation_str})
resp = requests.post(os.getenv("PRM_SERVICE_URL", "http://localhost:8001/score"), json={"text": conversation_str})

logger = logging.getLogger("Launcher Utils")

LOCAL_CACHE_DIR = "/tmp/areal"
LOCAL_CACHE_DIR = "/data/yl/AReaL/tmp/areal"
Contributor

Severity: high

The LOCAL_CACHE_DIR is hardcoded to a user-specific path. This makes the code non-portable and will likely cause it to fail on other developers' machines. It's better to use a more standard temporary directory or allow this path to be configured via an environment variable.

Suggested change
LOCAL_CACHE_DIR = "/data/yl/AReaL/tmp/areal"
LOCAL_CACHE_DIR = os.environ.get("AREAL_CACHE_DIR", "/tmp/areal")

from areal.api.io_struct import FinetuneSpec, StepInfo, WeightUpdateMeta
from areal.dataset import get_custom_dataset
from areal.engine.ppo.actor import FSDPPPOActor
from areal.engine.ppo.prm import FSDPPPOPrm
Contributor

Severity: medium

The FSDPPPOPrm class is imported but never used in this file. This unused import should be removed to keep the code clean.

Comment on lines +44 to +57
print(f"conversation str: {conversation_str}")
# prm_input_ids = prm_tokenizer.encode(
# conversation_str,
# return_tensors="pt",
# ).to(prm_model.device)
# prm_outputs = prm_model(input_ids=prm_input_ids)
# step_sep_id = prm_tokenizer.encode("<extra_0>")[0]
# token_masks = (prm_input_ids == step_sep_id)
# probabilities = F.softmax(prm_outputs[0], dim=-1)* token_masks.unsqueeze(-1)
# sample = probabilities[0]
# prm_reward = sample[sample != 0].view(-1, 2)[:, 1][0].item()
resp = requests.post("http://localhost:8001/score", json={"text": conversation_str})
prm_reward = resp.json()["reward"]
print(f"prm_reward: {prm_reward}")
Contributor

Severity: medium

These print statements appear to be for debugging. They should be removed or replaced with proper logging using the logging module to avoid cluttering the output.

Suggested change
print(f"conversation str: {conversation_str}")
# prm_input_ids = prm_tokenizer.encode(
# conversation_str,
# return_tensors="pt",
# ).to(prm_model.device)
# prm_outputs = prm_model(input_ids=prm_input_ids)
# step_sep_id = prm_tokenizer.encode("<extra_0>")[0]
# token_masks = (prm_input_ids == step_sep_id)
# probabilities = F.softmax(prm_outputs[0], dim=-1)* token_masks.unsqueeze(-1)
# sample = probabilities[0]
# prm_reward = sample[sample != 0].view(-1, 2)[:, 1][0].item()
resp = requests.post("http://localhost:8001/score", json={"text": conversation_str})
prm_reward = resp.json()["reward"]
print(f"prm_reward: {prm_reward}")
# prm_input_ids = prm_tokenizer.encode(
# conversation_str,
# return_tensors="pt",
# ).to(prm_model.device)
# prm_outputs = prm_model(input_ids=prm_input_ids)
# step_sep_id = prm_tokenizer.encode("<extra_0>")[0]
# token_masks = (prm_input_ids == step_sep_id)
# probabilities = F.softmax(prm_outputs[0], dim=-1)* token_masks.unsqueeze(-1)
# sample = probabilities[0]
# prm_reward = sample[sample != 0].view(-1, 2)[:, 1][0].item()
resp = requests.post("http://localhost:8001/score", json={"text": conversation_str})
prm_reward = resp.json()["reward"]
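As a follow-up to the suggestion above, here is a minimal, hypothetical sketch of the logging-based replacement; the logger name and the query_prm_service helper are illustrative, not part of the PR, and the default URL simply mirrors the hardcoded address that another comment asks to make configurable.

import logging

import requests

logger = logging.getLogger("PRMRLVRWorkflow")  # illustrative logger name


def query_prm_service(conversation_str: str, url: str = "http://localhost:8001/score") -> float:
    # Log at debug level instead of printing, so the output can be silenced
    # or redirected through the standard logging configuration.
    logger.debug("conversation str: %s", conversation_str)
    resp = requests.post(url, json={"text": conversation_str})
    prm_reward = resp.json()["reward"]
    logger.debug("prm_reward: %s", prm_reward)
    return prm_reward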

Comment on lines +68 to +74
# prm_tokenizer = AutoTokenizer.from_pretrained(config.prm_path, local_files_only=True, trust_remote_code=True)
# prm_model = AutoModel.from_pretrained(
# config.prm_path,
# torch_dtype=torch.bfloat16,
# local_files_only=True,
# trust_remote_code=True,
# ).eval()
Contributor

Severity: medium

This block of commented-out code should be removed to improve code clarity and maintainability.

Comment on lines +31 to +32
# prm_model: PreTrainedModel,
# prm_tokenizer: PreTrainedTokenizerFast,
Contributor

Severity: medium

These parameters for prm_model and prm_tokenizer are commented out, along with their usage later in the file. This dead code should be removed to improve code clarity and maintainability.


@garrett4wade (Collaborator) left a comment


Hi @Lu-Yang666, thanks for the great contribution! The feature looks great, but it may not be ready to be merged in its current form.

Please:

  • Clean up the code: remove unused comments and debugging prints
  • Format files according to the contribution guide
  • Follow or respond to Gemini's suggestions


logger = logging.getLogger("Launcher Utils")

LOCAL_CACHE_DIR = "/tmp/areal"
Collaborator

Should revert.

Collaborator

You can keep these scripts for internal usage. :)

Instead, we recommend creating a README under the examples/prm folder to show how to use the PRM example.

    gconfig: GenerationHyperparameters = field(
        default_factory=GenerationHyperparameters
    )
    prmconfig: PRMRewardHyperparameters = field(
Collaborator

Looks like we could just inherit GRPOConfig and add two new fields, prm_path and reward_shaping_alpha? By the way, if you are referring to reward scaling, you can use actor.reward_scaling rather than creating a new field.
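A rough sketch of that shape, assuming GRPOConfig is a dataclass exposed from areal.api.cli_args (the import path and the prm_service_url field are assumptions; see also the URL comment further down):

from dataclasses import dataclass

from areal.api.cli_args import GRPOConfig  # assumed import path


@dataclass
class PRMGRPOConfig(GRPOConfig):
    # Path to the process reward model, e.g. Qwen2.5-Math-PRM-7B.
    prm_path: str = ""
    # Weight for mixing the PRM score into the outcome reward.
    reward_shaping_alpha: float = 0.0
    # Address of the external PRM scoring service (hypothetical field,
    # see the URL comment below).
    prm_service_url: str = "http://localhost:8001/score"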

Comment on lines 123 to 129
# clip mechanism
avg_prm_reward = sum(prm_rewards) / len(prm_rewards)
for i, val in enumerate(prm_rewards):
    if val > avg_prm_reward:
        rewards[i] = 0
for res, r in zip(results, rewards):
    res["rewards"] = torch.tensor([float(r)])
Collaborator

Can we add some comments or configurations to control this behavior?

This workflow still uses an outcome-based reward. How's the PRM actually used?
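One possible way to make both points explicit, as a rough sketch only: enable_prm_clip and reward_shaping_alpha are hypothetical config switches, and the shaping term is one way the PRM score could enter the reward directly rather than only gating the outcome reward.

import torch


def shape_rewards(rewards, prm_rewards, results, *, reward_shaping_alpha=0.0, enable_prm_clip=True):
    # Optionally mix the PRM score into the outcome reward so the PRM
    # contributes a signal of its own.
    if reward_shaping_alpha > 0:
        rewards = [r + reward_shaping_alpha * p for r, p in zip(rewards, prm_rewards)]
    # Optionally reproduce the current clip mechanism: zero the reward for
    # samples whose PRM score is above the batch average.
    if enable_prm_clip:
        avg_prm_reward = sum(prm_rewards) / len(prm_rewards)
        rewards = [0.0 if p > avg_prm_reward else r for r, p in zip(rewards, prm_rewards)]
    for res, r in zip(results, rewards):
        res["rewards"] = torch.tensor([float(r)])
    return results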

# probabilities = F.softmax(prm_outputs[0], dim=-1)* token_masks.unsqueeze(-1)
# sample = probabilities[0]
# prm_reward = sample[sample != 0].view(-1, 2)[:, 1][0].item()
resp = requests.post("http://localhost:8001/score", json={"text": conversation_str})
Collaborator

We can add this URL in the config.
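For instance, reusing the hypothetical prm_service_url field from the config sketch above, the call in the workflow would reduce to something like the following fragment (self.config is assumed to be the workflow's reference to the experiment config):

# self.config and prm_service_url are assumptions from the sketches above.
resp = requests.post(self.config.prm_service_url, json={"text": conversation_str})
prm_reward = resp.json()["reward"]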

@github-actions

This pull request has been automatically marked as stale because it has not had recent activity within the last 14 days.

Please add a comment or push new commits to keep it active.

Thank you for your contribution!

@github-actions (bot) added the stale label on Oct 21, 2025