ansh-info commented on Dec 11, 2025

Summary

Issue: #475

  • Make the RULER judge provider-agnostic by allowing a LangChain ChatModel (e.g., ChatOllama, ChatNVIDIA) to be passed directly via judge_chat_model, while keeping the existing LiteLLM/OpenAI path intact (usage sketch below).
  • Maintain the context-window passthrough (num_ctx / max_input_tokens) on the LiteLLM path; the LangChain path is unaffected by LiteLLM defaults.
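
A minimal usage sketch, assuming ART's usual art.rewards.ruler_score_group entry point and a TrajectoryGroup built from rollouts as in ART's examples; my_trajectories is a placeholder, and judge_chat_model is the parameter this PR adds:

```python
# Usage sketch: a LangChain ChatModel as the RULER judge.
import asyncio

from langchain_ollama import ChatOllama

import art
from art.rewards import ruler_score_group


async def main() -> None:
    # Any LangChain ChatModel can serve as the judge (ChatOllama, ChatNVIDIA, ...).
    judge = ChatOllama(model="qwen2.5:32b", temperature=0.0)

    # Placeholder: build the group from your rollouts as usual.
    group = art.TrajectoryGroup(trajectories=my_trajectories)

    # With judge_chat_model set, judging goes through LangChain instead of LiteLLM.
    scored = await ruler_score_group(group, judge_chat_model=judge)
    for trajectory in scored.trajectories:
        print(trajectory.reward)


asyncio.run(main())
```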

Motivation

  • RULER was tightly coupled to LiteLLM/OpenAI model identifiers, which blocked cleanly using non-OpenAI providers (e.g., Ollama, NVIDIA) as judges.

Details

  • ruler and ruler_score_group accept a new judge_chat_model argument; when it is provided, the judge call goes through LangChain's ainvoke on LangChain messages, and the JSON output is parsed into Response (see the sketch after this list).
  • When judge_chat_model is not supplied, the existing LiteLLM behavior is unchanged.
  • Context-window overrides still apply on the LiteLLM path; the LangChain path bypasses LiteLLM entirely.
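
Schematically, the dispatch looks like the following. This is a sketch, not the patch itself: the helper name judge_completion and the trimmed-down Response model are illustrative stand-ins, and the real code handles message conversion, JSON extraction, and LiteLLM's structured-output details more carefully.

```python
# Dispatch sketch (illustrative; judge_completion and this simplified
# Response model are stand-ins, not the exact code in the PR).
import json

from langchain_core.messages import BaseMessage, HumanMessage, SystemMessage
from pydantic import BaseModel


class TrajectoryScore(BaseModel):
    trajectory_id: str
    explanation: str
    score: float


class Response(BaseModel):
    scores: list[TrajectoryScore]


async def judge_completion(
    messages: list[dict],           # OpenAI-style [{"role": ..., "content": ...}]
    judge_model: str,
    judge_chat_model=None,          # LangChain ChatModel: the new parameter
    extra_litellm_params: dict | None = None,
) -> Response:
    if judge_chat_model is not None:
        # LangChain path: convert to LangChain messages, ainvoke, parse JSON.
        lc_messages: list[BaseMessage] = [
            SystemMessage(content=m["content"])
            if m["role"] == "system"
            else HumanMessage(content=m["content"])
            for m in messages
        ]
        result = await judge_chat_model.ainvoke(lc_messages)
        return Response.model_validate(json.loads(result.content))

    # LiteLLM path: unchanged, including context-window overrides
    # (num_ctx / max_input_tokens) passed through the extra params.
    import litellm

    completion = await litellm.acompletion(
        model=judge_model,
        messages=messages,
        response_format=Response,
        **(extra_litellm_params or {}),
    )
    return Response.model_validate_json(completion.choices[0].message.content)
```

Because the LangChain branch returns before any LiteLLM call is made, provider defaults and the num_ctx / max_input_tokens overrides only ever apply on the LiteLLM path.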
