Add configprovider to openai text generation and model config #1043
base: main
Conversation
@microsoft-github-policy-service agree
fixes #1042
applications/evaluation/Evaluators/Faithfulness/FaithfulnessEvaluator.cs
Did you try overriding the model?
I tested the PR, and requests are always sent to the model defined in the configuration, not the one in the context.
I suspect that the client used internally (Semantic Kernel) doesn't support passing the model ID in the request.
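A minimal sketch of the per-request override the review is discussing, assuming Semantic Kernel's `PromptExecutionSettings.ModelId` property is the mechanism in play. Whether the OpenAI connector actually honors `ModelId` at request time, rather than always using the model the service was registered with, is exactly the open question raised here; the model name and API key below are placeholders.

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Build a kernel whose OpenAI service is registered with a fixed model.
var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-4o-mini", apiKey: "<api-key>")
    .Build();

// Attempt to override the configured model for this single request.
// If the connector ignores ModelId, the request still goes to the
// model from the configuration above - the behavior observed in this review.
var settings = new OpenAIPromptExecutionSettings
{
    ModelId = "gpt-4o"
};

var result = await kernel.InvokePromptAsync(
    "Hello", new KernelArguments(settings));
```

If the override is silently ignored, a workaround is to register one service per model (distinguished by `ServiceId`) and select among them per request, rather than relying on a request-time model ID.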
Motivation and Context (Why the change? What's the scenario?)
I want to be able to change the OpenAI model used for text generation at runtime.
High level description (Approach, Design)
Add the ability to change the GPT model at runtime when sending questions to OpenAI.