feat: azure ai inference support #364
base: main
Conversation
Hi @santiagxf! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
We have been adding several automated tests to make sure that the providers support all the capabilities we want, like tool calling and multi-modal inputs. Can you include ways to reproduce the tests at providers/tests/inference/test_*?
Thanks @santiagxf -- could you please take a look at https://github.com/meta-llama/llama-stack/blob/main/llama_stack/providers/tests/README.md and run the tests? See #265 for an example of how tests are added and run.
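For orientation while those tests land: provider capability tests of this style are plain pytest functions. Below is a minimal, self-contained sketch of the shape such a test might take; `FakeAzureAIInferenceAdapter` and its `chat_completion` signature are illustrative stand-ins, not the repo's actual fixtures (see the linked README for the real setup).

```python
# Illustrative sketch only: a pytest-style capability test in the spirit of
# llama_stack/providers/tests/inference/test_*. The adapter class and its
# signature are hypothetical stand-ins, not the repo's actual fixtures.
class FakeAzureAIInferenceAdapter:
    """Stand-in adapter that returns a canned chat completion."""

    def chat_completion(self, messages: list) -> dict:
        # A real adapter would call the remote Azure endpoint here.
        return {"role": "assistant", "content": "2"}


def test_chat_completion_returns_assistant_message():
    adapter = FakeAzureAIInferenceAdapter()
    response = adapter.chat_completion(
        [{"role": "user", "content": "What is 1 + 1?"}]
    )
    assert response["role"] == "assistant"
    assert response["content"]
```

Running `pytest` on a file containing this would exercise the same request/response shape the real tests check against live providers.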
```diff
@@ -45,6 +45,9 @@ def get_sampling_options(params: SamplingParams) -> dict:
 def text_from_choice(choice) -> str:
     if hasattr(choice, "delta") and choice.delta:
         return choice.delta.content
+
+    if hasattr(choice, "message"):
+        return choice.message.content
```
is this a bad merge?
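For context on the added branch: in OpenAI-compatible responses, streaming chunks carry text under `delta` while non-streaming responses carry it under `message`, so a shared helper needs both checks. A self-contained illustration follows; the `SimpleNamespace` objects stand in for real choice objects, and the empty-string fallback is added for this sketch only.

```python
# Why text_from_choice needs both branches: streaming chunks expose
# `.delta.content`, while full (non-streaming) responses expose
# `.message.content`. SimpleNamespace stands in for real choice objects.
from types import SimpleNamespace


def text_from_choice(choice) -> str:
    if hasattr(choice, "delta") and choice.delta:
        return choice.delta.content
    if hasattr(choice, "message"):
        return choice.message.content
    return ""  # fallback added for this sketch only


streaming_chunk = SimpleNamespace(delta=SimpleNamespace(content="Hel"))
full_response = SimpleNamespace(delta=None, message=SimpleNamespace(content="Hello"))

assert text_from_choice(streaming_chunk) == "Hel"
assert text_from_choice(full_response) == "Hello"
```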
```diff
@@ -149,4 +149,13 @@ def available_providers() -> List[ProviderSpec]:
                 config_class="llama_stack.providers.remote.inference.databricks.DatabricksImplConfig",
             ),
         ),
+        remote_provider_spec(
```
Just naming feedback: given that we are trying to get Azure to provide a full Llama Stack distribution, I suggest calling the provider `azure_ai`.
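The hunk above is cut off after the opening call. Purely for illustration, here is a hypothetical completion using the suggested `azure_ai` naming; the import path and every field value are guesses modeled on the neighboring databricks entry, not the PR's actual registration.

```python
# Hypothetical completion of the truncated remote_provider_spec(...) entry,
# adopting the reviewer's suggested azure_ai naming. The import path and all
# field values are illustrative guesses, not the PR's actual code.
from llama_stack.providers.datatypes import AdapterSpec, Api, remote_provider_spec

azure_ai_spec = remote_provider_spec(
    api=Api.inference,
    adapter=AdapterSpec(
        adapter_type="azure_ai",
        pip_packages=["azure-ai-inference"],
        module="llama_stack.providers.remote.inference.azure_ai",
        config_class="llama_stack.providers.remote.inference.azure_ai.AzureAIInferenceConfig",
    ),
)
```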
What does this PR do?
This PR introduces support for the Azure AI model inference API, with the provider id `remote::azure-ai-inference`.

Closes # (issue)
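For readers unfamiliar with the service: the provider presumably wraps Azure's azure-ai-inference Python SDK (matching the provider id). A rough sketch of direct SDK usage follows, to show the API being adapted; the endpoint and key are placeholders, and the adapter's actual calls may differ.

```python
# Rough sketch of direct use of the azure-ai-inference SDK that this provider
# appears to build on (pip install azure-ai-inference). Endpoint and key are
# placeholders; the adapter's actual calls may differ.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-deployment>.inference.ai.azure.com",
    credential=AzureKeyCredential("<your-api-key>"),
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Write a haiku about the sea."),
    ],
)
print(response.choices[0].message.content)
```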
Feature/Issue validation/testing/test plan
Please describe the tests that you ran to verify your changes, with a summary of the results, and provide instructions so they can be reproduced.
Please also list any relevant details of your test configuration or test plan.
Test A
Logs for Test A
Test B
Logs for Test B
Sources
Please link relevant resources if necessary.
Before submitting

- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue? Please add a link to it if that's the case.
Thanks for contributing 🎉!