feat: enable xpu support for meta-reference stack #558
base: main
Conversation
Hi @dvrogozh! Thank you for your pull request. We require contributors to sign our Contributor License Agreement, and yours needs attention. You currently have a record in our system, but the CLA is no longer valid and will need to be resubmitted.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with […]. If you have received this in error or have any questions, please contact us at [email protected]. Thanks!
See: meta-llama/llama-stack#558 Signed-off-by: Dmitry Rogozhkin <[email protected]>
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
Requires: meta-llama/llama-models#233 Signed-off-by: Dmitry Rogozhkin <[email protected]>
@dvrogozh are you interested in moving forward with this change? If so, could you add a Test Plan?
@ashwinb, thank you for taking a look. Yes, I plan to move forward with this PR. It is currently a draft because of its dependency on a PR in llama-models, meta-llama/llama-models#233, which should be reviewed and merged first. I am waiting for its review. Can you help with that?

Yes, sure, I will help by adding tests covering different devices. If you need anything specific, please let me know.
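A minimal sketch of what such device-covering tests might look like (hypothetical pytest code, not taken from this PR; the matmul body is just a placeholder smoke test):

```python
import pytest
import torch

@pytest.mark.parametrize("device", ["cpu", "cuda", "xpu"])
def test_matmul_smoke(device):
    # Skip backends not available in this PyTorch build/environment.
    if device == "cuda" and not torch.cuda.is_available():
        pytest.skip("CUDA not available")
    if device == "xpu" and not (hasattr(torch, "xpu") and torch.xpu.is_available()):
        pytest.skip("XPU not available")
    # Placeholder smoke test: run a small matmul on the selected device.
    x = torch.randn(4, 4, device=device)
    y = x @ x
    assert y.device.type == device
```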
This PR adds support for the non-CUDA XPU backend device to the meta-reference stack path. Submitting as a draft PR to facilitate discussion around another patch in llama-models:

Requires: meta-llama/llama-models#233
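At a high level, supporting a non-CUDA backend means choosing the device at runtime instead of hardcoding `cuda`. A minimal sketch of that idea (a hypothetical helper, assuming a PyTorch build with XPU support; this is not the PR's actual code):

```python
import torch

def pick_device() -> torch.device:
    # Hypothetical helper, not the PR's actual code:
    # prefer CUDA, then Intel XPU, then fall back to CPU.
    if torch.cuda.is_available():
        return torch.device("cuda")
    # torch.xpu is only present in PyTorch builds with XPU support,
    # so guard the attribute access before querying availability.
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return torch.device("xpu")
    return torch.device("cpu")
```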