OpenVINO EP Weights Sharing Feature #23553
Conversation
@jywu-msft @adrianlizarraga @HectorSVC Kindly review & merge.
/azp run Linux OpenVINO CI Pipeline
Azure Pipelines successfully started running 1 pipeline(s).
/azp run Linux CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux GPU CI Pipeline, Linux GPU TensorRT CI Pipeline, MacOS CI Pipeline, ONNX Runtime Web CI Pipeline, onnxruntime-binary-size-checks-ci-pipeline, Linux QNN CI Pipeline
Azure Pipelines successfully started running 8 pipeline(s).
Could you update the PR title to better describe the changes you've made?
Changed the title as requested.
/azp run Big Models,Linux Android Emulator QNN CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline,Windows GPU TensorRT CI Pipeline,Windows x64 QNN CI Pipeline
Pull request contains merge conflicts.
Could you resolve the merge conflict? |
Force-pushed from d8857de to 6371811.
Fixed the conflicts. Kindly review & merge.
/azp run Big Models,Linux Android Emulator QNN CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline,Windows GPU TensorRT CI Pipeline,Windows x64 QNN CI Pipeline
/azp run Linux CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux GPU CI Pipeline, Linux GPU TensorRT CI Pipeline, MacOS CI Pipeline, ONNX Runtime Web CI Pipeline, onnxruntime-binary-size-checks-ci-pipeline, Linux QNN CI Pipeline
/azp run Linux OpenVINO CI Pipeline
Azure Pipelines successfully started running 10 pipeline(s).
Azure Pipelines successfully started running 1 pipeline(s).
Azure Pipelines successfully started running 8 pipeline(s).
It seems there is no unit test for this feature for the OpenVINO EP. Could you please add one?
Yes. It would be great to have something demonstrating how this feature is used, from model generation to model inference.
Agreed, the OVEP unit tests are, well, absent. We have started working on adding OVEP unit tests, but those will come later and will not be part of this PR.
Force-pushed from bf4dc5b to 568a64d.
* Rename EP instance context as session_context
* Add support for GetEpContextNodes
* Enable config option for OVEP weight sharing
* Add config option for OVEP weight sharing
* Refactor the conditional blocks in OVEP for compilation
* Convert initializers with external data to graph inputs
* Create, store and export metadata for OVEP weight sharing
* Fix error handling in weight sharing
* Fix crash issue while setting up inputs for wai model
* Pass weight sharing option to OVEP QDQ stripping pass
* Align OVEP variable names to match the session option value they hold
* Add plumbing for context sharing plus refactoring around option handling
* Store metadata in shared context
* Fix provider options
* Create OV tensor from metadata and external data
* Create OV tensor
* Add support for binding weights as input tensors
* Fix for mapping subgraph to OV compiled network arguments
* Fix for using so_share_ep_contexts without ep.context* flags
* Add remote tensor support for NPU weight sharing
* Use a single ov::Core copy across OVEP
* Decouple provider option cache_dir from session option ep.context_file_path
* Add support for serialization and deserialization of metadata to disk
* Load blobs from relative path stored in ep_cache_context
* Use remote L0 tensors for shared weights
* Fix Linux CI issues
* Fix CI issues
* Fix Windows build failure
* Use ifstream to load weights instead of a memory-mapped file
* Fix for epctx models made up entirely of OVEP epctx nodes
* Limit ov::Core lifetime to that of the provider object
* Enforce shared tensors cleanup on shutdown
* Add support for default device type based on project configuration
* Fix concrete_backend_ pointer double-free issue on Linux
* Preetha/weight sharing fix (#545): move variables from subgraph to session context for model-specific properties; fix redundant subgraph creation; remove unused variable
* Fix blob generation with AUTO:GPU,CPU
* Remove unused variable
* Use ep.context_file_path to get the base path when creating a session from memory
* Fix lint issues

Co-authored-by: Javier E. Martinez <[email protected]>
Co-authored-by: saurabhkale117 <[email protected]>
Co-authored-by: Preetha Veeramalai <[email protected]>
Co-authored-by: ankitm3k <[email protected]>
Co-authored-by: Eric Crawford <[email protected]>
Co-authored-by: saurabh <[email protected]>
/azp run Linux OpenVINO CI Pipeline
Azure Pipelines successfully started running 1 pipeline(s).
/azp run Linux CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux GPU CI Pipeline, Linux GPU TensorRT CI Pipeline, MacOS CI Pipeline, ONNX Runtime Web CI Pipeline, onnxruntime-binary-size-checks-ci-pipeline, Linux QNN CI Pipeline
/azp run Big Models,Linux Android Emulator QNN CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline,Windows GPU TensorRT CI Pipeline,Windows x64 QNN CI Pipeline
Azure Pipelines successfully started running 8 pipeline(s).
Azure Pipelines successfully started running 10 pipeline(s).
It was updated according to the comments.
Description
These changes ensure that weight sharing happens between two models using the session context option ep_weight_sharing. The key changes introduced by this feature are listed below, with a usage sketch after the list:
* Creating a shared context between two models.
* Extracting external constant initializers and relabelling them as inputs to the model, so that weights can be loaded directly when running the precompiled blob.
* Creating EP context nodes when subgraph partitioning occurs.
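As a rough, non-authoritative sketch of how this could look from model generation to model inference (the kind of demonstration requested in the review comments above): `ep.context_enable`, `ep.context_file_path`, and `ep.share_ep_contexts` are existing ONNX Runtime session config keys; whether `ep.share_ep_contexts` maps one-to-one onto the ep_weight_sharing behaviour described in this PR, plus the file names and the NPU device choice, are assumptions made purely for illustration.

```python
# Hypothetical end-to-end sketch: compile two related models (e.g. LLM prefill
# and kv-cache decode) into EP-context ONNX files, then run them with EP
# context sharing enabled so the OpenVINO EP can reuse one copy of the weights.
import onnxruntime as ort

PROVIDERS = ["OpenVINOExecutionProvider"]
PROVIDER_OPTIONS = [{"device_type": "NPU"}]  # assumed target device


def export_epctx(model_path: str, ctx_path: str) -> None:
    """Pre-compile model_path into an EP-context model written to ctx_path."""
    so = ort.SessionOptions()
    so.add_session_config_entry("ep.context_enable", "1")
    so.add_session_config_entry("ep.context_file_path", ctx_path)
    # Creating the session triggers compilation and dumps the EP-context model.
    ort.InferenceSession(model_path, sess_options=so,
                         providers=PROVIDERS, provider_options=PROVIDER_OPTIONS)


def load_shared(ctx_path: str) -> ort.InferenceSession:
    """Load an EP-context model with cross-session context/weight sharing on."""
    so = ort.SessionOptions()
    so.add_session_config_entry("ep.share_ep_contexts", "1")
    return ort.InferenceSession(ctx_path, sess_options=so,
                                providers=PROVIDERS, provider_options=PROVIDER_OPTIONS)


# Hypothetical file names; both models reference the same external weights.
export_epctx("llm_prefill.onnx", "llm_prefill_ctx.onnx")
export_epctx("llm_kvcache.onnx", "llm_kvcache_ctx.onnx")
prefill = load_shared("llm_prefill_ctx.onnx")
decode = load_shared("llm_kvcache_ctx.onnx")
```

With sharing enabled, the expectation is that the second session reuses the weights already materialized by the first rather than loading its own copy.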
Motivation and Context
This change was required to ensure that an LLM's prefill and kv-cache models can use the same shared weights.
The change was also required to ensure that EP context nodes can be formed even when the model is subgraph-partitioned.