Support TfLite schema buffer and custom options offsets #3197
Draft
ddavis-2015 wants to merge 5 commits into tensorflow:main from
Conversation
@tensorflow/micro
Allow for models >2Gb (and less than 4Gb) in size, as generated by the TfLite converter.
Parse TfLite schema Buffer tables where the offset and size fields are active.
Parse TfLite schema Operator tables where the large_custom_options_offset and large_custom_options_size fields are active.
Correctly process the Offline Memory Planner metadata buffer.
Correctly process the compression metadata buffer.
Add unit tests for all of the above.
bug=fixes tensorflow#3196
…rse the model correctly for cortex-m3-qemu builds. Update TestAllocatePersistentTfLiteTensor test to match updated test_conv_model.tflite model.
f8a8b80 to 8cfa65a
…d test_conv_model.cc
veblush reviewed Oct 2, 2025
Collaborator
Thanks for the PR! I've left a couple of minor comments regarding __func__ and 2Gb.
Beyond those, I have two broader suggestions for improvement:
Optimize GetBufferStartFromRootPointer Calls
I'm concerned that this function's internal loop could increase initialization time. Since this is for a niche use case, could we avoid the call where possible? Skipping it when it's not needed would be great.
Simplify Buffer Access Logic
The current approach to accessing the buffer requires handling two separate cases, which leads to some repetitive and lengthy code. To improve readability and maintainability, would it be possible to encapsulate this logic in a helper function or a macro?
}

TfLiteStatus MicroInterpreter::PrepareNodeAndRegistrationDataFromFlatbuffer() {
  // needed for custom options when model is larger than 2Gb
  const uint8_t* flatbuffer_start =
      flatbuffers::GetBufferStartFromRootPointer(model_);
  if (flatbuffer_start == nullptr) {
    MicroPrintf("%s: Unable to locate flatbuffer start", __func__);
Isn't it okay not to have __func__ here?