
Feature Request: Add Patch-Based Inference Support (Inspired by MCUNetV2) #3032

@Josiah-MCS

Description


Problem Statement

TensorFlow Lite Micro (TFLM) currently lacks support for patch-based inference, as introduced in MCUNetV2. This technique processes the input image in small spatial patches sequentially, so only a patch-sized activation buffer needs to be live at any time; the reduced peak memory enables inference on higher-resolution images on resource-constrained devices such as microcontrollers.
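
To make the memory argument concrete, below is a minimal, self-contained C++ sketch of the idea. It deliberately does not use any TFLM API; the 32x32 input, the 8x8 patch size, the 1-pixel halo, and the 3x3 averaging op (standing in for an early convolution layer) are all illustrative assumptions. The same op is run once over the whole image and once patch by patch; the outputs are identical, while the patch path only ever needs a (patch + halo)-sized scratch buffer for the op. In this toy the input and output still live at full resolution; in a real network the saving comes from never materializing the full-resolution intermediate activations of the early layers.

```cpp
// Illustrative sketch only: patch-based execution of a 3x3 op with a halo,
// compared against full-image execution. Not TFLM code.
#include <algorithm>
#include <cstdio>
#include <vector>

// 3x3 average over a single-channel, row-major image with replicate padding.
static float Avg3x3(const std::vector<float>& img, int h, int w, int y, int x) {
  float sum = 0.f;
  for (int dy = -1; dy <= 1; ++dy)
    for (int dx = -1; dx <= 1; ++dx)
      sum += img[std::clamp(y + dy, 0, h - 1) * w + std::clamp(x + dx, 0, w - 1)];
  return sum / 9.f;
}

int main() {
  const int kH = 32, kW = 32;  // full input resolution (illustrative)
  const int kPatch = 8;        // spatial patch size (illustrative)
  const int kHalo = 1;         // overlap required by a 3x3 receptive field

  std::vector<float> input(kH * kW);
  for (int i = 0; i < kH * kW; ++i) input[i] = static_cast<float>(i % 17);

  // Reference path: the op sees the whole image at once. In a multi-layer
  // network this is where full-resolution intermediate activations coexist.
  std::vector<float> full(kH * kW);
  for (int y = 0; y < kH; ++y)
    for (int x = 0; x < kW; ++x)
      full[y * kW + x] = Avg3x3(input, kH, kW, y, x);

  // Patch-based path: only a (kPatch + 2*kHalo)^2 scratch buffer is live
  // per patch, independent of the input resolution.
  std::vector<float> patched(kH * kW);
  const int kBuf = kPatch + 2 * kHalo;
  std::vector<float> scratch(kBuf * kBuf);
  for (int py = 0; py < kH; py += kPatch) {
    for (int px = 0; px < kW; px += kPatch) {
      // Copy the patch plus its halo, clamping at the image border so the
      // patch sees exactly the same neighborhood as the full-image run.
      for (int y = 0; y < kBuf; ++y)
        for (int x = 0; x < kBuf; ++x)
          scratch[y * kBuf + x] =
              input[std::clamp(py + y - kHalo, 0, kH - 1) * kW +
                    std::clamp(px + x - kHalo, 0, kW - 1)];
      // Run the op on the small buffer and write back only the interior.
      for (int y = 0; y < kPatch && py + y < kH; ++y)
        for (int x = 0; x < kPatch && px + x < kW; ++x)
          patched[(py + y) * kW + (px + x)] =
              Avg3x3(scratch, kBuf, kBuf, y + kHalo, x + kHalo);
    }
  }

  std::printf("patch-based output %s the full-image output\n",
              full == patched ? "matches" : "does NOT match");
  return 0;
}
```

In MCUNetV2 this halo bookkeeping is applied only to the first few high-resolution blocks, after which execution returns to the usual layer-by-layer schedule, so the peak-memory reduction is traded for some recomputation in the overlapping regions. A TFLM implementation would presumably need memory-planner support for per-patch scratch buffers and for stitching patch outputs back together.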

References

MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning (Lin et al., NeurIPS 2021)


    Labels

    type:feature (New functionality or hardware support implementation)
    type:support (Documentation, general questions, or project help)
