Feature Request: Add Patch-Based Inference Support (Inspired by MCUNetV2)
Problem Statement
TensorFlow Lite Micro (TFLM) currently lacks support for patch-based inference, a technique introduced in MCUNetV2. Instead of running the memory-intensive early layers on the full input at once, the input image is processed as a sequence of smaller spatial patches, so only one patch's activations must be resident at a time. This lowers peak memory usage and enables inference on higher-resolution images on resource-constrained devices such as microcontrollers.
References