Let’s make your AI
tiny and fast.
Try our inference engine for Arm Cortex-M, the fastest and smallest in the world. On average it delivers a 1.9x speedup, a 2.0x RAM reduction, and a 3.3x code size reduction, with no change in accuracy. No binarization, no pruning.