Thank you for submitting your model!

Your submission:

  • Ticket ID #
  • Email: 
  • Date and time: 
  • Filename: 

Benchmarking takes 3–30 minutes. Sit tight.

You’ll receive the benchmark report by email shortly. It will include latency and RAM numbers for both Plumerai's inference engine and TensorFlow Lite for Microcontrollers, making it easy for you to compare the two.

Here’s a glimpse into the setup in our lab.


Plumerai's other product offerings

People detection

Our people detection models achieve EfficientDet-level accuracy while requiring as little as 200 KiB of RAM. They leverage our Binarized Neural Network technology, our large proprietary dataset, and an intelligent data pipeline. Optimized implementations are available for Arm Cortex-M, Arm Cortex-A, and other platforms.

Hardware

For customers who require the most energy-efficient solution, we provide Ikva, a custom IP core that is highly optimized for BNNs and 8-bit deep learning models. It is fully supported by our extensive tool flow and by our ultra-fast, memory-efficient inference engine, which is integrated with TensorFlow Lite. Ikva can be used on FPGAs or integrated into ASICs or SoCs.