Introduction to TensorFlow Lite


TensorFlow Lite is a library designed to deploy machine learning models on mobile, microcontroller, and edge devices. It comes with tools that enable on-device machine learning by addressing five key constraints − latency, privacy, connectivity, size, and power consumption.

It supports Android, iOS, embedded Linux, and microcontrollers, and offers APIs in multiple languages, including Java, Swift, Objective-C, C++, and Python. It also provides hardware acceleration and model optimization.

The documentation provides end-to-end examples for machine learning projects such as image classification, object detection, question answering, pose estimation, text classification, and many more on different platforms.

There are two aspects to developing a model in TensorFlow Lite −

  • Building a TensorFlow Lite model

  • Running inference

Building a TensorFlow Lite Model

A TensorFlow Lite model is represented in a portable format known as FlatBuffers, identified by the .tflite file extension. Its reduced size and fast inference enable TensorFlow Lite to execute efficiently on devices with limited compute and memory resources. A model can also include metadata describing the model and its pre- and post-processing pipelines in a human-readable format.
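
For example, a trained TensorFlow/Keras model can be converted into this format with the TFLiteConverter API. The sketch below uses a small placeholder Keras model purely for illustration and writes the result to a file named model.tflite.

import tensorflow as tf

# Build (or load) a Keras model; this tiny model is only a placeholder.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Convert the Keras model to the FlatBuffers (.tflite) format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/latency optimization
tflite_model = converter.convert()

# Write the serialized model to disk.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)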

Inference

Inference is the process of executing a TensorFlow Lite model on-device to make predictions on new data. It can be done in two ways, depending on whether or not the model includes metadata −

  • With metadata − Use out-of-the-box APIs or build custom inference pipelines. On Android devices, you can generate code wrappers using Android Studio ML Model Binding or the TensorFlow Lite Code Generator.

  • Without metadata − Use the TensorFlow Lite Interpreter API, which is supported on multiple platforms (see the sketch after this list).
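
In Python, for instance, the Interpreter API can run a converted model directly. This minimal sketch assumes the model.tflite file produced earlier and feeds it random input data just to show the workflow.

import numpy as np
import tensorflow as tf

# Load the converted model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one input matching the model's expected shape and type.
input_data = np.random.random_sample(input_details[0]["shape"]).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], input_data)

# Run inference and read the prediction.
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]["index"])
print(output_data)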
