ONNX - Environment Setup



Setting up an environment to work with ONNX is essential for creating, converting, and deploying machine learning models. In this tutorial, we will learn how to install ONNX and its dependencies, and how to set up ONNX Runtime for efficient model inference.

The ONNX environment setup involves installing the ONNX Runtime, its dependencies, and the required tools to convert and run machine learning models in ONNX format.

Setting Up ONNX for Python

Python is the most commonly used language for ONNX development. To set up the ONNX environment in Python, you need to install the onnx package along with the model exporting libraries for popular frameworks such as PyTorch, TensorFlow, and Scikit-learn. The onnx package provides the core model format and the utilities needed to build, check, and save models in the ONNX format.

pip install onnx
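
Once the installation finishes, you can confirm that the onnx package works by importing it and running the built-in model checker. This is only a minimal sketch; "model.onnx" is a placeholder file name, not a file shipped with the package −

import onnx

# Print the installed version to confirm the package is importable
print(onnx.__version__)

# "model.onnx" is a placeholder path; the checker validates an existing ONNX file
model = onnx.load("model.onnx")
onnx.checker.check_model(model)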

Installing ONNX Runtime

ONNX Runtime is the primary tool for running models in ONNX format. It is available for both CPU and GPU (CUDA and ROCm) environments.

Installing ONNX Runtime for CPU

To install the CPU version of ONNX Runtime, simply run the following command in your terminal −

pip install onnxruntime

This installs the basic ONNX Runtime package for CPU execution.
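
As a quick sanity check, you can import onnxruntime and list the execution providers available in your build; a CPU-only install reports CPUExecutionProvider −

import onnxruntime as ort

# Show the installed version and the execution providers this build supports
print(ort.__version__)
print(ort.get_available_providers())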

Installing ONNX Runtime for GPU

If you want to utilize GPU acceleration, ONNX Runtime provides support for both CUDA (NVIDIA) and ROCm (AMD) platforms. The default CUDA version supported by ONNX Runtime is 11.8.

pip install onnxruntime-gpu

This installs ONNX Runtime with CUDA 11.x support.
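
After installing the GPU package, you can request the CUDA execution provider when creating an inference session; ONNX Runtime falls back to the CPU provider if CUDA is unavailable. The model path below is only illustrative −

import onnxruntime as ort

# "model.onnx" is a placeholder; providers are tried in the order listed
session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Confirm which providers the session actually ended up using
print(session.get_providers())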

Install Model Exporting Libraries

Depending on the framework you're working with, install the corresponding library for converting models.

  • PyTorch: ONNX export support is built into PyTorch, so installing torch is all you need (see the export sketch after this list). Following is the command −

    pip install torch
    
  • TensorFlow: Install tf2onnx to convert TensorFlow models.

    pip install tf2onnx
    
  • Scikit-learn: Use skl2onnx to export models from Scikit-learn.

    pip install skl2onnx
    

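As an example of how these exporters are used, the sketch below converts a small PyTorch module to ONNX with torch.onnx.export; the model, tensor shapes, and output file name are all illustrative, not part of any required setup −

import torch
import torch.nn as nn

# A tiny stand-in model; replace it with your trained network
model = nn.Linear(4, 2)
model.eval()

# torch.onnx.export traces the model with a dummy input of the right shape
dummy_input = torch.randn(1, 4)
torch.onnx.export(
    model,
    dummy_input,
    "linear.onnx",              # output file name (illustrative)
    input_names=["input"],
    output_names=["output"],
)
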
Setting Up ONNX for Other Languages

C#/C++/WinML

For C# and C++ projects, ONNX Runtime offers native support for Windows ML (WinML) and GPU acceleration. We can install ONNX Runtime for CPU in C# using the following −

dotnet add package Microsoft.ML.OnnxRuntime

Similarly, use the following to install ONNX Runtime for GPU (CUDA) −

dotnet add package Microsoft.ML.OnnxRuntime.Gpu

JavaScript

ONNX Runtime is also available for JavaScript in both browser and Node.js environments. Following is the command to install ONNX Runtime for browsers −

npm install onnxruntime-web

Similarly, to install ONNX Runtime for Node.js use the following −

npm install onnxruntime-node

Setting Up ONNX for Mobile (iOS/Android)

ONNX Runtime can be set up for mobile platforms, including iOS and Android.

  • iOS: Add the ONNX Runtime pod to your Podfile and run pod install −

    pod 'onnxruntime-c'
    
  • Android: In your Android Studio project, add ONNX Runtime to your build.gradle file −

    dependencies {
       implementation 'com.microsoft.onnxruntime:onnxruntime-android:latest.release'
    }
    