Qualcomm AI Hub
Qualcomm AI Hub contains a large collection of pretrained AI models that are optimized to run on the NPU of Dragonwing hardware.
End-to-end examples
Here's a list of example applications (in Python) that implement models from AI Hub that run on the NPU of your Dragonwing board:
To run other models, keep reading!
Finding supported models
Models in AI Hub are categorized by supported Qualcomm chipset. To see models that will run on your development kit:
Go to the model list.
Under 'Chipset', select:
RB3 Gen 2 Vision Kit: 'Qualcomm QCS6490 (Proxy)'
RUBIK Pi 3: 'Qualcomm QCS6490 (Proxy)'
IQ-9075 EVK: 'Qualcomm QCS9075 (Proxy)'
Under 'Model precision', select: 'Quantized'. The NPU on your Dragonwing board only runs quantized models.
Deploying a model to NPU (Python)
As an example, let's deploy the Lightweight-Face-Detection model.
Running the example repository
All AI Hub models come with an example repository. This is a good starting point, as it shows exactly how to run the model: what the input to your network should look like, and how to interpret the output (here, how to map the output tensor to bounding boxes). The example repositories do NOT run on the NPU or GPU yet - they run without acceleration. Let's see what our input and output should look like before we move this model to the NPU.
On the AI Hub page for Lightweight-Face-Detection, click "Model repository". This links you to a README file with instructions on how to run the example repository.
To deploy this model, open a terminal on your development board (or an ssh session to it), and:
Create a new venv and install some base packages:
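For example (the venv name is arbitrary; the model-specific packages are installed in a later step, following the model's README):

```bash
# Create and activate a fresh virtual environment, then update the basics
python3 -m venv .venv-aihub
source .venv-aihub/bin/activate
pip3 install --upgrade pip setuptools wheel
```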
Download an image with a face (640x480 resolution, JPG format) onto your development board, e.g. via:

Input image with three people [source](https://www.pexels.com/photo/three-people-looking-excited-5622566/)
Input resolution: AI Hub models require correctly sized inputs. You can find the required resolution under "Technical Details > Input resolution", listed as HEIGHT x WIDTH (here 480x640, i.e. a 640x480 image in width x height); or inspect the size of the input tensor in the TFLite or ONNX file.
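If you'd rather inspect the input tensor programmatically, here's a quick sketch using the tflite-runtime package (the .tflite filename is a placeholder for the model you download later in this tutorial):

```python
import tflite_runtime.interpreter as tflite

# Load the model and print every input tensor's shape and data type
interpreter = tflite.Interpreter(model_path="lightweight_face_detection.tflite")  # placeholder filename
interpreter.allocate_tensors()
for detail in interpreter.get_input_details():
    # e.g. shape [1 480 640 1] => a 640x480, single-channel input
    print(detail["name"], detail["shape"], detail["dtype"])
```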
Follow the instructions under 'Example & Usage' for the Lightweight-Face-Detection model:
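The authoritative commands are in the model's README; as a sketch, assuming the standard qai-hub-models package layout (the extra name and module path are derived from the face_det_lite model id, so double-check them against the README):

```bash
# Install the model package (copy the exact extra name from the README)
pip install "qai-hub-models[face-det-lite]"

# List the demo's options, then point it at the image you downloaded
python -m qai_hub_models.models.face_det_lite.demo --help
```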
You can find the output image in out/FaceDetLitebNet_output.png. If you're connected over ssh, you can copy the output image back to your host computer via:
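For example (the username, board IP and remote path are placeholders - adjust them to your board and to the directory you ran the demo from):

```bash
# Run on your host computer, not on the board
scp ubuntu@<board-ip>:~/out/FaceDetLitebNet_output.png .
```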

Lightweight face detection output
Alright! We have a working model. For reference, on the RB3 Gen 2 Vision Kit, running this model takes 189.7ms per inference.
Porting the model to NPU
Now that we have a working reference model, let's run it on the NPU. There are three parts that you need to implement.
You need to preprocess the data, e.g. convert the image into features that you can pass to the neural network.
You need to export the model to ONNX or TFLite, and run the model through LiteRT or ONNX Runtime.
You need to postprocess the output, e.g. convert the output of the neural network to bounding boxes of faces.
Running the model is straightforward, as you can read on the LiteRT and ONNX Runtime pages. However, the pre- and postprocessing code might not be...
Preprocessing inputs
For image models, most AI Hub models take a matrix of shape (HEIGHT, WIDTH, CHANNELS) (LiteRT) or (CHANNELS, HEIGHT, WIDTH) (ONNX), scaled from 0..1. If the model has 1 channel, convert the image to grayscale first. If your model is quantized (most likely), you'll also need to read the zero_point and scale, and quantize the pixels accordingly (this is easy in LiteRT, as .tflite files contain the quantization parameters, but ONNX files do not carry them). Typically you'll end up with data scaled linearly to 0..255 (uint8) or -128..127 (int8) for quantized models - so that's relatively easy. A function that demonstrates all this in Python is included in the example code below (def load_image_litert).
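As a preview, here's a condensed sketch of what such a helper can look like for LiteRT (this is a generic version, not the exact code from the example below, and - as explained next - your model's preprocessing may differ):

```python
import numpy as np
from PIL import Image


def load_image_litert(path, input_details):
    """Load an image and scale/quantize it for a LiteRT interpreter input.

    input_details is one entry of interpreter.get_input_details().
    """
    _, height, width, channels = input_details["shape"]

    img = Image.open(path).convert("L" if channels == 1 else "RGB")
    img = img.resize((width, height))

    # Scale pixels linearly to 0..1 first
    data = np.asarray(img, dtype=np.float32) / 255.0
    if channels == 1:
        data = data[..., np.newaxis]

    # For quantized models, map the 0..1 floats to the integer range using the
    # quantization parameters stored in the .tflite file
    scale, zero_point = input_details["quantization"]
    if scale != 0:
        info = np.iinfo(input_details["dtype"])
        data = np.clip(np.round(data / scale + zero_point), info.min, info.max)

    # Add the batch dimension and cast to the expected dtype
    return np.expand_dims(data.astype(input_details["dtype"]), axis=0)
```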
HOWEVER... this is not guaranteed, and this is where the AI Hub example code comes in. Every AI Hub example contains the exact code used to scale inputs. In our current example - Lightweight-Face-Detection - the input is shaped (480, 640, 1). But if you look at the preprocessing code, the data is not converted to grayscale; instead, only the blue channel of an RGB image is taken:
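Paraphrased (see the model repository for the exact source), the idea is:

```python
# Not a grayscale conversion: literally the blue channel of the RGB input,
# kept as a single channel ("face.jpg" is a placeholder filename)
import numpy as np
from PIL import Image

img = np.asarray(Image.open("face.jpg").convert("RGB"))  # (480, 640, 3)
blue = img[:, :, 2:3]                                     # (480, 640, 1)
```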
These kinds of things are very easy to get wrong. So if you see mismatching results between your implementation and the AI Hub example: read the code. This applies even more to non-image inputs (e.g. audio). Use the demo code to understand what the model actually expects.
Postprocessing outputs
The same applies to postprocessing. For example, there's no standard way of mapping the output of a neural network to bounding boxes (to detect faces). For Lightweight-Face-Detection you can find the code here: face_det_lite/app.py#L77.
If you're targeting Python, it's often easiest to copy the postprocessing code into your application, as AI Hub has a lot of dependencies that you might not want. In addition, the postprocessing code operates on PyTorch tensors, while your inference runs under LiteRT or ONNX Runtime, so you'll need to change some small aspects. We'll show this just below in the end-to-end example.
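In short, the change usually amounts to wrapping the runtime's numpy outputs as PyTorch tensors (and matching the expected layout) before handing them to the copied code. A minimal sketch, with illustrative names and a dummy output:

```python
import numpy as np
import torch

# LiteRT and ONNX Runtime return numpy arrays, while the copied AI Hub
# postprocessing expects torch tensors (here a dummy, dequantized output)
output_np = np.zeros((1, 60, 80, 1), dtype=np.float32)
output_torch = torch.from_numpy(output_np)

# Many postprocessing functions also expect (BATCH, CHANNELS, H, W)
output_torch = output_torch.permute(0, 3, 1, 2)
```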
End-to-end example (Python)
With the explanation behind us, let's look at some code.
Open a terminal on your development board, and set up the base requirements for this example:
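A sketch of the setup (the package list is an assumption: numpy and Pillow for the pre- and postprocessing code; install LiteRT or ONNX Runtime itself as described on their respective pages in this guide):

```bash
# New virtual environment for the end-to-end example
python3 -m venv .venv-face-detection
source .venv-face-detection/bin/activate
pip3 install --upgrade pip
pip3 install numpy pillow
```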
The NPU only supports uint8/int8 quantized models. Fortunately AI Hub contains pre-quantized and optimized models already. You can either:
Download the model for this tutorial (mirrored on CDN):
Or, for any other model, download it from AI Hub and push it to your development board:
Go to Lightweight-Face-Detection.
Click "Download model".
Select "TFLite" for runtime, and "w8a8" for precision.

Downloading w8a8 quantized model from AI Hub in TFLite format
If your model is only available in ONNX format, see Run models using ONNX Runtime for instructions. The same principles as in this tutorial apply.
Download the model.
If you're not downloading the model directly on your Dragonwing development board, you'll need to push the model over ssh:
Find the IP address of your development board (run this on the board itself), then push the .tflite file from your computer.
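A sketch of both steps (username, model filename and destination path are placeholders):

```bash
# 1) On the development board: print its IP address(es)
hostname -I

# 2) On your host computer: push the downloaded model to the board
scp lightweight_face_detection_w8a8.tflite ubuntu@<board-ip>:~/
```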
Create a new file face_detection.py. This file contains the model invocation, plus the preprocessing and postprocessing code from the AI Hub example (see inline comments).
Run the model on the CPU:
This already brings down our time per inference from 189.7ms to 35.6ms.
Run the model on the NPU:
🎉 That's it. By quantizing this model and porting it to the NPU we've sped the model up 79 times (!). Hopefully you now have a good idea of what AI Hub offers, and of the potential of the NPU and the AI Engine Direct SDK. You're not limited to Python either; e.g. the LiteRT page has C++ examples as well.
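For reference, the CPU and NPU runs of face_detection.py differ only in whether Qualcomm's QNN delegate is loaded when the LiteRT interpreter is created. A minimal sketch (the model filename is a placeholder, and the delegate library name and backend option should be checked against the LiteRT page for your board):

```python
import time
import numpy as np
import tflite_runtime.interpreter as tflite

MODEL = "lightweight_face_detection_w8a8.tflite"  # placeholder filename
USE_NPU = True

delegates = []
if USE_NPU:
    # QNN delegate with the HTP (NPU) backend; without it the model runs on the CPU
    delegates.append(tflite.load_delegate("libQnnTFLiteDelegate.so",
                                          options={"backend_type": "htp"}))

interpreter = tflite.Interpreter(model_path=MODEL, experimental_delegates=delegates)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]

# Dummy input just to time an inference; the real script feeds the preprocessed image
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))

start = time.perf_counter()
interpreter.invoke()
print(f"Inference took {(time.perf_counter() - start) * 1000:.1f}ms")
```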
Deploying a model to NPU (Edge Impulse)
Image classification, visual regression, and certain object detection models can be deployed through Edge Impulse.
Known limitations: classification models are currently missing their final Softmax() layer, and YOLO-X does not work with the TFLite version currently shipped in Edge Impulse.