Edge Impulse

Edge Impulse is the easiest way to build new edge AI models for Qualcomm Dragonwing devices. It's an end-to-end platform that helps you build datasets, train models, and run models with full hardware acceleration. It supports building AI models using audio, image and other sensor data - or bringing your own model in a variety of formats.


Train an AI model

To start building with Edge Impulse:

  1. Make sure you've followed the device setup for your development board.

  2. Sign up for a free developer account at studio.edgeimpulse.com.

  3. From the terminal or an ssh session on your development board, install Node.js 22 from the NodeSource repository:

    # Remove any existing Node.js installation (if needed)
    sudo rm -f /usr/local/bin/node /usr/local/bin/npm
    
    # Install Node.js v22
    curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
    sudo apt install -y nodejs
    
    # Verify installation (might need to open a new terminal window)
    node -v
    # ... Should return v22.x.x
  4. Then, install Edge Impulse for Linux, and connect your development board to Edge Impulse:

    # Install the CLI
    npm install -g edge-impulse-linux
    
    # Connect to your project (to switch projects, add --clean)
    edge-impulse-linux
    Qualcomm Dragonwing development board connected to Edge Impulse
  5. Follow one of the end-to-end tutorials to build your first AI model.

Tip: Use the target selector on the top right corner to select your Qualcomm Dragonwing development board, and get accurate performance information.

  6. To run your model, from the terminal or ssh session on your development board:

    edge-impulse-linux-runner

    This will automatically build and download your model, and run it on the NPU (quantized models only).

    Or, to manually download the EIM file, search for "Linux (AARCH64 with Qualcomm QNN)" in the Deployment page in your Edge Impulse project.

    An object tracking model running on the NPU of a Qualcomm Dragonwing development board
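
If you downloaded the EIM file manually from the Deployment page, you can point the runner at the local file instead of letting it fetch the model. A minimal sketch — the `--model-file` flag and the file name below are assumptions; check `edge-impulse-linux-runner --help` for the exact option on your CLI version:

    # Run a locally downloaded model file
    # (flag name assumed; verify with `edge-impulse-linux-runner --help`)
    edge-impulse-linux-runner --model-file ./model.eim
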

Bring Your Own Model

Edge Impulse also lets you bring your own model (BYOM) in SavedModel, ONNX, TFLite/LiteRT or scikit-learn format. Models deployed through BYOM are fully supported on Dragonwing platforms, including NPU acceleration for quantized models. See Edge Impulse docs > Bring Your Own Model.

Tips

Seeing NPU performance

Dragonwing platforms have a powerful NPU (neural processing unit) which can drastically speed up AI inference. To see the effect that an NPU has on performance, you can run your model on the CPU via:

edge-impulse-linux-runner --force-target runner-linux-aarch64

For example, a quantized YOLO-based model with 7M parameters on the RB3 Gen 2 Vision Kit takes 47 ms per inference on the CPU, and just 2 ms on the NPU.
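
To compare the two targets side by side, you can capture the console output of both runs and compare the reported timings. A sketch, assuming the runner prints per-inference timings to the console:

    # Force CPU execution; save the output so you can compare timings afterwards
    edge-impulse-linux-runner --force-target runner-linux-aarch64 2>&1 | tee cpu-run.log

    # Default run, which uses the NPU for quantized models
    edge-impulse-linux-runner 2>&1 | tee npu-run.log

    # Look for the reported inference times in the two logs
    grep -i "ms" cpu-run.log npu-run.log
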
