LLMs using Genie
A select number of Large Language Models (LLMs) and Vision Language Models (VLMs) can run on the NPU on your Dragonwing development board using the Qualcomm Gen AI Inference Extensions (Genie). These models have been ported and optimized by Qualcomm to run as efficiently as possible on this hardware. Genie only supports a subset of manually ported models; if your favourite model is not listed, see Run LLMs / VLMs using llama.cpp to run models on the GPU as a fallback.
Not supported on IQ-9075 EVK: There are no available models yet for the IQ-9075 EVK. Use llama.cpp instead.
Installing AI Runtime SDK - Community Edition
First install the AI Runtime SDK - Community Edition. Open the terminal on your development board, or an ssh session to your development board, and run:
# Install the SDK
wget -qO- https://cdn.edgeimpulse.com/qc-ai-docs/device-setup/install_ai_runtime_sdk.sh | bash
# Use the SDK in your current session
source ~/.bash_profile
Finding supported models
Genie-compatible LLM models can be found in a few places:
The Aplux model zoo. Under 'Chipset', select:
RB3 Gen 2 Vision Kit: 'Qualcomm QCS6490'
RUBIK Pi 3: 'Qualcomm QCS6490'
IQ-9075 EVK: 'Qualcomm QCS9075'
Under 'NLP', select "Text Generation".
Qualcomm AI Hub. Under 'Chipset', select:
RB3 Gen 2 Vision Kit: 'Qualcomm QCS6490 (Proxy)'
RUBIK Pi 3: 'Qualcomm QCS6490 (Proxy)'
IQ-9075 EVK: 'Qualcomm QCS9075 (Proxy)'
Under 'Domain/Use Case', select "Generative AI".
As an example, let's deploy the Qwen2.5-0.5B-Instruct model - which runs on QCS6490-based Dragonwing development boards like the RUBIK Pi 3 and RB3 Gen 2 Vision Kit.
Running Qwen2.5-0.5B-Instruct
When you download a model you'll need 3 files:
One or more *.serialized.bin files - these contain the weights of the model.
tokenizer.json - a serialized configuration file that defines how text is split into tokens, mapping between characters, subwords, and their integer IDs used by an LLM. These can typically be downloaded from the model page on HuggingFace. A list of links for Genie-supported models is on quic/ai-hub-apps: LLM On-Device Deployment > Prepare Genie configs.
A Genie config file - with instructions on how to run this model through Genie. These can be found on GitHub for models in AI Hub: quic/ai-hub-apps: tutorials/llm_on_genie/configs/genie.
Let's grab all of these and run Qwen2.5-0.5B-Instruct. Open the terminal on your development board, or an ssh session to your development board, and:
Download the model onto your development board. Either:
Download the model from our CDN (we only host the Qwen model there):
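A sketch of that download using wget; the URL below is a placeholder, so substitute the exact path from this docs page:
# Placeholder URL - substitute the exact path from the docs
wget https://cdn.edgeimpulse.com/qc-ai-docs/models/Qwen2.5-0.5B-Instruct.zip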
Download the model from the Aplux model zoo:
Go to the Aplux model zoo: Qwen2.5-0.5B-Instruct.
Sign up for an Aplux account.
Under 'Device', select the QCS6490.
Click "Download Model & Test code".

(Image: Downloading Genie-compatible models for the QCS6490)
After downloading, push the ZIP file to your development board over ssh:
Find the IP address of your development board. Run on your development board:
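For example, with either of these standard Linux commands:
ip -4 addr show
# or simply:
hostname -I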
Push the .zip file. Run from your computer:
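A sketch using scp; the ZIP name, username, and IP address are assumptions, so substitute your own:
scp Qwen2.5-0.5B-Instruct.zip ubuntu@192.168.1.253:~/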
Unzip the model. From your development board:
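For example, assuming the ZIP name from the previous step:
unzip ~/Qwen2.5-0.5B-Instruct.zip -d ~/qwen2.5-0.5b-instruct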
Run your model:
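Genie ships a text-to-text runner, genie-t2t-run. A minimal sketch, assuming the unzipped bundle contains a Genie config named genie_config.json next to the weights and tokenizer, and that the model expects Qwen's ChatML prompt format:
cd ~/qwen2.5-0.5b-instruct
# Config and folder names are assumptions - match them to your downloaded bundle
genie-t2t-run -c genie_config.json -p "<|im_start|>user\nWhat is the capital of France?<|im_end|>\n<|im_start|>assistant\n"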
Great! You now have this LLM running under Genie.
Serving a UI or API through QAI AppBuilder
To use Genie models from your application you can use the QAI AppBuilder repository. The AppBuilder repo contains both an OpenAI-compatible chat completions API and a Web UI to interact with your model (just like llama.cpp).
Heavy development: The AppBuilder is under heavy development. We've tried to pin the versions as much as we can, but using newer versions of the AppBuilder might not work with the instructions below.
Install the AppBuilder:
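A minimal sketch, assuming the AppBuilder lives in the quic/ai-engine-direct-helper repository (referenced by the WebUI demo below); the repo's README has the authoritative install steps and pinned versions:
git clone --recursive https://github.com/quic/ai-engine-direct-helper.git
cd ai-engine-direct-helper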
Run the Web UI (from the samples/ directory):
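The script name below is hypothetical - check the samples/ README for the actual WebUI entry point:
cd samples
# Hypothetical entry point, serving on port 8976
python3 webui.py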
Now open http://192.168.1.253:8976 (replace with your IP) in your web browser (on your computer) to interact with the model. Make sure to select the model first using the "models" dropdown.

(Image: ai-engine-direct-helper WebUI demo)
You can also programmatically access this server using the OpenAI Chat Completions API, e.g. from Python:
Start the server (from the samples/ directory):
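The script name here is again hypothetical - check the samples/ README for the actual OpenAI-compatible API service:
cd samples
# Hypothetical entry point for the chat completions server
python3 genie_api_service.py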
From a new terminal, create a new venv and install requests:
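For example, using Python's built-in venv module:
python3 -m venv .venv
source .venv/bin/activate
pip3 install requests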
Create a new file chat.py:
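A minimal sketch of chat.py; the port and model name are assumptions, so check the server's startup logs for the actual address and its /v1/models endpoint for model names:
# chat.py - query the OpenAI-compatible chat completions endpoint
import requests

resp = requests.post(
    'http://localhost:8910/v1/chat/completions',  # hypothetical host/port
    json={
        'model': 'Qwen2.5-0.5B-Instruct',  # hypothetical name, see /v1/models
        'messages': [
            {'role': 'user', 'content': 'What is the capital of France?'},
        ],
    },
)
resp.raise_for_status()

# Print the assistant's reply from the first choice
print(resp.json()['choices'][0]['message']['content'])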
Run chat.py:
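With the venv from the previous step still active:
python3 chat.py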
(The model name in the response seems to always be IBM-Granite; you can disregard this.)
Tips and tricks
Downloading files from HuggingFace that require authentication
If you want to download files that require permission or authentication (e.g. the tokenizer.json file from Llama-3.2-1B-Instruct):
Go to the model page on HuggingFace, sign in (or sign up), and fill in the form to get access to the model.
Create a new HuggingFace access token with 'Read' permissions at https://huggingface.co/settings/tokens, and configure it on your development board:
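A sketch that stores the token in an environment variable (replace the hf_... value with your own token):
echo 'export HF_TOKEN=hf_xxxxxxxxxxxx' >> ~/.bash_profile
source ~/.bash_profile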
Once you're granted access you can now download the tokenizer:
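For example, passing the token as a bearer header to wget:
wget --header="Authorization: Bearer $HF_TOKEN" https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct/resolve/main/tokenizer.json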