Genie
A select number of Large Language Models (LLMs) and Vision Language Models (VLMs) can run on the NPU on your Dragonwing development board using the Qualcomm Gen AI Inference Extensions (Genie). These models have been ported and optimized by Qualcomm to run as efficiently as possible on the hardware. Genie only supports a subset of manually ported models, so if your favourite model is not listed, look at Run LLMs / VLMs using llama.cpp to run models on the GPU as a fallback.
Installing AI Runtime SDK - Community Edition
First install the AI Runtime SDK - Community Edition. Open the terminal on your development board, or an ssh session to your development board, and run:
wget -qO- https://cdn.edgeimpulse.com/qc-ai-docs/device-setup/install_ai_runtime_sdk_2.35.sh | bash
Finding supported models
Genie-compatible LLM models can be found in a few places:

Aplux model zoo:
- Under 'Chipset', select:
  - RB3 Gen 2 Vision Kit: 'Qualcomm QCS6490'
  - RUBIK Pi 3: 'Qualcomm QCS6490'
- Under 'NLP', select "Text Generation".

Qualcomm AI Hub:
- Under 'Chipset', select:
  - RB3 Gen 2 Vision Kit: 'Qualcomm QCS6490 (Proxy)'
  - RUBIK Pi 3: 'Qualcomm QCS6490 (Proxy)'
- Under 'Domain/Use Case', select "Generative AI".
As an example, let's deploy the Qwen2.5-0.5B-Instruct model - which runs on all Dragonwing development boards.
Running Qwen2.5-0.5B-Instruct
When you download a model you'll need 3 files:

1. One or more `*.serialized.bin` files - these contain the weights of the model.
2. `tokenizer.json` - a serialized configuration file that defines how text is split into tokens, mapping characters and subwords to the integer IDs used by the LLM. These can typically be downloaded from the model page on HuggingFace. A list of links for Genie-supported models is on quic/ai-hub-apps: LLM On-Device Deployment > Prepare Genie configs.
3. A Genie config file - with instructions on how to run this model through Genie (see the sketch below). These can be found on GitHub for models in AI Hub: quic/ai-hub-apps: tutorials/llm_on_genie/configs/genie.
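For reference, the Genie config is a JSON file that ties the other two files together: it points Genie at the tokenizer, the backend extensions, and the weight binaries. The exact schema varies per model and Genie version; based on the paths patched in the AppBuilder section below, it looks roughly like this (an illustrative, incomplete sketch - all file names here are hypothetical examples):

```json
{
  "dialog": {
    "tokenizer": {
      "path": "tokenizer.json"
    },
    "engine": {
      "backend": {
        "extensions": "htp-backend-extensions.json"
      },
      "model": {
        "binary": {
          "ctx-bins": [
            "model-part-1-of-2.serialized.bin",
            "model-part-2-of-2.serialized.bin"
          ]
        }
      }
    }
  }
}
```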
Let's grab all of these and run Qwen2.5-0.5B-Instruct. Open the terminal on your development board, or an ssh session to your development board, and:
Download the model onto your development board.
Go to the Aplux model zoo: Qwen2.5-0.5B-Instruct.
Sign up for an Aplux account.
Under 'Device', select the QCS6490.
Click "Download Model & Test code".
*Downloading Genie-compatible models for the QCS6490*

After downloading, push the ZIP file to your development board over ssh:
Find the IP address of your development board. Run on your development board:
```
ifconfig | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1'
# ...
# Example:
# 192.168.1.253
```
Push the .zip file. Run from your computer:
scp qnn229_qcs6490_cl4096.zip ubuntu@192.168.1.253:~/qnn229_qcs6490_cl4096.zip
Unzip the model. From your development board:
```
mkdir -p genie-models/
unzip -d genie-models/qwen2.5-0.5b-instruct/ qnn229_qcs6490_cl4096.zip
rm qnn229_qcs6490_cl4096.zip
```
Run your model:
```
cd genie-models/qwen2.5-0.5b-instruct/
genie-t2t-run -c ./qwen2.5-0.5b-instruct-htp.json -p '<|im_start|>system
You are Qwen, created by Alibaba Cloud. You are a helpful assistant that responds in English.<|im_end|><|im_start|>user
What is the capital of the Netherlands?<|im_end|><|im_start|>assistant'

# Using libGenie.so version 1.9.0
#
# [BEGIN]:
# The capital of the Netherlands is Amsterdam.[END]
```
Great! You now have this LLM running under Genie.
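If you want to call the model from your own scripts without the full AppBuilder setup described below, a simple option is to shell out to `genie-t2t-run` and parse the completion from between the `[BEGIN]:` and `[END]` markers. A minimal sketch - it assumes `genie-t2t-run` is on your PATH and that the output format matches the example above:

```python
import subprocess

def genie_generate(config_path: str, prompt: str) -> str:
    """Run genie-t2t-run and return just the generated text."""
    output = subprocess.run(
        ['genie-t2t-run', '-c', config_path, '-p', prompt],
        capture_output=True, text=True, check=True,
    ).stdout
    # Assumption: Genie prints the completion between "[BEGIN]:" and "[END]"
    # on stdout, as in the example output above.
    begin = output.index('[BEGIN]:') + len('[BEGIN]:')
    end = output.index('[END]', begin)
    return output[begin:end].strip()

if __name__ == '__main__':
    prompt = ('<|im_start|>system\nYou are a helpful assistant.<|im_end|>'
              '<|im_start|>user\nWhat is the capital of the Netherlands?<|im_end|>'
              '<|im_start|>assistant')
    print(genie_generate('./qwen2.5-0.5b-instruct-htp.json', prompt))
```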
Serving a UI or API through QAI AppBuilder
To use Genie models from your application you can use the QAI AppBuilder repository. The AppBuilder repo provides both an OpenAI-compatible chat completions API and a Web UI to interact with your model (just like llama.cpp).
Heavy development: The AppBuilder is under heavy development. We've tried to pin the versions as much as we can, but using newer versions of the AppBuilder might not work with the instructions below.
Install the AppBuilder:
```
sudo apt install -y yq

# Clone the repository (can switch back to upstream once https://github.com/quic/ai-engine-direct-helper/pull/16 is landed)
git clone https://github.com/edgeimpulse/ai-engine-direct-helper
cd ai-engine-direct-helper
git submodule update --init --recursive
git checkout linux-paths

# Create a new venv
python3 -m venv .venv
source .venv/bin/activate

# Build the wheel
pip3 install setuptools
python setup.py bdist_wheel
pip3 install ./dist/qai_appbuilder-*-linux_aarch64.whl

# Install other dependencies
pip3 install \
    uvicorn==0.35.0 \
    pydantic_settings==2.10.1 \
    fastapi==0.116.1 \
    langchain==0.3.27 \
    langchain-core==0.3.75 \
    langchain-community==0.3.29 \
    sse_starlette==3.0.2 \
    pypdf==6.0.0 \
    python-pptx==1.0.2 \
    docx2txt==0.9 \
    openai==1.107.0 \
    json-repair==0.50.1 \
    qai_hub==0.36.0 \
    py3_wget==1.0.13 \
    torch==2.8.0 \
    transformers==4.56.1 \
    gradio==5.44.1 \
    diffusers==0.35.1

# Where you've downloaded the weights, and created the config files before
WEIGHTS_DIR=~/genie-models/qwen2.5-0.5b-instruct/
MODEL_NAME=qwen2_5-0_5b-instruct

# Create a new directory and link the files
mkdir -p samples/genie/python/models/$MODEL_NAME
cd samples/genie/python/models/$MODEL_NAME

# Patch up config
cp $WEIGHTS_DIR/*instruct-htp.json config.json
jq --arg pwd "$PWD" '.dialog.tokenizer.path |= if startswith($pwd + "/") then . else $pwd + "/" + . end' config.json > tmp && mv tmp config.json
jq --arg pwd "$PWD" '.dialog.engine.backend.extensions |= if startswith($pwd + "/") then . else $pwd + "/" + . end' config.json > tmp && mv tmp config.json
jq --arg pwd "$PWD" '.dialog.engine.model.binary["ctx-bins"] |= map(if startswith($pwd + "/") then . else $pwd + "/" + . end)' config.json > tmp && mv tmp config.json

# Symlink other files
ln -s $WEIGHTS_DIR/*.json .
ln -s $WEIGHTS_DIR/*okenizer.json tokenizer.json
ln -s $WEIGHTS_DIR/*.serialized.bin .

echo "prompt_tags_1: <|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n<|im_start|>user\nGive me a short introduction to large language model.
prompt_tags_2: <|im_end|>\n<|im_start|>assistant\n" > prompt.conf

# Navigate back to samples/ directory
cd ../../../..

# Create empty tokenizer files, otherwise they will be downloaded... (which will fail)
if [ ! -f genie/python/models/Phi-3.5-mini/tokenizer.json ]; then
    echo '{}' > genie/python/models/Phi-3.5-mini/tokenizer.json
fi
if [ ! -f genie/python/models/IBM-Granite-v3.1-8B/tokenizer.json ]; then
    echo '{}' > genie/python/models/IBM-Granite-v3.1-8B/tokenizer.json
fi
```
Run the Web UI (from the `samples/` directory):

```
# Find the IP address of your development board
ifconfig | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1'
# ...
# Example:
# 192.168.1.253

# Run the Web UI
python webui/GenieWebUI.py
```
Now open http://192.168.1.253:8976 (replace with your IP) in your web browser (on your computer) to interact with the model. Make sure to select the model first using the "models" dropdown.
*ai-engine-direct-helper WebUI demo*

You can also programmatically access this server using the OpenAI Chat Completions API. E.g. from Python:
Start the server (from the `samples/` directory):

```
python genie/python/GenieAPIService.py --modelname "qwen2_5-0_5b-instruct" --loadmodel --profile
```
From a new terminal, create a new venv and install `requests`:

```
python3 -m venv .venv-chat
source .venv-chat/bin/activate
pip3 install requests
```
Create a new file `chat.py`:

```python
import requests

# if running from your own computer, replace localhost with the IP address of your development board
url = "http://localhost:8910/v1/chat/completions"

payload = {
    "model": "qwen2_5-0_5b-instruct",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain Qualcomm in one sentence."}
    ],
    "temperature": 0.7,
    "max_tokens": 200
}

response = requests.post(url, headers={
    "Content-Type": "application/json"
}, json=payload)
print(response.json())
```
Run `chat.py`:

```
python3 chat.py
# {'id': 'genie-llm', 'model': 'IBM-Granite', 'object': 'chat.completion', 'created': 1757512757, 'choices': [{'index': 0, 'message': {'role': 'assistant', 'content': 'Qualcomm is a leading American technology company that designs, manufactures, and markets mobile phone chips and other wireless communication products.', 'tool_call_id': None, 'tool_calls': None}, 'finish_reason': 'stop'}], 'usage': {'prompt_tokens': 0, 'completion_tokens': 0, 'total_tokens': 0}}
```
(The server seems to always return `IBM-Granite` as the model name; you can disregard this.)
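Since GenieAPIService exposes an OpenAI-compatible endpoint, you can also talk to it with the official `openai` Python client (already pinned in the AppBuilder dependencies above) instead of raw `requests`. A minimal sketch - it assumes the local service does not validate API keys, so any placeholder value works:

```python
from openai import OpenAI

# Point the client at GenieAPIService instead of api.openai.com.
# The api_key is a placeholder - assumed to be ignored by the local service.
client = OpenAI(base_url="http://localhost:8910/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="qwen2_5-0_5b-instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain Qualcomm in one sentence."},
    ],
    temperature=0.7,
    max_tokens=200,
)
print(response.choices[0].message.content)
```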
Tips and tricks
Downloading files from HuggingFace that require authentication
If you want to download files that require permission or authentication, e.g. the `tokenizer.json` file from Llama-3.2-1B-Instruct:
Go to the model page on HuggingFace, sign in (or sign up), and fill in the form to get access to the model.
Create a new HuggingFace access token with 'Read' permissions at https://huggingface.co/settings/tokens, and configure it on your development board:
```
export HF_TOKEN=hf_gs...
# Optionally add ^ to ~/.bash_profile to ensure it gets loaded automatically in the future.
```
Once you're granted access, you can download the tokenizer:
wget --header="Authorization: Bearer $HF_TOKEN" https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct/resolve/main/tokenizer.json
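Alternatively, you can fetch the same file from Python with the `huggingface_hub` package (`pip3 install huggingface_hub`), which reads `HF_TOKEN` from the environment automatically:

```python
from huggingface_hub import hf_hub_download

# Downloads tokenizer.json into the local Hugging Face cache and returns its path.
# Requires that you've been granted access to the repo and that HF_TOKEN is set.
path = hf_hub_download(
    repo_id="meta-llama/Llama-3.2-1B-Instruct",
    filename="tokenizer.json",
)
print(path)
```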