Begin by installing the latest version of [Android Studio](https://developer.android.com/studio).

Next, install the following command-line tools:
- `cmake`; a cross-platform build system.
- `python3`; an interpreted programming language, used by the project to fetch dependencies and models.
- `git`; a version control system that you use to clone the Voice Assistant codebase.
- `adb`; Android Debug Bridge, used to communicate with and control Android devices.

Install these tools with the appropriate command for your OS:
{{< tabpane code=true >}}
{{< tab header="Linux/Ubuntu" language="bash">}}
sudo apt update
sudo apt install git adb cmake python3 -y
{{< /tab >}}
{{< tab header="macOS" language="bash">}}
brew install git android-platform-tools cmake python
{{< /tab >}}
{{< /tabpane >}}

Ensure that the correct version of Python is installed; the project requires Python 3.9 or later:

{{< tabpane code=true >}}
{{< tab header="Linux/Ubuntu" language="bash">}}
python3 --version
{{< /tab >}}
{{< tab header="macOS" language="bash">}}
python3 --version
{{< /tab >}}
{{< /tabpane >}}
This process includes the following stages:
- A neural network analyzes these features to predict the most likely transcription based on grammar and context.
- The recognized text is passed to the next stage of the pipeline.

The voice assistant pipeline imports and builds a separate module to provide this STT functionality. You can access this at:

```
https://gitlab.arm.com/kleidi/kleidi-examples/speech-to-text
```

You can build the module for the following platforms:

|Platform|Details|
|---|---|
|Linux|x86_64: KleidiAI is disabled by default; aarch64: KleidiAI is enabled by default.|
|Android|Cross-compile for an Android device; ensure the Android NDK path is set and the correct toolchain file is provided. KleidiAI is enabled by default.|
|macOS|Native build or cross-compilation for a Mac device. KleidiAI and SME kernels can be used if available on the device.|

Currently, this module uses [whisper.cpp](https://github.com/ggml-org/whisper.cpp) and wraps the backend library in a thin C++ layer. The module also provides JNI bindings for developers targeting Android-based applications.

{{% notice %}}
You can get more information on how to build and use this module [here](https://gitlab.arm.com/kleidi/kleidi-examples/speech-to-text/-/blob/main/README.md?ref_type=heads).
{{% /notice %}}
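
To illustrate how an Android application might consume such JNI bindings, here is a minimal Kotlin sketch. The class, method, and library names are hypothetical placeholders rather than the module's actual API; refer to the module's README for the real bindings.

```kotlin
// Hypothetical Kotlin wrapper around a JNI speech-to-text binding.
// All names below are illustrative; the real module defines its own API.
class SpeechToText {
    companion object {
        init {
            // Load the native library produced by the C++/JNI build (hypothetical name).
            System.loadLibrary("stt-jni")
        }
    }

    // Native method implemented in the C++ layer that wraps whisper.cpp.
    private external fun nativeTranscribe(samples: FloatArray, sampleRate: Int): String

    // Transcribe mono PCM audio samples into text.
    fun transcribe(samples: FloatArray, sampleRate: Int = 16_000): String =
        nativeTranscribe(samples, sampleRate)
}
```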

## Large Language Model

Large Language Models (LLMs) enable natural language understanding and, in this application, are used for question-answering.
The text transcription from the previous part of the pipeline is used as input to the LLM.

By default, the LLM runs asynchronously, streaming tokens as they are generated. The UI updates in real time with each token, which is also passed to the final pipeline stage.
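
As an illustration of this asynchronous flow, the Kotlin sketch below collects streamed tokens and forwards them to the UI and to the next pipeline stage. The `LlmEngine` interface is a hypothetical stand-in for the application's real streaming API.

```kotlin
import kotlinx.coroutines.flow.*

// Hypothetical streaming interface: emits one token at a time as the LLM decodes.
interface LlmEngine {
    fun generate(prompt: String): Flow<String>
}

// Collect tokens as they are produced, updating the UI and forwarding each
// token to the next pipeline stage (text-to-speech).
suspend fun runAssistant(
    engine: LlmEngine,
    prompt: String,
    onUiUpdate: (String) -> Unit,
    onToken: (String) -> Unit
) {
    val response = StringBuilder()
    engine.generate(prompt).collect { token ->
        response.append(token)          // accumulate the full answer
        onUiUpdate(response.toString()) // refresh the UI with the partial text
        onToken(token)                  // hand the token to the speech stage
    }
}
```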

The voice assistant pipeline imports and builds a separate module to provide this LLM functionality. You can access this at:

```
https://gitlab.arm.com/kleidi/kleidi-examples/large-language-models
```

You can build the module for the following platforms:

|Platform|Details|
|---|---|
|Linux|x86_64: KleidiAI is disabled by default; aarch64: KleidiAI is enabled by default.|
|Android|Cross-compile for an Android device; ensure the Android NDK path is set and the correct toolchain file is provided. KleidiAI is enabled by default.|
|macOS|Native build or cross-compilation for a Mac device. KleidiAI and SME kernels can be used if available on the device.|

Currently, this module provides a thin C++ layer as well as JNI bindings for developers targeting Android-based applications. The supported backends are:

|Framework|Dependency|Input modalities supported|Output modalities supported|Neural Network|
|---|---|---|---|---|
|llama.cpp|https://github.com/ggml-org/llama.cpp|`image`, `text`|`text`|phi-2, Qwen2-VL-2B-Instruct|
|onnxruntime-genai|https://github.com/microsoft/onnxruntime-genai|`text`|`text`|phi-4-mini-instruct-onnx|
|mediapipe|https://github.com/google-ai-edge/mediapipe|`text`|`text`|gemma-2b-it-cpu-int4|



{{% notice %}}
You can get more information on how to build and use this module [here](https://gitlab.arm.com/kleidi/kleidi-examples/large-language-models/-/blob/main/README.md?ref_type=heads).
{{% /notice %}}
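
As a rough illustration of how an application could represent this backend choice, the Kotlin sketch below uses a hypothetical `LlmBackend` enum and configuration type. The model paths are illustrative only; the module's actual backend selection (at build time or runtime) is described in its README.

```kotlin
// Hypothetical backend selection; the real module may fix the backend at build time.
enum class LlmBackend { LLAMA_CPP, ONNXRUNTIME_GENAI, MEDIAPIPE }

data class LlmConfig(val backend: LlmBackend, val modelPath: String)

// Pick a model matching the selected backend (illustrative file names only,
// based on the models listed in the table above).
fun defaultConfig(backend: LlmBackend): LlmConfig = when (backend) {
    LlmBackend.LLAMA_CPP         -> LlmConfig(backend, "models/Qwen2-VL-2B-Instruct.gguf")
    LlmBackend.ONNXRUNTIME_GENAI -> LlmConfig(backend, "models/phi-4-mini-instruct-onnx")
    LlmBackend.MEDIAPIPE         -> LlmConfig(backend, "models/gemma-2b-it-cpu-int4.bin")
}
```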

## Text-to-Speech

This part of the application pipeline uses the Android Text-to-Speech API along with additional logic to produce smooth, natural speech.

In synchronous mode, speech playback begins only after the full LLM response is received. By default, the application operates in asynchronous mode, where speech synthesis starts as soon as a full or partial sentence is ready. Remaining tokens are buffered and processed by the Android Text-to-Speech engine to ensure uninterrupted playback.
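
The following Kotlin sketch illustrates this buffering idea using the standard Android `TextToSpeech` API: tokens are accumulated until a sentence boundary appears, then queued with `QUEUE_ADD` so playback continues without gaps. This is a simplified illustration, not the application's actual implementation.

```kotlin
import android.content.Context
import android.speech.tts.TextToSpeech

// Buffers streamed LLM tokens and speaks complete sentences as they form,
// so playback can start before the full response is available.
class StreamingSpeaker(context: Context) : TextToSpeech.OnInitListener {
    private val tts = TextToSpeech(context, this)
    private val buffer = StringBuilder()
    private var utteranceCount = 0

    override fun onInit(status: Int) {
        // Production code would check the status and set a language here.
    }

    // Call this for every token emitted by the LLM.
    fun onToken(token: String) {
        buffer.append(token)
        if (token.any { it in ".?!" }) {
            flush()
        }
    }

    // Queue whatever has accumulated; QUEUE_ADD keeps playback uninterrupted.
    fun flush() {
        val sentence = buffer.toString().trim()
        if (sentence.isNotEmpty()) {
            tts.speak(sentence, TextToSpeech.QUEUE_ADD, null, "utt-${utteranceCount++}")
        }
        buffer.clear()
    }
}
```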

You are now familiar with the building blocks of this application, so you can build the voice assistant for an Android device in the next step.
By default, Android devices ship with developer mode disabled. To enable it, follow the steps for your device.

Once developer mode is enabled, connect your phone to your computer with USB. It should appear as a running device in the top toolbar. Select the device and click **Run** (a small green triangle, as shown below). This transfers the app to your phone and launches it.

In the graphic below, a Samsung Galaxy Z Flip 6 phone is connected to the USB cable:
![upload image alt-text#center](upload.png "Upload the Voice App")

## Launch the Voice Assistant

The app starts with this welcome screen:

![welcome image alt-text#center](voice_assistant_view1.png "Welcome Screen")

Tap **Press to talk** at the bottom of the screen to begin speaking your request.

## Voice Assistant controls

You can use application controls to enable extra functionality or gather performance data.

|Button|Control name|Description|
|---|---|---|
|1|Performance counters|Performance counters are hidden by default. Click this to show the speech recognition time and the LLM encode and decode rates.|
|2|Speech generation|Speech generation is disabled by default. Click this to use Android Text-to-Speech and get audible answers.|
|3|Reset conversation|By default, the application keeps context so you can ask follow-up questions. Click this to reset the Voice Assistant's conversation history.|

Click the icon circled in red in the top left corner to show or hide these metrics:

![performance image alt-text#center](voice_assistant_view2.png "Performance Counters")

### Multimodal Question Answering

If you have built the application using the default `llama.cpp` backend, you can also use it in multimodal `(image + text)` question answering mode.

For this, click the image button first:

![use image alt-text#center](voice_assistant_multimodal_1.png "Add image button")

This brings up the photos you can choose from:

![choose image alt-text#center](choose_image.png "Choose image from the gallery")

Choose the image and add it to the Voice Assistant:

![add image alt-text#center](add_image.png "Add image to the question")

You can now ask questions related to this image; the large language model uses both the image and the text for multimodal question answering.

![ask question image alt-text#center](voice_assistant_multimodal_2.png "Ask a question about the image")

Now that you have explored how the Android application is set up and built, you can see in detail how the KleidiAI library is used in the next step.


KleidiAI simplifies development by abstracting away low-level optimization: developers can write high-level code while the KleidiAI library selects the most efficient implementation at runtime based on the target hardware. This is possible thanks to its deeply optimized micro-kernels tailored for Arm architectures.

As newer versions of the architecture become available, KleidiAI becomes even more powerful: simply updating the library allows applications like the Voice Assistant to take advantage of the latest architectural improvements, such as SME2, without requiring any code changes. This means better performance on newer devices with no additional effort from developers.

title: Accelerate Voice Assistant performance with KleidiAI and SME2

minutes_to_complete: 30

who_is_this_for: This is an introductory topic for developers who want to explore the multi-model pipeline of a Voice Assistant application and accelerate its performance on Android devices using KleidiAI and SME2.

learning_objectives:
- Compile and run a Voice Assistant Android application.
- Optimize performance using KleidiAI and SME2.

prerequisites:
- An Android phone that supports the i8mm Arm architecture feature (8-bit integer matrix multiplication). This Learning Path was tested on a Google Pixel 8 Pro.
- A development machine with [Android Studio](https://developer.android.com/studio) installed.

author:
- Arnaud de Grandmaison
- Nina Drozd

skilllevels: Introductory
subjects: Performance and Architecture
armips:
tools_software_languages:
- Java
- Kotlin
- C++
operatingsystems:
- Linux
- macOS
- Android

further_reading:
