Adding learning path for distributed inference with llama.cpp on Arm #2150

Merged · 3 commits · Jul 28, 2025
193 changes: 97 additions & 96 deletions assets/contributors.csv
@@ -1,96 +1,97 @@
author,company,github,linkedin,twitter,website
Jason Andrews,Arm,jasonrandrews,jason-andrews-7b05a8,,
Pareena Verma,Arm,pareenaverma,pareena-verma-7853607,,
Ronan Synnott,Arm,,ronansynnott,,
Florent Lebeau,Arm,,,,
Brenda Strech,Remote.It,bstrech,bstrech,@remote_it,www.remote.it
Liliya Wu,Arm,Liliyaw,liliya-wu-8b6227216,,
Julio Suarez,Arm,jsrz,juliosuarez,,
Gabriel Peterson,Arm,gabrieldpeterson,gabrieldpeterson,@gabedpeterson,https://corteximplant.com/@gabe
Christopher Seidl,Arm,,,,
Michael Hall,Arm,,,,
Kasper Mecklenburg,Arm,,,,
Mathias Brossard,Arm,,,,
Julie Gaskin,Arm,,,,
Pranay Bakre,Arm,,,,
Elham Harirpoush,Arm,,,,
Frédéric -lefred- Descamps,OCI,,,,lefred.be
Kristof Beyls,Arm,,,,
David Spickett,Arm,,,,
Uma Ramalingam,Arm,uma-ramalingam,,,
Konstantinos Margaritis,VectorCamp,markos,konstantinosmargaritis,@freevec1,https://vectorcamp.gr/
Diego Russo,Arm,diegorusso,diegor,diegor,https://www.diegor.it
Jonathan Davies,Arm,,,,
Zhengjun Xing,Arm,,,,
Leandro Nunes,Arm,,,,
Dawid Borycki,,dawidborycki,,,
Ying Yu,Arm,,,,
Bolt Liu,Arm,,,,
Roberto Lopez Mendez,Arm,,,,
Arnaud de Grandmaison,Arm,Arnaud-de-Grandmaison-ARM,arnauddegrandmaison,,
Jose-Emilio Munoz-Lopez,Arm,,,,
James Whitaker,Arm,,,,
Johanna Skinnider,Arm,,,,
Varun Chari,Arm,,,,
Adnan AlSinan,Arm,,,,
Graham Woodward,Arm,,,,
Basma El Gaabouri,Arm,,,,
Gayathri Narayana Yegna Narayanan,Arm,,,,
Alexandros Lamprineas,Arm,,,,
Annie Tallund,Arm,annietllnd,annietallund,,
Cyril Rohr,RunsOn,crohr,cyrilrohr,,
Rin Dobrescu,Arm,,,,
Przemyslaw Wirkus,Arm,PrzemekWirkus,przemyslaw-wirkus-78b73352,,
Nader Zouaoui,Day Devs,nader-zouaoui,nader-zouaoui,@zouaoui_nader,https://daydevs.com/
Alaaeddine Chakroun,Day Devs,Alaaeddine-Chakroun,alaaeddine-chakroun,,https://daydevs.com/
Koki Mitsunami,Arm,,kmitsunami,,
Chen Zhang,Zilliz,,,,
Tianyu Li,Arm,,,,
Georgios Mermigkis,VectorCamp,gMerm,georgios-mermigkis,,https://vectorcamp.gr/
Ben Clark,Arm,,,,
Han Yin,Arm,hanyin-arm,nacosiren,,
Willen Yang,Arm,,,,
Daniel Gubay,,,,,
Paul Howard,,,,,
Iago Calvo Lista,Arm,,,,
Stephen Theobald,Arm,,,,
ThirdAI,,,,,
Preema Merlin Dsouza,,,,,
Dominica Abena O. Amanfo,,,,,
Arm,,,,,
Albin Bernhardsson,,,,,
Przemyslaw Wirkus,,,,,
Zach Lasiuk,,,,,
Daniel Nguyen,,,,,
Joe Stech,Arm,JoeStech,joestech,,
visualSilicon,,,,,
Konstantinos Margaritis,VectorCamp,,,,
Kieran Hejmadi,,,,,
Alex Su,,,,,
Chaodong Gong,,,,,
Owen Wu,Arm,,,,
Koki Mitsunami,,,,,
Nikhil Gupta,,,,,
Nobel Chowdary Mandepudi,Arm,,,,
Ravi Malhotra,Arm,,,,
Masoud Koleini,,,,,
Na Li,Arm,,,,
Tom Pilar,,,,,
Cyril Rohr,,,,,
Odin Shen,Arm,odincodeshen,odin-shen-lmshen,,
Avin Zarlez,Arm,AvinZarlez,avinzarlez,,https://www.avinzarlez.com/
Shuheng Deng,Arm,,,,
Yiyang Fan,Arm,,,,
Julien Jayat,Arm,JulienJayat-Arm,julien-jayat-a980a397,,
Geremy Cohen,Arm,geremyCohen,geremyinanutshell,,
Barbara Corriero,Arm,,,,
Nina Drozd,Arm,NinaARM,ninadrozd,,
Jun He,Arm,JunHe77,jun-he-91969822,,
Gian Marco Iodice,Arm,,,,
Aude Vuilliomenet,Arm,,,,
Andrew Kilroy,Arm,,,,
Peter Harris,Arm,,,,
Chenying Kuo,Adlink,evshary,evshary,,
William Liang,,wyliang,,,
Waheed Brown,Arm,https://github.com/armwaheed,https://www.linkedin.com/in/waheedbrown/,,
Aryan Bhusari,Arm,,https://www.linkedin.com/in/aryanbhusari,,
@@ -0,0 +1,50 @@
---
title: Distributed inference using llama.cpp

draft: true
cascade:
    draft: true

minutes_to_complete: 30

who_is_this_for: This Learning Path is for developers with some experience using llama.cpp who want to learn about distributed inference.

learning_objectives:
- Set up the main host and worker nodes using llama.cpp
- Run a large quantized model (e.g., Llama 3.1 405B) on CPUs in a distributed manner on Arm machines

prerequisites:
- An AWS Graviton4 c8g.16xlarge instance to test Arm performance optimizations, or any [Arm based instance](/learning-paths/servers-and-cloud-computing/csp/) from a cloud service provider or an on-premise Arm server.
- Familiarity with [Deploy a Large Language Model (LLM) chatbot with llama.cpp using KleidiAI on Arm servers](/learning-paths/servers-and-cloud-computing/llama-cpu)
- Familiarity with AWS

author: Aryan Bhusari

### Tags
skilllevels: Introductory
subjects: ML
armips:
- Neoverse
tools_software_languages:
- LLM
- GenAI
- AWS
operatingsystems:
- Linux



further_reading:
    - resource:
        title: Llama.cpp rpc-server code
        link: https://github.com/ggml-org/llama.cpp/tree/master/tools/rpc
        type: Code



### FIXED, DO NOT MODIFY
# ================================================================================
weight: 1 # _index.md always has weight of 1 to order correctly
layout: "learningpathall" # All files under learning paths have this same wrapper
learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content.
---
@@ -0,0 +1,8 @@
---
# ================================================================================
# FIXED, DO NOT MODIFY THIS FILE
# ================================================================================
weight: 21 # Set to always be larger than the content in this path to be at the end of the navigation.
title: "Next Steps" # Always the same, html page title.
layout: "learningpathall" # All files under learning paths have this same wrapper for Hugo processing.
---
@@ -0,0 +1,64 @@
---
title: Overview and Worker Node Configuration
weight: 2

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Before you begin
The instructions in this Learning Path are for any Arm server running Ubuntu 24.04.2 LTS. You will need at least three Arm server instances, each with at least 64 cores and 128 GB of RAM, to run this example. The instructions have been tested on AWS Graviton4 c8g.16xlarge instances.

## Overview
llama.cpp is a C++ library that enables efficient inference of Llama and similar large language models on CPUs, and is optimized for local and embedded environments. rgerganov's RPC code was merged into llama.cpp just over a year before this Learning Path was published, enabling distributed inference of large LLMs across multiple CPU-based machines, even when the model does not fit into the memory of a single machine. In this Learning Path, you will learn how to run a 405B-parameter model on Arm-based CPUs.

For the purposes of this demonstration, the following experimental setup is used; a rough sizing estimate follows the list:
- Total number of instances: 3
- Instance type: c8g.16xlarge
- Model: Llama-3.1-405B_Q4_0.gguf
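
To see why a single instance is not enough, a back-of-the-envelope estimate helps: Q4_0 stores roughly 4.5 bits per weight (4-bit quants plus per-block scales), so 405 billion weights require about 405 × 4.5 / 8 ≈ 228 GB for the weights alone. That exceeds the 128 GB of one c8g.16xlarge, but fits comfortably in the combined 384 GB of three such instances.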

One of the three nodes serves as the master node, which physically hosts the model file. The other two act as worker nodes. In llama.cpp, remote procedure calls (RPC) are used to offload both the model and the computation over TCP connections between nodes: the master node forwards inference requests to the worker nodes, where all of the actual computation is performed.
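
For orientation, here is a sketch of what the eventual master-node invocation looks like, based on the llama.cpp RPC documentation. The IP addresses and model path are placeholders; substitute your workers' private IPs and your own paths:

```bash
# A sketch of the master-node command: offload work to two workers listening
# on port 50052. The -ngl flag offloads layers to the RPC backends as if they
# were GPU devices. Replace the IPs and model path with your own values.
bin/llama-cli -m /path/to/Llama-3.1-405B_Q4_0.gguf \
  -p "Hello, my name is" -n 64 \
  --rpc 172.31.10.11:50052,172.31.10.12:50052 -ngl 99
```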

## Implementation

1. To get started, follow [this Learning Path](/learning-paths/servers-and-cloud-computing/llama-cpu) up to the step where you clone the llama.cpp repository. Because this setup involves multiple instances (or devices), you need to replicate the initial setup on each one. After executing the command below on all devices, continue with this Learning Path from Step 2.

```bash
git clone https://github.com/ggerganov/llama.cpp
```
2. Now build the llama.cpp library with the RPC feature enabled by configuring CMake with the `-DGGML_RPC=ON` flag:
```bash
cd llama.cpp
mkdir -p build-rpc
cd build-rpc
cmake .. -DGGML_RPC=ON -DLLAMA_BUILD_SERVER=ON
cmake --build . --config Release
```
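
Since a c8g.16xlarge has 64 vCPUs, you can optionally speed up compilation with CMake's standard parallel-jobs flag. This is the same build step as above, just run with all available cores (requires CMake 3.12 or later):

```bash
# Rebuild using every available core; $(nproc) expands to the vCPU count
cmake --build . --config Release -j $(nproc)
```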

`llama.cpp` is now built in the `build-rpc/bin` directory.
Check that `llama.cpp` has built correctly by running the help command from the `build-rpc` directory:
```bash
bin/llama-cli -h
```
If everything was built correctly, you should see a list of all the available flags that can be used with llama-cli.
3. Now, choose two of the three devices to act as backend workers. If the devices had varying compute capacities, the ones with the most compute should be selected, especially for a model as large as 405B. However, since all three devices in this case have identical compute capabilities, you can select any two to serve as backend workers.

Communication between the master node and the worker nodes occurs through a socket created on each worker. This socket listens for incoming data from the master, such as model parameters, tokens, hidden states, and other inference-related information.
{{% notice Note %}}The RPC feature in llama.cpp is not secure by default, so you should never expose it to the open internet. To mitigate this risk, ensure that the security groups for all your EC2 instances are properly configured—restricting access to only trusted IPs or internal VPC traffic. This helps prevent unauthorized access to the RPC endpoints.{{% /notice %}}
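For example, with the AWS CLI you could allow inbound traffic on the RPC port only from your VPC's private CIDR range. The security group ID and CIDR below are placeholders; substitute your own values:

```bash
# Permit TCP 50052 only from addresses inside the VPC
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 50052 \
  --cidr 172.31.0.0/16
```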
Use the following command to start the RPC server listening on each worker node:
```bash
bin/rpc-server -p 50052 -H 0.0.0.0 -t 64
```
Below are the available flags that can be used with `rpc-server`:

```output
-h, --help show this help message and exit
-t, --threads number of threads for the CPU backend (default: 6)
-d DEV, --device device to use
-H HOST, --host HOST host to bind to (default: 127.0.0.1)
-p PORT, --port PORT port to bind to (default: 50052)
-m MEM, --mem MEM backend memory size (in MB)
-c, --cache enable local file cache
```
Setting the host to 0.0.0.0 might seem counterintuitive given the earlier security warning, but it’s acceptable in this case because the security groups have been properly configured to block any unintended or unauthorized access.
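
As a final sanity check, you can verify from the master node that each worker's RPC port is reachable before starting inference. The IP addresses here are placeholders for your workers' private IPs:

```bash
# netcat in port-scan mode: -z checks the port without sending data, -v prints the result
nc -zv 172.31.10.11 50052
nc -zv 172.31.10.12 50052
```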