---
title: Install Model Gym and Explore Neural Graphics Examples
weight: 2

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## What is Neural Graphics?

Neural graphics sits at the intersection of computer graphics and machine learning. Rather than relying purely on traditional GPU pipelines, neural graphics integrates learned models directly into the rendering stack. These techniques are particularly powerful on mobile devices, where battery life and performance constraints limit traditional compute-heavy rendering approaches. The goal is to deliver high visual fidelity without increasing GPU cost. This is achieved by training and deploying compact neural networks optimized for the device's hardware.

## How does Arm support neural graphics?

Arm enables neural graphics through the **Neural Graphics Development Kit**: a set of open-source tools that let developers train, evaluate, and deploy ML models for graphics workloads.

At its core are the ML Extensions for Vulkan, which bring native ML inference into the GPU pipeline using structured compute graphs. These extensions (`VK_ARM_tensors` and `VK_ARM_data_graph`) allow real-time upscaling and similar effects to run efficiently alongside rendering tasks.

Neural graphics models can be developed using well-known ML frameworks like **PyTorch** and exported for deployment using Arm's hardware-aware pipeline. The workflow converts `.pt` model weights to `.vgf` via the TOSA intermediate representation, making it possible to tailor model development to your game's use case. This Learning Path focuses on **Neural Super Sampling (NSS)** as the use case for training, evaluating, and deploying neural models with a toolkit called the **Neural Graphics Model Gym**.

Starting in 2026, Arm GPUs will feature dedicated neural accelerators, optimized for low-latency inference in graphics workloads. To help developers get started early, Arm provides the ML Emulation Layers for Vulkan that simulate future hardware behavior, so you can build and test models now.

To learn more about the Development Kit, check out the [introductory Learning Path](/learning-paths/mobile-graphics-and-gaming/vulkan-ml-sample).

## What is the Neural Graphics Model Gym?

The Neural Graphics Model Gym is an open-source toolkit for fine-tuning and exporting neural graphics models. It is designed to streamline the entire model lifecycle for graphics-focused use cases, like NSS.

Model Gym gives you:

- A training and evaluation API built on PyTorch
- Model export to `.vgf` for real-time use in game development
- Support for quantization-aware training (QAT) and post-training quantization (PTQ) using ExecuTorch
- Optional Docker setup for reproducibility

The toolkit supports workflows through both Python notebooks (for rapid experimentation) and a command-line interface (CLI). This Learning Path walks you through the example notebooks and prepares you to start using the CLI for your own model development.
---
title: Set up your environment
weight: 3

### FIXED, DO NOT MODIFY
layout: learningpathall
---

In this section, you will install a few dependencies into your Ubuntu environment. You'll need a working Python 3.10+ environment with some ML and system dependencies. Verify that Python is installed by checking that a version is printed:

```bash
python3 --version
```

Next, install a few additional packages:

```bash
sudo apt update
sudo apt install python3-venv python-is-python3 gcc make python3-dev -y
```

## Set up the examples repository

The example notebooks are open-sourced in a GitHub repository. Start by cloning it:

```bash
git clone https://github.com/arm/neural-graphics-model-gym-examples.git
cd neural-graphics-model-gym-examples
```

From inside the `neural-graphics-model-gym-examples/` folder, run the setup script:

```bash
./setup.sh
```

This will:
- create a Python virtual environment called `nb-env`
- install the `ng-model-gym` package and required dependencies
- download the datasets and weights needed to run the notebooks
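If you open a new terminal later, remember that the packages live inside the `nb-env` virtual environment that the script creates. Reactivate it before running any notebook or CLI commands (the path assumes you are in the repository root):

```bash
# Activate the virtual environment created by setup.sh
source nb-env/bin/activate

# Confirm the environment's interpreter is now first on PATH
which python
```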

Run the following in a Python shell to confirm that the setup script succeeded:

```python
import torch
import ng_model_gym

print("Torch version:", torch.__version__)
print("Model Gym version:", ng_model_gym.__version__)
```
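Training is fastest on a CUDA-capable GPU, as listed in the prerequisites. As an optional extra check, you can confirm that PyTorch detects your GPU:

```python
import torch

# Report whether PyTorch can see a CUDA device; training falls back to
# the CPU (much slower) when no GPU is visible.
if torch.cuda.is_available():
    print("CUDA device detected:", torch.cuda.get_device_name(0))
else:
    print("No CUDA device detected; training will run on the CPU")
```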

You’re now ready to start walking through the training and evaluation steps.
---
title: Launch the training notebook
weight: 4

### FIXED, DO NOT MODIFY
layout: learningpathall
---

In this section, you'll get hands-on experience using the Model Gym to train your own models or fine-tune the NSS use case.

## About NSS

Arm Neural Super Sampling (NSS) is an upscaling technique designed to solve a growing challenge in real-time graphics: delivering high visual quality without compromising performance or battery life. Instead of rendering every pixel at full resolution, NSS uses a neural network to intelligently upscale frames, freeing up GPU resources and enabling smoother, more immersive experiences on mobile devices.

The NSS model is available in two formats:

| Model format | File extension | Used for |
|--------------|----------------|--------------------------------------------------------------------------|
| PyTorch | `.pt` | training, fine-tuning, or evaluation in notebooks or scripts using the Model Gym |
| VGF | `.vgf` | deployment using the ML Extensions for Vulkan on Arm-based hardware or the emulation layers |

Both formats are available in the [NSS repository on Hugging Face](https://huggingface.co/Arm/neural-super-sampling). There you can also explore config files, model metadata, usage details, and detailed documentation for the use case.

Aside from the model on Hugging Face, the Neural Graphics Development Kit features [an NSS plugin for game engines such as Unreal](/learning-paths/mobile-graphics-and-gaming/nss-unreal).

## Run the training notebook

With your environment set up, you're ready to launch the first step in the workflow: training your neural graphics model using the `model_training_example.ipynb` notebook.

{{% notice Before you begin %}}
In this part of the Learning Path, you will run through two Jupyter Notebooks. Return to this tutorial when you're done to explore further resources and next steps.
{{% /notice %}}

The notebook walks you through the following steps:

- Loading a model configuration
- Launching a full training pipeline
- Visualizing metrics with TensorBoard
- Saving intermediate checkpoints
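The notebook handles TensorBoard for you, but you can also point a standalone instance at the training logs. The log directory below is illustrative; use whichever path your notebook's configuration writes to:

```bash
# Launch TensorBoard against the training output directory (path is an example)
tensorboard --logdir ./output --port 6006
```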

### Start Jupyter Lab

Launch Jupyter Lab with the following command:

```bash
jupyter lab
```

This will prompt you to open your browser to `http://localhost:8888` and enter the token printed in the terminal output. Navigate to:

```output
neural-graphics-model-gym-examples/model_training_example.ipynb
```

Step through the notebook for training.

Once your model is trained, the next step is evaluation. You'll measure accuracy, compare checkpoints, and prepare the model for export. Open the evaluation notebook:

```output
neural-graphics-model-gym-examples/model_evaluation_example.ipynb
```

At the end, you should see a visual comparison between the NSS-upscaled output and the ground-truth image.

Proceed to the final section to view the model structure and explore further resources.


---
title: Visualize your model with Model Explorer
weight: 5

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## What is Model Explorer?


Model Explorer is a visualization tool for inspecting neural network structures and execution graphs. Arm provides a VGF adapter for Model Explorer, allowing you to visualize `.vgf` models created from your training and export pipeline.

This lets you inspect the model architecture, tensor shapes, and graph connectivity before deployment, which is a powerful way to debug and understand your exported neural graphics models.

## Setting up the VGF adapter

The VGF adapter extends Model Explorer to support `.vgf` files exported from the Model Gym toolchain.

### Install the VGF adapter with pip

```bash
pip install vgf-adapter-model-explorer
```

Or install the prebuilt wheel from GitHub:

```bash
PYTHON_VERSION_TAG=311
gh release download \
--repo arm/vgf-adapter-model-explorer \
--pattern "*py${PYTHON_VERSION_TAG}*.whl"
pip install *py${PYTHON_VERSION_TAG}*.whl
```

### Install Model Explorer

The next step is to make sure Model Explorer itself is installed. Use pip to set it up:

```bash
pip install torch ai-edge-model-explorer
```

### Launch the viewer

Once installed, launch the explorer with the VGF adapter:

```bash
model-explorer --extensions=vgf_adapter_model_explorer
```

Use the file browser to open the `.vgf` model exported earlier in your training workflow.

## Wrapping up

Through this Learning Path, you’ve learned what neural graphics is and why it matters for game performance. You’ve stepped through the process of training and evaluating an NSS model using PyTorch and the Model Gym, and seen how to export that model into VGF (.vgf) for real-time deployment. You’ve also explored how to visualize and inspect the model’s structure using Model Explorer.

As a next step, head over to the [Model Training Gym repository](https://github.com/arm/neural-graphics-model-gym/tree/main) and explore the documentation on integrating the toolkit into your own game development workflow. You'll find resources on fine-tuning, deeper details about the training and export process, and everything you need to adapt it to your own content and workflows.
---
title: Fine-Tuning Neural Graphics Models with Model Gym

minutes_to_complete: 45

who_is_this_for: This is an advanced topic for developers exploring neural graphics and interested in training and deploying upscaling models like Neural Super Sampling (NSS) using PyTorch and Arm’s hardware-aware backend.

learning_objectives:
- Understand the principles of neural graphics and how it’s applied to game performance
- Learn how to fine-tune and evaluate a neural network for Neural Super Sampling (NSS)
- Use the Model Gym Python API and CLI to configure and train neural graphics models
- Visualize and inspect models using the Model Explorer tool

prerequisites:
- Basic understanding of PyTorch and machine learning concepts
- A development machine running Ubuntu 22.04, with a CUDA-capable NVIDIA® GPU
- CUDA Toolkit version 11.8 or later

author: Annie Tallund

### Tags
skilllevels: Advanced
subjects: ML
armips:
- Mali
tools_software_languages:
- PyTorch
- Jupyter Notebook
- Vulkan
operatingsystems:
- Linux
further_reading:
- resource:
title: Model Gym GitHub Repository
link: https://github.com/arm/neural-graphics-model-gym
type: code
- resource:
title: How Arm Neural Super Sampling works
link: https://community.arm.com/arm-community-blogs/b/mobile-graphics-and-gaming-blog/posts/how-arm-neural-super-sampling-works
type: blog
- resource:
title: Neural Graphics Development Kit
link: https://developer.arm.com/mobile-graphics-and-gaming/neural-graphics
type: website
- resource:
title: NSS Use Case Guide
link: https://developer.arm.com/documentation/111009/latest/
type: documentation


### FIXED, DO NOT MODIFY
weight: 1
layout: "learningpathall"
learning_path_main_page: "yes"
---
---
# ================================================================================
# FIXED, DO NOT MODIFY THIS FILE
# ================================================================================
weight: 21 # Set to always be larger than the content in this path to be at the end of the navigation.
title: "Next Steps" # Always the same, html page title.
layout: "learningpathall" # All files under learning paths have this same wrapper for Hugo processing.
---