docsrc/getting_started/tensorrt_rtx.rst: 150 additions & 8 deletions

For detailed information about TensorRT-RTX, refer to:

* `TensorRT-RTX Documentation <https://docs.nvidia.com/deeplearning/tensorrt-rtx/latest/index.html>`_

Currently, Torch-TensorRT supports TensorRT-RTX for experimental purposes only.
By default, Torch-TensorRT builds and runs against standard TensorRT.

To use TensorRT-RTX:

Prerequisites
-------------

Clone the Repository
~~~~~~~~~~~~~~~~~~~~~

First, clone the Torch-TensorRT repository:

.. code-block:: sh

git clone https://github.com/pytorch/TensorRT.git
cd TensorRT

Install System Dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**On Linux:**

Install Python development headers (required for building Python extensions):

.. code-block:: sh

# For Python 3.12 (adjust version number based on your Python version)
sudo apt install python3.12-dev
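
# If unsure which Python you have, check it first and match the -dev package:
python3 --version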

Install CUDA Toolkit
~~~~~~~~~~~~~~~~~~~~

Download and install the CUDA Toolkit from the `NVIDIA Developer website <https://developer.nvidia.com/cuda-downloads>`_.

**Important:** Check the required CUDA version in the `MODULE.bazel <https://github.com/pytorch/TensorRT/blob/main/MODULE.bazel>`_ file. You must install the exact CUDA toolkit version specified there (for example, at the time of writing, CUDA 13.0 is required).
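
A quick way to see what MODULE.bazel currently pins (assuming the CUDA version appears as a literal string in that file):

.. code-block:: sh

grep -in "cuda" MODULE.bazel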

**Collaborator:** Supported CUDA versions are cu129 and cu130. If you have cu129 installed, you can change https://github.com/pytorch/TensorRT/blob/main/MODULE.bazel#L39 to your version.

**Contributor Author:** Since multiple versions are supported, should we just change MODULE.bazel to look at the /usr/local/cuda/ path instead? Then we don't have to mention specific versions in this document. Or would you prefer that I add what you commented to the document?

After installation, set the ``CUDA_HOME`` environment variable:

.. code-block:: sh

export CUDA_HOME=/usr/local/cuda
# Add this to your ~/.bashrc or ~/.zshrc to make it persistent
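
# Sanity check (assumes nvcc is installed under $CUDA_HOME/bin):
"$CUDA_HOME/bin/nvcc" --version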

Install Python Dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**It is strongly recommended to use a virtual environment** to avoid conflicts with system packages:

.. code-block:: sh

# Create a virtual environment
python -m venv .venv

# Activate the virtual environment
source .venv/bin/activate # On Linux/Mac
# OR on Windows:
# .venv\Scripts\activate
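
# Confirm the venv is active: the interpreter should resolve inside .venv
which python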

Before building, install the required Python packages:

.. code-block:: sh

# Install setuptools (provides distutils)
pip install setuptools

# Install PyTorch nightly build (check CUDA version in MODULE.bazel)
# Replace cuXXX with your CUDA version (e.g., cu130 for CUDA 13.0)
pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cuXXX

**Collaborator:** We can add an explanation: if the user is building torch_tensorrt from TOT (torch_tensorrt TOT is aligned with torch TOT), it is using the torch nightly.

**Contributor Author:** Added the explanation, please take a look.

.. code-block:: sh

# If you encounter version conflicts during the build, you may need to specify
# the exact PyTorch version constraint. Check pyproject.toml for requirements.
# For example, if pyproject.toml specifies torch>=2.10.0.dev,<2.11.0:
# pip install --pre "torch>=2.10.0.dev,<2.11.0" torchvision --index-url https://download.pytorch.org/whl/nightly/cu130

# Install additional build dependencies
pip install pyyaml numpy
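
# Optional sanity check: confirm torch imports and reports the expected CUDA build
python -c "import torch; print(torch.__version__, torch.version.cuda)"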

.. note::

The PyTorch version requirement is defined in `pyproject.toml <https://github.com/pytorch/TensorRT/blob/main/pyproject.toml>`_ (build requirements) and `setup.py <https://github.com/pytorch/TensorRT/blob/main/setup.py>`_ (runtime requirements). If you encounter version-related errors during installation, refer to these files for the exact version constraints.

.. note::

Remember to activate the virtual environment (``source .venv/bin/activate``) whenever you work with this project or run the build commands.

Install Bazel
~~~~~~~~~~~~~

Bazel is required to build the wheel with TensorRT-RTX.

Install TensorRT-RTX Tarball
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The TensorRT-RTX tarball can be downloaded from https://developer.nvidia.com/tensorrt-rtx.
Currently, Torch-TensorRT uses TensorRT-RTX version **1.2.0.54**.

Once downloaded:
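
A typical Linux flow looks roughly like this (the archive name and install location are illustrative; substitute the file you actually downloaded):

.. code-block:: sh

# Hypothetical archive name for version 1.2.0.54; use your download
tar -xf TensorRT-RTX-1.2.0.54.Linux.x86_64-gnu.cuda-13.0.tar.gz
# Make the runtime libraries discoverable at load time
export LD_LIBRARY_PATH=$(pwd)/TensorRT-RTX-1.2.0.54/lib:$LD_LIBRARY_PATH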
Make sure you add the lib path to the Windows system ``PATH`` variable.

Install TensorRT-RTX Wheel
~~~~~~~~~~~~~~~~~~~~~~~~~~

Currently, the ``tensorrt_rtx`` wheel is not published on PyPI.
You must install it manually from the downloaded tarball.

.. code-block:: sh
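
# Illustrative sketch: the exact wheel path and filename inside the tarball
# depend on your Python version and platform; adjust accordingly.
python -m pip install TensorRT-RTX-1.2.0.54/python/tensorrt_rtx-1.2.0.54-cp312-none-linux_x86_64.whl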

Build Torch-TensorRT with TensorRT-RTX
--------------------------------------

Build Locally with TensorRT-RTX
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Before building, ensure you have completed all the prerequisite steps above, including:

- Cloning the repository
- Installing Python dependencies (setuptools, torch, pyyaml, numpy)
- Setting the ``CUDA_HOME`` environment variable
- Installing the correct CUDA toolkit version
- Installing Python development headers
- Installing Bazel

Then build the wheel:

.. code-block:: sh

# If you have previously built with standard TensorRT, make sure to clean the build environment,
# otherwise it will use the existing .so built with standard TensorRT, which is not compatible with TensorRT-RTX.
python setup.py clean
bazel clean --expunge
# Remove everything under the build directory
rm -rf build/*

# Build wheel with TensorRT-RTX
python setup.py bdist_wheel --use-rtx

# Install the wheel (note: the wheel filename uses underscores, not hyphens)
python -m pip install dist/torch_tensorrt-*.whl
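
# Quick import check to verify the freshly installed wheel:
python -c "import torch_tensorrt; print(torch_tensorrt.__version__)"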

Quick Start
-----------

Troubleshooting
---------------

Common Issues
~~~~~~~~~~~~~

**Missing distutils module**

If you encounter ``ModuleNotFoundError: No module named 'distutils'``, install setuptools:

.. code-block:: sh

pip install setuptools

**Missing CUDA_HOME environment variable**

If you encounter ``OSError: CUDA_HOME environment variable is not set``, set the CUDA_HOME path:

.. code-block:: sh

export CUDA_HOME=/usr/local/cuda

**CUDA version mismatch**

If you encounter errors about CUDA paths not existing (e.g., ``/usr/local/cuda-X.Y/ does not exist``), ensure you have the correct CUDA version installed. Check the required version in `MODULE.bazel <https://github.com/pytorch/TensorRT/blob/main/MODULE.bazel>`_. You may need to:

1. Update your NVIDIA drivers
2. Download and install the specific CUDA toolkit version required by MODULE.bazel
3. Clean and rebuild after installing the correct version
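
To see which toolkit versions are actually installed locally:

.. code-block:: sh

ls -d /usr/local/cuda*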

**PyTorch version mismatch**

If you encounter an error like ``ERROR: No matching distribution found for torch<X.Y.Z,>=X.Y.Z.dev`` (for example, ``torch<2.11.0,>=2.10.0.dev``), you need to install a compatible PyTorch nightly version.

First, check the exact version constraint in `pyproject.toml <https://github.com/pytorch/TensorRT/blob/main/pyproject.toml>`_, then install with that constraint:

.. code-block:: sh

# Example: if pyproject.toml requires torch>=2.10.0.dev,<2.11.0
# and MODULE.bazel specifies CUDA 13.0 (cu130):
pip install --pre "torch>=2.10.0.dev,<2.11.0" torchvision --index-url https://download.pytorch.org/whl/nightly/cu130

Replace the version constraint and CUDA version (cuXXX) according to your project's requirements.
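
One way to look up the current constraint from a repo checkout:

.. code-block:: sh

grep -n "torch" pyproject.toml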

**Missing Python development headers**

If you encounter ``fatal error: Python.h: No such file or directory``, install the Python development package:

.. code-block:: sh

# For Python 3.12 (adjust version based on your Python)
sudo apt install python3.12-dev

Verifying TensorRT-RTX Linkage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you encounter load or link errors, check whether ``tensorrt_rtx`` is linked correctly.
If not, clean up the environment and rebuild.

**On Linux:**

.. code-block:: sh

# Ensure only tensorrt_rtx is installed (no standard tensorrt wheels)
python -m pip list | grep tensorrt

# Check if libtorchtrt.so links to the correct tensorrt_rtx shared object
trt_install_path=$(python -m pip show torch-tensorrt | grep "Location" | awk '{print $2}')/torch_tensorrt
Expand Down