😄 Don’t worry — both [Quick Installation](#quick-installation) and [Dataset Preparation](#dataset-preparation) are beginner-friendly.
## Quick Installation
Our toolchain provides two Python environment solutions to accommodate different usage scenarios with the InternNav-N1 series models:

- For quick trials and evaluations of the InternNav-N1 model, we recommend the [Habitat environment](#habitat-environment). This option allows you to quickly test and evaluate the InternVLA-N1 models with minimal configuration.
- If you require high-fidelity rendering, training capabilities, and physical property evaluations within the environment, we suggest using the [Isaac Sim](#isaac-sim-environment) environment. This solution provides enhanced graphical rendering and more accurate physics simulations for comprehensive testing.

Choose the environment that best fits your needs to get the most out of the InternNav-N1 model. Note that both environments support training the system1 model, NavDP.

### Isaac Sim Environment
#### Prerequisites

- Ubuntu 20.04, 22.04
- Conda
- Python 3.10.16 (3.10.* should be OK)
- NVIDIA Omniverse Isaac Sim 4.5.0
- NVIDIA GPU (RTX 2070 or higher)
- NVIDIA GPU Driver (recommended version 535.216.01+)
- PyTorch 2.5.1, 2.6.0 (recommended)
- CUDA 11.8, 12.4 (recommended)
- Docker (optional)
- NVIDIA Container Toolkit (optional)

Before proceeding with the installation, ensure that you have [Isaac Sim 4.5.0](https://docs.isaacsim.omniverse.nvidia.com/4.5.0/installation/install_workstation.html) and [Conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html) installed.
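
You can also quickly confirm that your GPU and driver meet the requirements above (assuming the NVIDIA driver utilities are already installed):

```bash
# Prints the GPU model and driver version; compare against the list above.
nvidia-smi --query-gpu=name,driver_version --format=csv
```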
To help you get started quickly, we've prepared a Docker image pre-configured with Isaac Sim 4.5 and InternUtopia. You can pull the image and run evaluations in the container using the following command:

```bash
docker run -it --name internutopia-container registry.cn-hangzhou.aliyuncs.com/internutopia/internutopia:2.2.0
```
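
Isaac Sim needs GPU access inside the container, so depending on your Docker setup you may have to expose the GPUs explicitly. A sketch of such a variant, assuming the optional NVIDIA Container Toolkit from the prerequisites is installed:

```bash
# Hypothetical variant of the command above with explicit GPU access;
# requires the NVIDIA Container Toolkit.
docker run -it --gpus all --name internutopia-container \
    registry.cn-hangzhou.aliyuncs.com/internutopia/internutopia:2.2.0
```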
#### Conda Installation
```bash
$ conda create -n <env> python=3.10 libxcb=1.14

# Install InternUtopia through pip (2.1.1 and 2.2.0 recommended).
$ conda activate <env>
$ pip install internutopia

# Configure the conda environment.
$ python -m internutopia.setup_conda_pypi
$ conda deactivate && conda activate <env>
```
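
As a quick sanity check (not an official verification step), you can confirm that the package resolves in the new environment:

```bash
# Should print the confirmation message if InternUtopia installed correctly.
python -c "import internutopia; print('InternUtopia imported successfully')"
```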
For more detailed InternUtopia installation instructions, see the [docs](https://internrobotics.github.io/user_guide/internutopia/get_started/installation.html) in the [InternUtopia](https://github.com/InternRobotics/InternUtopia?tab=readme-ov-file) repository.

If you need to train or evaluate models on [Habitat](#habitat-environment) without physics simulation, we recommend the following, simpler environment setup.
### Habitat Environment
#### Prerequisites
- Python 3.9
- GPU: NVIDIA A100 or higher (optional for VLA training)
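
The repository's full Habitat install steps are not reproduced here; as a rough sketch, a typical Habitat environment can be set up as follows (the channels and package names follow the standard habitat-sim distribution and are assumptions, not InternNav's pinned versions):

```bash
# Sketch only: standard habitat-sim setup; InternNav may pin other versions.
conda create -n internnav-habitat python=3.9
conda activate internnav-habitat
conda install habitat-sim headless -c conda-forge -c aihabitat
# habitat-lab is typically installed from source (assumed layout):
git clone https://github.com/facebookresearch/habitat-lab.git
pip install -e habitat-lab/habitat-lab
```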
## Verification
Please download our latest pretrained [checkpoint](https://huggingface.co/InternRobotics/InternVLA-N1) of InternVLA-N1 and run the script below to perform inference with visualized results. Move the checkpoint to the `checkpoints` directory. Download the VLN-CE dataset from [huggingface](). The final folder structure should look like this:

```bash
InternNav/
|-- data/
|   |-- datasets/
|       |-- vln/
|           |-- vln_datasets/
|           |-- scene_datasets/
|               |-- hm3d/
|               |-- mp3d/

|-- src/
|   |-- ...

|-- checkpoints/
|   |-- InternVLA-N1/
|   |   |-- model-00001-of-00004.safetensors
|   |   |-- config.json
|   |   |-- ...
|   |-- InternVLA-N1-S2/
|   |   |-- model-00001-of-00004.safetensors
|   |   |-- config.json
|   |   |-- ...
```
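
To populate `checkpoints/`, one option is the Hugging Face CLI. A sketch for the main checkpoint (the InternVLA-N1-S2 repository id is not given above, so only InternVLA-N1 is shown):

```bash
# Download the InternVLA-N1 weights into the expected directory.
pip install -U "huggingface_hub[cli]"
huggingface-cli download InternRobotics/InternVLA-N1 --local-dir checkpoints/InternVLA-N1
```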
Replace the `model_path` variable in `vln_ray_backend.py` with the path to the InternVLA-N1 checkpoint (for example, `checkpoints/InternVLA-N1` from the layout above).

Find the IP address of the node allocated by Slurm, then set `BACKEND_URL` in the Gradio client (`navigation_ui.py`) to the server's IP address and start the Gradio client.
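
For reference, a minimal sketch of the client-side change (the exact variable layout in `navigation_ui.py` and the launch command are assumptions):

```bash
# In navigation_ui.py, point the client at the backend, e.g.:
#   BACKEND_URL = "http://<server-ip>:<port>"   # hypothetical placeholder
# Then launch the Gradio client (assumed invocation):
python navigation_ui.py
```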
Note that it is best to run the Gradio client on a machine with a graphical user interface (GUI), while making sure there is network connectivity between the client and the server. Then open a browser and enter the Gradio address (such as http://0.0.0.0:5700); you will see the interface shown below.

Click the 'Start Navigation Simulation' button to send a VLN request to the backend. The backend submits a task to the Ray server and simulates the VLN task with the InternVLA-N1 models. After about 3 minutes, the task finishes and returns a result video, which is displayed in the Gradio interface as shown below.