Releases: dstackai/dstack-enterprise

0.19.22-v1

07 Aug 19:36

Services

Probes

You can now configure HTTP probes to check the health of your service.

type: service
name: my-service
port: 80
image: my-app:latest
probes:
- type: http
  url: /health
  interval: 15s

Probe statuses are displayed in dstack ps --verbose and are considered during rolling deployments. This enables you to deploy new versions of your service with zero downtime.

> dstack ps --verbose

 NAME                            BACKEND          STATUS   PROBES  SUBMITTED
 my-service deployment=1                          running          11 mins ago
   replica=0 job=0 deployment=0  aws (us-west-2)  running  ✓       11 mins ago
   replica=1 job=0 deployment=1  aws (us-west-2)  running  ×       1 min ago

Learn more about probes in the docs.

Accelerators

NVIDIA GPU health checks

dstack now monitors NVIDIA GPU health using DCGM background health checks:

> dstack fleet

 FLEET     INSTANCE  BACKEND          RESOURCES  PRICE   STATUS          CREATED
 my-fleet  0         aws (us-east-1)  T4:16GB:1  $0.526  idle            11 mins ago
           1         aws (us-east-1)  T4:16GB:1  $0.526  idle (warning)  11 mins ago
           2         aws (us-east-1)  T4:16GB:1  $0.526  idle (failure)  11 mins ago

In this example, the first instance is healthy, the second has a non-fatal issue and can still be used, and the last has a fatal error that makes it inoperable.

Note

GPU health checks are supported on AWS (except with custom os_images), Azure (except for A10 GPUs), GCP, and OCI, as well as SSH fleet instances with DCGM installed and configured for background health checks. To use GPU health checks, re-create the fleets that were created before 0.19.22.

Tenstorrent Galaxy

dstack now supports Tenstorrent Galaxy cards via SSH fleets.
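
Since support comes via SSH fleets, a minimal sketch is a regular SSH fleet configuration pointing at hosts equipped with Tenstorrent hardware (the user, key path, and addresses below are illustrative):

type: fleet
name: tt-fleet
ssh_config:
  user: dstack
  identity_file: ~/.ssh/dstack
  hosts:
    - 192.168.0.10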

Backends

Hot Aisle

This release features an integration with Hot Aisle, a cloud provider that offers on-demand access to AMD MI300x GPUs at competitive prices.

> dstack offer -b hotaisle                   

 #  BACKEND                   RESOURCES                                     INSTANCE TYPE                     PRICE   
 1  hotaisle (us-michigan-1)  cpu=13 mem=224GB disk=12288GB MI300X:192GB:1  1x MI300X 13x Xeon Platinum 8470  $1.99
 2  hotaisle (us-michigan-1)  cpu=8 mem=224GB disk=12288GB MI300X:192GB:1   1x MI300X 8x Xeon Platinum 8470   $1.99

Refer to the docs for instructions on configuring the hotaisle backend in your dstack project.
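
As a sketch, configuring the backend in the server config follows the same API-key pattern as other backends (the team handle and key below are placeholders; see the docs for the exact fields):

projects:
  - name: main
    backends:
      - type: hotaisle
        team_handle: my-team
        creds:
          type: api_key
          api_key: ...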

CLI

Reading configurations from stdin

dstack apply can now read configurations from stdin using the -y -f - flags. This allows configuration files to be parameterized in arbitrary ways:

> cat .dstack/volume.dstack.yml
type: volume
name: my-vol

backend: aws
region: us-east-1
size: $VOL_SIZE

> export VOL_SIZE=50
> envsubst '$VOL_SIZE' < .dstack/volume.dstack.yml | dstack apply -y -f -

Debug logs

The dstack CLI now saves debug logs to the ~/.dstack/logs/cli/ directory. These logs can be useful for troubleshooting failed commands or submitting bug reports.
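
For example, to view the most recent CLI debug log (a plain shell sketch):

$ tail -n 50 "$(ls -t ~/.dstack/logs/cli/* | head -1)"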

UI

Secrets

The project settings page now has a section to manage secrets.

Logs improvements

The UI can now optionally display timestamps in front of each message in run logs. This can be a lifesaver when debugging runs that write log messages without built-in timestamps.

Additionally, if the dstack server is configured to use external log storage, such as AWS CloudWatch or GCP Logging, a button will appear in the UI to view the logs in that storage system.

What's changed

New Contributors

Full Changelog: dstackai/dstack@0.19.21...0.19.22

0.19.17-v1

02 Jul 12:21

Single Sign-On via Google

dstack Enterprise now supports Single Sign-On via Google. When Google integration is configured, the dstack login page will display the Sign in with Google button. See the Google integration guide for more information.

Secrets

dstack now supports secrets, which allow centralized management of sensitive values such as API keys and credentials. Secrets are project-scoped, managed by project admins, and can be referenced in run configurations to pass sensitive values to runs in a secure manner. Example:

$ dstack secret set my_secret some_secret_value
OK

type: task
nodes: 1
name: test-secrets
env:
  - MY_SECRET=${{ secrets.my_secret }}
commands:
  - echo $MY_SECRET

$ dstack apply -f .dstack/confs/task.dstack.yaml

Submit the run test-secrets? [y/n]: y
 NAME            BACKEND         RESOURCES              PRICE   STATUS   SUBMITTED
 test-secrets    aws             cpu=2 mem=8GB          $0.107  running  10:48
                 (eu-west-1)     disk=100GB

test-secrets provisioning completed (running)
some_secret_value
Exited (0)

For more details on secrets, check out the docs.

Files

By default, dstack automatically mounts the repo directory where you ran dstack init to any run configuration.

However, in some cases, you may not want to mount the entire directory (e.g., if it’s too large), or you might want to mount files outside of it. In such cases, you can use the files property.

type: task
name: trl-sft

files:
  - .:examples  # Maps the directory containing `.dstack.yml` to `/workflow/examples`
  - ~/.ssh/id_rsa  # Maps `~/.ssh/id_rsa` to `/root/.ssh/id_rsa`

python: 3.12

env:
  - HF_TOKEN
  - HF_HUB_ENABLE_HF_TRANSFER=1
  - MODEL=Qwen/Qwen2.5-0.5B
  - DATASET=stanfordnlp/imdb

commands:
  - uv pip install trl
  - | 
    trl sft \
      --model_name_or_path $MODEL --dataset_name $DATASET \
      --num_processes $DSTACK_GPUS_PER_NODE

resources:
  gpu: H100:1

Warning

If you have existing fleets, it's recommended to re-create them after upgrading to version 0.19.17. Otherwise, there is a risk that these instances won't be able to execute jobs if a run uses files.

Services

Rolling deployment

Rolling deployments introduced in 0.19.15 are now supported when deploying new commits or branches from a Git repo, or when changes are made to the repo contents or files listed in the files section.

Additionally, dstack apply now displays a full list of detected changes:

$ dstack apply -f my-service.dstack.yml

Active run my-service already exists. Detected changes that can be updated in-place:
- Repo state (branch, commit, or other)
- File archives
- Configuration properties:
  - env
  - files

Update the run? [y/n]:

Even when a rolling deployment isn't possible, the list of changes is still shown — making it easier to identify which changes are preventing the deployment from proceeding in-place.

What's changed

Full changelog: dstackai/dstack@0.19.16...0.19.17

0.19.21-v1

29 Jul 08:31

Runs

Scheduled runs

Runs get a new schedule property that allows starting runs periodically by specifying a cron expression:

type: task
nodes: 1
schedule:
  cron: "*/15 * * * *"
commands:
  - ...

dstack will start a scheduled run at cron times unless the run is already running. It can then be stopped manually to prevent it from starting again. Learn more about scheduled runs in the docs.
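
For example, a scheduled run can be stopped with the regular stop command (the run name my-task is illustrative):

$ dstack stop my-task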

CLI

Startup time

CLI startup time has been improved by up to 4x by optimizing Python imports.

Server

Optimized DB queries

We optimized the DB queries issued by the dstack server. This improves API response times and reduces load on the DB, which was previously noticeable on small Postgres instances.

What's Changed

Full Changelog: dstackai/dstack@0.19.20...0.19.21

0.19.20-v1

21 Jul 12:36

User interface

Logs

This is a hotfix release addressing three major issues related to the UI:

  • The UI didn’t display newer AWS CloudWatch logs if there was a long gap between old and new logs.
  • Logs received before the 0.19.19 release appeared base64-encoded in the UI. The UI now includes a button to decode them automatically.
  • Logs were loaded from start to end, which made viewing very slow for long runs.

Note

The dstack logs CLI command may still be affected by the issues above. However, it’s less critical and will be addressed separately.

What's changed

Full changelog: dstackai/dstack@0.19.19...0.19.20

0.19.19-v1

17 Jul 05:59

Fleets

SSH fleets in-place updates

You can now add and remove instances in SSH fleets without recreating the entire fleet.

type: fleet
name: ssh-fleet
ssh_config:
  user: dstack
  identity_file: ~/.ssh/dstack
  hosts:
    - 10.0.0.1
    - 10.0.0.2

$ dstack apply -f fleet.dstack.yml
...
Fleet ssh-fleet does not exist yet.
Create the fleet? [y/n]: y
...
 FLEET      INSTANCE  BACKEND       RESOURCES                PRICE  STATUS  CREATED
 ssh-fleet  0         ssh (remote)  cpu=4 mem=4GB disk=30GB  $0     idle    09:08
            1         ssh (remote)  cpu=2 mem=4GB disk=30GB  $0     idle    09:08

Then, if you update the hosts configuration property to

  hosts:
    #- 10.0.0.1  # removed
    - 10.0.0.2
    - 10.0.0.3  # added

and apply the same configuration again, the fleet will be updated in-place, meaning you don't need to stop runs on instances that are not affected by the changes (in this example, it's fine if instance 1 is currently busy; you can still apply the configuration).

$ dstack apply -f fleet.dstack.yml
...
Found fleet ssh-fleet. Configuration changes detected.
Update the fleet in-place? [y/n]: y
...
 FLEET      INSTANCE  BACKEND       RESOURCES                PRICE  STATUS  CREATED
 ssh-fleet  1         ssh (remote)  cpu=2 mem=4GB disk=30GB  $0     idle    09:08
            2         ssh (remote)  cpu=8 mem=4GB disk=30GB  $0     idle    09:12

Note

In-place updates only allow adding and/or removing instances; the root configuration and the configurations of the remaining hosts must not be changed, otherwise a full fleet recreation is triggered, as before. This restriction may be lifted in the future.

Volumes

Automatic cleanup of unused volumes

The volume configuration gets a new auto_cleanup_duration property:

type: volume
name: my-volume
backend: aws
region: eu-west-1
availability_zone: eu-west-1a
auto_cleanup_duration: 1h

The volume will be automatically deleted after it has been unused for the specified duration.

Logs

Browsable, queryable, and searchable logs

dstack now stores run logs in plaintext; previously they were base64-encoded. This allows you to use the configured log storage, be it AWS CloudWatch or GCP Logging, to browse and query dstack run logs.
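
For example, with AWS CloudWatch as the log storage, you could search run logs using the AWS CLI. A minimal sketch, assuming a hypothetical log group name (use the one configured for your dstack server):

# /dstack/runs is a placeholder log group name
aws logs filter-log-events \
  --log-group-name /dstack/runs \
  --filter-pattern "ERROR"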

Note

Logs generated before this release will be shown as base64-encoded in the UI and CLI after the update.

Server

Faster API response times

The dstack server API has been optimized to serialize JSON responses faster. The API endpoints are up to 2x faster than before.

Benchmarks

Benchmarking AMD GPUs: bare-metal, containers, partitions

Our new benchmark explores two important areas for optimizing AI workloads on AMD GPUs: First, do containers introduce a performance penalty for network-intensive tasks compared to a bare-metal setup? Second, how does partitioning a powerful GPU like the MI300X affect its real-world performance for different types of AI workloads?

What's Changed

Full Changelog: dstackai/dstack@0.19.18...0.19.19

0.19.18-v1

09 Jul 09:59

Server

Optimized resources processing

This release includes major improvements that allow the dstack server to process more resources quickly. It also allows scaling the processing rates of a single server replica to take advantage of large Postgres instances by setting the DSTACK_SERVER_BACKGROUND_PROCESSING_FACTOR environment variable.

The result is:

  • Faster processing rates: provisioning 100 runs on SQLite with default settings went from ~5m to ~2m.
  • Better scaling: provisioning an additional 100 runs is even quicker due to a warm cache. Previously, this was slower than the first 100 runs.
  • Ability to process more runs per server replica: provisioning 300 runs on Postgres with DSTACK_SERVER_BACKGROUND_PROCESSING_FACTOR=4 takes ~4m.

For more details on scaling background processing rates, see the Server deployment guide.
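
As a quick sketch, the factor is just an environment variable set when starting the server (the value 4 is illustrative):

$ DSTACK_SERVER_BACKGROUND_PROCESSING_FACTOR=4 dstack server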

Backends

Private GCP gateways

It's now possible to create GCP gateways without public IPs:

type: gateway
name: example
domain: gateway.example.com
backend: gcp
region: europe-west9
public_ip: false
certificate: null

Note that configuring HTTPS certificates for private GCP gateways is not yet supported, so you need to specify certificate: null.

What's Changed

Full Changelog: dstackai/dstack@0.19.17...0.19.18

0.19.16-v1

26 Jun 11:24

Docker

Docker in Docker

Using Docker in a run configuration is now much easier. Just set docker to true:

type: task
name: docker-nvidia-smi

docker: true

commands:
  - docker run --gpus all nvidia/cuda:12.3.0-base-ubuntu22.04 nvidia-smi

resources:
  gpu: 1

This works with all run configuration types and supports both AMD and NVIDIA GPUs. It’s especially useful if you want to use the docker CLI in your commands—for example, to build Docker images.

The docker property is supported on all backends except vastai, runpod, and kubernetes, and is fully supported on SSH fleets as well.

Backends

CloudRift

The CloudRift team has added support for their GPU cloud, which can now be used with dstack.

To configure it, use a CloudRift API key in the backend configuration:

projects:
  - name: main
    backends:
      - type: cloudrift
        creds:
          type: api_key
          api_key: rift_2prgY1d0laOrf2BblTwx2B2d1zcf1zIp4tZYpj5j88qmNgz38pxNlpX3vAo

CloudRift offers competitive on-demand GPU pricing, with more GPUs and regions coming soon.

> dstack apply -f examples/.dstack.yml -b cloudrift

 #  BACKEND                      RESOURCES                                    INSTANCE TYPE   PRICE
 1  cloudrift (us-east-nc-nr-1)  cpu=16 mem=100GB disk=1000GB RTX5090:32GB:1  rtx59-16c-nr.1  $0.65

If you encounter any issues with this backend, please report them.

Server

Public projects

You can now create public projects that any user on the server can join or leave without approval. Previously, all projects were private, and adding new members required manual action by an admin or manager—a step that’s redundant in high-trust environments.

Admins can change a project’s visibility at any time in the project settings.

Metrics

The server exports new Prometheus metrics:

  • dstack_submit_to_provision_duration_seconds: Time from run submission to first job provisioning
  • dstack_pending_runs_total: Total number of pending runs
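
As a minimal sketch of scraping these metrics with Prometheus (the target host is a placeholder, and the /metrics path and default port 3000 are assumptions; see the metrics docs for the exact endpoint):

scrape_configs:
  - job_name: dstack-server
    metrics_path: /metrics
    static_configs:
      - targets: ["dstack-server.example.com:3000"]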

What's changed

New contributors

Full changelog: dstackai/dstack@0.19.15...0.19.16

0.19.15-v1

19 Jun 20:50

Services

Rolling deployments

This update introduces rolling deployments, which help avoid downtime when deploying new versions of your services.

When you apply an updated service configuration, dstack will gradually replace old service replicas with new ones. You can track the progress in the dstack apply output — the deployment number will be lower for old replicas and higher for new ones.

> dstack apply -f my-service.dstack.yml

Active run my-service already exists. Detected configuration changes that can be updated in-place: ['image', 'env', 'commands']
Update the run? [y/n]: y

⠋ Launching my-service...
 NAME                            BACKEND          RESOURCES                        PRICE    STATUS       SUBMITTED
 my-service deployment=1                                                                    running      11 mins ago
   replica=0 job=0 deployment=0  aws (us-west-2)  cpu=2 mem=1GB disk=100GB (spot)  $0.0026  terminating  11 mins ago
   replica=1 job=0 deployment=1  aws (us-west-2)  cpu=2 mem=1GB disk=100GB (spot)  $0.0026  running      1 min ago

Currently, the following service configuration properties can be updated using rolling deployments: resources, volumes, image, user, privileged, entrypoint, python, nvcc, single_branch, env, shell, and commands.

Future releases will allow updating more properties and deploying new git repo commits.

Clusters

Updated default Docker images

If you don't specify a custom image in the run configuration, dstack uses its default images. These images have been improved for cluster environments and now include mpirun and NCCL tests. Additionally, if you are running on AWS EFA-capable instances, dstack will now automatically select an image with the appropriate EFA drivers. See our new AWS EFA guide for more details.

Server

Health metrics

The dstack server now exports operational Prometheus metrics that allow you to monitor its health. If you are running your own production-grade dstack server installation, refer to the metrics docs for details.

What's changed

New Contributors

Full Changelog: dstackai/dstack@0.19.13...0.19.15

0.19.13-v1

11 Jun 10:28

Clusters

Built-in InfiniBand support in dstack Docker images

The dstack default Docker images now come with built-in InfiniBand support, which includes the necessary libibverbs library and InfiniBand utilities from rdma-core. This means you can run torch distributed and other workloads utilizing NCCL, and they'll take full advantage of InfiniBand without custom Docker images.

You can try InfiniBand clusters with dstack on Nebius.

Built-in EFA support in dstack VM images

dstack switches from a custom image to DLAMI as the default AWS GPU VM image. DLAMI supports EFA out of the box, so you no longer need to use a custom VM image to take advantage of EFA.

Server

GCS support for code uploads

It's now possible to configure the dstack server to use GCP Cloud Storage for code uploads. Previously, only DB and S3 storage were supported. Learn more in the Server deployment guide.

What's Changed

Full Changelog: dstackai/dstack@0.19.12...0.19.13

0.19.12-v1

04 Jun 11:18

Clusters

Simplified use of MPI

startup_order and stop_criteria

New run configuration properties are introduced:

  • startup_order: any/master-first/workers-first specifies the order in which the master and worker jobs are started.
  • stop_criteria: all-done/master-done specifies when a multi-node run should be considered finished.

These properties simplify running certain multi-node workloads. For example, MPI requires that workers are up and running when the master runs mpirun, so you'd use startup_order: workers-first. An MPI workload can be considered done when the master is done, so you'd use stop_criteria: master-done, and dstack won't wait for workers to exit.

DSTACK_MPI_HOSTFILE

dstack now automatically creates an MPI hostfile and exposes the DSTACK_MPI_HOSTFILE environment variable with the hostfile path. It can be used directly as mpirun --hostfile $DSTACK_MPI_HOSTFILE.

CLI

We've also updated how the CLI displays run and job statuses. Previously, the CLI displayed an internal status code that was hard to interpret. Now, the STATUS column in dstack ps and dstack apply displays a status that makes it easy to understand why a run or job was terminated.

> dstack ps -n 10
 NAME               BACKEND             RESOURCES                            PRICE    STATUS        SUBMITTED
 oom-task                                                                             no offers     yesterday
 oom-task           nebius (eu-north1)  cpu=2 mem=8GB disk=100GB             $0.0496  exited (127)  yesterday
 oom-task           nebius (eu-north1)  cpu=2 mem=8GB disk=100GB             $0.0496  exited (127)  yesterday
 heavy-wolverine-1                                                                    done          yesterday
   replica=0 job=0  aws (us-east-1)     cpu=4 mem=16GB disk=100GB T4:16GB:1  $0.526   exited (0)    yesterday
   replica=0 job=1  aws (us-east-1)     cpu=4 mem=16GB disk=100GB T4:16GB:1  $0.526   exited (0)    yesterday
 cursor             nebius (eu-north1)  cpu=2 mem=8GB disk=100GB             $0.0496  stopped       yesterday
 cursor             nebius (eu-north1)  cpu=2 mem=8GB disk=100GB             $0.0496  error         yesterday
 cursor             nebius (eu-north1)  cpu=2 mem=8GB disk=100GB             $0.0496  interrupted   yesterday
 cursor             nebius (eu-north1)  cpu=2 mem=8GB disk=100GB             $0.0496  aborted       yesterday

Examples

Simplified NCCL tests

With this release's improvements, it has become much easier to run MPI workloads with dstack. This includes NCCL tests, which can now be run using the following configuration:

type: task
name: nccl-tests

nodes: 2
startup_order: workers-first
stop_criteria: master-done

image: dstackai/efa
env:
  - NCCL_DEBUG=INFO
commands:
  - cd /root/nccl-tests/build
  - |
    if [ ${DSTACK_NODE_RANK} -eq 0 ]; then
      mpirun \
        --allow-run-as-root --hostfile $DSTACK_MPI_HOSTFILE \
        -n ${DSTACK_GPUS_NUM} \
        -N ${DSTACK_GPUS_PER_NODE} \
        --mca btl_tcp_if_exclude lo,docker0 \
        --bind-to none \
        ./all_reduce_perf -b 8 -e 8G -f 2 -g 1
    else
      sleep infinity
    fi

resources:
  gpu: nvidia:4:16GB
  shm_size: 16GB

See the updated NCCL tests example for more details.

Distributed training

TRL

The new TRL example walks you through how to run distributed fine-tuning using TRL, Accelerate, and DeepSpeed.

Axolotl

The new Axolotl example walks you through how to run distributed fine-tuning using Axolotl with dstack.

What's changed

Full changelog: dstackai/dstack@0.19.11...0.19.12