Support autoscaling and test in CI #151

Open — wants to merge 141 commits into base: main

Commits (141)
a123492
WIP autoscale PoC
sjpb Mar 30, 2021
c9c9bfc
add IMB package to allow testing
sjpb Mar 31, 2021
ab14526
move cloud_nodes config to right environment
sjpb Mar 31, 2021
67b16a4
fix /etc/openstack permissions for resume
sjpb Mar 31, 2021
a618aca
fix clouds.yaml
sjpb Mar 31, 2021
341a5c9
get resume/suspend scripts working manually
sjpb Mar 31, 2021
99fe7ad
note issue with adhoc slurm restart for combined headnode
sjpb Apr 1, 2021
a956a54
fix openhpc variables for autoscale
sjpb Apr 1, 2021
4ea81c5
set new image ID
sjpb Apr 1, 2021
354c67a
set autoscale branch for openhpc role requirements
sjpb Apr 1, 2021
c74a271
fix /etc/openstack for autoscale
sjpb Apr 1, 2021
73eed39
remove SlurmctldParameters unsupported in slurm 20.02.5
sjpb Apr 1, 2021
967d107
use openhpc_munge_key parameter
sjpb Apr 1, 2021
94de099
don't cache node ips in slurm
sjpb Apr 1, 2021
99793ad
tune slurm debug info for powersave only
sjpb Apr 1, 2021
1a3fd48
use default security groups
sjpb Apr 6, 2021
b992161
remove ssh proxying from inventory
sjpb Apr 6, 2021
0ebba20
add helloworld MPI program setup
sjpb Apr 6, 2021
79b0516
specify NFS server by hostname not IP
sjpb Apr 6, 2021
9f9430a
update to latest built image
sjpb Apr 6, 2021
95a8ed2
remove inventory hosts file from git
sjpb Apr 6, 2021
510a1bf
show cloud nodes even when powered off
sjpb Apr 8, 2021
9392c39
revert compute image to vanilla centos8.2
sjpb Apr 8, 2021
9467973
remove sausagecloud environment
sjpb Sep 8, 2021
000a4e7
move autoscale into slurm
sjpb Sep 8, 2021
f6514e6
allow for overriding slurm config in appliance
sjpb Sep 8, 2021
b285620
add autoscale group/group_vars
sjpb Sep 8, 2021
7ae5042
use autoscale branch of openhpc role
sjpb Sep 8, 2021
ea5c3bc
Add podman_cidr to allow changing podman network range
sjpb Sep 9, 2021
9fba933
Merge branch 'main' into feature/autoscale
sjpb Sep 9, 2021
290f01c
Merge branch 'fix/podman-cidr' into feature/autoscale - to allow test…
sjpb Sep 9, 2021
237b069
fix order of slurm.conf changes and {Resume,Suspend}Program creation …
sjpb Sep 9, 2021
82e4fac
turn up slurmctld logging
sjpb Sep 10, 2021
1353f86
add extension to templates
sjpb Sep 10, 2021
6047313
log exception tracebacks from resume/suspend programs
sjpb Sep 10, 2021
919ff50
change appcred owner
sjpb Sep 10, 2021
02377b1
fix try/except in resume/suspend
sjpb Sep 10, 2021
b0622d9
handle incorrect resume config
sjpb Sep 10, 2021
d1ba38e
fix autoscale config for smslabs
sjpb Sep 10, 2021
8e2a827
avoid suspend/resume exceptions on successful run
sjpb Sep 10, 2021
6f25ff8
Merge branch 'main' into feature/autoscale
sjpb Sep 23, 2021
37055b5
basic (messy) working autoscale
sjpb Sep 23, 2021
6a37f50
make clouds.yaml idempotent (TODO: fix for rebuild nodes)
sjpb Sep 24, 2021
49a76cc
fix /etc/openstack permissions for autoscale
sjpb Sep 24, 2021
9c9a69e
use openhpc_suspend_exc_nodes to prevent login nodes autoscaling
sjpb Sep 24, 2021
10a2036
install slurm user before adding slurm tools
sjpb Sep 28, 2021
7de823f
read node Features to get openstack instance information
sjpb Sep 28, 2021
d7bfa75
move autoscale node info to openhpc_slurm_partitions
sjpb Sep 28, 2021
544b1ab
rename openhpc vars
sjpb Sep 29, 2021
31d8e84
add vars from smslabs environment as demo
sjpb Sep 29, 2021
3257a85
cope with no non-cloud nodes in suspend_exc defaults
sjpb Sep 29, 2021
75a0069
smslabs: more complex partition example
sjpb Sep 29, 2021
4a61c5d
use cloud_features support
sjpb Sep 29, 2021
74404c2
fix feature extraction for multiple nodes
sjpb Sep 29, 2021
7d13831
smslabs: testable (default) burst partition
sjpb Sep 29, 2021
8d627f4
write instance ID to StateSaveLocation on creation
sjpb Sep 29, 2021
8b31189
use instance id on deletion
sjpb Sep 29, 2021
a1ba9ea
fixup rebuild/autoscale variable names
sjpb Sep 30, 2021
ebf3dd9
create autoscale role with auto-modification of openhpc_slurm_partitions
sjpb Sep 30, 2021
0bde5fc
set autoscale defaults with merged options
sjpb Sep 30, 2021
37a1070
enable rebuild from controller
sjpb Sep 30, 2021
138de0a
make suspend less picky about instance ID file format
sjpb Oct 1, 2021
dee0807
use existing compute-based rebuild
sjpb Oct 1, 2021
993d413
move suspend/resume program into slurm_openstack_tools
sjpb Oct 1, 2021
53e27fd
use autoscale defaults in role via set_fact
sjpb Oct 1, 2021
0516499
improve autoscale vars/defaults/docs
sjpb Oct 1, 2021
04198d5
use set_fact merging on rebuild and fix venv deployment
sjpb Oct 5, 2021
60e74a8
use openhpc role's extra_nodes feature
sjpb Oct 5, 2021
1ee10e9
fix actually generating cloud_node info
sjpb Oct 6, 2021
8a95667
retrieve cloud_node instance cpu/mem from openstack
sjpb Oct 6, 2021
a96e68c
WIP autoscale README
sjpb Oct 6, 2021
20d98fc
smslabs: update demo partition
sjpb Oct 6, 2021
173fe3e
add install tag to first run of stackhpc.openhpc:install.yml
sjpb Oct 6, 2021
474c838
fix changed_when
sjpb Oct 6, 2021
8054f77
add autoscale_clouds
sjpb Oct 7, 2021
8c1b4be
move suspend_excl_nodes definition from openhpc role to here
sjpb Oct 7, 2021
dfc859e
use separate tasks for rebuild and autoscale and move rebuild role in…
sjpb Oct 7, 2021
a236d36
move rebuild role back into collection
sjpb Oct 7, 2021
62b6cf2
move autoscale into collection
sjpb Oct 7, 2021
e140d6a
remove autoscale validation as needed vars not available
sjpb Oct 7, 2021
1132ccd
fix merging of enable_configless
sjpb Oct 7, 2021
4e7b28d
avoid multiple package installation tasks when using autoscale
sjpb Oct 7, 2021
99950ad
remove in-appliance rebuild role
sjpb Oct 8, 2021
3f6419d
fallback to working smslabs partition definition for demo
sjpb Oct 8, 2021
ef90759
smslabs: demo groups in openhpc_slurm_partitions
sjpb Oct 12, 2021
2ee9304
tidy for PR
sjpb Oct 15, 2021
6476e82
fix branch for ansible_collection_slurm_openstack_tools
sjpb Oct 15, 2021
4a342fd
Merge branch 'main' into feature/autoscale
sjpb Jan 24, 2022
2e9c926
Merge branch 'main' into feature/autoscale
sjpb Jan 27, 2022
2c6c642
fix up autoscale test environment
sjpb Jan 31, 2022
12e7de4
change autoscale group to be openstack-specific
sjpb Jan 31, 2022
c0370d6
fix security groups in smslabs for idempotency
sjpb Feb 17, 2022
2192ab7
fix smslabs env not being configless, add checks for this
sjpb Feb 17, 2022
0290115
WIP for smslabs autoscale
sjpb Feb 17, 2022
7c33f1c
add basic autoscale to CI
sjpb Feb 21, 2022
e353d4e
WIP CI for autoscale on Arcus
sjpb Feb 22, 2022
b53b72d
add squid proxy on arcus CI
sjpb Feb 28, 2022
2274755
wip arcus CI (TF only)
sjpb Feb 28, 2022
177a605
add TF provisioners to arcus CI
sjpb Feb 28, 2022
2c73a91
add direct deploy, reimage, destroy jobs to arcus workflow
sjpb Feb 28, 2022
2f8b5d4
use simpler arcus CI workflow
sjpb Feb 28, 2022
5f627c2
add /etc/hosts creation to arcus as no working DNS
sjpb Feb 28, 2022
e5c0f45
create /etc/hosts from port info in arcus CI
sjpb Feb 28, 2022
bcc1014
try to set OS_CLOUD correctly in arcus CI
sjpb Feb 28, 2022
d9c267c
fix autoscale partition definition in arcus CI
sjpb Feb 28, 2022
1490816
fix arcus partition definition
sjpb Mar 1, 2022
453eac6
use branch of ansible slurm/openstack tools which enforces ram_mb def…
sjpb Mar 1, 2022
1c5b8c9
Merge branch 'main' into ci/arcus
sjpb Mar 1, 2022
cd9c777
autodetect partition name in arcus CI check
sjpb Mar 1, 2022
b89dc37
fix arcus memory definition
sjpb Mar 1, 2022
5554966
use larger arcus flavor as running out of memory
sjpb Mar 2, 2022
c1c1869
add validation of rebuild
sjpb Mar 2, 2022
54e4c87
change pytools version during dev
sjpb Mar 2, 2022
a841149
fix Packer vars for arcus CI
sjpb Mar 2, 2022
bff9f7c
remove port creation in arcus CI to make failure cleanup easier
sjpb Mar 2, 2022
4da09c8
remove debugging end from CI
sjpb Mar 2, 2022
a44d7bb
make arcus login node flavor match packer builder size
sjpb Mar 3, 2022
5635813
increase timeout after reimage for arcus CI
sjpb Mar 3, 2022
bf2e8ae
do image build in parallel in arcus CI
sjpb Mar 3, 2022
15152d5
separate image rebuild from site.yml in arcus CI
sjpb Mar 4, 2022
67a3603
try to fix check_slurm tasks location in arcus CI
sjpb Mar 4, 2022
f693eb8
fix arcus CI bug for cloud image update
sjpb Mar 4, 2022
17c7a94
use predefined RDMA-capable ports for arcus CI
sjpb Mar 7, 2022
ec3dcbd
remove unused port_prefix from arcus (finds it from nodename)
sjpb Mar 8, 2022
3c5c262
add cpu info for Arcus
sjpb Mar 8, 2022
c1d15f0
tidy up arcus CI openstack after successful run
sjpb Mar 8, 2022
d34600c
revert smslabs environment to main
sjpb Mar 8, 2022
3320e8c
remove podman override (copied from smslabs) from arcus environment
sjpb Mar 8, 2022
0b04a66
set arcus CI to run only on push to main and PRs
sjpb Mar 8, 2022
0ced47d
remove (broken) smslabs CI
sjpb Mar 9, 2022
bd1e0cc
rename smslabs environment
sjpb Mar 9, 2022
248cbe4
fix additional openhpc config in smslabs environment
sjpb Mar 9, 2022
b881529
smslabs terraform fixes for security group + clouds.yaml
sjpb Mar 9, 2022
9273f6d
align smslabs environment with arcus CI
sjpb Mar 9, 2022
8b8b003
replicate arcus CI workflow on smslabs w/ shared ansible/ci/ plays
sjpb Mar 9, 2022
5176449
move check_slurm.yml path
sjpb Mar 9, 2022
658b0b1
Merge pull request #152 from stackhpc/ci/smslabs
sjpb Mar 10, 2022
90262b0
add environment path to instance metadata
sjpb Mar 22, 2022
c88e906
fix slurm /home/slurm-app-ci owner to permit pip upgrades on slurm-op…
sjpb Mar 22, 2022
cb21a99
Merge branch 'main' into ci/arcus for Open Ondemand
sjpb Mar 22, 2022
ca640ff
simplify smslabs test user handling
sjpb Mar 22, 2022
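
Taken together, these commits implement autoscaling via Slurm's power-saving support: CLOUD-state compute nodes are defined in the partition configuration but only exist as OpenStack instances while jobs need them, the resume/suspend programs from slurm_openstack_tools create and delete those instances, and openhpc_suspend_exc_nodes keeps the login nodes out of scope. As a minimal sketch (key names and values below are illustrative assumptions, not the appliance's actual variables), a burst partition definition might look like:

openhpc_slurm_partitions:
  - name: burst
    cloud_nodes: 4                  # CLOUD-state nodes defined in Slurm only
    cloud_instances:                # instance details exposed to the resume program
      flavor: example-flavor        # placeholder
      image: example-compute-image  # placeholder
      network: example-network      # placeholder

The "read node Features to get openstack instance information" and "retrieve cloud_node instance cpu/mem from openstack" commits indicate that such details are passed to the resume program via node Features, with CPU and memory for the Slurm node definitions looked up from the flavor.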
146 changes: 146 additions & 0 deletions .github/workflows/arcus.yml
@@ -0,0 +1,146 @@

name: Test on Arcus OpenStack in rcp-cloud-portal-demo
on:
  push:
    branches:
      - main
  pull_request:
concurrency: rcp-cloud-portal_demo # openstack project
jobs:
  arcus:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v2

      - name: Setup ssh
        run: |
          set -x
          mkdir ~/.ssh
          echo "$SSH_KEY" > ~/.ssh/id_rsa
          chmod 0600 ~/.ssh/id_rsa
        env:
          SSH_KEY: ${{ secrets.ARCUS_SSH_KEY }}

      - name: Add bastion's ssh key to known_hosts
        run: cat environments/arcus/bastion_fingerprint >> ~/.ssh/known_hosts
        shell: bash

      - name: Install ansible etc
        run: dev/setup-env.sh

      - name: Install terraform
        uses: hashicorp/setup-terraform@v1

      - name: Initialise terraform
        run: terraform init
        working-directory: ${{ github.workspace }}/environments/arcus/terraform

      - name: Write clouds.yaml
        run: |
          mkdir -p ~/.config/openstack/
          echo "$CLOUDS_YAML" > ~/.config/openstack/clouds.yaml
        shell: bash
        env:
          CLOUDS_YAML: ${{ secrets.ARCUS_CLOUDS_YAML }}

      - name: Provision infrastructure
        id: provision
        run: |
          . venv/bin/activate
          . environments/arcus/activate
          cd $APPLIANCES_ENVIRONMENT_ROOT/terraform
          terraform apply -auto-approve
        env:
          OS_CLOUD: openstack
          TF_VAR_cluster_name: ci${{ github.run_id }}

      - name: Get server provisioning failure messages
        id: provision_failure
        run: |
          . venv/bin/activate
          . environments/arcus/activate
          cd $APPLIANCES_ENVIRONMENT_ROOT/terraform
          echo "::set-output name=messages::$(./getfaults.py)"
        env:
          OS_CLOUD: openstack
          TF_VAR_cluster_name: ci${{ github.run_id }}
        if: always() && steps.provision.outcome == 'failure'

      - name: Delete infrastructure if failed due to lack of hosts
        run: |
          . venv/bin/activate
          . environments/arcus/activate
          cd $APPLIANCES_ENVIRONMENT_ROOT/terraform
          terraform destroy -auto-approve
        env:
          OS_CLOUD: openstack
          TF_VAR_cluster_name: ci${{ github.run_id }}
        if: ${{ always() && steps.provision.outcome == 'failure' && contains('not enough hosts available', steps.provision_failure.messages) }}

      - name: Directly configure cluster and build compute + login images
        # see pre-hook for the image build
        run: |
          . venv/bin/activate
          . environments/arcus/activate
          ansible all -m wait_for_connection
          ansible-playbook ansible/adhoc/generate-passwords.yml
          ansible-playbook -vv ansible/site.yml
        env:
          OS_CLOUD: openstack
          ANSIBLE_FORCE_COLOR: True

      - name: Test reimage of login and compute nodes
        run: |
          . venv/bin/activate
          . environments/arcus/activate
          ansible all -m wait_for_connection
          ansible-playbook -vv ansible/ci/test_reimage.yml
        env:
          OS_CLOUD: openstack
          ANSIBLE_FORCE_COLOR: True

      - name: Update cloud image and reconfigure Slurm
        run: |
          . venv/bin/activate
          . environments/arcus/activate
          ansible-playbook -vv ansible/ci/update_cloudnode_image.yml
          ansible-playbook -vv ansible/slurm.yml --tags openhpc --skip-tags install
        env:
          ANSIBLE_FORCE_COLOR: True
          OS_CLOUD: openstack

      - name: Run MPI-based tests (triggers autoscaling)
        run: |
          . venv/bin/activate
          . environments/arcus/activate
          ansible-playbook -vv ansible/adhoc/hpctests.yml
        env:
          ANSIBLE_FORCE_COLOR: True

      - name: Wait for CLOUD nodes to be destroyed
        run: |
          . venv/bin/activate
          . environments/arcus/activate
          ansible-playbook -vv ansible/ci/wait_for_scaledown.yml
        env:
          OS_CLOUD: openstack
          ANSIBLE_FORCE_COLOR: True

      - name: Delete infrastructure
        run: |
          . venv/bin/activate
          . environments/arcus/activate
          cd $APPLIANCES_ENVIRONMENT_ROOT/terraform
          terraform destroy -auto-approve
        env:
          TF_VAR_cluster_name: ci${{ github.run_id }}
        if: ${{ success() || cancelled() }}

      - name: Delete images
        run: |
          . venv/bin/activate
          . environments/arcus/activate
          ansible-playbook -vv ansible/ci/delete_images.yml
        env:
          OS_CLOUD: openstack
          ANSIBLE_FORCE_COLOR: True
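
The "Wait for CLOUD nodes to be destroyed" step runs ansible/ci/wait_for_scaledown.yml, which is not included in this diff view. A minimal sketch of such a check, assuming the autoscaled nodes sit in a partition named "burst", that the Slurm controller is in a "control" group, and that sinfo reports powered-down cloud nodes with a "~" suffix (all assumptions):

- hosts: control
  become: no
  gather_facts: no
  tasks:
    - name: Wait until all nodes in the burst partition have powered down
      # partition name is an assumption; %T prints the extended node state
      ansible.builtin.command: sinfo --noheader --partition=burst --format="%T"
      register: sinfo_out
      changed_when: false
      # retry until every reported state carries the '~' (powered-down) suffix
      until: sinfo_out.stdout_lines | reject('search', '~') | length == 0
      retries: 40
      delay: 15

In practice the play times out (and fails the job) if SuspendTime/SuspendProgram never remove the instances, which is exactly the regression this step guards against.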
70 changes: 40 additions & 30 deletions .github/workflows/smslabs.yml
@@ -1,13 +1,13 @@

name: Test on OpenStack via smslabs
name: Test on SMS-Labs OpenStack in stackhpc-ci
on:
push:
branches:
- main
pull_request:
concurrency: stackhpc-ci # openstack project
jobs:
openstack-example:
smslabs:
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v2
@@ -22,7 +22,7 @@ jobs:
SSH_KEY: ${{ secrets.SSH_KEY }}

- name: Add bastion's ssh key to known_hosts
run: cat environments/smslabs-example/bastion_fingerprint >> ~/.ssh/known_hosts
run: cat environments/smslabs/bastion_fingerprint >> ~/.ssh/known_hosts
shell: bash

- name: Install ansible etc
@@ -33,7 +33,7 @@

- name: Initialise terraform
run: terraform init
working-directory: ${{ github.workspace }}/environments/smslabs-example/terraform
working-directory: ${{ github.workspace }}/environments/smslabs/terraform

- name: Write clouds.yaml
run: |
@@ -47,7 +47,7 @@
id: provision
run: |
. venv/bin/activate
. environments/smslabs-example/activate
. environments/smslabs/activate
cd $APPLIANCES_ENVIRONMENT_ROOT/terraform
terraform apply -auto-approve
env:
@@ -58,7 +58,7 @@
id: provision_failure
run: |
. venv/bin/activate
. environments/smslabs-example/activate
. environments/smslabs/activate
cd $APPLIANCES_ENVIRONMENT_ROOT/terraform
echo "::set-output name=messages::$(./getfaults.py)"
env:
@@ -69,71 +69,81 @@
- name: Delete infrastructure if failed due to lack of hosts
run: |
. venv/bin/activate
. environments/smslabs-example/activate
. environments/smslabs/activate
cd $APPLIANCES_ENVIRONMENT_ROOT/terraform
terraform destroy -auto-approve
env:
OS_CLOUD: openstack
TF_VAR_cluster_name: ci${{ github.run_id }}
if: ${{ always() && steps.provision.outcome == 'failure' && contains('not enough hosts available', steps.provision_failure.messages) }}

- name: Configure infrastructure
- name: Directly configure cluster and build compute + login images
# see pre-hook for the image build
run: |
. venv/bin/activate
. environments/smslabs-example/activate
. environments/smslabs/activate
ansible all -m wait_for_connection
ansible-playbook ansible/adhoc/generate-passwords.yml
echo test_user_password: "$TEST_USER_PASSWORD" > $APPLIANCES_ENVIRONMENT_ROOT/inventory/group_vars/basic_users/defaults.yml
ansible-playbook -vv ansible/site.yml
env:
OS_CLOUD: openstack
ANSIBLE_FORCE_COLOR: True
TEST_USER_PASSWORD: ${{ secrets.TEST_USER_PASSWORD }}

- name: Run MPI-based tests
- name: Test reimage of login and compute nodes
run: |
. venv/bin/activate
. environments/smslabs-example/activate
ansible-playbook -vv ansible/adhoc/hpctests.yml
. environments/smslabs/activate
ansible all -m wait_for_connection
ansible-playbook -vv ansible/ci/test_reimage.yml
env:
OS_CLOUD: openstack
ANSIBLE_FORCE_COLOR: True
TEST_USER_PASSWORD: ${{ secrets.TEST_USER_PASSWORD }}

- name: Build control and compute images
- name: Update cloud image and reconfigure Slurm
run: |
. venv/bin/activate
. environments/smslabs-example/activate
cd packer
PACKER_LOG=1 PACKER_LOG_PATH=build.log packer build -var-file=$PKR_VAR_environment_root/builder.pkrvars.hcl openstack.pkr.hcl
. environments/smslabs/activate
ansible-playbook -vv ansible/ci/update_cloudnode_image.yml
ansible-playbook -vv ansible/slurm.yml --tags openhpc --skip-tags install
env:
ANSIBLE_FORCE_COLOR: True
OS_CLOUD: openstack
TEST_USER_PASSWORD: ${{ secrets.TEST_USER_PASSWORD }}

- name: Reimage compute nodes via slurm and check cluster still up
- name: Run MPI-based tests (triggers autoscaling)
run: |
. venv/bin/activate
. environments/smslabs-example/activate
ansible-playbook -vv $APPLIANCES_ENVIRONMENT_ROOT/ci/reimage-compute.yml
ansible-playbook -vv $APPLIANCES_ENVIRONMENT_ROOT/hooks/post.yml
. environments/smslabs/activate
ansible-playbook -vv ansible/adhoc/hpctests.yml
env:
ANSIBLE_FORCE_COLOR: True
OS_CLOUD: openstack
TEST_USER_PASSWORD: ${{ secrets.TEST_USER_PASSWORD }}

- name: Reimage login nodes via openstack and check cluster still up
- name: Wait for CLOUD nodes to be destroyed
run: |
. venv/bin/activate
. environments/smslabs-example/activate
ansible-playbook -vv $APPLIANCES_ENVIRONMENT_ROOT/ci/reimage-login.yml
ansible-playbook -vv $APPLIANCES_ENVIRONMENT_ROOT/hooks/post.yml
. environments/smslabs/activate
ansible-playbook -vv ansible/ci/wait_for_scaledown.yml
env:
OS_CLOUD: openstack
TEST_USER_PASSWORD: ${{ secrets.TEST_USER_PASSWORD }}
ANSIBLE_FORCE_COLOR: True

- name: Delete infrastructure
run: |
. venv/bin/activate
. environments/smslabs-example/activate
. environments/smslabs/activate
cd $APPLIANCES_ENVIRONMENT_ROOT/terraform
terraform destroy -auto-approve
env:
OS_CLOUD: openstack
TF_VAR_cluster_name: ci${{ github.run_id }}
if: ${{ success() || cancelled() }}

- name: Delete images
run: |
. venv/bin/activate
. environments/smslabs/activate
ansible-playbook -vv ansible/ci/delete_images.yml
env:
OS_CLOUD: openstack
ANSIBLE_FORCE_COLOR: True
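
Note that the clouds.yaml written by these workflows only authenticates the CI runner itself; for autoscaling, the control node also needs OpenStack credentials that slurmctld can read when it invokes the resume/suspend programs (see the "add autoscale_clouds", "fix clouds.yaml" and "fix /etc/openstack permissions for resume" commits). A minimal sketch of how that might be deployed, assuming an autoscale_clouds variable holding an application-credential clouds.yaml and a slurm user/group (both assumptions):

- name: Create /etc/openstack
  ansible.builtin.file:
    path: /etc/openstack
    state: directory
    owner: slurm
    group: slurm
    mode: "0750"

- name: Write clouds.yaml used by the resume/suspend programs
  ansible.builtin.copy:
    content: "{{ autoscale_clouds | to_nice_yaml }}"  # assumed variable name, per the 'add autoscale_clouds' commit
    dest: /etc/openstack/clouds.yaml
    owner: slurm
    group: slurm
    mode: "0600"

Keeping the file owned by the slurm user with restrictive permissions is consistent with the permission-fix commits above, since slurmctld runs the resume/suspend programs as the Slurm user rather than root.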
2 changes: 1 addition & 1 deletion ansible/adhoc/restart-slurm.yml
@@ -20,7 +20,7 @@
name: slurmctld
state: restarted

- hosts: compute,login
- hosts: compute,login # FIXME: doesn't work if using `login` as combined slurmctld
become: yes
gather_facts: no
tasks:
@@ -1,23 +1,23 @@
# Reimage login nodes via OpenStack

- hosts: login
- hosts: login:!builder
become: no
gather_facts: no
tasks:
- name: Read packer build manifest
set_fact:
manifest: "{{ lookup('file', manifest_path) | from_json }}"
vars:
manifest_path: "{{ lookup('env', 'APPLIANCES_REPO_ROOT') }}/packer/packer-manifest.json"
delegate_to: localhost
- name: Get latest login image build

- name: Get latest image builds
set_fact:
login_build: "{{ manifest['builds'] | selectattr('custom_data', 'eq', {'source': 'login'}) | last }}"
compute_build: "{{ manifest['builds'] | selectattr('custom_data', 'eq', {'source': 'compute'}) | last }}"

- name: Reimage node via openstack
- name: Delete images
shell:
cmd: "openstack server rebuild {{ instance_id | default(inventory_hostname) }} --image {{ login_build.artifact_id }}"
cmd: |
openstack image delete {{ login_build.artifact_id }}
openstack image delete {{ compute_build.artifact_id }}
delegate_to: localhost

- name: Wait for connection
wait_for_connection:
