Overview

These are some trivial scripts that I use to do things on Amazon's cloud computing platform, AWS. AWS is confusing, and it's hard to remember all the commands and operations involved. These scripts attempt to make things simpler and to give a little structure and order to an otherwise frustrating experience. I use AWS for large, parallel simulations, so the scripts are geared toward that kind of usage scenario.

These scripts have no licensing restrictions whatsoever. They are public domain.

One-Time Setup

My Security Philosophy

I recommend the following:

  1. Create a separate AWS (root) account for each group of users who need to work on the same stuff. I prefer multiple accounts rather than one account with lots of complicated boundaries inside the account. It's just less error-prone. Use one Virtual Private Cloud (VPC) within each account.

  2. Create a special group within each AWS root account for admins of that account. Call it "admins". Only these users may perform administrative actions for the account. Only they have SSH keys that allow them to SSH to instances as ec2-user, which should be the only sudo-capable Linux username.

  3. Use SSH as the only mechanism for connecting to instances. Even VNC can be run through SSH tunnels. I trust SSH when it's set up properly. In a later section, I will show you how to harden SSH on your instances so that hackers have no chance of getting in.

  4. By default, do not allow non-admin users to do anything with AWS, including read-only things. There is no reason to create some kind of "users" group by default. Instead, allow those users only SSH access to one or more instances using non-admin Linux usernames. For the most part, non-admin users can operate without even worrying about whether their servers live in AWS, Google Cloud, or on-premises. Later, I will show how to map subdomain names to running instances to make it easier to log in in the face of changing IP addresses.

  5. The exception to the default in #4 is when you want to allow users to launch their own instances. In that case, you'll want to put them into their own security group within the VPC, possibly in a per-user security group. This way, the user can administer his/her own instances within the VPC, and not have privilege to manipulate other instances in other security groups (besides SSH to those instances if you so allow).

  6. Enable EBS volume encryption by default. This ensures that all volumes (virtual disks) are encrypted by default. Go to the EC2 Dashboard, select your region, then click on Settings under Account Attributes on the right. Check the box labeled "Always encrypt new EBS volumes." If you use multiple regions, you must do this for each region.
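
If you prefer the command line, the same per-region setting can be flipped with the AWS CLI (a sketch; run it once per region you use, with us-east-1 as an example):

aws ec2 enable-ebs-encryption-by-default --region us-east-1   # turn the setting on
aws ec2 get-ebs-encryption-by-default --region us-east-1      # sanity check; should report true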

Set Up Your Local PC for AWS Command Line

First you must obtain the "access key id" and "secret access key" from the AWS Console for your account. As described earlier, you can have one "admins" security group with you as the only user who can do anything, and non-admin users having only SSH access. Or you can decide to give non-admin users their own security group(s), in which case you'd need to assign them access keys via the AWS Console so that they can administer resources in that security group. The difference between the "admins" group and these other non-admin groups and users is that the latter will likely have access to fewer resources and privileges within the account.

With that nuance (I hope) clarified, the rest of this document applies to any AWS user who has some level of administrative privilege within some security group within some VPC within some AWS account, even if that user is not the AWS account owner. Users who have only SSH access to instances via normal SSH server mechanisms cannot execute any of the scripts described in this document because they have no AWS access keys. They have only SSH keys, which are a Linux server concept, not an AWS concept.

Install the AWS CLI (aws-cli). Search the web for how to do this on your type of PC.

Next, clone this repo and put it on your PATH:

cd [somewhere]
git clone https://github.com/balfieri/aws
export PATH=...
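
For example, if you cloned the repo into ~/tools (a hypothetical location), the PATH line might look like:

export PATH=$HOME/tools/aws:$PATH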

Now we can configure the AWS environment on your PC which will set you up for a particular account, group, VPC, and region.

If you want to first list all regions that you have available to you, use this command, but keep in mind that an account may belong to at most one region at a time:

my_regions                              # returns list of available regions
aws configure                           
                                        # then respond to the four questions: 
AWS Access Key ID:                      # from AWS Console
AWS Secret Access Key:                  # from AWS Console
Default region name:                    # I am closest to "us-east-1"
Default output format:                  # I normally use "text"

Do some sanity checks:

my_account                              # returns your account/owner id (an integer)
my_group                                # returns your "sg-nnn" security group id
my_vpc                                  # returns your "vpc-nnn" VPC id (typically one per account)
my_region                               # returns the region you specified above

You'll notice that those don't print a newline. That's so other scripts can use them on command lines like `my_group` etc.
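
For example, because there is no trailing newline, the ids can be dropped straight into other AWS CLI commands, as in this sketch:

aws ec2 describe-security-groups --group-ids `my_group`
aws ec2 describe-vpcs --vpc-ids `my_vpc`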

Create an Admin SSH Key Pair or Upload a Pre-Generated Public Key

We are going to create an SSH key pair that we'll use to SSH to our master instance as ec2-user, which is the only Linux username with sudo privilege. When we create the master instance, we'll indicate that only this key pair is allowed to SSH into the instance. A little later, we'll add some non-admin users with their SSH keys, but we won't use the commands shown here. These are just for the admin ec2-user.

NOTE: ec2-user, as well as root and others, will be initialized during AWS instance creation to disallow password logins, so the only way to log in as ec2-user is via SSH. That's good because we don't want non-admin users to be able to "su - ec2-user". Later, we will set up non-admin users with the same restriction so that non-admin users can't "su - normal_user".

Back to setting up SSH keys for you, the admin. Embed your LASTNAME (or whatever makes the name unique) in the key name:

aws ec2 create-key-pair --key-name awsLASTNAMEkey --query 'KeyMaterial' \
                        --output text > ~/.ssh/awsLASTNAMEkey.pem
chmod 400 ~/.ssh/awsLASTNAMEkey.pem     # required

I prefer to generate my own SSH key pair and upload just the public key which is the only key that's needed on the server side of SSH. In fact, I use - and highly recommend - a Yubikey device which hides the private key even from me, so I couldn't upload the private key even if I wanted to.
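
Generating the pair locally is plain ssh-keygen (a sketch for the non-Yubikey case):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/awsLASTNAMEkey   # writes awsLASTNAMEkey (private) and awsLASTNAMEkey.pub (public)
chmod 400 ~/.ssh/awsLASTNAMEkey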

aws ec2 import-key-pair --key-name awsLASTNAMEkey \
                        --public-key-material file://~/.ssh/awsLASTNAMEkey.pub

Allow SSH Access to Instances

Allow external users - including yourself - to SSH to instances in your security group. SSH uses protocol TCP, port 22. Without this authorization, you won't be able to reach the master instance even with the key pair set up above:

auth_group_ingress tcp 22             

The default egress rule is to allow all outgoing traffic, but you can add more specific rules using:

auth_group_egress protocol port
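
For example, to add an explicit rule allowing outbound HTTPS (same protocol/port argument order as the ingress command above):

auth_group_egress tcp 443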

You can check your ingress and egress rules using this:

group_rules

The output from that should include this ingress rule for ssh:

{
    "PrefixListIds": [],
    "FromPort": 22,                     # default ssh port
    "IpRanges": [
        {
            "CidrIp": "0.0.0.0/0"       # allow any incoming IP
        }
    ],
    "ToPort": 22,
    "IpProtocol": "tcp",
    "UserIdGroupPairs": [],
    "Ipv6Ranges": []
}

You'll also see the default ingress rule, which allows traffic only from instances in the same security group (so nothing from the outside gets in):

{
    "IpProtocol": "-1",                 # means "any" protocol
    "PrefixListIds": [],
    "IpRanges": [],                     # no allowed IP ranges
    "UserIdGroupPairs": [
        {
            "UserId": "780936156948",
            "GroupId": "sg-1d247e5d"
        }
    ],
    "Ipv6Ranges": []
},

The default egress rule is to allow all outgoing traffic:

{
    "IpProtocol": "-1",                 # means "any"
    "PrefixListIds": [],
    "IpRanges": [
        {
            "CidrIp": "0.0.0.0/0"       # full range of IPv4 addresses
        }
    ],
    "UserIdGroupPairs": [],
    "Ipv6Ranges": []
}

You may later revoke ingress/egress access using one or both of the following commands:

revoke_group_ingress protocol port

revoke_group_egress  protocol port

Create Your "Master" Instance

Typically, you'll want one instance to act as a default or as a template for others. We'll call this your "master instance." Typically you'll get a program running on the master instance, then take a snapshot and use that snapshot to clone new instances. More on that later.

Get the list of the most recent Amazon Linux 2 images (assuming you want to run that). This will return a list of ami-nnn image ids and creation dates. Normally you'll just pick the most recent one:

linux2_images                           # defaults to x86_64 for t3 instances, etc.
linux2_images -arch arm64               # for Graviton arm64 t4g/c7g/c8g instances, etc.

If you just want to get the most recent one from Amazon, use this:

linux2_image
linux2_image -arch arm64

Create one on-demand instance using the t3.medium instance type for starters and the SSH key pair that you created above so that you (and only you) can SSH to the instance as admin ec2-user:

create_inst -type t3.medium -image `linux2_image` -key awsLASTNAMEkey

Here's the same, except for a t4g.medium or c7g.large (newer Amazon Graviton arm64 CPUs that provide better price/performance):

create_inst -type t4g.medium -image `linux2_image -arch arm64` -key awsLASTNAMEkey
create_inst -type c7g.large -image `linux2_image -arch arm64` -key awsLASTNAMEkey

Note: the new EBS root volume (virtual disk) will be encrypted, assuming the account owner had set up default EBS volume encryption in the AWS Console.


Sanity check to get your i-nnn instance id. You should have only one instance at this point:

my_insts -show_useful

Set Your AWS_MASTER_INSTANCE Environment Variable

Add the following to your .bash_profile or similar for other shells. i-nnn is the instance id returned above by my_insts:

export AWS_MASTER_INSTANCE=i-nnn

If you don't want to embed it directly in your .bash_profile, you could put it in a file and instead put this in your .bash_profile:

export AWS_MASTER_INSTANCE=`cat ~/.aws_master_instance.txt`

Now when you type "master_inst" it will return your master instance id. This is IMPORTANT because many of the scripts use master_inst to get the default instance id for operations. You can usually override it, but you rarely need to.

Alternatively, assuming you have "." at the front of your PATH, you can have different master_inst scripts in different directories. Then when you cd to a particular directory, these scripts will pick up the master_inst in that directory. So the master_inst script gives context for most of the scripts described here.

Alternatively, you could write one master_inst script and have it search up a directory tree until it finds the instance name in some file you keep around. It's up to you. I use the environment-variable technique, particularly since I normally have just one master instance. Some people don't like environment variables, for good reason.
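
Here's a minimal sketch of that last variant, assuming the i-nnn id lives in a file named .aws_master_instance.txt somewhere up the directory tree (the file name is just an example):

#!/bin/bash
# master_inst: walk up from the current directory until .aws_master_instance.txt is found,
# then print its contents (the i-nnn id) with no trailing newline, like the other queries.
dir=$(pwd)
while [ "$dir" != "/" ]; do
    if [ -f "$dir/.aws_master_instance.txt" ]; then
        printf '%s' "$(cat "$dir/.aws_master_instance.txt")"
        exit 0
    fi
    dir=$(dirname "$dir")
done
echo "master_inst: no .aws_master_instance.txt found" >&2
exit 1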

HIGHLY RECOMMENDED: Harden SSH On Your Master Instance

There is a set of common techniques to "harden" SSH on a server, and they can be applied to your master instance to make it more secure. Do a web search for "harden ssh" to get the latest thinking on the topic. A few such ideas are:

  • Use a Yubikey encryption device plugged into your PC with a required passcode. This hides the private key, even from you.
  • Use a time-based authenticator on your phone for 2-factor authentication.
  • Don't use a VPN or special gateways. Stick to one mechanism: SSH.
  • Do not even allow pinging of instances from the internet.

Most of those ideas will require some editing of SSH config files on your master instance.
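
As one illustration, the usual file to edit is /etc/ssh/sshd_config on the instance. A few commonly recommended settings (a sketch, not a complete hardening guide):

# /etc/ssh/sshd_config excerpts; apply with: sudo systemctl restart sshd
PasswordAuthentication no       # key-based logins only
PermitRootLogin no              # log in as ec2-user and use sudo instead
PubkeyAuthentication yes
X11Forwarding no                # disable unless you actually need it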

Install Software On Your Master Instance

First, update the existing software on your master instance:

on_inst sudo yum update -y 

Install any apps that you will need, such as C++, Python3, and GIT:

on_inst sudo yum install -y gcc-c++ python3 git

How often do I recreate my master instance from scratch? About once every year or two. But I typically "yum update -y" more often.

Stop Your Master Instance When Not In Use

Remember to stop your master instance when you aren't using it. This will perform the equivalent of a "shutdown" on that instance. But the EBS root volume persists. We're not deleting the instance. We can re-start it later.

stop_inst

Sanity check that it's stopping:

inst_state

Most of the scripts here will automatically issue a "start_inst" to make sure the instance is started, but they won't stop the instance automatically unless the script requires that it be stopped.

Environment Queries

These commands are not specific to any instance(s):

my_profile              # returns "default" (see next section)
my_account              # returns your account/owner id (an integer)
my_group                # returns your sg-nnn security group id
my_vpc                  # returns your vpc-nnn VPC id
my_region               # returns the region you specified above
my_regions              # list of regions that your owner could use (only one allowed)
my_zones                # list of availability zones within your owner region

Environment Actions

When you configured your aws environment earlier, AWS gave your environment a profile name called "default". If you don't need more than one environment, then you can skip the rest of this section.

If you'd like to set up a second profile, called "work1" for example, then you can define this environment variable in your .bash_profile or similar for other shells. Then my_profile will write out "work1" in this case:

export AWS_DEFAULT_PROFILE=work1

Test that it writes out the proper profile name using my_profile:

my_profile              # should return "work1", not "default"

Unlike AWS_MASTER_INSTANCE, this environment variable does not need to be set. my_profile defaults to "default".

Next, configure this profile the way you configured the "default" one, except supply the profile name on the command line:

aws configure --profile work1

All subsequent scripts in this repo use my_profile and will pick up "work1" as the profile for all AWS commands.

If you end up using multiple profiles from the same PC username, you should consider adding aliases to redefine the two environment variables, for example:

alias work1="export AWS_DEFAULT_PROFILE=work1; export AWS_MASTER_INSTANCE=i-0123456789abc"

Instance Queries

These commands show information for all instances:

master_inst             # show i-nnn id of master (i.e., default) instance
my_insts                # list i-nnn ids of all of your instances (has many options)
my_insts -show_useful   # list useful information for all instances
my_insts_state          # list i-nnn ids and states (equivalent to my_insts -show_state)
my_insts_json           # list all information for all instances in json format

These commands take an instance id as an argument, but normally you'll supply no argument and just let it use `master_inst` to get your master instance id.

inst_state              # pending, running, stopping, stopped, shutting-down, terminated
inst_type               # t3.medium, t4g.medium, c7g.large, c8g.metal, etc.
inst_host               # "" if not running, else the hostname it's running on
inst_zone               # availability zone within region
inst_vpc                # VPC id
inst_subnet             # subnet id
inst_account            # owning account id of instance
inst_group              # sg-nnn group id of instance
inst_key                # creator's SSH key name
inst_image              # ami-nnn image id that the instance is running
inst_vol                # volume id of the EBS root volume
inst_device             # /dev/xvda1 or whatever the root device is
inst_json               # list all information in JSON format
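
For example (the explicit instance id is hypothetical):

inst_state                          # state of the master instance (via master_inst)
inst_state i-0123456789abcdef0      # state of some other instance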

Instance Actions

Start/Stop/Modify

start_inst              # scripts call this automatically, so you rarely need to run it manually
stop_inst               # stop instance if it's running
change_inst_type type   # stops instance and changes its type (t3.medium, etc.)
resize_inst_vol gigabytes # stops instance and resizes its root EBS volume

SSH and SCP

on_inst                 # ssh to the instance
on_inst cmd args...     # ssh to the instance and run "cmd args..."
to_inst src dst         # scp src file or directory from this PC to dst on the instance
fm_inst src dst         # scp src file or directory from instance to dst on this PC

Snapshots

snapshot_inst           # take a snapshot of the instance's root volume (for backups)
vol_snapshot            # get snap-nnn id of the most recent snapshot of the instance's root volume
vol_snapshots           # get snap-nnn ids of all snapshots of the instance's root volume
image_snapshot_inst name # take a snapshot and use it to create a new launchable image with name 
image_snapshot snapshot name # from an existing snapshot, create a new launchable image with name 
my_last_image           # get ami-nnn id of most recently created image
my_images               # get ami-nnn ids and time created of all created images
my_images_json          # show all information for all images in json format
my_vols                 # get vol-nnn ids of all created volumes (third column is true if encrypted)

Note that snapshots do not consume extra space. Amazon implements them using copy-on-write, so new space is allocated only when blocks are changed in one of the volumes (snapshot or other).

Launching New Instances

Create One Instance

Here's how to create one on-demand instance using master instance's image, type, etc. Note that this does not clone the master instance's root drive, so this is normally used to create a different master instance or miscellaneous instance:

create_inst                             

Override instance type:

create_inst -type t3.nano

Override availability zone (note: must be in my_region):

create_inst -zone us-east-1b

Create Multiple Instances

Here's how to create 3 on-demand instances that start by running a local_script. The "local_script" is a script on your PC, not on the instance. The local_script will be copied to each instance and is executed as "root", not as "ec2-user". The stdout of the script and other stdout are written on the instance to /var/log/cloud-init-output.log, which is readable by ec2-user.

create_insts 3 -script "local_script"   
create_insts 3 -script "local_script" -type m3.medium 

Here's an example script (example.sh) that simply executes two commands to show the script (user data) and the launch index of the instance, both described below, and then prints "PASS":

#!/bin/bash
ec2-metadata -d         # print the user data (the contents of local_script)
ec2-metadata -l         # print the ami-launch-index of this instance
echo "PASS"

Often, you may need to upload some data and a C++ program to your master instance. In this case, use the to_inst command to upload a tar bundle beforehand, untar it, then build whatever needs to be built on the master_inst before invoking create_insts. For example:

to_inst my_app.tar.gz my_app.tar.gz
on_inst 'tar xvfz my_app.tar.gz; cd my_app; ./build.sh' 

If you have a huge amount of data (many GBs or TBs), then you'll want to load it into an S3 bucket, then access the data from the instances. More on S3 later.

Create Multiple Instances as Clones of the Master and Execute a Script On Each

Before cloning the master, it's wise to update any software packages installed on it. This also makes the created instances start up faster, because a package update is done by default during the start-up of each instance:

on_inst sudo yum update -y

Create 3 on-demand instances cloned from the current master_inst. This will first execute image_snapshot_inst above to clone the master:

create_insts 3 -script "local_script" -clone_master

If you've already cloned the master, you can use the ami-nnn image id directly, which is available by first executing my_last_image to get the latest ami-nnn image created:

create_insts 3 -script "local_script" -image ami-nnn

Remember that cloned volumes do not consume extra space until file system blocks are modified. Also, create_insts sets up the instances so that EBS root volumes are deleted as their instances are deleted/terminated.

My typical usage scenario for simulations is to copy a large amount of read-only data to the master, then clone the master and have each instance produce a small result file, so this cloning works out well.

Create Multiple Spot Instances

Spot instances can be up to 80% less expensive than on-demand instances. However, they could get terminated at any time by Amazon. I've never had this happen because my instances typically run for less than one hour and I set my max spot price to be the same as the on-demand price. Still, there are no guarantees. My recommendation is to just restart the whole job in this rare occurrence.

Create 3 spot instances with max spot price of $0.01/inst-hour:

create_insts 3 -script "local_script" -spot 0.01               

Create 3 spot instances cloned from the current master_inst:

create_insts 3 -script "local_script" -spot 0.01 -clone_master

Note that spot instances may not be stopped, but they can be terminated using delete_inst.

Note finally that EC2 (regardless of spot vs. non-spot) rounds up time to the nearest second, with a minimum charge of 60 seconds per started instance.

On Each Launched Instance Running the Same Script

The script running _on_ each launched instance can use the following ec2-metadata commands to retrieve information it needs in order to figure out what work it should do.

This command will retrieve the launch index. You can use the launch index to determine which part of your larger job this instance is supposed to perform. Its interpretation is completely up to you:

$ ec2-metadata -l         
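
The output has the form "ami-launch-index: N" (see the log excerpt later in this document), so a script can peel off the number and pick its slice of the work. A sketch, where my_sim is a hypothetical application:

idx=$(ec2-metadata -l | cut -d' ' -f2)          # "ami-launch-index: 2" -> 2
./my_sim --shard $idx --num-shards 3 > /tmp/results_file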

Less useful, because your script already knows what to do, but this command will retrieve the contents of the local_script file (the user data):

$ ec2-metadata -d         

When the instance is done with its work, it will typically write some kind of "PASS" indication to stdout (which goes to /var/log/cloud-init-output.log) or to some results_file of your choosing in /tmp.

Of course, there are other ways each instance can get information. It could look in an S3 bucket (more on that later) or use a database. The above technique has an advantage that it doesn't require anything else besides the master instance, assuming your application doesn't need a huge amount of shared data or need to update some database.

Harvesting Results from Instances Running the Same Script

You can issue the following commands from your PC to get results from your instances and then delete (terminate) them.

The following command will show all useful information about all instances:

my_insts -show_useful
i-0fcb24869e6f081a1	2019-02-07T23:19:44.000Z	stopped	t3.medium	ami-009d6802948d06e52	us-east-1a	None	
i-0000de611bb6799db	2019-02-11T03:20:27.000Z	terminated	t3.medium	ami-013c046b8914ec5a7	us-east-1a	None	751871c6340f86b91aaf19478278b67c0af76a114cb88fe101b188c1fbda
i-0d0b236507c47ff02	2019-02-11T03:24:03.000Z	running	t3.medium	ami-013c046b8914ec5a7	us-east-1a	None	e12c1301f0cd35347ffa0c39f854e1354ad627aa5f4927a034a898d31b22
i-0aa0531f3b57fc14a	2019-02-11T03:24:03.000Z	running	t3.medium	ami-013c046b8914ec5a7	us-east-1a	None	e12c1301f0cd35347ffa0c39f854e1354ad627aa5f4927a034a898d31b22
i-01f1e5135fa9a2c0a	2019-02-11T03:24:03.000Z	running	t3.medium	ami-013c046b8914ec5a7	us-east-1a	None	e12c1301f0cd35347ffa0c39f854e1354ad627aa5f4927a034a898d31b22

From that output, you can see that the last 3 were created at the same time. That last long e12c130... number is supposed to be the -script string, but it's some other form of it, at least for spot instances. It's better to go by the date. We can use this command from inside some script to get the i-nnn instance ids of those 3 and their states:

my_insts -time 2019-02-11T03:24:03.000Z -show_state

Even easier, you could find all instances that were created using the latest image from my_last_image (remembering that -clone_master causes an implicit image snapshot of the master):

my_insts -image `my_last_image` -show_state
i-0d0b236507c47ff02	running
i-0aa0531f3b57fc14a	running
i-01f1e5135fa9a2c0a	running

For each running instance, your PC-side script will typically want to copy some results_file or /var/log/cloud-init-output.log from the instance to the current directory on your PC, and then delete the instance, assuming the instance is done according to the results_file:

fm_inst i-nnn /tmp/results_file .
fm_inst i-nnn /var/log/cloud-init-output.log .

Make sure the work is done, then:

delete_inst i-nnn

Or gather up multiple names and use this one:

delete_inst i-nnn i-ooo i-ppp ...

Note that, by default, delete_inst will not delete the master_inst if you pass no argument.

For the example.sh script earlier, we could have done something as simple as the following to see the final result printed to stdout, which goes in /var/log/cloud-init-output.log on the instance:

on_inst i-nnn cat /var/log/cloud-init-output.log

That would print out something like the following. Our "harvesting" script on the PC will grep that and see that the work has been finished based on the "PASS" on the 2nd-to-last line, so it's ok to delete the instance:

...
Cloud-init v. 18.2-72.amzn2.0.6 running 'modules:final' at Mon, 11 Feb 2019 23:12:33 +0000. Up 51.93 seconds.
user-data: #!/bin/bash          # ec2-metadata -d
echo "script: `ec2-metadata -d`"
echo "launch id: `ec2-metadata -l`"
echo "PASS"
ami-launch-index: 2             # ec2-metadata -l
PASS
Cloud-init v. 18.2-72.amzn2.0.6 finished at Mon, 11 Feb 2019 23:12:33 +0000. Datasource DataSourceEc2.  Up 52.25 seconds

The work is done in this case, so our harvesting script might copy some files off the instance using fm_inst, then will delete the instance.
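
Putting that together, a PC-side harvesting loop might look like this sketch (it assumes my_insts -image with no -show option prints just the i-nnn ids, and that "PASS" on stdout marks completion):

for i in $(my_insts -image $(my_last_image)); do
    fm_inst $i /var/log/cloud-init-output.log ./$i.log
    if grep -q PASS ./$i.log; then
        delete_inst $i                  # this one is finished; terminate it
    fi
done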

Once all instances have been harvested and deleted, they should all be in the shutting-down or terminated state. After about 15 minutes, terminated instances will drop off the my_insts list.

If you think the instances are not behaving correctly and want to stop them all, you can use:

stop_inst `my_insts`

If you want to give up completely and delete all instances except the master instance, you can use:

delete_inst `my_insts -except_master`

A Canonical Way to Launch and Harvest Results

This section discusses a convenient script to canonically launch an application on multiple instances and harvest the results. It sits on top of the above scripts and ties everything together. But you must first follow some simple rules.

The caller must first create a bundle.tar.gz with a ./build.sh script and a ./run.sh script. Then to get one instance started, type:

launch

To get 3 instances started, type:

launch -c 3

And there are arguments that can be used to change the name of the bundle (-b bundle_name) and other things.

The following arguments can also be passed into launch and will be passed on to create_insts without modification: -type, -key, -zone, -spot, -spot_type.

The launch script will do the following work:

The bundle.tar.gz is copied to the master_inst under a fresh ./run directory, the bundle is untarred there, and then the ./build.sh script is executed on the master_inst to do any one-time build. We don't want to go through the source build on each instance.

Then one or more instances are started as clones of the master_inst and the remote run.sh script is executed - as ec2-user - within the ./run directory and its stdout/stderr are redirected to ./results.out. As a convenience to your run.sh script, a file called index.out in the ./run directory is created with just the launch index id from the 'ec2-metadata -l' command output. The run.sh script, if successful, must 1) create a results.tar.gz, and finally 2) write "PASS" to stdout.
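
A minimal run.sh sketch that follows those rules (my_app is a hypothetical binary produced by build.sh):

#!/bin/bash
idx=$(cat index.out)                    # launch index, provided by the launch machinery
./my_app $idx > app.out 2>&1            # this instance's slice of the work
tar cvfz results.tar.gz app.out         # rule 1: package the results
echo "PASS"                             # rule 2: signal success on stdout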

After executing your run.sh script, the outer script (run_run_sh in this directory) will execute the "shutdown" command to get the instance into the "stopped" state.

After starting the instances, the launch script will wait for all of them to finish (i.e., be in the stopped state) and will copy each run/results.out to ./results[0,1,2,...].out and run/results.tar.gz to ./results[0,1,2,...].tar.gz on the current PC, and delete instances once it's copied stuff from them. The harvesting does not currently look for "PASS" in the results.out file, but it may do so in the future and give an early report of failing instances.

Once the launch script ends, the user script can unpack all of the results[0,1,2,...].tar.gz and combine the results in an application-specific way on this PC side. The user script can obviously also interrogate results[0,1,2,...].out.

Trivial Example

Under eg/launch there's a trivial example that launches 3 instances and harvests the results using launch. The outer script is doit.launch. It bundles a simple main.cpp program that computes the factorial of the instance index and writes it to results{i}.out. There is no results{i}.tar.gz, which is optional with launch. After all the results are harvested by launch, doit.launch checks to make sure it finds 'PASS' in each results{i}.out file and that the computed factorial is correct.

doit.launch

In a real example, your doit.launch script would instead combine results in some application-specific way, but it would still make sure that 'PASS' is found because launch does not currently check for that.

Future Work

  • show how to run a Linux desktop in the cloud and VNC to it through an SSH tunnel
  • show how to run a Windows desktop in the cloud and RDP to it
  • show how to run a Mac desktop in the cloud and Apple Remote Desktop to it
  • morph these scripts to support other cloud service providers (CSPs) - hide any AWS-isms in the process
  • possibly support containers (e.g., Docker), but not clear these are better than the existing scheme with the master instance

Bob Alfieri
Chapel Hill, NC
