
AWS Batch

Overview

Requirements

This guide assumes you have an existing Amazon Web Services (AWS) account. Sign up for a free AWS account here.

There are two ways to create a Compute Environment for AWS Batch with Tower:

  1. Tower Forge: This option automatically manages the AWS Batch resources in your AWS account.

  2. Manual: This option allows you to create a compute environment using existing AWS Batch resources.

If you don't have an AWS Batch environment fully set up yet, we suggest following the Tower Forge guide.

If you have been provided an AWS Batch queue from your account administrator, or if you have set up AWS Batch previously, please follow the Manual guide.

Tower Forge

Warning

Follow these instructions only if you have not pre-configured an AWS Batch environment. Note that this option automatically creates resources in your AWS account, for which AWS may charge you.

Tower Forge automates the configuration of an AWS Batch compute environment and queues required for the deployment of Nextflow pipelines.

IAM User

To use the Tower Forge feature, Tower requires an Identity and Access Management (IAM) user with the permissions listed in the following policy file. These authorizations are more permissive than those required to only launch a pipeline, since Tower needs to manage AWS resources on your behalf.

The steps below guide you through creating a new IAM user for Tower and attaching the required policy to the newly created user.

  1. Open the AWS IAM console.

  2. Select Users in the left-hand menu and select Add User at the top.

  3. Enter a name for your user (e.g. tower) and select the Programmatic access type.

  4. Select Next: Permissions.

  5. Select Next: Tags, then Next: Review, then Create User.

    This user has no permissions

    For the time being, you can ignore this warning. We will address it by attaching an IAM policy in a later step.

  6. Save the Access key ID and Secret access key in a secure location as we will use these in the next section.

  7. Once you have saved the keys, select Close.

  8. Back in the users table, select the newly created user and select + Add inline policy to add user permissions.

  9. Copy the content of the policy linked above into the JSON tab.

  10. Select Review policy, then name your policy (e.g. tower-forge-policy), and confirm the operation by selecting Create policy.

    Which permissions are required?

    This policy includes the minimal permissions required to allow the user to submit jobs to AWS Batch, gather the container execution metadata, read CloudWatch logs and access data from the S3 bucket in your AWS account in read-only mode.

S3 Bucket

S3 stands for "Simple Storage Service" and is a type of object storage. To access files and store the results for our pipelines, we have to create an S3 Bucket and grant our new Tower IAM user access to it.

  1. Navigate to the S3 service.

  2. Select Create New Bucket.

  3. Enter a unique name for your Bucket and select a region.

    Which AWS region should I use?

    The bucket should be in the same region as the compute environment that we create in the next section. Users typically select the region closest to their physical location, but Tower Forge supports creating resources in any available AWS region.

  4. Select the default options for Configure options.

  5. Select the default options for Set permissions.

  6. Review and select Create bucket.

    S3 Storage Costs

    S3 is used by Nextflow for the storage of intermediate files. For production pipelines, this can amount to a large quantity of data. To reduce costs, when configuring a bucket, users should consider using a retention policy, such as automatically deleting intermediate files after 30 days. For more information on this process, see here.
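As an aside, a 30-day retention rule of the kind mentioned above can be expressed as an S3 lifecycle configuration. The sketch below shows it as a Python dict in the shape accepted by boto3's put_bucket_lifecycle_configuration; the rule ID and the work/ prefix are assumptions you should adapt to your own bucket layout.

```python
# Illustrative only: a lifecycle rule that deletes objects under the "work/"
# prefix 30 days after creation. The rule ID and prefix are assumptions.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "expire-intermediate-files",  # hypothetical rule name
            "Filter": {"Prefix": "work/"},      # limit to intermediate files
            "Status": "Enabled",
            "Expiration": {"Days": 30},         # delete 30 days after creation
        }
    ]
}
```

A dict of this shape can be passed to boto3's S3 client as the LifecycleConfiguration argument, or saved as JSON and applied through the AWS console.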

Compute Environment


Once the AWS resources are set up, we can add a new AWS Batch environment in Tower. To create a new compute environment:

  1. In a workspace, select Compute Environments and then New Environment.

  2. Enter a descriptive name for this environment, e.g. "AWS Batch Spot (eu-west-1)".

  3. Select Amazon Batch as the target platform.

  4. From the Credentials drop-down, select existing AWS credentials, or add new credentials by selecting the + button. If you use existing credentials, skip to step 7.

  5. Enter a name, e.g. "AWS Credentials".

  6. Add the Access key and Secret key. These are the keys you saved previously when you created the AWS IAM user.

    Multiple credentials

    You can create multiple credentials in your Tower environment.

    Container registry credentials

    From version 22.3, Tower supports the use of credentials for container registry services. These credentials can be created from the Credentials tab.

  7. Select a Region, for example "eu-west-1 - Europe (Ireland)".

  8. For the Pipeline work directory, enter the S3 bucket we created in the previous section, e.g. s3://unique-tower-bucket.

    Warning

    The bucket must be in the same region selected in the previous step.

  9. Set the Config mode to Batch Forge.

  10. Select a Provisioning model. In most cases this will be Spot.

    Spot or On-demand?

    You can choose to create a compute environment that launches either Spot or On-demand instances. Spot instances can cost as little as 20% of on-demand instances, and with Nextflow's ability to automatically relaunch failed tasks, Spot is almost always the recommended provisioning model.

    Note, however, that when choosing Spot instances, Tower will also create a dedicated queue for running the main Nextflow job using a single on-demand instance in order to prevent any execution interruptions.

  11. Enter the Max CPUs, e.g. 64. This is the maximum number of combined CPUs (the sum of CPUs across all instances) that AWS Batch will provision at any time.

  12. Select EBS Auto scale to allow the EC2 virtual machines to dynamically expand the amount of available disk space during task execution.

  13. With the optional Enable Fusion mounts feature enabled, S3 buckets specified in Pipeline work directory and Allowed S3 Buckets will be mounted as file system volumes in the EC2 instances carrying out the Batch job execution. These buckets will be accessible at /fusion/s3/<bucket-name>. For example, if the bucket name is s3://imputation-gp2, the Nextflow pipeline will access it using the file system path /fusion/s3/imputation-gp2.

    Tip

    You are not required to modify your pipeline or files to take advantage of this feature. Nextflow is able to recognise these buckets automatically and will replace any reference to files prefixed with s3:// with the corresponding Fusion mount paths.

  14. Select Enable GPUs if you intend to run GPU-dependent workflows in the compute environment. Note that:

    • The Enable GPUs setting does not cause GPU instances to deploy in your compute environment. You must still specify GPU-enabled instance types in the Advanced options > Instance types field.
    • The Enable GPUs setting causes Forge to specify the most current AWS-recommended GPU-optimized ECS AMI as the EC2 fleet AMI when creating the compute environment.
    • This setting can be overridden by AMI Id in the advanced options.
  15. Enter any additional Allowed S3 buckets that your workflows require to read input data or write output data. The Pipeline work directory bucket above is added by default to the list of Allowed S3 buckets.

  16. To use EFS, you can either select Use existing EFS file system and specify an existing EFS instance or select Create new EFS file system to create one automatically.

  17. To use FSx, set the FSx mount path to /fsx and set the Pipeline work directory to /fsx/work.

  18. Select Dispose resources if you want Tower to automatically delete these AWS resources if you delete the compute environment in Tower.

  19. You can use the Environment variables option to specify custom environment variables for the Head job and/or Compute jobs.

  20. Configure any advanced options described below, as needed.

  21. Select Create to finalize the compute environment setup. It will take a few seconds for all the resources to be created, and then you will be ready to launch pipelines.
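As a side note, the Fusion mount convention from step 13 (s3://<bucket-name> becoming /fusion/s3/<bucket-name>) can be sketched as a small helper. This is an illustration of the documented path mapping, not Tower's actual implementation:

```python
def s3_to_fusion_path(s3_uri: str) -> str:
    """Map an s3:// URI to its Fusion mount path, per the convention above.

    Illustration only, not Tower's implementation: s3://<bucket>/<key>
    becomes /fusion/s3/<bucket>/<key>.
    """
    prefix = "s3://"
    if not s3_uri.startswith(prefix):
        raise ValueError(f"not an S3 URI: {s3_uri}")
    return "/fusion/s3/" + s3_uri[len(prefix):]
```

For example, s3_to_fusion_path("s3://imputation-gp2") returns /fusion/s3/imputation-gp2, matching the example in step 13.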

Jump to the documentation for Launching Pipelines.

Advanced options

  • You can specify the Allocation strategy and indicate the preferred Instance types to AWS Batch.

  • You can configure your custom networking setup using the VPC, Subnets and Security groups fields.

  • You can specify a custom AMI Id.

    Requirements for custom AMI

    To use a custom AMI, make sure the AMI is based on an Amazon Linux 2 ECS-optimized image that meets the Batch requirements. To learn more about approved versions of the Amazon ECS-optimized AMI, see this AWS guide.

    GPU-enabled AMI

    If a custom AMI is specified and the Enable GPU option is also selected, the custom AMI will be used instead of the AWS-recommended GPU-optimized AMI.

  • If you need to debug the EC2 instance provisioned by AWS Batch, specify a Key pair to log in to the instance via SSH.

  • You can set Min CPUs to be greater than 0, in which case some EC2 instances will remain active. An advantage of this is that pipeline executions will initialize faster.

    Increasing Min CPUs may increase AWS costs

    Keeping EC2 instances running may result in additional costs. You will be billed for these running EC2 instances regardless of whether you are executing pipelines or not.

  • You can use Head Job CPUs and Head Job Memory to specify the hardware resources allocated for the Head Job.

  • You can use Head Job role and Compute Job role to grant fine-grained IAM permissions to the Head Job and Compute Jobs.

  • If you're using Spot instances, you can also specify the Cost percentage, which is the maximum allowed price of a Spot instance as a percentage of the On-Demand price for that instance type. Spot instances will not be launched while the current Spot price is above the specified percentage of the On-Demand price.

  • You can use AWS CLI tool path to specify the location of the aws CLI.
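The Cost percentage rule above amounts to a simple threshold check, sketched here for illustration (the inclusive boundary is an assumption; AWS applies the actual logic):

```python
def spot_launch_allowed(spot_price: float, on_demand_price: float,
                        cost_percentage: float) -> bool:
    """Illustrate the Cost percentage rule: Spot instances launch only while
    the current Spot price is within the given percentage of the On-Demand
    price. A sketch of the documented behaviour, not AWS Batch's code.
    """
    return spot_price <= on_demand_price * (cost_percentage / 100.0)
```

For example, with an On-Demand price of $0.10/hour and a Cost percentage of 50, a Spot price of $0.035 is allowed, while $0.06 is not.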

Manual

This section is for users with a pre-configured AWS environment. You will need a Batch queue, a Batch compute environment, an IAM user and an S3 bucket already set up.

To enable Tower within your existing AWS configuration, you need to have an IAM user with the following IAM permissions:

  • AmazonS3ReadOnlyAccess
  • AmazonEC2ContainerRegistryReadOnly
  • CloudWatchLogsReadOnlyAccess
  • A custom policy to grant the ability to submit and control Batch jobs.
  • Write access to any S3 bucket used by pipelines, granted via the following policy template (see Access to S3 Buckets below for details).

With these permissions set, we can add a new AWS Batch compute environment in Tower.
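For illustration, the custom policy granting the ability to submit and control Batch jobs might follow the skeleton below, written as a Python dict. The exact action list is an assumption; treat the policy file referenced in this guide as authoritative.

```python
# Hypothetical skeleton of an inline IAM policy for submitting and
# controlling AWS Batch jobs. The action list is an assumption; use the
# policy file referenced in this guide as the authoritative source.
batch_job_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "batch:DescribeJobQueues",
                "batch:DescribeJobDefinitions",
                "batch:DescribeJobs",
                "batch:ListJobs",
                "batch:RegisterJobDefinition",
                "batch:SubmitJob",
                "batch:CancelJob",
                "batch:TerminateJob",
            ],
            "Resource": "*",
        }
    ],
}
```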

Access to S3 Buckets

Tower can use S3 to store intermediate and output data generated by pipelines. We need to create a policy for our Tower IAM user that grants access to specific buckets.

  1. Go to the Users table in the IAM console.

  2. Select the IAM user.

  3. Select Add inline policy.

  4. Copy the contents of this policy into the JSON tab. Replace YOUR-BUCKET-NAME (lines 10 and 21) with your bucket name.

  5. Name your policy and select Create policy.
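Since the linked policy template repeats the YOUR-BUCKET-NAME placeholder, the substitution in step 4 can be scripted. The template fragment below is a hypothetical stand-in shaped like a typical S3 access policy, not the linked file itself:

```python
import json

# Hypothetical fragment shaped like the linked policy template; the real
# template is the linked policy file, with YOUR-BUCKET-NAME as placeholder.
policy_template = """{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::YOUR-BUCKET-NAME"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::YOUR-BUCKET-NAME/*"]
    }
  ]
}"""

def fill_bucket_name(template: str, bucket: str) -> str:
    """Replace every YOUR-BUCKET-NAME placeholder and validate the JSON."""
    policy = template.replace("YOUR-BUCKET-NAME", bucket)
    json.loads(policy)  # raises ValueError if the result is not valid JSON
    return policy
```

Calling fill_bucket_name(policy_template, "unique-tower-bucket") yields a policy scoped to that bucket, ready to paste into the JSON tab.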

Compute Environment

To create a new compute environment for AWS Batch (without Forge):

  1. In a workspace, select Compute Environments and then New Environment.

  2. Enter a descriptive name for this environment, e.g. "AWS Batch Manual (eu-west-1)".

  3. Select Amazon Batch as the target platform.

  4. Add new credentials by selecting the + button.

  5. Enter a name for the credentials, e.g. "AWS Credentials".

  6. Enter the Access key and Secret key for your IAM user.

    Multiple credentials

    You can create multiple credentials in your Tower environment. See the Credentials section.

  7. Select a Region, e.g. "eu-west-1 - Europe (Ireland)".

  8. Enter an S3 bucket path for the Pipeline work directory, for example s3://tower-bucket.

  9. Set the Config mode to Manual.

  10. Enter the Head queue, which is the name of the AWS Batch queue that the Nextflow driver job will run in.

  11. Enter the Compute queue, which is the name of the AWS Batch queue that tasks will be submitted to.

  12. You can use the Environment variables option to specify custom environment variables for the Head job and/or Compute jobs.

  13. Configure any advanced options described below, as needed.

  14. Select Create to finalize the compute environment setup.

Jump to the documentation for Launching Pipelines.

Advanced options

  • You can use Head Job CPUs and Head Job Memory to specify the hardware resources allocated for the Head Job.

  • You can use Head Job role and Compute Job role to grant fine-grained IAM permissions to the Head Job and Compute Jobs.

  • You can use AWS CLI tool path to specify the location of the aws CLI.
