Jamie Kinney, Principal Product Manager, AWS Batch ([email protected])
March 27, 2017
Introducing AWS Batch: a highly efficient, dynamically scaled batch computing service
• USD $8.86 million ($7.9 million plus $1 million for disk)
CRAY-1 on display in the hallways of the EPFL in Lausanne. https://commons.wikimedia.org/wiki/File:Cray_1_IMG_9126.jpg
In 2007, the New York Times processed 130 years of archives in 36 hours: 11 million articles and 4 TB of data.
AWS services used: Amazon S3, SQS, EC2, and EMR
Total cost (in 2007): $890 ($240 compute + $650 storage)
http://open.blogs.nytimes.com/2007/11/01/self-service-prorated-super-computing-fun/
Fully managed: no servers to manage. AWS Batch provisions, manages, and scales your infrastructure.
Integrated with AWS: natively integrated with the AWS platform, AWS Batch jobs can easily and securely interact with services such as Amazon S3, DynamoDB, and Rekognition.
Cost-optimized resource provisioning: AWS Batch automatically provisions compute resources tailored to the needs of your jobs, using Amazon EC2 and EC2 Spot.
Job definitions specify how jobs are to be run. While each job must reference a job definition, many parameters can be overridden at submission time. Some of the attributes specified in a job definition:
• IAM role associated with the job
• vCPU and memory requirements
• Mount points
• Container properties
• Environment variables
$ aws batch register-job-definition --job-definition-name gatk --container-properties ...
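As a sketch, a fuller register-job-definition call might look like the following; the container image, role ARN, resource values, and the Ref::inputFile placeholder are illustrative assumptions, not part of the original deck:
$ aws batch register-job-definition \
    --job-definition-name gatk \
    --type container \
    --container-properties '{
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/gatk:latest",
        "vcpus": 4,
        "memory": 8192,
        "jobRoleArn": "arn:aws:iam::123456789012:role/batch-job-role",
        "environment": [{"name": "REFERENCE_GENOME", "value": "GRCh38"}],
        "command": ["gatk", "HaplotypeCaller", "Ref::inputFile"]
    }'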
Jobs are submitted to AWS Batch as containerized applications running on Amazon EC2. Containerized jobs can reference a container image, command, and parameters, or users can simply provide a .zip containing their application and we will run it on a default Amazon Linux container.
$ aws batch submit-job --job-name variant-calling --job-definition gatk --job-queue genomics
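Job-definition defaults can be overridden at submission time. A minimal sketch, assuming the illustrative gatk definition above uses a Ref::inputFile placeholder and a hypothetical S3 bucket:
$ aws batch submit-job \
    --job-name variant-calling-sample1 \
    --job-definition gatk \
    --job-queue genomics \
    --parameters inputFile=s3://my-bucket/sample1.bam \
    --container-overrides '{"vcpus": 8, "memory": 16384}'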
Many batch workloads consist of a large number of independent "simple jobs." Soon, we will add support for "array jobs" that run many copies of an application against an array of elements. Array jobs are an efficient way to run:
• Parametric sweeps
• Monte Carlo simulations
• Processing of a large collection of objects
These use cases are still possible today: simply submit more jobs (see the loop sketch below).
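Until array jobs arrive, a parametric sweep can be expressed as a plain shell loop over submit-job; the parameter name, bucket, and queue below are assumptions carried over from the earlier examples:
$ for i in $(seq 1 100); do
    aws batch submit-job --job-name sweep-$i --job-definition gatk \
        --job-queue genomics --parameters inputFile=s3://my-bucket/shard-$i.bam
  done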
Jobs can depend on the successful completion of other jobs or specific elements of an array job. Use your preferred workflow engine and language to submit jobs: flow-based systems simply submit jobs serially, while DAG-based systems submit many jobs at once, identifying inter-job dependencies.
$ aws batch submit-job --depends-on jobId=606b3ad1-aa31-48d8-92ec-f154bfc8215f ...
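A minimal two-step chain as a sketch: capture the first job's ID with the CLI's --query option, then pass it to --depends-on. The job names are hypothetical; the definition and queue reuse earlier examples:
$ JOB1=$(aws batch submit-job --job-name align --job-definition gatk \
      --job-queue genomics --query jobId --output text)
$ aws batch submit-job --job-name call-variants --job-definition gatk \
      --job-queue genomics --depends-on jobId=$JOB1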
Jobs are submitted to a job queue, where they reside until they are able to be scheduled onto a compute resource. Information related to completed jobs persists in the queue for 24 hours.
$ aws batch create-job-queue --job-queue-name genomics --priority 500 --compute-environment-order ...
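A complete version of that call might look like the following sketch; genomics-ce is a placeholder compute environment name:
$ aws batch create-job-queue \
    --job-queue-name genomics \
    --priority 500 \
    --state ENABLED \
    --compute-environment-order order=1,computeEnvironment=genomics-ce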
Job queues are mapped to Compute Environments containing the EC2 instances used to run containerized batch jobs. Managed compute environments enable you to describe your business requirements (instance types, min/max/desired vCPUs, and EC2 Spot bid as a % of On-Demand) and we launch and scale resources on your behalf. You can choose specific instance types (e.g. c4.8xlarge), instance families (e.g. C4, M4, R3), or simply choose "optimal" and AWS Batch will launch appropriately sized instances from our more modern instance families.
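A sketch of creating a managed, Spot-based compute environment; the subnet, security group, and role ARNs are placeholders you would replace with your own:
$ aws batch create-compute-environment \
    --compute-environment-name genomics-ce \
    --type MANAGED \
    --state ENABLED \
    --compute-resources '{
        "type": "SPOT",
        "bidPercentage": 40,
        "minvCpus": 0,
        "maxvCpus": 256,
        "desiredvCpus": 0,
        "instanceTypes": ["optimal"],
        "subnets": ["subnet-12345678"],
        "securityGroupIds": ["sg-12345678"],
        "instanceRole": "ecsInstanceRole",
        "spotIamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-role"
    }' \
    --service-role arn:aws:iam::123456789012:role/AWSBatchServiceRole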
Alternatively, you can launch and manage your own compute resources within an Unmanaged compute environment. Your instances need to include the ECS agent and run supported versions of Linux and Docker. AWS Batch will then create an Amazon ECS cluster which can accept the instances you launch. Jobs can be scheduled to your Compute Environment as soon as your instances are healthy and register with the ECS agent.
$ aws batch create-compute-environment --compute-environment-name unmanagedce --type UNMANAGED ...
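After creating an unmanaged environment, you can look up the ECS cluster that AWS Batch created for it and point your instances at that cluster; a sketch:
$ aws batch describe-compute-environments \
    --compute-environments unmanagedce \
    --query 'computeEnvironments[0].ecsClusterArn' --output text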
The AWS Batch scheduler evaluates when, where, and how to run jobs that have been submitted to a job queue. Jobs run in approximately the order in which they are submitted, as long as all dependencies on other jobs have been met.
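Not in the original deck, but useful for watching the scheduler work through a queue: list-jobs takes a queue name and an optional status filter, e.g. to see what is still waiting:
$ aws batch list-jobs --job-queue genomics --job-status RUNNABLE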
CancelJob: Marks jobs that have not yet progressed to STARTING as FAILED.
TerminateJob: Cancels jobs that are currently waiting in the queue; stops jobs that are in a STARTING or RUNNING state and transitions them to FAILED.
Both require a "reason", which is viewable via DescribeJobs.
$ aws batch cancel-job --reason "Submitted to wrong queue" --job-id 8a767ac8-e28a-4c97-875b-e5c0bcf49eb8
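A sketch of terminating a running job and then retrieving the stored reason via DescribeJobs (the job ID reuses the one from the slide):
$ aws batch terminate-job --job-id 8a767ac8-e28a-4c97-875b-e5c0bcf49eb8 \
    --reason "Runaway job"
$ aws batch describe-jobs --jobs 8a767ac8-e28a-4c97-875b-e5c0bcf49eb8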
AWS Batch is generally available in the US East (Northern Virginia) Region.
• Support for automated job retries and AWS Lambda blueprints for AWS Batch launched this week
• Customer-provided AMIs for Managed CEs coming in April
• Regional expansion, array jobs, and jobs executed as AWS Lambda functions arriving in Q2
• Multi-node parallel jobs, consumable resources, and Windows jobs arriving in late 2017