• Efficient resource utilization (right-size your compute for small workloads)
• Simpler parity among environments (configuration is still a challenging difference)
• Well suited to service decoupling
• Security via isolation (with many complex exceptions)
• Standard runtime environment (some languages/platforms are easier to contain than others)
• Build an image using docker build
• Simple DSL to describe the runtime environment of an application
• RUN commands, set ENV variables, EXPOSE ports, create an ENTRYPOINT, etc.
• Secrets are cumbersome
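A minimal Dockerfile sketch using these directives (image name taken from the hierarchy later in this deck; paths and options are hypothetical):

```dockerfile
FROM java:7u71-jre
ENV JAVA_OPTS="-Xmx256m"
RUN apt-get update && apt-get install -y curl
EXPOSE 8080
ADD target/myapp.jar /opt/myapp/myapp.jar
ENTRYPOINT ["java", "-jar", "/opt/myapp/myapp.jar"]
```

Note that anything passed in via ENV or ADD is baked into the image layers, which is one reason secrets are cumbersome to handle this way.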
1. Start a container with access to config and scripts from the host
2. Set up the container as desired using scripts, exit cleanly
3. Use docker commit to save the state as an image
• Enables more complex image composition
• For example, use setup scripts via a volume on the host
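The run-and-commit flow above might look like this (script path and image names are hypothetical):

```shell
# 1-2. Run a container with host setup scripts mounted as a volume,
#      provision it, and exit cleanly
docker run -it -v /host/setup:/setup ubuntu:14.04 /bin/bash /setup/provision.sh
# 3. Save the state of the last-run container as an image
docker commit $(docker ps -lq) myorg/myapp:base
```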
• Build runtime images from a customized base
• Allows for faster builds and better reuse
1. Choose a base image to start from
2. Localize the base
3. Build a runtime image to execute the application
• Add patches, dependencies, libraries, related software
• Build instructions (Dockerfile, run & commit) can live in the same or a separate repository
• Must rebuild for updates: patches, new library versions, etc.
(Diagram: image hierarchy java:7u71-jre → MyOrg/tomcat → MyOrg/MyApp:base)
Build a runtime image to execute:
• Set defaults for environment variables
• Specify the user to run the application
• Configure the command to run when the container starts
• Set up network ports to expose from the host
• Add application code (build artifact)
(Diagram: image hierarchy java:7u71-jre → MyOrg/tomcat → MyOrg/MyApp:base → MyOrg/MyApp:release1)
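A sketch of the runtime-image Dockerfile, building on the localized base from the hierarchy above (user, paths, and options are hypothetical):

```dockerfile
# Runtime image built from the localized base
FROM MyOrg/MyApp:base
# Defaults for environment variables
ENV CATALINA_OPTS="-Xmx512m"
# Run as a non-root application user
USER tomcat
# Port to expose from the host
EXPOSE 8080
# Application code (build artifact)
ADD target/myapp.war /opt/tomcat/webapps/ROOT.war
# Command to run when the container starts
CMD ["/opt/tomcat/bin/catalina.sh", "run"]
```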
• Push images to a repository: Docker Hub, Quay.io
• Run your own: S3-backed local registry running per AWS instance
• Or export/import via docker save & docker load
• Your tarball, your problem
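The save/load path, sketched with hypothetical image and file names:

```shell
# Export an image to a tarball (no registry involved)
docker save myorg/myapp:release1 > myapp-release1.tar
# Ship the tarball yourself (scp, S3, ...), then import it on the target host
docker load < myapp-release1.tar
```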
Logging for operator/administrative use:
• Container stdout is captured by Docker
• A logspout container can forward to a syslog endpoint
• Docker version 1.6 has native syslog support
• Can mount the host syslog socket into the container, or run a container with just rsyslog/syslog-ng
• Locally written log files must be collected separately
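Two of these options sketched as commands (image name and syslog endpoint are hypothetical):

```shell
# Mount the host's syslog socket so the app's syslog() calls
# reach the host's syslog daemon directly
docker run -d -v /dev/log:/dev/log myorg/myapp:release1

# Or run logspout, which reads other containers' stdout via the
# docker socket and forwards it to a remote syslog endpoint
docker run -d -v /var/run/docker.sock:/var/run/docker.sock \
    gliderlabs/logspout syslog://logs.example.com:514
```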
How many containers per host?
• Using a container scheduler? Many containers per host
• Running immutable AWS instances? One container per host
• Really, just one? One service per host, but a service may require multiple containers
• Don't trust containers to contain? One container per host
• Build an image for each service
• Create a tag for each release
• A useful tag is the git short hash; alternatively, sync with a git tag
• Use latest for autoscaling
• Use base to inherit a base image
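A tagging sketch along these lines (image name is hypothetical):

```shell
# Tag the release with the git short hash
TAG=$(git rev-parse --short HEAD)
docker build -t myorg/myapp:$TAG .
# Point latest at this release so autoscaled instances pull it
docker tag myorg/myapp:$TAG myorg/myapp:latest
docker push myorg/myapp:$TAG && docker push myorg/myapp:latest
```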
• Create, but do not run, containers with a volume
• Use --volumes-from to mount the data volumes into runtime containers
• Stop, then start the new container using the same --volumes-from
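A data-volume-container sketch (container and image names are hypothetical):

```shell
# Create (but don't run) a container that owns the data volume
docker create -v /data --name myapp-data busybox
# Run the application with the data volume mounted
docker run -d --name myapp-1 --volumes-from myapp-data myorg/myapp:release1
# Upgrade: stop the old container, start the new one against the same volumes
docker stop myapp-1
docker run -d --name myapp-2 --volumes-from myapp-data myorg/myapp:release2
```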
• If the defaults are too simple, use a wrapper
• Run a container with a volume from the host (-v /host/path:/container/path)
• Have the config in /host/path, then copy it into place at runtime

```bash
#!/bin/bash
/bin/cp /container/path/myconfig /myapp/config.json
/myapp/run
```
• …: supported in most distributions; past versions known to be buggy; not as performant as aufs, overlay
• device-mapper: default when other support is not available; works with Red Hat-like OSes; buggy (in my experience); much slower
• aufs: the first storage driver; easy to run on Ubuntu; highly performant; will never be in the Linux kernel; more difficult to install for some distros
• overlay: included in kernel 3.18+; very fast; brand new; not included by default in any distro
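To check which driver is in use and pick one explicitly (Docker 1.6-era daemon syntax):

```shell
# Check which storage driver the daemon is using
docker info | grep 'Storage Driver'
# Start the daemon with an explicit driver
docker -d --storage-driver=overlay
```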
daemon installed and ECS agent running
Container Instance: An EC2 instance registered to a cluster
Task Definition: A JSON description of one or more containers to run as a group
Task: An invocation of a task definition, e.g. a set of containers running on the cluster
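A minimal task definition sketch (family, image, and values are hypothetical; fields follow the ECS task definition schema):

```json
{
  "family": "myapp",
  "containerDefinitions": [
    {
      "name": "myapp",
      "image": "myorg/myapp:release1",
      "cpu": 256,
      "memory": 512,
      "portMappings": [{ "containerPort": 8080, "hostPort": 8080 }],
      "environment": [{ "name": "JAVA_OPTS", "value": "-Xmx256m" }]
    }
  ]
}
```

Note the environment block: this is where sensitive values end up exposed in the task definition, a shortcoming noted later.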
The ECS Agent
• Part of the reference AMI; runs as a container
• Image imported via docker load, not pulled from a registry
• Based on the official scratch image
• Polls the ECS backend via websockets for actions to take on the local docker daemon
policies, logs)
• No image management
• Lacking integration with other AWS services
• Compute (EC2) still managed by the user
• Sensitive env vars exposed within task definitions
• No examples of custom schedulers