Application Automation & Containerization With Habitat - SCaLE 16x

Habitat is a simple, flexible way to build, deploy, and manage applications. Build applications of any flavor, from microservices to traditional applications. Deploy applications in any operational environment from bare metal to containers. Habitat provides consistent, repeatable, auditable applications that lower operational complexity and simplify development workflows.

This hands-on workshop is for anyone involved in building, deploying, or managing applications. The workshop will give you experience running and building applications with Habitat and describe the application-first approach that Habitat provides. Find out more at habitat.sh.

Agenda

* Modern applications for modern application teams
* Introduction to Habitat
* Run a Habitat application
* Build a Habitat application

Learning Objectives

By the end of this workshop, participants will be able to:

* Describe the various components that make up Habitat’s build- and run-time environments.
* Describe the workflow for building, deploying, and managing applications with Habitat
* Build application packages using the Habitat Builder service
* Publish Habitat packages to the Habitat depot
* Promote Habitat packages through various Habitat depot channels
* Install packages using the Habitat command line interface
* Run services in the Habitat Supervisor
* Modify the configuration of a running service

Nathen Harvey

March 08, 2018

Transcript

  1. Shifts in Modern Application Architecture Demand an Application-Centric Operations Model. Infrastructure-centric operations: the unit of value is the server and the unit of work is configuration (infrastructure automation and compliance automation for the existing business). Application-centric operations: the unit of value is the app and the unit of work is deployment (application automation for the new business, built on cloud, containers, and microservices).

  2. What is Habitat? Application automation that enables modern application teams to build, deploy, and manage any application in any environment - from traditional to cloud native architectures. Delivering essential tools and services for cloud native operations.

  3. Set up the Habitat Studio (1 of 7) — hab setup. Habitat CLI Setup: Welcome to hab setup. Let's get started. Set up a default origin: Every package in Habitat belongs to an origin, which indicates the person or organization responsible for maintaining that package. Each origin also has a key used to cryptographically sign packages in that origin. Selecting a default origin tells package building operations such as 'hab pkg build' what key should be used to sign the packages produced. If you do not set a default origin now, you will have to tell package building commands each time what origin to use. For more information on origins and how they are used in building packages, please consult the docs at https://www.habitat.sh/docs/create-packages-build/ Set up a default origin? [Yes/no/quit] Yes

  4. Set up the Habitat Studio (2 of 7). Enter the name of your origin. If you plan to publish your packages publicly, we recommend that you select one that is not already in use on the Habitat build service found at https://bldr.habitat.sh/. Origins must begin with a lowercase letter or number. Allowed characters include lowercase letters, numbers, _, -. No more than 255 characters. Default origin name: [default: chef] chef

  5. Set up the Habitat Studio (3 of 7). Create origin key pair: It doesn't look like you have a signing key for the origin `chef'. Without it, you won't be able to build new packages successfully. You can either create a new signing key now, or, if you are building packages for an origin that already exists, ask the owner to give you the signing key. For more information on the use of origin keys, please consult the documentation at https://www.habitat.sh/docs/concepts-keys/#origin-keys Create an origin key for `chef'? [Yes/no/quit] yes

  6. Set up the Habitat Studio (4 of 7). Create origin key pair — same prompt as the previous step; answer yes to create an origin key for `chef'.

  7. Set up the Habitat Studio (5 of 7). GitHub Access Token: While you can build and run Habitat packages without sharing them on the public depot, doing so allows you to collaborate with the Habitat community. In addition, it is how you can perform continuous deployment with Habitat. The GitHub personal access token needs the user:email and read:org OAuth scopes. Habitat uses the information provided through these scopes for authentication and to determine features based on team membership. You can set this up at https://github.com/settings/tokens If you would like to share your packages on the depot, please enter your GitHub access token. Otherwise, just enter No. For more information on sharing packages on the depot, please read the documentation at https://www.habitat.sh/docs/share-packages-overview/ Set up a default GitHub access token? [Yes/no/quit] no

  8. Set up the Habitat Studio (6 of 7). Analytics: The `hab` command-line tool will optionally send anonymous usage data to Habitat's Google Analytics account. This is a strictly opt-in activity and no tracking will occur unless you respond affirmatively to the question below. We collect this data to help improve Habitat's user experience. For example, we would like to know the category of tasks users are performing, and which ones they are having trouble with (e.g. mistyping command line arguments). To see what kinds of data are sent and how they are anonymized, please read more about our analytics here: https://www.habitat.sh/docs/about-analytics/ Enable analytics? [Yes/no/quit] yes

  9. Set up the Habitat Studio (7 of 7). CLI Setup Complete: That's all for now. Thanks for using Habitat!

  10. Plan — ~/sample-node-app/habitat/plan.sh. Change pkg_origin=your_origin to your own origin (here, chef):

    pkg_name=sample-node-app
    pkg_origin=chef
    pkg_scaffolding="core/scaffolding-node"
    pkg_version="1.0.1"
    declare -A scaffolding_env
    # Define path to config file
    scaffolding_env[APP_CONFIG]="{{pkg.svc_config_path}}/config.json"

    If you don't like manually editing files: $ cp ~/new-plan.sh sample-node-app/habitat/plan.sh

  11. Enter the Habitat Studio — hab studio enter. Output: ∵ Missing package for core/hab-studio/0.54.0 » Installing core/hab-studio/0.54.0 from channel 'stable' ↓ Downloading core/hab-studio/0.54.0/20180221023422 3.17 MB / 3.17 MB - [===============================] 100.00 % 79.21 MB/s ☛ Verifying core/hab-studio/0.54.0/20180221023422 ✓ Installed core/hab-studio/0.54.0/20180221023422 ★ Install of core/hab-studio/0.54.0/20180221023422 complete with 1 new packages installed. hab-studio: Creating Studio at /hab/studios/home--chef--sample-node-app (default) ... --> To prevent a Supervisor from running automatically in your Studio, export 'HAB_STUDIO_SUP=false' before running 'hab studio enter'.

  12. Build the Application — build (inside the Studio). Output: Loading /src/habitat/plan.sh sample-node-app: Plan loaded sample-node-app: Validating plan metadata ... sample-node-app: Installed Path: /hab/pkgs/chef/sample-node-app/1.0.1/20180221155144 sample-node-app: Artifact: /src/results/chef-sample-node-app-1.0.1-20180221155144-x86_64-linux.hart sample-node-app: Build Report: /src/results/last_build.env sample-node-app: SHA256 Checksum: 811247255d0b53319f29f5090a72e186247fb000b8f8b362282a3761422a3cd3 sample-node-app: Blake2b Checksum: bde6652ff19cd6ec8c336e466b200127aa7c60987bc3ba86917064237df0c421 sample-node-app: I love it when a plan.sh comes together. sample-node-app: Build time: 0m39s

  13. Make Sure the Supervisor is Running — hab sup status. Output: No services loaded. Yay!! This is what we expect!

  14. Make Sure the Supervisor is Running — hab sup status. Output: The Supervisor is not running. [1]+ Done hab sup run $* > /hab/sup/default/sup.log 2>&1 (wd: /) (wd now: /hab/pkgs/chef/sample-node-app/1.0.1/20180308025017)

  15. Make Sure the Supervisor is Running — hab sup status. If you see the "The Supervisor is not running" output from the previous slide, exit and re-enter the Studio: # exit, then $ hab studio enter.

  16. Run the Application in the Studio — hab sup load chef/sample-node-app. Output: hab-sup(MN): The chef/sample-node-app service was successfully loaded

  17. Follow the Supervisor Log — sup-log. Output: --> Tailing the Habitat Supervisor's output (use 'Ctrl+c' to stop) hab-sup(MR): Starting chef/sample-node-app sample-node-app.default(UCW): Watching user.toml sample-node-app.default(HK): init, compiled to /hab/svc/sample-node-app/hooks/init sample-node-app.default(HK): Hooks compiled sample-node-app.default(SR): Hooks recompiled default(CF): Updated app_env.sh 9c88a9595def28f5914f08997b3cd3ba8d32f0e3eb2b4e4ea90264acd6acb66b default(CF): Updated config.json 3f22842e8d737bbb107d9ac19afba42642eccf68a06ddfbdba70507b23b8498a sample-node-app.default(SR): Configuration recompiled sample-node-app.default(SR): Initializing sample-node-app.default(SV): Starting service as user=hab, group=hab

  18. Configuration Options — hab sup config chef/sample-node-app. Output:

    message = "Hello, World!"
    node_env = "production"

    [app]
    port = 8000

  19. Write Configuration Options to a File — hab sup config chef/sample-node-app > config.toml

  20. Update the configuration — config.toml. Change the port under [app] from 8000 to 9000; message and node_env stay the same:

    message = "Hello, World!"
    node_env = "production"

    [app]
    port = 9000

  21. Update the configuration — config.toml. If you don't like manually editing files: # cp new-config.toml config.toml

  22. Managing with Habitat. A RESTful API allows external services to integrate with Habitat Supervisors. (Diagram: external services talking to a service and its Supervisor.)
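
    A quick way to see that API from the same host (a sketch; it assumes the default HTTP gateway port 9631 that shows up in the Supervisor log later in this deck, and the Supervisor's /services and /census endpoints):

      # Ask a running Supervisor what it knows, over its HTTP gateway
      curl http://localhost:9631/services   # services this Supervisor is running
      curl http://localhost:9631/census     # membership data gossiped around the ring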

  23. Applying Configuration — hab config apply --peer 127.0.0.1 sample-node-app.default 1 config.toml
    • peer - hostname or IP address of a running Supervisor
    • sample-node-app.default - the service group
    • 1 - the incarnation of this configuration
    • config.toml - the file with the new configuration
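
    A follow-up change has to carry a higher incarnation number or the ring will ignore it. A minimal sketch, reusing the same peer and service group ('vi' stands in for any editor):

      vi config.toml                                                             # make the next configuration change
      hab config apply --peer 127.0.0.1 sample-node-app.default 2 config.toml    # note the bumped incarnation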

  24. Apply the New Configuration — hab config apply --peer 127.0.0.1 sample-node-app.default 1 config.toml. Output: ∵ Missing package for core/hab-butterfly/0.54.0 » Installing core/hab-butterfly/0.54.0 from channel 'stable' ↓ Downloading core/hab-butterfly/0.54.0/20180221020301 ... ★ Install of core/hab-butterfly/0.54.0/20180221020301 complete with 1 new packages installed. » Applying configuration for sample-node-app.default incarnation 1 Ω Creating service configuration ✓ Verified this configuration is valid TOML ↑ Applying to peer 127.0.0.1:9638 ★ Applied configuration

  25. Export to Docker — hab pkg export docker results/chef-sample-node-app-*.hart. Output: ∵ Missing package for core/hab-pkg-export-docker/0.54.0 » Installing core/hab-pkg-export-docker/0.54.0 from channel 'stable' ↓ Downloading core/hab-pkg-export-docker/0.54.0/20180221020412 2.40 MB / 2.40 MB - [===============================] 100.00 % 89.49 MB/s ☛ Verifying core/hab-pkg-export-docker/0.54.0/20180221020412 ... Removing intermediate container 582430ddac0a ---> 0cb96cce166b Successfully built 0cb96cce166b Successfully tagged chef/sample-node-app:1.0.1-20180221155145 Successfully tagged chef/sample-node-app:1.0.1 Successfully tagged chef/sample-node-app:latest ☒ Deleting temporary files ★ Docker image 'chef/sample-node-app' created with tags: 1.0.1-20180221155145, 1.0.1, latest Ω Creating build report /src/results/last_docker_export.env

  26. Inspect the Local Docker Images — docker images. Output:

    REPOSITORY             TAG                    IMAGE ID       CREATED         SIZE
    chef/sample-node-app   1.0.1                  0cb96cce166b   4 minutes ago   207MB
    chef/sample-node-app   1.0.1-20180221155145   0cb96cce166b   4 minutes ago   207MB
    chef/sample-node-app   latest                 0cb96cce166b   4 minutes ago   207MB

  27. Run in a Docker Container — docker run -it -p 8000:8000 chef/sample-node-app (using one of the images listed above).

  28. Run in a Docker Container with Updated Configuration — docker run -e HAB_SAMPLE_NODE_APP="$(cat config.toml)" -p 9000:9000 -it chef/sample-node-app
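
    HAB_SAMPLE_NODE_APP just carries TOML, so a one-off override can also be passed inline instead of from a file (a sketch; only the port is overridden here):

      docker run -it -p 9000:9000 \
        -e HAB_SAMPLE_NODE_APP='[app]
      port = 9000' \
        chef/sample-node-app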

  29. Copy the Origin Key to the Proper Location — sudo cp ~/.hab/cache/keys/chef-*.pub /hab/cache/keys/

  30. Start the Application Locally — sudo hab sup start results/chef-sample-node-app-*.hart. Output: ∵ Missing package for core/hab-launcher » Installing core/hab-launcher from channel 'stable' ↓ Downloading core/hab-launcher/6925/20180202215752 4.78 MB / 4.78 MB / [===============================] 100.00 % 31.47 MB/s ... sample-node-app.default(SR): Configuration recompiled sample-node-app.default(SR): Initializing sample-node-app.default(SV): Starting service as user=hab, group=hab

  31. Deploying with Habitat. Habitat packages can be deployed to a variety of infrastructure platforms. (Diagram: one deployable artifact feeding service/Supervisor pairs on bare metal, containers, AMIs, and VMs.)

  32. Quick Aside - tmux
    • Used to multiplex several virtual consoles
    • Allows for multiple separate terminal sessions inside a single remote terminal session
    • tmux new -s <SESSION_NAME>
    • tmux detach
    • tmux attach -t <SESSION_NAME>
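
    In this workshop that typically looks like the following (the session name is arbitrary):

      tmux new -s sup       # start a named session, then run 'sudo hab sup run' inside it
      # press Ctrl+b then d to detach, leaving the Supervisor running
      tmux attach -t sup    # re-attach later to watch the Supervisor's output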

  33. Join My Supervisor Ring — sudo hab sup run. Output: hab-sup(MR): Supervisor Member-ID 100b72aea4104c9ba52972622db4bda1 hab-sup(MR): Starting chef/sample-node-app/1.0.1/20180222023512 sample-node-app.default(UCW): Watching user.toml hab-sup(MR): Starting gossip-listener on 0.0.0.0:9638 hab-sup(MR): Starting http-gateway on 0.0.0.0:9631 sample-node-app.default(HK): Hooks compiled sample-node-app.default(SR): Initializing sample-node-app.default(SV): Starting service as user=hab, group=hab

  34. Run Redis — sudo hab sup load core/redis. Output: » Installing core/redis from channel 'stable' ↓ Downloading core/redis/3.2.4/20170514150022 569.70 KB / 569.70 KB \ [=========] 100.00 % 55.69 MB/s ☛ Verifying core/redis/3.2.4/20170514150022 → Using core/glibc/2.22/20170513201042 → Using core/linux-headers/4.3/20170513200956 ✓ Installed core/redis/3.2.4/20170514150022 ★ Install of core/redis/3.2.4/20170514150022 complete with 1 new packages installed. hab-sup(MN): The core/redis service was successfully loaded

  35. Binlink Redis — sudo hab pkg binlink core/redis. Output: » Binlinking redis-benchmark from core/redis into /bin ★ Binlinked redis-benchmark from core/redis/3.2.4/20170514150022 to /bin/redis-benchmark » Binlinking redis-server from core/redis into /bin ★ Binlinked redis-server from core/redis/3.2.4/20170514150022 to /bin/redis-server » Binlinking redis-check-rdb from core/redis into /bin ★ Binlinked redis-check-rdb from core/redis/3.2.4/20170514150022 to /bin/redis-check-rdb » Binlinking redis-cli from core/redis into /bin ★ Binlinked redis-cli from core/redis/3.2.4/20170514150022 to /bin/redis-cli » Binlinking redis-sentinel from core/redis into /bin ★ Binlinked redis-sentinel from core/redis/3.2.4/20170514150022 to /bin/redis-sentinel » Binlinking redis-check-aof from core/redis into /bin ★ Binlinked redis-check-aof from core/redis/3.2.4/20170514150022 to /bin/redis-check-aof

  36. What did binlink do? — which redis-cli | xargs ls -l. Output: lrwxrwxrwx. 1 root root 55 Mar 8 03:25 /usr/bin/redis-cli -> /hab/pkgs/core/redis/3.2.4/20170514150022/bin/redis-cli
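
    With redis-cli now on the PATH, a quick sanity check against the Supervisor-managed Redis (assuming it is listening on the default port, 6379):

      redis-cli ping    # expect: PONG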

  37. Check Supervisor Status — hab sup status. Output:

    package                                     type        state  uptime (s)  pid    group                    style
    chef/sample-node-app/1.0.1/20180308025017   standalone  up     1487        15386  sample-node-app.default  transient
    core/redis/3.2.4/20170514150022             standalone  up     1345        15517  redis.default            persistent

  38. Run My Sample Node App — sudo hab sup load nathenharvey/sample-node-app. Output: hab-sup(MN): Missing package for nathenharvey/sample-node-app » Installing nathenharvey/sample-node-app from channel 'stable' ... ★ Install of nathenharvey/sample-node-app/1.0.19/20180228214110 complete with 1 new packages installed. hab-sup(MN): The nathenharvey/sample-node-app service was successfully loaded

  39. Managing with Habitat. Supervisors form service groups. (Diagram: several service/Supervisor pairs joined into a group.)

  40. Check Supervisor Status — hab sup status. Output:

    package                                              type        state  uptime (s)  pid    group                    style
    core/redis/3.2.4/20170514150022                      standalone  up     2535        15517  redis.default            persistent
    nathenharvey/sample-node-app/1.0.19/20180228214110   standalone  up     1028        15386  sample-node-app.default  persistent

  41. Quick Aside - tmux (same commands as slide 32: tmux new -s <SESSION_NAME>, tmux detach, tmux attach -t <SESSION_NAME>).

  42. Join My Supervisor Ring — sudo hab sup run --peer 172.31.0.251. Output is the same Supervisor startup log shown on slide 33.

  43. Deploying a Change
    • git branch
    • Make the change
    • Test the change locally - LGTM!
    • Submit the change

  44. Deploying a Change (pipeline). Submit Change → VERIFY (lint, syntax, unit) → APPROVE ("Does this code change look good?") → BUILD (lint, syntax, unit, security, quality, publish).

  45. Deploying a Change (pipeline). Submit Change → VERIFY (lint, syntax, unit) → APPROVE → BUILD (lint, syntax, unit, security, quality, publish) → UNSTABLE ENVIRONMENT (update, smoke, functional).

  46. Deploying a Change (pipeline). Same as above, with a PROMOTE gate after the unstable environment: "Do we want to ship this?"

  47. Deploying a Change (pipeline). Each PROMOTE gate ("Do we want to ship this?") moves the change into the next ENVIRONMENT, where it is updated and run through smoke and functional tests again.

  48. Deploying a Change (pipeline). The pattern repeats for as many environments as you need: promote, update, smoke and functional tests, then promote again.
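
    In Habitat terms, each PROMOTE gate corresponds to promoting the package to a depot channel. A sketch (the fully qualified release is whichever build you are shipping):

      # Promote a specific release so Supervisors watching 'stable' will pick it up
      hab pkg promote chef/sample-node-app/1.0.1/20180221155144 stable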

  49. Managing with Habitat. Habitat applications can automatically update as new artifacts are published to the depot, using either an all-at-once or a rolling deploy strategy. (Diagram: artifacts flowing from the depot to service/Supervisor pairs.)

  50. Launch My Sample App — sudo hab sup load nathenharvey/sample-node-app -s rolling. Output: hab-sup(MN): Missing package for nathenharvey/sample-node-app » Installing nathenharvey/sample-node-app from channel 'stable' ... ✓ Installed nathenharvey/sample-node-app/1.0.8/20180221181902 ★ Install of nathenharvey/sample-node-app/1.0.8/20180221181902 complete with 1 new packages installed. hab-sup(MN): The nathenharvey/sample-node-app service was successfully loaded
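
    The long-form flags spell out the strategy and the channel the Supervisor watches for new releases; treat the exact spelling as an assumption to confirm against hab sup load --help for your version:

      # Follow a channel and roll updates out one Supervisor at a time
      sudo hab sup load nathenharvey/sample-node-app --strategy rolling --channel stable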

  51. What's Next?
    • Join the Habitat Slack team - http://slack.habitat.sh/
    • Work through the tutorial at https://www.habitat.sh/tutorials/
    • Explore Habitat packages on the depot - https://app.habitat.sh/
    • Explore the Habitat projects - https://github.com/habitat-sh
    • Read Habitat blog posts - https://www.habitat.sh/blog/

  52. How we do it. Packaging applications: a user writes a plan, builds an artifact, and publishes it to the depot. Running applications: the artifact is pulled from the depot and run by service/Supervisor pairs on bare metal, containers, AMIs, or VMs.

  53. Habitat Build Philosophy
    • Container formats recreate the traditional model of infrastructure and applications (operating system, then libraries, then application) and are poor at abstracting the build and run aspects of applications.
    • Habitat builds from the application down: application, libraries, OS.
    • A small, lightweight OS is included.
    • An embedded Supervisor handles application management.
    • Builds have strict dependency version control.
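
    As an illustration of that dependency control, a plan declares its runtime dependencies explicitly and can pin them to exact versions (hypothetical deps; the sample app pulls its Node runtime in via core/scaffolding-node instead):

      pkg_deps=(
        core/node/8.9.1   # pinned to an exact version
        core/glibc        # or left to resolve to the latest release in the origin
      )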

  54. Application-Centric Containers with Habitat
    • Provides a "DSL" for defining the build lifecycle of the application.
    • The application run lifecycle is defined alongside the application build lifecycle.
    • Hab artifacts are based on the application artifact, declared dependencies, and the defined run lifecycle.
    • Export hab artifacts to container formats.
    • Containers have a built-in Supervisor for application operations concerns: configuration, service discovery, and clustering topologies.

  55. Habitat + Container Orchestrators
    • BUILD: develop reusable application packages; expose tunable settings.
    • DEPLOY: export to your preferred container platform; deploy your application in a scalable, topology-aware format.
    • MANAGE: orchestrate and automate your application config changes; application lifecycle hooks respond to updates and events.