IT & DevOps consultant, architect and lecturer. I boost the effectiveness and productivity of software development teams by using the right tools and techniques, which lead to faster development and more reliable operation of software products. I help companies set up the whole DevOps pipeline through training, consulting, and short-term project work.
DevOps is the combination of cultural philosophies, practices, and tools that increases an organization's ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market. Source: https://aws.amazon.com/devops/what-is-devops/
- Rapid Delivery - Deliver changes automatically into production (staging, ...)
- Reliability - People make mistakes, scripts don't.
- Scaling - Easy scaling using clouds, Kubernetes, Serverless, ...
- Infrastructure as Code - Treat your infrastructure like code (Terraform, ...)
- Security - Security policy as code
- Treat your infrastructure like any other code - merge requests, CI, ...
- Automatic documentation - You can generate docs from the code:
  terraform graph -type=refresh | dot -Tsvg > infrastructure.svg
- Simple scaling - in the infrastructure definition code
  - Autoscaling (Kubernetes, Auto Scaling Groups)
- Reliable upgrades
  - Review upgrades (merge requests) before they are applied
  - Rollbacks of infrastructure changes
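As a rough sketch of this review-then-apply workflow (assuming a Terraform project already exists in the working directory; the plan file name is arbitrary):

# Initialize providers and modules
terraform init

# Produce a plan file that can be reviewed in a merge request / CI job
terraform plan -out=tfplan

# Apply exactly the reviewed plan
terraform apply tfplan

# Generate documentation of the current infrastructure as an SVG graph
terraform graph -type=refresh | dot -Tsvg > infrastructure.svg

Rolling back then means reverting the commit and applying again, so the infrastructure follows the code history.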
Scaling is easy and secure with Infrastructure as Code
- Terraform, CloudFormation
- Autoscaling
  - Applications in Kubernetes
  - Nodes of clusters (AWS, Azure, ...)
  - Auto Scaling Groups
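For application autoscaling in Kubernetes, one option is a Horizontal Pod Autoscaler created straight from kubectl; the deployment name my-app is just a placeholder:

# Scale my-app between 2 and 10 replicas based on CPU usage
kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80

# Check the autoscaler status
kubectl get hpa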
- You need just a Kubernetes cluster (or machines with Docker) to run any application
- Simple CI stack
- Unified test, staging & production environments
- Solid role separation (but on a shared codebase)
  - Devs: Dockerfile & Kubernetes manifests, ...
  - Ops: Kubernetes clusters, Terraform manifests, ...
- Bulk deployments & management
  - Treat your deployments like cattle, not pets
- Deploy the desired state
  - Declarative approach (instead of imperative)
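A minimal sketch of the declarative, bulk approach, assuming the manifests live in a k8s/ directory (an illustrative layout, not prescribed by the slides):

# Show what would change between the live state and the manifests
kubectl diff -f k8s/

# Apply the desired state (creates or updates whatever is needed)
kubectl apply -f k8s/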
Docker is a set of platform-as-a-service products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels. Source: https://en.wikipedia.org/wiki/Docker_(software)
Get your current application into the DevOps pipeline
- Be able to deploy your current application quickly & easily to various unified environments (machines or clusters with Docker)
- Make the environment (libraries, dependencies, ...) part of the application (source code)
- Deploy the application with its libraries & dependencies instead of installing dependencies on production servers. It's a faster and more reliable approach.
- Save production environment costs (resources) and minimize downtime
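For illustration, an existing Python web application might be packaged together with its dependencies roughly like this (app.py, requirements.txt and port 8000 are assumptions):

cat > Dockerfile <<'EOF'
FROM python:3.9-slim
WORKDIR /app
# Dependencies become part of the image, not of the production server
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
EOF

# Build the image once, then run it on any machine with Docker
docker build -t my-app .
docker run -d -p 8000:8000 my-app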
- CI/CD tools
- Fast, repeatable & cached builds
- Simple application distribution through a Registry or Docker Trusted Registry
- Be able to deploy several times per day
- Defines a simple interface for communication between containers and the underlying layer (Kubernetes or hardware)
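A typical distribution flow through a registry could look like this sketch; registry.example.com, the image name and the tag are placeholders:

# Build once in CI and tag the image for repeatable deployments
docker build -t registry.example.com/my-app:1.0.0 .
docker push registry.example.com/my-app:1.0.0

# On any server or cluster node, just pull and run the same image
docker pull registry.example.com/my-app:1.0.0
docker run -d registry.example.com/my-app:1.0.0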
Volumes - Persistent storage for containers. Volumes can be shared between containers and data is written directly to the host.

docker run -ti -v my-volume:/data debian
docker run -ti -v $(pwd)/my-data:/data debian
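To see that a named volume outlives its containers, you can write a file from one throwaway container and read it from another:

# Write into the named volume, then throw the container away
docker run --rm -v my-volume:/data debian sh -c 'echo hello > /data/file.txt'

# A new container sees the same data
docker run --rm -v my-volume:/data debian cat /data/file.txt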
RUN <command> - run a command and save the result as a layer
COPY <local path> <image path> - copy a file or directory into the image
ENV <variable> <value> - set an environment variable
WORKDIR <path> - change the working directory
VOLUME <path> - define a volume
CMD <command> - the executable you want to start in the container
EXPOSE <port> - define the port the container listens on
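Putting these instructions together, a small illustrative build might look like this (the Node.js base image, file names and port are assumptions, not part of the slides):

cat > Dockerfile <<'EOF'
FROM node:18-alpine
ENV NODE_ENV=production
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
VOLUME /app/data
EXPOSE 3000
CMD ["node", "server.js"]
EOF

docker build -t dockerfile-demo .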
BuildKit can speed up your builds. For example, it builds multiple stages in parallel, and more. You can also extend Dockerfile functionality with caches, mounts, ...
- https://docs.docker.com/develop/develop-images/build_enhancements/
- https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md
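A hedged example of turning BuildKit on and using a cache mount from the experimental Dockerfile frontend linked above (the pip cache path and requirements.txt are assumptions):

# Enable BuildKit for the classic docker CLI
export DOCKER_BUILDKIT=1

cat > Dockerfile <<'EOF'
# syntax=docker/dockerfile:experimental
FROM python:3.9-slim
COPY requirements.txt .
# Keep pip's download cache between builds
RUN --mount=type=cache,target=/root/.cache/pip pip install -r requirements.txt
EOF

docker build -t buildkit-demo .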
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available. Source: https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/
Remove the concept of nodes
- Manage your applications like cattle instead of pets
Deploy your desired state
- You (the admin) describe the desired state and Kubernetes turns it into the actual state
- If you need HA
- If you have to manage applications on many servers
- If you don't want to care about servers (Kubernetes as a Service, IaaS)
- If you want to easily deploy your Dockerized applications (IaaS)
Pod - Basic building block of Kubernetes; a single instance of an app. Pods are mortal.
Deployment - Atomic updates of Pods. A Deployment contains Pod & ReplicaSet templates and keeps the desired Pods running.
Service - Provides an immortal IP address or DNS name for selected Pods.
Ingress - Provides external access to a Service using a domain name.
Storage, Configuration, Monitoring, ...
API Server - The Kubernetes API, backed by distributed etcd
Controller Manager - Ensures the actual state of the cluster equals the desired state
Scheduler - Schedules the creation of Pods on Nodes
Kubelet - Client of the API Server, runs Pods
Kube Proxy - Forwards traffic into the cluster
Helm - Package manager for Kubernetes
kubeadm - Tool for Kubernetes cluster setup (on VMs)
minikube - Run Kubernetes locally for development
kops - Create a Kubernetes cluster in the cloud
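For example, a local development cluster can be brought up with minikube and checked with kubectl:

# Start a local single-node Kubernetes cluster
minikube start

# Verify that the node is ready
kubectl get nodes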
Pod - One or more containers running in one IPC & network namespace
- Contains the definition of the Docker image, resource limits, and other settings for the containers
- Pods are not used directly; we use controllers like Deployments, ...
More: https://kubernetes.io/docs/concepts/workloads/pods/pod/
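A minimal Pod manifest for illustration only (name, image and limits are examples); in practice you would let a Deployment create Pods like this:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
    - name: hello
      image: nginx:1.21
      resources:
        limits:
          memory: "128Mi"
          cpu: "250m"
EOF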
Deployment - Keeps a Pod running in N instances
- Provides various deployment (upgrade) strategies
- Allows us to roll back a deployment
More: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
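A sketch of a Deployment keeping 3 replicas running, followed by the rollout commands for watching and rolling back an upgrade (names and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.21
EOF

# Watch the rolling update and roll back if needed
kubectl rollout status deployment/hello
kubectl rollout undo deployment/hello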
StatefulSet - Used to manage stateful applications
- Manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods
More: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
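A compact sketch of a StatefulSet with its headless Service, which gives each Pod a stable, ordered identity (the redis image and names are examples):

kubectl apply -f - <<'EOF'
# Headless service that gives each Pod a stable DNS name
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None
  selector:
    app: db
  ports:
    - port: 6379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: redis:6
EOF

# Pods get stable, ordered names: db-0, db-1
kubectl get pods -l app=db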
DaemonSet - Ensures that all (or some) nodes run a copy of a Pod
- As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected.
- Some typical uses of a DaemonSet are:
  - running a cluster storage daemon, such as glusterd or ceph, on each node
  - running a log collection daemon on every node, such as fluentd or logstash
More: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
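A hedged sketch of a log-collection DaemonSet that mounts the host's /var/log on every node (the fluentd image tag and names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
        - name: fluentd
          image: fluentd:v1.14-1
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
EOF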
Exposing a Service outside the Kubernetes cluster
- NodePort - Exposes a specific port on every node of the cluster
  - Uses ports from the range 30000-32767
- LoadBalancer (cloud only) - Creates a new load balancer with a new IP
  - Publishes the service on standard (defined) ports
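For example, the hello Deployment sketched earlier could be exposed on every node with a NodePort Service:

# Create a NodePort Service for the hello Deployment
kubectl expose deployment hello --type=NodePort --port=80

# Find the allocated port in the 30000-32767 range
kubectl get service hello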
Ingress - Routes external traffic to Services by domain names and web paths
- The easiest & cheapest way to expose web services
- Requires an Ingress Controller
  - Traefik - https://github.com/ondrejsika/kubernetes-ingress-traefik
  - Nginx + Cert Manager
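A sketch of an Ingress (networking.k8s.io/v1) routing a domain to a Service; the host and service names are examples, and an Ingress Controller such as Traefik or Nginx must already be running in the cluster:

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
spec:
  rules:
    - host: hello.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello
                port:
                  number: 80
EOF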
- Volume - Attached to a specific Pod (persistent only for that specific Pod)
  - Stored on the node
- PersistentVolume (PV) - Storage which can be attached to Pods
- StorageClass (SC) - Dynamic provisioner of Persistent Volumes
- PersistentVolumeClaim (PVC) - Allows a user to consume abstract storage resources
More: https://kubernetes.io/docs/concepts/storage/volumes/
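A small sketch of claiming storage with a PVC and mounting it into a Pod; the size and names are examples, and a default StorageClass is assumed for dynamic provisioning:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: with-storage
spec:
  containers:
    - name: app
      image: nginx:1.21
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data
EOF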
- Docker helps you separate applications & unify your environments
- Kubernetes removes the concept of nodes and gives you one large pool of resources
- Kubernetes deploys the desired state
- Docker & Kubernetes help you with microservice architecture
- Infrastructure as Code (Terraform) provides simple & reproducible infrastructure (even on a private cloud)