
2020-09-26 Kubernetes Summit: Building a Kubernetes Environment from Scratch (帶你從無到有打造 Kubernetes 的環境)

Sammy Lin
September 26, 2020


Transcript

  1. Speaker and TA introductions
     Sammy Lin, 17Live Engineering Director
     Scott, 17Live Site Reliability Engineer
     Can Yu, mutix Software Engineer
  2. Agenda
     1. Getting to know Kubernetes (K8s) - 5 min
     2. Create your first K8s cluster - 10 min
     3. Run a static web container on K8s - 20 min
     4. Run a blog on K8s (Mongo + Flask) - 25 min
     5. Tools for managing K8s YAML - 20 min
        a. helm
        b. kustomize
        c. cdk8s
     6. Build a CI/CD environment - 10 min
  3. DEMO - AKS
     $ az login
     $ az group create --name k8s-summit --location eastasia
     $ az aks create \
         --name aks-2020 \
         --resource-group k8s-summit \
         --node-count 2
     # Wait for provisioning to finish ~
     # Set up kubeconfig
     $ az aks get-credentials --resource-group k8s-summit --name aks-2020
     $ kubectl get nodes
     NAME                                STATUS   ROLES   AGE   VERSION
     aks-nodepool1-30396539-vmss000000   Ready    agent   21m   v1.17.9
     aks-nodepool1-30396539-vmss000001   Ready    agent   20m   v1.17.9
  4. DEMO - GKE
     # Get access: log in, pick any project, default zone (asia-east1-b)
     $ gcloud init
     $ gcloud container clusters create gke-2020 --num-nodes=2
     # Wait for provisioning to finish ~
     # Set up kubeconfig
     $ gcloud container clusters get-credentials gke-2020
     $ kubectl get nodes
     NAME                                      STATUS   ROLES    AGE     VERSION
     gke-gke-2020-default-pool-ed94fc94-l7cs   Ready    <none>   6m59s   v1.15.12-gke.2
     gke-gke-2020-default-pool-ed94fc94-rfkz   Ready    <none>   6m59s   v1.15.12-gke.2
  5. DEMO - EKS
     $ export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
     $ export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
     $ export AWS_DEFAULT_REGION=us-west-2
     $ eksctl create cluster \
         --name eks-2020 \
         --managed \
         --node-type t3.medium
     # Wait for provisioning to finish; kubeconfig is updated automatically
     $ kubectl get nodes
     NAME                                           STATUS   ROLES    AGE     VERSION
     ip-192-168-31-192.us-west-2.compute.internal   Ready    <none>   3m33s   v1.17.9-eks-4c6976
     ip-192-168-89-55.us-west-2.compute.internal    Ready    <none>   3m52s   v1.17.9-eks-4c6976
  6. Switching between clusters
     $ kubectl config get-contexts
     CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
               aks-2020   aks-2020   aks-2020
     *         gke-2020   gke-2020   gke-2020
               eks-2020   eks-2020   eks-2020
     $ kubectl config use-context eks-2020
     CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
               aks-2020   aks-2020   aks-2020
               gke-2020   gke-2020   gke-2020
     *         eks-2020   eks-2020   eks-2020
  7. You need a Dockerfile
     1. Start from the official NGINX image
     2. Tell Docker the container will use port 80
     3. Copy index.html to Nginx's default path
     4. Build the Docker image
        $ docker build . -t <image:tag>
     5. Push the image to a Docker repo
        $ docker push <image:tag>
     Done!
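     Steps 1-3 above fit in a three-instruction Dockerfile; a minimal sketch (the image tag is an assumption, the web root is Nginx's default):

     ```dockerfile
     # 1. Start from the official NGINX image
     FROM nginx:1.19
     # 2. Document that the container listens on port 80
     EXPOSE 80
     # 3. Copy the static page into Nginx's default web root
     COPY index.html /usr/share/nginx/html/index.html
     ```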
  8. So what is an Ingress?
     The biggest difference from a Service is that it can do Layer 7 traffic control, routing, and so on.
     The details vary with the platform or the API Gateway you choose: GCP, for example, creates an HTTP Load Balancer, or you can implement it yourself with Nginx-Ingress or other open-source software.
     ref: https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
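     As a sketch, an Ingress routing all HTTP traffic to the exercise's Service might look like this (the Ingress name is hypothetical; `networking.k8s.io/v1beta1` matches the K8s 1.15-1.17 clusters used in the demos — from 1.19 on it is `networking.k8s.io/v1` with a slightly different backend schema):

     ```yaml
     apiVersion: networking.k8s.io/v1beta1
     kind: Ingress
     metadata:
       name: plain-app-ingress        # hypothetical name
     spec:
       rules:
         - http:
             paths:
               - path: /
                 backend:
                   serviceName: plain-app-service   # Service from the exercise below
                   servicePort: 80
     ```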
  9. Practice time
     1. Apply the new image
        $ kubectl apply -f 01_plain/deploy/
     2. Use port-forwarding to reach the pod in K8s (kubectl port-forward)
        $ kubectl port-forward svc/plain-app-service 10080:80
     3. Or use kubectl to get the LoadBalancer's address
        $ kubectl get svc/plain-app
     Remember to open it over http; we didn't set up HTTPS (too much hassle...)
  10. Mongo
      StatefulSet: a Deployment that keeps state
      • Guarantees each Pod's identity and ordering
        ◦ start-up: 0 -> 1 -> 2 ...
        ◦ shutdown: ... 2 -> 1 -> 0
      • The mounted storage stays consistent across every Pod state change
        ◦ e.g. if Pod 1 dies, the Data Volume 1 that was mounted on Pod 1 is re-mounted on the newly started Pod 1
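      A sketch of what that looks like as a manifest (names, replica count, and storage size are assumptions; the `volumeClaimTemplates` section is what gives each Pod its own re-attachable volume):

      ```yaml
      apiVersion: apps/v1
      kind: StatefulSet
      metadata:
        name: mongo
      spec:
        serviceName: mongo          # headless Service giving each Pod a stable DNS name
        replicas: 3
        selector:
          matchLabels:
            app: mongo
        template:
          metadata:
            labels:
              app: mongo
          spec:
            containers:
              - name: mongo
                image: mongo:4.2
                ports:
                  - containerPort: 27017
                volumeMounts:
                  - name: mongo-data
                    mountPath: /data/db
        volumeClaimTemplates:       # one PVC per Pod, re-mounted when the Pod restarts
          - metadata:
              name: mongo-data
            spec:
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 1Gi
      ```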
  11. Persistent Volume Claims, Persistent Volumes
      PersistentVolumeClaim (request)
      • Tells K8s "I need a piece of persistent storage"
      PersistentVolume
      • The actual storage; the PVC gets a PV created/bound for it to use
      A PV can be a disk on a Node or some other storage device (NFS, AWS EBS, GCEPersistentDisk, AzureDisk...)
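      A minimal standalone PVC sketch (name and size are assumptions; on AKS/GKE/EKS the default StorageClass dynamically provisions the matching PV):

      ```yaml
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: mongo-data
      spec:
        accessModes:
          - ReadWriteOnce          # mountable read-write by a single node
        resources:
          requests:
            storage: 1Gi           # ask K8s for at least 1 GiB
        # storageClassName: ...    # omit to use the cluster's default StorageClass
      ```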
  12. Change application settings with a ConfigMap
      A ConfigMap can be mounted into a Pod's volumes, so you can use it to hold
      • app settings
      • scripts (the Mongo case uses this)
      • anything else you want to change often without touching the code
      Changes only take effect after restarting the Pod, or design the app to re-read the config on every access
      $ kubectl rollout restart deployment/flask-app
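      A sketch of a ConfigMap and its mount (the ConfigMap name, key, settings, and mount path are all hypothetical; in the workshop this would live in the Deployment's Pod template):

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: flask-app-config
      data:
        app.cfg: |
          DEBUG = False
          MONGO_HOST = "mongo"
      ---
      apiVersion: v1
      kind: Pod
      metadata:
        name: flask-app
      spec:
        containers:
          - name: flask-app
            image: flask-app:latest        # hypothetical image
            volumeMounts:
              - name: config
                mountPath: /app/config     # app.cfg appears as /app/config/app.cfg
        volumes:
          - name: config
            configMap:
              name: flask-app-config
      ```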
  13. Secrets
      ConfigMap's good friend, but stored as a separate object in K8S (note: the values are base64-encoded, not truly encrypted unless encryption at rest is enabled)
      Can be used to hold
      • user credential files, e.g. an .htpasswd for basic HTTP auth
      • TLS certificates and private keys
      • other secrets you don't want seen (though anything that can be mounted into a Pod can still be discovered)
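      A sketch of the .htpasswd case (name and contents are hypothetical; `stringData` lets you write plain text and have the API server base64-encode it into `data`):

      ```yaml
      apiVersion: v1
      kind: Secret
      metadata:
        name: basic-auth
      type: Opaque
      stringData:                      # plain text here; stored base64-encoded in .data
        .htpasswd: |
          admin:$apr1$examplehash      # hypothetical htpasswd entry
      ```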
  14. Monitor Pod status + scale Pod count horizontally
      Install the K8S Metrics Server first
      $ kubectl apply -f https://parg.co/b0vy
      Create the HPA
      $ kubectl autoscale deployment flask-app --cpu-percent=50 --min=1 --max=2
      Start the load-testing tool Locust
      $ kubectl run -it --rm load-generator --image=doody/load_test --generator=run-pod/v1
  15. Monitor Pod status + Horizontal Pod Autoscaling
      Forward the load-test UI
      $ kubectl port-forward load-generator 8089:8089
      Open http://localhost:8089 in a browser
      Check HPA status
      $ kubectl get hpa
      Set the HPA's max replica count
      $ kubectl patch hpa flask-app --patch '{"spec":{"maxReplicas":10}}'
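      The `kubectl autoscale` + `kubectl patch` pair above is equivalent to declaring the HPA as a manifest; a sketch using the `autoscaling/v1` API:

      ```yaml
      apiVersion: autoscaling/v1
      kind: HorizontalPodAutoscaler
      metadata:
        name: flask-app
      spec:
        scaleTargetRef:              # the Deployment this HPA scales
          apiVersion: apps/v1
          kind: Deployment
          name: flask-app
        minReplicas: 1
        maxReplicas: 10              # the value set by the patch above
        targetCPUUtilizationPercentage: 50
      ```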
  16. kubectl - Kubernetes Object Management
      # Imperative commands
      $ kubectl create deployment nginx --image nginx
      # Imperative object configuration
      $ kubectl create -f nginx.yaml
      # Declarative object configuration
      $ kubectl diff -f configs/
      $ kubectl apply -f configs/
      $ kubectl diff -R -f configs/
      $ kubectl apply -R -f configs/
  17. Is the yaml Engineer... you?
      DB, Web-A, Transaction Server, Auth Server, Graphql Server, Web B, API Server C, Live Stream - every one of them:
      ├── deployment.yaml
      ├── configmap.yaml
      ├── service.yaml
      └── ingress.yaml
  18. Helm
      wordpress/
        Chart.yaml           # A YAML file containing information about the chart
        LICENSE              # OPTIONAL: A plain text file containing the license for the chart
        README.md            # OPTIONAL: A human-readable README file
        values.yaml          # The default configuration values for this chart
        values.schema.json   # OPTIONAL: A JSON Schema for imposing a structure on the values.yaml file
        charts/              # A directory containing any charts upon which this chart depends
        crds/                # Custom Resource Definitions
        templates/           # A directory of templates that, when combined with values,
          deployment.yaml    #   will generate valid Kubernetes manifest files
          service.yaml
          configmap.yaml
          ingress.yaml
          NOTES.txt          # OPTIONAL: A plain text file containing short usage notes
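      To show how values and templates combine, a hypothetical excerpt (file names follow the chart layout above; the keys and names are assumptions, not the workshop chart's actual contents):

      ```yaml
      # values.yaml -- defaults, overridable with -f or --set
      replicaCount: 2
      image:
        repository: nginx
        tag: "1.19"

      # templates/deployment.yaml (excerpt) -- rendered with those values
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: {{ .Release.Name }}-web
      spec:
        replicas: {{ .Values.replicaCount }}
        selector:
          matchLabels:
            app: {{ .Release.Name }}-web
        template:
          metadata:
            labels:
              app: {{ .Release.Name }}-web
          spec:
            containers:
              - name: web
                image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      ```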
  19. Kustomize
      ├── base
      │   ├── config-map.yaml
      │   ├── deployment.yaml
      │   ├── kustomization.yaml
      │   └── service.yaml
      └── overlays
          ├── production
          │   ├── app.cfg
          │   └── kustomization.yaml
          └── test
              ├── app.cfg
              └── kustomization.yaml
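      Hypothetical contents for those kustomization.yaml files (resource names and the generated ConfigMap are assumptions; the idea is that each overlay reuses base and swaps in its own app.cfg):

      ```yaml
      # base/kustomization.yaml
      resources:
        - config-map.yaml
        - deployment.yaml
        - service.yaml

      # overlays/production/kustomization.yaml
      bases:
        - ../../base
      configMapGenerator:
        - name: app-config           # hypothetical ConfigMap name
          behavior: replace          # override base's version with this overlay's file
          files:
            - app.cfg
      ```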
  20. Practice time - deploy the application with three tools
      1. Helm
         $ cd demo_app/04_configuration_tools/web-helm
         $ helm install test . -f ./values-development.yaml
      2. Kustomize
         $ cd demo_app/04_configuration_tools/web-kustomize
         $ kubectl apply -k overlays/development/
      3. CDK8S
         $ npm install typescript
         $ npm run build
         $ kubectl apply -f dist/flaskapp.k8s.yaml
  21. Add Secrets
      Docker
      • DOCKER_USERNAME
      • DOCKER_PASSWORD
      • DOCKER_REPOSITORY
        ◦ sammylin/k8s_cicd
      AWS
      • AWS_ACCESS_KEY_ID
      • AWS_SECRET_ACCESS_KEY
      • KUBE_CONFIG_DATA
        ◦ mv $HOME/.kube/config $HOME/.kube/config.tmp
        ◦ eksctl utils write-kubeconfig --cluster=eks-2020
        ◦ cat $HOME/.kube/config | base64 | pbcopy
      GKE
      • PROJECT_ID
      • GOOGLE_APPLICATION_CREDENTIALS
        ◦ Go to https://console.cloud.google.com/iam-admin/serviceaccounts
        ◦ Add a Service Account & grant the "Kubernetes Engine Admin" role
        ◦ cat gke_sa.json | base64 | pbcopy
      • GKE_CLUSTER_NAME
      • ZONE_NAME
  22. Push Code into Repository
      $ cd ~
      $ mkdir 2020_workshop
      $ cd 2020_workshop
      $ git clone [email protected]:DevOpsTW/k8s_workshop.git
      $ cp -r k8s_workshop/demo_app/05_cicd .
      $ mv 05_cicd/ k8s_cicd
      ## Edit .github/workflows/main.yml
      $ vi .github/workflows/main.yml
  23. Push Code into Repository
      $ cd k8s_cicd
      $ git init
      $ git add .
      $ git commit -m "first commit"
      $ git branch -M master
      $ git remote add origin [email protected]:<your-name>/k8s_cicd.git
      $ git push -u origin master
  24. Thank you for joining the event today! Please take a few minutes to finish the survey to receive a US$25 AWS credit code. Make sure you fill in your information correctly (and check the box) so we can deliver the credits smoothly. Thank you for your time.
  25. Other resources
      • Example code: https://github.com/devOpsTW/k8s_workshop/
      • DevOps Taiwan telegram: https://t.me/devopstw
      • AWS CDK telegram: https://t.me/AWSCDK
      • Amazon EKS user group: https://t.me/AmazonEKS
      Thank You