
Starting Rook-Ceph Object Bucket Provisioning from Zero

Takuya Utsunomiya

December 09, 2019

Transcript

  1. who are you

     apiVersion: apiextensions.k8s.io/v1beta1
     kind: CustomResourceDefinition
     metadata:
       name: Takuya Utsunomiya
     spec:
       group: Red Hat K.K.
       role: Storage Solutions Architect
       born: Osaka
       version: 38 years old
       favorites:
         technology: storage
         hobby: ["baseball", "pro wrestling", "shogi"]
         drink: ["beer", "wine"]

     @japan_rook  Japan Rook  https://rook.connpass.com/  #japanrook
     Contributed to a book.
  2. Motivation: Why Rook-Ceph bucket provisioning?

     • A consistent control plane for Rook-Ceph: File and Block storage are covered, but Buckets are not.
     • Rook-Ceph already supports object store and user provisioning, making Buckets the next logical feature.
     • Showcase the new bucket provisioning library.

     (From Cephalocon 2019, "Object Bucket Provisioning in Rook-Ceph":
     https://static.sched.com/hosted_files/cephalocon2019/9e/Cephalocon.pdf)
  3. Motivation: But writing a provisioner is hard...

     • You need Kubernetes expertise to write a correct controller,
       including robust error recovery and retry logic to recover from many types of failures.
     • You need to define Custom Resource Definitions (CRDs).
     • You need to define the interface layer between Kubernetes and your provisioner.

     (From Cephalocon 2019, "Object Bucket Provisioning in Rook-Ceph":
     https://static.sched.com/hosted_files/cephalocon2019/9e/Cephalocon.pdf)
  4. Bucket Library: Design Goals

     • ObjectBucket (OB) and ObjectBucketClaim (OBC), analogous to PersistentVolumes (PV) and PersistentVolumeClaims (PVC).
     • Dynamic and static provisioning via StorageClass:
       ◦ Dynamic buckets: Provision() / Delete()
       ◦ Static buckets: Grant() / Revoke()
     • A Secret and a ConfigMap provide authentication and endpoint data for the app to consume.
     • The library handles the OBC controller, retries, recovery, etc.

     (From Cephalocon 2019, "Object Bucket Provisioning in Rook-Ceph":
     https://static.sched.com/hosted_files/cephalocon2019/9e/Cephalocon.pdf)
  5. What to do: almost the same as for a PersistentVolume

     For a Persistent Volume:
     1. Create a BlockPool or Filesystem
     2. Create a StorageClass
     3. Create a PVC
     4. Attach it to the app

     For an Object Bucket:
     1. Create an Object Store
     2. Create a StorageClass
     3. Create an OBC (ObjectBucketClaim)
     4. Attach it to the app

     The bucket-side flow is sketched as kubectl commands below.
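     As a quick reference, the four bucket-side steps map onto four manifests. The file
     names are the ones used on the following slides, shown here only as an illustrative
     sequence:

     [utubo@tutsunom ceph]$ kubectl create -f my-object.yaml         # 1. CephObjectStore
     [utubo@tutsunom ceph]$ kubectl create -f my-sc-bkt-retain.yaml  # 2. StorageClass
     [utubo@tutsunom ceph]$ kubectl create -f my-obc-retain.yaml     # 3. ObjectBucketClaim
     [utubo@tutsunom ceph]$ kubectl create -f my-obc-app.yaml        # 4. App Pod consuming the bucket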
  6. Create the Object Store: create a CephObjectStore resource

     [utubo@tutsunom ceph]$ cat my-object.yaml
     apiVersion: ceph.rook.io/v1
     kind: CephObjectStore
     metadata:
       name: objstore
       namespace: rook-ceph
     spec:
       metadataPool:
         failureDomain: host
         replicated:
           size: 3
       dataPool:
         failureDomain: host
         replicated:
           size: 3
       gateway:
         type: s3
         sslCertificateRef:
         port: 80
         securePort:
         instances: 2
         placement:
         annotations:
         resources:
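     Rook also puts a Service in front of the gateway pods, named rook-ceph-rgw-<store name>;
     this is the BUCKET_HOST that shows up later in the ConfigMap. A quick check (output
     omitted):

     [utubo@tutsunom ceph]$ kubectl -n rook-ceph get svc rook-ceph-rgw-objstore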
  7. Create the Object Store: the rook-ceph-rgw pods and pools are created

     [utubo@tutsunom ceph]$ kubectl -n rook-ceph get cephobjectstore
     NAME       AGE
     objstore   10s
     [utubo@tutsunom ceph]$ kubectl get pod -l app=rook-ceph-rgw
     NAME                                        READY   STATUS    RESTARTS   AGE
     rook-ceph-rgw-objstore-a-65cf59bbb7-6r8f8   1/1     Running   0          10s
     rook-ceph-rgw-objstore-b-754d5f7cbd-v6mdk   1/1     Running   0          10s
     ~: ceph df
     RAW STORAGE:
         CLASS    SIZE       AVAIL      USED      RAW USED   %RAW USED
         hdd      120 GiB    108 GiB    6.0 GiB   12 GiB     10.01
         ssd      60 GiB     48 GiB     6.0 GiB   12 GiB     20.02
         TOTAL    180 GiB    156 GiB    12 GiB    24 GiB     13.35
     POOLS:
         POOL                          ID   STORED    OBJECTS   USED      %USED   MAX AVAIL
         objstore.rgw.control          1    0 B       8         0 B       0       45 GiB
         objstore.rgw.meta             2    0 B       0         0 B       0       45 GiB
         objstore.rgw.log              3    50 B      178       48 KiB    0       45 GiB
         objstore.rgw.buckets.index    4    0 B       0         0 B       0       45 GiB
         objstore.rgw.buckets.non-ec   5    0 B       0         0 B       0       45 GiB
         .rgw.root                     6    3.7 KiB   16        720 KiB   0       45 GiB
         objstore.rgw.buckets.data     7    0 B       0         0 B       0       45 GiB
  8. Same as for a PersistentVolume: create a StorageClass for Object Buckets

     [utubo@tutsunom ceph]$ cat my-sc-bkt-retain.yaml
     apiVersion: storage.k8s.io/v1
     kind: StorageClass
     metadata:
       name: ceph-bkt-retain
     provisioner: ceph.rook.io/bucket
     reclaimPolicy: Retain
     parameters:
       objectStoreName: objstore
       objectStoreNamespace: rook-ceph
     [utubo@tutsunom ceph]$ kubectl create -f my-sc-bkt-retain.yaml
     storageclass.storage.k8s.io/ceph-bkt-retain created
     [utubo@tutsunom ceph]$ kubectl -n default get sc
     NAME              PROVISIONER             AGE
     ceph-bkt-retain   ceph.rook.io/bucket     5s
     default           kubernetes.io/aws-ebs   30h
     gp2 (default)     kubernetes.io/aws-ebs   30h

     reclaimPolicy: Retain -> the Bucket survives deletion of the OBC/OB
     reclaimPolicy: Delete -> the Bucket is deleted together with the OBC/OB
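     For completeness, a Delete-policy class differs only in its name and reclaimPolicy.
     A minimal sketch (the class name here is hypothetical, not from the deck):

     apiVersion: storage.k8s.io/v1
     kind: StorageClass
     metadata:
       name: ceph-bkt-delete           # hypothetical name
     provisioner: ceph.rook.io/bucket
     reclaimPolicy: Delete              # Bucket is removed together with the OBC/OB
     parameters:
       objectStoreName: objstore
       objectStoreNamespace: rook-ceph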
  9. Same as a PersistentVolumeClaim: create an ObjectBucketClaim

     [utubo@tutsunom ceph]$ cat my-obc-retain.yaml
     apiVersion: objectbucket.io/v1alpha1
     kind: ObjectBucketClaim
     metadata:
       name: objstore-obc-retain
       namespace: default
     spec:
       #bucketName:
       generateBucketName: fugafuga
       storageClassName: ceph-bkt-retain
     [utubo@tutsunom ceph]$ kubectl create -f my-obc-retain.yaml
     objectbucketclaim.objectbucket.io/objstore-obc-retain created
     [utubo@tutsunom ceph]$ kubectl -n default get obc,ob
     NAME                                                    AGE
     objectbucketclaim.objectbucket.io/objstore-obc-retain   54s
     NAME                                                            AGE
     objectbucket.objectbucket.io/obc-default-objstore-obc-retain   54s
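     As seen above, the cluster-scoped ObjectBucket is named obc-<namespace>-<OBC name>.
     To inspect the OB the claim was bound to (output omitted):

     [utubo@tutsunom ceph]$ kubectl get objectbucket obc-default-objstore-obc-retain -o yaml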
  10. A note on bucket names: they must not collide

     [utubo@tutsunom ceph]$ cat my-obc-retain.yaml
     apiVersion: objectbucket.io/v1alpha1
     kind: ObjectBucketClaim
     metadata:
       name: objstore-obc-retain
       namespace: default
     spec:
       #bucketName:
       generateBucketName: hogehoge
       storageClassName: ceph-bkt-retain

     • Specifying bucketName: lets you create a Bucket with any name you like in Ceph,
       but if that Bucket name already exists on the Ceph side, creation simply fails.
     • Using generateBucketName: to auto-generate a collision-free name is recommended;
       the string given to generateBucketName: becomes the prefix. A fixed-name variant
       is sketched below.
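     A minimal sketch of the fixed-name form (the claim and bucket names here are
     hypothetical); provisioning fails if my-fixed-bucket already exists in Ceph:

     apiVersion: objectbucket.io/v1alpha1
     kind: ObjectBucketClaim
     metadata:
       name: objstore-obc-fixed        # hypothetical
       namespace: default
     spec:
       bucketName: my-fixed-bucket     # exact Bucket name; collides if already taken
       storageClassName: ceph-bkt-retain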
  11. A ConfigMap and a Secret are created along with the OBC

     [utubo@tutsunom ceph]$ kubectl -n default get configmap
     NAME                  DATA   AGE
     objstore-obc-retain   6      3m37s
     [utubo@tutsunom ceph]$ kubectl -n default get secret
     NAME                  TYPE                                  DATA   AGE
     default-token-lg5mh   kubernetes.io/service-account-token   3      2d21h
     objstore-obc-retain   Opaque                                2      3m46s

     ConfigMap:
     [utubo@tutsunom ceph]$ kubectl -n default get configmap/objstore-obc-retain -oyaml
     apiVersion: v1
     data:
       BUCKET_HOST: rook-ceph-rgw-objstore.rook-ceph
       BUCKET_NAME: fugafuga-60ee50f0-d7b6-4cb1-ac14-3ea8d1a132f3
       BUCKET_PORT: "80"
       BUCKET_REGION: ""
       BUCKET_SSL: "false"
       BUCKET_SUBREGION: ""
     kind: ConfigMap
     metadata: ...

     Secret:
     [utubo@tutsunom ceph]$ kubectl -n default get secret/objstore-obc-retain -oyaml
     apiVersion: v1
     data:
       AWS_ACCESS_KEY_ID: RjhCWDRYTTQ2TDc2UDY1WFNSRE8=
       AWS_SECRET_ACCESS_KEY: SEdyelZGMnNrZjdyTlhJYWtyQ0NkeWJqcTZKb1dSYnUwWlo0QW5XMQ==
     kind: Secret
     metadata: ...
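     Secret values are base64-encoded, as usual in Kubernetes. Standard kubectl and base64
     usage is enough to decode them:

     [utubo@tutsunom ceph]$ kubectl -n default get secret objstore-obc-retain \
           -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
     F8BX4XM46L76P65XSRDO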
  12. App: pull the ConfigMap and Secret in as environment variables

     [utubo@tutsunom ceph]$ cat my-obc-app.yaml
     apiVersion: v1
     kind: Pod
     metadata:
       name: photo1
       labels:
         name: photo1
     spec:
       containers:
       - name: photo1
         image: docker.io/screeley44/photo-gallery:latest
         imagePullPolicy: Always
         envFrom:
         - configMapRef:
             name: objstore-obc-retain
         - secretRef:
             name: objstore-obc-retain
         ports:
         - containerPort: 3000
           protocol: TCP

     root@photo1:/usr/src/app# env | grep -e AWS -e BUCKET
     BUCKET_SUBREGION=
     BUCKET_HOST=rook-ceph-rgw-objstore.rook-ceph
     BUCKET_NAME=fugafuga-60ee50f0-d7b6-4cb1-ac14-3ea8d1a132f3
     BUCKET_PORT=80
     AWS_SECRET_ACCESS_KEY=HGrzVF2skf7rNXIakrCCdybjq6JoWRbu0ZZ4AnW1
     BUCKET_REGION=
     BUCKET_SSL=false
     AWS_ACCESS_KEY_ID=F8BX4XM46L76P65XSRDO
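     With those variables set, any S3 client can pick the credentials up straight from the
     environment. For example, with the AWS CLI (not part of the photo-gallery image, so
     purely a sketch; BUCKET_SSL=false is why the endpoint is plain http):

     root@photo1:/usr/src/app# aws --endpoint-url http://$BUCKET_HOST:$BUCKET_PORT \
           s3 ls s3://$BUCKET_NAME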
  13. Compare them: the app environment vs. the ConfigMap and Secret

     App:
     root@photo1:/usr/src/app# env | grep -e AWS -e BUCKET
     BUCKET_SUBREGION=
     BUCKET_HOST=rook-ceph-rgw-objstore.rook-ceph
     BUCKET_NAME=fugafuga-60ee50f0-d7b6-4cb1-ac14-3ea8d1a132f3
     BUCKET_PORT=80
     AWS_SECRET_ACCESS_KEY=HGrzVF2skf7rNXIakrCCdybjq6JoWRbu0ZZ4AnW1
     BUCKET_REGION=
     BUCKET_SSL=false
     AWS_ACCESS_KEY_ID=F8BX4XM46L76P65XSRDO

     ConfigMap:
     data:
       BUCKET_HOST: rook-ceph-rgw-objstore.rook-ceph
       BUCKET_NAME: fugafuga-60ee50f0-d7b6-4cb1-ac14-3ea8d1a132f3
       BUCKET_PORT: "80"
       BUCKET_REGION: ""
       BUCKET_SSL: "false"
       BUCKET_SUBREGION: ""

     Secret (base64-decoded into the env values above):
     data:
       AWS_ACCESS_KEY_ID: RjhCWDRYTTQ2TDc2UDY1WFNSRE8=
       AWS_SECRET_ACCESS_KEY: SEdyelZGMnNrZjdyTlhJYWtyQ0NkeWJqcTZKb1dSYnUwWlo0QW5XMQ==
  14. Summary: it's fairly easy to understand

     • You can use it with much the same feel as PV/PVC; the Bucket Library is well built.
     • No particularly deep knowledge of the Ceph side is required.
     • An app can use a Bucket without any of this, but being able to manage the lifecycle of
       the Object Store and its Buckets through the operator is quite convenient.
     • Hoping the Bucket Library gets adopted into Kubernetes itself.