Hands-on Labs

StatefulSet

  1. Create a Service

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: web
      labels:
        app: httpd
    spec:
      ports:
      - port: 80
      clusterIP: None
      selector:
        app: httpd
    EOF
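
    Note: clusterIP: None makes this a headless Service, so the DNS name web resolves to the individual Pod IPs instead of a virtual IP. A quick way to see this (a sketch, assuming the default namespace and a throwaway busybox Pod; records only appear once the StatefulSet from step 3 is running):

    kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup web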
  2. Verify the created Service and Endpoints

    kubectl get svc,ep -l app=httpd
  3. Create a StatefulSet

    cat <<EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: httpd
      labels:
        app: httpd
    spec:
      serviceName: web
      replicas: 1
      selector:
        matchLabels:
          app: httpd
      template:
        metadata:
          labels:
            app: httpd
        spec:
          containers:
          - name: httpd
            image: httpd
    EOF
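
    Note: serviceName: web ties the StatefulSet to the headless Service created in step 1, which is what gives each Pod a stable DNS name of the form <pod-name>.web. To block until the rollout completes (a sketch):

    kubectl rollout status statefulset httpd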
  4. Verify the created StatefulSet and Pod

    kubectl get sts,pod -l app=httpd
  5. Check the Endpoints

    kubectl get ep web
  6. Inspect the details of the Pod included in the Endpoints

    kubectl get ep web -o=jsonpath='{.subsets}' | jq
  7. Check the name and IP of the created Pod

    kubectl get pod -l app=httpd -o wide
  8. Create a Pod

    kubectl run curl --image=curlimages/curl --command -- sleep 3600
  9. Call the web server running in the previously created Pod with cURL

    kubectl exec curl -- curl -s httpd-0.web
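
    The short name httpd-0.web resolves through the Pod's DNS search list; the fully qualified name works from any namespace (a sketch, assuming everything runs in the default namespace):

    kubectl exec curl -- curl -s httpd-0.web.default.svc.cluster.local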
  10. Scale the StatefulSet to 3 replicas

    kubectl scale statefulset httpd --replicas=3
  11. Verify the created Pods

    kubectl get pod -l app=httpd
  12. Call the web servers in the newly created Pods with cURL

    {
        kubectl exec curl -- curl -s httpd-1.web
        kubectl exec curl -- curl -s httpd-2.web
    }
  13. Delete the first Pod that was created

    kubectl delete pod httpd-0
  14. Verify that the Pod is recreated

    kubectl get pod -l app=httpd
  15. Delete the created resources

    {
        kubectl delete sts httpd
        kubectl delete svc web 
        kubectl delete pod curl
    }

Persistent Volume

  1. Check the currently configured StorageClass

    kubectl get storageclass
  2. Create an NFS server

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: nfs
      labels:
        app: nfs
    spec:
      containers:
      - name: nfs
        image: erichough/nfs-server
        ports:
        - containerPort: 2049
        volumeMounts:
        - name: data
          mountPath: /data
        - name: kernel-module
          mountPath: /lib/modules
          readOnly: true
        env:
        - name: NFS_EXPORT_0
          value: "/data/ *(rw,sync,no_root_squash,subtree_check)"
        securityContext:
          capabilities:
            add:
            - SYS_ADMIN
            - SYS_MODULE
      initContainers:
      - name: index
        image: busybox
        command: ["/bin/chmod","-R","1777", "/data"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {}
      - name: kernel-module
        hostPath:
          path: /lib/modules
    EOF
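
    The NFS server may take a moment to become ready. To wait for it before continuing (a sketch):

    kubectl wait --for=condition=Ready pod/nfs --timeout=120s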
  3. Verify the NFS server was created

    kubectl get pod -l app=nfs
  4. Create an HTML file on the NFS server

    kubectl exec nfs -c nfs -- \
    bash -c 'echo "Greeting from NFS-powered NGINX" > /data/index.html'
  5. Verify the HTML file was created successfully

    kubectl exec nfs -c nfs -- cat /data/index.html
  6. Create a PersistentVolume

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: my-pv
    spec:
      storageClassName: manual
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      nfs:
        path: /data
        server: $(kubectl get pod -l app=nfs -o=jsonpath='{.items[0].status.podIP}')
        readOnly: false
    EOF
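
    Note: the PV embeds the NFS Pod's current IP, so it breaks if the nfs Pod is recreated with a new address. To confirm which server address was captured (a sketch):

    kubectl get pv my-pv -o=jsonpath='{.spec.nfs.server}{"\n"}'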
  7. Verify the created PersistentVolume

    kubectl get pv my-pv
  8. Check whether any PVC exists yet

    kubectl get pvc
  9. Create a PVC

    cat <<EOF | kubectl create -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-pvc
    spec:
      accessModes:
      - ReadWriteMany
      resources:
         requests:
           storage: 200Mi
      storageClassName: manual
    EOF
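
    Note: the claim requests only 200Mi, but binding is exclusive and whole-volume, so the PVC ends up owning the entire 1Gi PV. To compare the request with the bound capacity (a sketch):

    kubectl get pvc my-pvc -o=jsonpath='requested: {.spec.resources.requests.storage}, bound: {.status.capacity.storage}{"\n"}'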
  10. Verify the created PVC

    kubectl get pvc my-pvc
  11. Check the PV

    kubectl get pv my-pv
  12. Deploy the demo application

    cat <<EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: nginx
      name: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80
            volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
          volumes:
          - name: data
            persistentVolumeClaim:
              claimName: my-pvc
    EOF
  13. Verify the Pods of the application deployed above were created successfully

    kubectl get pod -l app=nginx
  14. Verify the deployed NGINX web servers all serve the same page

    for pod in $(kubectl get pod -l app=nginx -o=jsonpath='{.items[*].metadata.name}')
    do
      kubectl exec $pod -- curl -s localhost
    done
  15. Check the PVC status

    kubectl get pvc
  16. Exec into one of the Pods and create a file in the path where the PV is mounted

    kubectl exec deploy/nginx -- \
    bash -c  "echo helloworld > /usr/share/nginx/html/hello.txt"
  17. Verify the file created above is visible on the NFS server

    kubectl exec nfs -c nfs -- cat /data/hello.txt
  18. Verify the file created above is visible from the deployed Pods

    for pod in $(kubectl get pod -l app=nginx -o=jsonpath='{.items[*].metadata.name}')
    do
      kubectl exec $pod -- curl -s localhost/hello.txt
    done
  19. Delete the demo application

    kubectl delete deploy nginx
  20. Check on the NFS server whether the file was deleted

    kubectl exec nfs -c nfs -- ls /data
  21. Check the PVC status

    kubectl get pvc my-pvc
  22. Redeploy the demo application

    cat <<EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: nginx
      name: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80
            volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
          volumes:
          - name: data
            persistentVolumeClaim:
              claimName: my-pvc
    EOF
  23. Verify the previously created file still exists

    for pod in $(kubectl get pod -l app=nginx -o=jsonpath='{.items[*].metadata.name}')
    do
      kubectl exec $pod -- ls /usr/share/nginx/html
    done
  24. Attempt to delete the PVC

    kubectl delete pvc my-pvc
  25. If the command above does not complete, press Ctrl+C to terminate it

  26. Check the PVC status

    kubectl get pvc my-pvc
  27. Check the finalizers set on the PVC - https://kubernetes.io/docs/concepts/storage/persistent-volumes/#storage-object-in-use-protection

    kubectl get pvc my-pvc -o=jsonpath='{.metadata.finalizers}{"\n"}'
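
    The kubernetes.io/pvc-protection finalizer keeps the PVC in Terminating state until no Pod mounts it. To see which Pods are still using the claim (a sketch):

    kubectl describe pvc my-pvc | grep 'Used By'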
  28. Delete the demo application

    kubectl delete deploy nginx
  29. Verify the PVC was deleted

    kubectl get pvc my-pvc
  30. Check the PV status

    kubectl get pv my-pv
  31. Create a PVC

    cat <<EOF | kubectl create -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-pvc
    spec:
      accessModes:
      - ReadWriteMany
      resources:
         requests:
           storage: 200Mi
      storageClassName: manual
    EOF
  32. Check the PVC status

    kubectl get pvc my-pvc
  33. Check the claimRef recorded on the PV

    kubectl get pv my-pv -o=jsonpath='{.spec.claimRef}' | jq
  34. Inspect the details of the newly created PVC

    kubectl get pvc my-pvc -o yaml
  35. Patch the PV - remove spec.claimRef

    kubectl patch pv my-pv \
    --type=json -p='[{"op": "remove", "path": "/spec/claimRef"}]'
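
    Once claimRef is removed, the PV should move from Released back to Available and become bindable again. To check the phase (a sketch):

    kubectl get pv my-pv -o=jsonpath='{.status.phase}{"\n"}'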
  36. Check the PVC status

    kubectl get pvc my-pvc
  37. Check the claimRef recorded on the PV

    kubectl get pv my-pv -o=jsonpath='{.spec.claimRef}' | jq
  38. Check the PVC's uid

    kubectl get pvc my-pvc -o=jsonpath='{.metadata.uid}{"\n"}'
  39. Attempt to delete the PV

    kubectl delete pv my-pv 
  40. If the command above does not complete, press Ctrl+C to terminate it

  41. Check the PV status

    kubectl get pv my-pv
  42. Check the finalizers set on the PV - https://kubernetes.io/docs/concepts/storage/persistent-volumes/#storage-object-in-use-protection

    kubectl get pv my-pv -o=jsonpath='{.metadata.finalizers}{"\n"}'
  43. Delete the PVC

    kubectl delete pvc my-pvc 
  44. Verify the PV was deleted

    kubectl get pv my-pv
  45. Check on the NFS server whether the files were deleted

    kubectl exec nfs -c nfs -- ls /data
  46. Delete the NFS server

    kubectl delete pod nfs 

Dynamic Volume Provisioning

  1. Check the currently configured StorageClass

    kubectl get storageclass
  2. Create a PVC

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-pvc
    spec:
      accessModes:
      - ReadWriteMany
      resources:
         requests:
           storage: 200Mi
    EOF
  3. Check the PVC status

    kubectl get pvc my-pvc
  4. Find out why the PVC is in Pending status

    kubectl describe pvc my-pvc
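
    The events should point at the access mode: with an EBS-backed default StorageClass (assumed here, as in the EBS steps below), volumes only support ReadWriteOnce, so a ReadWriteMany claim cannot be provisioned. To check which provisioner the StorageClasses use (a sketch):

    kubectl get sc -o=custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner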
  5. Deploy the demo application

    cat <<EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: nginx
      name: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80
            volumeMounts:
            - name: data
              mountPath: /mnt
          volumes:
          - name: data
            persistentVolumeClaim:
              claimName: my-pvc
    EOF
  6. Verify the Pod of the application deployed above was created successfully

    kubectl get pod -l app=nginx
  7. Find out why the Pod is in Pending status

    kubectl describe pod -l app=nginx
  8. Find out why the PVC is in Pending status

    kubectl describe pvc my-pvc
  9. Recreate the PVC

    cat <<EOF | kubectl replace --force -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
         requests:
           storage: 200Mi
    EOF
  10. Check the Pod status - it may take a while to reach Running

    kubectl get pod -l app=nginx
  11. Check the PVC status

    kubectl get pvc my-pvc
  12. Verify the PV referenced by the PVC was created

    kubectl get pv $(kubectl get pvc my-pvc -o=jsonpath='{.spec.volumeName}')
  13. Check whether the actual allocation matches the storage size requested in the PVC

    diff <(kubectl get pvc my-pvc -o=jsonpath='{.spec}' | jq) \
    <(kubectl get pvc my-pvc -o=jsonpath='{.status}' | jq) -y
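
    EBS enforces a 1Gi minimum volume size, so the 200Mi request is typically rounded up in .status.capacity (exact behavior depends on the provisioner). A one-line comparison (a sketch):

    kubectl get pvc my-pvc -o=jsonpath='requested: {.spec.resources.requests.storage}, allocated: {.status.capacity.storage}{"\n"}'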
  14. Delete the demo application

    kubectl delete deploy nginx
  15. Delete the PVC

    kubectl delete pvc my-pvc 
  16. Verify the PV was deleted - it may take a while

    kubectl get pv
  17. Create a LimitRange

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: LimitRange
    metadata:
      name: storagelimits
    spec:
      limits:
      - type: PersistentVolumeClaim
        max:
          storage: 16Ti
        min:
          storage: 1Gi
    EOF
  18. Check the LimitRange

    kubectl describe ns default
  19. Attempt to create a PVC - this should be rejected because 200Mi is below the 1Gi minimum set by the LimitRange

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
         requests:
           storage: 200Mi
    EOF
  20. Create a PVC

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
         requests:
           storage: 2Gi
    EOF
  21. Check the PVC status

    kubectl get pvc my-pvc
  22. Deploy the demo application

    cat <<EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: nginx
      name: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80
            volumeMounts:
            - name: data
              mountPath: /mnt
          volumes:
          - name: data
            persistentVolumeClaim:
              claimName: my-pvc
    EOF
  23. Check the Pod status

    kubectl get pod -l app=nginx
  24. Check the PVC status

    kubectl get pvc my-pvc
  25. Verify the PV referenced by the PVC was created

    kubectl get pv $(kubectl get pvc my-pvc -o=jsonpath='{.spec.volumeName}')
  26. Check whether the actual allocation matches the storage size requested in the PVC

    diff <(kubectl get pvc my-pvc -o=jsonpath='{.spec}' | jq) \
    <(kubectl get pvc my-pvc -o=jsonpath='{.status}' | jq) -y
  27. Scale the Deployment to 10 replicas

    kubectl scale deploy nginx --replicas=10
  28. Check the status of the additionally created Pods

    kubectl get pod -l app=nginx
  29. If there are Pods in Pending status, find out why

    kubectl describe pod \
    $(kubectl get pod -l app=nginx \
    -o=jsonpath='{.items[?(@.status.phase=="Pending")].metadata.name}')
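
    Because all replicas share a single ReadWriteOnce EBS volume, only Pods scheduled onto the Node where the volume is attached can start; the others typically stay Pending or fail to attach the volume. To list just the Pending Pods (a sketch):

    kubectl get pod -l app=nginx --field-selector=status.phase=Pending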
  30. Check the node affinity set on the PV - https://kubernetes.io/docs/concepts/storage/persistent-volumes/#node-affinity

    kubectl get pv $(kubectl get pvc my-pvc -o=jsonpath='{.spec.volumeName}') \
    -o=jsonpath='{.spec.nodeAffinity}' | jq
  31. Check each Node's availability zone

    kubectl get nodes --label-columns topology.kubernetes.io/zone
  32. Check which Nodes the Pods are running on

    kubectl get pod -l app=nginx -o wide
  33. Scale the Deployment down to 1 replica

    kubectl scale deploy nginx --replicas=1
  34. Check the name of the Node the Pod is running on

    kubectl get pod -l app=nginx \
    -o=jsonpath='{.items[0].spec.nodeName}{"\n"}'
  35. Assuming that Node has a problem, drain it to move the Pod to another Node and mark the Node SchedulingDisabled

    kubectl drain $(kubectl get pod -l app=nginx \
    -o=jsonpath='{.items[0].spec.nodeName}') \
    --ignore-daemonsets --delete-emptydir-data
  36. Verify a new Pod is created

    kubectl get pod -l app=nginx
  37. If the Pod is in Pending status, find out why

    kubectl describe pod \
    $(kubectl get pod -l app=nginx \
    -o=jsonpath='{.items[?(@.status.phase=="Pending")].metadata.name}')
  38. Check the Node status

    kubectl get node
  39. Check the taints applied to the Nodes

    kubectl get nodes \
    -o=custom-columns=NodeName:.metadata.name,TaintKey:.spec.taints[*].key,TaintValue:.spec.taints[*].value,TaintEffect:.spec.taints[*].effect
  40. Check the PV name

    kubectl get pvc my-pvc -o=jsonpath='{.spec.volumeName}{"\n"}'
  41. Find the EBS volume backing the PV and export its volume ID as an environment variable (see the note below if the volume was provisioned by the EBS CSI driver)

    {
        export VOL_ID=$(kubectl get pv $(kubectl get pvc my-pvc \
        -o=jsonpath='{.spec.volumeName}') \
        -o jsonpath='{.spec.awsElasticBlockStore.volumeID}' \
        | grep -oE "vol-[a-z0-9]+")
    
        echo $VOL_ID
    }
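
    Note: the jsonpath above assumes the in-tree awsElasticBlockStore volume plugin. If the cluster provisions volumes through the EBS CSI driver instead, the volume ID is recorded elsewhere (a sketch):

    # For PVs provisioned by the EBS CSI driver, the volume ID is the CSI volume handle.
    export VOL_ID=$(kubectl get pv \
    $(kubectl get pvc my-pvc -o=jsonpath='{.spec.volumeName}') \
    -o=jsonpath='{.spec.csi.volumeHandle}')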
  42. Check the status of the EBS volume identified above

    aws ec2 describe-volumes --volume-ids $VOL_ID
  43. Assuming the Node has recovered, uncordon it to remove the taint

    kubectl uncordon \
    $(kubectl get node -o=jsonpath='{.items[?(@.spec.unschedulable==true)].metadata.name}')
  44. Check the Node status

    kubectl get node
  45. Verify the Pod is created

    kubectl get pod -l app=nginx
  46. Check the status of the EBS volume backing the PV

    aws ec2 describe-volumes --volume-ids $VOL_ID
  47. Delete the resources

    {
        kubectl delete deploy nginx
        kubectl delete pvc my-pvc
        kubectl delete limitrange storagelimits
    }
  48. Verify the PV was deleted - it may take a while

    kubectl get pv
  49. Verify the EBS volume backing the PV was deleted

    aws ec2 describe-volumes --volume-ids $VOL_ID

Building a MySQL Cluster

  1. Create a ConfigMap

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: mysql
      labels:
        app: mysql
    data:
      primary.cnf: |
        # Apply this config only on the primary.
        [mysqld]
        log-bin    
      replica.cnf: |
        # Apply this config only on replicas.
        [mysqld]
        super-read-only    
    EOF
  2. Create the Services

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql
      labels:
        app: mysql
    spec:
      ports:
      - name: mysql
        port: 3306
      clusterIP: None
      selector:
        app: mysql
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql-read
      labels:
        app: mysql
    spec:
      ports:
      - name: mysql
        port: 3306
      selector:
        app: mysql
    EOF
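
    Note: mysql (clusterIP: None) is a headless Service that gives each Pod a stable DNS name such as mysql-0.mysql, used for writes against the primary, while mysql-read is a regular ClusterIP Service that load-balances reads across all Ready Pods. The difference shows in the assigned cluster IPs:

    kubectl get svc mysql mysql-read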
  3. Create the StatefulSet

    cat <<'EOF' | kubectl apply -f -
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: mysql
    spec:
      selector:
        matchLabels:
          app: mysql
      serviceName: mysql
      replicas: 3
      template:
        metadata:
          labels:
            app: mysql
        spec:
          initContainers:
          - name: init-mysql
            image: mysql:5.7
            command:
            - bash
            - "-c"
            - |
              set -ex
              # Generate mysql server-id from pod ordinal index.
              [[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1
              ordinal=${BASH_REMATCH[1]}
              echo [mysqld] > /mnt/conf.d/server-id.cnf
              # Add an offset to avoid reserved server-id=0 value.
              echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
              # Copy appropriate conf.d files from config-map to emptyDir.
              if [[ $ordinal -eq 0 ]]; then
                cp /mnt/config-map/primary.cnf /mnt/conf.d/
              else
                cp /mnt/config-map/replica.cnf /mnt/conf.d/
              fi          
            volumeMounts:
            - name: conf
              mountPath: /mnt/conf.d
            - name: config-map
              mountPath: /mnt/config-map
          - name: clone-mysql
            image: gcr.io/google-samples/xtrabackup:1.0
            command:
            - bash
            - "-c"
            - |
              set -ex
              # Skip the clone if data already exists.
              [[ -d /var/lib/mysql/mysql ]] && exit 0
              # Skip the clone on primary (ordinal index 0).
              [[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1
              ordinal=${BASH_REMATCH[1]}
              [[ $ordinal -eq 0 ]] && exit 0
              # Clone data from previous peer.
              ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
              # Prepare the backup.
              xtrabackup --prepare --target-dir=/var/lib/mysql          
            volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
          containers:
          - name: mysql
            image: mysql:5.7
            env:
            - name: MYSQL_ALLOW_EMPTY_PASSWORD
              value: "1"
            ports:
            - name: mysql
              containerPort: 3306
            volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
            resources:
              requests:
                cpu: 500m
                memory: 500Mi
            livenessProbe:
              exec:
                command: ["mysqladmin", "ping"]
              initialDelaySeconds: 30
              periodSeconds: 10
              timeoutSeconds: 5
            readinessProbe:
              exec:
                # Check we can execute queries over TCP (skip-networking is off).
                command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
              initialDelaySeconds: 5
              periodSeconds: 2
              timeoutSeconds: 1
          - name: xtrabackup
            image: gcr.io/google-samples/xtrabackup:1.0
            ports:
            - name: xtrabackup
              containerPort: 3307
            command:
            - bash
            - "-c"
            - |
              set -ex
              cd /var/lib/mysql
    
              # Determine binlog position of cloned data, if any.
              if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
                # XtraBackup already generated a partial "CHANGE MASTER TO" query
                # because we're cloning from an existing replica. (Need to remove the trailing semicolon!)
                cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
                # Ignore xtrabackup_binlog_info in this case (it's useless).
                rm -f xtrabackup_slave_info xtrabackup_binlog_info
              elif [[ -f xtrabackup_binlog_info ]]; then
                # We're cloning directly from primary. Parse binlog position.
                [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
                rm -f xtrabackup_binlog_info xtrabackup_slave_info
                echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                      MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
              fi
    
              # Check if we need to complete a clone by starting replication.
              if [[ -f change_master_to.sql.in ]]; then
                echo "Waiting for mysqld to be ready (accepting connections)"
                until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
    
                echo "Initializing replication from clone position"
                mysql -h 127.0.0.1 \
                      -e "$(<change_master_to.sql.in), \
                              MASTER_HOST='mysql-0.mysql', \
                              MASTER_USER='root', \
                              MASTER_PASSWORD='', \
                              MASTER_CONNECT_RETRY=10; \
                            START SLAVE;" || exit 1
                # In case of container restart, attempt this at-most-once.
                mv change_master_to.sql.in change_master_to.sql.orig
              fi
    
              # Start a server to send backups when requested by peers.
              exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
                "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"          
            volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
            resources:
              requests:
                cpu: 100m
                memory: 100Mi
          volumes:
          - name: conf
            emptyDir: {}
          - name: config-map
            configMap:
              name: mysql
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi
    EOF
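
    Note: volumeClaimTemplates creates one PVC per Pod, named <template-name>-<statefulset-name>-<ordinal>, and each claim stays bound to its Pod across restarts and rescheduling. For example:

    kubectl get pvc data-mysql-0 data-mysql-1 data-mysql-2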
  4. Watch the Pods being created

    watch kubectl get pod -l app=mysql
  5. Verify the created PVCs

    kubectl get pvc -l app=mysql
  6. Verify the PVs referenced by the PVCs were created

    kubectl get pv \
    $(kubectl get pvc -l app=mysql -o=jsonpath='{.items[*].spec.volumeName}')
  7. Check the EBS volume mapped to each PV

    for pv in $(kubectl get pvc -l app=mysql -o=jsonpath='{.items[*].spec.volumeName}')
    do
      aws ec2 describe-volumes --filters Name=tag:kubernetes.io/created-for/pv/name,Values=$pv
    done
  8. Create a Pod

    kubectl run mysql-client --image=mysql:5.7 -- sleep 3600
  9. Create a database and a record through the mysql-0 Pod

    kubectl exec -i mysql-client -- \
    mysql -h mysql-0.mysql <<EOF
    CREATE DATABASE test;
    CREATE TABLE test.messages (message VARCHAR(250));
    INSERT INTO test.messages VALUES ('hello');
    EOF
  10. Read the data created above through the read endpoint mysql-read

    kubectl exec -it mysql-client -- \
    mysql -h mysql-read -e "SELECT (SELECT message FROM test.messages) as message, (SELECT @@server_id) server_id;"
  11. Verify the read endpoint mysql-read distributes read queries

    kubectl exec -it mysql-client -- \
    bash -c "while sleep 1; do mysql -h mysql-read -e 'SELECT (SELECT message FROM test.messages) as message, (SELECT @@server_id) server_id;'; done"
  12. Press Ctrl+C to stop the loop started above

  13. List the Pods included as targets in the mysql-read Endpoints

    kubectl get ep mysql-read \
    -o jsonpath='{.subsets[*].addresses[*].targetRef.name}{"\n"}'
  14. Modify the mysql-2 Pod so that its readinessProbe fails

    {
        kubectl exec mysql-2 -c mysql -- mv /usr/bin/mysql /usr/bin/mysql.off
        kubectl exec mysql-2 -c mysql -- cp /usr/bin/bash /usr/bin/mysql
    }
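
    With /usr/bin/mysql replaced, the readinessProbe command exits non-zero, so the Pod is marked NotReady and removed from Service endpoints; unlike a failing livenessProbe, a failing readinessProbe does not restart the container. To watch the READY column change (a sketch):

    kubectl get pod mysql-2 -w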
  15. Check the mysql-2 Pod's status

    kubectl get pod mysql-2
  16. Inspect the mysql-2 Pod's detailed status

    kubectl describe pod mysql-2
  17. Run queries against the read endpoint mysql-read and check whether traffic still reaches the mysql-2 Pod

    kubectl exec -it mysql-client -- \
    bash -c "while sleep 1; do mysql -h mysql-read -e 'SELECT (SELECT message FROM test.messages) as message, (SELECT @@server_id) server_id;'; done"
  18. Check whether the mysql-2 Pod is included as a target in the mysql-read Endpoints

    kubectl get ep mysql-read \
    -o jsonpath='{.subsets[*].addresses[*].targetRef.name}{"\n"}'
  19. Restore the mysql-2 Pod so that its readinessProbe succeeds

    kubectl exec mysql-2 -c mysql -- mv /usr/bin/mysql.off /usr/bin/mysql
  20. Check the mysql-2 Pod's status

    kubectl get pod mysql-2
  21. Check whether the mysql-2 Pod is included as a target in the mysql-read Endpoints

    kubectl get ep mysql-read \
    -o jsonpath='{.subsets[*].addresses[*].targetRef.name}{"\n"}'
  22. Scale the StatefulSet down to 0 replicas

    kubectl scale sts mysql --replicas=0
  23. Watch the Pod status

    watch kubectl get pod -l app=mysql
  24. Check whether the PVCs were deleted - scaling a StatefulSet down does not delete its PVCs

    kubectl get pvc -l app=mysql 
  25. Scale the StatefulSet back up to 3 replicas

    kubectl scale sts mysql --replicas=3
  26. Watch the Pod status

    watch kubectl get pod -l app=mysql
  27. Check whether new PVCs were created - the existing claims should be reused

    kubectl get pvc -l app=mysql
  28. Verify the previously created database and record still exist

    kubectl exec -it mysql-client -- \
    bash -c "while sleep 1; do mysql -h mysql-read -e 'SELECT (SELECT message FROM test.messages) as message, (SELECT @@server_id) server_id;'; done"
  29. Press Ctrl+C to stop the loop started above

  30. Delete the StatefulSet

    kubectl delete sts mysql
  31. Check whether the PVCs were deleted - deleting a StatefulSet also leaves its PVCs in place

    kubectl get pvc -l app=mysql
  32. Recreate the StatefulSet

    cat <<'EOF' | kubectl apply -f -
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: mysql
    spec:
      selector:
        matchLabels:
          app: mysql
      serviceName: mysql
      replicas: 3
      template:
        metadata:
          labels:
            app: mysql
        spec:
          initContainers:
          - name: init-mysql
            image: mysql:5.7
            command:
            - bash
            - "-c"
            - |
              set -ex
              # Generate mysql server-id from pod ordinal index.
              [[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1
              ordinal=${BASH_REMATCH[1]}
              echo [mysqld] > /mnt/conf.d/server-id.cnf
              # Add an offset to avoid reserved server-id=0 value.
              echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
              # Copy appropriate conf.d files from config-map to emptyDir.
              if [[ $ordinal -eq 0 ]]; then
                cp /mnt/config-map/primary.cnf /mnt/conf.d/
              else
                cp /mnt/config-map/replica.cnf /mnt/conf.d/
              fi          
            volumeMounts:
            - name: conf
              mountPath: /mnt/conf.d
            - name: config-map
              mountPath: /mnt/config-map
          - name: clone-mysql
            image: gcr.io/google-samples/xtrabackup:1.0
            command:
            - bash
            - "-c"
            - |
              set -ex
              # Skip the clone if data already exists.
              [[ -d /var/lib/mysql/mysql ]] && exit 0
              # Skip the clone on primary (ordinal index 0).
              [[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1
              ordinal=${BASH_REMATCH[1]}
              [[ $ordinal -eq 0 ]] && exit 0
              # Clone data from previous peer.
              ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
              # Prepare the backup.
              xtrabackup --prepare --target-dir=/var/lib/mysql          
            volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
          containers:
          - name: mysql
            image: mysql:5.7
            env:
            - name: MYSQL_ALLOW_EMPTY_PASSWORD
              value: "1"
            ports:
            - name: mysql
              containerPort: 3306
            volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
            resources:
              requests:
                cpu: 500m
                memory: 500Mi
            livenessProbe:
              exec:
                command: ["mysqladmin", "ping"]
              initialDelaySeconds: 30
              periodSeconds: 10
              timeoutSeconds: 5
            readinessProbe:
              exec:
                # Check we can execute queries over TCP (skip-networking is off).
                command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
              initialDelaySeconds: 5
              periodSeconds: 2
              timeoutSeconds: 1
          - name: xtrabackup
            image: gcr.io/google-samples/xtrabackup:1.0
            ports:
            - name: xtrabackup
              containerPort: 3307
            command:
            - bash
            - "-c"
            - |
              set -ex
              cd /var/lib/mysql
    
              # Determine binlog position of cloned data, if any.
              if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
                # XtraBackup already generated a partial "CHANGE MASTER TO" query
                # because we're cloning from an existing replica. (Need to remove the trailing semicolon!)
                cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
                # Ignore xtrabackup_binlog_info in this case (it's useless).
                rm -f xtrabackup_slave_info xtrabackup_binlog_info
              elif [[ -f xtrabackup_binlog_info ]]; then
                # We're cloning directly from primary. Parse binlog position.
                [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
                rm -f xtrabackup_binlog_info xtrabackup_slave_info
                echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                      MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
              fi
    
              # Check if we need to complete a clone by starting replication.
              if [[ -f change_master_to.sql.in ]]; then
                echo "Waiting for mysqld to be ready (accepting connections)"
                until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
    
                echo "Initializing replication from clone position"
                mysql -h 127.0.0.1 \
                      -e "$(<change_master_to.sql.in), \
                              MASTER_HOST='mysql-0.mysql', \
                              MASTER_USER='root', \
                              MASTER_PASSWORD='', \
                              MASTER_CONNECT_RETRY=10; \
                            START SLAVE;" || exit 1
                # In case of container restart, attempt this at-most-once.
                mv change_master_to.sql.in change_master_to.sql.orig
              fi
    
              # Start a server to send backups when requested by peers.
              exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
                "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"          
            volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
            resources:
              requests:
                cpu: 100m
                memory: 100Mi
          volumes:
          - name: conf
            emptyDir: {}
          - name: config-map
            configMap:
              name: mysql
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi
    EOF
  33. Watch the Pods being created

    watch kubectl get pod -l app=mysql
  34. Check whether new PVCs were created

    kubectl get pvc -l app=mysql
  35. Verify the previously created database and record still exist

    kubectl exec -it mysql-client -- \
    bash -c "while sleep 1; do mysql -h mysql-read -e 'SELECT (SELECT message FROM test.messages) as message, (SELECT @@server_id) server_id;'; done"
  36. Press Ctrl+C to stop the loop started above

  37. Delete the resources

    {
        kubectl delete sts mysql
        kubectl delete pvc -l app=mysql
        kubectl delete svc mysql mysql-read
        kubectl delete cm mysql
        kubectl delete pod mysql-client
    }
