Hands-on Lab
StatefulSet
Create a Service
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: httpd
spec:
  ports:
  - port: 80
  clusterIP: None
  selector:
    app: httpd
EOF
Check the created Service and Endpoints
kubectl get svc,ep -l app=httpd
Create a StatefulSet
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: httpd
  labels:
    app: httpd
spec:
  serviceName: web
  replicas: 1
  selector:
    matchLabels:
      app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: httpd
EOF
Check the created StatefulSet and Pod
kubectl get sts,pod -l app=httpd
Check the Endpoints
kubectl get ep web
Check the details of the Pod included in the Endpoints
kubectl get ep web -o=jsonpath='{.subsets}' | jq
Check the name and IP of the created Pod
kubectl get pod -l app=httpd -o wide
Create a Pod
kubectl run curl --image=curlimages/curl --command -- sleep 3600
Call the web server running in the previously created Pod with cURL
kubectl exec curl -- curl -s httpd-0.web
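Each Pod in a StatefulSet gets a stable DNS record of the form <pod-name>.<service-name> through the headless Service, which is why httpd-0.web resolves here. As a minimal sketch of inspecting these records (the dnsutils image from the Kubernetes DNS debugging guide is an assumption; any image that ships nslookup works):
# Launch a throwaway utility Pod and query the cluster DNS
kubectl run dnsutils --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 --restart=Never --command -- sleep 3600
kubectl exec dnsutils -- nslookup httpd-0.web   # per-Pod A record
kubectl exec dnsutils -- nslookup web           # headless Service returns the Pod IPs
kubectl delete pod dnsutils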
Scale the StatefulSet to 3 replicas
kubectl scale statefulset httpd --replicas=3
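A StatefulSet scales out one Pod at a time in ascending ordinal order; httpd-2 is only started after httpd-1 is Running and Ready. You can watch this ordering while the scale-out proceeds (press Ctrl+C to stop):
kubectl get pod -l app=httpd -w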
Check the created Pods
kubectl get pod -l app=httpd
Call the web servers running in the newly created Pods with cURL
{
  kubectl exec curl -- curl -s httpd-1.web
  kubectl exec curl -- curl -s httpd-2.web
}
Delete the first Pod
kubectl delete pod httpd-0
Verify that the Pod is recreated
kubectl get pod -l app=httpd
Delete the created resources
{
  kubectl delete sts httpd
  kubectl delete svc web
  kubectl delete pod curl
}
Persistent Volume
Review the types of Persistent Volumes - https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes
Check the currently configured StorageClass
kubectl get storageclass
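The default StorageClass is the one annotated with storageclass.kubernetes.io/is-default-class=true; it is what serves PVCs that omit storageClassName. As a sketch (note the escaped dots in the annotation key), this jsonpath filter prints the default class name:
kubectl get storageclass \
  -o=jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")].metadata.name}{"\n"}'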
Create an NFS server
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nfs
  labels:
    app: nfs
spec:
  containers:
  - name: nfs
    image: erichough/nfs-server
    ports:
    - containerPort: 2049
    volumeMounts:
    - name: data
      mountPath: /data
    - name: kernel-module
      mountPath: /lib/modules
      readOnly: true
    env:
    - name: NFS_EXPORT_0
      value: "/data/ *(rw,sync,no_root_squash,subtree_check)"
    securityContext:
      capabilities:
        add:
        - CAP_SYS_ADMIN
        - SYS_MODULE
  initContainers:
  - name: index
    image: busybox
    command: ["/bin/chmod", "-R", "1777", "/data"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}
  - name: kernel-module
    hostPath:
      path: /lib/modules
EOF
Verify the NFS server was created
kubectl get pod -l app=nfs
Create an HTML file on the NFS server
kubectl exec nfs -c nfs -- \
  bash -c 'echo "Greeting from NFS-powered NGINX" > /data/index.html'
Verify the HTML file was created
kubectl exec nfs -c nfs -- cat /data/index.html
Create a PersistentVolume
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data
    server: $(kubectl get pod -l app=nfs -o=jsonpath='{.items[0].status.podIP}')
    readOnly: false
EOF
Check the created PersistentVolume
kubectl get pv my-pv
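A PV that no claim has bound yet reports the Available phase; after the PVC created below binds it, the phase changes to Bound. To check just the phase:
kubectl get pv my-pv -o=jsonpath='{.status.phase}{"\n"}'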
Check whether any PVCs exist
kubectl get pvc
Create a PVC
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: manual
EOF
Check the created PVC
kubectl get pvc my-pvc
Check the PV
kubectl get pv my-pv
Deploy the demo application
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-pvc
EOF
Verify that the Pods of the application deployed above were created successfully
kubectl get pod -l app=nginx
Verify that the deployed NGINX web servers all serve the same page
for pod in $(kubectl get pod -l app=nginx -o=jsonpath='{.items[*].metadata.name}')
do
  kubectl exec $pod -- curl -s localhost
done
Check the PVC status
kubectl get pvc
Connect to one Pod and create a file in the path where the PV is mounted
kubectl exec deploy/nginx -- \
  bash -c "echo helloworld > /usr/share/nginx/html/hello.txt"
Connect to the NFS server and verify the file created above is visible
kubectl exec nfs -c nfs -- cat /data/hello.txt
Verify the file created above is visible from all deployed Pods
for pod in $(kubectl get pod -l app=nginx -o=jsonpath='{.items[*].metadata.name}')
do
  kubectl exec $pod -- curl -s localhost/hello.txt
done
Delete the demo application
kubectl delete deploy nginx
Connect to the NFS server and check whether the file was deleted
kubectl exec nfs -c nfs -- ls /data
Check the PVC status
kubectl get pvc my-pvc
Redeploy the demo application
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-pvc
EOF
Verify the previously created file still exists
for pod in $(kubectl get pod -l app=nginx -o=jsonpath='{.items[*].metadata.name}')
do
  kubectl exec $pod -- ls /usr/share/nginx/html
done
Try to delete the PVC
kubectl delete pvc my-pvc
If the command above does not complete, press Ctrl+C to terminate it
Check the PVC status
kubectl get pvc my-pvc
Check the finalizers listed on the PVC - https://kubernetes.io/docs/concepts/storage/persistent-volumes/#storage-object-in-use-protection
kubectl get pvc my-pvc -o=jsonpath='{.metadata.finalizers}{"\n"}'
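The kubernetes.io/pvc-protection finalizer is what holds the PVC in Terminating until no Pod mounts it; this lab resolves that the intended way by deleting the Deployment below. For reference only, a finalizer can also be force-removed with a patch, which bypasses storage-object-in-use protection and is generally unsafe:
# Illustration only -- not used in this lab
kubectl patch pvc my-pvc --type=json -p='[{"op": "remove", "path": "/metadata/finalizers"}]'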
Delete the demo application
kubectl delete deploy nginx
Verify the PVC was deleted
kubectl get pvc my-pvc
Check the PV status
kubectl get pv my-pv
Create a PVC
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: manual
EOF
Check the PVC status
kubectl get pvc my-pvc
Check the PVC reference (claimRef) recorded on the PV
kubectl get pv my-pv -o=jsonpath='{.spec.claimRef}' | jq
Check the details of the newly created PVC
kubectl get pvc my-pvc -o yaml
Modify the PV - delete spec.claimRef
kubectl patch pv my-pv \
  --type=json -p='[{"op": "remove", "path": "/spec/claimRef"}]'
Check the PVC status
kubectl get pvc my-pvc
Check the PVC reference (claimRef) recorded on the PV
kubectl get pv my-pv -o=jsonpath='{.spec.claimRef}' | jq
Check the PVC's uid - it now matches the uid in the PV's claimRef, which is why the new PVC could bind
kubectl get pvc my-pvc -o=jsonpath='{.metadata.uid}{"\n"}'
Try to delete the PV
kubectl delete pv my-pv
If the command above does not complete, press Ctrl+C to terminate it
Check the PV status
kubectl get pv my-pv
Check the finalizers listed on the PV - https://kubernetes.io/docs/concepts/storage/persistent-volumes/#storage-object-in-use-protection
kubectl get pv my-pv -o=jsonpath='{.metadata.finalizers}{"\n"}'
Delete the PVC
kubectl delete pvc my-pvc
Verify the PV was deleted
kubectl get pv my-pv
Connect to the NFS server and check whether the file was deleted
kubectl exec nfs -c nfs -- ls /data
Delete the NFS server
kubectl delete pod nfs
Dynamic Volume Provisioning
Check the currently configured StorageClass
kubectl get storageclass
Create a PVC
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
EOF
Check the PVC status
kubectl get pvc my-pvc
Check why the PVC is in Pending state
kubectl describe pvc my-pvc
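The describe output surfaces the reason as Events; you can also filter the Event objects directly, as in this sketch:
kubectl get events \
  --field-selector involvedObject.kind=PersistentVolumeClaim,involvedObject.name=my-pvc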
Deploy the demo application
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /mnt
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-pvc
EOF
Verify that the Pod of the application deployed above was created successfully
kubectl get pod -l app=nginx
Check why the Pod is in Pending state
kubectl describe pod -l app=nginx
Check why the PVC is in Pending state
kubectl describe pvc my-pvc
Recreate the PVC
cat <<EOF | kubectl replace --force -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
EOF
Check the Pod status - it may take some time to reach Running
kubectl get pod -l app=nginx
Check the PVC status
kubectl get pvc my-pvc
Verify the PV specified in the PVC was created
kubectl get pv $(kubectl get pvc my-pvc -o=jsonpath='{.spec.volumeName}')
Check whether the volume was actually allocated with the size requested in the PVC
diff <(kubectl get pvc my-pvc -o=jsonpath='{.spec}' | jq) \
     <(kubectl get pvc my-pvc -o=jsonpath='{.status}' | jq) -y
Check the minimum volume size of EBS (gp2) - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html#solid-state-drives
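Because gp2 volumes cannot be smaller than 1 GiB, the 200Mi request is rounded up by the provisioner. A compact way to compare requested and allocated sizes:
kubectl get pvc my-pvc \
  -o=jsonpath='requested: {.spec.resources.requests.storage}, allocated: {.status.capacity.storage}{"\n"}'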
Delete the demo application
kubectl delete deploy nginx
Delete the PVC
kubectl delete pvc my-pvc
Verify the PV was deleted - it may take some time
kubectl get pv
Create a LimitRange
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: LimitRange
metadata:
  name: storagelimits
spec:
  limits:
  - type: PersistentVolumeClaim
    max:
      storage: 16Ti
    min:
      storage: 1Gi
EOF
Check the LimitRange
kubectl describe ns default
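You can also inspect the LimitRange object directly instead of describing the whole namespace:
kubectl describe limitrange storagelimits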
Try to create a PVC - this should be rejected because 200Mi is below the LimitRange minimum
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
EOF
Create a PVC
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
EOF
Check the PVC status
kubectl get pvc my-pvc
Deploy the demo application
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /mnt
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-pvc
EOF
Check the Pod status
kubectl get pod -l app=nginx
Check the PVC status
kubectl get pvc my-pvc
Verify the PV specified in the PVC was created
kubectl get pv $(kubectl get pvc my-pvc -o=jsonpath='{.spec.volumeName}')
Check whether the volume was actually allocated with the size requested in the PVC
diff <(kubectl get pvc my-pvc -o=jsonpath='{.spec}' | jq) \
     <(kubectl get pvc my-pvc -o=jsonpath='{.status}' | jq) -y
Scale the Deployment to 10 replicas
kubectl scale deploy nginx --replicas=10
Check the status of the additionally created Pods
kubectl get pod -l app=nginx
If any Pod is in Pending state, check why
kubectl describe pod \
  $(kubectl get pod -l app=nginx \
  -o=jsonpath='{.items[?(@.status.phase=="Pending")].metadata.name}')
Check the node affinity specified on the PV - https://kubernetes.io/docs/concepts/storage/persistent-volumes/#node-affinity
kubectl get pv $(kubectl get pvc my-pvc -o=jsonpath='{.spec.volumeName}') \
  -o=jsonpath='{.spec.nodeAffinity}' | jq
Check each Node's availability zone
kubectl get nodes --label-columns topology.kubernetes.io/zone
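An EBS-backed PV can only be mounted by Nodes in the same availability zone as the underlying volume, which is what the node affinity above encodes. If the provisioner labeled the PV with its zone (the in-tree EBS provisioner does), you can compare it against the Node zones:
kubectl get pv -L topology.kubernetes.io/zone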
Check the Nodes where the Pods are deployed
kubectl get pod -l app=nginx -o wide
Scale the Deployment to 1 replica
kubectl scale deploy nginx --replicas=1
Check the name of the Node the Pod is running on
kubectl get pod -l app=nginx \
  -o=jsonpath='{.items[0].spec.nodeName}{"\n"}'
Assuming the Node has a problem, evict its Pods to other Nodes and set the Node to SchedulingDisabled
kubectl drain $(kubectl get pod -l app=nginx \
  -o=jsonpath='{.items[0].spec.nodeName}') \
  --ignore-daemonsets --delete-emptydir-data
Check whether the Pod is recreated
kubectl get pod -l app=nginx
If the Pod is Pending, check why
kubectl describe pod \
  $(kubectl get pod -l app=nginx \
  -o=jsonpath='{.items[?(@.status.phase=="Pending")].metadata.name}')
Check the Node status
kubectl get node
Check the taints applied to the Nodes
kubectl get nodes \
  -o=custom-columns=NodeName:.metadata.name,TaintKey:.spec.taints[*].key,TaintValue:.spec.taints[*].value,TaintEffect:.spec.taints[*].effect
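A quicker, unstructured alternative is to grep the describe output; the drained Node should show node.kubernetes.io/unschedulable:
kubectl describe nodes | grep -i taints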
Check the PV name
kubectl get pvc my-pvc -o=jsonpath='{.spec.volumeName}{"\n"}'
Find the EBS volume backing the PV and store its volume ID in an environment variable
{
  export VOL_ID=$(kubectl get pv $(kubectl get pvc my-pvc \
    -o=jsonpath='{.spec.volumeName}') \
    -o jsonpath='{.spec.awsElasticBlockStore.volumeID}' \
    | grep -oE "vol-[a-z0-9]+")
  echo $VOL_ID
}
Check the status of the EBS volume identified above
aws ec2 describe-volumes --volume-ids $VOL_ID
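The full describe-volumes output is verbose; a JMESPath --query narrows it to the fields of interest. With the consuming Pod gone, the volume typically detaches and reports the available state:
aws ec2 describe-volumes --volume-ids $VOL_ID \
  --query 'Volumes[0].{State:State,AZ:AvailabilityZone,SizeGiB:Size}'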
Assuming the Node has recovered, remove the taint by uncordoning it
kubectl uncordon \
  $(kubectl get node -o=jsonpath='{.items[?(@.spec.unschedulable==true)].metadata.name}')
Check the Node status
kubectl get node
Check whether the Pod is created
kubectl get pod -l app=nginx
Check the status of the EBS volume backing the PV
aws ec2 describe-volumes --volume-ids $VOL_ID
Delete the resources
{
  kubectl delete deploy nginx
  kubectl delete pvc my-pvc
  kubectl delete limitrange storagelimits
}
Verify the PV was deleted - it may take some time
kubectl get pv
Verify the EBS volume backing the PV was deleted
aws ec2 describe-volumes --volume-ids $VOL_ID
MySQL Cluster Setup
Create a ConfigMap
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  primary.cnf: |
    # Apply this config only on the primary.
    [mysqld]
    log-bin
  replica.cnf: |
    # Apply this config only on replicas.
    [mysqld]
    super-read-only
EOF
Create the Services
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql
EOF
Create a StatefulSet
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: init-mysql
        image: mysql:5.7
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate mysql server-id from pod ordinal index.
          [[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          # Add an offset to avoid reserved server-id=0 value.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Copy appropriate conf.d files from config-map to emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/primary.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/replica.cnf /mnt/conf.d/
          fi
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-mysql
        image: gcr.io/google-samples/xtrabackup:1.0
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if data already exists.
          [[ -d /var/lib/mysql/mysql ]] && exit 0
          # Skip the clone on primary (ordinal index 0).
          [[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          [[ $ordinal -eq 0 ]] && exit 0
          # Clone data from previous peer.
          ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
          # Prepare the backup.
          xtrabackup --prepare --target-dir=/var/lib/mysql
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 500m
            memory: 500Mi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      - name: xtrabackup
        image: gcr.io/google-samples/xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql
          # Determine binlog position of cloned data, if any.
          if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing replica. (Need to remove the trailing semicolon!)
            cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_slave_info xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from primary. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm -f xtrabackup_binlog_info xtrabackup_slave_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi
          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
            echo "Initializing replication from clone position"
            mysql -h 127.0.0.1 \
                  -e "$(<change_master_to.sql.in), \
                      MASTER_HOST='mysql-0.mysql', \
                      MASTER_USER='root', \
                      MASTER_PASSWORD='', \
                      MASTER_CONNECT_RETRY=10; \
                      START SLAVE;" || exit 1
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
          fi
          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
EOF
Watch the Pods being created
watch kubectl get pod -l app=mysql
Check the created PVCs
kubectl get pvc -l app=mysql
Verify the PVs specified in the PVCs were created
kubectl get pv \
  $(kubectl get pvc -l app=mysql -o=jsonpath='{.items[*].spec.volumeName}')
Check the EBS volume mapped to each PV
for pv in $(kubectl get pvc -l app=mysql -o=jsonpath='{.items[*].spec.volumeName}')
do
  aws ec2 describe-volumes --filters Name=tag:kubernetes.io/created-for/pv/name,Values=$pv
done
Create a Pod
kubectl run mysql-client --image=mysql:5.7 -- sleep 3600
Create a database and a record through the mysql-0 Pod
kubectl exec -i mysql-client -- \
  mysql -h mysql-0.mysql <<EOF
CREATE DATABASE test;
CREATE TABLE test.messages (message VARCHAR(250));
INSERT INTO test.messages VALUES ('hello');
EOF
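Since mysql-0 is the primary and writes replicate down to the ordinal replicas, you can also query a replica directly through its stable DNS name (assuming replication has caught up):
kubectl exec -it mysql-client -- \
  mysql -h mysql-1.mysql -e "SELECT * FROM test.messages;"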
Query the data created above through the read endpoint mysql-read
kubectl exec -it mysql-client -- \
  mysql -h mysql-read -e "SELECT (SELECT message FROM test.messages) as message, (SELECT @@server_id) server_id;"
Verify the read endpoint mysql-read distributes read queries across the replicas
kubectl exec -it mysql-client -- \
  bash -c "while sleep 1; do mysql -h mysql-read -e 'SELECT (SELECT message FROM test.messages) as message, (SELECT @@server_id) server_id;'; done"
Press Ctrl+C to stop the process started above
List the Pods included as targets in the mysql-read Endpoints
kubectl get ep mysql-read \
  -o jsonpath='{.subsets[*].addresses[*].targetRef.name}{"\n"}'
Modify the mysql-2 Pod so that its readiness probe fails
{
  kubectl exec mysql-2 -c mysql -- mv /usr/bin/mysql /usr/bin/mysql.off
  kubectl exec mysql-2 -c mysql -- cp /usr/bin/bash /usr/bin/mysql
}
Check the status of the mysql-2 Pod
kubectl get pod mysql-2
Check the detailed status of the mysql-2 Pod
kubectl describe pod mysql-2
Run queries against the read endpoint mysql-read and check whether traffic still reaches the mysql-2 Pod
kubectl exec -it mysql-client -- \
  bash -c "while sleep 1; do mysql -h mysql-read -e 'SELECT (SELECT message FROM test.messages) as message, (SELECT @@server_id) server_id;'; done"
Check whether the mysql-2 Pod is still included as a target in the mysql-read Endpoints
kubectl get ep mysql-read \
  -o jsonpath='{.subsets[*].addresses[*].targetRef.name}{"\n"}'
Modify the mysql-2 Pod so that its readiness probe succeeds again
kubectl exec mysql-2 -c mysql -- mv /usr/bin/mysql.off /usr/bin/mysql
Check the status of the mysql-2 Pod
kubectl get pod mysql-2
Check whether the mysql-2 Pod is included again as a target in the mysql-read Endpoints
kubectl get ep mysql-read \
  -o jsonpath='{.subsets[*].addresses[*].targetRef.name}{"\n"}'
Scale the StatefulSet down to 0 replicas
kubectl scale sts mysql --replicas=0
Check the Pod status
watch kubectl get pod -l app=mysql
Check whether the PVCs were deleted
kubectl get pvc -l app=mysql
Scale the StatefulSet back to 3 replicas
kubectl scale sts mysql --replicas=3
Check the Pod status
watch kubectl get pod -l app=mysql
Check whether new PVCs were created
kubectl get pvc -l app=mysql
Verify the previously created database and record still exist
kubectl exec -it mysql-client -- \
  bash -c "while sleep 1; do mysql -h mysql-read -e 'SELECT (SELECT message FROM test.messages) as message, (SELECT @@server_id) server_id;'; done"
Press Ctrl+C to stop the process started above
Delete the StatefulSet
kubectl delete sts mysql
Check whether the PVCs were deleted
kubectl get pvc -l app=mysql
Recreate the StatefulSet
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: init-mysql
        image: mysql:5.7
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate mysql server-id from pod ordinal index.
          [[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          # Add an offset to avoid reserved server-id=0 value.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Copy appropriate conf.d files from config-map to emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/primary.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/replica.cnf /mnt/conf.d/
          fi
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-mysql
        image: gcr.io/google-samples/xtrabackup:1.0
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if data already exists.
          [[ -d /var/lib/mysql/mysql ]] && exit 0
          # Skip the clone on primary (ordinal index 0).
          [[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          [[ $ordinal -eq 0 ]] && exit 0
          # Clone data from previous peer.
          ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
          # Prepare the backup.
          xtrabackup --prepare --target-dir=/var/lib/mysql
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 500m
            memory: 500Mi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      - name: xtrabackup
        image: gcr.io/google-samples/xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql
          # Determine binlog position of cloned data, if any.
          if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing replica. (Need to remove the trailing semicolon!)
            cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_slave_info xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from primary. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm -f xtrabackup_binlog_info xtrabackup_slave_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi
          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
            echo "Initializing replication from clone position"
            mysql -h 127.0.0.1 \
                  -e "$(<change_master_to.sql.in), \
                      MASTER_HOST='mysql-0.mysql', \
                      MASTER_USER='root', \
                      MASTER_PASSWORD='', \
                      MASTER_CONNECT_RETRY=10; \
                      START SLAVE;" || exit 1
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
          fi
          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
EOF
Watch the Pods being created
watch kubectl get pod -l app=mysql
Check whether new PVCs were created
kubectl get pvc -l app=mysql
Verify the previously created database and record still exist
kubectl exec -it mysql-client -- \
  bash -c "while sleep 1; do mysql -h mysql-read -e 'SELECT (SELECT message FROM test.messages) as message, (SELECT @@server_id) server_id;'; done"
Press Ctrl+C to stop the process started above
Delete the resources
{
  kubectl delete sts mysql
  kubectl delete pvc -l app=mysql
  kubectl delete svc mysql mysql-read
  kubectl delete cm mysql
  kubectl delete pod mysql-client
}