Hands-on Lab
Prerequisites
Install the Ingress-NGINX controller
Install ExternalDNS
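Both prerequisites are covered in their own sections. If they are not installed yet, a minimal sketch with Helm is shown below; the repository URLs and chart names are the upstream defaults, and ExternalDNS additionally needs Route 53 permissions (for example via IRSA) and a domain filter, which are environment-specific and not shown here.
# Ingress-NGINX controller
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  -n ingress-nginx --create-namespace
# ExternalDNS (kubernetes-sigs chart); Route 53 access via IRSA is assumed and must be configured separately
helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
helm upgrade --install external-dns external-dns/external-dns \
  -n external-dns --create-namespace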
Set Up the Ceph Cluster
Find the EKS cluster name from the label assigned to the nodes and save it as an environment variable
{
export CLUSTER_NAME=$(kubectl get node \
  -o=jsonpath='{.items[0].metadata.labels.alpha\.eksctl\.io\/cluster-name}')
echo $CLUSTER_NAME
}
Create a node group to be used by the Ceph cluster
{
cat <<EOF > ceph-node.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: $CLUSTER_NAME
  region: ap-northeast-2

managedNodeGroups:
- name: ceph-node
  instanceType: c5.2xlarge
  desiredCapacity: 3
  volumeSize: 20
  labels:
    role: ceph
  additionalVolumes:
  - volumeName: /dev/sdf
    volumeSize: 20
    volumeType: gp3
EOF
eksctl create nodegroup --config-file=ceph-node.yaml
}
Check the created nodes
kubectl get node -l role=ceph
Create a namespace
kubectl create ns rook-ceph
Add the Helm chart repository
{
helm repo add rook-release https://charts.rook.io/release
helm repo update
}
Create a file specifying the values to pass to the Ceph Operator chart - https://rook.io/docs/rook/v1.12/Helm-Charts/operator-chart/#configuration
cat <<EOF > rook-ceph.yaml
nodeSelector:
  role: ceph
csi:
  provisionerNodeAffinity: role=ceph
admissionController:
  nodeAffinity: role=ceph
priorityClassName: system-cluster-critical
EOF
Install the Ceph Operator
helm install rook-ceph rook-release/rook-ceph -n rook-ceph -f rook-ceph.yaml
Check the status of the Ceph Operator
kubectl -n rook-ceph get pod -l app=rook-ceph-operator
Check the Ceph Operator logs
kubectl -n rook-ceph logs deploy/rook-ceph-operator
Create a file specifying the values to pass to the Ceph Cluster chart - https://rook.io/docs/rook/v1.12/Helm-Charts/ceph-cluster-chart/#configuration
cat <<EOF > rook-ceph-cluster.yaml
cephClusterSpec:
  cephVersion:
    image: quay.io/ceph/ceph:v17.2.5
  storage:
    useAllNodes: false
    useAllDevices: false
    nodes:
    - name: NODE_NAME
      devices:
      - name: DEVICE_NAME
    - name: NODE_NAME
      devices:
      - name: DEVICE_NAME
    - name: NODE_NAME
      devices:
      - name: DEVICE_NAME
  placement:
    all:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: role
              operator: In
              values:
              - ceph
  crashCollector:
    disable: true
  dashboard:
    enabled: true
    ssl: false

cephFileSystems:
- name: ceph-filesystem
  spec:
    metadataPool:
      replicated:
        size: 3
    dataPools:
    - failureDomain: host
      replicated:
        size: 3
      name: data0
    metadataServer:
      activeCount: 1
      activeStandby: true
      resources:
        limits:
          cpu: "2000m"
          memory: "4Gi"
        requests:
          cpu: "1000m"
          memory: "4Gi"
      priorityClassName: system-cluster-critical
      placement:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: role
                operator: In
                values:
                - ceph
  storageClass:
    enabled: true
    isDefault: false
    name: ceph-filesystem
    pool: data0
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    volumeBindingMode: "Immediate"
    mountOptions: []
    parameters:
      csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
      csi.storage.k8s.io/provisioner-secret-namespace: "{{ .Release.Namespace }}"
      csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
      csi.storage.k8s.io/controller-expand-secret-namespace: "{{ .Release.Namespace }}"
      csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
      csi.storage.k8s.io/node-stage-secret-namespace: "{{ .Release.Namespace }}"
      csi.storage.k8s.io/fstype: ext4

cephObjectStores:
- name: ceph-objectstore
  spec:
    metadataPool:
      failureDomain: host
      replicated:
        size: 3
    dataPool:
      failureDomain: host
      erasureCoded:
        dataChunks: 2
        codingChunks: 1
    preservePoolsOnDelete: true
    gateway:
      port: 80
      resources:
        limits:
          cpu: "2000m"
          memory: "2Gi"
        requests:
          cpu: "1000m"
          memory: "1Gi"
      instances: 1
      priorityClassName: system-cluster-critical
      placement:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: role
                operator: In
                values:
                - ceph
  storageClass:
    enabled: true
    name: ceph-bucket
    reclaimPolicy: Delete
    volumeBindingMode: "Immediate"
    parameters:
      region: ap-northeast-2

ingress:
  dashboard:
    ingressClassName: nginx
    host:
      name: HOST_NAME

toolbox:
  enabled: true
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: role
            operator: In
            values:
            - ceph

monitoring:
  enabled: false
  createPrometheusRules: false
EOF
Open the file created above and replace NODE_NAME, DEVICE_NAME, and HOST_NAME with actual values
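For reference, NODE_NAME can be taken from the label selector used earlier, as shown below. DEVICE_NAME depends on the instance type: on Nitro-based instances such as c5.2xlarge, the additional EBS volume attached as /dev/sdf typically appears as an NVMe device (for example /dev/nvme1n1), so verify the actual device name on the node before editing the file.
# List the Ceph node names to substitute for NODE_NAME
kubectl get node -l role=ceph -o jsonpath='{.items[*].metadata.name}{"\n"}'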
Install the Ceph Cluster
helm install rook-ceph-cluster rook-release/rook-ceph-cluster -n rook-ceph -f rook-ceph-cluster.yaml
Check the created CRDs
kubectl get crd
Check the CephCluster object
kubectl get cephclusters.ceph.rook.io rook-ceph -n rook-ceph
Check the Ceph Operator logs
kubectl -n rook-ceph logs deploy/rook-ceph-operator -f
List the created Pods
kubectl get pod -n rook-ceph
Check the Ceph Operator logs
kubectl -n rook-ceph logs deploy/rook-ceph-operator -f
Check the Ingress object created for the Ceph Cluster dashboard
kubectl get ing -n rook-ceph
Retrieve the password needed to log in to the dashboard
kubectl -n rook-ceph get secret rook-ceph-dashboard-password \
  -o jsonpath={.data.password} | base64 -d && echo
Open a web browser, go to the dashboard, and log in (the user name is admin; the password is the value retrieved above)
Verify that the cluster is healthy
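The health check can also be done from the command line. A minimal sketch, assuming the toolbox Deployment created by the chart (toolbox.enabled: true) is named rook-ceph-tools:
# Overall cluster health as reported by Ceph
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
# The CephCluster object should also report HEALTH_OK
kubectl -n rook-ceph get cephcluster rook-ceph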
Check that the StorageClasses were created
kubectl get sc
Volume Provisioning
Check whether any PVCs exist
kubectl get pvc
Create a PVC
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-data
spec:
  storageClassName: ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
Check that the PVC was created
kubectl get pvc nginx-data
Check the events on the PVC
kubectl describe pvc nginx-data
Check that a PV was created
kubectl get pv
Check the details of the created PV
kubectl get pv -o yaml
Go to the Ceph Cluster dashboard and check that a new block image was created
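The same can be cross-checked from the toolbox; this assumes the default block pool created by the rook-ceph-cluster chart is named ceph-blockpool:
# List RBD images in the block pool backing the ceph-block StorageClass
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- rbd ls -p ceph-blockpool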
Create a Deployment
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        volumeMounts:
        - name: data
          mountPath: /data
        ports:
        - containerPort: 80
          protocol: TCP
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nginx-data
EOF
Check the created Pod
kubectl get pod -l run=nginx
Create a file in the path where the PV is mounted
kubectl exec -it deploy/nginx -- bash -c "echo hello > /data/world.txt"
Check that the file was created correctly
kubectl exec -it deploy/nginx -- cat /data/world.txt
Modify the PVC (expand the requested storage)
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-data
spec:
  storageClassName: ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
EOF
Check that the PVC was modified
kubectl get pvc nginx-data
Check the events on the PVC
kubectl describe pvc nginx-data
Check that the PV was modified
kubectl get pv
Go to the Ceph Cluster dashboard and check whether the block image reflects the change
Scale the Deployment to 2 replicas
kubectl scale deploy nginx --replicas=2
Check whether the new Pod was created
kubectl get pod -l run=nginx
Check why the new Pod is not running (the ceph-block PVC is ReadWriteOnce, so it cannot be attached to more than one node at a time)
kubectl describe pod \
  $(kubectl get pod -l run=nginx --sort-by='.metadata.creationTimestamp' -o=jsonpath='{.items[-1].metadata.name}')
Delete the PVC
kubectl delete pvc nginx-data
If the command above does not return, press Ctrl+C
Check the PVC status
kubectl get pvc nginx-data
Check why the PVC is not deleted (the kubernetes.io/pvc-protection finalizer keeps it in the Terminating state while a Pod is still using it)
kubectl get pvc nginx-data -o yaml
Check the PV status
kubectl get pv
Delete the Deployment
kubectl delete deploy nginx
Check that the PVC was deleted
kubectl get pvc nginx-data
Check that the PV was deleted
kubectl get pv
Create a PVC (this time using the ceph-filesystem StorageClass with ReadWriteMany)
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-data
spec:
  storageClassName: ceph-filesystem
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
Check that the PVC was created
kubectl get pvc nginx-data
Check the events on the PVC
kubectl describe pvc nginx-data
Check that a PV was created
kubectl get pv
Check the details of the created PV
kubectl get pv -o yaml
Go to the Ceph Cluster dashboard and check that the new filesystem volume was created
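As a cross-check from the toolbox, the filesystem and the subvolume backing the PVC can be listed; this assumes the CSI driver places its subvolumes in the csi subvolume group:
# List CephFS filesystems and CSI-provisioned subvolumes
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph fs ls
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph fs subvolume ls ceph-filesystem csi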
Create a Deployment
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        volumeMounts:
        - name: data
          mountPath: /data
        ports:
        - containerPort: 80
          protocol: TCP
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nginx-data
EOF
Check the created Pod
kubectl get pod -l run=nginx
Create a file in the path where the PV is mounted
kubectl exec -it deploy/nginx -- bash -c "echo hello > /data/world.txt"
Check that the file was created correctly
kubectl exec -it deploy/nginx -- cat /data/world.txt
Scale the Deployment to 2 replicas
kubectl scale deploy nginx --replicas=2
Check that the new Pod was created
kubectl get pod -l run=nginx
Check that the file created by the other Pod is visible from the newly created Pod
kubectl exec -it $(kubectl get pod -l run=nginx --sort-by='.metadata.creationTimestamp' -o=jsonpath='{.items[-1].metadata.name}') \
  -- cat /data/world.txt
Delete the resources
{
kubectl delete deploy nginx
kubectl delete pvc nginx-data
}
Check the PV list
kubectl get pv
Cleanup
Delete the Ceph Cluster
helm uninstall rook-ceph-cluster -n rook-ceph
Check the Ceph Operator logs
kubectl -n rook-ceph logs deploy/rook-ceph-operator -f
List the Pods
kubectl get pod -n rook-ceph
Delete the Ceph Operator
helm uninstall rook-ceph -n rook-ceph
List the Pods
kubectl get pod -n rook-ceph
Delete the namespace
kubectl delete ns rook-ceph
Delete the node group
eksctl delete nodegroup --config-file=ceph-node.yaml --approve
Check the node list
kubectl get node