Hands-on Lab
EBS
Check the list of currently configured StorageClasses

```shell
kubectl get sc
```
Edit the ClusterConfig as follows

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: mycluster
  region: ap-northeast-2
  version: "1.29"
availabilityZones:
  - ap-northeast-2a
  - ap-northeast-2b
  - ap-northeast-2c
  - ap-northeast-2d
managedNodeGroups:
  - name: nodegroup
    instanceType: t3.small
    minSize: 2
    desiredCapacity: 2
    maxSize: 5
    volumeSize: 20
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
iam:
  withOIDC: true
addons:
  - name: aws-ebs-csi-driver
    wellKnownPolicies:
      ebsCSIController: true
```
Install the Amazon EBS CSI driver

```shell
eksctl create addon -f mycluster.yaml
```

Verify the installation

```shell
kubectl get pod -n kube-system \
  -l app.kubernetes.io/name=aws-ebs-csi-driver
```
Create a StorageClass - https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/parameters.md

```shell
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
  encrypted: "true"
EOF
```
Check the created StorageClass

```shell
kubectl get sc
```

Change the default StorageClass

```shell
{
  kubectl annotate sc gp2 storageclass.kubernetes.io/is-default-class- --overwrite
  kubectl annotate sc ebs storageclass.kubernetes.io/is-default-class=true --overwrite
}
```
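The trailing `-` in the first command removes the `is-default-class` annotation from `gp2`, while the second marks `ebs` as the new default. Afterwards the `ebs` StorageClass should carry the annotation roughly as follows (a sketch of the relevant fields, not the full manifest):

```yaml
# Sketch: expected annotation on the ebs StorageClass after the change
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
```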
Check the StorageClasses

```shell
kubectl get sc
```
Deploy a demo application

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: data
              mountPath: /mnt
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-pvc
EOF
```
Check the Pod status

```shell
kubectl get pod -l app=nginx
```

Check the events on the Pod

```shell
kubectl describe pod -l app=nginx
```

Check the PVC status

```shell
kubectl get pvc my-pvc
```

Check the events on the PVC

```shell
kubectl describe pvc my-pvc
```

Check the created PV

```shell
kubectl get pv $(kubectl get pvc my-pvc -o=jsonpath='{.spec.volumeName}')
```
Check the EBS volume backing the PV and store its volume ID in an environment variable

```shell
{
  export VOL_ID=$(kubectl get pv $(kubectl get pvc my-pvc \
    -o=jsonpath='{.spec.volumeName}') \
    -o=jsonpath='{.spec.csi.volumeHandle}')
  echo $VOL_ID
}
```
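The nested `$( … )` above chains two lookups: PVC → `spec.volumeName` → PV → `spec.csi.volumeHandle`. The same chain can be sanity-checked offline with `jq` against saved manifests; the file names and JSON below are hypothetical stand-ins for the live `kubectl get … -o json` output:

```shell
# Hypothetical saved manifests standing in for live kubectl output
cat > pvc.json <<'EOF'
{"spec": {"volumeName": "pvc-0123"}}
EOF
cat > pv-pvc-0123.json <<'EOF'
{"spec": {"csi": {"volumeHandle": "vol-0abc"}}}
EOF

# Step 1: PVC -> PV name; Step 2: PV -> EBS volume ID
PV_NAME=$(jq -r '.spec.volumeName' pvc.json)
VOL_ID=$(jq -r '.spec.csi.volumeHandle' "pv-${PV_NAME}.json")
echo $VOL_ID   # vol-0abc
```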
Check the state of the EBS volume identified above

```shell
aws ec2 describe-volumes --volume-ids $VOL_ID --no-cli-pager
```

Create a file in the path where the PV (EBS) is mounted

```shell
kubectl exec -it deploy/nginx -- \
  bash -c "echo hello EBS > /mnt/message.txt"
```

Check the file contents

```shell
kubectl exec -it deploy/nginx -- cat /mnt/message.txt
```

Delete the Pod

```shell
kubectl scale deploy nginx --replicas=0
```

Recreate the Pod

```shell
kubectl scale deploy nginx --replicas=1
```

Check the Pod status

```shell
kubectl get pod
```

Verify that the file created on the PV still exists

```shell
kubectl exec -it deploy/nginx -- cat /mnt/message.txt
```

Delete the resources

```shell
{
  kubectl delete deploy nginx
  kubectl delete pvc my-pvc
}
```
Verify that the PV has been deleted - deletion may take some time

```shell
kubectl get pv
```

Verify that the EBS volume backing the PV has been deleted

```shell
aws ec2 describe-volumes --volume-ids $VOL_ID
```
EFS
Edit the ClusterConfig as follows

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: mycluster
  region: ap-northeast-2
  version: "1.29"
availabilityZones:
  - ap-northeast-2a
  - ap-northeast-2b
  - ap-northeast-2c
  - ap-northeast-2d
managedNodeGroups:
  - name: nodegroup
    instanceType: t3.small
    minSize: 2
    desiredCapacity: 2
    maxSize: 5
    volumeSize: 20
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
iam:
  withOIDC: true
addons:
  - name: aws-ebs-csi-driver
    wellKnownPolicies:
      ebsCSIController: true
  - name: aws-efs-csi-driver
    wellKnownPolicies:
      efsCSIController: true
```
Install the Amazon EFS CSI driver

```shell
eksctl create addon -f mycluster.yaml
```

Verify the installation

```shell
kubectl get pod -n kube-system \
  -l app.kubernetes.io/name=aws-efs-csi-driver
```

Create an EFS file system

```shell
aws efs create-file-system --tags Key=Name,Value=my-efs --no-cli-pager
```
Check the created EFS file system

```shell
{
  export EFS_ID=$(aws efs describe-file-systems \
    --query 'FileSystems[? Tags[? (Key==`Name`) && Value==`my-efs`]].FileSystemId' \
    --output text)
  echo $EFS_ID
}
```
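The `--query` expression is JMESPath, evaluated client-side by the AWS CLI. As a rough offline illustration of the same tag filter, here is an equivalent `jq` selection over a hypothetical `describe-file-systems` response (sample IDs made up for the demo):

```shell
# Hypothetical sample of an `aws efs describe-file-systems` response
cat > fs.json <<'EOF'
{"FileSystems": [
  {"FileSystemId": "fs-111", "Tags": [{"Key": "Name", "Value": "other"}]},
  {"FileSystemId": "fs-222", "Tags": [{"Key": "Name", "Value": "my-efs"}]}
]}
EOF

# jq equivalent of the JMESPath filter: keep file systems whose Name tag is my-efs
EFS_DEMO_ID=$(jq -r '.FileSystems[]
  | select(any(.Tags[]; .Key == "Name" and .Value == "my-efs"))
  | .FileSystemId' fs.json)
echo $EFS_DEMO_ID   # fs-222
```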
List the private subnets in the VPC where the EKS cluster was created (note that repeating `--filters` makes the AWS CLI keep only the last occurrence, so both filters are passed to a single flag)

```shell
{
  export EKS_SUBNETS=$(aws ec2 describe-subnets \
    --filters "Name=tag:alpha.eksctl.io/cluster-name,Values=mycluster" \
              "Name=tag-key,Values=kubernetes.io/role/internal-elb" \
    --query "Subnets[*].SubnetId" \
    --output text)
  echo $EKS_SUBNETS
}
```
Create a security group for the EFS mount targets

```shell
{
  export EKS_VPC=$(aws ec2 describe-vpcs \
    --filters "Name=tag:alpha.eksctl.io/cluster-name,Values=mycluster" \
    --query "Vpcs[*].VpcId" \
    --output text)
  aws ec2 create-security-group \
    --description "my efs security group" \
    --group-name my-efs-sg \
    --vpc-id $EKS_VPC
}
```
Add an inbound rule so that the EKS nodes can reach EFS (both filters again go to a single `--filters` flag)

```shell
{
  export EFS_SG=$(aws ec2 describe-security-groups \
    --filters "Name=vpc-id,Values=$EKS_VPC" \
              "Name=group-name,Values=my-efs-sg" \
    --query "SecurityGroups[*].GroupId" \
    --output text)
  export EKS_SG=$(aws eks describe-cluster --name mycluster \
    --query 'cluster.resourcesVpcConfig.clusterSecurityGroupId' \
    --output text)
  aws ec2 authorize-security-group-ingress \
    --group-id $EFS_SG \
    --protocol tcp \
    --port 2049 \
    --source-group $EKS_SG \
    --no-cli-pager
}
```
Create the EFS mount targets

```shell
{
  for subnet in $EKS_SUBNETS
  do
    aws efs create-mount-target \
      --file-system-id $EFS_ID \
      --subnet-id $subnet \
      --security-groups $EFS_SG \
      --no-cli-pager
  done
}
```

Check the state of the EFS mount targets

```shell
aws efs describe-mount-targets --file-system-id $EFS_ID --no-cli-pager
```
Create a StorageClass

```shell
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs
provisioner: efs.csi.aws.com
EOF
```

Create a PV

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-efs
spec:
  storageClassName: efs
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: $EFS_ID
EOF
```

Create a PVC

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-efs
spec:
  storageClassName: efs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: my-efs
EOF
```

Check the PVC status

```shell
kubectl get pvc my-efs
```
Deploy a demo application

```shell
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: data
              mountPath: /mnt
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-efs
EOF
```
Verify that the Pods were created

```shell
kubectl get pod -l app=nginx
```

From the first Pod, create a file in the path where the PV (EFS) is mounted

```shell
kubectl exec -it \
  $(kubectl get pod -l app=nginx -o=jsonpath='{.items[0].metadata.name}') \
  -- bash -c "echo hello EFS > /mnt/message.txt"
```

From the second Pod, check the file created by the first Pod

```shell
kubectl exec -it \
  $(kubectl get pod -l app=nginx -o=jsonpath='{.items[1].metadata.name}') \
  -- cat /mnt/message.txt
```

Modify the file from the second Pod

```shell
kubectl exec -it \
  $(kubectl get pod -l app=nginx -o=jsonpath='{.items[1].metadata.name}') \
  -- bash -c "echo greeting from 2nd pod > /mnt/message.txt"
```

Check the file from the first Pod

```shell
kubectl exec -it \
  $(kubectl get pod -l app=nginx -o=jsonpath='{.items[0].metadata.name}') \
  -- cat /mnt/message.txt
```

Delete the resources

```shell
{
  kubectl delete deploy nginx
  kubectl delete pvc my-efs
  kubectl delete pv my-efs
}
```
Create a StorageClass - https://github.com/kubernetes-sigs/aws-efs-csi-driver?tab=readme-ov-file#storage-class-parameters-for-dynamic-provisioning

```shell
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-ap
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: $EFS_ID
  directoryPerms: "700"
EOF
```
Create a PVC

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-efs-ap
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-ap
  resources:
    requests:
      storage: 5Gi
EOF
```

Verify that the PVC was created

```shell
kubectl get pvc my-efs-ap
```

Verify that the PV was created

```shell
kubectl get pv $(kubectl get pvc my-efs-ap -o=jsonpath='{.spec.volumeName}')
```

Check the created EFS Access Point

```shell
aws efs describe-access-points --file-system-id $EFS_ID --no-cli-pager
```
Delete the PVC

```shell
kubectl delete pvc my-efs-ap
```

Verify that the PV has been deleted - deletion may take some time

```shell
kubectl get pv
```

Verify that the EFS Access Point backing the PV has been deleted

```shell
aws efs describe-access-points --file-system-id $EFS_ID --no-cli-pager
```
Delete the resources - mount-target deletion is asynchronous, so the file-system deletion may fail at first and need to be re-run after a short wait

```shell
{
  export EFS_MOUNT_TARGETS=$(aws efs describe-mount-targets --file-system-id $EFS_ID \
    --query 'MountTargets[*].MountTargetId' \
    --output text)
  for target in $EFS_MOUNT_TARGETS
  do
    aws efs delete-mount-target --mount-target-id $target --no-cli-pager
  done
  aws efs delete-file-system --file-system-id $EFS_ID --no-cli-pager
  aws ec2 delete-security-group --group-id $EFS_SG
}
```
Secret
Install the Secrets Store CSI Driver

```shell
{
  helm repo add secrets-store-csi-driver \
    https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
  helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver \
    --namespace kube-system \
    --set syncSecret.enabled=true \
    --set enableSecretRotation=true
}
```

Install the AWS provider for the Secrets Store CSI Driver

```shell
{
  helm repo add secrets-store-csi-driver-provider-aws \
    https://aws.github.io/secrets-store-csi-driver-provider-aws
  helm install secrets-store-csi-driver-provider-aws \
    secrets-store-csi-driver-provider-aws/secrets-store-csi-driver-provider-aws \
    --namespace kube-system
}
```
Create a file to store in AWS Secrets Manager

```shell
cat <<EOF | tee application.properties
a=b
c=d
e=f
g=h
EOF
```
Create a JSON file

```shell
{
  jq -n --argjson "application.properties" \
    "$(jq -Rs '.' application.properties)" '$ARGS.named' > secret-file.json
  cat secret-file.json
}
```
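What the pipeline above does: the inner `jq -Rs '.'` turns the whole properties file into a single JSON string, and the outer `jq -n --argjson … '$ARGS.named'` wraps that string in an object keyed by the argument name. A self-contained illustration (file names and contents below are made up for the demo):

```shell
# Build a sample properties file
printf 'a=b\nc=d\n' > demo.properties

# Inner step: the whole file becomes one JSON string with \n escapes
jq -Rs '.' demo.properties          # "a=b\nc=d\n"

# Outer step: wrap that string in an object under the chosen key
jq -n --argjson "demo.properties" \
  "$(jq -Rs '.' demo.properties)" '$ARGS.named' > demo.json
cat demo.json
```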
Create the Secret

```shell
aws secretsmanager create-secret --name secret-file \
  --secret-string file://secret-file.json
```

Check the created Secret

```shell
aws secretsmanager get-secret-value --secret-id secret-file
```

Create a SecretProviderClass

```shell
cat <<EOF | kubectl apply -f -
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: secret-file
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "secret-file"
        objectType: "secretsmanager"
EOF
```
Deploy a demo application

```shell
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: secret
              mountPath: /mnt
      volumes:
        - name: secret
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: secret-file
EOF
```
Verify that the Pod was created

```shell
kubectl get pod -l app=nginx
```

Check the events on the Pod

```shell
kubectl describe pod -l app=nginx
```

Create a ServiceAccount

```shell
{
  export CLUSTER_NAME=$(kubectl get node \
    -o=jsonpath='{.items[0].metadata.labels.alpha\.eksctl\.io\/cluster-name}')
  eksctl create iamserviceaccount \
    --cluster $CLUSTER_NAME \
    --namespace=default \
    --name=nginx \
    --attach-policy-arn=arn:aws:iam::aws:policy/SecretsManagerReadWrite \
    --override-existing-serviceaccounts \
    --approve
}
```
Assign the ServiceAccount created above to the Pod

```shell
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      serviceAccountName: nginx
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: secret
              mountPath: /mnt
      volumes:
        - name: secret
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: secret-file
EOF
```
Verify that the Pod was created

```shell
kubectl get pod -l app=nginx
```

Check the events on the Pod

```shell
kubectl describe pod -l app=nginx
```

Check the files mounted in the Pod

```shell
kubectl exec -it deploy/nginx -- ls /mnt
```

Check the file contents

```shell
kubectl exec -it deploy/nginx -- cat /mnt/secret-file
```
Modify the SecretProviderClass - note the double quoting in `path`: a JMESPath key containing a dot must be wrapped in quotes to be treated as a single identifier

```shell
cat <<EOF | kubectl apply -f -
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: secret-file
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "secret-file"
        objectType: "secretsmanager"
        jmesPath:
          - path: '"application.properties"'
            objectAlias: application.properties
EOF
```
Recreate the Pod

```shell
{
  kubectl scale deploy nginx --replicas=0
  kubectl scale deploy nginx --replicas=1
}
```

Check the files mounted in the Pod

```shell
kubectl exec -it deploy/nginx -- ls /mnt
```

Check the file contents

```shell
kubectl exec -it deploy/nginx -- cat /mnt/application.properties
```

Modify the file

```shell
cat <<EOF | tee application.properties
a=b
c=d
e=f
g=h
additional_key=additional_value
EOF
```
Regenerate the JSON file

```shell
{
  jq -n --argjson "application.properties" \
    "$(jq -Rs '.' application.properties)" '$ARGS.named' > secret-file.json
  cat secret-file.json
}
```
Update the Secret

```shell
aws secretsmanager put-secret-value --secret-id secret-file \
  --secret-string file://secret-file.json
```

Check the updated Secret

```shell
aws secretsmanager get-secret-value --secret-id secret-file
```

Check whether the change is reflected in the Pod - with secret rotation enabled, it may take a little while for the mounted file to update

```shell
kubectl exec -it deploy/nginx -- cat /mnt/application.properties
```

Check the events on the Pod

```shell
kubectl describe pod -l app=nginx
```
Create a file to store in AWS Secrets Manager

```shell
cat <<EOF | tee secret-env.json
{
  "ACCESS_KEY": "ABCDEFG",
  "SECRET_ACCESS_KEY": "ASDF1234"
}
EOF
```
Create the Secret

```shell
aws secretsmanager create-secret --name secret-env \
  --secret-string file://secret-env.json
```

Check the created Secret

```shell
aws secretsmanager get-secret-value --secret-id secret-env
```

Create a SecretProviderClass

```shell
cat <<EOF | kubectl apply -f -
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: secret-env
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "secret-env"
        objectType: "secretsmanager"
        jmesPath:
          - path: ACCESS_KEY
            objectAlias: ACCESS_KEY
          - path: SECRET_ACCESS_KEY
            objectAlias: SECRET_ACCESS_KEY
  secretObjects:
    - secretName: secret-env
      type: Opaque
      data:
        - objectName: ACCESS_KEY
          key: ACCESS_KEY
        - objectName: SECRET_ACCESS_KEY
          key: SECRET_ACCESS_KEY
EOF
```
Modify the demo application

```shell
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      serviceAccountName: nginx
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: secret
              mountPath: /mnt
          envFrom:
            - secretRef:
                name: secret-env
      volumes:
        - name: secret
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: secret-env
EOF
```
Check the environment variables set in the Pod

```shell
kubectl exec -it deploy/nginx -- printenv
```

Check the created Secret

```shell
kubectl get secret
```
Check the plaintext of the created Secret data

```shell
kubectl get secret secret-env -o jsonpath={.data.ACCESS_KEY} \
  | base64 -d && echo
```
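Secret `data` values are base64-encoded, which is why the command above pipes through `base64 -d` (the trailing `echo` just adds a newline). The round trip, shown locally with a sample value:

```shell
# Encode a sample value the way Kubernetes stores it in .data
ENCODED=$(printf 'ABCDEFG' | base64)
echo $ENCODED        # QUJDREVGRw==

# Decode it back, as done with the kubectl output above
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo $DECODED        # ABCDEFG
```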
Modify the file

```shell
cat <<EOF | tee secret-env.json
{
  "ACCESS_KEY": "QWER1234",
  "SECRET_ACCESS_KEY": "ASDF1234"
}
EOF
```

Update the Secret

```shell
aws secretsmanager put-secret-value --secret-id secret-env \
  --secret-string file://secret-env.json
```

Check the updated Secret

```shell
aws secretsmanager get-secret-value --secret-id secret-env
```

Check whether the change is reflected in the Pod

```shell
kubectl exec -it deploy/nginx -- printenv
```

Check the events on the Pod

```shell
kubectl describe pod -l app=nginx
```

Check whether the change is reflected in the Secret

```shell
kubectl get secret secret-env -o jsonpath={.data.ACCESS_KEY} \
  | base64 -d && echo
```

Check whether the change is reflected in the Pod - environment variables are injected only at container start, so the running Pod still shows the old value

```shell
kubectl exec -it deploy/nginx -- printenv
```
Install Reloader - https://github.com/stakater/Reloader

```shell
{
  helm repo add stakater https://stakater.github.io/stakater-charts
  helm install reloader stakater/reloader \
    --set reloader.autoReloadAll=true \
    --namespace kube-system
}
```
Modify the file

```shell
cat <<EOF | tee secret-env.json
{
  "ACCESS_KEY": "ZXCV1234",
  "SECRET_ACCESS_KEY": "ASDF1234"
}
EOF
```

Update the Secret

```shell
aws secretsmanager put-secret-value --secret-id secret-env \
  --secret-string file://secret-env.json
```

Check the updated Secret

```shell
aws secretsmanager get-secret-value --secret-id secret-env
```

Check the events on the Pod

```shell
kubectl describe pod -l app=nginx
```

Check whether the change is reflected in the Secret

```shell
kubectl get secret secret-env -o jsonpath={.data.ACCESS_KEY} \
  | base64 -d && echo
```

Check whether the change is reflected in the Pod - Reloader restarts the Deployment when the synced Secret changes, so the new value should appear once the Pod is recreated

```shell
kubectl exec -it deploy/nginx -- printenv
```
Delete the resources

```shell
{
  kubectl delete deploy nginx
  helm uninstall secrets-store-csi-driver-provider-aws -n kube-system
  helm uninstall csi-secrets-store -n kube-system
  helm uninstall reloader -n kube-system
  aws secretsmanager delete-secret \
    --secret-id secret-file \
    --recovery-window-in-days 7
  aws secretsmanager delete-secret \
    --secret-id secret-env \
    --recovery-window-in-days 7
  eksctl delete iamserviceaccount \
    --cluster $CLUSTER_NAME \
    --namespace=default \
    --name=nginx
}
```