Hands-on Lab
EBS
Check the list of currently configured StorageClasses

kubectl get sc

Modify the ClusterConfig as follows
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: mycluster
  region: ap-northeast-2
  version: "1.29"
availabilityZones:
  - ap-northeast-2a
  - ap-northeast-2b
  - ap-northeast-2c
  - ap-northeast-2d
managedNodeGroups:
  - name: nodegroup
    instanceType: t3.small
    minSize: 2
    desiredCapacity: 2
    maxSize: 5
    volumeSize: 20
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
iam:
  withOIDC: true
addons:
  - name: aws-ebs-csi-driver
    wellKnownPolicies:
      ebsCSIController: true

Install the Amazon EBS CSI driver
eksctl create addon -f mycluster.yaml

Verify the installation

kubectl get pod -n kube-system \
  -l app.kubernetes.io/name=aws-ebs-csi-driver

Create a StorageClass - https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/parameters.md
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
  encrypted: "true"
EOF
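The parameters document linked above also lists performance settings for gp3 volumes. A minimal sketch of a class that pins IOPS and throughput explicitly - the ebs-fast name and the values are illustrative and not part of this lab:

cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-fast
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
  iops: "5000"
  throughput: "250"
  encrypted: "true"
EOF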
Check the created StorageClass

kubectl get sc

Change the default StorageClass
{
  kubectl annotate sc gp2 storageclass.kubernetes.io/is-default-class- --overwrite
  kubectl annotate sc ebs storageclass.kubernetes.io/is-default-class=true --overwrite
}
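If you prefer to read the annotation directly instead of scanning the kubectl get sc output, something like the following should print true for the ebs class set above:

kubectl get sc ebs \
  -o=jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}' && echo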
Check the StorageClasses

kubectl get sc

Deploy a demo application
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: data
              mountPath: /mnt
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-pvc
EOF

Check the Pod status
kubectl get pod -l app=nginx

Check the events on the Pod

kubectl describe pod -l app=nginx

Check the PVC status

kubectl get pvc my-pvc

Check the events on the PVC

kubectl describe pvc my-pvc

Check the created PV

kubectl get pv $(kubectl get pvc my-pvc -o=jsonpath='{.spec.volumeName}')

Check the EBS volume backing the PV and store its volume ID in an environment variable

{
  export VOL_ID=$(kubectl get pv $(kubectl get pvc my-pvc \
    -o=jsonpath='{.spec.volumeName}') \
    -o=jsonpath='{.spec.csi.volumeHandle}')
  echo $VOL_ID
}

Check the status of the EBS volume identified above
aws ec2 describe-volumes --volume-ids $VOL_ID --no-cli-pager
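If you only care about a few attributes - for example to confirm that the volume type matches the gp3 StorageClass and that it sits in the same AZ as the node - a narrower query is a handy variation (the field selection here is just an illustration):

aws ec2 describe-volumes --volume-ids $VOL_ID \
  --query 'Volumes[0].{Type:VolumeType,Size:Size,AZ:AvailabilityZone,State:State}' \
  --output table --no-cli-pager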
Create a file in the path where the PV (EBS) is mounted

kubectl exec -it deploy/nginx -- \
  bash -c "echo hello EBS > /mnt/message.txt"

Check the file contents

kubectl exec -it deploy/nginx -- cat /mnt/message.txt

Delete the Pod

kubectl scale deploy nginx --replicas=0

Recreate the Pod

kubectl scale deploy nginx --replicas=1

Check the Pod status

kubectl get pod

Verify that the file created on the PV still exists

kubectl exec -it deploy/nginx -- cat /mnt/message.txt

Delete the resources

{
  kubectl delete deploy nginx
  kubectl delete pvc my-pvc
}

Verify that the PV has been deleted - it may take some time for it to disappear

kubectl get pv

Verify that the EBS volume backing the PV has been deleted
aws ec2 describe-volumes --volume-ids $VOL_ID
EFS
Modify the ClusterConfig as follows
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: mycluster
  region: ap-northeast-2
  version: "1.29"
availabilityZones:
  - ap-northeast-2a
  - ap-northeast-2b
  - ap-northeast-2c
  - ap-northeast-2d
managedNodeGroups:
  - name: nodegroup
    instanceType: t3.small
    minSize: 2
    desiredCapacity: 2
    maxSize: 5
    volumeSize: 20
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
iam:
  withOIDC: true
addons:
  - name: aws-ebs-csi-driver
    wellKnownPolicies:
      ebsCSIController: true
  - name: aws-efs-csi-driver
    wellKnownPolicies:
      efsCSIController: true

Install the Amazon EFS CSI driver
eksctl create addon -f mycluster.yaml

Verify the installation

kubectl get pod -n kube-system \
  -l app.kubernetes.io/name=aws-efs-csi-driver

Create an EFS file system

aws efs create-file-system --tags Key=Name,Value=my-efs --no-cli-pager

Check the created EFS file system
{
  export EFS_ID=$(aws efs describe-file-systems \
    --query 'FileSystems[? Tags[? (Key==`Name`) && Value==`my-efs`]].FileSystemId' \
    --output text)
  echo $EFS_ID
}
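Mount targets can only be created once the file system has finished provisioning. A quick way to check its lifecycle state - it should report available before you continue:

aws efs describe-file-systems --file-system-id $EFS_ID \
  --query 'FileSystems[0].LifeCycleState' \
  --output text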
Check the list of private subnets in the VPC where the EKS cluster was created

{
  export EKS_SUBNETS=$(aws ec2 describe-subnets \
    --filters "Name=tag:alpha.eksctl.io/cluster-name,Values=mycluster" \
              "Name=tag-key,Values=kubernetes.io/role/internal-elb" \
    --query "Subnets[*].SubnetId" \
    --output text)
  echo $EKS_SUBNETS
}

Create a security group to attach to the EFS mount targets
{
  export EKS_VPC=$(aws ec2 describe-vpcs \
    --filters "Name=tag:alpha.eksctl.io/cluster-name,Values=mycluster" \
    --query "Vpcs[*].VpcId" \
    --output text)
  aws ec2 create-security-group \
    --description "my efs security group" \
    --group-name my-efs-sg \
    --vpc-id $EKS_VPC
}

Add an inbound rule so that the EKS nodes can reach EFS
{
  export EFS_SG=$(aws ec2 describe-security-groups \
    --filters "Name=vpc-id,Values=$EKS_VPC" \
              "Name=group-name,Values=my-efs-sg" \
    --query "SecurityGroups[*].GroupId" \
    --output text)
  export EKS_SG=$(aws eks describe-cluster --name mycluster \
    --query 'cluster.resourcesVpcConfig.clusterSecurityGroupId' \
    --output text)
  aws ec2 authorize-security-group-ingress \
    --group-id $EFS_SG \
    --protocol tcp \
    --port 2049 \
    --source-group $EKS_SG \
    --no-cli-pager
}
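To double-check that the NFS (TCP 2049) rule landed on the security group, you can list its inbound permissions:

aws ec2 describe-security-groups --group-ids $EFS_SG \
  --query 'SecurityGroups[0].IpPermissions' \
  --no-cli-pager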
Create the EFS mount targets

{
  for subnet in $EKS_SUBNETS
  do
    aws efs create-mount-target \
      --file-system-id $EFS_ID \
      --subnet-id $subnet \
      --security-groups $EFS_SG \
      --no-cli-pager
  done
}

Check the status of the EFS mount targets
aws efs describe-mount-targets --file-system-id $EFS_ID --no-cli-pager
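Pods can only mount the file system once every mount target is ready; to see just the lifecycle states (each should eventually report available):

aws efs describe-mount-targets --file-system-id $EFS_ID \
  --query 'MountTargets[*].LifeCycleState' \
  --output text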
Create a StorageClass

cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs
provisioner: efs.csi.aws.com
EOF

Create a PV
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-efs
spec:
  storageClassName: efs
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: $EFS_ID
EOF

Create a PVC
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-efs
spec:
  storageClassName: efs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: my-efs
EOF

Check the PVC status

kubectl get pvc my-efs

Deploy a demo application
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: data
              mountPath: /mnt
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-efs
EOF

Verify the Pods were created
kubectl get pod -l app=nginx

From the first Pod, create a file in the path where the PV (EFS) is mounted

kubectl exec -it \
  $(kubectl get pod -l app=nginx -o=jsonpath='{.items[0].metadata.name}') \
  -- bash -c "echo hello EFS > /mnt/message.txt"

From the second Pod, check the file created by the first Pod

kubectl exec -it \
  $(kubectl get pod -l app=nginx -o=jsonpath='{.items[1].metadata.name}') \
  -- cat /mnt/message.txt

Modify the file from the second Pod

kubectl exec -it \
  $(kubectl get pod -l app=nginx -o=jsonpath='{.items[1].metadata.name}') \
  -- bash -c "echo greeting from 2nd pod > /mnt/message.txt"

Check the file from the first Pod

kubectl exec -it \
  $(kubectl get pod -l app=nginx -o=jsonpath='{.items[0].metadata.name}') \
  -- cat /mnt/message.txt

Delete the resources

{
  kubectl delete deploy nginx
  kubectl delete pvc my-efs
  kubectl delete pv my-efs
}

Create a StorageClass - https://github.com/kubernetes-sigs/aws-efs-csi-driver?tab=readme-ov-file#storage-class-parameters-for-dynamic-provisioning
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-ap
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: $EFS_ID
  directoryPerms: "700"
EOF
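The dynamic provisioning parameters linked above include a few more options worth knowing about, such as confining access points to a base path and controlling the POSIX GID range. A sketch with illustrative values - the efs-ap-scoped name, path, and GID range are not part of this lab:

cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-ap-scoped
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: $EFS_ID
  directoryPerms: "700"
  basePath: "/dynamic"
  gidRangeStart: "1000"
  gidRangeEnd: "2000"
EOF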
Create a PVC

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-efs-ap
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-ap
  resources:
    requests:
      storage: 5Gi
EOF

Verify the PVC was created

kubectl get pvc my-efs-ap

Verify the PV was created

kubectl get pv $(kubectl get pvc my-efs-ap -o=jsonpath='{.spec.volumeName}')

Check the created EFS Access Point

aws efs describe-access-points --file-system-id $EFS_ID --no-cli-pager

Delete the PVC

kubectl delete pvc my-efs-ap

Verify that the PV has been deleted - it may take some time for it to disappear

kubectl get pv

Verify that the EFS Access Point backing the PV has been deleted

aws efs describe-access-points --file-system-id $EFS_ID --no-cli-pager

Delete the resources
{
  export EFS_MOUNT_TARGETS=$(aws efs describe-mount-targets --file-system-id $EFS_ID \
    --query 'MountTargets[*].MountTargetId' \
    --output text)
  for target in $EFS_MOUNT_TARGETS
  do
    aws efs delete-mount-target --mount-target-id $target --no-cli-pager
  done
  aws efs delete-file-system --file-system-id $EFS_ID --no-cli-pager
  aws ec2 delete-security-group --group-id $EFS_SG
}
Secret
Install the Secrets Store CSI Driver
{
  helm repo add secrets-store-csi-driver \
    https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
  helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver \
    --namespace kube-system \
    --set syncSecret.enabled=true \
    --set enableSecretRotation=true
}

Install the AWS provider for the Secrets Store CSI Driver
{
  helm repo add secrets-store-csi-driver-provider-aws \
    https://aws.github.io/secrets-store-csi-driver-provider-aws
  helm install secrets-store-csi-driver-provider-aws \
    secrets-store-csi-driver-provider-aws/secrets-store-csi-driver-provider-aws \
    --namespace kube-system
}
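Both charts install DaemonSets into kube-system; a rough way to confirm their Pods are running (the Pod names depend on the release names used above):

kubectl get pod -n kube-system | grep -E 'csi-secrets-store|provider-aws'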
Create a file to store in AWS Secrets Manager

cat <<EOF | tee application.properties
a=b
c=d
e=f
g=h
EOF

Create a JSON file
{
  jq -n --argjson application.properties \
    $(jq -Rs '.' application.properties) '$ARGS.named' > secret-file.json
  cat secret-file.json
}
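The resulting secret-file.json wraps the whole properties file in a single JSON string value, roughly like this:

{
  "application.properties": "a=b\nc=d\ne=f\ng=h\n"
}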
Create the Secret

aws secretsmanager create-secret --name secret-file \
  --secret-string file://secret-file.json

Check the created Secret

aws secretsmanager get-secret-value --secret-id secret-file

Create a SecretProviderClass

cat <<EOF | kubectl apply -f -
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: secret-file
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "secret-file"
        objectType: "secretsmanager"
EOF

Deploy a demo application

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: secret
              mountPath: /mnt
      volumes:
        - name: secret
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: secret-file
EOF

Verify the Pod was created

kubectl get pod -l app=nginx

Check the events on the Pod

kubectl describe pod -l app=nginx

Create a ServiceAccount
{
  export CLUSTER_NAME=$(kubectl get node \
    -o=jsonpath='{.items[0].metadata.labels.alpha\.eksctl\.io\/cluster-name}')
  eksctl create iamserviceaccount \
    --cluster $CLUSTER_NAME \
    --namespace=default \
    --name=nginx \
    --attach-policy-arn=arn:aws:iam::aws:policy/SecretsManagerReadWrite \
    --override-existing-serviceaccounts \
    --approve
}
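eksctl binds the IAM role to the ServiceAccount through an annotation; printing it is a quick way to confirm the ServiceAccount was created as expected (it should show the role ARN):

kubectl get sa nginx \
  -o=jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}' && echo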
Assign the ServiceAccount created above to the Pod

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      serviceAccountName: nginx
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: secret
              mountPath: /mnt
      volumes:
        - name: secret
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: secret-file
EOF

Verify the Pod was created
kubectl get pod -l app=nginx

Check the events on the Pod

kubectl describe pod -l app=nginx

Check the files mounted in the Pod

kubectl exec -it deploy/nginx -- ls /mnt

Check the file contents

kubectl exec -it deploy/nginx -- cat /mnt/secret-file

Modify the SecretProviderClass
cat <<EOF | kubectl apply -f -
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: secret-file
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "secret-file"
        objectType: "secretsmanager"
        jmesPath:
          - path: '"application.properties"'
            objectAlias: application.properties
EOF

Recreate the Pod
{
  kubectl scale deploy nginx --replicas=0
  kubectl scale deploy nginx --replicas=1
}

Check the files mounted in the Pod

kubectl exec -it deploy/nginx -- ls /mnt

Check the file contents

kubectl exec -it deploy/nginx -- cat /mnt/application.properties

Modify the file

cat <<EOF | tee application.properties
a=b
c=d
e=f
g=h
additional_key=additional_value
EOF

Create a JSON file

{
  jq -n --argjson application.properties \
    $(jq -Rs '.' application.properties) '$ARGS.named' > secret-file.json
  cat secret-file.json
}

Update the Secret

aws secretsmanager put-secret-value --secret-id secret-file \
  --secret-string file://secret-file.json

Check the updated Secret

aws secretsmanager get-secret-value --secret-id secret-file

Check whether the change has been reflected

kubectl exec -it deploy/nginx -- cat /mnt/application.properties

Check the events on the Pod

kubectl describe pod -l app=nginx

Create a file to store in AWS Secrets Manager

cat <<EOF | tee secret-env.json
{
  "ACCESS_KEY": "ABCDEFG",
  "SECRET_ACCESS_KEY": "ASDF1234"
}
EOF

Create the Secret

aws secretsmanager create-secret --name secret-env \
  --secret-string file://secret-env.json

Check the created Secret

aws secretsmanager get-secret-value --secret-id secret-env

Create a SecretProviderClass
cat <<EOF | kubectl apply -f -
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: secret-env
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "secret-env"
        objectType: "secretsmanager"
        jmesPath:
          - path: ACCESS_KEY
            objectAlias: ACCESS_KEY
          - path: SECRET_ACCESS_KEY
            objectAlias: SECRET_ACCESS_KEY
  secretObjects:
    - secretName: secret-env
      type: Opaque
      data:
        - objectName: ACCESS_KEY
          key: ACCESS_KEY
        - objectName: SECRET_ACCESS_KEY
          key: SECRET_ACCESS_KEY
EOF

Modify the demo application
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      serviceAccountName: nginx
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: secret
              mountPath: /mnt
          envFrom:
            - secretRef:
                name: secret-env
      volumes:
        - name: secret
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: secret-env
EOF

Check the environment variables set on the Pod
kubectl exec -it deploy/nginx -- printenv

Check the created Secret

kubectl get secret

Check the plaintext value of the created Secret data

kubectl get secret secret-env -o jsonpath={.data.ACCESS_KEY} \
  | base64 -d && echo

Modify the file

cat <<EOF | tee secret-env.json
{
  "ACCESS_KEY": "QWER1234",
  "SECRET_ACCESS_KEY": "ASDF1234"
}
EOF

Update the Secret

aws secretsmanager put-secret-value --secret-id secret-env \
  --secret-string file://secret-env.json

Check the updated Secret

aws secretsmanager get-secret-value --secret-id secret-env

Check whether the change is reflected in the Pod

kubectl exec -it deploy/nginx -- printenv

Check the events on the Pod

kubectl describe pod -l app=nginx

Verify that the change has been reflected in the Secret

kubectl get secret secret-env -o jsonpath={.data.ACCESS_KEY} \
  | base64 -d && echo

Check whether the change is reflected in the Pod

kubectl exec -it deploy/nginx -- printenv

Install Reloader - https://github.com/stakater/Reloader
{
  helm repo add stakater https://stakater.github.io/stakater-charts
  helm install reloader stakater/reloader \
    --set reloader.autoReloadAll=true \
    --namespace kube-system
}
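A rough check that the Reloader controller is running (the Pod name depends on the release name used above):

kubectl get pod -n kube-system | grep reloader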
Modify the file

cat <<EOF | tee secret-env.json
{
  "ACCESS_KEY": "ZXCV1234",
  "SECRET_ACCESS_KEY": "ASDF1234"
}
EOF

Update the Secret

aws secretsmanager put-secret-value --secret-id secret-env \
  --secret-string file://secret-env.json

Check the updated Secret

aws secretsmanager get-secret-value --secret-id secret-env

Check the events on the Pod

kubectl describe pod -l app=nginx

Verify that the change has been reflected in the Secret

kubectl get secret secret-env -o jsonpath={.data.ACCESS_KEY} \
  | base64 -d && echo

Check whether the change is reflected in the Pod

kubectl exec -it deploy/nginx -- printenv

Delete the resources
{
  kubectl delete deploy nginx
  helm uninstall secrets-store-csi-driver-provider-aws -n kube-system
  helm uninstall csi-secrets-store -n kube-system
  helm uninstall reloader -n kube-system
  aws secretsmanager delete-secret \
    --secret-id secret-file \
    --recovery-window-in-days 7
  aws secretsmanager delete-secret \
    --secret-id secret-env \
    --recovery-window-in-days 7
  eksctl delete iamserviceaccount \
    --cluster $CLUSTER_NAME \
    --namespace=default \
    --name=nginx
}