Hands-on Lab
Manual Scaling
Create a Deployment
kubectl create deployment nginx --image=nginx --replicas=1
Check the number of Pods
kubectl get pod -l app=nginx
Scale the Deployment to 3 replicas
kubectl scale deployment nginx --replicas=3
Check the number of Pods
kubectl get pod -l app=nginx
Check the ReplicaSet
kubectl get rs -l app=nginx
Scale the Deployment down to 2 replicas
kubectl scale deployment nginx --replicas=2
If the Deployment currently has 1 replica, scale it to 3
kubectl scale deployment nginx --current-replicas=1 --replicas=3
Create another Deployment
kubectl create deployment httpd --image=httpd --replicas=1
For every Deployment that currently has 1 replica, scale it to 3
kubectl scale deployment --current-replicas=1 --replicas=3 --all
Check the number of Pods
kubectl get pod -l 'app in (nginx,httpd)'
Scale all Deployments to 5 replicas
kubectl scale deployment --replicas=5 --all
Check the number of Pods
kubectl get pod -l 'app in (nginx,httpd)'
Delete all Deployments
kubectl delete deployment -l 'app in (nginx,httpd)'
HPA (Horizontal Pod Autoscaler)
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  selector:
    matchLabels:
      app: php-apache
  replicas: 1
  template:
    metadata:
      labels:
        app: php-apache
    spec:
      containers:
      - name: php-apache
        image: k8s.gcr.io/hpa-example
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
  labels:
    app: php-apache
spec:
  ports:
  - port: 80
  selector:
    app: php-apache
EOF
Check the Pod's resource usage (this fails until the Metrics Server is installed)
kubectl top pod -l app=php-apache --use-protocol-buffers
Install the Metrics Server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
List the APIs registered with the API server
kubectl get apiservices.apiregistration.k8s.io
Inspect the details of the v1beta1.metrics.k8s.io API
kubectl get apiservices.apiregistration.k8s.io v1beta1.metrics.k8s.io -o yaml
Check the resource usage of all Pods
kubectl top pod -A --use-protocol-buffers
Check the resource usage of all Nodes
kubectl top node --use-protocol-buffers
Check the Metrics Server logs
kubectl -n kube-system logs deploy/metrics-server
Change the Metrics Server log level
kubectl -n kube-system patch deployment metrics-server --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--v=6"}]'
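The --type=json patch above uses RFC 6902 JSON Patch semantics, where the path token `-` appends to the end of an array. A minimal sketch of that "add" operation, applied to a simplified (made-up) deployment document rather than a real API object:

```python
# Minimal sketch of the RFC 6902 "add" operation used by `kubectl patch --type=json`.
# The deployment dict below is a hypothetical, stripped-down example.
def json_patch_add(doc, path, value):
    keys = path.lstrip("/").split("/")
    target = doc
    for key in keys[:-1]:  # walk down to the parent container
        target = target[int(key)] if isinstance(target, list) else target[key]
    last = keys[-1]
    if isinstance(target, list):
        # "-" means "append"; a numeric token means "insert at that index"
        target.append(value) if last == "-" else target.insert(int(last), value)
    else:
        target[last] = value
    return doc

deployment = {"spec": {"template": {"spec": {"containers": [
    {"name": "metrics-server", "args": ["--kubelet-insecure-tls"]}]}}}}
json_patch_add(deployment, "/spec/template/spec/containers/0/args/-", "--v=6")
print(deployment["spec"]["template"]["spec"]["containers"][0]["args"])
# → ['--kubelet-insecure-tls', '--v=6']
```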
Check the Metrics Server logs again once the new Pod is running
kubectl -n kube-system logs deploy/metrics-server
Create a Pod with permission to access the metrics exposed by the kubelet
cat <<EOF | kubectl apply -f -
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubelet-access
rules:
- apiGroups: [""]
  resources:
  - nodes/stats
  - nodes/metrics
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubelet-access
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubelet-access
  namespace: default
subjects:
- kind: ServiceAccount
  name: kubelet-access
  namespace: default
roleRef:
  kind: ClusterRole
  name: kubelet-access
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Pod
metadata:
  name: curl
spec:
  serviceAccountName: kubelet-access
  containers:
  - image: curlimages/curl
    name: curl
    command: ["sleep", "3600"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF
Call the kubelet's resource metrics endpoint from inside the Pod
kubectl exec curl -- \
  sh -c 'curl -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://$HOST_IP:10250/metrics/resource -k'
Check all metrics exposed by the kubelet
kubectl exec curl -- \
  sh -c 'curl -s -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://$HOST_IP:10250/stats/summary -k'
Check only the CPU and memory metrics among those exposed by the kubelet
kubectl exec curl -- \
  sh -c 'curl -s -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://$HOST_IP:10250/stats/summary?only_cpu_and_memory=true -k'
Call the endpoint that the Metrics Server scrapes, through the API server proxy
kubectl get --raw /api/v1/nodes/$(kubectl get node -o=jsonpath='{.items[0].metadata.name}')/proxy/metrics/resource
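The /metrics/resource endpoint serves plain Prometheus text format: one `name{labels} value timestamp` sample per line, with `# HELP`/`# TYPE` comments. A minimal parser sketch over a hypothetical two-line sample (not real cluster output):

```python
# Hypothetical sample of the Prometheus text format returned by the kubelet's
# /metrics/resource endpoint; a toy parser, not a full Prometheus client.
sample = """\
# HELP container_cpu_usage_seconds_total Cumulative cpu time consumed
container_cpu_usage_seconds_total{container="nginx",namespace="default",pod="nginx-abc"} 4.25 1700000000000
node_memory_working_set_bytes 1073741824 1700000000000
"""

def parse_metrics(text):
    metrics = []
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue  # skip HELP/TYPE comment lines
        name_labels, value, *_ts = line.split()   # trailing timestamp is optional
        name, _, labels = name_labels.partition("{")
        metrics.append((name, labels.rstrip("}"), float(value)))
    return metrics

for name, labels, value in parse_metrics(sample):
    print(name, value)
```

The Metrics Server scrapes this endpoint on every node, aggregates the samples, and serves them back through the metrics.k8s.io API that `kubectl top` and the HPA consume.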
Configure autoscaling (HPA)
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=5
Check the status of the HPA created above
kubectl get hpa php-apache
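The HPA controller computes the desired replica count as ceil(currentReplicas × currentMetric / desiredMetric). With the 50% CPU target above (100m of the 200m request), the arithmetic can be sketched as follows; the observed usage numbers are made up for illustration:

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric):
    """HPA core formula: ceil(currentReplicas * currentMetric / desiredMetric)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# php-apache requests 200m CPU; --cpu-percent=50 targets 100m per Pod.
# Suppose the single replica is observed using 250m under load:
print(desired_replicas(1, 250, 100))   # → 3
# After scaling out, per-Pod usage drops to ~84m, so the count holds steady:
print(desired_replicas(3, 84, 100))    # → 3
```

When the load generator below is deleted, usage falls well under the target and the controller scales back toward `--min` after its downscale stabilization window.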
Create a Pod that generates load against the demo application
kubectl run load-generator --image=busybox:1.28 -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
Monitor the HPA status
kubectl get hpa php-apache -w
Press Ctrl+C to stop monitoring and verify that new Pods were actually created
kubectl get pod -l app=php-apache
Check the Pods' resource usage
kubectl top pod -l app=php-apache --use-protocol-buffers
Delete the load-generating Pod
kubectl delete pod load-generator
Monitor the HPA status
kubectl get hpa php-apache -w
Press Ctrl+C to stop monitoring and inspect the HPA details
kubectl describe hpa php-apache
Check the Pods' resource usage
kubectl top pod -l app=php-apache --use-protocol-buffers
Check the HPA status
kubectl get hpa php-apache
Check the number of Pods
kubectl get pod -l app=php-apache
Delete the demo application and related resources
{
  kubectl delete hpa php-apache
  kubectl delete deployment php-apache
  kubectl delete svc php-apache
  kubectl delete pod curl
  kubectl delete clusterrole kubelet-access
  kubectl delete clusterrolebinding kubelet-access
  kubectl delete sa kubelet-access
}
Cluster Autoscaler
Create a Deployment
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 1
            memory: 1Gi
          limits:
            cpu: 2
            memory: 2Gi
EOF
Check the created Deployment and its Pods
kubectl get deploy,pod -l app=nginx
If any Pod is stuck in Pending, find out why with the command below
kubectl describe pod \
  $(kubectl get pod -o=jsonpath='{.items[?(@.status.phase=="Pending")].metadata.name}')
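Pods stay Pending when no node has enough unreserved allocatable CPU or memory for their requests. The scheduler's fit check can be sketched as below; the node allocatable values are hypothetical (roughly a 2-vCPU instance minus system reservations), not taken from a real cluster:

```python
# How many pods with the given requests fit on one node:
# the minimum over each resource of floor(allocatable / request).
def pods_that_fit(allocatable, request):
    return min(allocatable[r] // request[r] for r in request)

node = {"cpu_m": 1930, "mem_mi": 7000}   # hypothetical per-node allocatable
pod = {"cpu_m": 1000, "mem_mi": 1024}    # requests from the manifest above
print(pods_that_fit(node, pod))          # → 1 (CPU is the binding constraint)
# With 1 pod per node and a 2-node group, 3 of the 5 replicas stay Pending
# until the Cluster Autoscaler adds nodes.
```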
Create the IAM policy document for the Cluster Autoscaler
cat <<EOF > cluster-autoscaler-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribeScalingActivities",
                "ec2:DescribeImages",
                "ec2:DescribeInstanceTypes",
                "ec2:DescribeLaunchTemplateVersions",
                "ec2:GetInstanceTypesFromInstanceRequirements",
                "eks:DescribeNodegroup"
            ],
            "Resource": ["*"]
        },
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
            ],
            "Resource": ["*"]
        }
    ]
}
EOF
Create the IAM policy
aws iam create-policy \
  --policy-name AmazonEKSClusterAutoscalerPolicy \
  --policy-document file://cluster-autoscaler-policy.json
Determine the EKS cluster name from a Node label and store it in an environment variable
{
  export CLUSTER_NAME=$(kubectl get node \
    -o=jsonpath='{.items[0].metadata.labels.alpha\.eksctl\.io\/cluster-name}')
  echo $CLUSTER_NAME
}
Determine the AWS account ID that owns the EKS cluster and store it in an environment variable
{
  export ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
  echo $ACCOUNT_ID
}
Enable the IAM OIDC provider
eksctl utils associate-iam-oidc-provider --region=ap-northeast-2 \
  --cluster=$CLUSTER_NAME --approve
Create the ServiceAccount
eksctl create iamserviceaccount \
  --cluster=$CLUSTER_NAME \
  --namespace=kube-system \
  --name=cluster-autoscaler \
  --attach-policy-arn=arn:aws:iam::${ACCOUNT_ID}:policy/AmazonEKSClusterAutoscalerPolicy \
  --override-existing-serviceaccounts \
  --approve
Verify the ServiceAccount was created
kubectl -n kube-system describe sa cluster-autoscaler
Extract the IAM role name from the ServiceAccount annotation and store it in an environment variable
{
  IAM_ROLE_NAME=$(kubectl -n kube-system get sa cluster-autoscaler \
    -o=jsonpath='{.metadata.annotations.eks\.amazonaws\.com\/role-arn}' \
    | grep -oP '(?<=role.).*')
  echo $IAM_ROLE_NAME
}
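The lookbehind grep above simply strips everything through `role/` from the role ARN in the annotation. The same extraction, sketched on a made-up example ARN:

```python
# Extract the role name from an IAM role ARN (the ARN below is hypothetical).
arn = "arn:aws:iam::123456789012:role/eksctl-mycluster-addon-iamserviceaccount-Role1-ABC"
role_name = arn.split("role/", 1)[1]   # everything after the first "role/"
print(role_name)
# → eksctl-mycluster-addon-iamserviceaccount-Role1-ABC
```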
Check the IAM policies attached to the role bound to the ServiceAccount
aws iam list-attached-role-policies --role-name $IAM_ROLE_NAME
Check the trust policy of the role bound to the ServiceAccount
aws iam get-role --role-name $IAM_ROLE_NAME
Install the Cluster Autoscaler
{
  helm repo add autoscaler https://kubernetes.github.io/autoscaler
  helm install cluster-autoscaler autoscaler/cluster-autoscaler \
    --namespace kube-system \
    --set autoDiscovery.clusterName=$CLUSTER_NAME \
    --set rbac.serviceAccount.create=false \
    --set rbac.serviceAccount.name=cluster-autoscaler \
    --set fullnameOverride=cluster-autoscaler
}
Check the Cluster Autoscaler logs and look for the ASG map
kubectl -n kube-system logs deploy/cluster-autoscaler
Check the Cluster Autoscaler settings
kubectl -n kube-system get deploy cluster-autoscaler \
  -o=jsonpath='{.spec.template.spec.containers[*].command}' | jq
Determine the Auto Scaling group associated with the EKS node group and store its name in an environment variable
{
  export ASG_NAME=$(aws autoscaling describe-auto-scaling-groups --query \
    "AutoScalingGroups[? Tags[? (Key=='eks:cluster-name') && Value=='$CLUSTER_NAME']].AutoScalingGroupName" --output text)
  echo $ASG_NAME
}
Check the Auto Scaling group's instance counts
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names $ASG_NAME \
  --query "AutoScalingGroups[0].{MinSize: MinSize, MaxSize: MaxSize, DesiredCapacity: DesiredCapacity}"
Check the Auto Scaling group's tags
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names $ASG_NAME \
  --query "AutoScalingGroups[0].Tags"
Check the default values of the installed Helm chart
helm get values cluster-autoscaler -n kube-system --all
Update the Cluster Autoscaler settings
helm upgrade cluster-autoscaler autoscaler/cluster-autoscaler \
  --namespace kube-system \
  --set autoDiscovery.clusterName=$CLUSTER_NAME \
  --set rbac.serviceAccount.create=false \
  --set rbac.serviceAccount.name=cluster-autoscaler \
  --set fullnameOverride=cluster-autoscaler \
  --set awsRegion=ap-northeast-2
Check the Cluster Autoscaler settings
kubectl -n kube-system get deploy cluster-autoscaler \
  -o=jsonpath='{.spec.template.spec.containers[*].command}' | jq
Check the Cluster Autoscaler logs and look for the ASG map
kubectl -n kube-system logs deploy/cluster-autoscaler
Verify that the previously Pending Pods have been created
kubectl get pod -l app=nginx
Check the number of Nodes
kubectl get node
Check the Auto Scaling group's instance counts
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names $ASG_NAME \
  --query "AutoScalingGroups[0].{MinSize: MinSize, MaxSize: MaxSize, DesiredCapacity: DesiredCapacity}"
Check the Auto Scaling group's scaling activity log
aws autoscaling describe-scaling-activities --auto-scaling-group-name $ASG_NAME
Delete the Deployment
kubectl delete deployment nginx
Check the Pod list
kubectl get pod -l app=nginx
Watch the Nodes being removed
kubectl get node
Check the Cluster Autoscaler logs
kubectl -n kube-system logs deploy/cluster-autoscaler
Delete resources
rm cluster-autoscaler-policy.json
Karpenter
Create the trust policy document for the Fargate pod execution role
cat <<EOF > pod-execution-role-trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks-fargate-pods.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
Create the Fargate pod execution role
aws iam create-role \
  --role-name AmazonEKSFargatePodExecutionRole \
  --assume-role-policy-document file://"pod-execution-role-trust-policy.json"
Attach the required policy to the Fargate pod execution role
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy \
  --role-name AmazonEKSFargatePodExecutionRole
Create a Fargate profile
aws eks create-fargate-profile \
  --fargate-profile-name core-dns \
  --cluster-name $CLUSTER_NAME \
  --pod-execution-role-arn arn:aws:iam::$ACCOUNT_ID:role/AmazonEKSFargatePodExecutionRole \
  --selectors '[{"namespace": "kube-system", "labels": {"k8s-app": "kube-dns"}}]'
Check the Fargate profile status
aws eks describe-fargate-profile \
  --cluster-name $CLUSTER_NAME \
  --fargate-profile-name core-dns
Recreate the CoreDNS Pods
{
  kubectl -n kube-system scale deployment coredns --replicas=0
  kubectl -n kube-system scale deployment coredns --replicas=2
}
Check the CoreDNS Pod status
kubectl -n kube-system get pod -l k8s-app=kube-dns
Check which Nodes the CoreDNS Pods were scheduled on
kubectl -n kube-system get pod -l k8s-app=kube-dns \
  -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName
Create the IAM policy document for Karpenter (this assumes $AWS_REGION is set, e.g. export AWS_REGION=ap-northeast-2)
cat <<EOF > karpenter-controller-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "ssm:GetParameter",
                "ec2:DescribeImages",
                "ec2:RunInstances",
                "ec2:DescribeSubnets",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeLaunchTemplates",
                "ec2:DescribeInstances",
                "ec2:DescribeInstanceTypes",
                "ec2:DescribeInstanceTypeOfferings",
                "ec2:DescribeAvailabilityZones",
                "ec2:DeleteLaunchTemplate",
                "ec2:CreateTags",
                "ec2:CreateLaunchTemplate",
                "ec2:CreateFleet",
                "ec2:DescribeSpotPriceHistory",
                "pricing:GetProducts"
            ],
            "Effect": "Allow",
            "Resource": "*",
            "Sid": "Karpenter"
        },
        {
            "Action": "ec2:TerminateInstances",
            "Condition": {
                "StringLike": {
                    "ec2:ResourceTag/karpenter.sh/nodepool": "*"
                }
            },
            "Effect": "Allow",
            "Resource": "*",
            "Sid": "ConditionalEC2Termination"
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::${ACCOUNT_ID}:role/KarpenterNodeRole-${CLUSTER_NAME}",
            "Sid": "PassNodeIAMRole"
        },
        {
            "Effect": "Allow",
            "Action": "eks:DescribeCluster",
            "Resource": "arn:aws:eks:${AWS_REGION}:${ACCOUNT_ID}:cluster/${CLUSTER_NAME}",
            "Sid": "EKSClusterEndpointLookup"
        },
        {
            "Sid": "AllowScopedInstanceProfileCreationActions",
            "Effect": "Allow",
            "Resource": "*",
            "Action": [
                "iam:CreateInstanceProfile"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:RequestTag/kubernetes.io/cluster/${CLUSTER_NAME}": "owned",
                    "aws:RequestTag/topology.kubernetes.io/region": "${AWS_REGION}"
                },
                "StringLike": {
                    "aws:RequestTag/karpenter.k8s.aws/ec2nodeclass": "*"
                }
            }
        },
        {
            "Sid": "AllowScopedInstanceProfileTagActions",
            "Effect": "Allow",
            "Resource": "*",
            "Action": [
                "iam:TagInstanceProfile"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/kubernetes.io/cluster/${CLUSTER_NAME}": "owned",
                    "aws:ResourceTag/topology.kubernetes.io/region": "${AWS_REGION}",
                    "aws:RequestTag/kubernetes.io/cluster/${CLUSTER_NAME}": "owned",
                    "aws:RequestTag/topology.kubernetes.io/region": "${AWS_REGION}"
                },
                "StringLike": {
                    "aws:ResourceTag/karpenter.k8s.aws/ec2nodeclass": "*",
                    "aws:RequestTag/karpenter.k8s.aws/ec2nodeclass": "*"
                }
            }
        },
        {
            "Sid": "AllowScopedInstanceProfileActions",
            "Effect": "Allow",
            "Resource": "*",
            "Action": [
                "iam:AddRoleToInstanceProfile",
                "iam:RemoveRoleFromInstanceProfile",
                "iam:DeleteInstanceProfile"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/kubernetes.io/cluster/${CLUSTER_NAME}": "owned",
                    "aws:ResourceTag/topology.kubernetes.io/region": "${AWS_REGION}"
                },
                "StringLike": {
                    "aws:ResourceTag/karpenter.k8s.aws/ec2nodeclass": "*"
                }
            }
        },
        {
            "Sid": "AllowInstanceProfileReadActions",
            "Effect": "Allow",
            "Resource": "*",
            "Action": "iam:GetInstanceProfile"
        }
    ]
}
EOF
Create the IAM policy for Karpenter
aws iam create-policy \
  --policy-name KarpenterRole-${CLUSTER_NAME} \
  --policy-document file://karpenter-controller-policy.json
Create the ServiceAccount for Karpenter
eksctl create iamserviceaccount \
  --cluster=$CLUSTER_NAME \
  --namespace=karpenter \
  --name=karpenter \
  --attach-policy-arn=arn:aws:iam::${ACCOUNT_ID}:policy/KarpenterRole-${CLUSTER_NAME} \
  --override-existing-serviceaccounts \
  --approve
Create a Fargate profile
aws eks create-fargate-profile \
  --fargate-profile-name karpenter \
  --cluster-name $CLUSTER_NAME \
  --pod-execution-role-arn arn:aws:iam::$ACCOUNT_ID:role/AmazonEKSFargatePodExecutionRole \
  --selectors '[{"namespace": "karpenter"}]'
Check the Fargate profile status
aws eks describe-fargate-profile \
  --cluster-name $CLUSTER_NAME \
  --fargate-profile-name karpenter
Install Karpenter
helm install karpenter oci://public.ecr.aws/karpenter/karpenter \
  --namespace karpenter \
  --set settings.clusterName=${CLUSTER_NAME} \
  --set serviceAccount.create=false \
  --set serviceAccount.name=karpenter \
  --set replicas=1
Verify Karpenter is running
kubectl get pod -n karpenter -o wide
Check the Karpenter logs
kubectl -n karpenter logs deploy/karpenter
Create the IAM role for the Nodes
{
  cat <<EOF > node-trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
  aws iam create-role --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
    --assume-role-policy-document file://node-trust-policy.json
}
Attach the permissions required by EKS nodes
{
  aws iam attach-role-policy --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
    --policy-arn "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  aws iam attach-role-policy --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
    --policy-arn "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  aws iam attach-role-policy --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
    --policy-arn "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  aws iam attach-role-policy --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
    --policy-arn "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}
Create an EKS access entry for the node role
aws eks create-access-entry \
  --cluster-name $CLUSTER_NAME \
  --type EC2_LINUX \
  --principal-arn arn:aws:iam::$ACCOUNT_ID:role/KarpenterNodeRole-${CLUSTER_NAME}
Create an EC2NodeClass
cat <<EOF | kubectl apply -f -
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiSelectorTerms:
  - alias: al2023@latest
  role: KarpenterNodeRole-${CLUSTER_NAME}
  subnetSelectorTerms:
  - tags:
      alpha.eksctl.io/cluster-name: ${CLUSTER_NAME}
      kubernetes.io/role/elb: "1"
  securityGroupSelectorTerms:
  - tags:
      aws:eks:cluster-name: ${CLUSTER_NAME}
  blockDeviceMappings:
  - deviceName: /dev/xvda
    ebs:
      volumeSize: 20Gi
      volumeType: gp3
      encrypted: true
EOF
Check the status of the created EC2NodeClass
kubectl describe ec2nodeclasses.karpenter.k8s.aws default
Create a NodePool
cat <<EOF | kubectl apply -f -
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
      - key: kubernetes.io/arch
        operator: In
        values: ["amd64"]
      - key: kubernetes.io/os
        operator: In
        values: ["linux"]
      - key: karpenter.sh/capacity-type
        operator: In
        values: ["on-demand", "spot"]
      - key: karpenter.k8s.aws/instance-category
        operator: In
        values: ["t","m","c","r"]
      - key: karpenter.k8s.aws/instance-generation
        operator: Gt
        values: ["2"]
      - key: karpenter.k8s.aws/instance-memory
        operator: Gt
        values: ["2048"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
EOF
Check the status of the created NodePool
kubectl describe nodepools.karpenter.sh default
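The NodePool requirements act as filters over the EC2 instance-type catalog: Karpenter only launches types that satisfy every requirement. A sketch of that pruning over a small hypothetical catalog (not live EC2 data):

```python
# Sketch of how the NodePool requirements above prune candidate instance
# types; the catalog entries are made up for illustration.
catalog = [
    {"name": "t2.small",    "category": "t", "generation": 2, "memory_mib": 2048},
    {"name": "t3.medium",   "category": "t", "generation": 3, "memory_mib": 4096},
    {"name": "m5.large",    "category": "m", "generation": 5, "memory_mib": 8192},
    {"name": "g4dn.xlarge", "category": "g", "generation": 4, "memory_mib": 16384},
]

def matches(it):
    # Mirrors the requirements in the NodePool manifest above:
    return (it["category"] in {"t", "m", "c", "r"}  # instance-category In [t,m,c,r]
            and it["generation"] > 2                # instance-generation Gt 2
            and it["memory_mib"] > 2048)            # instance-memory Gt 2048

print([it["name"] for it in catalog if matches(it)])
# → ['t3.medium', 'm5.large']
```

Among the surviving candidates, Karpenter then picks capacity (on-demand or spot, per the capacity-type requirement) that fits the pending Pods at the lowest cost.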
Disable the Cluster Autoscaler
kubectl -n kube-system scale deployment cluster-autoscaler --replicas=0
Delete the node group
eksctl delete nodegroup \
  --cluster=$CLUSTER_NAME \
  --name=nodegroup \
  --disable-eviction \
  --wait
Check the Pod status
kubectl get pod -A
Check the Node list
kubectl get node
Create a Deployment
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 1
            memory: 1Gi
          limits:
            cpu: 2
            memory: 2Gi
EOF
Check the Pod status
kubectl get pod -l app=nginx
Check the Node list
kubectl get node
Check the NodeClaims
kubectl get nodeclaim
Check the Karpenter logs
kubectl -n karpenter logs deploy/karpenter
Remove the Cluster Autoscaler
{
  helm uninstall cluster-autoscaler -n kube-system
  eksctl delete iamserviceaccount \
    --cluster=$CLUSTER_NAME \
    --namespace=kube-system \
    --name=cluster-autoscaler
}
Delete the Deployment
kubectl delete deploy nginx
Check the Pod list
kubectl get pod -A
Check the Karpenter logs
kubectl -n karpenter logs deploy/karpenter
Check the Node list
kubectl get node
Check the NodeClaims
kubectl get nodeclaim
KEDA
Deploy the demo application
kubectl create deploy worker --image=amazon/aws-cli -- sleep infinity
Verify the Pod was created
kubectl get pod -l app=worker
Create an SQS queue
aws sqs create-queue --queue-name worker-queue
Send a message to the SQS queue
{
  AWS_REGION=$(aws ec2 describe-availability-zones --output text --query 'AvailabilityZones[0].[RegionName]')
  AWS_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
  aws sqs send-message \
    --queue-url https://sqs.$AWS_REGION.amazonaws.com/$AWS_ACCOUNT/worker-queue \
    --message-body "my first message"
}
Check the message
{
  AWS_REGION=$(aws ec2 describe-availability-zones --output text --query 'AvailabilityZones[0].[RegionName]')
  AWS_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
  aws sqs receive-message \
    --queue-url https://sqs.$AWS_REGION.amazonaws.com/$AWS_ACCOUNT/worker-queue \
    --no-cli-pager
}
Check the SQS queue messages from inside the demo application (this fails until the Pod is given AWS credentials)
{
  AWS_REGION=$(aws ec2 describe-availability-zones --output text --query 'AvailabilityZones[0].[RegionName]')
  AWS_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
  kubectl exec -it deploy/worker -- aws sqs receive-message \
    --queue-url https://sqs.$AWS_REGION.amazonaws.com/$AWS_ACCOUNT/worker-queue \
    --no-cli-pager
}
Install the EKS Pod Identity Agent
eksctl create addon --cluster mycluster --name eks-pod-identity-agent
Verify the EKS Pod Identity Agent is running
kubectl get pod -n kube-system -l app.kubernetes.io/name=eks-pod-identity-agent
Create a Pod Identity association
eksctl create podidentityassociation \
  --cluster mycluster \
  --namespace default \
  --service-account-name worker \
  --permission-policy-arns=arn:aws:iam::aws:policy/AmazonSQSFullAccess \
  --create-service-account true
Check the created Pod Identity association
eksctl get podidentityassociation --cluster mycluster
Check the created ServiceAccount
kubectl describe sa worker
Assign the ServiceAccount created above to the demo application Pods
kubectl patch deploy worker --type=json \
  -p='[{"op": "replace", "path": "/spec/template/spec/serviceAccountName", "value": "worker"}]'
Verify the ServiceAccount was assigned
kubectl get pods -l app=worker \
  -o jsonpath='{.items[0].spec.serviceAccountName}{"\n"}'
Check the SQS queue messages from inside the demo application
{
  AWS_REGION=$(aws ec2 describe-availability-zones --output text --query 'AvailabilityZones[0].[RegionName]')
  AWS_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
  kubectl exec -it deploy/worker -- aws sqs receive-message \
    --queue-url https://sqs.$AWS_REGION.amazonaws.com/$AWS_ACCOUNT/worker-queue \
    --no-cli-pager
}
Install KEDA
{
  helm repo add kedacore https://kedacore.github.io/charts
  helm repo update
  kubectl create ns keda
  helm install keda kedacore/keda --namespace keda
}
Verify the KEDA installation
kubectl get pod -n keda
Configure autoscaling based on the number of messages in the SQS queue
{
  AWS_REGION=$(aws ec2 describe-availability-zones --output text --query 'AvailabilityZones[0].[RegionName]')
  cat <<EOF | kubectl apply -f -
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: worker
spec:
  podIdentity:
    provider: aws
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker
spec:
  minReplicaCount: 1
  maxReplicaCount: 20
  scaleTargetRef:
    name: worker
  triggers:
  - type: aws-sqs-queue
    authenticationRef:
      name: worker
    metadata:
      queueURL: worker-queue
      queueLength: "5"
      awsRegion: $AWS_REGION
EOF
}
Check the ScaledObject status
kubectl describe scaledobjects.keda.sh worker
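The aws-sqs-queue scaler targets `queueLength` messages per replica, so the desired replica count is roughly the visible message count divided by that target, rounded up and clamped to the min/max bounds in the ScaledObject above. A sketch of that arithmetic:

```python
import math

# KEDA aws-sqs-queue scaling sketch: ceil(messages / queueLength), clamped
# to [minReplicaCount, maxReplicaCount] from the ScaledObject above.
def keda_desired(messages, queue_length=5, min_r=1, max_r=20):
    return max(min_r, min(max_r, math.ceil(messages / queue_length)))

print(keda_desired(0))    # → 1  (never scales below minReplicaCount)
print(keda_desired(10))   # → 2  (the 10 messages sent later in this lab)
print(keda_desired(123))  # → 20 (clamped to maxReplicaCount)
```

Under the hood KEDA creates and drives an ordinary HPA against an external metric, which is why `kubectl get hpa` shows an object in the steps below.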
Create a Pod Identity association for the KEDA operator
eksctl create podidentityassociation \
  --cluster mycluster \
  --namespace keda \
  --service-account-name keda-operator \
  --permission-policy-arns=arn:aws:iam::aws:policy/AmazonSQSReadOnlyAccess
Recreate the KEDA operator Pod
kubectl delete pod -n keda -l app=keda-operator
Check the events on the ScaledObject
kubectl describe scaledobjects.keda.sh worker
Check the ScaledObject status
kubectl get scaledobjects.keda.sh worker
Check the HPA
kubectl get hpa
Send 10 messages to the SQS queue
{
  AWS_REGION=$(aws ec2 describe-availability-zones --output text --query 'AvailabilityZones[0].[RegionName]')
  AWS_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
  for i in {1..10}
  do
    aws sqs send-message \
      --queue-url https://sqs.$AWS_REGION.amazonaws.com/$AWS_ACCOUNT/worker-queue \
      --message-body "message-$i" \
      --no-cli-pager
  done
}
Check the events on the ScaledObject
kubectl describe scaledobjects.keda.sh worker
Check the ScaledObject status
kubectl get scaledobjects.keda.sh worker
Check the HPA
kubectl get hpa
Check the events on the HPA
kubectl describe hpa
Purge all messages from the SQS queue
{
  AWS_REGION=$(aws ec2 describe-availability-zones --output text --query 'AvailabilityZones[0].[RegionName]')
  AWS_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
  aws sqs purge-queue \
    --queue-url https://sqs.$AWS_REGION.amazonaws.com/$AWS_ACCOUNT/worker-queue
}
Check the HPA
kubectl get hpa
Check the events on the HPA
kubectl describe hpa
Delete resources
{
  eksctl delete podidentityassociation \
    --cluster mycluster \
    --namespace keda \
    --service-account-name keda-operator
  kubectl delete scaledobjects.keda.sh worker
  kubectl delete triggerauthentications.keda.sh worker
  helm uninstall keda --namespace keda
  kubectl delete ns keda
  eksctl delete podidentityassociation \
    --cluster mycluster \
    --namespace default \
    --service-account-name worker
  eksctl delete addon --cluster mycluster --name eks-pod-identity-agent
  AWS_REGION=$(aws ec2 describe-availability-zones --output text --query 'AvailabilityZones[0].[RegionName]')
  AWS_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
  aws sqs delete-queue \
    --queue-url https://sqs.$AWS_REGION.amazonaws.com/$AWS_ACCOUNT/worker-queue
  kubectl delete deploy worker
}