# Lab

### Manual Scaling

1. Create a Deployment

   ```
   kubectl create deployment nginx --image=nginx --replicas=1
   ```
2. Check the number of Pods

   ```
   kubectl get pod -l app=nginx
   ```
3. Scale the Deployment's *replicas* to 3

   ```
   kubectl scale deployment nginx --replicas=3 
   ```
4. Check the number of Pods

   ```
   kubectl get pod -l app=nginx
   ```
5. Check the ReplicaSet

   ```
   kubectl get rs -l app=nginx
   ```
6. Scale the Deployment's *replicas* to 2

   ```
   kubectl scale deployment nginx --replicas=2
   ```
7. If the Deployment currently has 1 *replica*, scale it to 3

   ```
   kubectl scale deployment nginx --current-replicas=1 --replicas=3 
   ```
8. Create a new Deployment

   ```
   kubectl create deployment httpd --image=httpd --replicas=1
   ```
9. Scale every Deployment that currently has 1 *replica* to 3

   ```
   kubectl scale deployment --current-replicas=1 --replicas=3 --all
   ```
10. Check the number of Pods

    ```
    kubectl get pod -l 'app in (nginx,httpd)'
    ```
11. Scale all Deployments to 5 *replicas*

    ```
    kubectl scale deployment --replicas=5 --all
    ```
12. Check the number of Pods

    ```
    kubectl get pod -l 'app in (nginx,httpd)'
    ```
13. Delete all Deployments

    ```
    kubectl delete deployment -l 'app in (nginx,httpd)'
    ```

### HPA (Horizontal Pod Autoscaler)

1. Deploy the demo application - <https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#run-and-expose-php-apache-server>

   ```
   cat <<EOF | kubectl apply -f -
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: php-apache
   spec:
     selector:
       matchLabels:
         app: php-apache
     replicas: 1
     template:
       metadata:
         labels:
           app: php-apache
       spec:
         containers:
         - name: php-apache
           image: registry.k8s.io/hpa-example
           ports:
           - containerPort: 80
           resources:
             limits:
               cpu: 500m
             requests:
               cpu: 200m
   ---
   apiVersion: v1
   kind: Service
   metadata:
     name: php-apache
     labels:
       app: php-apache
   spec:
     ports:
     - port: 80
     selector:
       app: php-apache
   EOF
   ```
2. Check the Pod's resource usage

   ```
   kubectl top pod -l app=php-apache --use-protocol-buffers
   ```
3. Install the Metrics Server - <https://github.com/kubernetes-sigs/metrics-server#kubernetes-metrics-server>

   ```
   kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
   ```
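
   On lab clusters whose kubelet serving certificates do not include the node IP, the Metrics Server may stay unready after this step; in that case its documented `--kubelet-insecure-tls` flag can be appended with a patch like the one below (a lab-only workaround, not for production):

   ```
   kubectl -n kube-system patch deployment metrics-server --type=json \
   -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'
   ```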
4. List the APIs registered with the API server

   ```
   kubectl get apiservices.apiregistration.k8s.io
   ```
5. Inspect the details of the *v1beta1.metrics.k8s.io* API

   ```
   kubectl get apiservices.apiregistration.k8s.io v1beta1.metrics.k8s.io -o yaml
   ```
6. Check resource usage for all Pods

   ```
   kubectl top pod -A --use-protocol-buffers
   ```
7. Check resource usage for all Nodes

   ```
   kubectl top node --use-protocol-buffers
   ```
8. Check the Metrics Server logs

   ```
   kubectl -n kube-system logs deploy/metrics-server
   ```
9. Change the Metrics Server log level

   ```
   kubectl -n kube-system patch deployment metrics-server --type=json \
   -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--v=6"}]'
   ```
10. Check the Metrics Server logs - *wait until the new Pod is up before checking*

    ```
    kubectl -n kube-system logs deploy/metrics-server
    ```
11. Create a Pod with permission to access the metrics exposed by the kubelet

    ```
    cat <<EOF | kubectl apply -f -
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: kubelet-access
    rules:
    - apiGroups: [""]
      resources: 
        - nodes/stats
        - nodes/metrics
      verbs: ["get"]
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: kubelet-access
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: kubelet-access
      namespace: default
    subjects:
    - kind: ServiceAccount
      name: kubelet-access
      namespace: default
    roleRef:
      kind: ClusterRole
      name: kubelet-access
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: curl
    spec:
      serviceAccountName: kubelet-access
      containers:
      - image: curlimages/curl
        name: curl
        command: ["sleep", "3600"]
        env:
          - name: HOST_IP
            valueFrom:
              fieldRef:
                fieldPath: status.hostIP
    EOF
    ```
12. From the Pod created above, call the endpoint that the Metrics Server scrapes for metrics - <https://github.com/kubernetes-sigs/metrics-server/blob/4436807eec6b07ea649444529eb3b46ddbbd8914/pkg/scraper/client/resource/client.go#L77>

    ```
    kubectl exec curl -- \
    sh -c 'curl -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://$HOST_IP:10250/metrics/resource -k'
    ```
13. Check all metrics exposed by the kubelet

    ```
    kubectl exec curl -- \
    sh -c 'curl -s -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://$HOST_IP:10250/stats/summary -k'
    ```
14. Check only the CPU and memory metrics exposed by the kubelet

    ```
    kubectl exec curl -- \
    sh -c 'curl -s -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://$HOST_IP:10250/stats/summary?only_cpu_and_memory=true -k'
    ```
15. Call the endpoint that the Metrics Server scrapes, this time through the API server

    ```
    kubectl get --raw /api/v1/nodes/$(kubectl get node -o=jsonpath='{.items[0].metadata.name}')/proxy/metrics/resource
    ```
16. Configure autoscaling (HPA)

    ```
    kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=5
    ```
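
    For reference, the `kubectl autoscale` command above is shorthand for an `autoscaling/v2` HorizontalPodAutoscaler object; an equivalent manifest would look roughly like this:

    ```
    cat <<EOF | kubectl apply -f -
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: php-apache
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: php-apache
      minReplicas: 1
      maxReplicas: 5
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 50
    EOF
    ```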
17. Check the status of the HPA created above

    ```
    kubectl get hpa php-apache
    ```
18. Create a Pod that generates load on the demo application

    ```
    kubectl run load-generator --image=busybox:1.28 -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
    ```
19. Monitor the HPA status

    ```
    kubectl get hpa php-apache -w
    ```
20. Press Ctrl+C to stop monitoring the HPA, then verify that Pods were actually created

    ```
    kubectl get pod -l app=php-apache
    ```
21. Check the Pods' resource usage

    ```
    kubectl top pod -l app=php-apache --use-protocol-buffers
    ```
22. Delete the load-generating Pod

    ```
    kubectl delete pod load-generator
    ```
23. Monitor the HPA status

    ```
    kubectl get hpa php-apache -w
    ```
24. Press Ctrl+C to stop monitoring the HPA, then inspect the HPA details

    ```
    kubectl describe hpa php-apache
    ```
25. Check the Pods' resource usage

    ```
    kubectl top pod -l app=php-apache --use-protocol-buffers
    ```
26. Review how HPA scaling behaves - <https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#stabilization-window>, <https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/853-configurable-hpa-scale-velocity/README.md>
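
    As the linked pages describe, scaling velocity and the stabilization window can be tuned per HPA through the `behavior` field; a fragment of the HPA spec with illustrative values:

    ```
    spec:
      behavior:
        scaleDown:
          stabilizationWindowSeconds: 300   # use the highest recommendation from the last 5 minutes
          policies:
          - type: Pods
            value: 1
            periodSeconds: 60               # remove at most 1 Pod per minute
    ```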
27. Check the HPA status

    ```
    kubectl get hpa php-apache
    ```
28. Check the number of Pods

    ```
    kubectl get pod -l app=php-apache
    ```
29. Delete the demo application and related resources

    ```
    {
        kubectl delete hpa php-apache
        kubectl delete deployment php-apache
        kubectl delete svc php-apache
        kubectl delete pod curl
        kubectl delete clusterrole kubelet-access
        kubectl delete clusterrolebinding kubelet-access
        kubectl delete sa kubelet-access
    }
    ```

### Cluster Autoscaler

1. Create a Deployment

   ```
   cat <<EOF | kubectl apply -f -
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: nginx
     labels:
       app: nginx
   spec:
     replicas: 5
     selector:
       matchLabels:
         app: nginx
     template:
       metadata:
         labels:
           app: nginx
       spec:
         containers:
         - name: nginx
           image: nginx
           ports:
           - containerPort: 80
           resources:
             requests:
               cpu: 1
               memory: 1Gi
             limits:
               cpu: 2
               memory: 2Gi
   EOF
   ```
2. Check the created Deployment and Pods

   ```
   kubectl get deploy,pod -l app=nginx
   ```
3. If any Pods are stuck in Pending, find out why with the command below

   ```
   kubectl describe pod \
   $(kubectl get pod -o=jsonpath='{.items[?(@.status.phase=="Pending")].metadata.name}')
   ```
4. Create the IAM policy JSON file to grant to the Cluster Autoscaler

   ```
   cat <<EOF > cluster-autoscaler-policy.json
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "autoscaling:DescribeAutoScalingGroups",
                   "autoscaling:DescribeAutoScalingInstances",
                   "autoscaling:DescribeLaunchConfigurations",
                   "autoscaling:DescribeScalingActivities",
                   "ec2:DescribeImages",
                   "ec2:DescribeInstanceTypes",
                   "ec2:DescribeLaunchTemplateVersions",
                   "ec2:GetInstanceTypesFromInstanceRequirements",
                   "eks:DescribeNodegroup"
               ],
               "Resource": [
                   "*"
               ]
           },
           {
               "Effect": "Allow",
               "Action": [
                   "autoscaling:SetDesiredCapacity",
                   "autoscaling:TerminateInstanceInAutoScalingGroup"
               ],
               "Resource": [
                   "*"
               ]
           }
       ]
   }
   EOF
   ```
5. Create the IAM policy

   ```
   aws iam create-policy \
       --policy-name AmazonEKSClusterAutoscalerPolicy \
       --policy-document file://cluster-autoscaler-policy.json
   ```
6. Look up the EKS cluster name from the Node labels and store it in an environment variable

   ```
   {
       export CLUSTER_NAME=$(kubectl get node \
       -o=jsonpath='{.items[0].metadata.labels.alpha\.eksctl\.io\/cluster-name}')
       echo $CLUSTER_NAME
   }
   ```
7. Look up the AWS account ID that the EKS cluster belongs to and store it in an environment variable

   ```
   {
       export ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
       echo $ACCOUNT_ID
   }
   ```
8. Enable the IAM OIDC provider

   ```
   eksctl utils associate-iam-oidc-provider --region=ap-northeast-2 \
   --cluster=$CLUSTER_NAME --approve
   ```
9. Create a ServiceAccount

   ```
   eksctl create iamserviceaccount \
   --cluster=$CLUSTER_NAME \
   --namespace=kube-system \
   --name=cluster-autoscaler \
   --attach-policy-arn=arn:aws:iam::${ACCOUNT_ID}:policy/AmazonEKSClusterAutoscalerPolicy \
   --override-existing-serviceaccounts \
   --approve
   ```
10. Verify that the ServiceAccount was created

    ```
    kubectl -n kube-system describe sa cluster-autoscaler
    ```
11. Extract the IAM role name annotated on the ServiceAccount and store it in an environment variable

    ```
    {
       IAM_ROLE_NAME=$(kubectl -n kube-system get sa cluster-autoscaler \
       -o=jsonpath='{.metadata.annotations.eks\.amazonaws\.com\/role-arn}' \
       | grep -oP '(?<=role.).*')
       echo $IAM_ROLE_NAME
    }
    ```
12. Check the IAM policies attached to the IAM role linked to the ServiceAccount

    ```
    aws iam list-attached-role-policies --role-name $IAM_ROLE_NAME
    ```
13. Check the trust policy on the IAM role linked to the ServiceAccount

    ```
    aws iam get-role --role-name $IAM_ROLE_NAME
    ```
14. Install the Cluster Autoscaler

    ```
    {
        helm repo add autoscaler https://kubernetes.github.io/autoscaler
        helm install cluster-autoscaler autoscaler/cluster-autoscaler \
        --namespace kube-system \
        --set autoDiscovery.clusterName=$CLUSTER_NAME \
        --set rbac.serviceAccount.create=false \
        --set rbac.serviceAccount.name=cluster-autoscaler \
        --set fullnameOverride=cluster-autoscaler
    }
    ```
15. Check the Cluster Autoscaler logs - *look for the ASG map*

    ```
    kubectl -n kube-system logs deploy/cluster-autoscaler
    ```
16. Check the Cluster Autoscaler configuration

    ```
    kubectl -n kube-system get deploy cluster-autoscaler \
    -o=jsonpath='{.spec.template.spec.containers[*].command}' | jq
    ```
17. Look up the name of the Auto Scaling group associated with the EKS node group and store it in an environment variable

    ```
    {
      export ASG_NAME=$(aws autoscaling describe-auto-scaling-groups --query \
      "AutoScalingGroups[? Tags[? (Key=='eks:cluster-name') && Value=='$CLUSTER_NAME']].AutoScalingGroupName" --output text)
      echo $ASG_NAME
    }
    ```
18. Check the Auto Scaling group's instance counts

    ```
    aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names $ASG_NAME \
    --query "AutoScalingGroups[0].{MinSize: MinSize, MaxSize: MaxSize, DesiredCapacity: DesiredCapacity}" 
    ```
19. Check the tags on the Auto Scaling group - <https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html>

    ```
    aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names $ASG_NAME \
    --query "AutoScalingGroups[0].Tags"
    ```
20. Check all values of the installed Helm chart

    ```
    helm get values cluster-autoscaler -n kube-system --all
    ```
21. Update the Cluster Autoscaler configuration

    ```
    helm upgrade cluster-autoscaler autoscaler/cluster-autoscaler \
    --namespace kube-system \
    --set autoDiscovery.clusterName=$CLUSTER_NAME \
    --set rbac.serviceAccount.create=false \
    --set rbac.serviceAccount.name=cluster-autoscaler \
    --set fullnameOverride=cluster-autoscaler \
    --set awsRegion=ap-northeast-2
    ```
22. Check the Cluster Autoscaler configuration

    ```
    kubectl -n kube-system get deploy cluster-autoscaler \
    -o=jsonpath='{.spec.template.spec.containers[*].command}' | jq 
    ```
23. Check the Cluster Autoscaler logs - *look for the ASG map*

    ```
    kubectl -n kube-system logs deploy/cluster-autoscaler
    ```
24. Verify that the previously Pending Pods have been created

    ```
    kubectl get pod -l app=nginx
    ```
25. Check the number of Nodes

    ```
    kubectl get node
    ```
26. Check the Auto Scaling group's instance counts

    ```
    aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names $ASG_NAME \
    --query "AutoScalingGroups[0].{MinSize: MinSize, MaxSize: MaxSize, DesiredCapacity: DesiredCapacity}" 
    ```
27. Check the Auto Scaling group's activity history

    ```
    aws autoscaling describe-scaling-activities --auto-scaling-group-name $ASG_NAME
    ```
28. Review which Nodes the Cluster Autoscaler will not remove - <https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node>
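
    One blocker called out in the FAQ above is the `safe-to-evict` annotation; a Pod (or Pod template) carrying the fragment below keeps the Cluster Autoscaler from scaling down its Node:

    ```
    metadata:
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
    ```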
29. Delete the Deployment

    ```
    kubectl delete deployment nginx
    ```
30. List the Pods

    ```
    kubectl get pod -l app=nginx
    ```
31. Watch the Nodes being removed

    ```
    kubectl get node
    ```
32. Check the Cluster Autoscaler logs

    ```
    kubectl -n kube-system logs deploy/cluster-autoscaler
    ```
33. Review how Cluster Autoscaler scale-down works - <https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work>
34. Delete resources

    ```
    rm cluster-autoscaler-policy.json
    ```

### Karpenter

1. Create the trust policy document for the Fargate pod execution role

   ```
   cat <<EOF > pod-execution-role-trust-policy.json
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Principal": {
                   "Service": "eks-fargate-pods.amazonaws.com"
               },
               "Action": "sts:AssumeRole"
           }
       ]
   }
   EOF
   ```
2. Create the Fargate pod execution role

   ```
   aws iam create-role \
   --role-name AmazonEKSFargatePodExecutionRole \
   --assume-role-policy-document file://"pod-execution-role-trust-policy.json"
   ```
3. Attach the required policy to the Fargate pod execution role

   ```
   aws iam attach-role-policy \
   --policy-arn arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy \
   --role-name AmazonEKSFargatePodExecutionRole
   ```
4. Create a Fargate profile

   ```
   aws eks create-fargate-profile \
   --fargate-profile-name core-dns \
   --cluster-name $CLUSTER_NAME \
   --pod-execution-role-arn arn:aws:iam::$ACCOUNT_ID:role/AmazonEKSFargatePodExecutionRole \
   --selectors '[{"namespace": "kube-system", "labels": {"k8s-app": "kube-dns"}}]'
   ```
5. Check the Fargate profile status

   ```
   aws eks describe-fargate-profile \
   --cluster-name $CLUSTER_NAME \
   --fargate-profile-name core-dns
   ```
6. Recreate the CoreDNS Pods

   ```
   {
       kubectl -n kube-system scale deployment coredns --replicas=0
       kubectl -n kube-system scale deployment coredns --replicas=2
   }
   ```
7. Check the CoreDNS Pod status

   ```
   kubectl -n kube-system get pod -l k8s-app=kube-dns
   ```
8. Check which Nodes the CoreDNS Pods landed on

   ```
   kubectl -n kube-system get pod -l k8s-app=kube-dns \
   -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName
   ```
9. Create the IAM policy document to grant to Karpenter

   ```
   cat <<EOF > karpenter-controller-policy.json
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Action": [
                   "ssm:GetParameter",
                   "ec2:DescribeImages",
                   "ec2:RunInstances",
                   "ec2:DescribeSubnets",
                   "ec2:DescribeSecurityGroups",
                   "ec2:DescribeLaunchTemplates",
                   "ec2:DescribeInstances",
                   "ec2:DescribeInstanceTypes",
                   "ec2:DescribeInstanceTypeOfferings",
                   "ec2:DescribeAvailabilityZones",
                   "ec2:DeleteLaunchTemplate",
                   "ec2:CreateTags",
                   "ec2:CreateLaunchTemplate",
                   "ec2:CreateFleet",
                   "ec2:DescribeSpotPriceHistory",
                   "pricing:GetProducts"
               ],
               "Effect": "Allow",
               "Resource": "*",
               "Sid": "Karpenter"
           },
           {
               "Action": "ec2:TerminateInstances",
               "Condition": {
                   "StringLike": {
                       "ec2:ResourceTag/karpenter.sh/nodepool": "*"
                   }
               },
               "Effect": "Allow",
               "Resource": "*",
               "Sid": "ConditionalEC2Termination"
           },
           {
               "Effect": "Allow",
               "Action": "iam:PassRole",
               "Resource": "arn:aws:iam::${ACCOUNT_ID}:role/KarpenterNodeRole-${CLUSTER_NAME}",
               "Sid": "PassNodeIAMRole"
           },
           {
               "Effect": "Allow",
               "Action": "eks:DescribeCluster",
               "Resource": "arn:aws:eks:${AWS_REGION}:${ACCOUNT_ID}:cluster/${CLUSTER_NAME}",
               "Sid": "EKSClusterEndpointLookup"
           },
           {
               "Sid": "AllowScopedInstanceProfileCreationActions",
               "Effect": "Allow",
               "Resource": "*",
               "Action": [
               "iam:CreateInstanceProfile"
               ],
               "Condition": {
               "StringEquals": {
                   "aws:RequestTag/kubernetes.io/cluster/${CLUSTER_NAME}": "owned",
                   "aws:RequestTag/topology.kubernetes.io/region": "${AWS_REGION}"
               },
               "StringLike": {
                   "aws:RequestTag/karpenter.k8s.aws/ec2nodeclass": "*"
               }
               }
           },
           {
               "Sid": "AllowScopedInstanceProfileTagActions",
               "Effect": "Allow",
               "Resource": "*",
               "Action": [
               "iam:TagInstanceProfile"
               ],
               "Condition": {
               "StringEquals": {
                   "aws:ResourceTag/kubernetes.io/cluster/${CLUSTER_NAME}": "owned",
                   "aws:ResourceTag/topology.kubernetes.io/region": "${AWS_REGION}",
                   "aws:RequestTag/kubernetes.io/cluster/${CLUSTER_NAME}": "owned",
                   "aws:RequestTag/topology.kubernetes.io/region": "${AWS_REGION}"
               },
               "StringLike": {
                   "aws:ResourceTag/karpenter.k8s.aws/ec2nodeclass": "*",
                   "aws:RequestTag/karpenter.k8s.aws/ec2nodeclass": "*"
               }
               }
           },
           {
               "Sid": "AllowScopedInstanceProfileActions",
               "Effect": "Allow",
               "Resource": "*",
               "Action": [
               "iam:AddRoleToInstanceProfile",
               "iam:RemoveRoleFromInstanceProfile",
               "iam:DeleteInstanceProfile"
               ],
               "Condition": {
               "StringEquals": {
                   "aws:ResourceTag/kubernetes.io/cluster/${CLUSTER_NAME}": "owned",
                   "aws:ResourceTag/topology.kubernetes.io/region": "${AWS_REGION}"
               },
               "StringLike": {
                   "aws:ResourceTag/karpenter.k8s.aws/ec2nodeclass": "*"
               }
               }
           },
           {
               "Sid": "AllowInstanceProfileReadActions",
               "Effect": "Allow",
               "Resource": "*",
               "Action": "iam:GetInstanceProfile"
           }
       ]
   }
   EOF
   ```
10. Create the IAM policy for Karpenter

    ```
    aws iam create-policy \
    --policy-name KarpenterRole-${CLUSTER_NAME} \
    --policy-document file://karpenter-controller-policy.json
    ```
11. Create the ServiceAccount for Karpenter

    ```
    eksctl create iamserviceaccount \
    --cluster=$CLUSTER_NAME \
    --namespace=karpenter \
    --name=karpenter \
    --attach-policy-arn=arn:aws:iam::${ACCOUNT_ID}:policy/KarpenterRole-${CLUSTER_NAME} \
    --override-existing-serviceaccounts \
    --approve
    ```
12. Create a Fargate profile

    ```
    aws eks create-fargate-profile \
    --fargate-profile-name karpenter \
    --cluster-name $CLUSTER_NAME \
    --pod-execution-role-arn arn:aws:iam::$ACCOUNT_ID:role/AmazonEKSFargatePodExecutionRole \
    --selectors '[{"namespace": "karpenter"}]'
    ```
13. Check the Fargate profile status

    ```
    aws eks describe-fargate-profile \
    --cluster-name $CLUSTER_NAME \
    --fargate-profile-name karpenter
    ```
14. Install Karpenter

    ```
    helm install karpenter oci://public.ecr.aws/karpenter/karpenter \
    --namespace karpenter \
    --set settings.clusterName=${CLUSTER_NAME} \
    --set serviceAccount.create=false \
    --set serviceAccount.name=karpenter \
    --set replicas=1
    ```
15. Verify that Karpenter is running

    ```
    kubectl get pod -n karpenter -o wide
    ```
16. Check the Karpenter logs

    ```
    kubectl -n karpenter logs deploy/karpenter
    ```
17. Create the IAM role for the Nodes

    ```
    {
    cat <<EOF > node-trust-policy.json
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": "ec2.amazonaws.com"
                },
                "Action": "sts:AssumeRole"
            }
        ]
    }
    EOF

    aws iam create-role --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
    --assume-role-policy-document file://node-trust-policy.json
    }
    ```
18. Attach the permissions required by EKS nodes

    ```
    {
        aws iam attach-role-policy --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
        --policy-arn "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"

        aws iam attach-role-policy --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
        --policy-arn "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"

        aws iam attach-role-policy --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
        --policy-arn "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"

        aws iam attach-role-policy --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
        --policy-arn "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
    }
    ```
19. Create an EKS access entry for the node role

    ```
    aws eks create-access-entry \
    --cluster-name $CLUSTER_NAME \
    --type EC2_LINUX \
    --principal-arn arn:aws:iam::$ACCOUNT_ID:role/KarpenterNodeRole-${CLUSTER_NAME}
    ```
20. Create an EC2NodeClass

    ```
    cat <<EOF | kubectl apply -f -
    apiVersion: karpenter.k8s.aws/v1
    kind: EC2NodeClass
    metadata:
      name: default
    spec:
      amiSelectorTerms: 
      - alias: al2023@latest
      role: KarpenterNodeRole-${CLUSTER_NAME}
      subnetSelectorTerms:
      - tags:
          alpha.eksctl.io/cluster-name: ${CLUSTER_NAME}
          kubernetes.io/role/elb: "1"
      securityGroupSelectorTerms:
      - tags:
          aws:eks:cluster-name: ${CLUSTER_NAME}
      blockDeviceMappings:
      - deviceName: /dev/xvda
        ebs:
          volumeSize: 20Gi
          volumeType: gp3
          encrypted: true
    EOF
    ```
21. Check the status of the created EC2NodeClass

    ```
    kubectl describe ec2nodeclasses.karpenter.k8s.aws default
    ```
22. Create a NodePool

    ```
    cat <<EOF | kubectl apply -f -
    apiVersion: karpenter.sh/v1
    kind: NodePool
    metadata:
      name: default
    spec:
      template:
        spec:
          requirements:
          - key: kubernetes.io/arch
            operator: In
            values: ["amd64"]
          - key: kubernetes.io/os
            operator: In
            values: ["linux"]
          - key: karpenter.sh/capacity-type
            operator: In
            values: ["on-demand", "spot"]
          - key: karpenter.k8s.aws/instance-category
            operator: In
            values: ["t","m","c","r"]
          - key: karpenter.k8s.aws/instance-generation
            operator: Gt
            values: ["2"]
          - key: karpenter.k8s.aws/instance-memory
            operator: Gt
            values: ["2048"]
          nodeClassRef:
            group: karpenter.k8s.aws
            kind: EC2NodeClass
            name: default
      disruption:
        consolidationPolicy: WhenEmptyOrUnderutilized
        consolidateAfter: 1m
    EOF
    ```
23. Check the status of the created NodePool

    ```
    kubectl describe nodepools.karpenter.sh default
    ```
24. Disable the Cluster Autoscaler

    ```
    kubectl -n kube-system scale deployment cluster-autoscaler --replicas=0
    ```
25. Delete the node group

    ```
    eksctl delete nodegroup \
    --cluster=$CLUSTER_NAME \
    --name=nodegroup \
    --disable-eviction \
    --wait
    ```
26. Check the Pod status

    ```
    kubectl get pod -A
    ```
27. List the Nodes

    ```
    kubectl get node
    ```
28. Create a Deployment

    ```
    cat <<EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      replicas: 5
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80
            resources:
              requests:
                cpu: 1
                memory: 1Gi
              limits:
                cpu: 2
                memory: 2Gi
    EOF
    ```
29. Check the Pod status

    ```
    kubectl get pod -l app=nginx
    ```
30. List the Nodes

    ```
    kubectl get node
    ```
31. Check the NodeClaims

    ```
    kubectl get nodeclaim
    ```
32. Check the Karpenter logs

    ```
    kubectl -n karpenter logs deploy/karpenter
    ```
33. Delete the Cluster Autoscaler

    ```
    {
        helm uninstall cluster-autoscaler -n kube-system
        eksctl delete iamserviceaccount \
        --cluster=$CLUSTER_NAME \
        --namespace=kube-system \
        --name=cluster-autoscaler
    }
    ```
34. Delete the Deployment

    ```
    kubectl delete deploy nginx
    ```
35. List the Pods

    ```
    kubectl get pod -A
    ```
36. Check the Karpenter logs

    ```
    kubectl -n karpenter logs deploy/karpenter
    ```
37. List the Nodes

    ```
    kubectl get node
    ```
38. Check the NodeClaims

    ```
    kubectl get nodeclaim
    ```

### KEDA

1. Deploy the demo application

   ```
   kubectl create deploy worker --image=amazon/aws-cli -- sleep infinity
   ```
2. Verify that the Pod was created

   ```
   kubectl get pod -l app=worker
   ```
3. Create an SQS queue

   ```
   aws sqs create-queue --queue-name worker-queue
   ```
4. Send a message to the SQS queue

   ```
   {
       AWS_REGION=$(aws ec2 describe-availability-zones --output text --query 'AvailabilityZones[0].[RegionName]')
       AWS_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
       
       aws sqs send-message \
       --queue-url https://sqs.$AWS_REGION.amazonaws.com/$AWS_ACCOUNT/worker-queue \
       --message-body "my first message"
   }
   ```
5. Receive the message

   ```
   {
       AWS_REGION=$(aws ec2 describe-availability-zones --output text --query 'AvailabilityZones[0].[RegionName]')
       AWS_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
       
       aws sqs receive-message \
       --queue-url https://sqs.$AWS_REGION.amazonaws.com/$AWS_ACCOUNT/worker-queue \
       --no-cli-pager
   }
   ```
6. Check the messages in the SQS queue from inside the demo application

   ```
   {
       AWS_REGION=$(aws ec2 describe-availability-zones --output text --query 'AvailabilityZones[0].[RegionName]')
       AWS_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
       
       kubectl exec -it deploy/worker -- aws sqs receive-message \
       --queue-url https://sqs.$AWS_REGION.amazonaws.com/$AWS_ACCOUNT/worker-queue \
       --no-cli-pager
   }
   ```
7. Install the EKS Pod Identity Agent

   ```
   eksctl create addon --cluster mycluster --name eks-pod-identity-agent
   ```
8. Verify that the EKS Pod Identity Agent was created

   ```
   kubectl get pod -n kube-system -l app.kubernetes.io/name=eks-pod-identity-agent
   ```
9. Create a Pod Identity association

   ```
   eksctl create podidentityassociation \
   --cluster mycluster \
   --namespace default \
   --service-account-name worker \
   --permission-policy-arns=arn:aws:iam::aws:policy/AmazonSQSFullAccess \
   --create-service-account true
   ```
10. Check the created Pod Identity association

    ```
    eksctl get podidentityassociation --cluster mycluster
    ```
11. Check the created ServiceAccount

    ```
    kubectl describe sa worker
    ```
12. Assign the ServiceAccount created above to the demo application Pod

    ```
    kubectl patch deploy worker --type=json \
    -p='[{"op": "add", "path": "/spec/template/spec/serviceAccountName", "value": "worker"}]'
    ```
13. Verify that the ServiceAccount was assigned

    ```
    kubectl get pods -l app=worker \
    -o jsonpath='{.items[0].spec.serviceAccountName}{"\n"}'
    ```
14. Check the messages in the SQS queue from inside the demo application again

    ```
    {
        AWS_REGION=$(aws ec2 describe-availability-zones --output text --query 'AvailabilityZones[0].[RegionName]')
        AWS_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
        
        kubectl exec -it deploy/worker -- aws sqs receive-message \
        --queue-url https://sqs.$AWS_REGION.amazonaws.com/$AWS_ACCOUNT/worker-queue \
        --no-cli-pager
    }
    ```
15. Install KEDA

    ```
    {
        helm repo add kedacore https://kedacore.github.io/charts
        helm repo update
        kubectl create ns keda
        helm install keda kedacore/keda --namespace keda
    }
    ```
16. Verify the KEDA installation

    ```
    kubectl get pod -n keda
    ```
17. Configure autoscaling based on the number of messages in the SQS queue

    ```
    {
        AWS_REGION=$(aws ec2 describe-availability-zones --output text --query 'AvailabilityZones[0].[RegionName]')
        cat <<EOF | kubectl apply -f -
    apiVersion: keda.sh/v1alpha1
    kind: TriggerAuthentication
    metadata:
      name: worker
    spec:
      podIdentity:
        provider: aws
    ---
    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: worker
    spec:
      minReplicaCount: 1
      maxReplicaCount: 20
      scaleTargetRef:
        name: worker
      triggers:
      - type: aws-sqs-queue
        authenticationRef:
          name: worker
        metadata:
          queueURL: worker-queue
          queueLength: "5"
          awsRegion: $AWS_REGION
    EOF
    }
    ```
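
    With `queueLength: "5"`, the scaler targets roughly one replica per 5 visible messages, i.e. ceil(messages / queueLength), bounded by `minReplicaCount` and `maxReplicaCount`; the arithmetic, sketched in shell with illustrative numbers:

    ```shell
    # Hypothetical illustration of the target-replica calculation for queueLength=5.
    messages=12
    queue_length=5
    replicas=$(( (messages + queue_length - 1) / queue_length ))  # ceiling division
    echo "$replicas"   # 12 messages / 5 per replica -> 3 replicas
    ```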
18. Check the ScaledObject status

    ```
    kubectl describe scaledobjects.keda.sh worker
    ```
19. Create a Pod Identity association

    ```
    eksctl create podidentityassociation \
    --cluster mycluster \
    --namespace keda \
    --service-account-name keda-operator \
    --permission-policy-arns=arn:aws:iam::aws:policy/AmazonSQSReadOnlyAccess
    ```
20. Recreate the KEDA operator Pod

    ```
    kubectl delete pod -n keda -l app=keda-operator
    ```
21. Check the events on the ScaledObject

    ```
    kubectl describe scaledobjects.keda.sh worker
    ```
22. Check the ScaledObject status

    ```
    kubectl get scaledobjects.keda.sh worker
    ```
23. Check the HPA

    ```
    kubectl get hpa
    ```
24. Send 10 messages to the SQS queue

    ```
    {
        AWS_REGION=$(aws ec2 describe-availability-zones --output text --query 'AvailabilityZones[0].[RegionName]')
        AWS_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
        
        for i in {1..10}
        do
          aws sqs send-message \
          --queue-url https://sqs.$AWS_REGION.amazonaws.com/$AWS_ACCOUNT/worker-queue \
          --message-body "message-$i" \
          --no-cli-pager
        done
    }
    ```
25. Check the events on the ScaledObject

    ```
    kubectl describe scaledobjects.keda.sh worker
    ```
26. Check the ScaledObject status

    ```
    kubectl get scaledobjects.keda.sh worker
    ```
27. Check the HPA

    ```
    kubectl get hpa
    ```
28. Check the events on the HPA

    ```
    kubectl describe hpa
    ```
29. Purge all messages from the SQS queue

    ```
    {
        AWS_REGION=$(aws ec2 describe-availability-zones --output text --query 'AvailabilityZones[0].[RegionName]')
        AWS_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
        
        aws sqs purge-queue \
        --queue-url https://sqs.$AWS_REGION.amazonaws.com/$AWS_ACCOUNT/worker-queue
    }
    ```
30. Check the HPA

    ```
    kubectl get hpa
    ```
31. Check the events on the HPA

    ```
    kubectl describe hpa
    ```
32. Delete resources

    ```
    {   
        eksctl delete podidentityassociation \
        --cluster mycluster \
        --namespace keda \
        --service-account-name keda-operator
        
        kubectl delete scaledobjects.keda.sh worker
        kubectl delete triggerauthentications.keda.sh worker
        
        helm uninstall keda --namespace keda
        kubectl delete ns keda
        
        eksctl delete podidentityassociation \
        --cluster mycluster \
        --namespace default \
        --service-account-name worker
        
        eksctl delete addon --cluster mycluster --name eks-pod-identity-agent
        
        AWS_REGION=$(aws ec2 describe-availability-zones --output text --query 'AvailabilityZones[0].[RegionName]')
        AWS_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
        aws sqs delete-queue \
        --queue-url https://sqs.$AWS_REGION.amazonaws.com/$AWS_ACCOUNT/worker-queue
        
        kubectl delete deploy worker
    }
    ```

