Hands-on Lab
Security Context
Create a Pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
EOF
Check which user is running the processes in the container
kubectl exec nginx -- id
Install the process utilities
kubectl exec nginx -- bash -c "apt update && apt install -y procps"
Check the running processes
kubectl exec nginx -- ps aux
Check the nginx user
kubectl exec nginx -- id nginx
Recreate the Pod
cat <<EOF | kubectl replace --force -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  securityContext:
    runAsNonRoot: true
  containers:
  - image: nginx
    name: nginx
EOF
Check whether the Pod was created
kubectl get pod nginx
If the container specified in the Pod is not running, find out why
kubectl describe pod nginx
Recreate the Pod
cat <<EOF | kubectl replace --force -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 101
  containers:
  - image: nginx
    name: nginx
EOF
Check whether the Pod was created
kubectl get pod nginx
If the container specified in the Pod is not running, find out why
kubectl describe pod nginx
Check the Pod logs
kubectl logs nginx
Check the NGINX configuration file
kubectl run nginx-tmp --image=nginx --rm -it --restart=Never \
  -- cat /etc/nginx/nginx.conf
Recreate the Pod - note the quoted heredoc delimiter, which prevents the shell from expanding the NGINX variables such as $remote_addr
cat <<'EOF' | kubectl replace --force -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx
data:
  nginx.conf: |
    worker_processes  auto;
    error_log  /var/log/nginx/error.log notice;
    pid        /var/run/nginx.pid;
    events {
        worker_connections  1024;
    }
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
        access_log  /var/log/nginx/access.log  main;
        sendfile        on;
        #tcp_nopush     on;
        keepalive_timeout  65;
        #gzip  on;
        include /etc/nginx/conf.d/*.conf;
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 101
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: nginx-conf
      mountPath: /etc/nginx/nginx.conf
      subPath: nginx.conf
  volumes:
  - name: nginx-conf
    configMap:
      name: nginx
      items:
      - key: nginx.conf
        path: nginx.conf
EOF
Check whether the Pod was created
kubectl get pod nginx
If the container specified in the Pod is not running, find out why
kubectl describe pod nginx
Check the Pod logs
kubectl logs nginx
Build a new container image from the Dockerfile below
FROM nginx
WORKDIR /app
RUN chown -R nginx:nginx /app && chmod -R 755 /app && \
    chown -R nginx:nginx /var/cache/nginx && \
    chown -R nginx:nginx /var/log/nginx && \
    chown -R nginx:nginx /etc/nginx/conf.d
RUN touch /var/run/nginx.pid && \
    chown -R nginx:nginx /var/run/nginx.pid
USER nginx
CMD ["nginx", "-g", "daemon off;"]
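The next step uses a prebuilt image (youngwjung/nginx-nonroot). If you prefer to build and publish your own, a minimal sketch follows; the registry/repository name is a placeholder you must replace with your own.

```shell
# Build the image from the Dockerfile above (run in the directory containing it)
docker build -t <your-registry>/nginx-nonroot .

# Push it to your registry so the cluster nodes can pull it
docker push <your-registry>/nginx-nonroot
```

If you use your own image, substitute its name for youngwjung/nginx-nonroot in the manifests below.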
Recreate the Pod
cat <<EOF | kubectl replace --force -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 101
  containers:
  - image: youngwjung/nginx-nonroot
    name: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: nginx-conf
      mountPath: /etc/nginx/nginx.conf
      subPath: nginx.conf
  volumes:
  - name: nginx-conf
    configMap:
      name: nginx
      items:
      - key: nginx.conf
        path: nginx.conf
EOF
Check the Pod status
kubectl get pod nginx
Check the Pod logs
kubectl logs nginx
Recreate the Pod - note the quoted heredoc delimiter, which prevents the shell from expanding the NGINX variables such as $remote_addr
cat <<'EOF' | kubectl replace --force -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx
data:
  nginx.conf: |
    worker_processes  auto;
    error_log  /var/log/nginx/error.log notice;
    pid        /var/run/nginx.pid;
    events {
        worker_connections  1024;
    }
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
        access_log  /var/log/nginx/access.log  main;
        sendfile        on;
        #tcp_nopush     on;
        keepalive_timeout  65;
        #gzip  on;
        server {
            listen       8080;
            server_name  localhost;
            location / {
                root   /usr/share/nginx/html;
                index  index.html index.htm;
            }
        }
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 101
  containers:
  - image: youngwjung/nginx-nonroot
    name: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: nginx-conf
      mountPath: /etc/nginx/nginx.conf
      subPath: nginx.conf
  volumes:
  - name: nginx-conf
    configMap:
      name: nginx
      items:
      - key: nginx.conf
        path: nginx.conf
EOF
Check the Pod status
kubectl get pod nginx
Verify that the container is running properly
kubectl exec nginx -- curl -s localhost:8080
Create a new Pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 101
  containers:
  - image: ubuntu
    name: ubuntu
    command: ["sleep", "3600"]
EOF
Check which user is running the processes in the container
kubectl exec ubuntu -- id
Check the running processes
kubectl exec ubuntu -- ps aux
Attempt to install NGINX
kubectl exec ubuntu -- bash -c "apt update && apt install -y nginx"
Recreate the Pod
cat <<EOF | kubectl replace --force -f -
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 101
  containers:
  - image: ubuntu
    name: ubuntu
    command: ["sleep", "3600"]
    securityContext:
      runAsUser: 0
EOF
Check whether the Pod was created successfully
kubectl get pod ubuntu
If the container specified in the Pod is not running, find out why
kubectl describe pod ubuntu
Recreate the Pod
cat <<EOF | kubectl replace --force -f -
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
spec:
  securityContext:
    runAsUser: 101
  containers:
  - image: ubuntu
    name: ubuntu
    command: ["sleep", "3600"]
    securityContext:
      runAsUser: 0
EOF
Check which user is running the processes in the container - the container-level securityContext overrides the Pod-level one
kubectl exec ubuntu -- id
Create a new Pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: alpine
spec:
  containers:
  - image: alpine
    name: alpine
    command: ["sleep", "3600"]
    volumeMounts:
    - name: dev
      mountPath: /mnt/dev
  volumes:
  - name: dev
    hostPath:
      path: /dev
EOF
Attempt to access the host's disk device through the created container
kubectl exec alpine -- head /mnt/dev/xvda
Check which user is running the processes in the container
kubectl exec alpine -- id
Recreate the Pod
cat <<EOF | kubectl replace --force -f -
apiVersion: v1
kind: Pod
metadata:
  name: alpine
spec:
  containers:
  - image: alpine
    name: alpine
    command: ["sleep", "3600"]
    volumeMounts:
    - name: dev
      mountPath: /mnt/dev
    securityContext:
      privileged: true
  volumes:
  - name: dev
    hostPath:
      path: /dev
EOF
Attempt to access the host's disk device through the created container
kubectl exec alpine -- head /mnt/dev/xvda
Delete the resources
{
  kubectl delete pod ubuntu alpine nginx
  kubectl delete cm nginx
}
Seccomp
Create a Pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
spec:
  containers:
  - image: ubuntu
    name: ubuntu
    command: ["sleep", "3600"]
EOF
Run a command that creates a new Linux namespace
kubectl exec -it ubuntu -- unshare
Recreate the Pod with the runtime's default seccomp profile - securityContext is immutable, so the Pod must be replaced rather than updated in place
cat <<EOF | kubectl replace --force -f -
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
  - image: ubuntu
    name: ubuntu
    command: ["sleep", "3600"]
EOF
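The lab stops after applying the profile; to see its effect, you can repeat the namespace test from the previous step. With RuntimeDefault applied, the unshare syscall is blocked by the runtime's default seccomp profile, so the command should now fail with an "Operation not permitted" error. A sketch, including cleanup of the Pod used in this section:

```shell
# Repeat the namespace-creation test - it should now be denied by seccomp
kubectl exec -it ubuntu -- unshare

# Clean up the Pod used in this section
kubectl delete pod ubuntu
```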
Network Policy
Install the Tigera Calico Operator
{
  helm repo add projectcalico https://docs.projectcalico.org/charts
  helm repo update
  helm install calico projectcalico/tigera-operator --version v3.21.4
}
Verify the Calico Operator installation
kubectl get all -n tigera-operator
Verify the Calico CNI installation
kubectl get all -n calico-system
Check that all Calico components are working properly - if no error messages appear, assume they are healthy
{
  kubectl logs deploy/tigera-operator -n tigera-operator | grep ERROR
  kubectl logs ds/calico-node -n calico-system | grep ERROR
  kubectl logs deploy/calico-typha -n calico-system | grep ERROR
}
Create the resources
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: red
  labels:
    env: red
---
apiVersion: v1
kind: Pod
metadata:
  name: client
  namespace: red
  labels:
    role: client
spec:
  containers:
  - image: curlimages/curl
    name: client
    command: ["sleep", "3600"]
---
apiVersion: v1
kind: Namespace
metadata:
  name: blue
  labels:
    env: blue
---
apiVersion: v1
kind: Pod
metadata:
  name: client
  namespace: blue
  labels:
    role: client
spec:
  containers:
  - image: curlimages/curl
    name: client
    command: ["sleep", "3600"]
---
apiVersion: v1
kind: Namespace
metadata:
  name: web
  labels:
    env: web
---
apiVersion: v1
kind: Pod
metadata:
  name: server
  namespace: web
  labels:
    role: server
spec:
  containers:
  - image: httpd
    name: httpd
---
apiVersion: v1
kind: Pod
metadata:
  name: client
  namespace: web
  labels:
    role: client
spec:
  containers:
  - image: curlimages/curl
    name: client
    command: ["sleep", "3600"]
EOF
Check the IP address of the server Pod in the web Namespace
kubectl get pod server -o wide -n web
Call the server Pod in the web Namespace from the client Pod in the red Namespace
kubectl -n red exec client -- \
  curl -s $(kubectl get pod server -n web -o=jsonpath='{.status.podIP}')
Create Network Policies in the red and blue Namespaces
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: red
spec:
  podSelector: {}
  policyTypes:
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: blue
spec:
  podSelector: {}
  policyTypes:
  - Egress
EOF
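Note that a default-deny egress policy also blocks DNS lookups; this lab is unaffected because later steps call the server Pod by IP address. If Pods in these namespaces needed name resolution, you would additionally allow egress to the cluster DNS. A sketch for the red Namespace (the kube-system namespace label shown is the standard `kubernetes.io/metadata.name` label, but verify it matches your cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: red
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```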
Call the server Pod in the web Namespace from the client Pod in the red Namespace - with a 1 second timeout
kubectl -n red exec client -- \
  curl -m 1 -s $(kubectl get pod server -n web -o=jsonpath='{.status.podIP}')
Create a new Network Policy in the red Namespace
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-server
  namespace: red
spec:
  podSelector:
    matchLabels:
      role: client
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: $(kubectl get pod server -n web -o=jsonpath='{.status.podIP}')/32
    ports:
    - protocol: TCP
      port: 80
EOF
Check the Network Policy created above
kubectl -n red describe networkpolicy allow-egress-server
Call the server Pod in the web Namespace from the client Pod in the red Namespace - with a 1 second timeout
kubectl -n red exec client -- \
  curl -m 1 -s $(kubectl get pod server -n web -o=jsonpath='{.status.podIP}')
Call the server Pod in the web Namespace from the client Pod in the blue Namespace - with a 1 second timeout
kubectl -n blue exec client -- \
  curl -m 1 -s $(kubectl get pod server -n web -o=jsonpath='{.status.podIP}')
Create a new Network Policy in the blue Namespace
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-server
  namespace: blue
spec:
  podSelector:
    matchLabels:
      role: client
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          role: server
    ports:
    - protocol: TCP
      port: 80
EOF
Check the Network Policy created above
kubectl -n blue describe networkpolicy allow-egress-server
Call the server Pod in the web Namespace from the client Pod in the blue Namespace - with a 1 second timeout; this fails because a bare podSelector only matches Pods in the policy's own Namespace
kubectl -n blue exec client -- \
  curl -m 1 -s $(kubectl get pod server -n web -o=jsonpath='{.status.podIP}')
Modify the Network Policy created above
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-server
  namespace: blue
spec:
  podSelector:
    matchLabels:
      role: client
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          env: web
    ports:
    - protocol: TCP
      port: 80
EOF
Check the Network Policy modified above
kubectl -n blue describe networkpolicy allow-egress-server
Call the server Pod in the web Namespace from the client Pod in the blue Namespace - with a 1 second timeout
kubectl -n blue exec client -- \
  curl -m 1 -s $(kubectl get pod server -n web -o=jsonpath='{.status.podIP}')
Create a Network Policy in the web Namespace
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: web
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
EOF
Call the server Pod in the web Namespace from the client Pod in the blue Namespace - with a 1 second timeout
kubectl -n blue exec client -- \
  curl -m 1 -s $(kubectl get pod server -n web -o=jsonpath='{.status.podIP}')
Create a new Network Policy in the web Namespace - Egress policy
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
  namespace: web
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - {}
EOF
Create a new Network Policy in the web Namespace - Ingress policy; the namespaceSelector and podSelector appear in the same "from" entry, so both conditions must match
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-blue-client
  namespace: web
spec:
  podSelector:
    matchLabels:
      role: server
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          env: blue
      podSelector:
        matchLabels:
          role: client
EOF
Call the server Pod in the web Namespace from the client Pod in the blue Namespace - with a 1 second timeout
kubectl -n blue exec client -- \
  curl -m 1 -s $(kubectl get pod server -n web -o=jsonpath='{.status.podIP}')
Create a new Pod in the blue Namespace
kubectl -n blue run nginx --image=nginx
Call the server Pod in the web Namespace from the Pod created above - with a 1 second timeout; this fails because the nginx Pod does not carry the role=client label
kubectl -n blue exec nginx -- \
  curl -m 1 -s $(kubectl get pod server -n web -o=jsonpath='{.status.podIP}')
Delete the resources
kubectl delete ns red blue web
Uninstall Calico
helm uninstall calico
Verify that Calico has been removed
kubectl get ns tigera-operator calico-system
Connect to one of the Nodes via Session Manager
aws ssm start-session --target \
  $(kubectl get node -o jsonpath='{.items[0].spec.providerID}{"\n"}' | grep -oE "i-[a-z0-9]+")
Check whether the iptables rules created by the Calico CNI are still present
sudo iptables-save | grep -i cali
End the Session Manager session
exit
Terminate all Nodes - the managed node group will automatically provision new Nodes
aws ec2 terminate-instances --instance-ids \
  $(kubectl get node -o jsonpath='{.items[*].spec.providerID}' | grep -oE "i-[a-z0-9]+" | column)
If new Nodes cannot be created, refer to this document - https://github.com/projectcalico/calico/blob/master/calico/hack/remove-calico-policy/remove-policy.md
Check whether the new Nodes have been created
kubectl get node
Connect to one of the new Nodes via Session Manager
aws ssm start-session --target \
  $(kubectl get node -o jsonpath='{.items[0].spec.providerID}{"\n"}' | grep -oE "i-[a-z0-9]+")
Check whether the iptables rules created by the Calico CNI are still present
sudo iptables-save | grep -i cali
End the Session Manager session
exit