2023. 6. 20. 10:17ㆍDev/EKS
This post documents installing Karmada on AWS EKS, registering another EKS cluster as a member cluster, and testing the setup.
Of the two ways to register a member cluster with the Karmada control plane, push and pull, I used push mode.
Outline of this post
Create the host cluster -> create a bastion -> install Karmada -> create member clusters 1 and 2 -> test a deployment
1. Building a test environment for the Karmada installation
You need eksctl and kubectl.
I used eksctl 0.144.0 and kubectl v1.27.2.
Create an EKS cluster locally with eksctl.
eksctl create cluster -f cluster.yaml
The cluster is created in a new VPC. (Takes about 20 minutes.)
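The cluster.yaml itself is not shown in this post. A minimal eksctl config along the following lines matches what appears later (cluster name host, region ap-northeast-2, version 1.27, three nodes); the node group details are my assumptions:

```yaml
# Reconstructed host cluster config -- node group settings are assumptions,
# but the name "host" and the 3-node count match the rest of this post
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: host
  region: ap-northeast-2
  version: "1.27"
managedNodeGroups:
  - name: ng-1
    instanceType: t3.medium
    desiredCapacity: 3
```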
After the cluster is up, create an EC2 instance to use as a bastion.
The bastion is needed because the karmada-apiserver address defaults to an EKS node IP.
To change the apiserver address, it appears you can use the --host-cluster-domain option when installing with kubectl-karmada,
or adjust clusterDomain and the apiServer sub-parameters when installing with Helm.
(I have not tried this myself.)
Create an EC2 instance in the AWS Console.
Choose the Amazon Linux 2023 AMI and t3.small, register a key pair,
then click Edit in the network settings.
Instead of the default VPC, pick a public subnet in the VPC created above.
Select the existing security group and also add the default security group.
Launch the instance.
In the EC2 console, click the instance -> Actions -> Security -> Modify IAM role.
For this test I attached a role with the ADMIN policy.
Click Connect on the instance and log in to the bastion with Session Manager. (Any other access method works too.)
sh-5.2$ aws sts get-caller-identity
{
"UserId": "AROASZKW6LDQ23B55PMGR:i-07dbbdc322cb875bd",
"Account": "191845259489",
"Arn": "arn:aws:sts::191845259489:assumed-role/ADMIN_ROLE/i-07dbbdc322cb875bd"
}
On the bastion, verify that the role was applied correctly. (If you attached the IAM role but it does not take effect, try restarting the instance.)
There are several ways to install Karmada; I used kubectl-karmada.
Install kubectl and kubectl-karmada on the bastion.
# install kubectl
sudo curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# install kubectl-karmada
sudo curl -s https://raw.githubusercontent.com/karmada-io/karmada/master/hack/install-cli.sh | sudo bash -s kubectl-karmada
Check the cluster name with the aws eks list-clusters command.
aws eks update-kubeconfig --name <clusterName>
kubectl get no
NAME STATUS ROLES AGE VERSION
ip-192-168-21-33.ap-northeast-2.compute.internal Ready <none> 3h35m v1.27.1-eks-2f008fe
ip-192-168-33-17.ap-northeast-2.compute.internal Ready <none> 70m v1.27.1-eks-2f008fe
ip-192-168-88-226.ap-northeast-2.compute.internal Ready <none> 3h35m v1.27.1-eks-2f008fe
After running update-kubeconfig, confirm that kubectl works.
If the command fails,
add the bastion's role to the cluster's aws-auth ConfigMap from the machine where you created the cluster.
Option 1. Run kubectl edit cm aws-auth -n kube-system
and add a userarn or rolearn as shown below.
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::111122223333:role/my-role
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
      - system:masters
      rolearn: arn:aws:iam::111122223333:role/my-ec2-role
      username: my-ec2-role
  mapUsers: |
    - groups:
      - system:masters
      userarn: arn:aws:iam::111122223333:user/admin
      username: admin
Option 2. Modify it with an eksctl command.
eksctl create iamidentitymapping --cluster host --region=ap-northeast-2 --arn arn:aws:iam::191845259489:role/ADMIN_ROLE --group system:masters --username admin
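To confirm the mapping was added, eksctl can list the identity mappings (using the same cluster name and region as above):

```shell
# list aws-auth identity mappings for the cluster
eksctl get iamidentitymapping --cluster host --region ap-northeast-2
```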
Once kubectl works, create a dedicated kubeconfig file for the host cluster.
aws eks update-kubeconfig --name <clusterName> --kubeconfig host.config
Pass the cluster name to aws eks update-kubeconfig along with the --kubeconfig option.
Edit the security groups' inbound rules
Add default-sg to the inbound rules of eks-cluster-sg-host-3208616 (allows bastion -> node)
Add eks-cluster-sg-host-3208616 to the inbound rules of default-sg (allows node -> bastion)
If you skip this, the Karmada installation fails with
deploy.go:57] unable to create Namespace: Post "https://192.168.xx.xx:32443/api/v1/namespaces": dial tcp 192.168.xx.xx:32443: i/o timeout
(This cost me several days, and it is also why I work from the bastion.)
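For reference, the same two inbound rules can also be added with the AWS CLI. This is a sketch I have not run as part of this post; the sg-... IDs are placeholders for your node security group and the bastion's default security group:

```shell
NODE_SG=sg-0aaaaaaaaaaaaaaaa     # placeholder: eks-cluster-sg-host-...
BASTION_SG=sg-0bbbbbbbbbbbbbbb   # placeholder: default SG attached to the bastion

# bastion -> node: allow the karmada-apiserver NodePort from the bastion's SG
aws ec2 authorize-security-group-ingress --group-id "$NODE_SG" \
  --protocol tcp --port 32443 --source-group "$BASTION_SG"

# node -> bastion: allow all traffic from the node SG
aws ec2 authorize-security-group-ingress --group-id "$BASTION_SG" \
  --ip-permissions "IpProtocol=-1,UserIdGroupPairs=[{GroupId=$NODE_SG}]"
```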
When I first ran init without those rules, it failed like this:
sh-5.2$ sudo kubectl-karmada init --kubeconfig host.config
I0619 05:12:48.768532 3376 deploy.go:177] kubeconfig file: host.config, kubernetes: https://D57722CF84AF524ED9DCFA93BCF9BCA0.gr7.ap-northeast-2.eks.amazonaws.com
W0619 05:12:49.496234 3376 node.go:36] The kubernetes cluster does not have a Master role.
I0619 05:12:49.496266 3376 node.go:44] Randomly select 3 Node IPs in the kubernetes cluster.
I0619 05:12:49.504330 3376 deploy.go:197] karmada apiserver ip: [192.168.21.33 192.168.60.236 192.168.88.226]
I0619 05:12:49.869877 3376 cert.go:229] Generate ca certificate success.
I0619 05:12:50.133001 3376 cert.go:229] Generate karmada certificate success.
I0619 05:12:50.230871 3376 cert.go:229] Generate apiserver certificate success.
I0619 05:12:50.310357 3376 cert.go:229] Generate front-proxy-ca certificate success.
I0619 05:12:50.504339 3376 cert.go:229] Generate front-proxy-client certificate success.
I0619 05:12:50.617605 3376 cert.go:229] Generate etcd-ca certificate success.
I0619 05:12:50.880249 3376 cert.go:229] Generate etcd-server certificate success.
I0619 05:12:51.169638 3376 cert.go:229] Generate etcd-client certificate success.
I0619 05:12:51.169979 3376 deploy.go:291] download crds file:https://github.com/karmada-io/karmada/releases/download/v1.6.0/crds.tar.gz
Downloading...[ 100.00% ]
Download complete.
I0619 05:12:51.676690 3376 deploy.go:531] Create karmada kubeconfig success.
I0619 05:12:51.700846 3376 idempotency.go:251] Namespace karmada-system has been created or updated.
I0619 05:12:51.750776 3376 idempotency.go:275] Service karmada-system/etcd has been created or updated.
I0619 05:12:51.750799 3376 deploy.go:359] Create etcd StatefulSets
I0619 05:12:54.782917 3376 deploy.go:367] Create karmada ApiServer Deployment
I0619 05:12:54.804184 3376 idempotency.go:275] Service karmada-system/karmada-apiserver has been created or updated.
I0619 05:13:26.827781 3376 deploy.go:382] Create karmada aggregated apiserver Deployment
I0619 05:13:26.843884 3376 idempotency.go:275] Service karmada-system/karmada-aggregated-apiserver has been created or updated.
F0619 05:13:59.875232 3376 deploy.go:57] unable to create Namespace: Post "https://192.168.21.33:32443/api/v1/namespaces": dial tcp 192.168.21.33:32443: i/o timeout
2. Karmada init
Run the kubectl-karmada init command, pointing the --kubeconfig option at the host cluster's kubeconfig.
sh-5.2$ sudo kubectl-karmada init --kubeconfig host.config
I0619 05:16:09.616448 3595 deploy.go:177] kubeconfig file: host.config, kubernetes: https://D57722CF84AF524ED9DCFA93BCF9BCA0.gr7.ap-northeast-2.eks.amazonaws.com
W0619 05:16:10.746921 3595 node.go:36] The kubernetes cluster does not have a Master role.
I0619 05:16:10.746943 3595 node.go:44] Randomly select 3 Node IPs in the kubernetes cluster.
I0619 05:16:10.770588 3595 deploy.go:197] karmada apiserver ip: [192.168.21.33 192.168.60.236 192.168.88.226]
I0619 05:16:11.225582 3595 cert.go:229] Generate ca certificate success.
I0619 05:16:11.918380 3595 cert.go:229] Generate karmada certificate success.
I0619 05:16:12.109319 3595 cert.go:229] Generate apiserver certificate success.
I0619 05:16:12.284519 3595 cert.go:229] Generate front-proxy-ca certificate success.
I0619 05:16:12.484823 3595 cert.go:229] Generate front-proxy-client certificate success.
I0619 05:16:12.701778 3595 cert.go:229] Generate etcd-ca certificate success.
I0619 05:16:12.811062 3595 cert.go:229] Generate etcd-server certificate success.
I0619 05:16:13.055710 3595 cert.go:229] Generate etcd-client certificate success.
I0619 05:16:13.055982 3595 deploy.go:291] download crds file:https://github.com/karmada-io/karmada/releases/download/v1.6.0/crds.tar.gz
Downloading...[ 100.00% ]
Download complete.
I0619 05:16:13.544657 3595 deploy.go:531] Create karmada kubeconfig success.
I0619 05:16:13.566301 3595 idempotency.go:251] Namespace karmada-system has been created or updated.
I0619 05:16:13.611384 3595 idempotency.go:275] Service karmada-system/etcd has been created or updated.
I0619 05:16:13.611407 3595 deploy.go:359] Create etcd StatefulSets
I0619 05:16:17.637336 3595 deploy.go:367] Create karmada ApiServer Deployment
I0619 05:16:17.657911 3595 idempotency.go:275] Service karmada-system/karmada-apiserver has been created or updated.
I0619 05:16:48.679596 3595 deploy.go:382] Create karmada aggregated apiserver Deployment
I0619 05:16:48.697209 3595 idempotency.go:275] Service karmada-system/karmada-aggregated-apiserver has been created or updated.
I0619 05:16:51.734918 3595 idempotency.go:251] Namespace karmada-system has been created or updated.
I0619 05:16:51.735179 3595 deploy.go:68] Initialize karmada bases crd resource `/etc/karmada/crds/bases`
I0619 05:16:51.746242 3595 deploy.go:213] Attempting to create CRD
I0619 05:16:51.772649 3595 deploy.go:223] Create CRD federatedhpas.autoscaling.karmada.io successfully.
I0619 05:16:51.776084 3595 deploy.go:213] Attempting to create CRD
I0619 05:16:51.788270 3595 deploy.go:223] Create CRD resourceinterpretercustomizations.config.karmada.io successfully.
I0619 05:16:51.789906 3595 deploy.go:213] Attempting to create CRD
I0619 05:16:51.802389 3595 deploy.go:223] Create CRD resourceinterpreterwebhookconfigurations.config.karmada.io successfully.
I0619 05:16:51.803863 3595 deploy.go:213] Attempting to create CRD
I0619 05:16:51.812613 3595 deploy.go:223] Create CRD serviceexports.multicluster.x-k8s.io successfully.
I0619 05:16:51.816014 3595 deploy.go:213] Attempting to create CRD
I0619 05:16:51.847957 3595 deploy.go:223] Create CRD serviceimports.multicluster.x-k8s.io successfully.
I0619 05:16:51.850022 3595 deploy.go:213] Attempting to create CRD
I0619 05:16:51.868652 3595 deploy.go:223] Create CRD multiclusteringresses.networking.karmada.io successfully.
I0619 05:16:51.877951 3595 deploy.go:213] Attempting to create CRD
I0619 05:16:51.904307 3595 deploy.go:223] Create CRD clusteroverridepolicies.policy.karmada.io successfully.
I0619 05:16:51.913957 3595 deploy.go:213] Attempting to create CRD
I0619 05:16:51.985232 3595 deploy.go:223] Create CRD clusterpropagationpolicies.policy.karmada.io successfully.
I0619 05:16:51.986410 3595 deploy.go:213] Attempting to create CRD
I0619 05:16:52.012465 3595 deploy.go:223] Create CRD federatedresourcequotas.policy.karmada.io successfully.
I0619 05:16:52.017960 3595 deploy.go:213] Attempting to create CRD
I0619 05:16:52.050306 3595 deploy.go:223] Create CRD overridepolicies.policy.karmada.io successfully.
I0619 05:16:52.054447 3595 deploy.go:213] Attempting to create CRD
I0619 05:16:52.076661 3595 deploy.go:223] Create CRD propagationpolicies.policy.karmada.io successfully.
I0619 05:16:52.083164 3595 deploy.go:213] Attempting to create CRD
I0619 05:16:52.222149 3595 deploy.go:223] Create CRD clusterresourcebindings.work.karmada.io successfully.
I0619 05:16:52.228872 3595 deploy.go:213] Attempting to create CRD
I0619 05:16:52.389163 3595 deploy.go:223] Create CRD resourcebindings.work.karmada.io successfully.
I0619 05:16:52.390352 3595 deploy.go:213] Attempting to create CRD
I0619 05:16:52.557219 3595 deploy.go:223] Create CRD works.work.karmada.io successfully.
I0619 05:16:52.557344 3595 deploy.go:79] Initialize karmada patches crd resource `/etc/karmada/crds/patches`
I0619 05:16:52.978223 3595 deploy.go:91] Create MutatingWebhookConfiguration mutating-config.
I0619 05:16:52.992650 3595 webhook_configuration.go:273] MutatingWebhookConfiguration mutating-config has been created or updated successfully.
I0619 05:16:52.992679 3595 deploy.go:96] Create ValidatingWebhookConfiguration validating-config.
I0619 05:16:53.004367 3595 webhook_configuration.go:244] ValidatingWebhookConfiguration validating-config has been created or updated successfully.
I0619 05:16:53.004489 3595 deploy.go:102] Create Service 'karmada-aggregated-apiserver' and APIService 'v1alpha1.cluster.karmada.io'.
I0619 05:16:53.010216 3595 idempotency.go:275] Service karmada-system/karmada-aggregated-apiserver has been created or updated.
I0619 05:16:53.034918 3595 check.go:26] Waiting for APIService(v1alpha1.cluster.karmada.io) condition(Available), will try
I0619 05:16:54.086943 3595 tlsbootstrap.go:33] [bootstrap-token] configured RBAC rules to allow Karmada Agent Bootstrap tokens to post CSRs in order for agent to get long term certificate credentials
I0619 05:16:54.091521 3595 tlsbootstrap.go:47] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Karmada Agent Bootstrap Token
I0619 05:16:54.096897 3595 tlsbootstrap.go:61] [bootstrap-token] configured RBAC rules to allow certificate rotation for all agent client certificates in the member cluster
I0619 05:16:54.102786 3595 deploy.go:126] Initialize karmada bootstrap token
I0619 05:16:54.154865 3595 deploy.go:400] Create karmada kube controller manager Deployment
I0619 05:16:54.169864 3595 idempotency.go:275] Service karmada-system/kube-controller-manager has been created or updated.
I0619 05:17:00.191713 3595 deploy.go:414] Create karmada scheduler Deployment
I0619 05:17:08.217336 3595 deploy.go:425] Create karmada controller manager Deployment
I0619 05:17:17.236016 3595 deploy.go:436] Create karmada webhook Deployment
I0619 05:17:17.252286 3595 idempotency.go:275] Service karmada-system/karmada-webhook has been created or updated.
------------------------------------------------------------------------------------------------------
█████ ████ █████████ ███████████ ██████ ██████ █████████ ██████████ █████████
░░███ ███░ ███░░░░░███ ░░███░░░░░███ ░░██████ ██████ ███░░░░░███ ░░███░░░░███ ███░░░░░███
░███ ███ ░███ ░███ ░███ ░███ ░███░█████░███ ░███ ░███ ░███ ░░███ ░███ ░███
░███████ ░███████████ ░██████████ ░███░░███ ░███ ░███████████ ░███ ░███ ░███████████
░███░░███ ░███░░░░░███ ░███░░░░░███ ░███ ░░░ ░███ ░███░░░░░███ ░███ ░███ ░███░░░░░███
░███ ░░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ███ ░███ ░███
█████ ░░████ █████ █████ █████ █████ █████ █████ █████ █████ ██████████ █████ █████
░░░░░ ░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░░░░░░ ░░░░░ ░░░░░
------------------------------------------------------------------------------------------------------
Karmada is installed successfully.
Register Kubernetes cluster to Karmada control plane.
Register cluster with 'Push' mode
Step 1: Use "kubectl karmada join" command to register the cluster to Karmada control plane. --cluster-kubeconfig is kubeconfig of the member cluster.
(In karmada)~# MEMBER_CLUSTER_NAME=$(cat ~/.kube/config | grep current-context | sed 's/: /\n/g'| sed '1d')
(In karmada)~# kubectl karmada --kubeconfig /etc/karmada/karmada-apiserver.config join ${MEMBER_CLUSTER_NAME} --cluster-kubeconfig=$HOME/.kube/config
Step 2: Show members of karmada
(In karmada)~# kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get clusters
Register cluster with 'Pull' mode
Step 1: Use "kubectl karmada register" command to register the cluster to Karmada control plane. "--cluster-name" is set to cluster of current-context by default.
(In member cluster)~# kubectl karmada register 192.168.21.33:32443 --token j4kaad.4n48f0808nfqnlli --discovery-token-ca-cert-hash sha256:1c319742cbf1e1275a9e60eea277e4a87beaac72f03cdb02392ae30627fcd018
Step 2: Show members of karmada
(In karmada)~# kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get clusters
sh-5.2$
Karmada is now installed.
The output explains the two ways to register member clusters: Push and Pull.
The karmada-system namespace on the host cluster looks like this:
sh-5.2$ k get all -n karmada-system
NAME READY STATUS RESTARTS AGE
pod/etcd-0 1/1 Running 0 3m48s
pod/karmada-aggregated-apiserver-b9d477d6b-548f8 1/1 Running 0 3m13s
pod/karmada-apiserver-58f66d6f76-xv6wg 1/1 Running 0 3m44s
pod/karmada-controller-manager-7c64888848-8rhjb 1/1 Running 0 2m53s
pod/karmada-scheduler-7dbf87497d-7sh4b 1/1 Running 0 3m1s
pod/karmada-webhook-5bc7545c7c-x7s69 1/1 Running 0 2m44s
pod/kube-controller-manager-744ccbbbd7-4kvhs 1/1 Running 0 3m7s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/etcd ClusterIP None <none> 2379/TCP,2380/TCP 3m48s
service/karmada-aggregated-apiserver ClusterIP 10.100.153.43 <none> 443/TCP 3m13s
service/karmada-apiserver NodePort 10.100.223.13 <none> 5443:32443/TCP 3m44s
service/karmada-webhook ClusterIP 10.100.8.222 <none> 443/TCP 2m44s
service/kube-controller-manager ClusterIP 10.100.151.105 <none> 10257/TCP 3m7s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/karmada-aggregated-apiserver 1/1 1 1 3m13s
deployment.apps/karmada-apiserver 1/1 1 1 3m44s
deployment.apps/karmada-controller-manager 1/1 1 1 2m53s
deployment.apps/karmada-scheduler 1/1 1 1 3m1s
deployment.apps/karmada-webhook 1/1 1 1 2m44s
deployment.apps/kube-controller-manager 1/1 1 1 3m7s
NAME DESIRED CURRENT READY AGE
replicaset.apps/karmada-aggregated-apiserver-b9d477d6b 1 1 1 3m13s
replicaset.apps/karmada-apiserver-58f66d6f76 1 1 1 3m44s
replicaset.apps/karmada-controller-manager-7c64888848 1 1 1 2m53s
replicaset.apps/karmada-scheduler-7dbf87497d 1 1 1 3m1s
replicaset.apps/karmada-webhook-5bc7545c7c 1 1 1 2m44s
replicaset.apps/kube-controller-manager-744ccbbbd7 1 1 1 3m7s
NAME READY AGE
statefulset.apps/etcd 1/1 3m48s
Important: the following files are created under /etc/karmada.
sh-5.2$ ls /etc/karmada/
crds crds.tar.gz karmada-agent.yaml karmada-apiserver.config karmada-scheduler-estimator.yaml pki
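Since /etc/karmada/karmada-apiserver.config is referenced in almost every command below, it can be convenient to point KUBECONFIG at it for the current shell session. This is just an ergonomic shortcut, not required:

```shell
# Point this shell at the karmada control plane so --kubeconfig can be omitted
export KUBECONFIG=/etc/karmada/karmada-apiserver.config
```

Afterwards, a plain kubectl get clusters talks to the karmada-apiserver; unset KUBECONFIG (or open a new shell) to go back to the host cluster.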
Once member clusters are joined, you can list them from the karmada-apiserver as follows.
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get clusters
NAME VERSION MODE READY AGE
cluster1 v1.27.2-eks-c12679a Push True 14m
cluster2 v1.27.2-eks-c12679a Push True 18s
3. Joining member clusters
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: cluster1
  region: ap-northeast-2
  version: "1.27"
iamIdentityMappings:
  - arn: arn:aws:iam::000000000000:role/myAdminRole
    groups:
      - system:masters
    username: admin
    noDuplicateARNs: true # prevents shadowing of ARNs
  - arn: arn:aws:iam::000000000000:user/myUser
    groups:
      - system:masters
    username: myUser
    noDuplicateARNs: true # prevents shadowing of ARNs
managedNodeGroups:
  - name: ng-spot
    instanceTypes: ["t3.small", "t3.medium"]
    spot: true
In the YAML above, add the IAM user or role that the bastion uses, then create the cluster.
eksctl create cluster -f cluster1.yaml
(Takes about 15 minutes.)
On the bastion, create a cluster1.config file.
aws eks update-kubeconfig --name cluster1 --kubeconfig cluster1.config
Register the member cluster in push mode with the kubectl-karmada join command.
sudo kubectl-karmada --kubeconfig /etc/karmada/karmada-apiserver.config join cluster1 --cluster-kubeconfig=cluster1.config
Create another cluster, join it the same way, and check the result:
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get clusters
NAME VERSION MODE READY AGE
cluster1 v1.27.2-eks-c12679a Push True 14m
cluster2 v1.27.2-eks-c12679a Push True 18s
4. Deployment
Deploy a Deployment to the karmada-apiserver.
1) After deploying it, try to predict the output of the commands below.
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
EOF
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get deploy
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get po
kubectl --kubeconfig ~/.kube/cluster1.config get po
kubectl --kubeconfig ~/.kube/cluster1.config get deploy
Result
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 0/2 0 0 89s
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get po
No resources found in default namespace.
kubectl --kubeconfig ~/.kube/cluster1.config get po
No resources found in default namespace.
kubectl --kubeconfig ~/.kube/cluster1.config get deploy
No resources found in default namespace.
2) After deploying a PropagationPolicy, predict the output of the commands below.
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config apply -f - <<EOF
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:
        - cluster1
        - cluster2
EOF
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get deploy
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get po
kubectl --kubeconfig ~/.kube/cluster1.config get po
kubectl --kubeconfig ~/.kube/cluster1.config get deploy
Result
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 4/2 4 4 2m21s
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get po
No resources found in default namespace.
kubectl --kubeconfig cluster1.config get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 2/2 2 2 5m21s
kubectl --kubeconfig cluster1.config get po
NAME READY STATUS RESTARTS AGE
nginx-77b4fdf86c-ffq8x 1/1 Running 0 5m26s
nginx-77b4fdf86c-pgbxt 1/1 Running 0 5m26s
3) After deploying a PropagationPolicy with replicaScheduling added, predict the output of the commands below.
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config apply -f - <<EOF
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:
        - cluster1
        - cluster2
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - cluster1
            weight: 1
          - targetCluster:
              clusterNames:
                - cluster2
            weight: 1
EOF
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get deploy
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get po
kubectl --kubeconfig ~/.kube/cluster1.config get deploy
kubectl --kubeconfig ~/.kube/cluster1.config get po
Result
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 2/2 2 2 8m42s
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get po
No resources found in default namespace.
kubectl --kubeconfig ~/.kube/cluster1.config get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 6m48s
kubectl --kubeconfig ~/.kube/cluster1.config get po
NAME READY STATUS RESTARTS AGE
nginx-77b4fdf86c-pgbxt 1/1 Running 0 6m44s
5. Wrap-up
One of Karmada's headline features is Kubernetes-native API compatibility.
Even just installing it and connecting multiple clusters makes that Kubernetes-native character obvious.
Among multi-cluster management solutions it does not seem widely known yet, but having tried it, I think it is well built.
It feels like it can absorb all of Kubernetes' strengths.
The prediction exercises above were hard to get right before actually running them.
Reading the concepts section of the docs, I did not really grasp why a PropagationPolicy is needed or what it does,
but it became clear through this hands-on test.
This post covers only the most basic Karmada cluster join.
I found the official docs alone hard to follow, so I am recording the process here.
The official docs also include many other examples (CRD propagation, FederatedHPA, failover, scheduling, Istio integration, Argo CD integration, etc.)
that seem worth testing and studying.