Setting Up a K8s Cluster with kubeadm

Preface

This post documents how to set up a K8s cluster with kubeadm,
using GCP VMs as the example machines.

GCP Machine Setup

One control plane machine
Three worker node machines

Configure the VPC & Firewall

# Create the VPC
gcloud compute networks create example-k8s --subnet-mode custom

# Create the VPC subnet
gcloud compute networks subnets create k8s-nodes \
  --network example-k8s \
  --range 10.240.0.0/24 \
  --region asia-east1

# Firewall rule: allow all internal traffic between machines in the subnet
gcloud compute firewall-rules create example-k8s-allow-internal \
  --allow tcp,udp,icmp,ipip \
  --network example-k8s \
  --source-ranges 10.240.0.0/24

# Firewall rule: allow external SSH, Kubernetes API server (6443), and ICMP
gcloud compute firewall-rules create example-k8s-allow-external \
  --allow tcp:22,tcp:6443,icmp \
  --network example-k8s \
  --source-ranges 0.0.0.0/0
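
To confirm both rules were created as expected, the firewall rules on the VPC can be listed (a quick sanity check, assuming the default gcloud project is already set):

# List the firewall rules on the example-k8s VPC
gcloud compute firewall-rules list --filter="network:example-k8s"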

Set Up the VMs

# Create 1 control plane machine
gcloud compute instances create controller \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1804-lts \
    --image-project ubuntu-os-cloud \
    --machine-type e2-standard-2 \
    --private-network-ip 10.240.0.11 \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet k8s-nodes \
    --zone asia-east1-a \
    --tags example-k8s,controller
    
# Create 3 worker node machines
for i in 0 1 2; do
  gcloud compute instances create worker-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1804-lts \
    --image-project ubuntu-os-cloud \
    --machine-type e2-standard-2 \
    --private-network-ip 10.240.0.2${i} \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet k8s-nodes \
    --zone asia-east1-a \
    --tags example-k8s,worker
done
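
Because the instances are created with --async, they may take a moment to appear; the following check (assuming the same project is still active) lists them once provisioning finishes:

# List all instances tagged example-k8s
gcloud compute instances list --filter="tags.items=example-k8s"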

Current Test Machine Configuration

controller, worker-0, worker-1, and worker-2 all share the same configuration:
OS: Ubuntu 18.04 LTS
CPU: 2 vCPU
Memory: 8 GB
Disk Size: 200 GB

10.240.0.11 | controller
10.240.0.20 | worker-0
10.240.0.21 | worker-1
10.240.0.22 | worker-2

Set Up kubeadm

Basic setup on controller, worker-0, worker-1, and worker-2

# Switch to root
sudo su

apt-get update

# Configure /etc/hosts and add the following entries
vim /etc/hosts
10.240.0.11 controller
10.240.0.20 worker-0
10.240.0.21 worker-1
10.240.0.22 worker-2
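
Alternatively, the same entries can be appended non-interactively with a heredoc (matching the cat <<EOF style used later in this post); this assumes the entries are not already present:

cat <<EOF >>/etc/hosts
10.240.0.11 controller
10.240.0.20 worker-0
10.240.0.21 worker-1
10.240.0.22 worker-2
EOF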

# Disable swap
# First check whether swap is enabled
swapon --show

# If it is enabled, disable it
swapoff -a

# Comment out the /swapfile line (prefix it with '#')
vim /etc/fstab
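
If you prefer not to edit /etc/fstab by hand, a sed one-liner can comment the entry out instead (this assumes the swap line starts with /swapfile, as it does on these Ubuntu images):

# Comment out the /swapfile entry in /etc/fstab
sed -i '/^\/swapfile/ s/^/#/' /etc/fstab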

# Install docker
apt-get install -y docker.io

# Set the docker cgroup driver to systemd
cat <<EOF >/etc/docker/daemon.json
{
   "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
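
To verify that docker picked up the systemd cgroup driver after the restart:

docker info | grep -i 'cgroup driver'
# Should print: Cgroup Driver: systemd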

# Install Kubernetes prerequisites
apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

# Add the Kubernetes repository file
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

# Install kubelet, kubeadm, and kubectl
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
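
apt-mark hold pins these three packages so a routine apt-get upgrade cannot bump them out of sync with the cluster. To confirm the hold took effect:

apt-mark showhold
# Prints the held packages: kubeadm, kubectl, kubelet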

Note (edited 2024/05/04):
apt.kubernetes.io was deprecated on 2024/03/04 and replaced by pkgs.k8s.io.
Reference: https://kubernetes.io/blog/2023/08/31/legacy-package-repository-deprecation/

# The "Install Kubernetes prerequisites" and "Add the Kubernetes repository file" steps become
# (reordered so the keyring exists before it is referenced, and mkdir -p so the step is idempotent):
sudo mkdir -p /etc/apt/keyrings/
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install kubelet, kubeadm, and kubectl
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

Controller Setup

# Initialize the cluster
# If using the Calico CNI, pod-network-cidr = 192.168.0.0/16
# If using the Flannel CNI, pod-network-cidr = 10.244.0.0/16
kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version v1.23.6


# Configure KUBECONFIG
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
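
At this point kubectl should be able to reach the API server; a quick sanity check before installing the CNI:

kubectl cluster-info
kubectl get nodes
# Note: the controller will report NotReady until a CNI is installed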

# Install the Calico CNI
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
# Check that the system pods are running
kubectl get pods -n kube-system

# Output
calico-kube-controllers-7c845d499-fgfvq   1/1     Running   0          15m
calico-node-nml95                         1/1     Running   0          15m
calico-node-zfqln                         1/1     Running   0          11m
coredns-64897985d-6dl9c                   1/1     Running   0          15m
coredns-64897985d-96mnh                   1/1     Running   0          15m
etcd-controller                           1/1     Running   0          16m
kube-apiserver-controller                 1/1     Running   0          15m
kube-controller-manager-controller        1/1     Running   0          15m
kube-proxy-fz98g                          1/1     Running   0          11m
kube-proxy-z6t7r                          1/1     Running   0          15m
kube-scheduler-controller                 1/1     Running   0          15m

After kubeadm init finishes, paste the last two lines of its output into worker-0, worker-1, and worker-2 to join them to the K8s cluster.

# Example
kubeadm join 10.240.0.11:6443 --token 9u3q0q.gp2u4pr6b0w3he8d \
        --discovery-token-ca-cert-hash sha256:13b46bbff3dc7d2a4278dfd34eb5ab52d12fb83723e4cd3fde9b2c9ddebdb354

The token is valid for 24 hours; if it has expired, a new one can be created:

kubeadm token create

# Output
9pepda.382pmqirq0ccxuxt

If the discovery-token-ca-cert-hash has been lost, it can also be regenerated:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

# Output
13b46bbff3dc7d2a4278dfd34eb5ab52d12fb83723e4cd3fde9b2c9ddebdb354
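
As an alternative to assembling the token and hash by hand, kubeadm can print a complete, ready-to-paste join command in one step:

kubeadm token create --print-join-command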

Check whether the worker nodes joined the K8s cluster successfully:

kubectl get node

# Output
NAME         STATUS   ROLES                  AGE     VERSION
controller   Ready    control-plane,master   14m     v1.23.6
worker-0     Ready    <none>                 5m56s   v1.23.6
worker-1     Ready    <none>                 5m53s   v1.23.6
worker-2     Ready    <none>                 22s     v1.23.6

Deploy an Nginx Service for Testing

Back on the controller machine, run:

kubectl run test-nginx --image=nginx --port=80

kubectl expose pod/test-nginx --port=1000 --target-port=80 --type=NodePort
kubectl get svc -o wide

# Output
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE    SELECTOR
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          22m    <none>
test-nginx   NodePort    10.99.151.133   <none>        1000:30456/TCP   7m5s   run=test-nginx

Test the NodePort on each worker node (using && so the requests run sequentially rather than in the background):

curl -I http://10.240.0.20:30456 && curl -I http://10.240.0.21:30456 && curl -I http://10.240.0.22:30456

# Output
HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Thu, 28 Apr 2022 14:29:49 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: "61f01158-267"
Accept-Ranges: bytes

HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Thu, 28 Apr 2022 14:29:49 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: "61f01158-267"
Accept-Ranges: bytes

HTTP/1.1 200 OK
Server: nginx/1.21.6
Date: Thu, 28 Apr 2022 14:29:49 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 25 Jan 2022 15:03:52 GMT
Connection: keep-alive
ETag: "61f01158-267"
Accept-Ranges: bytes
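
To clean up the test resources afterwards:

kubectl delete svc test-nginx
kubectl delete pod test-nginx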

Delete a Node

Delete worker-2 as a test.

On the controller:

# --delete-emptydir-data replaces the deprecated --delete-local-data flag
kubectl drain worker-2 --delete-emptydir-data --force --ignore-daemonsets
kubectl delete node worker-2

Reset the kubeadm installation state on worker-2.

On worker-2:

kubeadm reset
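
Note that kubeadm reset does not clean up CNI configuration or iptables rules on its own (its output says as much); if the node will be re-joined later, remove them manually, roughly:

# Remove CNI configuration left behind by the CNI plugin
rm -rf /etc/cni/net.d

# Flush iptables rules created by kube-proxy
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X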

Addendum 2024-05-12

When only a single master node has been created with no worker nodes, the master node is not used for workload scheduling by default (so deployments, pods, etc. created there cannot run on it).
Pods created in this state stay Pending:

kubectl describe pod test-nginx

...(omitted)
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  11m (x2 over 16m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..

The cause is related to K8s taints; see [Kubernetes] Taints and Tolerations - 小信豬的原始部落 for an explanation.
taint: a mechanism for keeping pods from being scheduled onto a given node

Check the node taint:
kubectl describe nodes test-master | grep -E '(Roles|Taints)'

# Output
Roles:              control-plane
Taints:             node-role.kubernetes.io/control-plane:NoSchedule

# Remove the node-role.kubernetes.io/control-plane taint from the master node
# The trailing '-' means remove
kubectl taint nodes test-master node-role.kubernetes.io/control-plane-

# Output
node/test-master untainted

# Check the node taint again
kubectl describe nodes test-master | grep -E '(Roles|Taints)'

# Output (the node will now be considered for workload scheduling)
Roles:              control-plane
Taints:             <none>

# Restore (re-apply the taint)
kubectl taint nodes test-master node-role.kubernetes.io/control-plane=:NoSchedule

# Output
node/test-master tainted
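
Alternatively, if you want to keep the control-plane taint in place, a specific pod can declare a matching toleration instead; a minimal sketch (the pod name toleration-demo is made up for illustration):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo
spec:
  # Tolerate the control-plane NoSchedule taint so this pod may land on the master
  tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
  containers:
  - name: nginx
    image: nginx
EOF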

Please credit the source when reposting. If you spot any errors or unclear explanations, feel free to leave a comment below, or email leozheng0621@gmail.com.
If this article helped you, donations are welcome; buy me a coffee.
