
Deploying a Kubernetes v1.28.3 Cluster on Anolis OS 8.8 | Rocky Linux 9.3 | AlmaLinux 9.3

🕒 4 min read · ✍️ 加文

The deployment steps in this article apply to CentOS-derived distributions: Anolis OS 8.8 (kernel 5.10), Rocky Linux 9.3 (kernel 5.14), and AlmaLinux 9.3 (kernel 5.14).

System Environment

1. Deployment environment and k8s component versions

  • OS: Anolis OS release 8.8, kernel: Linux 5.10.134-13.an8.x86_64
  • crictl v1.27.0
  • containerd v1.7.8
  • runc v1.1.9
  • flannel v0.23.0
  • etcd v3.5.9
  • kubeadm / kubectl / kubelet v1.28.3
  • kube-apiserver / kube-scheduler / kube-controller-manager v1.28.3
  • kubernetes-dashboard v2.7.0

2. Node IP addresses and component layout

Node   IP Address      Pod CIDR       Service CIDR   Role
c1     192.168.31.31   10.15.0.0/16   10.16.0.0/16   Cluster master node
c2     192.168.31.32   10.15.0.0/16   10.16.0.0/16   Cluster worker node
c3     192.168.31.33   10.15.0.0/16   10.16.0.0/16   Cluster worker node

Environment Initialization

1. Configure hostname resolution

# Add the cluster nodes to /etc/hosts
cat >> /etc/hosts << EOF
192.168.31.31    c1
192.168.31.32    c2
192.168.31.33    c3
EOF
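The same entries can be generated from a node map, which scales better as the node list grows. A minimal sketch; `HOSTS_FILE` is a temp stand-in for /etc/hosts so the sketch is self-contained (the names and IPs are this article's values):

```shell
# Generate hosts entries from a node map; on a real node, append to /etc/hosts instead
HOSTS_FILE=$(mktemp)
declare -A NODES=( [c1]=192.168.31.31 [c2]=192.168.31.32 [c3]=192.168.31.33 )
for name in c1 c2 c3; do
  printf '%s    %s\n' "${NODES[$name]}" "$name" >> "$HOSTS_FILE"
done
cat "$HOSTS_FILE"
```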

2. Base system configuration

# Load the required kernel modules at boot
cat << EOF > /etc/modules-load.d/99-k8s.conf
overlay
br_netfilter
EOF
modprobe overlay && modprobe br_netfilter   # load them now as well, so the bridge sysctls below can be applied

# Kernel parameter tuning
cat << EOF > /etc/sysctl.d/99-k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
EOF
sysctl -p /etc/sysctl.d/99-k8s.conf   # apply the kernel parameters immediately

# Disable SELinux and swap
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
swapoff -a
sed -i '/swap/ s/^/# /' /etc/fstab

3. Firewall rules on each cluster node

Option 1: configure firewall rules that open only the required ports. Suitable for production.
# Master node firewall rules
firewall-cmd --permanent --add-port={6443,2379,2380,10250,10251,10252,10257,10259,179}/tcp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --reload
# Worker node firewall rules
firewall-cmd --permanent --add-port={179,10250,30000-32767}/tcp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --reload

Option 2: disable the firewall entirely. Suitable for test environments only.
systemctl stop firewalld && systemctl disable firewalld 

4. Reboot all nodes

ansible c1,c2,c3 -m shell -a 'reboot'

Deploying Containerd

1. Install the containerd runtime from the binary release

# Install containerd + runc + CNI plugins
wget https://github.com/containerd/containerd/releases/download/v1.7.8/cri-containerd-cni-1.7.8-linux-amd64.tar.gz
tar -zxvf cri-containerd-cni-1.7.8-linux-amd64.tar.gz -C /

2. Customize the containerd configuration

# Dump containerd's default configuration
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

# Edit the following settings
vim /etc/containerd/config.toml
 [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
   ...
   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
     # Use the systemd cgroup driver, which keeps nodes more stable under resource pressure. With cgroup v2 this MUST be set to true, otherwise cluster initialization fails
     SystemdCgroup = true
 [plugins."io.containerd.grpc.v1.cri"]
   ...
   # sandbox_image = "registry.k8s.io/pause:3.8"
   sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
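The two edits above can also be made non-interactively with sed, which helps when provisioning several nodes. A sketch, shown here against a sample snippet of the default config so it is self-contained; on a real node, point the path at /etc/containerd/config.toml instead:

```shell
# Sample of the two relevant lines from `containerd config default` (stand-in file)
CONF=$(mktemp)
cat > "$CONF" << 'EOF'
            SystemdCgroup = false
    sandbox_image = "registry.k8s.io/pause:3.8"
EOF

# Same edits as the vim session above, applied non-interactively
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$CONF"
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' "$CONF"
cat "$CONF"
```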

3. Start containerd

systemctl enable --now containerd.service 

Deploying kubeadm, kubectl, and kubelet

1. Install the k8s components kubeadm, kubectl, and kubelet with dnf

# Add the Kubernetes yum repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

# Install kubelet, kubeadm, and kubectl; kubelet is enabled on boot in the next step
dnf install kubelet kubeadm kubectl --disableexcludes=kubernetes -y

# Install other dependencies
dnf install iproute-tc ipvsadm -y
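Since kube-proxy will run in ipvs mode later in this article, the IPVS kernel modules should also be loaded at boot. A sketch; the module list is the commonly used minimal set, not something specified in the original article:

```shell
# Load the kernel modules that kube-proxy's ipvs mode relies on
cat << EOF > /etc/modules-load.d/99-ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
EOF
```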

2. Enable kubelet

systemctl enable --now kubelet

Creating the Kubernetes Cluster

Customize the kubeadm configuration

1. Customize the cluster configuration via kubeadm.yml

# Custom overrides of the default configuration
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 5qu609.n2h6xv3t5w4iy3b4
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
localAPIEndpoint:     # kube-apiserver listen IP and port
  advertiseAddress: 192.168.31.31
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: c1    # the master node's hostname; kubeadm needs it when signing certificates
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: kubernetes
kubernetesVersion: v1.28.3
imageRepository: registry.aliyuncs.com/google_containers
apiServer:
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
networking:     # custom pod and service CIDRs for the cluster
  dnsDomain: cluster.local
  podSubnet: 10.15.0.0/16
  serviceSubnet: 10.16.0.0/16
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
clusterDNS:
- 10.16.0.10
clusterDomain: cluster.local
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs        # custom kube-proxy mode
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 0
  contentType: ""
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
  qps: 0
clusterCIDR: ""
configSyncPeriod: 0s

Note: the pod CIDR customized via networking.podSubnet only takes effect if it matches the network configured for Flannel (or whichever CNI plugin you use)!
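The pod, service, and node networks must also not overlap with one another. A small bash sketch that checks two IPv4 CIDRs for overlap; the helpers `ip2int` and `cidr_overlap` are mine, not part of kubeadm:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer
ip2int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Print "overlap" if the two CIDRs share any address, "ok" otherwise
cidr_overlap() {
  local net1=${1%/*} len1=${1#*/} net2=${2%/*} len2=${2#*/}
  # Compare both networks under the shorter (less specific) prefix
  local len=$(( len1 < len2 ? len1 : len2 ))
  local mask=$(( (0xFFFFFFFF << (32 - len)) & 0xFFFFFFFF ))
  if (( ($(ip2int "$net1") & mask) == ($(ip2int "$net2") & mask) )); then
    echo overlap
  else
    echo ok
  fi
}

cidr_overlap 10.15.0.0/16 10.16.0.0/16      # pod vs service CIDR from this article: ok
cidr_overlap 10.15.0.0/16 192.168.31.0/24   # pod CIDR vs node network: ok
```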

Create the cluster

kubeadm init --config=kubeadm.yml

# Configure kubectl access to the cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Join worker nodes

# Run the following on each worker node as root to join it to the cluster:
kubeadm join 192.168.31.31:6443 --token 5qu609.n2h6xv3t5w4iy3b4 --discovery-token-ca-cert-hash sha256:d3a5ab286e2c8128e4d4782a385561555b469a94dae54a0af55ff672be601041

# If the token has expired (ttl in kubeadm.yml is 24h), print a fresh join command on the master:
# kubeadm token create --print-join-command

Tear down the cluster (optional)

kubeadm reset

rm /etc/cni/net.d/* -rf
rm $HOME/.kube/config -rf

ipvsadm --clear
iptables -F -t filter
iptables -F -t nat
iptables -F -t mangle
iptables -F -t raw

Deploying Flannel

1. Install the cluster network add-on Flannel

wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Flannel's default pod CIDR is 10.244.0.0/16; change it to match the kubeadm config before applying
sed -i 's#10.244.0.0/16#10.15.0.0/16#' kube-flannel.yml
kubectl apply -f kube-flannel.yml

# Back up containerd's default CNI network config
cd  /etc/cni/net.d/  && mv 10-containerd-net.conflist 10-containerd-net.conflist.bak

2. The auto-generated Flannel default CNI configuration

cat /etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}

3. Pod address allocation on a node while Flannel is running

cat /run/flannel/subnet.env 
#> FLANNEL_NETWORK=10.15.0.0/16
#> FLANNEL_SUBNET=10.15.0.1/24
#> FLANNEL_MTU=1450
#> FLANNEL_IPMASQ=true
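subnet.env is a plain KEY=VALUE file, so scripts can source it directly. A sketch that recreates the sample contents above in a temp file so it is self-contained; on a node, source /run/flannel/subnet.env itself:

```shell
# Recreate the sample subnet.env shown above and source it
SUBNET_ENV=$(mktemp)
cat > "$SUBNET_ENV" << 'EOF'
FLANNEL_NETWORK=10.15.0.0/16
FLANNEL_SUBNET=10.15.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
. "$SUBNET_ENV"
echo "node pod subnet: $FLANNEL_SUBNET (cluster: $FLANNEL_NETWORK, MTU: $FLANNEL_MTU)"
```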

4. Customizing the Flannel configuration in kube-flannel.yml

net-conf.json: |
  {
    "Network": "10.15.0.0/16",
    "Backend": {
      "Type": "vxlan",        # other values: host-gw, wireguard (kernels < 5.6 need an extra WireGuard package)
      "DirectRouting": true   # when nodes are on the same subnet, cross-node pod traffic skips vxlan encapsulation (like host-gw); across subnets vxlan is still used
    }
  }

Deploying Dashboard

1. Deploy the Dashboard

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# Edit recommended.yaml and change the following; the range of valid node ports is 30000-32767
spec:
  type: NodePort        # Service type (other options: ClusterIP, LoadBalancer)
  ports:
    - port: 443         # service port
      targetPort: 8443  # pod port
      nodePort: 30443   # expose the service on node port 30443

kubectl apply -f recommended.yaml

# Get the default token (limited permissions)
kubectl -n kubernetes-dashboard create token kubernetes-dashboard

2. Create a user with admin privileges

cat > kubernetes-dashboard-admin.yml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

kubectl apply -f kubernetes-dashboard-admin.yml

# Get a token for the admin user
kubectl -n kubernetes-dashboard create token admin-user

3. Access the Dashboard at any node's host IP address plus the nodePort

https://192.168.31.32:30443/

Managing the Cluster from the Command Line

1. Create a test deployment; `tail -f /dev/null` keeps the rockylinux containers running instead of exiting immediately

kubectl create deployment --image rockylinux:9.2 --replicas 2 rockylinux -- tail -f /dev/null 

2. Cluster node status: kubectl get node -o wide

NAME   STATUS   ROLES           AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE        KERNEL-VERSION           CONTAINER-RUNTIME
c1     Ready    control-plane   2m32s   v1.28.3   192.168.31.31   <none>        Anolis OS 8.8   5.10.134-13.an8.x86_64   containerd://1.7.8
c2     Ready    <none>          111s    v1.28.3   192.168.31.32   <none>        Anolis OS 8.8   5.10.134-13.an8.x86_64   containerd://1.7.8
c3     Ready    <none>          109s    v1.28.3   192.168.31.33   <none>        Anolis OS 8.8   5.10.134-13.an8.x86_64   containerd://1.7.8

3. Current pod status: kubectl get pod -o wide -A

NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
kube-flannel           kube-flannel-ds-5sfdl                        1/1     Running   0          4m21s   192.168.31.32   c2     <none>           <none>
kube-flannel           kube-flannel-ds-79h6p                        1/1     Running   0          4m21s   192.168.31.33   c3     <none>           <none>
kube-flannel           kube-flannel-ds-kgnmk                        1/1     Running   0          4m21s   192.168.31.31   c1     <none>           <none>
kube-system            coredns-66f779496c-5p4q8                     1/1     Running   0          6m15s   10.15.1.42      c2     <none>           <none>
kube-system            coredns-66f779496c-779sn                     1/1     Running   0          6m15s   10.15.1.41      c2     <none>           <none>
kube-system            etcd-c1                                      1/1     Running   0          6m30s   192.168.31.31   c1     <none>           <none>
kube-system            kube-apiserver-c1                            1/1     Running   0          6m28s   192.168.31.31   c1     <none>           <none>
kube-system            kube-controller-manager-c1                   1/1     Running   0          6m28s   192.168.31.31   c1     <none>           <none>
kube-system            kube-proxy-dsqdb                             1/1     Running   0          5m52s   192.168.31.32   c2     <none>           <none>
kube-system            kube-proxy-h9pqc                             1/1     Running   0          5m50s   192.168.31.33   c3     <none>           <none>
kube-system            kube-proxy-jlvzv                             1/1     Running   0          6m15s   192.168.31.31   c1     <none>           <none>
kube-system            kube-scheduler-c1                            1/1     Running   0          6m28s   192.168.31.31   c1     <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-5657497c4c-926hr   1/1     Running   0          3m28s   10.15.2.50      c3     <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-78f87ddfc-fkh29         1/1     Running   0          3m28s   10.15.2.49      c3     <none>           <none>

加文
Author: 加文
Operations engineer
Copyright: free to republish for non-commercial use, no derivatives, please credit the source!

