
How to quickly create a multi-node k8s cluster using the resources of a single host


Kind (Kubernetes IN Docker) uses one container to simulate one node: each "Node" runs as a Docker container, so a multi-node k8s cluster can be built from several containers. Inside each node, containerd and kubelet run as systemd services, while etcd, kube-apiserver, kube-scheduler, kube-controller-manager and kube-proxy run as containers. This setup is well suited to quickly deploying a multi-node k8s cluster for validating and testing Kubernetes features.
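
Once the cluster described below has been created, this architecture is easy to confirm from the host: every "Node" shows up as an ordinary Docker container, and kubelet/containerd run under systemd inside it. A minimal check, assuming the cluster name demo used later in this article:

# each kind "Node" is just a container on the host
docker ps --format '{{.Names}}\t{{.Image}}' | grep demo
# inside a node, kubelet and containerd are systemd services
docker exec -it demo-control-plane systemctl status kubelet containerd --no-pager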

Environment overview

# Component versions
CentOS 7.9.2009 ( 5.4.180-1.el7 )
Docker Engine Community : V20.10.20
Kubernetes Version : V1.25.2

# Network plan
node network: 172.18.0.0/16
pod  network: 10.15.0.0/16
service network: 10.16.0.0/16
test host IP address: 192.168.31.19/24

# kind-related images
kindest/node:v1.25.2                 # Docker image that runs the nested containers, systemd and the Kubernetes components
kindest/haproxy:v20220607-9a4d8d2a   # load-balances access to kube-apiserver

Base environment setup

Installing kubectl

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# install the kubectl CLI
yum makecache fast
yum install -y  kubectl

# enable kubectl command auto-completion
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
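
A quick sanity check that the client is installed and that completion will actually work in the current shell (bash-completion must be loaded; see the Q&A section at the end):

kubectl version --client                   # prints the installed client version
type _get_comp_words_by_ref >/dev/null 2>&1 \
  && echo "bash-completion loaded" \
  || echo "install and source bash-completion first"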

Installing Docker

yum-config-manager  --add-repo  https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install docker-ce-20.10.9-3.el7 -y
systemctl enable docker && systemctl start docker
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
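
Before running kind, it is worth confirming the Docker daemon is up (illustrative check):

docker version --format 'Server: {{.Server.Version}}'   # installed engine version
systemctl is-active docker                               # > active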

Custom Kind configuration

This configuration launches six containers: three simulate k8s control-plane nodes and three simulate worker nodes.

cat > kind-config.yaml << EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  kubeProxyMode: "ipvs"         # kube-proxy mode
  # podSubnet: "10.15.0.0/16"
  # serviceSubnet: "10.16.0.0/16"
kubeadmConfigPatches:
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: InitConfiguration
  metadata:
    name: config
  imageRepository: registry.aliyuncs.com/google_containers    # image registry to pull component images from
  nodeRegistration:
    kubeletExtraArgs:
      pod-infra-container-image: registry.aliyuncs.com/google_containers/pause:3.7    # pause image to use
- |
  apiVersion: kubeadm.k8s.io/v1beta3
  kind: ClusterConfiguration
  metadata:
    name: config
  kubernetesVersion: "1.25.2"
  networking:
    serviceSubnet: 10.15.0.0/16
    podSubnet: 10.16.0.0/16
    dnsDomain: cluster.local  
nodes:
- role: control-plane
  # label this kind node via kubeadm kubeletExtraArgs so the ingress controller is scheduled onto it
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"    
  extraPortMappings:
  - containerPort: 30443
    hostPort: 31443
    listenAddress: "0.0.0.0"
    protocol: TCP
  - containerPort: 80
    hostPort: 80
  - containerPort: 443
    hostPort: 443
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker
EOF

extraPortMappings maps a kind node port (containerPort) to a local host port (hostPort); here, containerPort is the port that a k8s Service of type NodePort exposes on the local kind node.
Traffic flow: 31443 (host) -> 30443 (kind node) -> 443 (Service port) -> 8443 (pod port)
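
Once the cluster in the next section is created, these mappings can be verified from the host; a minimal check against the first control-plane container (illustrative output):

docker port demo-control-plane
# > 30443/tcp -> 0.0.0.0:31443
# > 80/tcp -> 0.0.0.0:80
# > 443/tcp -> 0.0.0.0:443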

Creating the k8s cluster with Kind

# install from the prebuilt release binary
wget https://github.com/kubernetes-sigs/kind/releases/download/v0.16.0/kind-linux-amd64
mv kind-linux-amd64 /usr/local/bin/kind
chmod +x /usr/local/bin/kind

# create the cluster
kind create cluster -n demo --config kind-config.yaml
# list / delete clusters
kind get clusters
kind delete cluster -n demo
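
After creation, a quick check that the node containers are up and that kubectl on the test host points at the new cluster (kind names the kubeconfig context kind-<cluster>):

kind get nodes --name demo        # lists the node containers (demo-control-plane, demo-worker, ...)
kubectl config current-context
# > kind-demo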

Deploying the cluster Dashboard

Provides a web UI for managing the k8s cluster.

Changing the Service type

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
Edit recommended.yaml to set the Service type to NodePort and map Service port 443 to node port 30443; the access path then becomes 30443 -> 443 -> 8443:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort        # added line
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443   # added line
  selector:
    k8s-app: kubernetes-dashboard

kubectl apply -f recommended.yaml
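
A quick check that the Dashboard is running and that its Service is exposed on NodePort 30443 (ClusterIP shown as a placeholder):

kubectl -n kubernetes-dashboard get pod,svc
# > NAME                                 READY   STATUS    RESTARTS   AGE
# > pod/dashboard-metrics-scraper-...    1/1     Running   0          1m
# > pod/kubernetes-dashboard-...         1/1     Running   0          1m
# >
# > NAME                                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
# > service/dashboard-metrics-scraper   ClusterIP   10.15.x.x    <none>        8000/TCP        1m
# > service/kubernetes-dashboard        NodePort    10.15.x.x    <none>        443:30443/TCP   1m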

Adding an admin account

To protect your cluster data, the Dashboard is deployed with a minimal RBAC configuration by default. Create its access token with:
kubectl -n kubernetes-dashboard create token kubernetes-dashboard

Create a service account with full cluster privileges for accessing the Kubernetes Dashboard:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
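
The manifest above can be saved to a file (dashboard-admin.yaml is just an illustrative name) and applied like any other resource:

kubectl apply -f dashboard-admin.yaml
# > serviceaccount/admin-user created
# > clusterrolebinding.rbac.authorization.k8s.io/admin-user created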

Configuration explained:
The service account admin-user is bound to the cluster role cluster-admin, so it inherits every permission that role carries. The permissions of cluster-admin can be inspected with kubectl describe clusterroles.rbac.authorization.k8s.io cluster-admin. Running kubectl -n kubernetes-dashboard create token admin-user returns the token for admin-user, which is used for token authentication when opening the Dashboard in a browser.
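
With the token in hand, the Dashboard can be opened in a browser through the port chain described earlier (31443 on the test host -> 30443 on the kind node -> the Service); a sketch using the test host IP from the environment section:

kubectl -n kubernetes-dashboard create token admin-user    # copy the printed token
# browse to the URL below (self-signed certificate, accept the warning) and log in with the token
# https://192.168.31.19:31443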

Deploying the Ingress-Nginx controller

Image version: registry.k8s.io/ingress-nginx/controller:v1.4.0

# Ingress-Nginx controller; its Service type defaults to NodePort
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

# Edit usage.yaml: add spec.rules.host: demo01.it123.me to the Ingress resource, then resolve that hostname to the test host IP (192.168.31.19)
wget https://kind.sigs.k8s.io/examples/ingress/usage.yaml
kubectl apply -f usage.yaml

# access test
http://demo01.it123.me/foo
> foo
http://demo01.it123.me/bar
> bar
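
If you would rather not edit DNS or /etc/hosts, the same test can be done with curl by pinning demo01.it123.me to the test host IP; ports 80/443 reach the ingress controller through the extraPortMappings defined earlier:

curl --resolve demo01.it123.me:80:192.168.31.19 http://demo01.it123.me/foo
# > foo
curl --resolve demo01.it123.me:80:192.168.31.19 http://demo01.it123.me/bar
# > bar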

Verifying and testing the k8s cluster

1. Verify that the k8s cluster is working properly

# enter the k8s control-plane node demo-control-plane
docker exec -it demo-control-plane bash

kubectl cluster-info
# > Kubernetes control plane is running at https://demo-external-load-balancer:6443
# > CoreDNS is running at https://demo-external-load-balancer:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
# > To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

kubectl get node -o wide
# > NAME                  STATUS   ROLES           AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION                CONTAINER-RUNTIME
# > demo-control-plane    Ready    control-plane   118m   v1.25.2   172.18.0.3    <none>        Ubuntu 22.04.1 LTS   5.4.188-1.el7.elrepo.x86_64   containerd://1.6.8
# > demo-control-plane2   Ready    control-plane   117m   v1.25.2   172.18.0.4    <none>        Ubuntu 22.04.1 LTS   5.4.188-1.el7.elrepo.x86_64   containerd://1.6.8
# > demo-control-plane3   Ready    control-plane   116m   v1.25.2   172.18.0.6    <none>        Ubuntu 22.04.1 LTS   5.4.188-1.el7.elrepo.x86_64   containerd://1.6.8
# > demo-worker           Ready    <none>          115m   v1.25.2   172.18.0.5    <none>        Ubuntu 22.04.1 LTS   5.4.188-1.el7.elrepo.x86_64   containerd://1.6.8
# > demo-worker2          Ready    <none>          115m   v1.25.2   172.18.0.7    <none>        Ubuntu 22.04.1 LTS   5.4.188-1.el7.elrepo.x86_64   containerd://1.6.8
# > demo-worker3          Ready    <none>          115m   v1.25.2   172.18.0.2    <none>        Ubuntu 22.04.1 LTS   5.4.188-1.el7.elrepo.x86_64   containerd://1.6.8

# list the pods in all namespaces on a given node
kubectl get pod -A -o wide --field-selector spec.nodeName='demo-control-plane'

# view the full configuration of a given pod
kubectl -n kube-system get pod kube-apiserver-demo-control-plane3 -o yaml

# list all containers running on the current node
crictl ps -a
# > CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
# > 72543f8c33d29       d921cee849482       16 hours ago        Running             kindnet-cni               1                   43e0043ff3f43       kindnet-629vz
# > b2e84393b76ee       d681a4ce3c509       17 hours ago        Running             controller                0                   0fcf6e2c1b623       ingress-nginx-controller-7d68cdddd8-7kvdh
# > cb08c0aa55734       ca0ea1ee3cfd3       17 hours ago        Running             kube-scheduler            1                   ea1562867409f       kube-scheduler-demo-control-plane
# > 63e6b0a3307a8       dbfceb93c69b6       17 hours ago        Running             kube-controller-manager   1                   9fecff5941a02       kube-controller-manager-demo-control-plane
# > c1c5a2fd825b1       5185b96f0becf       17 hours ago        Running             coredns                   0                   4a074814568df       coredns-c676cc86f-vv6bd
# > b5eb1a5d4bd4f       5185b96f0becf       17 hours ago        Running             coredns                   0                   14bb62cd7ec40       coredns-c676cc86f-r6fg2
# > 02330ab9295b9       4c1e997385b8f       17 hours ago        Running             local-path-provisioner    0                   d34f2aacb74ac       local-path-provisioner-684f458cdd-zkv44
# > 97ca308797460       1c7d8c51823b5       17 hours ago        Running             kube-proxy                0                   8625facdc9af7       kube-proxy-z8wh6
# > d12e75fb3911f       a8a176a5d5d69       17 hours ago        Running             etcd                      0                   679d1dfd21a64       etcd-demo-control-plane
# > 93e3ad34a0d4e       97801f8394908       17 hours ago        Running             kube-apiserver            0                   eabda9c0df282       kube-apiserver-demo-control-plane

Note: all kubectl commands can also be run from the test host to interact with the cluster, whereas commands such as ctr and crictl must be run from inside a cluster node.
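
For example, the cluster kubeconfig on the test host can be (re)generated with kind, while runtime-level tools are reached through docker exec:

kind export kubeconfig --name demo        # rewrites ~/.kube/config for the demo cluster
docker exec -it demo-worker crictl ps     # crictl only exists inside the node containers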

2. A kind deployment installs an haproxy component by default to load-balance access to kube-apiserver

# Test host port 46097 is mapped to the haproxy frontend bind port; the frontend's backend is the kube-apiserver running on each of the three k8s control-plane nodes
docker ps |grep haproxy
# > ca94fa820cdf   kindest/haproxy:v20220607-9a4d8d2a   "haproxy -sf 7 -W -d…"   17 hours ago   Up 17 hours   127.0.0.1:46097->6443/tcp    demo-external-load-balancer

# main haproxy configuration: /usr/local/etc/haproxy/haproxy.cfg
frontend control-plane
  bind *:6443
  default_backend kube-apiservers

backend kube-apiservers
  option httpchk GET /healthz
  server demo-control-plane demo-control-plane:6443 check check-ssl verify none resolvers docker resolve-prefer ipv4
  server demo-control-plane2 demo-control-plane2:6443 check check-ssl verify none resolvers docker resolve-prefer ipv4
  server demo-control-plane3 demo-control-plane3:6443 check check-ssl verify none resolvers docker resolve-prefer ipv4
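
The live configuration can be read straight from the test host, using the load-balancer container name shown above:

docker exec demo-external-load-balancer cat /usr/local/etc/haproxy/haproxy.cfg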

Network analysis

kind ships with a simple network implementation, kindnetd, which builds the cluster network from standard CNI plugins and simple netlink routes.

route -n
# > Kernel IP routing table
# > Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
# > 0.0.0.0         172.18.0.1      0.0.0.0         UG    0      0        0 eth0
# > 10.16.0.2       0.0.0.0         255.255.255.255 UH    0      0        0 vethb86d2467
# > 10.16.0.3       0.0.0.0         255.255.255.255 UH    0      0        0 vethd5a7e000
# > 10.16.0.4       0.0.0.0         255.255.255.255 UH    0      0        0 veth5da47f46
# > 10.16.0.5       0.0.0.0         255.255.255.255 UH    0      0        0 vethe5f5947e
# > 10.16.1.0       172.18.0.3      255.255.255.0   UG    0      0        0 eth0
# > 10.16.2.0       172.18.0.6      255.255.255.0   UG    0      0        0 eth0
# > 10.16.3.0       172.18.0.8      255.255.255.0   UG    0      0        0 eth0
# > 10.16.4.0       172.18.0.4      255.255.255.0   UG    0      0        0 eth0
# > 10.16.5.0       172.18.0.7      255.255.255.0   UG    0      0        0 eth0
# > 172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0

From this kind node's routing table, the local pod subnet is 10.16.0.0/24, the pod subnet of kind node 172.18.0.3 is 10.16.1.0/24, and so on for the other kind nodes in the output above.
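
The same per-node pod subnets can also be read from the Kubernetes API instead of the routing table (output abbreviated; each node receives one /24 from the pod network):

kubectl get nodes -o custom-columns=NAME:.metadata.name,POD-CIDR:.spec.podCIDR
# > NAME                  POD-CIDR
# > demo-control-plane    10.16.x.0/24
# > demo-worker           10.16.y.0/24
# > ...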

Q&A

Pressing Tab for kubectl completion prints bash: _get_comp_words_by_ref: command not found. Fix as follows:

apt install bash-completion
source /usr/share/bash-completion/bash_completion

Important note: given the author's limited time and experience, this article may contain errors or omissions. Corrections and feedback from readers are welcome!

References

https://kind.sigs.k8s.io/docs/user/quick-start

