Kubernetes (k8s) 1.23.6: Installing and Deploying a Docker-Based Cluster



1. Deployment Methods

There are several ways to deploy Kubernetes:

  • minikube: a tool for quickly setting up a single-node Kubernetes environment

  • kubeadm: a tool for quickly setting up a Kubernetes cluster

  • Binary packages: download the binary package of each component from the official site and install them one by one

Here we use kubeadm for the installation.

2. Cluster Plan

A Kubernetes cluster can be deployed as one master with multiple workers, or multiple masters with multiple workers. Here we use one master with multiple workers.

Server name | Server IP      | Role   | CPU (minimum) | Memory (minimum)
k8s-master  | 192.168.23.160 | master | 2 cores       | 2 GB
k8s-node1   | 192.168.23.161 | node   | 2 cores       | 2 GB
k8s-node2   | 192.168.23.162 | node   | 2 cores       | 2 GB
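
Since the hostname k8s-master is later used as the control-plane endpoint, every machine needs to resolve these names. A minimal sketch, assuming plain /etc/hosts entries (run on all three machines, setting the matching hostname on each):

[root@k8s-master ~]# hostnamectl set-hostname k8s-master    # use k8s-node1 / k8s-node2 on the other machines
[root@k8s-master ~]# cat <<EOF >> /etc/hosts
192.168.23.160 k8s-master
192.168.23.161 k8s-node1
192.168.23.162 k8s-node2
EOF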

3. Docker Installation

A Docker version compatible with Kubernetes is required here; reference links:

  1. https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md

  2. https://github.com/kubernetes/kubernetes/blob/v1.23.6/build/dependencies.yaml

containerd also needs to be compatible with Docker; reference links:

  1. https://docs.docker.com/engine/release-notes/

  2. https://github.com/moby/moby/blob/v20.10.7/vendor.conf

So here we install Docker 20.10.7 and containerd 1.4.6.

For the Docker installation itself, refer to the post "Installing Docker on CentOS 7 from a yum repository, and uninstalling Docker". A version-pinned sketch is given below.
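
If you only need the outline, a hedged sketch of a yum-based install pinned to those versions might look like this (it assumes the official docker-ce repository; see the referenced post for details):

[root@k8s-master ~]# yum install -y yum-utils
[root@k8s-master ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[root@k8s-master ~]# yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6
[root@k8s-master ~]# systemctl enable docker --now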

4. Installing the k8s Cluster

4.1 Base Environment

1. Disable SELinux

Temporarily:

[root@k8s-master ~]# setenforce 0
[root@k8s-master ~]# getenforce
Permissive
[root@k8s-master ~]#

Permanently (requires a server reboot):

[root@k8s-master ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

2. Disable swap

The swap partition is virtual memory: once physical memory is exhausted, disk space is used as if it were RAM. This hurts performance, so swap should be disabled here. If it cannot be disabled, the cluster configuration has to be adjusted instead (a sketch follows).
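
If swap really cannot be turned off, a hedged workaround (not covered further in this article) is to let kubelet tolerate swap and to skip the corresponding preflight check:

# Sketch only: allow kubelet to start even with swap enabled
[root@k8s-master ~]# echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' >> /etc/sysconfig/kubelet
# And skip the swap preflight check when running kubeadm
[root@k8s-master ~]# kubeadm init --ignore-preflight-errors=Swap    # plus the other flags shown in 4.4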

Temporarily:

[root@k8s-master ~]# swapoff -a 
[root@k8s-master ~]# free -m
total used free shared buff/cache available
Mem: 1819 286 632 9 900 1364
Swap: 0 0 0
[root@k8s-master ~]#

Permanently (requires a server reboot):

[root@k8s-master ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab

3. Bridge settings

So that the server's iptables can see bridged traffic, enable bridge filtering and IP forwarding.

Create the /etc/modules-load.d/k8s.conf file:

[root@k8s-master ~]# cat <<EOF > /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
[root@k8s-master ~]#

Create the /etc/sysctl.d/k8s.conf file:

[root@k8s-master ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@k8s-master ~]#

Load the configuration:

[root@k8s-master ~]# sysctl --system

Load the br_netfilter bridge filtering module and the overlay network virtualization module:

[root@k8s-master ~]# modprobe br_netfilter
[root@k8s-master ~]# modprobe overlay

Verify that the modules loaded successfully:

[root@k8s-master ~]# lsmod | grep -e br_netfilter -e overlay
br_netfilter 22256 0
bridge 151336 1 br_netfilter
overlay 91659 0
[root@k8s-master ~]#

4. Configure IPVS

Services have two proxy modes: iptables-based and ipvs-based. The ipvs mode performs better, but its kernel modules must be loaded manually before it can be used.

Install ipset and ipvsadm:

[root@k8s-master ~]# yum install ipset ipvsadm

Create the script /etc/sysconfig/modules/ipvs.modules with the following content, then execute it:

[root@k8s-master ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
[root@k8s-master ~]#
[root@k8s-master ~]# /bin/bash /etc/sysconfig/modules/ipvs.modules
[root@k8s-master ~]#

Verify the modules loaded successfully:

    [root@k8s-master ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
    ip_vs_sh 12688 0
    ip_vs_wrr 12697 0
    ip_vs_rr 12600 0
    ip_vs 145458 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
    nf_conntrack_ipv4 15053 2
    nf_defrag_ipv4 12729 1 nf_conntrack_ipv4
    nf_conntrack 139264 7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
    libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack
    [root@k8s-master ~]#

4.2 Install kubelet, kubeadm and kubectl

Add the yum repository:

[root@k8s-master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-master ~]#

Install the packages, then enable and start kubelet:

    [root@k8s-master ~]# yum install -y --setopt=obsoletes=0 kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6
    [root@k8s-master ~]# systemctl enable kubelet --now
    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
    [root@k8s-master ~]#

Notes:

• obsoletes=1 means the old rpm package is removed when it is updated; 0 means updating does not remove the old package

• After kubelet starts, you can see its detailed logs with journalctl -f -u kubelet

• kubelet uses systemd as its cgroup driver by default; Docker should use the same driver (see the sketch after this list)

• After being started, kubelet restarts every few seconds: it sits in a crash loop waiting for instructions from kubeadm
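
Because kubelet defaults to the systemd cgroup driver while Docker defaults to cgroupfs, a mismatch here is a common cause of init failures. A hedged sketch of the usual Docker-side change (this overwrites /etc/docker/daemon.json; merge it into your existing file instead if you already have one):

[root@k8s-master ~]# cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@k8s-master ~]# systemctl restart docker
[root@k8s-master ~]# docker info | grep -i "cgroup driver"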

4.3 Pull the Images Needed on Each Machine

Check the image versions the cluster needs:

    [root@k8s-master ~]# kubeadm config images list
    I0510 21:58:56.690111 9902 version.go:255] remote version is much newer: v1.24.0; falling back to: stable-1.23
    k8s.gcr.io/kube-apiserver:v1.23.6
    k8s.gcr.io/kube-controller-manager:v1.23.6
    k8s.gcr.io/kube-scheduler:v1.23.6
    k8s.gcr.io/kube-proxy:v1.23.6
    k8s.gcr.io/pause:3.6
    k8s.gcr.io/etcd:3.5.1-0
    k8s.gcr.io/coredns/coredns:v1.8.6
    [root@k8s-master ~]#

Create the image download script images.sh, then execute it. Node machines only need the kube-proxy and pause images.

    [root@k8s-master ~]# tee ./images.sh <<-'EOF'
    #!/bin/bash

    images=(
    kube-apiserver:v1.23.6
    kube-controller-manager:v1.23.6
    kube-scheduler:v1.23.6
    kube-proxy:v1.23.6
    pause:3.6
    etcd:3.5.1-0
    coredns:v1.8.6
    )
    for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    done
    EOF
    [root@k8s-master ~]#
    [root@k8s-master ~]# chmod +x ./images.sh && ./images.sh

4.4 Initialize the Master Node (run only on the master node)

    [root@k8s-master ~]# kubeadm init \
    --apiserver-advertise-address=192.168.23.160 \
    --control-plane-endpoint=k8s-master \
    --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
    --kubernetes-version v1.23.6 \
    --service-cidr=10.96.0.0/16 \
    --pod-network-cidr=10.244.0.0/16
    [init] Using Kubernetes version: v1.23.6
    [preflight] Running pre-flight checks
    ...... output omitted ......
    Your Kubernetes control-plane has initialized successfully!

    To start using your cluster, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Alternatively, if you are the root user, you can run:

    export KUBECONFIG=/etc/kubernetes/admin.conf

    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/

    You can now join any number of control-plane nodes by copying certificate authorities
    and service account keys on each node and then running the following as root:

    kubeadm join k8s-master:6443 --token 0qc9py.n6az0o2jy1tryg2b \
    --discovery-token-ca-cert-hash sha256:f049a62946af45c27d9a387468d598906fd68e6d918d925ce699cb4f2a32e111 \
    --control-plane

    Then you can join any number of worker nodes by running the following on each as root:

    kubeadm join k8s-master:6443 --token 0qc9py.n6az0o2jy1tryg2b \
    --discovery-token-ca-cert-hash sha256:f049a62946af45c27d9a387468d598906fd68e6d918d925ce699cb4f2a32e111
    [root@k8s-master ~]#

Notes:

• You can add --v=6, --v=10, etc. to see more detailed logs

• The networks given to these parameters must not overlap with each other or with the host network; for example, 192.168.2.0/24 and 192.168.0.0/16 overlap
  --pod-network-cidr: the IP address range of the pod network; the value above can be used as-is
  --service-cidr: the IP address range for service virtual IPs; the default is 10.96.0.0/12, and the value above can be used as-is
  --apiserver-advertise-address: the IP address the API server listens on

An alternative way to run kubeadm init

# Print the default configuration
[root@k8s-master ~]# kubeadm config print init-defaults --component-configs KubeletConfiguration
# Edit a copy of the default configuration as needed (serviceSubnet and podSubnet sit at the same level), then pull the images
[root@k8s-master ~]# kubeadm config images pull --config kubeadm-config.yaml
# Run the initialization
[root@k8s-master ~]# kubeadm init --config kubeadm-config.yaml
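
For reference, a minimal kubeadm-config.yaml equivalent to the flags used in section 4.4 could look roughly like this (a sketch based on the v1beta3 format; check it against the kubeadm config print init-defaults output before using it):

[root@k8s-master ~]# cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.6
controlPlaneEndpoint: k8s-master
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.96.0.0/16
  podSubnet: 10.244.0.0/16
EOF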

If init fails, roll back with the following commands:

    [root@k8s-master ~]# kubeadm reset -f
    [root@k8s-master ~]#
    [root@k8s-master ~]# rm -rf /etc/kubernetes
    [root@k8s-master ~]# rm -rf /var/lib/etcd/
    [root@k8s-master ~]# rm -rf $HOME/.kube

4.5 Set Up .kube/config (run only on the master)

    [root@k8s-master ~]# mkdir -p $HOME/.kube
    [root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    [root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl reads this configuration file.

4.6 Install the Flannel Network Plugin (run only on the master)

The plugin runs as a DaemonSet, so it runs on every node.

According to the current README.md on GitHub, it supports Kubernetes 1.17+.

If the deployment fails because the images cannot be pulled, first replace the image registry in the yaml file with a mirror that is reachable, as sketched below.
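
A hedged sketch of that substitution (the mirror prefix below is a placeholder, not a registry this article verifies; replace it with one that actually hosts the flannel images):

[root@k8s-master ~]# wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# Point the rancher/... images at your own mirror, then apply the local file
[root@k8s-master ~]# sed -i 's#image: rancher/#image: <your-mirror>/#g' kube-flannel.yml
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml

Otherwise the upstream manifest can be applied directly: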

    [root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
    Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
    podsecuritypolicy.policy/psp.flannel.unprivileged created
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.apps/kube-flannel-ds created
    [root@k8s-master ~]#

This pulls two images: rancher/mirrored-flannelcni-flannel:v0.17.0 and rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1.

Now check the status of the master:

    [root@k8s-master ~]# 
    [root@k8s-master ~]# kubectl get pods -A
    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system coredns-65c54cc984-lqcxl 1/1 Running 0 49m
    kube-system coredns-65c54cc984-q2n72 1/1 Running 0 49m
    kube-system etcd-k8s-master 1/1 Running 2 (14m ago) 49m
    kube-system kube-apiserver-k8s-master 1/1 Running 2 (14m ago) 49m
    kube-system kube-controller-manager-k8s-master 1/1 Running 2 (14m ago) 49m
    kube-system kube-flannel-ds-6v9jg 1/1 Running 0 9m15s
    kube-system kube-proxy-6dz9x 1/1 Running 2 (14m ago) 49m
    kube-system kube-scheduler-k8s-master 1/1 Running 2 (14m ago) 49m
    [root@k8s-master ~]#
    [root@k8s-master ~]# kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    k8s-master Ready control-plane,master 49m v1.23.6
    [root@k8s-master ~]#

4.7 Join the Worker Nodes (run only on the nodes)

The join command comes from the output of the successful kubeadm init above.

    [root@k8s-node1 ~]# kubeadm join k8s-master:6443 --token 0qc9py.n6az0o2jy1tryg2b \
    --discovery-token-ca-cert-hash sha256:f049a62946af45c27d9a387468d598906fd68e6d918d925ce699cb4f2a32e111
    [preflight] Running pre-flight checks
    ...... output omitted ......
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.

    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

    [root@k8s-node1 ~]#

The token is valid for 24 hours. A new join command can be generated on the master node:

    [root@k8s-master ~]# kubeadm token create --print-join-command

Then check the status on the master with watch -n 3 kubectl get pods -A and kubectl get nodes.

4.7.1 Allowing kubectl to Run on the Worker Nodes

On the master node, copy $HOME/.kube to the $HOME directory of each node:

    [root@k8s-master ~]# 
    [root@k8s-master ~]# scp -r $HOME/.kube k8s-node1:$HOME
    [root@k8s-master ~]#

5. Deploy the Dashboard (run only on the master)

The official Kubernetes web UI: https://github.com/kubernetes/dashboard

5.1 Deployment

For the version compatibility between the dashboard and Kubernetes, see: https://github.com/kubernetes/dashboard/blob/v2.5.1/go.mod

[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@k8s-master ~]#

This pulls two images: kubernetesui/dashboard:v2.5.1 and kubernetesui/metrics-scraper:v1.0.7.

Check the status on the master with watch -n 3 kubectl get pods -A.

5.2 Set the Access Port

Change type: ClusterIP to type: NodePort:

      [root@k8s-master ~]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
      service/kubernetes-dashboard edited
      [root@k8s-master ~]#
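
If you prefer a non-interactive change, a one-line alternative (a sketch) is:

[root@k8s-master ~]# kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'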

Check the port:

      [root@k8s-master ~]# kubectl get svc -A | grep kubernetes-dashboard
      kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.96.114.89 8000/TCP 2m52s
      kubernetes-dashboard kubernetes-dashboard NodePort 10.96.180.218 443:32314/TCP 2m52s
      [root@k8s-master ~]#

Access the dashboard page at https://k8s-node1:32314.

A login token is required here; obtain it with the following steps.

5.3 Create an Access Account

Create the resource file, then apply it:

[root@k8s-master ~]# tee ./dash.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl apply -f dash.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
[root@k8s-master ~]#

5.4 Get the Access Token

        [root@k8s-master ~]# kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
eyJhbGciOiJSUzI1NiIsImtpZCI6ImJYYjZIYUV5YXZ6QTZMUFV1UTVnZG9pb3ZlUGkwUjNRcVVqbW4waUJ3aTQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTZoeng3Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI0ZTZjNTU1NC1kYThiLTRjYTgtYjMwYy1jY2UwZDIxNzVlNzYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.YZm6NOIR8Owh8FOu_z0XVfgNjlW2qfz5SaAYz3blzXy51-HZUdkKqQq7fc5zHaAnlVWlIHF35FTOrk-JKI89IlLvbNYiTLbUHOthWf075O4gMIB6siX863c9Ao0ZAujEnrXjyQGdpI3HgdjHBEFkNgTrzR5kRPNnpf36dNG4IZ0hNzyFLH2daJTri0bVRXZ40ZsqaH_0BPf_uVYdZzlVMxe_ZgDYVWgR9W0OYr1oV-OW3vFHBy9b_GhJZkruzN58QDj-Zg20KfYD5Kk5xBS5SMaYVyq7cHs0RagI-3SNFHWYVYYaSKYvLzZWjjwx1SopF9rBbBeEIjdLJkgMZ0RqeQ
[root@k8s-master ~]#

The token is: eyJhbGci…gMZ0RqeQ

Copy the token into the login page to sign in.

6. Install nginx as a Test

Deploy:

        [root@k8s-master ~]# kubectl create deployment nginx --image=nginx
        deployment.apps/nginx created
        [root@k8s-master ~]#

Check the status on the master with watch -n 3 kubectl get pods -A.

Expose the port:

        [root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
        service/nginx exposed
        [root@k8s-master ~]#

Check the port:

        [root@k8s-master ~]# kubectl get pods,svc
        NAME READY STATUS RESTARTS AGE
        pod/nginx-85b98978db-hldcq 1/1 Running 0 64s

        NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
        service/kubernetes ClusterIP 10.96.0.1 443/TCP 76m
        service/nginx NodePort 10.96.167.226 80:30523/TCP 15s
        [root@k8s-master ~]#

Access the nginx page at http://k8s-node1:30523, or check it from the command line as sketched below.
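
A quick check from the command line (the node IP comes from the cluster plan table; adjust the NodePort to whatever kubectl get svc reported):

[root@k8s-master ~]# curl -I http://192.168.23.161:30523    # expect an HTTP/1.1 200 OK from nginx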

7. Other Optional Components

7.1 Install metrics-server

For an introduction to metrics-server and how to install it, refer to the kubernetes-sigs/metrics-server section of the referenced blog post; a short sketch follows.
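
If you only need the short version, a hedged sketch of a typical metrics-server install is below; the manifest URL is the upstream release asset, and --kubelet-insecure-tls is a metrics-server argument commonly added in lab clusters whose kubelets use self-signed certificates:

[root@k8s-master ~]# wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# Edit the Deployment in components.yaml and add --kubelet-insecure-tls to the container args (lab clusters only)
[root@k8s-master ~]# kubectl apply -f components.yaml
[root@k8s-master ~]# kubectl top nodes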

7.2 Enable IPVS

To switch kube-proxy to IPVS, refer to the enabling-ipvs section of the referenced blog post; the usual steps are sketched below.
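
The usual way to switch kube-proxy to ipvs, sketched here under the assumption that the ipvs modules from section 4.1 are loaded on every node:

# Change mode: "" to mode: "ipvs" in the kube-proxy ConfigMap
[root@k8s-master ~]# kubectl edit configmap kube-proxy -n kube-system
# Recreate the kube-proxy pods so they pick up the new mode
[root@k8s-master ~]# kubectl delete pod -l k8s-app=kube-proxy -n kube-system
# Verify that virtual servers show up
[root@k8s-master ~]# ipvsadm -Ln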

7.3 Install ingress-nginx

For installing the ingress-nginx controller, refer to the ingress-nginx controller installation section of the referenced blog post.

7.4 Set Up an NFS Server

For setting up an NFS server, refer to the NFS server section of the referenced blog post.

Copyright notice: this article was first published on CSDN by the author Bulut0907 under the CC 4.0 BY-SA license; please include the original source link and this notice when reposting. The work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 2.5 China Mainland license. Original link: https://blog.csdn.net/yy8623977/article/details/124685772. Special thanks to the original author. All rights to this article belong to the original author; for commercial reposting please contact the original author, and for non-commercial reposting please credit the source.