Installing Kubernetes from Binaries

fifee
2025-08-21


1. Installation Environment

1.1 Lab Environment Preparation

1.1.1 Host Allocation

Hostname   IP            VIP           CPU   Memory
master1    20.20.20.56   20.20.20.88   4     4G
master2    20.20.20.57                 4     4G
master3    20.20.20.13                 4     4G
node1      20.20.20.14                 2     2G
node2      20.20.20.15                 2     2G

1.1.2 Environment Preparation

Change the hostnames to make node roles easy to distinguish

Time synchronization: install the ntp service with scheduled updates

Configure passwordless SSH login from the master nodes to all nodes

System-level host tuning

1.1.3 High-Availability Cluster Setup

Install a Docker or containerd version that supports the chosen Kubernetes binary version

Install HAProxy + Keepalived and configure a VIP to ensure high availability

2. Installation Preparation

2.1 Download the Kubernetes Binary Package

2.1.1 Kubernetes binary download locations

https://github.com/kubernetes/kubernetes

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md#server-binaries

2.1.2 Using the uploaded package

  [root@master1 ~]# ls
  anaconda-ks.cfg  original-ks.cfg  kubernetes-server-linux-amd64.tar.gz  start_up.sh

  [root@master1 ~]# tar -zxvf kubernetes-server-linux-amd64.tar.gz
  [root@master1 ~]# cd kubernetes/server/bin/
  #Copy the kube-apiserver kube-aggregator kube-controller-manager kubectl kubelet kube-proxy kube-scheduler binaries to the master nodes
  [root@master1 bin]# for i in master1 master2 master3;do scp kube-apiserver kube-aggregator kube-controller-manager kubectl kubelet kube-proxy kube-scheduler  $i:/usr/local/bin;done
  #Copy the kubelet and kube-proxy binaries to the worker nodes
  [root@master1 bin]# for i in node1 node2;do scp  kubelet kube-proxy  $i:/usr/local/bin;done
  [root@master1 bin]# cd

2.2 CA Certificate Creation and Distribution

Starting with Kubernetes 1.8, the system components use TLS certificates to encrypt their communication, and every Kubernetes cluster needs its own independent CA infrastructure. Three toolchains can generate the CA certificates: easyrsa, openssl, and cfssl. Here we use cfssl, currently the most widely used option; it is comparatively simple to configure, since everything certificate-related is described in JSON. The cfssl version used here is 1.6.

2.2.1 Create certificate directories on all master nodes
  #Create the etcd certificate directory on all master nodes
  [root@master1 ~]# mkdir -p /etc/etcd/ssl
  [root@master2 ~]# mkdir -p /etc/etcd/ssl/
  [root@master3 ~]# mkdir -p /etc/etcd/ssl/
  #Create the Kubernetes certificate directory on all master nodes
  [root@master1 ~]# mkdir -p /etc/kubernetes/pki
  [root@master2 ~]# mkdir -p /etc/kubernetes/pki
  [root@master3 ~]# mkdir -p /etc/kubernetes/pki
2.2.2 Install CFSSL
2.2.2.1 Download URLs (the downloads are slow; use the uploaded files instead)
  wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
  wget  https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64
  wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
  #Put the three files into /usr/local/bin, or create links to them
  [root@master1 ~]# ls
  anaconda-ks.cfg                   kubernetes
  cfssl_1.6.1_linux_amd64           kubernetes-server-linux-amd64.tar.gz
  cfssl-certinfo_1.6.1_linux_amd64  
  cfssljson_1.6.1_linux_amd64       
  #Rename the binaries (optional)
  [root@master1 ~]# mv cfssl_1.6.1_linux_amd64 cfssl
  [root@master1 ~]# mv cfssljson_1.6.1_linux_amd64 cfssljson
  [root@master1 ~]# mv cfssl-certinfo_1.6.1_linux_amd64 cfssl-certinfo
  [root@master1 ~]# ls
  anaconda-ks.cfg  cfssljson                             original-ks.cfg
  cfssl            kubernetes                            start_up.sh
  cfssl-certinfo   kubernetes-server-linux-amd64.tar.gz
  [root@master1 ~]# ln cfssl* /usr/local/bin/
  [root@master1 ~]# chmod +x /usr/local/bin/cfssl*
  [root@master1 ~]# ls /usr/local/bin/
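Note that `ln` without `-s` creates hard links, not symlinks, so the later `chmod +x` is visible through both names. A minimal sketch of this behavior, using throwaway files in a temp directory (illustration only, not the real cfssl binaries):

```shell
tmp=$(mktemp -d)
echo demo > "$tmp/cfssl-demo"
ln "$tmp/cfssl-demo" "$tmp/cfssl-linked"   # hard link: both names point at the same inode
chmod +x "$tmp/cfssl-linked"               # mode change applies to the shared inode
[ "$tmp/cfssl-demo" -ef "$tmp/cfssl-linked" ] && echo "same inode"
[ -x "$tmp/cfssl-demo" ] && echo "executable via original name too"
```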

2.2.3 Create the JSON config file used to generate the CA certificate

[root@master1 ~]# mkdir ssl
[root@master1 ~]# cd ssl

[root@master1 ssl]# vim ca-config.json

  {
    "signing": {
      "default": {
        "expiry": "876000h"
      },
      "profiles": {
        "kubernetes": {
          "usages": [
              "signing",
              "key encipherment",
              "server auth",
              "client auth"
          ],
          "expiry": "876000h"
        }
      }
    }
  }
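As a sanity check, the 876000h expiry used here (and in the CSR files below) works out to roughly a 100-year validity:

```shell
# 876000 hours -> years (24 h/day, 365 days/year)
echo $((876000 / 24 / 365))   # prints 100
```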

signing: the certificate can be used to sign other certificates; the generated ca.pem will contain CA=TRUE.
server auth: a client may use this CA to verify the certificate presented by a server.
client auth: a server may use this CA to verify the certificate presented by a client.
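The signing capability corresponds to `CA:TRUE` in the issued CA certificate. A self-contained sketch of how to confirm that flag, assuming plain `openssl` and a throwaway self-signed CA standing in for the real ca.pem:

```shell
tmp=$(mktemp -d)
# Throwaway self-signed CA, analogous to the output of `cfssl gencert -initca`
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-ca" -keyout "$tmp/ca-key.pem" -out "$tmp/ca.pem" 2>/dev/null
# The Basic Constraints extension should report CA:TRUE
openssl x509 -noout -text -in "$tmp/ca.pem" | grep "CA:TRUE"
```

The same `openssl x509 -noout -text` inspection works on the real `/etc/kubernetes/pki/ca.pem` once it is generated.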

2.2.4 Create the etcd certificates
2.2.4.1 Generate the etcd root CA certificate signing request (CSR) file

[root@master1 ssl]# vim etcd-ca-csr.json

  {
    "CN": "etcd",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Beijing",
        "L": "Beijing",
        "O": "etcd",
        "OU": "Etcd Security"
      }
    ],
    "ca": {
      "expiry": "876000h"
    }
  }

[root@master1 ssl]# cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca

[root@master1 ssl]# ls /etc/etcd/ssl/
etcd-ca.csr  etcd-ca-key.pem  etcd-ca.pem

2.2.4.2 Generate the etcd client certificate

[root@master1 ssl]# vim etcd-csr.json

  {
    "CN": "etcd",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Beijing",
        "L": "Beijing",
        "O": "etcd",
        "OU": "Etcd Security"
      }
    ]
  }
  cfssl gencert \
    -ca=/etc/etcd/ssl/etcd-ca.pem \
    -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
    -config=ca-config.json \
    -hostname=127.0.0.1,20.20.20.56,20.20.20.57,20.20.20.13 \
    -profile=kubernetes \
    etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd

2.2.4.3 Copy the certificates to the other nodes

Define the etcd nodes (use an odd number of nodes)

  [root@master1 ssl]# masternode="master1 master2 master3"
  [root@master1 ssl]# for i in $masternode;do scp -r /etc/etcd/ssl/*.pem $i:/etc/etcd/ssl/;done

[root@master1 ssl]# ln -s /etc/etcd/ssl /etc/kubernetes/pki/etcd

[root@master2 ~]# ln -s /etc/etcd/ssl /etc/kubernetes/pki/etcd

[root@master3 ~]# ln -s /etc/etcd/ssl /etc/kubernetes/pki/etcd

2.2.5 Create the Kubernetes cluster certificates
2.2.5.1 Generate the CSR file for the Kubernetes cluster CA certificate

[root@master1 ssl]# vim ca-csr.json

  {
    "CN": "kubernetes",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Beijing",
        "L": "Beijing",
        "O": "Kubernetes",
        "OU": "Kubernetes-manual"
      }
    ],
    "ca": {
      "expiry": "876000h"
    }
  }
2.2.5.2 Generate the Kubernetes CA certificate

[root@master1 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca

2.2.6 Create the apiserver certificate
2.2.6.1 Create the apiserver CSR file

[root@master1 ssl]# vim apiserver-csr.json

  {
    "CN": "kube-apiserver",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Beijing",
        "L": "Beijing",
        "O": "Kubernetes",
        "OU": "Kubernetes-manual"
      }
    ]
  }

"CN" (Common Name): kube-apiserver extracts this field from a certificate and uses it as the requesting user name (User Name); browsers use this field to verify whether a website is legitimate.
"O" (Organization): kube-apiserver extracts this field and uses it as the group (Group) the requesting user belongs to.
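How these fields look from the apiserver's point of view can be illustrated with a throwaway certificate (plain `openssl`, hypothetical subject values; this is not one of the cluster's real certificates):

```shell
tmp=$(mktemp -d)
# Hypothetical identity: user "demo-admin" in group "system:masters"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/O=system:masters/CN=demo-admin" \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null
# kube-apiserver reads CN as the user name and O as the group
openssl x509 -noout -subject -in "$tmp/cert.pem"
```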

2.2.6.2 Generate the apiserver certificate

  #Note: 10.96.0.1 (the first IP of the 10.96.0.0/16 service CIDR) is added to the SAN list; in-cluster clients reach the apiserver through it
  [root@master1 ssl]# cfssl gencert \
      -ca=/etc/kubernetes/pki/ca.pem \
      -ca-key=/etc/kubernetes/pki/ca-key.pem \
      -config=ca-config.json \
      -hostname=127.0.0.1,10.96.0.1,20.20.20.88,172.17.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,20.20.20.56,20.20.20.57,20.20.20.13 \
      -profile=kubernetes \
      apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver

2.2.7 Create the front-proxy CA certificate
2.2.7.1 Create the front-proxy-ca certificate

This is the API aggregation certificate, used mainly for traffic filtering.

[root@master1 ssl]# vim front-proxy-ca-csr.json

  {
    "CN": "kubernetes",
    "key": {
       "algo": "rsa",
       "size": 2048
    }
  }

[root@master1 ssl]# cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca

2.2.7.2 Create the front-proxy-client certificate

[root@master1 ssl]# vim front-proxy-client-csr.json

  {
    "CN": "front-proxy-client",
    "key": {
       "algo": "rsa",
       "size": 2048
    }
  }

[root@master1 ssl]# cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

2.2.8 Create the Controller Manager certificate
2.2.8.1 Create the CSR file

[root@master1 ssl]# vim manager-csr.json

  {
    "CN": "system:kube-controller-manager",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Beijing",
        "L": "Beijing",
        "O": "system:kube-controller-manager",
        "OU": "Kubernetes-manual"
      }
    ]
  }
2.2.8.2 Generate the Controller Manager certificate

  [root@master1 ssl]# cfssl gencert \
      -ca=/etc/kubernetes/pki/ca.pem \
      -ca-key=/etc/kubernetes/pki/ca-key.pem \
      -config=ca-config.json \
      -profile=kubernetes \
      manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

2.2.8.3 Set the cluster parameters

  [root@master1 ssl]# kubectl config set-cluster kubernetes \
        --certificate-authority=/etc/kubernetes/pki/ca.pem \
        --embed-certs=true \
        --server=https://20.20.20.88:16443 \
        --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
  Cluster "kubernetes" set.
2.2.8.4 Set a user
  kubectl config set-credentials system:kube-controller-manager \
        --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
        --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
        --embed-certs=true \
        --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

2.2.8.5 Set the context parameters
  kubectl config set-context system:kube-controller-manager@kubernetes \
       --cluster=kubernetes \
       --user=system:kube-controller-manager \
       --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
2.2.8.6 Set the default context
  kubectl config use-context system:kube-controller-manager@kubernetes \
        --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
2.2.9 Create the scheduler certificate

[root@master1 ssl]# vim scheduler-csr.json

  {
    "CN": "system:kube-scheduler",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Beijing",
        "L": "Beijing",
        "O": "system:kube-scheduler",
        "OU": "Kubernetes-manual"
      }
    ]
  }

[root@master1 ssl]# cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -profile=kubernetes scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

2.2.9.1 Set the cluster parameters
   kubectl config set-cluster kubernetes \
        --certificate-authority=/etc/kubernetes/pki/ca.pem \
        --embed-certs=true \
        --server=https://20.20.20.88:16443 \
        --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
2.2.9.2 Set a user
  kubectl config set-credentials system:kube-scheduler \
        --client-certificate=/etc/kubernetes/pki/scheduler.pem \
        --client-key=/etc/kubernetes/pki/scheduler-key.pem \
        --embed-certs=true \
        --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

2.2.9.3 Set the context parameters
   kubectl config set-context system:kube-scheduler@kubernetes \
      --cluster=kubernetes \
      --user=system:kube-scheduler \
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
2.2.9.4 Set the default context
   kubectl config use-context system:kube-scheduler@kubernetes \
        --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

2.2.10 Create the admin certificate

[root@master1 ssl]# vim admin-csr.json

  {
    "CN": "admin",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Beijing",
        "L": "Beijing",
        "O": "system:masters",
        "OU": "Kubernetes-manual"
      }
    ]
  }

[root@master1 ssl]# cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

2.2.10.1 Set the cluster parameters
  kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem   \
     --embed-certs=true \
     --server=https://20.20.20.88:16443 \
     --kubeconfig=/etc/kubernetes/admin.kubeconfig
2.2.10.2 Set a user
  kubectl config set-credentials kubernetes-admin \
     --client-certificate=/etc/kubernetes/pki/admin.pem \
     --embed-certs=true \
     --client-key=/etc/kubernetes/pki/admin-key.pem \
     --kubeconfig=/etc/kubernetes/admin.kubeconfig
2.2.10.3 Set the context parameters
   kubectl config set-context kubernetes-admin@kubernetes \
      --cluster=kubernetes \
      --user=kubernetes-admin \
      --kubeconfig=/etc/kubernetes/admin.kubeconfig
2.2.10.4 Set the default context
   kubectl config use-context kubernetes-admin@kubernetes \
      --kubeconfig=/etc/kubernetes/admin.kubeconfig

2.2.11 Generate the service-account key pair

[root@master1 ssl]# openssl genrsa -out /etc/kubernetes/pki/sa.key 2048

[root@master1 ssl]# openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
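To confirm that a public key really was derived from its private key, compare the RSA moduli. A sketch on a throwaway key pair (same two openssl commands as above, temp files assumed for illustration):

```shell
tmp=$(mktemp -d)
openssl genrsa -out "$tmp/sa.key" 2048 2>/dev/null
openssl rsa -in "$tmp/sa.key" -pubout -out "$tmp/sa.pub" 2>/dev/null
# Matching moduli prove the two files belong together
priv_mod=$(openssl rsa -in "$tmp/sa.key" -noout -modulus)
pub_mod=$(openssl rsa -pubin -in "$tmp/sa.pub" -noout -modulus)
[ "$priv_mod" = "$pub_mod" ] && echo "sa.key and sa.pub match"
```

The same comparison can be run against the real /etc/kubernetes/pki/sa.key and sa.pub.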

2.2.12 Copy the certificates to the other master nodes

[root@master1 ssl]# for i in master2 master3;do scp -r /etc/kubernetes/* $i:/etc/kubernetes/;done

2.3 etcd Cluster Deployment

All persistent cluster state is stored in etcd as key/value pairs. Similar to ZooKeeper, etcd provides distributed coordination services. The Kubernetes components are said to be stateless precisely because they keep all of their data in etcd. Since etcd supports clustering, it is deployed here on all three master hosts.

2.3.1 Prepare the etcd binaries

Binary package download URL:

  wget https://storage.googleapis.com/etcd/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz

[root@master1 ssl]# cd
[root@master1 ~]# tar -zxvf etcd-v3.4.13-linux-amd64.tar.gz
[root@master1 ~]# cd etcd-v3.4.13-linux-amd64

2.3.2 Copy the binaries to the target directory

[root@master1 etcd-v3.4.13-linux-amd64]# for i in master1 master2 master3;do scp etcd etcdctl $i:/usr/local/bin/;done

2.3.3 Create the etcd configuration file

[root@master1 etcd-v3.4.13-linux-amd64]# vim /etc/etcd/etcd.config.yml

  name: 'master1'
  data-dir: /var/lib/etcd
  wal-dir: /var/lib/etcd/wal
  snapshot-count: 5000
  heartbeat-interval: 100
  election-timeout: 1000
  quota-backend-bytes: 0
  listen-peer-urls: 'https://20.20.20.56:2380'
  listen-client-urls: 'https://20.20.20.56:2379,http://127.0.0.1:2379'
  max-snapshots: 3
  max-wals: 5
  cors:
  initial-advertise-peer-urls: 'https://20.20.20.56:2380'
  advertise-client-urls: 'https://20.20.20.56:2379'
  discovery:
  discovery-fallback: 'proxy'
  discovery-proxy:
  discovery-srv:
  initial-cluster: 'master1=https://20.20.20.56:2380,master2=https://20.20.20.57:2380,master3=https://20.20.20.13:2380'
  initial-cluster-token: 'etcd-k8s-cluster'
  initial-cluster-state: 'new'
  strict-reconfig-check: false
  enable-v2: true
  enable-pprof: true
  proxy: 'off'
  proxy-failure-wait: 5000
  proxy-refresh-interval: 30000
  proxy-dial-timeout: 1000
  proxy-write-timeout: 5000
  proxy-read-timeout: 0
  client-transport-security:
    cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
    key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
    client-cert-auth: true
    trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
    auto-tls: true
  peer-transport-security:
    cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
    key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
    peer-client-cert-auth: true
    trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
    auto-tls: true
  debug: false
  log-package-levels:
  log-outputs: [default]
  force-new-cluster: false
2.3.4 Notes on etcd 3.4:

In etcd 3.4, ETCDCTL_API=3 etcdctl and etcd --enable-v2=false became the defaults. To use the v2 API, set the ETCDCTL_API environment variable when invoking etcdctl, for example: ETCDCTL_API=2 etcdctl

Note: flannel talks to etcd through the v2 API, while Kubernetes uses the v3 API; for compatibility with flannel, the v2 API is enabled by default here (enable-v2: true).

2.3.5 Create the etcd systemd service

[root@master1 ~]# vim /usr/lib/systemd/system/etcd.service

  [Unit]
  Description=Etcd Service
  Documentation=https://coreos.com/etcd/docs/latest/
  After=network.target
  ​
  [Service]
  Type=notify
  ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
  Restart=on-failure
  RestartSec=10
  LimitNOFILE=65536
  ​
  [Install]
  WantedBy=multi-user.target
  Alias=etcd3.service
2.3.6 Copy the config file and service file to the other two nodes

  [root@master1 ~]# for i in master2 master3;do scp /etc/etcd/etcd.config.yml $i:/etc/etcd;scp /usr/lib/systemd/system/etcd.service $i:/usr/lib/systemd/system/;done

[root@master2 ~]# vim /etc/etcd/etcd.config.yml   #change the IPs and the hostname

  name: 'master2' #change this
  data-dir: /var/lib/etcd
  wal-dir: /var/lib/etcd/wal
  snapshot-count: 5000
  heartbeat-interval: 100
  election-timeout: 1000
  quota-backend-bytes: 0
  listen-peer-urls: 'https://20.20.20.57:2380'  #change this
  listen-client-urls: 'https://20.20.20.57:2379,http://127.0.0.1:2379' #change this
  max-snapshots: 3
  max-wals: 5
  cors:
  initial-advertise-peer-urls: 'https://20.20.20.57:2380'   #change this
  advertise-client-urls: 'https://20.20.20.57:2379'  #change this
  discovery:
  discovery-fallback: 'proxy'
  discovery-proxy:
  discovery-srv:
  initial-cluster: 'master1=https://20.20.20.56:2380,master2=https://20.20.20.57:2380,master3=https://20.20.20.13:2380'

[root@master3 ~]# vim /etc/etcd/etcd.config.yml

  name: 'master3'
  data-dir: /var/lib/etcd
  wal-dir: /var/lib/etcd/wal
  snapshot-count: 5000
  heartbeat-interval: 100
  election-timeout: 1000
  quota-backend-bytes: 0
  listen-peer-urls: 'https://20.20.20.13:2380'
  listen-client-urls: 'https://20.20.20.13:2379,http://127.0.0.1:2379'
  max-snapshots: 3
  max-wals: 5
  cors:
  initial-advertise-peer-urls: 'https://20.20.20.13:2380'
  advertise-client-urls: 'https://20.20.20.13:2379'
  discovery:
  discovery-fallback: 'proxy'
  discovery-proxy:
  discovery-srv:
  initial-cluster: 'master1=https://20.20.20.56:2380,master2=https://20.20.20.57:2380,master3=https://20.20.20.13:2380'
2.3.7 Start the etcd service on all master nodes

  [root@master1 ~]# systemctl daemon-reload
  [root@master1 ~]# systemctl start etcd
  [root@master1 ~]# systemctl enable etcd
  [root@master1 ~]# systemctl status etcd

2.3.8 Cluster verification
  etcdctl --endpoints=https://20.20.20.56:2379,https://20.20.20.57:2379,https://20.20.20.13:2379 \
     --cacert=/etc/etcd/ssl/etcd-ca.pem \
     --cert=/etc/etcd/ssl/etcd.pem \
     --key=/etc/etcd/ssl/etcd-key.pem endpoint status --write-out=table

Note: adjust the cluster so that master1 is the leader.

  etcdctl --endpoints=https://20.20.20.56:2379,https://20.20.20.57:2379,https://20.20.20.13:2379 \
     --cacert=/etc/etcd/ssl/etcd-ca.pem \
     --cert=/etc/etcd/ssl/etcd.pem \
     --key=/etc/etcd/ssl/etcd-key.pem endpoint health
  etcdctl --endpoints=https://20.20.20.56:2379,https://20.20.20.57:2379,https://20.20.20.13:2379 \
     --cacert=/etc/etcd/ssl/etcd-ca.pem \
     --cert=/etc/etcd/ssl/etcd.pem \
     --key=/etc/etcd/ssl/etcd-key.pem member list

2.4 Master Node Deployment

2.4.1 Deploy the apiserver
2.4.1.1 Create the kube-apiserver systemd service

[root@master1 ~]# vim /usr/lib/systemd/system/kube-apiserver.service

  [Unit]
  Description=Kubernetes API Server
  Documentation=https://github.com/kubernetes/kubernetes
  After=network.target
  ​
  [Service]
  ExecStart=/usr/local/bin/kube-apiserver \
        --v=2  \
        --logtostderr=true  \
        --allow-privileged=true  \
        --bind-address=0.0.0.0  \
        --secure-port=6443  \
        --insecure-port=0  \
        --advertise-address=20.20.20.56 \
        --service-cluster-ip-range=10.96.0.0/16  \
        --service-node-port-range=30000-32767  \
        --etcd-servers=https://20.20.20.56:2379,https://20.20.20.57:2379,https://20.20.20.13:2379 \
        --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
        --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
        --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
        --client-ca-file=/etc/kubernetes/pki/ca.pem  \
        --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
        --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
        --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
        --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
        --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
        --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
        --service-account-issuer=https://kubernetes.default.svc.cluster.local \
        --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
        --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
        --authorization-mode=Node,RBAC  \
        --enable-bootstrap-token-auth=true  \
        --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
        --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
        --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
        --requestheader-allowed-names=aggregator  \
        --requestheader-group-headers=X-Remote-Group  \
        --requestheader-extra-headers-prefix=X-Remote-Extra-  \
        --requestheader-username-headers=X-Remote-User
        # --token-auth-file=/etc/kubernetes/token.csv
  ​
  Restart=on-failure
  RestartSec=10s
  LimitNOFILE=65535
  ​
  [Install]
  WantedBy=multi-user.target
2.4.1.2 Copy the file to master2 and master3

[root@master1 ~]# for i in master2 master3;do scp /usr/lib/systemd/system/kube-apiserver.service $i:/usr/lib/systemd/system/; done

[root@master2 ~]# vim /usr/lib/systemd/system/kube-apiserver.service

--advertise-address=20.20.20.57 \


[root@master3 ~]# vim /usr/lib/systemd/system/kube-apiserver.service

--advertise-address=20.20.20.13 \

2.4.1.3 Start the apiserver

  [root@master1 ~]# systemctl daemon-reload
  [root@master1 ~]# systemctl enable --now kube-apiserver
  [root@master1 ~]# systemctl status kube-apiserver

2.4.1.4 Access the API endpoints via URL
  curl --cacert /etc/kubernetes/pki/ca.pem --cert /etc/kubernetes/pki/admin.pem --key /etc/kubernetes/pki/admin-key.pem https://20.20.20.56:6443/
  curl --cacert /etc/kubernetes/pki/ca.pem --cert /etc/kubernetes/pki/admin.pem --key /etc/kubernetes/pki/admin-key.pem https://20.20.20.57:6443/
  curl --cacert /etc/kubernetes/pki/ca.pem --cert /etc/kubernetes/pki/admin.pem --key /etc/kubernetes/pki/admin-key.pem https://20.20.20.13:6443/
  curl --cacert /etc/kubernetes/pki/ca.pem --cert /etc/kubernetes/pki/admin.pem --key /etc/kubernetes/pki/admin-key.pem https://20.20.20.88:6443/
2.4.2 Deploy the Controller Manager
2.4.2.1 Create the service file

[root@master1 ~]# vim /usr/lib/systemd/system/kube-controller-manager.service

  [Unit]
  Description=Kubernetes Controller Manager
  Documentation=https://github.com/kubernetes/kubernetes
  After=network.target
  ​
  [Service]
  ExecStart=/usr/local/bin/kube-controller-manager \
        --v=2 \
        --logtostderr=true \
        --bind-address=127.0.0.1 \
        --root-ca-file=/etc/kubernetes/pki/ca.pem \
        --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
        --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
        --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
        --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
        --leader-elect=true \
        --use-service-account-credentials=true \
        --node-monitor-grace-period=40s \
        --node-monitor-period=5s \
        --pod-eviction-timeout=2m0s \
        --controllers=*,bootstrapsigner,tokencleaner \
        --allocate-node-cidrs=true \
        --cluster-cidr=172.168.0.0/16 \
        --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
        --node-cidr-mask-size=24
        
  Restart=always
  RestartSec=10s
  ​
  [Install]
  WantedBy=multi-user.target
2.4.2.2 Copy the file to the other master nodes

[root@master1 ~]# for i in master2 master3;do scp /usr/lib/systemd/system/kube-controller-manager.service $i:/usr/lib/systemd/system/;done

2.4.2.3 Start the service

  [root@master1 ~]# systemctl daemon-reload
  [root@master1 ~]# systemctl enable --now kube-controller-manager
  [root@master1 ~]# systemctl status kube-controller-manager

2.4.3 Deploy the scheduler
2.4.3.1 Create the service file

[root@master1 ~]# vim /usr/lib/systemd/system/kube-scheduler.service

  [Unit]
  Description=Kubernetes Scheduler
  Documentation=https://github.com/kubernetes/kubernetes
  After=network.target
  ​
  [Service]
  ExecStart=/usr/local/bin/kube-scheduler \
        --v=2 \
        --logtostderr=true \
        --address=127.0.0.1 \
        --leader-elect=true \
        --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
  ​
  ​
  Restart=always
  RestartSec=10s
  ​
  [Install]
  WantedBy=multi-user.target
2.4.3.2 Copy to the other master nodes

[root@master1 ~]# for i in master2 master3;do scp /usr/lib/systemd/system/kube-scheduler.service $i:/usr/lib/systemd/system/;done

2.4.3.3 Start the service

  [root@master1 ~]# systemctl daemon-reload
  [root@master1 ~]# systemctl enable --now kube-scheduler
  [root@master1 ~]# systemctl status kube-scheduler

2.4.4 Create the bootstrap configuration

When installing Kubernetes, a certificate must be generated for the kubelet on every worker node. Because there can be many worker nodes, generating kubelet certificates by hand is tedious.

To solve this, Kubernetes provides TLS bootstrapping to simplify kubelet certificate generation. The idea is to provision a bootstrap token in advance; the kubelet uses that token to call kube-apiserver's certificate-signing API and obtain the certificates it needs.

2.4.4.1 The TLS bootstrapping certificate-issuance flow is as follows

1. Call kube-apiserver to generate a bootstrap token.

2. Write the bootstrap token into a kubeconfig file, to serve as the kubelet's client credential when calling kube-apiserver.

3. Pass the bootstrap token to the kubelet process via the --bootstrap-kubeconfig startup parameter.

4. The kubelet uses the bootstrap token to call the kube-apiserver API and generates the server and client certificates it needs.

5. Once the certificates are generated, the kubelet communicates with kube-apiserver using them and deletes the local bootstrap kubeconfig file, to avoid leaking the bootstrap token.
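The bootstrap token has the form `<6-char id>.<16-char secret>` over the `[a-z0-9]` alphabet (e.g. the `c8ad9c.2e4d610cf3e7426e` used below). A sketch for generating a fresh one, using hex output from `openssl rand`, which stays within the allowed alphabet:

```shell
token_id=$(openssl rand -hex 3)       # 6 hex characters
token_secret=$(openssl rand -hex 8)   # 16 hex characters
echo "${token_id}.${token_secret}"
```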

2.4.4.2 Set the cluster parameters
  kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem  \
     --embed-certs=true \
     --server=https://20.20.20.88:16443 \
     --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
2.4.4.3 Set a user
  kubectl config set-credentials tls-bootstrap-token-user --token=c8ad9c.2e4d610cf3e7426e --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
2.4.4.4 Set the context parameters
  kubectl config set-context tls-bootstrap-token-user@kubernetes --cluster=kubernetes --user=tls-bootstrap-token-user --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
2.4.4.5 Set the default context
  kubectl config use-context tls-bootstrap-token-user@kubernetes     --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
2.4.4.6 Configure kubectl (management nodes)

[root@master1 ~]# mkdir -p /root/.kube;cp /etc/kubernetes/admin.kubeconfig /root/.kube/config

[root@master2 ~]# mkdir -p /root/.kube;cp /etc/kubernetes/admin.kubeconfig /root/.kube/config

[root@master3 ~]# mkdir -p /root/.kube;cp /etc/kubernetes/admin.kubeconfig /root/.kube/config

  [root@master1 ~]# for i in master2 master3;do scp /root/.kube/config $i:/root/.kube/;done
  config                                  100% 6384   1.2MB/s   00:00
  config                                  100% 6384   5.3MB/s   00:00
  [root@master1 ~]# kubectl get cs

Management node configuration

[root@master1 ~]# yum install -y bash-completion

  [root@master1 ~]# source /usr/share/bash-completion/bash_completion
  [root@master1 ~]# source <(kubectl completion bash)
  [root@master1 ~]# kubectl completion bash > ~/.kube/completion.bash.inc
  [root@master1 ~]# source '/root/.kube/completion.bash.inc'
  [root@master1 ~]# source $HOME/.bash_profile
  [root@master1 ~]# vim bootstrap.secret.yaml
  [root@master1 ~]# kubectl create -f bootstrap.secret.yaml

[root@master1 ~]# kubectl apply -f bootstrap.secret.yaml

2.5 Worker Node Deployment

Once the master's apiserver has TLS authentication enabled, a node's kubelet must present a valid CA-signed certificate to communicate with the apiserver before the node can join the cluster. Signing certificates by hand becomes very tedious with many nodes, hence the TLS bootstrapping mechanism: the kubelet automatically requests a certificate from the apiserver as a low-privileged user, and the kubelet's certificate is signed dynamically by the apiserver.

2.5.1 Copy the certificate files
  [root@master1 ~]# for i in node1 node2;do ssh $i mkdir -p /etc/etcd/ssl /etc/kubernetes/pki/;scp /etc/etcd/ssl/{etcd-ca.pem,etcd.pem,etcd-key.pem} $i:/etc/etcd/ssl;scp /etc/kubernetes/pki/{ca.pem,ca-key.pem,front-proxy-ca.pem} $i:/etc/kubernetes/pki/;done
  [root@master1 ~]# for i in master2 master3 node1 node2;do scp /etc/kubernetes/bootstrap-kubelet.kubeconfig $i:/etc/kubernetes/ ;done
2.5.2 Deploy the kubelet (all nodes)
2.5.2.1 Create directories on all nodes

[root@master1 ~]# mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/

2.5.2.2 Create the kubelet systemd service

[root@node2 ~]# vim /usr/lib/systemd/system/kubelet.service

  [Unit]
  Description=Kubernetes Kubelet
  After=docker.service
  Requires=docker.service
  ​
  [Service]
  ExecStart=/usr/local/bin/kubelet
  Restart=on-failure
  KillMode=process
  ​
  [Install]
  WantedBy=multi-user.target

[root@node2 ~]# vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

  [Service]
  Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
  Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
  Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1"
  Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
  ExecStart=
  ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS

[root@node2 ~]# vim /etc/kubernetes/kubelet-conf.yml

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  address: 0.0.0.0
  port: 10250
  readOnlyPort: 10255
  authentication:
    anonymous:
      enabled: false
    webhook:
      cacheTTL: 2m0s
      enabled: true
    x509:
      clientCAFile: /etc/kubernetes/pki/ca.pem
  authorization:
    mode: Webhook
    webhook:
      cacheAuthorizedTTL: 5m0s
      cacheUnauthorizedTTL: 30s
  cgroupDriver: systemd
  cgroupsPerQOS: true
  clusterDNS:
  - 10.96.0.10
  clusterDomain: cluster.local
  containerLogMaxFiles: 5
  containerLogMaxSize: 10Mi
  contentType: application/vnd.kubernetes.protobuf
  cpuCFSQuota: true
  cpuManagerPolicy: none
  cpuManagerReconcilePeriod: 10s
  enableControllerAttachDetach: true
  enableDebuggingHandlers: true
  enforceNodeAllocatable:
  - pods
  eventBurst: 10
  eventRecordQPS: 5
  evictionHard:
    imagefs.available: 15%
    memory.available: 100Mi
    nodefs.available: 10%
    nodefs.inodesFree: 5%
  evictionPressureTransitionPeriod: 5m0s
  failSwapOn: true
  fileCheckFrequency: 20s
  hairpinMode: promiscuous-bridge
  healthzBindAddress: 127.0.0.1
  healthzPort: 10248
  httpCheckFrequency: 20s
  imageGCHighThresholdPercent: 85
  imageGCLowThresholdPercent: 80
  imageMinimumGCAge: 2m0s
  iptablesDropBit: 15
  iptablesMasqueradeBit: 14
  kubeAPIBurst: 10
  kubeAPIQPS: 5
  makeIPTablesUtilChains: true
  maxOpenFiles: 1000000
  maxPods: 110
  nodeStatusUpdateFrequency: 10s
  oomScoreAdj: -999
  podPidsLimit: -1
  registryBurst: 10
  registryPullQPS: 5
  resolvConf: /etc/resolv.conf
  rotateCertificates: true
  runtimeRequestTimeout: 2m0s
  serializeImagePulls: true
  staticPodPath: /etc/kubernetes/manifests
  streamingConnectionIdleTimeout: 4h0m0s
  syncFrequency: 1m0s
  volumeStatsAggPeriod: 1m0s
2.5.2.3 Start the kubelet

  [root@master1 ~]# systemctl daemon-reload
  [root@master1 ~]# systemctl enable --now kubelet
  [root@master1 ~]# systemctl status kubelet

[root@master1 ~]# kubectl get node

[root@master1 ~]# kubectl get csr

2.5.2.4 Inspect the certificate information
  [root@master1 ~]# openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem
  Certificate:
      Data:
          Version: 3 (0x2)
          Serial Number:
              6f:b4:54:27:54:1a:a5:ed:3c:e7:49:d0:fc:cd:ca:9d
      Signature Algorithm: sha256WithRSAEncryption
          Issuer: C=CN, ST=Beijing, L=Beijing, O=Kubernetes, OU=Kubernetes-manual, CN=kubernetes
          Validity
              Not Before: Aug 20 11:11:44 2025 GMT
              Not After : Aug 20 11:11:44 2026 GMT
          Subject: O=system:nodes, CN=system:node:master1
          Subject Public Key Info:
              Public Key Algorithm: id-ecPublicKey
                  Public-Key: (256 bit)
                  pub: 
                      04:be:66:11:3a:b7:f9:a3:7a:d4:a3:75:30:b7:9d:
                      47:a9:10:5c:1c:89:6d:f8:3d:d2:42:2c:3c:e3:35:
                      c1:65:6b:68:75:5c:ab:f3:75:f3:77:ec:33:23:3e:
                      e5:71:84:3a:d3:86:09:61:63:3d:e4:b7:3b:3f:ef:
                      57:24:95:09:6b
                  ASN1 OID: prime256v1
                  NIST CURVE: P-256
          X509v3 extensions:
              X509v3 Key Usage: critical
                  Digital Signature, Key Encipherment
              X509v3 Extended Key Usage: 
                  TLS Web Client Authentication
              X509v3 Basic Constraints: critical
                  CA:FALSE
              X509v3 Authority Key Identifier: 
                  keyid:3C:83:60:4E:36:FB:2D:FC:46:8B:85:59:99:CE:A2:10:97:60:39:20
      Signature Algorithm: sha256WithRSAEncryption
           2d:21:30:5b:8a:45:ae:c5:d7:4f:4a:ba:fc:d8:30:8a:bf:0e:
           b9:d0:d5:44:c1:77:17:a1:d6:7c:a0:d2:76:ba:fc:ed:43:91:
           80:f5:ba:4c:c6:7d:13:cf:bc:13:1a:8f:98:7b:b9:af:fd:f0:
           03:99:68:b3:a9:16:bc:04:4e:37:39:86:4e:7d:e0:6b:c0:50:
           0f:8d:e1:57:ce:55:09:e1:cf:6c:ab:ec:c5:0f:65:fe:b4:1f:
           f5:21:ae:a4:ef:2c:75:e6:8e:e0:84:ae:bb:07:52:e5:54:9a:
           ff:7b:6c:94:75:3b:ae:ae:59:c1:84:fe:2d:dc:9f:d7:2a:06:
           09:d3:3f:57:e7:68:48:25:cf:e7:15:4c:7c:15:0b:91:2e:e5:
           7a:f5:c4:5c:3a:90:3b:de:51:36:cb:8b:3d:1a:47:68:08:43:
           d2:84:c0:7f:ce:79:9d:e1:f3:4f:f8:4f:76:e2:db:28:f3:b0:
           30:1c:9c:a5:9c:4f:56:a8:1e:15:a1:b2:8b:4b:23:47:bc:e1:
           ce:31:b1:17:2f:37:d1:0d:e3:2e:51:c8:81:31:9b:4f:58:31:
           e6:0d:d5:2f:df:22:e3:3e:b7:53:d3:3f:d2:6c:aa:7f:f8:43:
           f5:53:a0:26:8a:0b:13:28:23:68:c1:f4:a8:5f:b1:6a:f5:eb:
           7e:08:fc:42
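The Validity dates above can also be checked non-interactively: `openssl x509 -checkend N` exits non-zero once a certificate is within N seconds of expiring. A minimal sketch, using a throwaway self-signed certificate so it runs anywhere; on a node, point `CERT` at `/var/lib/kubelet/pki/kubelet-client-current.pem` instead:

```shell
# Throwaway self-signed cert for illustration only; on a real node set
# CERT=/var/lib/kubelet/pki/kubelet-client-current.pem instead.
CERT=$(mktemp)
openssl req -x509 -newkey rsa:2048 -keyout /dev/null -nodes \
  -subj "/CN=demo" -days 365 -out "$CERT" 2>/dev/null
openssl x509 -noout -enddate -in "$CERT"   # prints the notAfter date
# -checkend N exits 0 while the cert is still valid N seconds from now
if openssl x509 -checkend $((30*24*3600)) -noout -in "$CERT" >/dev/null; then
  STATUS=ok
else
  STATUS=expiring
fi
echo "$STATUS"
```

This is handy in a cron job to warn before the one-year kubelet client certificate runs out, in case automatic rotation fails.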
2.5.4、 Deploy kube-proxy

  [root@master1 ~]# kubectl -n kube-system create serviceaccount kube-proxy
  serviceaccount/kube-proxy created
  [root@master1 ~]# kubectl create clusterrolebinding system:kube-proxy --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy
  clusterrolebinding.rbac.authorization.k8s.io/system:kube-proxy created
  [root@master1 ~]# SECRET=$(kubectl -n kube-system get sa/kube-proxy --output=jsonpath='{.secrets[0].name}')
  [root@master1 ~]# JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET --output=jsonpath='{.data.token}' | base64 -d)
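The `JWT_TOKEN` line works because the API server stores secret data base64-encoded; `--output=jsonpath` extracts the raw field and `base64 -d` restores it. A stand-in demonstration of that decode step, runnable without a cluster (`DUMMY_TOKEN` is a made-up value):

```shell
# DUMMY_TOKEN stands in for the real service-account token held in the secret
DUMMY_TOKEN="eyJhbGciOiJSUzI1NiJ9.demo.payload"
# the API server stores the token field base64-encoded, like this:
ENCODED=$(printf '%s' "$DUMMY_TOKEN" | base64 | tr -d '\n')
# the pipeline in the step above reverses it:
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo "$DECODED"
```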

2.5.4.2.1、 Set the cluster parameters

  kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem  \
     --embed-certs=true \
     --server=https://20.20.20.88:16443 \
     --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
2.5.4.2.2、 Set a user
  [root@master1 ~]# kubectl config set-credentials kubernetes --token=${JWT_TOKEN} --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
2.5.4.2.3、 Set the context parameters
  [root@master1 ~]# kubectl config set-context kubernetes --cluster=kubernetes --user=kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
2.5.4.2.4、 Set the default context
  [root@master1 ~]# kubectl config use-context kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
2.5.4.3、 Create the kube-proxy configuration file

[root@master1 ~]# vim /etc/kubernetes/kube-proxy.conf

  apiVersion: kubeproxy.config.k8s.io/v1alpha1
  bindAddress: 0.0.0.0
  clientConnection:
    acceptContentTypes: ""
    burst: 10
    contentType: application/vnd.kubernetes.protobuf
    kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
    qps: 5
  clusterCIDR: 172.168.0.0/16
  configSyncPeriod: 15m0s
  conntrack:
    max: null
    maxPerCore: 32768
    min: 131072
    tcpCloseWaitTimeout: 1h0m0s
    tcpEstablishedTimeout: 24h0m0s
  enableProfiling: false
  healthzBindAddress: 0.0.0.0:10256
  hostnameOverride: ""
  iptables:
    masqueradeAll: false
    masqueradeBit: 14
    minSyncPeriod: 0s
    syncPeriod: 30s
  ipvs:
    masqueradeAll: true
    minSyncPeriod: 5s
    scheduler: "rr"
    syncPeriod: 30s
  kind: KubeProxyConfiguration
  metricsBindAddress: 127.0.0.1:10249
  mode: "ipvs"
  nodePortAddresses: null
  oomScoreAdj: -999
  portRange: ""
  udpIdleTimeout: 250ms
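Note that `clusterCIDR` above must equal the pod subnet handed to Calico later (`172.168.0.0/16`); if they differ, kube-proxy cannot tell pod traffic from external traffic. A small consistency check, sketched here against a stand-in file (on a node, point `CONF` at `/etc/kubernetes/kube-proxy.conf`):

```shell
# Stand-in for /etc/kubernetes/kube-proxy.conf; on a node use the real path.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
clusterCIDR: 172.168.0.0/16
mode: "ipvs"
EOF
POD_SUBNET="172.168.0.0/16"   # must equal the CALICO_IPV4POOL_CIDR set later
PROXY_CIDR=$(awk '/^clusterCIDR:/ {print $2}' "$CONF")
if [ "$PROXY_CIDR" = "$POD_SUBNET" ]; then
  echo "CIDRs match"
else
  echo "MISMATCH: kube-proxy=$PROXY_CIDR calico=$POD_SUBNET" >&2
fi
```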

[root@master1 ~]# vim /usr/lib/systemd/system/kube-proxy.service

  [Unit]
  Description=Kubernetes Kube Proxy
  Documentation=https://github.com/kubernetes/kubernetes
  After=network.target
  ​
  [Service]
  ExecStart=/usr/local/bin/kube-proxy \
    --config=/etc/kubernetes/kube-proxy.conf \
    --v=2
  ​
  Restart=always
  RestartSec=10s
  ​
  [Install]
  WantedBy=multi-user.target
2.5.4.4、 Distribute the kube-proxy configuration files
  for i in master1 master2 master3 node1 node2 ;do scp /etc/kubernetes/kube-proxy.kubeconfig $i:/etc/kubernetes/;scp /etc/kubernetes/kube-proxy.conf $i:/etc/kubernetes/kube-proxy.conf;scp /usr/lib/systemd/system/kube-proxy.service $i:/usr/lib/systemd/system/kube-proxy.service;done
2.5.4.5、 Start kube-proxy (all nodes)

  [root@master1 ~]# systemctl daemon-reload
  [root@master1 ~]# systemctl enable --now kube-proxy
  [root@master1 ~]# systemctl status kube-proxy

2.6 Install the cluster network with Calico

2.6.1 Official yaml file download address:

[root@master1 ~]# wget https://docs.projectcalico.org/manifests/calico-etcd.yaml

2.6.2 Commands to modify calico-etcd.yaml:
   sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://20.20.20.56:2379,https://20.20.20.57:2379,https://20.20.20.13:2379"#g' calico-etcd.yaml   
  sed -i \
    -e 's|docker.io/calico|registry.aliyuncs.com/google_containers/calico|g' \
    -e 's|imagePullPolicy: Always|imagePullPolicy: IfNotPresent|g' \
    calico-etcd.yaml
  sed -ri 's|/calico-secrets/etcd-ca|/etc/etcd/ssl/etcd-ca.pem|'  calico-etcd.yaml
  sed -ri 's|/calico-secrets/etcd-cert|/etc/etcd/ssl/etcd.pem|'   calico-etcd.yaml
  sed -ri 's|/calico-secrets/etcd-key|/etc/etcd/ssl/etcd-key.pem|' calico-etcd.yaml
  ​
  ​
  ETCD_CA=`cat /etc/kubernetes/pki/etcd/etcd-ca.pem | base64 | tr -d '\n'`
  ETCD_CERT=`cat /etc/kubernetes/pki/etcd/etcd.pem | base64 | tr -d '\n'`
  ETCD_KEY=`cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 | tr -d '\n'`

  sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml

  # point the manifest at the mounted etcd certificates
  sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml

  # set the pod CIDR; it must match the clusterCIDR configured for kube-proxy above
  POD_SUBNET="172.168.0.0/16"

  sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml
  sed -i 's/command: \["calico-node", "-init", "-best-effort"\]/command: \["calico-node", "-init"\]/' calico-etcd.yaml
  # actual operations
  [root@master1 ~]#  sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://20.20.20.56:2379,https://20.20.20.57:2379,https://20.20.20.13:2379"#g' calico-etcd.yaml   
  [root@master1 ~]# ETCD_CA=`cat /etc/kubernetes/pki/etcd/etcd-ca.pem | base64 | tr -d '\n'`
  [root@master1 ~]# ETCD_CERT=`cat /etc/kubernetes/pki/etcd/etcd.pem | base64 | tr -d '\n'`
  [root@master1 ~]# ETCD_KEY=`cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 | tr -d '\n'`
  [root@master1 ~]# sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml
  [root@master1 ~]# sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml
  [root@master1 ~]# POD_SUBNET="172.168.0.0/16"
  [root@master1 ~]# sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml 
  [root@master1 ~]# sed -i 's/command: \["calico-node", "-init", "-best-effort"\]/command: \["calico-node", "-init"\]/' calico-etcd.yaml
  [root@master1 ~]# sed -i \
  >   -e 's|docker.io/calico|registry.aliyuncs.com/google_containers/calico|g' \
  >   -e 's|imagePullPolicy: Always|imagePullPolicy: IfNotPresent|g' \
  >   calico-etcd.yaml
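After running the substitutions it is worth grepping `calico-etcd.yaml` to confirm they took effect. The sketch below replays the two key rewrites against a two-line stand-in file, so the behaviour can be checked without the real manifest:

```shell
# Two-line stand-in for calico-etcd.yaml holding the strings the seds rewrite.
SAMPLE=$(mktemp)
cat > "$SAMPLE" <<'EOF'
etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"
image: docker.io/calico/node:v3.25.0
EOF
# same substitutions as above, applied to the sample
sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://20.20.20.56:2379,https://20.20.20.57:2379,https://20.20.20.13:2379"#g' "$SAMPLE"
sed -i 's|docker.io/calico|registry.aliyuncs.com/google_containers/calico|g' "$SAMPLE"
# verify both rewrites landed
ENDPOINTS=$(grep -c '20.20.20.56:2379' "$SAMPLE")
IMAGE=$(grep -o 'registry.aliyuncs.com/google_containers/calico/node' "$SAMPLE")
echo "$ENDPOINTS $IMAGE"
```

Run the same two greps against the real `calico-etcd.yaml` on master1 before applying it.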

Manually pull the images

  # temporarily remove the not-ready taint from all nodes
  [root@master1 calico]# for node in $(kubectl get nodes -o name); do
  >   kubectl taint node $node node.kubernetes.io/not-ready:NoSchedule-
  > done
  node/master1 untainted
  node/master2 untainted
  node/master3 untainted
  node/node1 untainted
  node/node2 untainted
  ​
  docker pull docker.m.daocloud.io/calico/cni:v3.25.0
  docker pull docker.m.daocloud.io/calico/node:v3.25.0
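Note that the manifest was rewritten above to reference `registry.aliyuncs.com/google_containers/calico/*`, while these pulls come from `docker.m.daocloud.io`. One way to bridge the gap is to retag the pulled images on each node; the sketch below only prints the `docker tag` commands for review (pipe the output to `sh` to apply them):

```shell
# Map each pulled daocloud image to the name the rewritten manifest expects.
# Printed rather than executed, so it can be reviewed first.
for img in cni:v3.25.0 node:v3.25.0; do
  src="docker.m.daocloud.io/calico/${img}"
  dst="registry.aliyuncs.com/google_containers/calico/${img}"
  echo docker tag "$src" "$dst"
done
```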