Kubernetes - CentOS

htmltoo · 2024-02-15

Known good: 1.27.2

Tested here: 1.29.2

https://github.com/kubernetes/kubernetes/releases

kubectl version

Client is the version of the local kubectl binary; Server is the Kubernetes version running on the master node.
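
If you need the versions in a script, both can be read as JSON (a quick sketch; the grep only trims the output):

kubectl version -o json | grep gitVersion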

1 - Install the kube components on every node

 cat >>/etc/default/kubelet<<EOF
 KUBELET_EXTRA_ARGS=--cgroup-driver=systemd
 EOF
 cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
 [kubernetes]
 name=Kubernetes
 baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
 enabled=1
 gpgcheck=1
 gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
 exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
 EOF
 yum remove -y kubelet kubeadm kubectl
 rm -rf /usr/local/bin/kubectl /usr/local/bin/kubelet  /usr/local/bin/kubeadm
 yum install -y kubectl-1.29.2 kubelet-1.29.2 kubeadm-1.29.2 --disableexcludes=kubernetes
 systemctl enable --now kubelet
curl -LO "https://dl.k8s.io/release/v1.29.2/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/release/v1.29.2/bin/linux/amd64/kubelet"
curl -LO "https://dl.k8s.io/release/v1.29.2/bin/linux/amd64/kubeadm"
chmod +x  ./kubectl   ./kubelet    ./kubeadm
cp ./kubectl /usr/local/bin/kubectl
cp ./kubelet /usr/local/bin/kubelet
cp ./kubeadm /usr/local/bin/kubeadm
- Check that the installation succeeded
kubelet --version
kubectl version
kubeadm version

2 - Initialize the cluster

2.1 - Pre-pulling the images avoids slow downloads during the Kubernetes install

kubeadm config images pull

- Pull from a mirror in China

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.29.2 
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.29.2 
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.29.2 
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.29.2 
docker pull coredns/coredns:1.11.1 
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9 
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.10-0 
-
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.29.2  registry.k8s.io/kube-apiserver:v1.29.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.29.2  registry.k8s.io/kube-controller-manager:v1.29.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.29.2  registry.k8s.io/kube-scheduler:v1.29.2 
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.29.2 registry.k8s.io/kube-proxy:v1.29.2 
docker tag coredns/coredns:1.11.1 registry.k8s.io/coredns/coredns:v1.11.1 
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9  registry.k8s.io/pause:3.9
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.10-0 registry.k8s.io/etcd:3.5.10-0
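
The pull-and-tag pairs above can be collapsed into one loop (a sketch using the same images and versions; coredns is handled separately because its path differs on both registries):

MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.29.2 kube-controller-manager:v1.29.2 kube-scheduler:v1.29.2 kube-proxy:v1.29.2 pause:3.9 etcd:3.5.10-0; do
  docker pull ${MIRROR}/${img}
  docker tag ${MIRROR}/${img} registry.k8s.io/${img}
done
docker pull coredns/coredns:1.11.1
docker tag coredns/coredns:1.11.1 registry.k8s.io/coredns/coredns:v1.11.1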

- Use the Aliyun image repository; set the pod network CIDR to 10.244.0.0/16, the service CIDR to 10.96.0.0/12, and the Kubernetes version to 1.29.2. If you run cri-dockerd, also pass --cri-socket unix:///var/run/cri-dockerd.sock.

kubeadm init --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers  --kubernetes-version=v1.29.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --apiserver-advertise-address=192.168.1.101 --ignore-preflight-errors=all

- Output like the following indicates success

 Your Kubernetes control-plane has initialized successfully!
 To start using your cluster, you need to run the following as a regular user:
   mkdir -p $HOME/.kube
   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
   sudo chown $(id -u):$(id -g) $HOME/.kube/config
 Then you can join any number of worker nodes by running the following on each as root:
 kubeadm join 10.11.81.152:6443 --token abcdef.1234567890abcdef \
     --discovery-token-ca-cert-hash sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef

2.2 - If kubelet fails to start, inspect its startup files

cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
cat /var/lib/kubelet/kubeadm-flags.env
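
The kubelet logs usually contain the real error; check them first:

systemctl status kubelet
journalctl -xeu kubelet | tail -n 50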

2.3 - Reset commands when initialization fails

kubeadm reset -f

rm -fr ~/.kube/ /etc/kubernetes/* /var/lib/etcd/*

systemctl restart containerd

2.4 - Configure kubectl access

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes

3 - Install the Kubernetes workers

- Run on every worker node

kubeadm join 192.168.63.61:6443 --token byjaat.knb8kma4j3zof9qf --discovery-token-ca-cert-hash sha256:920c7aee5791e6b6b846d78d59953d609ff02fdcebc00bb644fe1696a97d5011

- If the hash has expired or been lost, generate a fresh join command on the master

kubeadm token create --print-join-command
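
To see which tokens are still valid:

kubeadm token list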

4 - Install the network plugin

4.1 - Configure networking with Calico

https://github.com/projectcalico/calico/releases

wget https://github.com/projectcalico/calico/releases/download/v3.27.0/release-v3.27.0.tgz
- Run on every node; each node's DNS must not be 127.0.0.1
tar -xzvf release-v3.27.0.tgz
cd release-v3.27.0/manifests/
 
vim calico.yaml
# Set CALICO_IPV4POOL_CIDR to the same value as the kubeadm init pod-network-cidr
## Uncomment these lines
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
...
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
  value: "k8s,bgp"
# Add below
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens18"

kubectl apply -f calico.yaml

4.2 - Set the kube-proxy mode to ipvs

- Edit config.conf in the kube-system/kube-proxy ConfigMap:

- change mode: "" to mode: "ipvs", then save and exit

kubectl edit cm kube-proxy -n kube-system

4.3 - Restart kube-proxy

kubectl get pod -n kube-system |grep kube-proxy |awk '{system("kubectl delete pod "$1" -n kube-system")}'
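
Once the new kube-proxy pods are running, confirm that ipvs took effect (assumes the ipvsadm tool is installed on the node):

kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i ipvs
ipvsadm -Ln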

- If flannel stays stuck in CrashLoopBackOff

vim /etc/kubernetes/manifests/kube-controller-manager.yaml

  - --allocate-node-cidrs=true
  - --cluster-cidr=10.244.0.0/16

systemctl restart kubelet

-- Run on the master; node status after joining the cluster

kubectl get nodes

-- Run on the master

kubectl get pod -n kube-system -o wide

5 - Dashboard visual management

https://github.com/kubernetes/dashboard/tags

- Newer Dashboard releases ship as a Helm chart (see charts/ in the repo above); for a plain-manifest install, use the v2.7.0 recommended.yaml:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

mv recommended.yaml dashboard.yaml

kubectl apply -f dashboard.yaml

- Expose the dashboard on a fixed node port

- Change type: ClusterIP to type: NodePort

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

- Find the port and allow it in your security group

kubectl get svc -A |grep kubernetes-dashboard

kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.104.200.118   <none>        8000/TCP                 124m
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.105.122.160   <none>        443:32483/TCP            124m

https://192.168.1.101:32483
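
A quick reachability check from any host allowed by the security group (the certificate is self-signed, hence -k):

curl -k https://192.168.1.101:32483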

- Create an access account

vi dash.yaml

 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: admin-user
   namespace: kubernetes-dashboard
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: admin-user
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: cluster-admin
 subjects:
 - kind: ServiceAccount
   name: admin-user
   namespace: kubernetes-dashboard
kubectl apply -f dash.yaml

- Create the access token
kubectl -n kubernetes-dashboard create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6IjZCMFh3WUlkWUFmcU1zM0lqUUtmTTR6TVJzSGp3dklfMWhiX3ZlSkhINU0ifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzA4MDA4MTc0LCJpYXQiOjE3MDgwMDQ1NzQsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiN2JmNWFkNjAtMDFkOC00YTkxLTlmZTgtNzg3NzRiMDcxZDE3In19LCJuYmYiOjE3MDgwMDQ1NzQsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.Vn9_gQW38LUtvjVGadTJvnk2cPu1WCW6IvlWen9S-VhrvWQ0I-yQUl8qX7aBG80CrlrBleoP73nV28rBPsujLs8RCqRZHQ6rZycjT2rpzchFBqJ0BVFfqJSq0_zSgMU5bSPj8sPJoYOcSSHfkp5rHjzrRPM7w474XFnpfP11A7jE8so2lX382vb9jjw-bU-7Gik_q5IIKHZnQqfGxEOHVCXUydgvK5iAXy_VRF9F8JC1KCQn3h72t2CoVy9HPVWgjipGPZ_80q4nlN4ZoOTaKFhvcPG0hjW3sZbB5CYb3_5ktUGGi4Oqduilfz412Aa5Y0bvz17dEXe36GTjJZ4KQQ

- Clean up
kubectl -n kubernetes-dashboard delete serviceaccount admin-user
kubectl -n kubernetes-dashboard delete clusterrolebinding admin-user

- Create a user
kubectl create serviceaccount dashboard-admin -n kube-system

- Grant the user cluster-admin
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

- Get the user's token
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

- When the authentication token expires, you must log in again

kubectl edit deployment kubernetes-dashboard -n kubernetes-dashboard

- The lifetime is given in seconds; e.g. 12 hours:

- --token-ttl=43200

- To make logins never expire:

- --token-ttl=0

    spec:
      containers:
      - args:
        - --auto-generate-certificates
        - --token-ttl=0

6 - Cluster maintenance

On the master, check the node list and identify the nodes to delete; here the nodes went abnormal because the cluster IPs were changed.

[root@k8s-master ~]# kubectl get nodes

- Delete the nodes.

[root@k8s-master ~]# kubectl delete nodes k8s-node1

[root@k8s-master ~]# kubectl delete nodes k8s-node2

- On each deleted node, wipe the cluster data.

kubeadm reset

- On the master, regenerate the join command and rejoin the nodes

kubeadm token create --print-join-command

kubeadm join 10.0.1.48:6443 --token 8xwcaq.qxekio9xd02ed936 --discovery-token-ca-cert-hash sha256:d988ba566675095ae25255d63b21cc4d5a9a69bee9905dc638f58b217c651c14

- Check the pods

kubectl get pods -n kube-system -o wide

- Check the nodes

kubectl get nodes

7 - Collecting pod and Events logs in K8S

https://abc.htmltoo.com/thread-46799.htm

8 - Storage mounts

8.1 - Mount a host path

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test
    image: library/busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello; sleep 10;done"]
    volumeMounts:
    - mountPath: /mnt/host_src  # mount path inside the container; created automatically
      name: local-vol
  volumes:
  - name: local-vol # must match the volumeMounts name
    hostPath:
      path: /tmp # host path

8.2 - Mount NFS directly in a pod

1. Create a PV (the storage resource itself)

2. Create a PVC (which claims PV resources)

3. Create a pod (which requests the PVC)

- NFS setup

https://abc.htmltoo.com/thread-45963.htm

apiVersion: v1
kind: Pod
metadata:
  name: test-nfs
spec:
  containers:
  - name: test-nfs
    image: library/busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello; sleep 10;done"]
    volumeMounts:
    - mountPath: /mnt/data  # mount path inside the container
      name: nfs-vol
  volumes:
  - name: nfs-vol
    nfs:
      server: 192.168.185.6
      path: /data
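
A quick check that the share really is mounted (pod name test-nfs as in the manifest above):

kubectl exec test-nfs -- df -h /mnt/data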

8.3 - Mount NFS via a PVC

pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv3
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - nolock
    - nfsvers=3
    - vers=3
  nfs:
    path: /data
    server: 192.168.0.101

kubectl apply -f pv.yaml

kubectl get pv

pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: slow
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

kubectl apply -f pvc.yaml

kubectl get pvc

pvc-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          hostPort: 8008
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage

kubectl apply -f pvc-pod.yaml

kubectl get pod
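
To verify the binding and the mount end to end (hostPort 8008 and the nginx docroot come from the manifest above; replace <node-ip> with the node's IP):

kubectl get pv,pvc
kubectl exec task-pv-pod -- sh -c 'echo hello > /usr/share/nginx/html/index.html'
curl http://<node-ip>:8008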

8.4 - Mount a local PV via PVC

pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data"

kubectl apply -f pv.yaml

kubectl get pv

pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual # must match the PV's storageClassName
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

kubectl apply -f pvc.yaml

kubectl get pvc

pvc-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          hostPort: 8008
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage

kubectl apply -f pvc-pod.yaml

kubectl get pod

9 - DevOps

GitLab: code repository; the most widely used version-control tool inside enterprises.

Jenkins: an extensible continuous-integration engine that automates building, testing, and deploying software; supports multi-branch pipelines and webhook build triggers.

Robot Framework: a Python-based test-automation framework.

SonarQube: an open-source code-quality management and analysis platform; supports more than 25 languages (Java, Python, PHP, JavaScript, CSS, ...) and detects duplicated code, vulnerabilities, style violations, and security issues.

Maven: the Java build and package-management tool.

Kubernetes

Docker

10 - Build a multi-master high-availability cluster

10.1 - Deploy keepalived + haproxy for high availability

- Deploy only on the three master nodes

- haproxy configuration

global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master1	172.16.12.111:6443  check  
  server k8s-master2	172.16.12.112:6443  check
  server k8s-master3	172.16.12.113:6443  check

- k8s-master1 configuration

cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh" # health-check script
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER                 # HA primary
    interface eth0               # NIC name
    mcast_src_ip 172.16.12.111   # this node's IP
    virtual_router_id 51
    priority 100                 # highest priority
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        172.16.12.220            # VIP address
    }
    track_script {
       chk_apiserver
    }
}

- Health-check script

- Required on all master nodes

cat /etc/keepalived/check_apiserver.sh
#!/bin/bash
err=0
for k in $(seq 1 3);do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
# make the check script executable
chmod +x /etc/keepalived/check_apiserver.sh
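
Once keepalived and haproxy are running, a few sanity checks; the VIP, interface, and ports come from the configs above:

# the VIP should be bound on the MASTER node
ip addr show eth0 | grep 172.16.12.220
# haproxy monitor endpoint
curl http://127.0.0.1:33305/monitor
# apiserver through the VIP (works after kubeadm init)
curl -k https://172.16.12.220:16443/healthz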

10.2 - Initialize Kubernetes

10.2.1 - Master node initialization

[root@k8s-master1 ~]# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.12.111
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 172.16.12.220
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 172.16.12.220:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.21.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.0.0.0/16
  serviceSubnet: 10.96.0.0/12 # the pod and service subnets must not overlap
scheduler: {}

10.2.2 - Migrate the config file

kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml

10.2.3 - Sync the config file

Copy the new.yaml file to the other master nodes
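
For example (hostnames as in the haproxy backend above, assuming they resolve):

scp new.yaml k8s-master2:/root/
scp new.yaml k8s-master3:/root/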

- List the required images

kubeadm config images list --config new.yaml

10.2.4 - Pull the images

- Run on all master nodes

kubeadm config images pull --config new.yaml

- The coredns image may fail to download; pull it indirectly:
docker pull coredns/coredns:1.8.0
docker tag coredns/coredns:1.8.0 registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.0

10.2.5 - Initialize the master1 node

Initialize on master1; this generates the cluster certificates

kubeadm init --config new.yaml --upload-certs

...

10.3 - Join the cluster

- Generate a new token after the old one expires:

kubeadm token create --print-join-command

- The masters also need a --certificate-key:

kubeadm init phase upload-certs --upload-certs

Run on the other master nodes to join the cluster

kubeadm join ......
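
For reference, a control-plane join generally takes this form; the token, hash, and key are placeholders printed by the commands above, and the endpoint is the controlPlaneEndpoint from new.yaml:

kubeadm join 172.16.12.220:16443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>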

Configure environment variables on master1 for accessing the Kubernetes cluster

cat <<EOF >> /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
# apply it
source /root/.bashrc

- Check the cluster node status on master1

kubectl get nodes

- With kubeadm, all system components run as containers in the kube-system namespace; check the pod status:

kubectl get pods -n kube-system -o wide

-- coredns stays Pending because the cluster network is not yet up, so pods cannot reach each other

10.4 - Calico network component

(Run only on master1) [the network plugin that connects the nodes]

cd release-v3.27.0/manifests/

# change the etcd endpoints to your own master IPs, in master-node order
sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://172.16.12.111:2379,https://172.16.12.112:2379,https://172.16.12.113:2379"#g' calico-etcd.yaml
# put the base64-encoded ca.crt (newlines stripped) into the temporary variable ETCD_CA
ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`
# put the base64-encoded server.crt into ETCD_CERT
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
# put the base64-encoded server.key into ETCD_KEY
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`
# replace "# etcd-key: null", "# etcd-cert: null", "# etcd-ca: null" in calico-etcd.yaml with the values above
sed -i "s/# etcd-key: null/etcd-key: ${ETCD_KEY}/g; s/# etcd-cert: null/etcd-cert: ${ETCD_CERT}/g; s/# etcd-ca: null/etcd-ca: ${ETCD_CA}/g" calico-etcd.yaml
# point etcd_ca: "", etcd_cert: "", etcd_key: "" at the mounted secret paths
sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml
# read the pod CIDR into POD_SUBNET from the kube-controller-manager manifest
POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`
# uncomment CALICO_IPV4POOL_CIDR and set it to your pod subnet (from kubeadm-config.yaml); no manual edit needed, the variable above is used
sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml

- Deploy

kubectl apply -f calico-etcd.yaml

- Check container status

kubectl get pods -n kube-system -o wide

- Check node network connectivity

kubectl get nodes

10.5 - Deploy Metrics

- Copy front-proxy-ca.crt from master1 to every worker node

scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node1:/etc/kubernetes/pki/front-proxy-ca.crt

scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node2:/etc/kubernetes/pki/front-proxy-ca.crt

- Install metrics-server

https://github.com/dotbalo/k8s-ha-install.git

- Enter the cloned k8s-ha-install/metrics-server-0.4.x-kubeadm/ directory

cd k8s-ha-install/metrics-server-0.4.x-kubeadm/

- Create the resources from the yaml files in that directory

kubectl apply -f comp.yaml

- Check node resource usage

kubectl top nodes

11 - Uninstall

- Clean up the pods running in the cluster: remove the nodes first (while the apiserver still works), then reset each machine

kubectl delete node --all

kubeadm reset -f
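
kubeadm reset does not flush iptables/ipvs rules or remove the CNI and kubectl config; a typical follow-up on each node (as kubeadm's own reset output advises):

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm -C
rm -rf /etc/cni/net.d ~/.kube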
