
KubeKey Usage Guide

0. Environment setup

https://github.com/kubernetes/kubernetes/releases

https://github.com/kubesphere/kubesphere/releases

https://github.com/kubesphere/kubekey/releases

yum -y install vim net-tools lrzsz unzip gcc telnet wget sshpass ntpdate ntp curl
yum -y install conntrack ipvsadm ipset  iptables  sysstat libseccomp git 
yum -y install socat conntrack  ebtables ipset
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
yum install -y kubectl kubelet kubeadm --disableexcludes=kubernetes
-When building a cluster with KubeKey, KubeKey installs the latest version of Docker by default.
-KubeKey can install Kubernetes and KubeSphere together.
curl -sfL https://get-kk.kubesphere.io | KKZONE=cn sh -
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
-Make kk executable:
chmod +x kk
-List all Kubernetes versions that this KubeKey release can install
./kk version --show-supported-k8s
cp kk /usr/local/bin/
kk version
-Enable kubectl auto-completion
yum -y install --skip-broken bash-completion
apt-get install bash-completion
echo 'source <(kubectl completion bash)' >>~/.bashrc
kubectl completion bash >/etc/bash_completion.d/kubectl
-Install Kubernetes and KubeSphere together

1. Single-node installation

-Latest version
./kk create cluster --with-kubernetes v1.27.2 --with-kubesphere v3.4.1
-Stable version
./kk create cluster --with-kubernetes v1.23.17 --with-kubesphere v3.4.1

2. Cluster (servers using a non-default SSH port)

./kk create config --with-kubernetes v1.27.2 --with-kubesphere v3.4.1  -f 1.27.2.yaml
./kk create config --with-kubernetes v1.23.17 --with-kubesphere v3.4.1  -f 1.23.10.yaml

If you do not change the name, the default file config-sample.yaml will be created.

Below is a sample configuration file for a multi-node cluster with one master node:

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  # Assume that the default port for SSH is 22. Otherwise, add the port number after the IP address. 
  # If you install Kubernetes on ARM, add "arch: arm64". For example, {...user: ubuntu, password: Qcloud@123, arch: arm64}.
  - {name: node1, address: 172.16.0.2, internalAddress: 172.16.0.2, port: 8022, user: ubuntu, password: "Qcloud@123"}
  # For default root user.
  # Kubekey will parse `labels` field and automatically label the node.
  - {name: node2, address: 172.16.0.3, internalAddress: 172.16.0.3, password: "Qcloud@123", labels: {disk: SSD, role: backend}}
  # For password-less login with SSH keys.
  - {name: node3, address: 172.16.0.4, internalAddress: 172.16.0.4, privateKeyPath: "~/.ssh/id_rsa"}
  roleGroups:
    etcd:
    - node1 # All the nodes in your cluster that serve as the etcd nodes.
    master:
    - node1
    - node[2:10] # From node2 to node10. All the nodes in your cluster that serve as the master nodes.
    worker:
    - node1
    - node[10:100] # All the nodes in your cluster that serve as the worker nodes.
  controlPlaneEndpoint:
    # Internal loadbalancer for apiservers. Support: haproxy, kube-vip [Default: ""]
    internalLoadbalancer: haproxy
    # Determines whether to use external dns to resolve the control-plane domain. 
    # If 'externalDNS' is set to 'true', the 'address' needs to be set to "".
    externalDNS: false  
    domain: lb.kubesphere.local
    # The IP address of your load balancer. If you use internalLoadblancer in "kube-vip" mode, a VIP is required here.
    address: ""  
    port: 6443
  system:
    # The ntp servers of chrony.
    ntpServers:
      - time1.cloud.tencent.com
      - ntp.aliyun.com
      - node1 # Set the node name in `hosts` as ntp server if no public ntp servers access.
    timezone: "Asia/Shanghai"
    # Specify additional packages to be installed. The ISO file which is contained in the artifact is required.
    rpms:
      - nfs-utils
    # Specify additional packages to be installed. The ISO file which is contained in the artifact is required.
    debs: 
      - nfs-common
    #preInstall:  # Specify custom init shell scripts for each nodes, and execute according to the list order at the first stage.
    #  - name: format and mount disk  
    #    bash: /bin/bash -x setup-disk.sh
    #    materials: # scripts can has some dependency materials. those will copy to the node  
    #      - ./setup-disk.sh # the script which shell execute need
    #      -  xxx            # other tools materials need by this script
    #postInstall: # Specify custom finish clean up shell scripts for each nodes after the Kubernetes install.
    #  - name: clean tmps files
    #    bash: |
    #       rm -fr /tmp/kubekey/*
    #skipConfigureOS: true # Do not pre-configure the host OS (e.g. kernel modules, /etc/hosts, sysctl.conf, NTP servers, etc). You will have to set these things up via other methods before using KubeKey.

  kubernetes:
    #kubelet start arguments
    #kubeletArgs:
      # Directory path for managing kubelet files (volume mounts, etc).
    #  - --root-dir=/var/lib/kubelet
    version: v1.21.5
    # Optional extra Subject Alternative Names (SANs) to use for the API Server serving certificate. Can be both IP addresses and DNS names.
    apiserverCertExtraSans:  
      - 192.168.8.8
      - lb.kubespheredev.local
    # Container Runtime, support: containerd, cri-o, isula. [Default: docker]
    containerManager: docker
    clusterName: cluster.local
    # Whether to install a script which can automatically renew the Kubernetes control plane certificates. [Default: false]
    autoRenewCerts: true
    # masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode. [Default: false].
    masqueradeAll: false
    # maxPods is the number of Pods that can run on this Kubelet. [Default: 110]
    maxPods: 110
    # podPidsLimit is the maximum number of PIDs in any pod. [Default: 10000]
    podPidsLimit: 10000
    # The internal network node size allocation. This is the size allocated to each node on your network. [Default: 24]
    nodeCidrMaskSize: 24
    # Specify which proxy mode to use. [Default: ipvs]
    proxyMode: ipvs
    # enable featureGates, [Default: {"ExpandCSIVolumes":true,"RotateKubeletServerCertificate": true,"CSIStorageCapacity":true, "TTLAfterFinished":true}]
    featureGates: 
      CSIStorageCapacity: true
      ExpandCSIVolumes: true
      RotateKubeletServerCertificate: true
      TTLAfterFinished: true
    ## support kata and NFD
    # kata:
    #   enabled: true
    # nodeFeatureDiscovery
    #   enabled: true
    # additional kube-proxy configurations
    kubeProxyConfiguration:
      ipvs:
        # CIDR's to exclude when cleaning up IPVS rules.
        # necessary to put node cidr here when internalLoadbalancer=kube-vip and proxyMode=ipvs
        # refer to: https://github.com/kubesphere/kubekey/issues/1702
        excludeCIDRs:
          - 172.16.0.2/24
  etcd:
    # Specify the type of etcd used by the cluster. When the cluster type is k3s, setting this parameter to kubeadm is invalid. [kubekey | kubeadm | external] [Default: kubekey]
    type: kubekey  
    ## The following parameters need to be added only when the type is set to external.
    ## caFile, certFile and keyFile need not be set, if TLS authentication is not enabled for the existing etcd.
    # external:
    #   endpoints:
    #     - https://192.168.6.6:2379
    #   caFile: /pki/etcd/ca.crt
    #   certFile: /pki/etcd/etcd.crt
    #   keyFile: /pki/etcd/etcd.key
    dataDir: "/var/lib/etcd"
    # Time (in milliseconds) of a heartbeat interval.
    heartbeatInterval: 250
    # Time (in milliseconds) for an election to timeout. 
    electionTimeout: 5000
    # Number of committed transactions to trigger a snapshot to disk.
    snapshotCount: 10000
    # Auto compaction retention for mvcc key value store in hour. 0 means disable auto compaction.
    autoCompactionRetention: 8
    # Set level of detail for etcd exported metrics, specify 'extensive' to include histogram metrics.
    metrics: basic
    ## Etcd has a default of 2G for its space quota. If you put a value in etcd_memory_limit which is less than
    ## etcd_quota_backend_bytes, you may encounter out of memory terminations of the etcd cluster. Please check
    ## etcd documentation for more information.
    # 8G is a suggested maximum size for normal environments and etcd warns at startup if the configured value exceeds it.
    quotaBackendBytes: 2147483648 
    # Maximum client request size in bytes the server will accept.
    # etcd is designed to handle small key value pairs typical for metadata.
    # Larger requests will work, but may increase the latency of other requests
    maxRequestBytes: 1572864
    # Maximum number of snapshot files to retain (0 is unlimited)
    maxSnapshots: 5
    # Maximum number of wal files to retain (0 is unlimited)
    maxWals: 5
    # Configures log level. Only supports debug, info, warn, error, panic, or fatal.
    logLevel: info
  network:
    plugin: calico
    calico:
      ipipMode: Always  # IPIP Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, vxlanMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Always]
      vxlanMode: Never  # VXLAN Mode to use for the IPv4 POOL created at start up. If set to a value other than Never, ipipMode should be set to "Never". [Always | CrossSubnet | Never] [Default: Never]
      vethMTU: 0  # The maximum transmission unit (MTU) setting determines the largest packet size that can be transmitted through your network. By default, MTU is auto-detected. [Default: 0]
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  storage:
    openebs:
      basePath: /var/openebs/local # base path of the local PV provisioner
  registry:
    registryMirrors: []
    insecureRegistries: []
    privateRegistry: ""
    namespaceOverride: ""
    auths: # if docker add by `docker login`, if containerd append to `/etc/containerd/config.toml`
      "dockerhub.kubekey.local":
        username: "xxx"
        password: "***"
        skipTLSVerify: false # Allow contacting registries over HTTPS with failed TLS verification.
        plainHTTP: false # Allow contacting registries over HTTP.
        certsPath: "/etc/docker/certs.d/dockerhub.kubekey.local" # Use certificates at path (*.crt, *.cert, *.key) to connect to the registry.
  addons: [] # You can install cloud-native addons (Chart or YAML) by using this field.
  #dns:
  #  ## Optional hosts file content to coredns use as /etc/hosts file.
  #  dnsEtcHosts: |
  #    192.168.0.100 api.example.com
  #    192.168.0.200 ingress.example.com
  #  coredns:
  #    ## additionalConfigs adds any extra configuration to coredns
  #    additionalConfigs: |
  #      whoami
  #      log
  #    ## Array of optional external zones to coredns forward queries to. It's injected into coredns' config file before
  #    ## default kubernetes zone. Use it as an optimization for well-known zones and/or internal-only domains, i.e. VPN for internal networks (default is unset)
  #    externalZones:
  #    - zones:
  #      - example.com
  #      - example.io:1053
  #      nameservers:
  #      - 1.1.1.1
  #      - 2.2.2.2
  #      cache: 5
  #    - zones:
  #      - mycompany.local:4453
  #      nameservers:
  #      - 192.168.0.53
  #      cache: 10
  #    - zones:
  #      - mydomain.tld
  #      nameservers:
  #      - 10.233.0.3
  #      cache: 5
  #      rewrite:
  #      - name substring website.tld website.namespace.svc.cluster.local
  #    ## Rewrite plugin block to perform internal message rewriting.
  #    rewriteBlock: |
  #      rewrite stop {
  #        name regex (.*)\.my\.domain {1}.svc.cluster.local
  #        answer name (.*)\.svc\.cluster\.local {1}.my.domain
  #      }
  #    ## DNS servers to be added *after* the cluster DNS. These serve as backup
  #    ## DNS servers in early cluster deployment when no cluster DNS is available yet.
  #    upstreamDNSServers:
  #    - 8.8.8.8
  #    - 1.2.4.8
  #    - 114.114.114.114
  #  nodelocaldns:
  #    ## It's possible to extent the nodelocaldns' configuration by adding an array of external zones.
  #    externalZones:
  #    - zones:
  #      - example.com
  #      - example.io:1053
  #      nameservers:
  #      - 1.1.1.1
  #      - 2.2.2.2
  #      cache: 5
  #    - zones:
  #      - mycompany.local:4453
  #      nameservers:
  #      - 192.168.0.53
  #      cache: 10
  #    - zones:
  #      - mydomain.tld
  #      nameservers:
  #      - 10.233.0.3
  #      cache: 5
  #      rewrite:
  #      - name substring website.tld website.namespace.svc.cluster.local
Network Configuration sample
Hybridnet
To learn more about hybridnet, check out https://github.com/alibaba/hybridnet

  network:
    plugin: hybridnet
    hybridnet:
      defaultNetworkType: Overlay
      enableNetworkPolicy: false
      init: false
      preferVxlanInterfaces: eth0
      preferVlanInterfaces: eth0
      preferBGPInterfaces: eth0
      networks:
      - name: "net1"
        type: Underlay
        nodeSelector:
          network: "net1"
        subnets:
          - name: "subnet-10"
            netID: 10
            cidr: "192.168.10.0/24"
            gateway: "192.168.10.1"
          - name: "subnet-11"
            netID: 11
            cidr: "192.168.11.0/24"
            gateway: "192.168.11.1"
      - name: "net2"
        type: Underlay
        nodeSelector:
          network: "net2"
        subnets:
          - name: "subnet-30"
            netID: 30
            cidr: "192.168.30.0/24"
            gateway: "192.168.30.1"
          - name: "subnet-31"
            netID: 31
            cidr: "192.168.31.0/24"
            gateway: "192.168.31.1"
      - name: "net3"
        type: Underlay
        netID: 0
        nodeSelector:
          network: "net3"
        subnets:
          - name: "subnet-50"
            cidr: "192.168.50.0/24"
            gateway: "192.168.50.1"
            start: "192.168.50.100"
            end: "192.168.50.200"
            reservedIPs: ["192.168.50.101","192.168.50.102"]
            excludeIPs: ["192.168.50.111","192.168.50.112"]

For a non-default port, add the port number after the IP address:
{name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, port: 8022, user: ubuntu, password: Testing123}

Example of passwordless login with an SSH key:
{name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, privateKeyPath: "~/.ssh/id_rsa"}

-Edit the configuration file config.yaml

vim config.yaml


apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.1.101, internalAddress: 192.168.1.101, port: 52341,  user: root, password: "wd"}
  - {name: node2, address: 192.168.1.102, internalAddress: 192.168.1.102, port: 52341,  user: root, password: "wd"}
  - {name: node3, address: 192.168.1.103, internalAddress: 192.168.1.103, port: 52341,  user: root, password: "wd"}
  roleGroups:
    etcd:
    - node1
    control-plane: 
    - node1
    worker:
    - node1
    - node2
    - node3
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    # internalLoadbalancer: haproxy

    domain: htmltoo
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.10
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []



---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.4.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  local_registry: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      enableHA: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      enabled: false
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
    opensearch:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      enabled: true
      logMaxAge: 7
      opensearchPrefix: whizard
      basicAuth:
        enabled: true
        username: "admin"
        password: "admin"
      externalOpensearchHost: ""
      externalOpensearchPort: ""
      dashboard:
        enabled: false
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    jenkinsCpuReq: 0.5
    jenkinsCpuLim: 1
    jenkinsMemoryReq: 4Gi
    jenkinsMemoryLim: 4Gi
    jenkinsVolumeSize: 16Gi
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    ruler:
      enabled: true
      replicas: 2
    #   resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: true
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  gatekeeper:
    enabled: false
    # controller_manager:
    #   resources: {}
    # audit:
    #   resources: {}
  terminal:
    timeout: 600

Change 1: specify each node's SSH port.

Change 2: search for openpitrix and change enabled from false to true.

systemctl restart containerd

-Temporarily disable swap

swapoff -a

-Permanently prevent swap from being mounted at boot

sed -ri 's/.*swap.*/#&/' /etc/fstab

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
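
To confirm the br_netfilter module and the sysctl settings above took effect (a quick sanity check, not part of the original steps):

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables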

2.1 - Enable pluggable components

2.1.1 - Enable the App Store

Search for openpitrix and change enabled from false to true.

-Enabling the App Store after installation

Log in to the console, click Platform in the upper-left corner, and select Cluster Management.

Under CRDs, enter clusterconfiguration in the search bar and click the result to open its detail page.

In Custom Resources, click the menu on the right of ks-installer and select Edit YAML.

Search for openpitrix and change enabled from false to true. When finished, click OK in the lower-right corner.

2.1.2 - Enable DevOps

Search for devops and change enabled from false to true.

2.1.3 - Logging

Search for logging and change enabled from false to true.

2.1.4 - Events

Search for events and change enabled from false to true.

2.1.5 - Service Mesh

KubeSphere Service Mesh is based on Istio and visualizes microservice governance and traffic management. It offers a powerful toolkit including circuit breaking, blue-green deployment, canary release, traffic mirroring, distributed tracing, observability, and traffic control. KubeSphere Service Mesh supports non-intrusive microservice governance, helping developers get started quickly and greatly flattening Istio's learning curve. All of its features are designed to meet users' business needs.

Search for servicemesh and change enabled from false to true.

2.1.6 - Network Policies

A network policy is an application-centric construct that lets you specify how Pods are allowed to communicate with various network entities. With network policies, users can achieve network isolation within the same cluster, effectively setting up firewalls between certain instances (Pods).

Search for network.networkpolicy and change enabled from false to true.
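
For reference, a minimal NetworkPolicy sketch that denies all ingress traffic to every Pod in one namespace (the policy name and the namespace demo are placeholders, not part of this guide):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: demo
spec:
  podSelector: {}        # empty selector = all Pods in the namespace
  policyTypes:
  - Ingress              # no ingress rules listed, so all ingress is denied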

2.1.7 - Metrics Server

KubeSphere supports the Horizontal Pod Autoscaler (HPA) for Deployments. In KubeSphere, the Metrics Server controls whether HPA is enabled. You can autoscale a Deployment with an HPA object based on different kinds of metrics (for example CPU and memory usage) together with minimum and maximum replica counts. In this way, HPA helps ensure that your application runs smoothly and consistently under varying load.

Search for metrics_server and change enabled from false to true.
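
Once the Metrics Server is running, a quick way to confirm it is working is to query resource usage:

kubectl top nodes
kubectl top pods -n kube-system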

2.1.8 - Service Topology

You can enable Service Topology to integrate Weave Scope, a visualization and monitoring tool for Docker and Kubernetes. Weave Scope uses established APIs to collect information and build a topology map of your applications and containers. The service topology is displayed in your project and visualizes the connections between services.

Search for network.topology.type and change none to weave-scope.

2.1.9 - Pod IP Pools

Pod IP pools are used to plan the Pod network address space; the address spaces of different pools must not overlap. When creating a workload, you can select a specific Pod IP pool so that the Pods created are assigned IP addresses from that pool.

Search for network.ippool.type and change none to calico.

2.1.10 - KubeEdge

KubeEdge is an open-source system that extends container application orchestration to hosts at the edge. It supports multiple edge protocols and aims to provide unified management of applications and resources deployed in the cloud and at the edge.

KubeEdge components run in two separate places: in the cloud and on edge nodes. The cloud-side components, collectively called CloudCore, include the Controller and Cloud Hub. Cloud Hub is the gateway that receives requests from edge nodes, while the Controller acts as the orchestrator. The edge-side components, collectively called EdgeCore, include EdgeHub, EdgeMesh, MetadataManager, and DeviceTwin.

Search for edgeruntime and kubeedge, then change their enabled values from false to true.

Set kubeedge.cloudCore.cloudHub.advertiseAddress to the cluster's public IP address or an IP address that edge nodes can reach.
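
For example, assuming the control-plane IP 192.168.1.101 used elsewhere in this guide is reachable from the edge nodes, the relevant part of the ClusterConfiguration would look roughly like this:

  edgeruntime:
    enabled: true
    kubeedge:
      enabled: true
      cloudCore:
        cloudHub:
          advertiseAddress:
            - 192.168.1.101   # IP that edge nodes use to reach CloudCore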

2.2 - Uninstall pluggable components after installation

Click Platform in the upper-left corner, select Cluster Management, and search for ClusterConfiguration under CRDs.

After changing a value, wait for the configuration update to complete before proceeding.

Before uninstalling any pluggable component other than Service Topology and Pod IP Pools, you must first change the corresponding enabled field from true to false in the ks-installer ClusterConfiguration CRD.
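
One way to make this change from the command line instead of through the console is to edit the same resource that the patch commands below operate on:

kubectl -n kubesphere-system edit clusterconfiguration ks-installer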

2.2.1 - Uninstall DevOps

helm uninstall -n kubesphere-devops-system devops
kubectl patch -n kubesphere-system cc ks-installer --type=json -p='[{"op": "remove", "path": "/status/devops"}]'
kubectl patch -n kubesphere-system cc ks-installer --type=json -p='[{"op": "replace", "path": "/spec/devops/enabled", "value": false}]'
# Delete all DevOps-related resources
for devops_crd in $(kubectl get crd -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | grep "devops.kubesphere.io"); do
    for ns in $(kubectl get ns -ojsonpath='{.items..metadata.name}'); do
        for devops_res in $(kubectl get $devops_crd -n $ns -oname); do
            kubectl patch $devops_res -n $ns -p '{"metadata":{"finalizers":[]}}' --type=merge
        done
    done
done
# Delete all DevOps CRDs
kubectl get crd -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | grep "devops.kubesphere.io" | xargs -I crd_name kubectl delete crd crd_name
# Delete the DevOps namespace
kubectl delete namespace kubesphere-devops-system

2.2.2 - Uninstall the logging system

kubectl delete inputs.logging.kubesphere.io -n kubesphere-logging-system tail
To uninstall the logging system including Elasticsearch, run the following:
kubectl delete crd fluentbitconfigs.logging.kubesphere.io
kubectl delete crd fluentbits.logging.kubesphere.io
kubectl delete crd inputs.logging.kubesphere.io
kubectl delete crd outputs.logging.kubesphere.io
kubectl delete crd parsers.logging.kubesphere.io
kubectl delete deployments.apps -n kubesphere-logging-system fluentbit-operator
helm uninstall elasticsearch-logging --namespace kubesphere-logging-system
Note: this may cause problems for auditing, events, and the service mesh.
kubectl delete deployment logsidecar-injector-deploy -n kubesphere-logging-system
kubectl delete ns kubesphere-logging-system

2.2.3 - Uninstall the events system

helm delete ks-events -n kubesphere-logging-system

2.2.4 - Uninstall alerting (installed by default in 3.4; no need to uninstall)

kubectl -n kubesphere-monitoring-system delete thanosruler kubesphere

2.2.5 - Uninstall auditing

helm uninstall kube-auditing -n kubesphere-logging-system
kubectl delete crd rules.auditing.kubesphere.io
kubectl delete crd webhooks.auditing.kubesphere.io

2.2.6 - Uninstall the service mesh

curl -L https://istio.io/downloadIstio | sh -
istioctl x uninstall --purge
kubectl -n istio-system delete kiali kiali
helm -n istio-system delete kiali-operator
kubectl -n istio-system delete jaeger jaeger
helm -n istio-system delete jaeger-operator

2.2.7 - Uninstall network policies

Only the configuration needs to be changed.

2.2.8 - Uninstall Metrics Server

kubectl delete apiservice v1beta1.metrics.k8s.io
kubectl -n kube-system delete service metrics-server
kubectl -n kube-system delete deployment metrics-server

2.2.9 - Uninstall Service Topology

kubectl delete ns weave

2.2.10 - Uninstall Pod IP Pools

Only the configuration needs to be changed.

2.2.11 - Uninstall KubeEdge

helm uninstall kubeedge -n kubeedge
kubectl delete ns kubeedge

2.3 - Install NFS Client

vim /opt/nfs-client.yaml

nfs:
  server: "192.168.1.101"
  path: "/data/file"
storageClass:
  defaultClass: true

vim /opt/1.23.17.yaml

  addons:
  - name: nfs-client
    namespace: kube-system
    sources:
      chart:
        name: nfs-client-provisioner
        repo: https://charts.kubesphere.io/main
        valuesFile: /opt/nfs-client.yaml
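
After the cluster is created with this add-on, the nfs-client StorageClass should appear and be marked as the default (since defaultClass is true above); this can be checked with:

kubectl get storageclass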

2.4 - Configuration: 1.23.17.yaml

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.1.101, internalAddress: 192.168.1.101, port: 52341,  user: root, password: "w"}
  - {name: node2, address: 192.168.1.102, internalAddress: 192.168.1.102, port: 52341,  user: root, password: "w"}
  - {name: node3, address: 192.168.1.103, internalAddress: 192.168.1.103, port: 52341,  user: root, password: "w"}
  - {name: node4, address: 192.168.1.104, internalAddress: 192.168.1.104, port: 52341,  user: root, password: "w"}
  roleGroups:
    etcd:
    - node1
    control-plane: 
    - node1
    worker:
    - node1
    - node2
    - node3
    - node4
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    # internalLoadbalancer: haproxy

    domain: htmltoo
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.17
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons:
  - name: nfs-client
    namespace: kube-system
    sources:
      chart:
        name: nfs-client-provisioner
        repo: https://charts.kubesphere.io/main
        valuesFile: /opt/nfs-client.yaml



---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.4.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  local_registry: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      enableHA: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      enabled: false
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
    opensearch:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      enabled: true
      logMaxAge: 7
      opensearchPrefix: whizard
      basicAuth:
        enabled: true
        username: "admin"
        password: "admin"
      externalOpensearchHost: ""
      externalOpensearchPort: ""
      dashboard:
        enabled: false
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: true
    jenkinsCpuReq: 0.5
    jenkinsCpuLim: 1
    jenkinsMemoryReq: 4Gi
    jenkinsMemoryLim: 4Gi
    jenkinsVolumeSize: 16Gi
  events:
    enabled: true
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    ruler:
      enabled: true
      replicas: 2
    #   resources: {}
  logging:
    enabled: true
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: true
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  gatekeeper:
    enabled: false
    # controller_manager:
    #   resources: {}
    # audit:
    #   resources: {}
  terminal:
    timeout: 600

2.5 - Multi-master setup

  controlPlaneEndpoint:
    ## Enable the built-in haproxy mode for high availability
    internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443

2.6 - Fix the etcd monitoring certificate not found issue

kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs \
  --from-file=etcd-client-ca.crt=/etc/ssl/etcd/ssl/ca.pem \
  --from-file=etcd-client.crt=/etc/ssl/etcd/ssl/member-k8s-master01.pem \
  --from-file=etcd-client.key=/etc/ssl/etcd/ssl/member-k8s-master01-key.pem

3. Node management

3.1 - Installation

Kubernetes 1.24 removed built-in support for the Docker runtime.

Docker Engine does not implement the CRI, which is required for a container runtime to work with Kubernetes. To bridge this gap, an additional service, cri-dockerd, must be installed. cri-dockerd is based on the legacy built-in Docker Engine support that was removed from the kubelet in version 1.24.

https://github.com/Mirantis/cri-dockerd/releases/

--If your Kubernetes version is lower than 1.24, you can skip this step.

3.1.1 - Install cri-dockerd

yum -y install --skip-broken golang
go env -w GOPROXY=https://goproxy.cn,direct
-
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.9/cri-dockerd-0.3.9-3.el8.x86_64.rpm
wget  http://up.htmltoo.com/soft/docker.tar/cri-dockerd-0.3.9-3.el8.x86_64.rpm
rpm -ivh cri-dockerd-0.3.9-3.el8.x86_64.rpm
-
cd /opt
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.9/cri-dockerd-0.3.9.amd64.tgz
wget http://up.htmltoo.com/soft/docker.tar/cri-dockerd-0.3.9.amd64.tgz
tar -xf cri-dockerd-0.3.9.amd64.tgz
-
git clone https://github.com/Mirantis/cri-dockerd.git
cd cri-dockerd 
ARCH=amd64 make cri-dockerd
install -o root -g root -m 0755 cri-dockerd /usr/local/bin/cri-dockerd
install packaging/systemd/* /etc/systemd/system
sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
systemctl daemon-reload
systemctl enable cri-docker.socket cri-docker
systemctl start cri-docker.socket cri-docker
systemctl is-active cri-docker.socket
systemctl status cri-docker cri-docker.socket
vi /etc/systemd/system/cri-docker.service  # find the ExecStart= line (around line 10) and change it to:
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
-
ExecStart=/usr/local/bin/cri-dockerd --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9 --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --cri-dockerd-root-directory=/var/lib/dockershim --docker-endpoint=unix:///var/run/docker.sock --cri-dockerd-root-directory=/var/lib/docker
systemctl daemon-reload && systemctl restart cri-docker.socket cri-docker

-Configure the systemd service

cat <<"EOF" >  /etc/systemd/system/cri-docker.service
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF

-Check the required pause image version

kubeadm config images list

-Generate the socket unit file

cat <<"EOF" >  /etc/systemd/system/cri-docker.socket
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service
[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF

-Start the cri-docker service and enable it at boot

systemctl daemon-reload
systemctl enable cri-docker
systemctl restart cri-docker
systemctl is-active cri-docker

3.1.2 - Stable version

kk create cluster -f 1.23.17.yaml

3.1.3 - Latest version

kk create cluster -f 1.27.2.yaml

3.1.4 - Add cluster nodes

kk add nodes -f 1.23.17.yaml

-Add master nodes for high availability

The steps for adding a master node are largely the same as for adding a worker node, except that you need to configure a load balancer for the cluster.

Note: set controlPlaneEndpoint accordingly, e.g. domain: lb.kubesphere.local, address: 172.16.0.253.
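
A sketch of the relevant controlPlaneEndpoint section for that case, using the load-balancer domain and address from the note above:

  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: 172.16.0.253
    port: 6443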

3.1.5 - Delete a cluster node

kk delete node node3 -f 1.23.17.yaml

3.1.6 - KubeKey certificate management

-Check the expiration dates of the cluster certificates

kk certs check-expiration

-Location of the cluster certificates

/etc/kubernetes/pki/

-Renew the cluster certificates

kk cert renew -f 1.23.17.yaml

3.1.7 - Install Kubernetes only

kk create config --with-kubernetes v1.26.5 -f k8s-1.26.5.yaml

kk create cluster -f k8s-1.26.5.yaml

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.1.101, internalAddress: 192.168.1.101, port: 52341,  user: root, password: "w"}
  - {name: node2, address: 192.168.1.102, internalAddress: 192.168.1.102, port: 52341,  user: root, password: "w"}
  - {name: node3, address: 192.168.1.103, internalAddress: 192.168.1.103, port: 52341,  user: root, password: "w"}
  roleGroups:
    etcd:
    - node1
    control-plane: 
    - node1
    worker:
    - node1
    - node2
    - node3
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    # internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.26.5
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons:
  - name: nfs-client
    namespace: kube-system
    sources:
      chart:
        name: nfs-client-provisioner
        repo: https://charts.kubesphere.io/main
        valuesFile: /opt/nfs-client.yaml

3.1.8 - Installing Kubernetes with containerd

-On a Kubernetes cluster installed with containerd, add the /etc/crictl.yaml configuration to point the CRI tools at the containerd endpoint:

cat >> /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
pull-image-on-create: false
EOF

systemctl restart containerd
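
After restarting, crictl should be able to reach containerd through the configured endpoint; a quick check:

crictl ps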

3.1.9 - Configure an HTTP registry

vim /etc/containerd/config.toml

-If /etc/containerd/config.toml does not exist, generate it with:

containerd config default > /etc/containerd/config.toml

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."g.htmltoo.com:5000"]
    endpoint = ["http://g.htmltoo.com:5000"]
...
#disabled_plugins = ["cri"]
...
sandbox_image = "registry.k8s.io/pause:3.9" # changed from 3.8 to 3.9

vim /etc/containerd/certs.d/g.htmltoo.com:5000/hosts.toml

server = "http://g.htmltoo.com:5000"
[host."http://g.htmltoo.com:5000"]
  capabilities = ["pull", "resolve","push"]
  skip_verify = true

sed -i s#SystemdCgroup\ =\ false#SystemdCgroup\ =\ true# /etc/containerd/config.toml

sed -i s#registry.k8s.io#registry.aliyuncs.com/google_containers# /etc/containerd/config.toml

systemctl enable --now containerd

systemctl restart containerd
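
To confirm the two sed edits above took effect, check the relevant settings (a quick sanity check, not part of the original steps):

grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml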

3.2 - Upgrade

3.2.1 - Upgrade with KubeKey

kk upgrade --with-kubernetes v1.23.17 --with-kubesphere v3.4.1

kk upgrade --with-kubernetes v1.23.17 --with-kubesphere v3.4.1 -f 1.23.10.yaml

3.2.2 - Offline upgrade with KubeKey

3.2.2.1 - Single node

https://github.com/kubesphere/ks-installer/

-Private registry

http://g.htmltoo.com:5100

-Download the image list files

curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/images-list.txt

curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/offline-installation-tool.sh

chmod +x offline-installation-tool.sh

-View the script usage

./offline-installation-tool.sh -h

Description:
-b : save kubernetes' binaries.
-d IMAGES-DIR : the dir of files (tar.gz) which generated by docker save. default: /home/ubuntu/kubesphere-images
-l IMAGES-LIST : text file with list of images.
-r PRIVATE-REGISTRY : target private registry:port.
-s : save model will be applied. Pull the images in the IMAGES-LIST and save images as a tar.gz file.
-v KUBERNETES-VERSION : download kubernetes' binaries. default: v1.17.9
-h : usage message

-Download the Kubernetes binaries.

./offline-installation-tool.sh -b -v v1.23.17

-Switch to the China (CN) download source.

export KKZONE=cn;./offline-installation-tool.sh -b -v v1.27.2

-Pull the images.

./offline-installation-tool.sh -s -l images-list.txt -d ./kubesphere-images

-Push the images to the private registry g.htmltoo.com:5000.

./offline-installation-tool.sh -l images-list.txt -d ./kubesphere-images -r g.htmltoo.com:5000

3.2.2.2 - Multiple nodes

Unlike installing KubeSphere on a single node, you need to specify a configuration file to add host information.

In addition, for an offline installation, make sure to set .spec.registry.privateRegistry to your own registry address.

kk create config --with-kubernetes v1.23.17 --with-kubesphere v3.4.1 -f 1.23.10.yaml

Search for privateRegistry and change its value to g.htmltoo.com:5000.
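
The registry section of the configuration file would then look roughly like this; adding the same address to insecureRegistries is an assumption here, since the registry used in this guide is served over HTTP:

  registry:
    privateRegistry: "g.htmltoo.com:5000"
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: ["g.htmltoo.com:5000"]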

./kk upgrade -f 1.23.17.yaml

3.2.3 - Upgrade with ks-installer

For users whose Kubernetes cluster was not deployed by KubeKey but is hosted by a cloud provider or self-built, ks-installer is the recommended way to upgrade. It only upgrades KubeSphere; the cluster operator is responsible for upgrading Kubernetes beforehand.

kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/kubesphere-installer.yaml --force

3.2.4 - Offline upgrade with ks-installer

curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/kubesphere-installer.yaml

-If your cluster was installed online but needs to be upgraded offline:

kubectl edit cc -n kubesphere-system

Search for local_registry and change its value to g.htmltoo.com:5000.

-Replace the ks-installer image with the address of your own registry

sed -i "s#^\s*image: kubesphere.*/ks-installer:.*#        image: dockerhub.kubekey.local/kubesphere/ks-installer:v3.4.1#" kubesphere-installer.yaml

-Upgrade KubeSphere

kubectl apply -f kubesphere-installer.yaml

3.3 - Inspection

-Check the installation progress

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

-Error logs
journalctl -xeu kubelet
journalctl -u flannel
journalctl -ex |grep failed

-Delete and recreate the cluster from the configuration file

./kk delete cluster -f 1.23.17.yaml
./kk create cluster -f 1.23.17.yaml

-List nodes and pods

kubectl get nodes
kubectl get pod --all-namespaces

-Restart a pod (scale it down to 0 and back up)

kubectl scale deployment ks-controller-manager --replicas=0 -n kubesphere-system
kubectl scale deployment ks-controller-manager --replicas=1 -n kubesphere-system

Access the KubeSphere web console

http://g.htmltoo.com:30880/
Use the default account and password (admin/P@88w0rd).

4. Access the web console

http://192.168.1.101:30880

Use the default account and password admin/P@88w0rd.

5. Deploy a minimal three-node KubeSphere cluster

5.1 - Get the list of available nodes in the Kubernetes cluster

-Run kubectl on the master-0 node

kubectl get nodes

5.2 - List the running Pods:

kubectl get pods --all-namespaces

5.3 - Create an Nginx Deployment

5.3.1 - Create Pods with two replicas based on the nginx:alpine image

kubectl create deployment nginx --image=nginx:alpine --replicas=2

5.3.2 - Create an Nginx Service

-Create a new Kubernetes Service named nginx of type NodePort, exposing port 80

kubectl create service nodeport nginx --tcp=80:80

5.4 - View the created Deployment and Pod resources.

kubectl get deployment -o wide
kubectl get pods -o wide

5.4.1 - Show information about the Deployment

kubectl describe deployments nginx

5.4.2 - Show detailed information about the Service

kubectl describe services nginx

5.4.3 - View all resources in the default namespace

kubectl get all -n default

5.4.4 - Autoscaling

-Keep at least 2 Pods; when the Deployment's Pods reach 80% average CPU utilization, scale out to between 2 and 5 Pods

kubectl autoscale deployment nginx --min=2 --max=5 --cpu-percent=80

-The command above creates an HPA

kubectl get hpa
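
For reference, a roughly equivalent declarative manifest for the HPA created by the autoscale command above (a sketch targeting the average CPU utilization of the nginx Deployment):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80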

5.4.5 - Delete resources

-Delete the Service

kubectl delete services nginx
-Delete the Deployment, its ReplicaSet, and its Pods
kubectl delete deployment nginx

-Delete all the Pods in a StatefulSet:
kubectl delete pod -l app=nginx

-Delete a Pod by the name defined in pod.yaml

kubectl delete -f pod.yaml

-Delete all Pods and Services with a given label

kubectl delete pods,services -l name=

-Delete all Pods

kubectl delete pods --all

5.4.6 - Logs

-Follow a container's logs, similar to tail -f

kubectl logs

kubectl logs -f -c
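
Concrete examples with placeholder names (the Pod name nginx-xxx and container name nginx are hypothetical):

kubectl logs nginx-xxx
kubectl logs -f nginx-xxx -c nginx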

5.4.7 - Execute commands in a container

-Run the date command in a Pod; by default the first container in the Pod is used

kubectl exec date

-Run the date command in a specific container of the Pod

kubectl exec -c date

-Get a TTY into one of the Pod's containers via bash, equivalent to logging in to the container

kubectl exec -it -c /bin/bash
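
Concrete examples with placeholder names (Pod nginx-xxx and container nginx are hypothetical):

kubectl exec nginx-xxx -- date
kubectl exec nginx-xxx -c nginx -- date
kubectl exec -it nginx-xxx -c nginx -- /bin/bash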

5.4.8 - Copy files

kubectl cp :/tmp/java.out /root/java.out -c 容器名 -n

kubectl cp /tmp/dir :/tmp/ -c 容器名 -n
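
Concrete examples with placeholder names (namespace demo, Pod my-pod, and container my-container are hypothetical):

kubectl cp demo/my-pod:/tmp/java.out /root/java.out -c my-container
kubectl cp /tmp/dir demo/my-pod:/tmp/ -c my-container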

5.4.9 - Export a Deployment's YAML

kubectl get deployment nginx -o yaml > deployment.nginx

5.4.10 - Expose a service

--port=80  # the port the Service listens on; accessible via the Cluster IP and this port

--target-port=8080  # the port the application container listens on

kubectl expose deployment nginx --port=80 --target-port=8080 --type=NodePort

kubectl expose deployment nginx --port=80 --target-port=8080 --type=NodePort --dry-run=client -o yaml > service.nginx

5.4.12 - Forward a Pod's port to the local machine

-Map the Pod's port 80 to local port 8888

kubectl port-forward --address 0.0.0.0 8888:80

5.4.13 - YAML file reference

http://g.htmltoo.com:8090/?p=eab96f1e-f39f-4389-8b96-517b6bf4a40b

5.4.14 - Add taints to keep Pods off specific Nodes

-View node taints
kubectl describe node node1 node2 node3 | grep Taints

Kubernetes has three taint effects:
NoSchedule: Kubernetes will not schedule Pods onto a Node with this taint.
PreferNoSchedule: Kubernetes will try to avoid scheduling Pods onto a Node with this taint.
NoExecute: Kubernetes will not schedule Pods onto a Node with this taint and will also evict Pods already running on it.

-Add a taint on node1 with key "key", value "value", and effect NoSchedule. This means that unless a Pod explicitly declares that it tolerates this taint, it will not be scheduled onto node1.

kubectl taint node node1 key=value:NoSchedule

-Remove the taint

kubectl taint node node1 key:NoSchedule-

-Declare a toleration on the Pod. The toleration below tolerates the taint, allowing the Pod to be scheduled onto node1.

-Tolerations allow Pods to be scheduled onto Nodes that carry matching taints.

If operator is Exists, no value needs to be specified.
If operator is Equal, the values must also be equal.
If operator is not specified, it defaults to Equal.

spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
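
A variant using operator Exists, which tolerates the taint regardless of its value (a sketch following the rules above):

spec:
  tolerations:
  - key: "key"
    operator: "Exists"   # no value needed with Exists
    effect: "NoSchedule"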

5.4.15 - Create a Pod spec and scale Pods: tolerations and taints

-Create the Pod configuration

taint-pod.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: taint-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: taint-pod
  template:
    metadata:
      labels:
        app: taint-pod
    spec:
      containers:
      - image: busybox:latest
        name: taint-pod
        command: [ "/bin/sh", "-c", "tail -f /etc/passwd" ]

-Create the Pods

kubectl apply -f taint-pod.yaml
deployment.apps/taint-deploy created

-View the Pods

kubectl get pods -o wide | grep taint

-Scale the Deployment to 5 Pods to make the effect of the taint easy to observe

kubectl scale --replicas=5 deploy/taint-deploy -n default

-Remove the taint from node1

kubectl taint node node1 type:NoSchedule-
node/node1 untainted

-Scale down, then scale up again

kubectl scale --replicas=1 deploy/taint-deploy -n default

kubectl scale --replicas=2 deploy/taint-deploy -n default

-View the Pods

kubectl get pods -o wide | grep taint

5.4.16 - Assign Pods to Nodes using node affinity

-Pick a node and add a label to it
kubectl label nodes disktype=ssd

-Verify that the chosen node has the disktype=ssd label:
kubectl get nodes --show-labels

-Schedule a Pod using required node affinity
-The manifest below describes a Pod with a requiredDuringSchedulingIgnoredDuringExecution node affinity for disktype=ssd. This means the Pod will only be scheduled onto nodes that have the disktype=ssd label.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd  
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent

-Schedule a Pod using preferred node affinity

-The manifest below describes a Pod with a preferredDuringSchedulingIgnoredDuringExecution node affinity for disktype=ssd. This means the scheduler will prefer nodes that have the disktype=ssd label.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd  
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent  

-Apply this manifest to create a Pod that will be scheduled onto the chosen node:

kubectl apply -f pod-nginx-preferred-affinity.yaml

-Verify that the Pod is running on the chosen node:
kubectl get pods --output=wide

5.4.17 - Rolling upgrades

-Here nginx is the Deployment name, nginx-app is the container name, nginx:latest is the image, and --record logs the command to make rollback easier

kubectl set image deployment nginx nginx-app=nginx:latest --record

-Check the rollout status

kubectl rollout status deployment nginx

-Redeploy (restart) the Deployment's Pods

kubectl rollout restart deploy nginx

-View the Deployment's rollout history

kubectl rollout history deployment nginx

-Roll back to the previous revision

kubectl rollout undo deployment nginx

-Roll back to a specific revision

kubectl rollout undo deployment nginx --to-revision=2

5.5 - Verify the Nginx Service

-List the available Services. In the list we can see that the nginx Service is of type NodePort and exposes port 32710 on the Kubernetes hosts.

kubectl get svc -o wide

NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE     SELECTOR
kubernetes   ClusterIP   10.233.0.1     <none>        443/TCP        16h     <none>
nginx        NodePort    10.233.10.40   <none>        80:32710/TCP   6m21s   app=nginx

-Access the Nginx Service

http://192.168.1.101:32710

6. Uninstall

kk delete cluster

kk delete cluster -f config.yaml

-Remove KubeSphere

wget https://raw.githubusercontent.com/kubesphere/ks-installer/master/scripts/kubesphere-delete.sh

wget http://up.htmltoo.com/soft/docker.tar/kubesphere-delete.sh
sh kubesphere-delete.sh
