KubeSphere

Overview

KubeSphere is a distributed operating system for cloud-native applications built on top of Kubernetes. It is fully open source, supports multi-cloud and multi-cluster management, provides full-stack IT automated operations capabilities, and simplifies enterprise DevOps workflows. Its architecture makes it easy to integrate third-party applications with cloud-native ecosystem components in a plug-and-play manner.

As a full-stack, multi-tenant container platform, KubeSphere provides an operations-friendly, wizard-style web console that helps enterprises quickly build a powerful and feature-rich container cloud platform. KubeSphere offers the features needed to build an enterprise-grade Kubernetes environment, such as multi-cloud and multi-cluster management, Kubernetes resource management, DevOps, application lifecycle management, microservice governance (service mesh), log query and collection, services and networking, multi-tenant management, monitoring and alerting, event and audit queries, storage management, access control, GPU support, network policies, image registry management, and security management.

KubeSphere also open-sources KubeKey, which helps enterprises quickly set up Kubernetes clusters on public clouds or in data centers with a single command, providing single-node and multi-node installation, cluster add-on installation, and cluster upgrades and operations.

Thanks to the open-source model, development is driven by the KubeSphere community in an open way. KubeSphere is 100% open source and free; it already serves community users at scale and is widely used in development, testing, and production environments centered on Docker and Kubernetes, with a large number of services running stably on top of it.

In short, KubeKey is an efficient tool that simplifies deploying Kubernetes and KubeSphere, and KubeSphere is a web platform for managing Kubernetes.

Deployment

Deploying Kubernetes by following the official documentation is quite tedious: you have to install the various components yourself and pick matching component versions. KubeKey can be used instead to deploy Kubernetes efficiently.

Typical KubeKey usage scenarios:

  • Install Kubernetes only;
  • Install Kubernetes and KubeSphere together in one command;
  • Scale a cluster out or in;
  • Upgrade a cluster;
  • Install Kubernetes-related add-ons (Chart or YAML).

KubeKey provides a built-in high-availability mode and supports installing a highly available Kubernetes cluster with a single command.
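As a rough sketch of how these scenarios map onto kk subcommands (the command and flag spellings follow the kk --help output shown later in this article; verify them against your KubeKey version before use):

# Install Kubernetes only
./kk create cluster --with-kubernetes v1.23.17

# Install Kubernetes and KubeSphere together in one command
./kk create cluster --with-kubernetes v1.23.17 --with-kubesphere v3.4.1

# Scale the cluster: add or remove nodes declared in the configuration file
./kk add nodes -f config-sample.yaml
./kk delete node node2 -f config-sample.yaml

# Upgrade the cluster
./kk upgrade --with-kubernetes v1.24.17 --with-kubesphere v3.4.1 -f config-sample.yaml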

High-Availability Cluster

As a cluster installation tool, KubeKey has offered a built-in high-availability mode since version v1.2.1, supporting one-command deployment of a highly available cluster. KubeKey's HA implementation is called local load-balancing mode: KubeKey deploys a load balancer (HAProxy) on every worker node; the Kubernetes components on the control-plane nodes connect to their local kube-apiserver, while the Kubernetes components on the worker nodes are reverse-proxied to the kube-apiservers of the control-plane nodes through the load balancer deployed by KubeKey. Compared with a dedicated load balancer, this mode is somewhat less efficient because it introduces an additional health-check mechanism, but it is a more practical, effective, and convenient high-availability deployment mode when the environment cannot provide an external load balancer or a virtual IP (VIP).

The architecture is shown below:

High-availability architecture
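Once the cluster built below is up, this mode can be observed directly: there is one HAProxy pod per worker node in the kube-system namespace (see the pod list at the end of this article), and on each worker node the control-plane endpoint domain from the configuration file (lb.kubesphere.local by default) is resolved locally. A minimal spot check, assuming the default domain is kept:

# One haproxy pod per worker node, running in kube-system
kubectl -n kube-system get pods -o wide | grep haproxy

# On a worker node: how the control-plane endpoint domain is resolved
grep lb.kubesphere.local /etc/hosts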

Environment

Host plan:

No.  Host     IP              CPU  Memory  Role
1    master1  192.168.92.143  2    4 GB    Control Plane, Etcd
2    master2  192.168.92.144  2    4 GB    Control Plane, Etcd
3    master3  192.168.92.145  2    4 GB    Control Plane, Etcd
4    node1    192.168.92.146  2    4 GB    Worker Node 1
5    node2    192.168.92.147  2    4 GB    Worker Node 2

Note:

Each host needs at least 2 CPUs, and at least 4 GB of memory is recommended.

Software versions:

No.  Name        Release
1    CentOS      7.9
2    KubeKey     3.1.8
3    Kubernetes  1.23.17
4    KubeSphere  3.4.1

Preparation

Create 5 CentOS hosts and first check the CPU and memory:

[root@localhost ~]# grep processor /proc/cpuinfo
processor       : 0
processor       : 1
[root@localhost ~]# grep MemTotal /proc/meminfo
MemTotal:        3861288 kB

Then configure the hostname and network address on each of the 5 hosts:

Master 1:

[root@localhost ~]# hostnamectl set-hostname master1.stonecoding.net
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33 
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
IPADDR=192.168.92.143
NETMASK=255.255.255.0
GATEWAY=192.168.92.2
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
[root@localhost ~]# systemctl restart network 

Master 2:

[root@localhost ~]# hostnamectl set-hostname master2.stonecoding.net
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33 
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
IPADDR=192.168.92.144
NETMASK=255.255.255.0
GATEWAY=192.168.92.2
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
[root@localhost ~]# systemctl restart network 

Master 3:

[root@localhost ~]# hostnamectl set-hostname master3.stonecoding.net
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33 
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
IPADDR=192.168.92.145
NETMASK=255.255.255.0
GATEWAY=192.168.92.2
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
[root@localhost ~]# systemctl restart network 

Node 1:

[root@localhost ~]# hostnamectl set-hostname node1.stonecoding.net
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33 
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
IPADDR=192.168.92.146
NETMASK=255.255.255.0
GATEWAY=192.168.92.2
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
[root@localhost ~]# systemctl restart network 

Node 2:

[root@localhost ~]# hostnamectl set-hostname node2.stonecoding.net
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33 
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
IPADDR=192.168.92.147
NETMASK=255.255.255.0
GATEWAY=192.168.92.2
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
[root@localhost ~]# systemctl restart network 

Then create the script kubesphere-prepare.sh:

#!/bin/bash

# Configure /etc/hosts
echo "=========edit /etc/hosts============="
cat >> /etc/hosts << EOF
192.168.92.143   master1   master1.stonecoding.net
192.168.92.144   master2   master2.stonecoding.net
192.168.92.145   master3   master3.stonecoding.net
192.168.92.146   node1     node1.stonecoding.net
192.168.92.147   node2     node2.stonecoding.net
EOF

# Configure DNS
echo "=========edit /etc/resolv.conf ============="
cat >> /etc/resolv.conf << EOF
nameserver 192.168.92.2
EOF

# Disable the firewall
echo "=========stop firewalld============="
systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld

# Stop NetworkManager
echo "=========stop NetworkManager ============="
systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl status NetworkManager

# Disable SELinux
echo "=========disable selinux============="
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0
getenforce

# Disable swap
echo "=========close swap============="
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
free -m

# Configure YUM repositories
echo "=========config yum============="
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# On RHEL, also run the following substitution
#sed -i "s/\$releasever/7/g" /etc/yum.repos.d/CentOS-Base.repo
curl -o /etc/yum.repos.d/epel.repo  http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache

# Time synchronization
echo "=========sync time============="
yum -y install chrony
cat >> /etc/chrony.conf << EOF
server ntp.aliyun.com iburst
EOF
systemctl start chronyd
systemctl enable chronyd
chronyc sources

Run the script on each of the 5 hosts, and then reboot the hosts.

[root@master1 ~]# sh kubesphere-prepare.sh
[root@master1 ~]# init 6
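A small optional convenience loop: from master1, copy the script to the other four hosts and run it there over SSH (this assumes root SSH logins work; you will be prompted for each password unless keys have been distributed). master1 itself still runs the script locally as shown above.

for ip in 192.168.92.144 192.168.92.145 192.168.92.146 192.168.92.147; do
  scp kubesphere-prepare.sh root@${ip}:/root/
  ssh root@${ip} "sh /root/kubesphere-prepare.sh && init 6"
done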

Download KubeKey

Download KubeKey as described in the official documentation:

[root@master1 ~]# export KKZONE=cn
[root@master1 ~]# curl -sfL https://get-kk.kubesphere.io | sh -  

However, the download failed, so KubeKey was downloaded from SourceForge instead and extracted:

[root@master1 ~]# curl -o kubekey-v3.1.8-linux-amd64.tar.gz https://master.dl.sourceforge.net/project/kubekey.mirror/v3.1.8/kubekey-v3.1.8-linux-amd64.tar.gz?viasf=1
[root@master1 ~]# tar -xvzf kubekey-v3.1.8-linux-amd64.tar.gz 
kk
[root@master1 ~]# ./kk -h
Deploy a Kubernetes or KubeSphere cluster efficiently, flexibly and easily. There are three scenarios to use KubeKey.
1. Install Kubernetes only
2. Install Kubernetes and KubeSphere together in one command
3. Install Kubernetes first, then deploy KubeSphere on it using https://github.com/kubesphere/ks-installer

Usage:
  kk [command]

Available Commands:
  add          Add nodes to kubernetes cluster
  alpha        Commands for features in alpha
  artifact     Manage a KubeKey offline installation package
  certs        cluster certs
  cluster-info display cluster information
  completion   Generate shell completion scripts
  create       Create a cluster or a cluster configuration file
  delete       Delete node or cluster
  help         Help about any command
  init         Initializes the installation environment
  plugin       Provides utilities for interacting with plugins
  upgrade      Upgrade your cluster smoothly to a newer version with this command
  version      print the client version information

Flags:
  -h, --help   help for kk

Use "kk [command] --help" for more information about a command.
[root@master1 ~]# ./kk version
kk version: &version.Info{Major:"3", Minor:"1", GitVersion:"v3.1.8", GitCommit:"dbb1ee4aa1ecf0586565ff3374427d8a7d9b327b", GitTreeState:"clean", BuildDate:"2025-03-26T04:49:07Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}

List the Kubernetes versions supported by the current KubeKey release:

[root@master1 ~]# ./kk version --show-supported-k8s
v1.19.0
v1.19.8
v1.19.9
v1.19.15
v1.20.4
v1.20.6
v1.20.10
v1.21.0
......
v1.21.14
v1.22.0
......
v1.22.17
v1.23.0
......
v1.23.17
......
v1.31.7
v1.32.0
v1.32.1
v1.32.2
v1.32.3

The Kubernetes versions supported by KubeSphere 3.4 are as follows:

KubeSphere version   Supported Kubernetes versions
v3.4                 v1.21.x, v1.22.x, v1.23.x, *v1.24.x, *v1.25.x and *v1.26.x
  • The Kubernetes versions that KubeKey can install differ from the Kubernetes versions supported by KubeSphere 3.4. To install KubeSphere 3.4 on an existing Kubernetes cluster, the Kubernetes version must be v1.21.x, v1.22.x, v1.23.x, *v1.24.x, *v1.25.x or *v1.26.x.
  • For versions marked with an asterisk, some edge-node features may be unavailable. Therefore, if you need KubeEdge, it is recommended to install Kubernetes v1.23.x to avoid compatibility issues.
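Both constraints have to hold at the same time, so before generating the configuration it is worth confirming that the chosen version appears in KubeKey's supported list, for example:

# v1.23.17 is installable by KubeKey 3.1.8 and supported by KubeSphere 3.4
./kk version --show-supported-k8s | grep v1.23.17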

Create the Cluster Configuration File

Create the cluster configuration file, specifying KubeSphere version 3.4.1 and Kubernetes version 1.23.17:

[root@master1 ~]# ./kk create config --with-kubesphere v3.4.1 --with-kubernetes v1.23.17
Generate KubeKey config file successfully

The -f option was not used to specify a configuration file name here, so the default name config-sample.yaml is used. Edit the file according to your environment to add the host information, configure the load balancer, and so on.

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: stone
spec:
  hosts:
  - {name: master1, address: 192.168.92.143, internalAddress: 192.168.92.143, user: root, password: "123456"}
  - {name: master2, address: 192.168.92.144, internalAddress: 192.168.92.144, user: root, password: "123456"}
  - {name: master3, address: 192.168.92.145, internalAddress: 192.168.92.145, user: root, password: "123456"}
  - {name: node1, address: 192.168.92.146, internalAddress: 192.168.92.146, user: root, password: "123456"}
  - {name: node2, address: 192.168.92.147, internalAddress: 192.168.92.147, user: root, password: "123456"}
  roleGroups:
    etcd:
    - master1
    - master2
    - master3
    control-plane: 
    - master1
    - master2
    - master3
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.17
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: "registry.cn-beijing.aliyuncs.com"
    namespaceOverride: "kubesphereio"
    registryMirrors: ["https://registry.cn-hangzhou.aliyuncs.com"]
    insecureRegistries: ["harbor.stonecoding.net"]
  addons: []



---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.4.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  local_registry: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      enableHA: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      enabled: false
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
    opensearch:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      enabled: true
      logMaxAge: 7
      opensearchPrefix: whizard
      basicAuth:
        enabled: true
        username: "admin"
        password: "admin"
      externalOpensearchHost: ""
      externalOpensearchPort: ""
      dashboard:
        enabled: false
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    jenkinsCpuReq: 0.5
    jenkinsCpuLim: 1
    jenkinsMemoryReq: 4Gi
    jenkinsMemoryLim: 4Gi
    jenkinsVolumeSize: 16Gi
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    ruler:
      enabled: true
      replicas: 2
    #   resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  gatekeeper:
    enabled: false
    # controller_manager:
    #   resources: {}
    # audit:
    #   resources: {}
  terminal:
    timeout: 600
  zone: ""

Under spec:hosts, set the information of each server:

Parameter        Description
name             User-defined server name.
address          SSH login IP address of the server.
internalAddress  IP address of the server within the subnet.
port             SSH port of the server. Can be omitted when the default port 22 is used.
user             SSH login user. The user must be root or another user with sudo privileges. Can be omitted when root is used.
password         SSH login password of the server.
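KubeKey drives the whole installation over SSH using these entries, so a quick connectivity check from the node where kk runs can save a failed run (you will be prompted for the password on each host unless keys are set up):

# Verify that master1 can reach every host over SSH as root
for ip in 192.168.92.143 192.168.92.144 192.168.92.145 192.168.92.146 192.168.92.147; do
  ssh -o ConnectTimeout=5 root@${ip} hostname
done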

Under spec:roleGroups, set the roles of the servers:

Parameter      Description
etcd           Nodes where the etcd database is installed. Set this to the control-plane nodes of the cluster.
control-plane  Control-plane nodes of the cluster. Multiple control-plane nodes can be set when high availability is configured for the cluster.
worker         Worker nodes of the cluster.
registry       Server used to host a private image registry. This server is not used as a cluster node.
               When installing or upgrading KubeSphere, set this parameter if the cluster nodes cannot access the Internet, so that a private image registry can be created.
               In all other cases, comment this parameter out.

Under spec:controlPlaneEndpoint, set the high-availability information:

Parameter             Description
internalLoadbalancer  Type of the local load balancer. Set this to haproxy when using the local load-balancing configuration; otherwise, comment it out.
domain                Internal access domain of the load balancer. Set this to lb.kubesphere.local.
address               IP address of the load balancer.
                      Leave it empty when using the local load-balancing configuration;
                      set it to the load balancer's IP address when using a dedicated load balancer;
                      set it to the load balancer's floating IP address when a generic server is used as the load balancer.
port                  Port the load balancer listens on, i.e. the port of the apiserver service. Set this to 6443.

Under spec:kubernetes, set the Kubernetes information:

Parameter         Description
version           Kubernetes version to install.
clusterName       Kubernetes cluster name.
autoRenewCerts    Automatically renew certificates when they expire. Defaults to true.
containerManager  Container runtime, chosen according to the Kubernetes version: docker for 1.23 and earlier, containerd for versions later than 1.23.
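With autoRenewCerts enabled, KubeKey renews the kubeadm-managed certificates automatically. Independently of that, the current expiration dates can always be inspected on a control-plane node with a standard kubeadm command (not specific to KubeKey):

# On any control-plane node: show the expiration of the kubeadm-managed certificates
kubeadm certs check-expiration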

Under spec:network, set the Kubernetes network information:

Parameter        Description
plugin           CNI plugin to use. KubeKey installs Calico by default; Flannel can also be specified. Note that some features, such as Pod IP pools, are only available with Calico as the CNI plugin.
kubePodsCIDR     Valid CIDR block for the Kubernetes Pod subnet. It should not overlap with your node subnet or the Kubernetes Service subnet.
kubeServiceCIDR  Valid CIDR block for Kubernetes Services. It should not overlap with your node subnet or the Kubernetes Pod subnet.

Under spec:registry, set the image registry information:

Parameter           Description
privateRegistry     Private image registry used for offline installation (for example, a local Docker registry or Harbor). Works around Docker Hub images being unavailable during deployment.
namespaceOverride   Namespace in the image registry where the required images are stored. Works around Docker Hub images being unavailable during deployment.
registryMirrors     Docker registry mirrors used to speed up image downloads. Written to /etc/docker/daemon.json.
insecureRegistries  Addresses of insecure Docker registries. Written to /etc/docker/daemon.json.
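Since registryMirrors and insecureRegistries are written to /etc/docker/daemon.json, a quick spot check on any node after installation shows whether the settings above took effect (assuming the default file location):

cat /etc/docker/daemon.json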

After finishing the configuration, run the following command to install the dependency packages on all nodes:

[root@master1 ~]# ./kk init os -f config-sample.yaml
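kk init os installs the operating-system dependencies KubeKey needs on every node. The KubeSphere documentation lists packages such as socat, conntrack, ebtables and ipset among these dependencies, so a spot check on any node might look like this (package names assume CentOS 7):

# Confirm the node dependencies are present after `kk init os`
rpm -q socat conntrack-tools ebtables ipset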

Create the Cluster

Run the following command to create the cluster:

[root@master1 ~]# ./kk create cluster -f config-sample.yaml

Watch the command output, or run the following command to follow the installation logs:

[root@master1 ~]# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

If the following message appears, the high-availability cluster has been created successfully:

#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.92.143:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components 
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2025-04-26 15:32:03
#####################################################

Here http://192.168.92.143:30880 is the address of the KubeSphere console, admin is the username, and P@88w0rd is the initial password.

You can also check the log file /root/kubekey/logs/kubekey.log for details about the installation process.

If the installation runs into problems, you can delete the cluster with the following command and then install again:

[root@master1 ~]# ./kk delete cluster -f config-sample.yaml

Check the cluster status:

[root@master1 ~]# kubectl get nodes
NAME      STATUS   ROLES                  AGE   VERSION
master1   Ready    control-plane,master   18m   v1.23.17
master2   Ready    control-plane,master   18m   v1.23.17
master3   Ready    control-plane,master   18m   v1.23.17
node1     Ready    worker                 18m   v1.23.17
node2     Ready    worker                 18m   v1.23.17

Check the Pods:

[root@master1 ~]# kubectl get pods -A
NAMESPACE                      NAME                                               READY   STATUS    RESTARTS   AGE
kube-system                    calico-kube-controllers-6f996c8485-96rm2           1/1     Running   0          24m
kube-system                    calico-node-h7rzt                                  1/1     Running   0          24m
kube-system                    calico-node-krw89                                  1/1     Running   0          24m
kube-system                    calico-node-mrjq5                                  1/1     Running   0          24m
kube-system                    calico-node-v4pj8                                  1/1     Running   0          24m
kube-system                    calico-node-xpgqb                                  1/1     Running   0          24m
kube-system                    coredns-d9bcd6987-4hk6b                            1/1     Running   0          24m
kube-system                    coredns-d9bcd6987-wb2dm                            1/1     Running   0          24m
kube-system                    haproxy-node1                                      1/1     Running   0          24m
kube-system                    haproxy-node2                                      1/1     Running   0          24m
kube-system                    kube-apiserver-master1                             1/1     Running   0          25m
kube-system                    kube-apiserver-master2                             1/1     Running   0          24m
kube-system                    kube-apiserver-master3                             1/1     Running   0          24m
kube-system                    kube-controller-manager-master1                    1/1     Running   0          25m
kube-system                    kube-controller-manager-master2                    1/1     Running   0          24m
kube-system                    kube-controller-manager-master3                    1/1     Running   0          24m
kube-system                    kube-proxy-9995j                                   1/1     Running   0          24m
kube-system                    kube-proxy-c54x2                                   1/1     Running   0          24m
kube-system                    kube-proxy-gmvf7                                   1/1     Running   0          24m
kube-system                    kube-proxy-jm8pf                                   1/1     Running   0          24m
kube-system                    kube-proxy-nbxbz                                   1/1     Running   0          24m
kube-system                    kube-scheduler-master1                             1/1     Running   0          25m
kube-system                    kube-scheduler-master2                             1/1     Running   0          24m
kube-system                    kube-scheduler-master3                             1/1     Running   0          24m
kube-system                    nodelocaldns-4bsmq                                 1/1     Running   0          24m
kube-system                    nodelocaldns-52p76                                 1/1     Running   0          24m
kube-system                    nodelocaldns-c4l27                                 1/1     Running   0          24m
kube-system                    nodelocaldns-mqps7                                 1/1     Running   0          24m
kube-system                    nodelocaldns-zrbpm                                 1/1     Running   0          24m
kube-system                    openebs-localpv-provisioner-7bbcf865cd-zbsnd       1/1     Running   0          23m
kube-system                    snapshot-controller-0                              1/1     Running   0          22m
kubesphere-controls-system     default-http-backend-659cc67b6b-vwn2x              1/1     Running   0          20m
kubesphere-controls-system     kubectl-admin-7966644f4b-l8h9m                     1/1     Running   0          16m
kubesphere-monitoring-system   alertmanager-main-0                                2/2     Running   0          19m
kubesphere-monitoring-system   alertmanager-main-1                                2/2     Running   0          19m
kubesphere-monitoring-system   alertmanager-main-2                                2/2     Running   0          19m
kubesphere-monitoring-system   kube-state-metrics-856b7b8fdd-sc88z                3/3     Running   0          19m
kubesphere-monitoring-system   node-exporter-8hsgv                                2/2     Running   0          19m
kubesphere-monitoring-system   node-exporter-gngxw                                2/2     Running   0          19m
kubesphere-monitoring-system   node-exporter-mr2bv                                2/2     Running   0          19m
kubesphere-monitoring-system   node-exporter-tqnms                                2/2     Running   0          19m
kubesphere-monitoring-system   node-exporter-z954m                                2/2     Running   0          19m
kubesphere-monitoring-system   notification-manager-deployment-6cd86468dc-4b4x5   2/2     Running   0          18m
kubesphere-monitoring-system   notification-manager-deployment-6cd86468dc-h7vp6   2/2     Running   0          18m
kubesphere-monitoring-system   notification-manager-operator-b9d6bf9d4-86x5t      2/2     Running   0          18m
kubesphere-monitoring-system   prometheus-k8s-0                                   2/2     Running   0          19m
kubesphere-monitoring-system   prometheus-k8s-1                                   2/2     Running   0          19m
kubesphere-monitoring-system   prometheus-operator-684988fc5c-lcql8               2/2     Running   0          19m
kubesphere-system              ks-apiserver-647c688448-67cqj                      1/1     Running   0          20m
kubesphere-system              ks-apiserver-647c688448-sj5wc                      1/1     Running   0          20m
kubesphere-system              ks-apiserver-647c688448-zxfc8                      1/1     Running   0          20m
kubesphere-system              ks-console-777b56767b-7vdzq                        1/1     Running   0          20m
kubesphere-system              ks-console-777b56767b-cjvh5                        1/1     Running   0          20m
kubesphere-system              ks-console-777b56767b-lbhls                        1/1     Running   0          20m
kubesphere-system              ks-controller-manager-84f9949db4-544pl             1/1     Running   0          20m
kubesphere-system              ks-controller-manager-84f9949db4-fmn48             1/1     Running   0          20m
kubesphere-system              ks-controller-manager-84f9949db4-vlrlv             1/1     Running   0          20m
kubesphere-system              ks-installer-ddbcf44f8-q4dhj                       1/1     Running   0          23m
kubesphere-system              redis-76dd4856b6-lrmtz                             1/1     Running   0          21m

Once all Pods are in the Running state, open the KubeSphere console at http://192.168.92.143:30880 in a browser. After entering the username and password, you will be prompted to change the password (set here to Abcd1234); you can then enter KubeSphere and view and manage the Kubernetes cluster.
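The console address uses the NodePort (30880) configured in the ClusterConfiguration above, so any node IP works, not only 192.168.92.143. A quick way to confirm the exposed service and port:

# ks-console is exposed as a NodePort service on port 30880
kubectl -n kubesphere-system get svc ks-console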
