# Deploying a Kubernetes Cluster with kubeadm on CentOS 7

Contents:

1. Provisioning the cluster node OS — 1.1 Configure hosts · 1.2 Configure the nameserver · 1.3 Disable the firewall · 1.4 Disable SELinux · 1.5 Disable the swap partition · 1.6 Time synchronization · 1.7 Tune kernel parameters · 1.8 Upgrade the system kernel
2. Install Docker — 2.1 Remove old Docker · 2.2 Configure the Docker repository · 2.3 Install Docker
3. Deploy the Kubernetes cluster — 3.1 Configure the K8s repository · 3.2 Install kubeadm, kubelet and kubectl · 3.3 Download images · 3.4 Deploy the Kubernetes master · 3.5 Configuration files · 3.6 Join the other nodes · 3.7 Install a network plugin · 3.8 Check node taints · 3.9 Common k8s cluster commands · 3.10 Install a k8s dashboard

## 1. Provisioning the cluster node OS

- System: one or more machines running CentOS-7-2009 (adjust to your environment).
- Hardware: 2 GB of RAM or more, 2 CPUs or more, 30 GB of disk or more.
- Network: all machines in the cluster can reach each other, and can reach the internet (needed to pull images).
- Other: swap partition disabled, time synchronized, and so on.

Goals:

1. Install Docker and kubeadm on all nodes
2. Deploy the Kubernetes master
3. Deploy a container network plugin
4. Deploy the Kubernetes nodes and join them to the cluster
5. Deploy the Dashboard web UI to inspect Kubernetes resources visually

| HostName | IP | CPU | Memory | Disk | Data disks | OS | Role |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ceph61 | 192.168.120.61 | 4 | 4G | 80G (OS) | 30G+30G+30G (/dev/sdb /dev/sdc /dev/sdd) | CentOS-7-x86_64-Everything-2009.iso | k8s-master |
| ceph62 | 192.168.120.62 | 4 | 4G | 50G (OS) | 30G+30G+30G (/dev/sdb /dev/sdc /dev/sdd) | CentOS-7-x86_64-Everything-2009.iso | k8s-node |
| ceph63 | 192.168.120.63 | 4 | 4G | 50G (OS) | 30G+30G+30G (/dev/sdb /dev/sdc /dev/sdd) | CentOS-7-x86_64-Everything-2009.iso | k8s-node |

### 1.1 Configure hosts

```bash
# Run on all cluster nodes
cat >> /etc/hosts << EOF
192.168.120.61 ceph61
192.168.120.62 ceph62
192.168.120.63 ceph63
EOF
```

### 1.2 Configure the nameserver

```bash
# Run on all cluster nodes; for an isolated/intranet cluster, use the gateway IP here instead
vim /etc/resolv.conf
nameserver 114.114.114.114
nameserver 8.8.8.8
```

### 1.3 Disable the firewall

```bash
# Run on all cluster nodes
systemctl stop firewalld
systemctl disable firewalld
```

### 1.4 Disable SELinux

```bash
# Run on all cluster nodes
# Permanent
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# Temporary
setenforce 0
```

### 1.5 Disable the swap partition

```bash
# Run on all cluster nodes
# Temporary
swapoff -a
# Permanent
sed -ri 's/.*swap.*/#&/' /etc/fstab
```

### 1.6 Time synchronization

```bash
# Run on all cluster nodes
# Quick lab method
yum install ntpdate -y
ntpdate time.windows.com
```

```bash
# Production, internet-connected environment
yum install chrony -y
# Back up the configuration
cp /etc/chrony.conf /etc/chrony.conf.orig
sed -i '/^pool/s/^/#/' /etc/chrony.conf   # comment out the default pool
grep '#pool' /etc/chrony.conf
sed -i '/#pool/a\server cn.pool.ntp.org iburst' /etc/chrony.conf
sed -i '/#pool/a\server ntp.ntsc.ac.cn iburst' /etc/chrony.conf
sed -i '/#pool/a\server ntp1.aliyun.com iburst' /etc/chrony.conf
grep -A 3 '#pool' /etc/chrony.conf
```

```bash
# Production, offline environment
# Time server node (/etc/chrony.conf)
allow 192.168.120.0/24
server 127.127.0.1 iburst
driftfile /var/lib/chrony/drift
keyfile /etc/chrony.keys
leapsectz right/UTC
local stratum 10
makestep 1.0 3
rtcsync
logdir /var/log/chrony

# Time client (/etc/chrony.conf)
allow 192.168.120.0/24
server 192.168.120.41 iburst
driftfile /var/lib/chrony/drift
keyfile /etc/chrony.keys
leapsectz right/UTC
local stratum 10
makestep 1.0 3
rtcsync
logdir /var/log/chrony

# Restart the service
systemctl restart chronyd.service
```

### 1.7 Tune kernel parameters

```bash
# Run on all cluster nodes
cat > /etc/sysctl.d/kubernetes.conf << EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
EOF

# Apply the changes
modprobe br_netfilter
sysctl -p /etc/sysctl.d/kubernetes.conf
```

### 1.8 Upgrade the system kernel

```bash
# Run on all cluster nodes
# 1. Add the repository
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
# 2. Install the long-term-support kernel
yum --enablerepo=elrepo-kernel install -y kernel-lt
# 3. Boot from the new kernel by default
grub2-set-default 'CentOS Linux (5.4.267-1.el7.elrepo.x86_64) 7 (Core)'
# 4. Reboot to take effect
reboot
# 5. Verify
uname -a
# 6. Install updates
yum update -y
```
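Before moving on to Docker, it is worth confirming that the preparation steps actually took effect on every node. A minimal verification sketch (not from the original article; the chrony check only applies if chrony was used for time sync):

```bash
# Quick sanity check of the node preparation (run on each node)
getenforce                                   # expect Permissive now, Disabled after reboot
free -m | grep -i swap                       # expect all zeros
systemctl is-active firewalld                # expect inactive
lsmod | grep br_netfilter                    # expect the module to be loaded
sysctl net.bridge.bridge-nf-call-iptables    # expect the value 1
chronyc sources | head                       # expect at least one reachable NTP source
uname -r                                     # expect the new 5.4.x elrepo kernel after reboot
```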
## 2. Install Docker

### 2.1 Remove old Docker

```bash
# Run on all cluster nodes: remove any old docker/docker-engine packages and their dependencies
sudo yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine
```

### 2.2 Configure the Docker repository

```bash
# Run on all cluster nodes: install the required packages
yum install -y yum-utils device-mapper-persistent-data lvm2
# Aliyun mirror
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Refresh the package index
yum makecache fast
# List available docker versions
yum list docker-ce --showduplicates | sort -r
```

### 2.3 Install Docker

```bash
# Run on all cluster nodes; without a version the latest release is installed
yum install docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Or install a specific version:
# yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io
# e.g. yum install docker-ce-24.xx.x.ce docker-ce-cli-24.xx.x.ce containerd.io
```

```bash
# Configure the registry mirror and cgroup driver
mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://i8d2zxyn.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"},
  "storage-driver": "overlay2"
}
EOF

# Enable and start the docker service
systemctl status docker        # check service status
systemctl start docker         # start the service
systemctl stop docker          # stop the service
systemctl enable docker        # start on boot
systemctl is-enabled docker    # check whether start-on-boot is set
```

```bash
[root@ceph61 ~]# docker --version
Docker version 25.0.2, build 29cf629
```
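The `native.cgroupdriver=systemd` setting matters because the kubelet on recent Kubernetes versions expects the systemd cgroup driver. A quick check that Docker actually picked up `daemon.json` (an added sketch, not from the original article):

```bash
# Confirm Docker applied the daemon.json settings (run on each node)
docker info --format '{{.CgroupDriver}}'    # expect: systemd
docker info --format '{{.Driver}}'          # expect: overlay2
```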
## 3. Deploy the Kubernetes cluster

### 3.1 Configure the K8s repository

```bash
# Run on all cluster nodes
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
EOF

# Build the local cache
yum makecache
```

### 3.2 Install kubeadm, kubelet and kubectl

```bash
# A version should normally be pinned here; without one, the latest version is installed,
# which can be incompatible with Docker and make cluster initialization fail.
# Since Docker was installed at its latest version, the latest k8s packages are used here as well.
yum -y install kubelet kubectl kubeadm
# Pinned-version example: yum -y install kubelet-1.22.4 kubectl-1.22.4 kubeadm-1.22.4
```

```bash
[root@ceph61 ~]# kubelet --version
Kubernetes v1.28.2
[root@ceph61 ~]# kubectl version
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
[root@ceph61 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.2", GitCommit:"89a4ea3e1e4ddd7f7572286090359983e0387b2f", GitTreeState:"clean", BuildDate:"2023-09-13T09:34:32Z", GoVersion:"go1.20.8", Compiler:"gc", Platform:"linux/amd64"}
```

```bash
# Start kubelet and enable it on boot
systemctl start kubelet
systemctl enable kubelet

# kubelet cannot start successfully yet; it is brought up while `kubeadm init ...` runs
systemctl status kubelet.service
# Inspect errors
journalctl -xefu kubelet
```

### 3.3 Download images

```bash
# Run on all cluster nodes
# Without pre-downloading, `kubeadm init ...` tries to pull from registries that are unreachable from China and fails.
# 1. List the required images and versions: kubeadm config images list
[root@ceph61 ~]# kubeadm config images list
I0205 11:20:22.892197   35269 version.go:256] remote version is much newer: v1.29.1; falling back to: stable-1.28
registry.k8s.io/kube-apiserver:v1.28.6
registry.k8s.io/kube-controller-manager:v1.28.6
registry.k8s.io/kube-scheduler:v1.28.6
registry.k8s.io/kube-proxy:v1.28.6
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
```

```bash
# Pull the images manually (can be skipped if the nodes have unrestricted internet access)
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.6
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.6
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.6
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.28.6
docker pull registry.aliyuncs.com/google_containers/pause:3.9
docker pull registry.aliyuncs.com/google_containers/etcd:3.5.9-0
docker pull registry.aliyuncs.com/google_containers/coredns:v1.10.1

# Retag the images
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.6 registry.k8s.io/kube-apiserver:v1.28.6
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.6 registry.k8s.io/kube-controller-manager:v1.28.6
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.6 registry.k8s.io/kube-scheduler:v1.28.6
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.28.6 registry.k8s.io/kube-proxy:v1.28.6
docker tag registry.aliyuncs.com/google_containers/pause:3.9 registry.k8s.io/pause:3.9
docker tag registry.aliyuncs.com/google_containers/etcd:3.5.9-0 registry.k8s.io/etcd:3.5.9-0
docker tag registry.aliyuncs.com/google_containers/coredns:v1.10.1 registry.k8s.io/coredns/coredns:v1.10.1

# Remove the old tags (they can also be kept for now)
docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.6
docker rmi registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.6
docker rmi registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.6
docker rmi registry.aliyuncs.com/google_containers/kube-proxy:v1.28.6
docker rmi registry.aliyuncs.com/google_containers/pause:3.9
docker rmi registry.aliyuncs.com/google_containers/etcd:3.5.9-0
docker rmi registry.aliyuncs.com/google_containers/coredns:v1.10.1
```

```bash
# Check the manually downloaded images
[root@ceph61 ~]# docker images
REPOSITORY                                                         TAG       IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-apiserver             v1.28.6   70e88c5e3a8e   2 weeks ago     126MB
registry.k8s.io/kube-apiserver                                     v1.28.6   70e88c5e3a8e   2 weeks ago     126MB
registry.aliyuncs.com/google_containers/kube-scheduler             v1.28.6   7597ecaaf120   2 weeks ago     60.1MB
registry.k8s.io/kube-scheduler                                     v1.28.6   7597ecaaf120   2 weeks ago     60.1MB
registry.aliyuncs.com/google_containers/kube-controller-manager    v1.28.6   18dbd2df3bb5   2 weeks ago     122MB
registry.k8s.io/kube-controller-manager                            v1.28.6   18dbd2df3bb5   2 weeks ago     122MB
registry.aliyuncs.com/google_containers/kube-proxy                 v1.28.6   342a759d8815   2 weeks ago     77.9MB
registry.k8s.io/kube-proxy                                         v1.28.6   342a759d8815   2 weeks ago     77.9MB
registry.aliyuncs.com/google_containers/etcd                       3.5.9-0   73deb9a3f702   8 months ago    294MB
registry.k8s.io/etcd                                               3.5.9-0   73deb9a3f702   8 months ago    294MB
registry.aliyuncs.com/google_containers/coredns                    v1.10.1   ead0a4a53df8   12 months ago   53.6MB
registry.k8s.io/coredns/coredns                                    v1.10.1   ead0a4a53df8   12 months ago   53.6MB
registry.aliyuncs.com/google_containers/pause                      3.9       e6f181688397   15 months ago   744kB
registry.k8s.io/pause                                              3.9       e6f181688397   15 months ago   744kB
```
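Pulling and retagging each image by hand is repetitive; a small loop can do the same work. This is an added sketch using the same image names and versions listed above, not part of the original article:

```bash
#!/usr/bin/env bash
# Pull the control-plane images from the Aliyun mirror and retag them
# with the registry.k8s.io names that kubeadm expects (v1.28.6 list above).
set -e
MIRROR=registry.aliyuncs.com/google_containers

declare -A IMAGES=(
  ["kube-apiserver:v1.28.6"]="registry.k8s.io/kube-apiserver:v1.28.6"
  ["kube-controller-manager:v1.28.6"]="registry.k8s.io/kube-controller-manager:v1.28.6"
  ["kube-scheduler:v1.28.6"]="registry.k8s.io/kube-scheduler:v1.28.6"
  ["kube-proxy:v1.28.6"]="registry.k8s.io/kube-proxy:v1.28.6"
  ["pause:3.9"]="registry.k8s.io/pause:3.9"
  ["etcd:3.5.9-0"]="registry.k8s.io/etcd:3.5.9-0"
  ["coredns:v1.10.1"]="registry.k8s.io/coredns/coredns:v1.10.1"
)

for src in "${!IMAGES[@]}"; do
  docker pull "${MIRROR}/${src}"                 # pull from the reachable mirror
  docker tag  "${MIRROR}/${src}" "${IMAGES[$src]}"   # retag to the expected name
done
```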
### 3.4 Deploy the Kubernetes Master

```bash
# Problem 1: if the following error appears, the containerd CRI plugin needs fixing first
[root@ceph61 ~]# kubeadm init --apiserver-advertise-address=0.0.0.0 --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.28.6 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.28.6
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR CRI]: container runtime is not running: output: time="2024-02-05T11:28:28+08:00" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService", error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher
```

```bash
# Step 1: kubernetes manages the CRI with the crictl command, whose configuration file is /etc/crictl.yaml.
# The file does not exist initially; adding it is recommended, otherwise kubeadm init reports other errors.
cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 0
debug: false
pull-image-on-create: false
EOF

# Step 2: re-enable the containerd CRI plugin (change this on all three cluster nodes)
vim /etc/containerd/config.toml
# Comment out the line that disables CRI:
# disabled_plugins = ["cri"]

# Step 3: restart containerd
[root@ceph61 ~]# systemctl restart containerd
```

```bash
# Problem 2: error caused by a failure to pull the pause image
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
```

```bash
# Reference: https://zhuanlan.zhihu.com/p/660905540
# Step 1
containerd config dump > /etc/containerd/config.toml
vim /etc/containerd/config.toml
# Change this value
sandbox_image = "registry.k8s.io/pause:3.6"
# to
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
# Step 2
sudo systemctl restart containerd
```
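The two containerd fixes above are shown as manual edits. They can also be applied non-interactively; the sketch below is an assumption-laden convenience, valid only if `/etc/containerd/config.toml` uses the stock `disabled_plugins = ["cri"]` line and has already been regenerated with `containerd config dump` so that a `sandbox_image` line exists:

```bash
# Re-enable the CRI plugin and point the sandbox image at the Aliyun mirror (run on all three nodes)
sed -i 's/^disabled_plugins.*/disabled_plugins = []/' /etc/containerd/config.toml
sed -i 's#sandbox_image = .*#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
systemctl restart containerd
# Verify both settings took effect
grep -E 'disabled_plugins|sandbox_image' /etc/containerd/config.toml
```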
```bash
# K8s cluster initialization / re-initialization
# Before re-running the init command, first run:
kubeadm reset --force

# Initialize kubernetes with kubeadm; run on the master node (ceph61: k8s-master)
kubeadm init \
  --apiserver-advertise-address=0.0.0.0 \
  --image-repository=registry.aliyuncs.com/google_containers \
  --kubernetes-version=v1.28.6 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
```

The default image registry k8s.gcr.io is not reachable from China, so an Aliyun mirror is specified instead:

- `--apiserver-advertise-address`: the k8s-master IP; the default 0.0.0.0 is fine.
- `--image-repository`: the image registry; point it at the Aliyun mirror.
- `--kubernetes-version`: pin the version to skip the network lookup; it must match the version shown by `kubeadm config images list`. The default is `stable-1`, which downloads the latest version number from https://storage.googleapis.com/kubernetes-release/release/stable-1.txt.
- `--service-cidr=10.96.0.0/12`: reuse this range as-is in later installs as well; do not change it.
- `--pod-network-cidr=10.244.0.0/16`: the IP range used by pods inside k8s; it must not overlap with the service CIDR. If unsure, use 10.244.0.0/16.
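The same flags can also be captured in a kubeadm configuration file, which is easier to keep under version control. A sketch (not from the original article) using the v1beta3 kubeadm API that ships with v1.28; the file name is arbitrary, and omitting an advertise address lets kubeadm pick the default-route interface, which is what 0.0.0.0 does on the command line:

```bash
cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.6
imageRepository: registry.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
EOF

kubeadm init --config kubeadm-config.yaml
```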
```bash
[root@ceph61 ~]# kubeadm init \
  --apiserver-advertise-address=0.0.0.0 \
  --image-repository=registry.aliyuncs.com/google_containers \
  --kubernetes-version=v1.28.6 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.28.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ceph61 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.120.61]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ceph61 localhost] and IPs [192.168.120.61 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ceph61 localhost] and IPs [192.168.120.61 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.502526 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ceph61 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node ceph61 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 6d91mq.mlm8kkyos1c2hvev
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.120.61:6443 --token 6d91mq.mlm8kkyos1c2hvev \
        --discovery-token-ca-cert-hash sha256:fb4ec669becd632f799cfc2ddf48746cd7689ce58a24598c1250e353b5d3dd64
```
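The bootstrap token printed at the end of the init output expires after 24 hours by default. If a worker is added later, a fresh join command can be generated on the master instead of reusing the one above:

```bash
# On the master (ceph61): create a new token and print the full join command
kubeadm token create --print-join-command
# List existing tokens
kubeadm token list
# If only the CA cert hash is needed (command from the Kubernetes docs):
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'
```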
### 3.5 Configuration files

```bash
# On the master (ceph61) node
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Copy to the remaining nodes
scp /etc/kubernetes/admin.conf ceph62:/etc/kubernetes/
scp /etc/kubernetes/admin.conf ceph63:/etc/kubernetes/

# Configure the environment variable
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile
source ~/.bash_profile
```

### 3.6 Join the other nodes

```bash
# Run the following on the remaining nodes (ceph62, ceph63)
kubeadm join 192.168.120.61:6443 --token 6d91mq.mlm8kkyos1c2hvev --discovery-token-ca-cert-hash sha256:fb4ec669becd632f799cfc2ddf48746cd7689ce58a24598c1250e353b5d3dd64

# After joining the cluster
[root@ceph61 ~]# kubectl get node
NAME     STATUS     ROLES           AGE     VERSION
ceph61   NotReady   control-plane   3m11s   v1.28.2
ceph62   NotReady   <none>          13s     v1.28.2
ceph63   NotReady   <none>          9s      v1.28.2
```

### 3.7 Install a network plugin

```bash
# Method 1: flannel (not used for this ceph cluster)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@ceph61 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
```

```bash
# Method 2: calico (recommended for a Ceph cluster); only needs to run on the ceph61 node
wget https://docs.projectcalico.org/v3.25/manifests/calico.yaml --no-check-certificate
# wget https://docs.projectcalico.org/v3.23/manifests/calico.yaml --no-check-certificate
kubectl apply -f calico.yaml
[root@ceph61 ~]# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
```
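Nodes only flip to Ready once the Calico pods themselves are running, so it can help to watch their rollout first. A small check sketch (added here; this manifest version installs its workloads into kube-system):

```bash
# Watch the Calico and CoreDNS pods until they are Running
kubectl get pods -n kube-system -o wide | grep -E 'calico|coredns'
kubectl -n kube-system rollout status daemonset/calico-node
kubectl -n kube-system rollout status deployment/calico-kube-controllers
```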
```bash
# Check node status
[root@ceph61 ~]# kubectl get nodes
NAME     STATUS   ROLES           AGE   VERSION
ceph61   Ready    control-plane   40m   v1.28.2
ceph62   Ready    <none>          37m   v1.28.2
ceph63   Ready    <none>          37m   v1.28.2

# If the node status check fails, pull the calico images manually
docker pull docker.io/calico/cni:v3.25.0
docker pull docker.io/calico/node:v3.25.0
docker pull docker.io/calico/kube-controllers:v3.25.0
```

### 3.8 Check node taints

```bash
# Check whether nodes are tainted - ceph61 is tainted, and a tainted node does not take part in scheduling
[root@ceph61 ~]# kubectl describe node ceph61 | grep Taints
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
[root@ceph61 ~]# kubectl describe node ceph62 | grep Taints
Taints:             <none>
[root@ceph61 ~]# kubectl describe node ceph63 | grep Taints
Taints:             <none>

# Remove the taint
[root@ceph41 ~]# kubectl taint nodes --all node-role.kubernetes.io/control-plane-
# Output:
node/ceph61 untainted
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found
# Check again
[root@ceph61 ~]# kubectl describe node ceph61 | grep Taints
Taints:             <none>
```

### 3.9 Common k8s cluster commands

```bash
# List namespaces
kubectl get namespace
# List all running pods
kubectl get pod -A
kubectl get pod --all-namespaces
# List pods in a specific namespace
kubectl get pod -n <namespace>
# Show more detail
kubectl get pod -o wide -A
# Show details of a single pod
kubectl describe pod <podname> -n <namespace>
# View logs
kubectl logs <podname> -f
kubectl logs <podname> -f -n <namespace>
# Enter a container
kubectl exec -it <podname> -- bash
# Delete a pod
kubectl delete pod <podname> -n <namespace>
```

### 3.10 Install a k8s dashboard

```bash
# Step 1: check k8s version compatibility
# https://github.com/kubernetes/dashboard/releases

# Step 2: run the following on the master (ceph61) node
[root@ceph61 ~]# docker run -d \
  --restart=unless-stopped \
  --name=kuboard \
  -p 8080:80/tcp \
  -p 10081:10081/tcp \
  -e KUBOARD_ENDPOINT="http://192.168.120.61:8080" \
  -e KUBOARD_AGENT_SERVER_TCP_PORT="10081" \
  -v /root/kuboard-data:/data \
  eipwork/kuboard:v3
Unable to find image 'eipwork/kuboard:v3' locally
v3: Pulling from eipwork/kuboard
39cf15d1b231: Pull complete
ecd0ab02f0ae: Pull complete
225e08117bbd: Pull complete
abcb1f095da7: Pull complete
1eeda1b6f001: Pull complete
4349852fff77: Pull complete
1f029b610fdb: Pull complete
4df394d7d606: Pull complete
3c697407405f: Pull complete
ee935ad7cf4e: Pull complete
09e01f13e911: Pull complete
4e388503d89a: Pull complete
35e609fe422f: Pull complete
2e14fa3ae7d7: Pull complete
cec83c92c2a8: Pull complete
d6932e6ef2a1: Pull complete
Digest: sha256:0ea7d38afa2bb31ae178f8dc32feeccd480376097a2e3b7423750d02f123fa8c
Status: Downloaded newer image for eipwork/kuboard:v3
3b9ae57bb5506bf53df125e8ee656beeef6b5833a75cfb966b261eb2179df81d

# Access the UI at http://192.168.120.61:8080/
# Default credentials: admin / Kuboard123
```
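As a final end-to-end check that scheduling and pod networking work, a throwaway deployment can be created and then removed. This is an added sketch, not part of the original article, and it assumes the nodes can pull the nginx image from Docker Hub:

```bash
# Quick smoke test: deploy nginx, expose it via NodePort, then clean up
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=NodePort
kubectl get pods -o wide
kubectl get svc nginx-test        # note the NodePort, then curl <node-ip>:<node-port>
kubectl delete svc nginx-test
kubectl delete deployment nginx-test
```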