Table of Contents

Machine Overview
CentOS Basic Configuration
Install VMware Tools
Set a Static IP
Disable the Firewall
Disable SELinux
Enable Time Synchronization
Configure hostname and hosts
Install KubeSphere
Install Dependencies
Prepare Configuration Files
Run the Installation Command

Machine Overview
Prepare the virtual machines in ESXi. For deployment, refer to the official site: https://kubesphere.io/zh/
OS          IP              Role
CentOS 7.5  192.168.31.21   master, etcd
CentOS 7.5  192.168.31.22   master, etcd
CentOS 7.5  192.168.31.23   master, etcd
CentOS 7.5  192.168.31.24   worker
CentOS 7.5  192.168.31.25   worker
CentOS 7.5  192.168.31.26   worker
CentOS Basic Configuration

Install VMware Tools
Run the following command to install VMware Tools:
sudo yum install open-vm-tools

This uses yum to install the open-vm-tools package from the system's package repositories.
After the installation completes, reboot the virtual machine for VMware Tools to take effect:
sudo reboot
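If you want to confirm the tools are active after the node comes back up, open-vm-tools ships a vmtoolsd service that can be checked:

systemctl status vmtoolsd    # should show the service as active (running)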
Set a Static IP

Open the NIC configuration file with vim:
sudo vim /etc/sysconfig/network-scripts/ifcfg-ens33

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static        # change dhcp to static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=74ca9b68-1475-4b02-9750-f48b871504df
DEVICE=ens33
ONBOOT=yes              # apply this configuration at boot
IPADDR=192.168.0.180    # static IP
GATEWAY=192.168.0.1     # default gateway
NETMASK=255.255.255.0   # subnet mask
DNS1=192.168.0.1        # primary DNS
DNS2=223.6.6.6          # secondary DNS

Adjust IPADDR, GATEWAY, and the DNS entries to match your own network; the nodes in this guide use 192.168.31.21-26. Restart the network service for the configuration to take effect:
sudo service network restart
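A quick sanity check after the restart (assuming the interface is ens33, as in the file above):

ip addr show ens33     # the interface should now carry the static address
ping -c 4 223.6.6.6    # confirm outbound connectivity through the gateway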
Disable the Firewall

# enable the firewall at boot
systemctl enable firewalld.service
# disable the firewall at boot
systemctl disable firewalld.service
# start the firewall
systemctl start firewalld
# stop the firewall
systemctl stop firewalld
# check firewall status
systemctl status firewalld
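Of the commands listed above, the ones this guide actually relies on before running KubeKey are stopping the firewall and keeping it off across reboots; a minimal sequence to apply and verify that on each node:

systemctl stop firewalld         # stop it now
systemctl disable firewalld      # keep it off after reboots
systemctl is-active firewalld    # verify: should print "inactive"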
Disable SELinux

SELinux can be disabled by editing the /etc/selinux/config file and setting the SELINUX parameter to disabled. The steps are as follows:
Log in to the system as root. Open the /etc/selinux/config file, for example with vi /etc/selinux/config. Find the SELINUX parameter and set it to disabled.
# This file controls the state of SELinux on the system.
# SELINUX can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled

Save and close the file, then reboot the system for the change to take effect.
As a security module, SELinux provides mandatory access control that restricts how processes and users access system resources, which improves the security and reliability of the system. In some situations, however, disabling SELinux may be necessary, for example:
- Incompatible applications: some applications do not work properly with SELinux enabled, and disabling it can be one way to get them running.
- Troubleshooting: turning SELinux off while debugging a system problem can help isolate the root cause.
- Lower system load: in some cases disabling SELinux reduces overhead and improves performance.
- Simpler administration: in some environments disabling SELinux reduces the management workload.
Note that disabling SELinux can reduce the security and reliability of the system, so weigh the decision carefully. If you must disable it, evaluate the security risks first and compensate with other measures such as a firewall and restricted user permissions. In most cases it is best to disable SELinux only when necessary and to back up the system beforehand so it can be restored if needed. SELinux can be disabled either by editing /etc/selinux/config and setting SELINUX to disabled, or temporarily with the command setenforce 0.
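Combining the two approaches just mentioned, the change can be applied on each node without waiting for a reboot; a minimal sequence, assuming SELINUX is currently set to enforcing in the file:

setenforce 0                                                            # temporary: switch to permissive mode immediately
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config    # permanent: takes effect after reboot
getenforce                                                              # verify: prints Permissive now, Disabled after reboot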
Enable Time Synchronization
Enable time synchronization:
yum install -y chrony
systemctl enable chronyd
systemctl start chronyd
timedatectl set-ntp true

Set the time zone:
timedatectl set-timezone Asia/Shanghai

Check whether the NTP servers are reachable:
chronyc activity -v
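Two more optional checks that can be useful here: chronyc sources lists the NTP servers chrony is actually using, and timedatectl shows whether the system clock reports itself as synchronized.

chronyc sources -v    # list NTP sources and their reachability
timedatectl           # look for "System clock synchronized: yes"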
Configure hostname and hosts

Set the hostnames; run the matching command on each of the nodes .21 through .26:
sudo hostnamectl set-hostname ksmaster21
sudo hostnamectl set-hostname ksmaster22
sudo hostnamectl set-hostname ksmaster23
sudo hostnamectl set-hostname ksnode24
sudo hostnamectl set-hostname ksnode25
sudo hostnamectl set-hostname ksnode26
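Alternatively, the same thing can be driven from a single admin host. This is only a sketch and assumes root SSH access to every node already works; the IPs and names are the ones from the table at the top:

# hedged sketch: set each node's hostname over SSH from one admin host
for pair in \
  "192.168.31.21 ksmaster21" "192.168.31.22 ksmaster22" "192.168.31.23 ksmaster23" \
  "192.168.31.24 ksnode24"   "192.168.31.25 ksnode25"   "192.168.31.26 ksnode26"; do
  set -- $pair                                  # $1 = IP address, $2 = hostname
  ssh root@"$1" "hostnamectl set-hostname $2"   # assumes root SSH (password or key) is already set up
done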
Edit /etc/hosts on every node (vi /etc/hosts) and add the host entries:

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.31.21 ksmaster21
192.168.31.22 ksmaster22
192.168.31.23 ksmaster23
192.168.31.24 ksnode24
192.168.31.25 ksnode25
192.168.31.26 ksnode26

Verify the hosts configuration:
ping ksmaster21
ping ksmaster22
ping ksmaster23
ping ksnode24
ping ksnode25
ping ksnode26

Install KubeSphere

Install Dependencies
KubeKey can install Kubernetes and KubeSphere together. The dependencies that need to be installed in advance may differ depending on the Kubernetes version. Refer to the list below to check whether you need to install them on the nodes beforehand.
Dependency   Kubernetes ≥ 1.18          Kubernetes < 1.18
socat        Required                   Optional but recommended
conntrack    Required                   Optional but recommended
ebtables     Optional but recommended   Optional but recommended
ipset        Optional but recommended   Optional but recommended
Run the following command to install all of them in one go:
yum -y install socat conntrack ebtables ipset

Because a Synology NFS share is used as the NAS, the NFS client tools are needed as well:
yum install -y nfs-utils
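Optionally, verify from a node that the Synology export is visible before wiring it into the cluster; nas.yxym.com and /volume5/ks are the values used later in nfs-client.yaml, so substitute your own:

showmount -e nas.yxym.com    # the export list should include /volume5/ks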
Prepare Configuration Files

Create the nfs-client.yaml file:
nfs:
  server: nas.yxym.com    # the Synology server address; replace it with your own
  path: /volume5/ks       # replace with your own exported directory
storageClass:
  defaultClass: true

Generate the KubeSphere installation configuration file
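The steps below assume the KubeKey binary (kk) is already in the working directory. If it is not, the KubeSphere project distributes it through the get-kk script; the KubeKey version shown here is only an example, so pin whichever release you need:

export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
chmod +x kk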
# environment setting
export KKZONE=cn
# create the configuration file
./kk create config --with-kubernetes v1.23.10 --with-kubesphere v3.4.1

The edited config-sample.yaml used for this cluster is as follows:

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: ksmaster21, address: 192.168.31.21, internalAddress: 192.168.31.21, user: root, password: "your-password"}
  - {name: ksmaster22, address: 192.168.31.22, internalAddress: 192.168.31.22, user: root, password: "your-password"}
  - {name: ksmaster23, address: 192.168.31.23, internalAddress: 192.168.31.23, user: root, password: "your-password"}
  - {name: ksnode24, address: 192.168.31.24, internalAddress: 192.168.31.24, user: root, password: "your-password"}
  - {name: ksnode25, address: 192.168.31.25, internalAddress: 192.168.31.25, user: root, password: "your-password"}
  - {name: ksnode26, address: 192.168.31.26, internalAddress: 192.168.31.26, user: root, password: "your-password"}
  roleGroups:
    etcd:
    - ksmaster21
    - ksmaster22
    - ksmaster23
    control-plane:
    - ksmaster21
    - ksmaster22
    - ksmaster23
    worker:
    - ksnode24
    - ksnode25
    - ksnode26
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.10
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.10.0.0/18
    kubeServiceCIDR: 10.20.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: ["https://0j62md6t.mirror.aliyuncs.com", "http://hub-mirror.c.163.com"]
    insecureRegistries: []
  addons:
  - name: nfs-client
    namespace: kube-system
    sources:
      chart:
        name: nfs-client-provisioner
        repo: https://charts.kubesphere.io/main
        valuesFile: /opt/ks/v3.3/nfs-client.yaml

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.2
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #   resources: {}
    # controllerManager:
    #   resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: nvidia.com/gpu
        resourceType: GPU
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    # resources: {}
    jenkinsMemoryLim: 8Gi
    jenkinsMemoryReq: 4Gi
    jenkinsVolumeSize: 8Gi
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
          - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: external
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600

Run the Installation Command
# environment setting
export KKZONE=cn
# create the configuration file
./kk create config --with-kubernetes v1.23.10 --with-kubesphere v3.4.1
# install
./kk create cluster -f config-sample.yaml
# uninstall
./kk delete cluster -f config-sample.yaml
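Once kk finishes, the KubeSphere installer keeps running inside the cluster for a while. The log-following command below is the one suggested by the KubeSphere documentation, and the console address uses the NodePort 30880 configured in ClusterConfiguration above; treat the rest as a quick sketch of post-install checks:

# follow the ks-installer log until the "Welcome to KubeSphere" banner appears
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

# all pods should eventually be Running, and the nfs-client StorageClass should be the default
kubectl get pod -A
kubectl get sc

# console: http://<any-node-IP>:30880, default account admin / P@88w0rd (change it on first login)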