I. Environment and Software Preparation
Note: servers are referred to by hostname throughout; substitute IP addresses to suit your own setup.

Environment

Servers and their components:
Master: NameNode, DataNode, NodeManager, ResourceManager, Hive, Hive metastore, HiveServer2, MySQL
Secondary: SecondaryNameNode, DataNode, NodeManager
Datanode: DataNode, NodeManager, Hive beeline client
1. Java, version 1.8
Download: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
linux$:] cd /soft
linux$:] tar -zxvf jdk-8u321-linux-x64.tar.gz
linux$:] cp -r jdk1.8.0_321 /usr/bin/jdk
linux$:] vi /etc/profile
export JAVA_HOME=/usr/bin/jdk   # jdk1.8.0_321 is the name of the unpacked directory
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib
linux$:] source /etc/profile

2. Rsync (present by default on CentOS)
3. zstd, openssl, autoconf, automake, libtool, ca-certificates
linux$:] yum -y install zstd
linux$:] yum -y install openssl-devel autoconf automake libtool ca-certificates

4. ISA-L
Download: https://github.com/intel/isa-l
linux$:] cd /soft
linux$:] unzip master.zip
linux$:] cd master
linux$:] ./autogen.sh
linux$:] ./configure
linux$:] make
linux$:] make install
(alternatively, a plain make build: make -f Makefile.unx)
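The autotools sequence above fails with confusing errors when a build tool is missing. A minimal pre-flight check, as a sketch (the tool names to check are an assumption drawn from the packages installed in steps 3 and 5):

```shell
#!/bin/sh
# Check that each named build tool is on PATH; report the missing ones.
check_tools() {
  missing=0
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing: $tool"
      missing=1
    fi
  done
  return $missing
}

# In practice, check the ISA-L prerequisites, e.g.:
#   check_tools autoconf automake libtool make nasm yasm
check_tools sh tar && echo "build prerequisites present"
```

A nonzero exit before `./autogen.sh` means one of the earlier install steps was skipped.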
Other targets (optional; explained later):
make check : create and run tests
make tests : create additional unit tests
make perfs : create included performance tests
make ex : build examples
make other : build other utilities such as compression file tests
make doc : build the API manual

5. nasm and yasm
yasm:
linux$:] curl -O -L http://www.tortall.net/projects/yasm/releases/yasm-1.3.0.tar.gz
linux$:] tar -zxvf yasm-1.3.0.tar.gz
linux$:] cd yasm-1.3.0
linux$:] ./configure;make -j 8;make install
nasm:
linux$:] wget http://www.nasm.us/pub/nasm/releasebuilds/2.14.02/nasm-2.14.02.tar.xz
linux$:] tar xf nasm-2.14.02.tar.xz
linux$:] cd nasm-2.14.02
linux$:] ./configure;make -j 8;make install

6. SSH
linux$:] ssh-keygen -t rsa
Copy the key between every pair of hosts; each host also needs to do this for itself:
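Because every host must hold every other host's key (including its own), the copy step below is repeated once per host pair. A sketch that just prints the commands to run on the current host (the host list is illustrative):

```shell
#!/bin/sh
# Print one ssh-copy-id command per target host; run the printed
# commands on each host in the cluster, including for the host itself.
gen_keycopy_cmds() {
  for host in "$@"; do
    echo "ssh-copy-id -i ~/.ssh/id_rsa.pub root@$host"
  done
}

gen_keycopy_cmds Master Secondary Datanode
```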
linux$:] ssh-copy-id -i ~/.ssh/id_rsa.pub root@IP

7. Hadoop
Official site: https://hadoop.apache.org/
Navigate: [Getting started] → [Download] → [Apache Download Mirrors] → [HTTP]
linux$:] cd /soft
linux$:] wget https://dlcdn.apache.org/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz
linux$:] tar -zxvf hadoop-3.3.1.tar.gz
linux$:] mv hadoop-3.3.1 hadoop

8. Linux Environment Variables
linux$:] vi /etc/hosts
IP-address Master
IP-address Secondary
IP-address Datanode

linux$:] vi /etc/profile
export JAVA_HOME=/usr/bin/jdk
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib
export HADOOP_HOME=/soft/hadoop        # Hadoop installation path
export PATH=$HADOOP_HOME/bin:$PATH     # Hadoop hdfs command path
export PATH=$HADOOP_HOME/sbin:$PATH    # Hadoop admin script path
export HIVE_HOME=/soft/hive
export PATH=$PATH:$HIVE_HOME/bin
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
linux$:] source /etc/profile

9. Hadoop Configuration Files
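Before editing the Hadoop configuration files, it is worth confirming that the profile from step 8 actually took effect in the current shell. A small sketch (the variable names are taken from the exports above):

```shell
#!/bin/sh
# Report any of the named environment/shell variables that is unset or empty.
check_env() {
  rc=0
  for name in "$@"; do
    eval "val=\${$name}"
    if [ -z "$val" ]; then
      echo "unset: $name"
      rc=1
    fi
  done
  return $rc
}

check_env JAVA_HOME HADOOP_HOME HIVE_HOME \
  || echo "profile not fully applied; run: source /etc/profile"
```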
Configuration file:
linux$:] vi /soft/hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/bin/jdk

Configuration file (listing every machine here lets start-all.sh/stop-all.sh start or stop them all with one command):
linux$:] vi /soft/hadoop/etc/hadoop/workers
Master
Secondary
Datanode

Configuration file:
linux$:] vi /soft/hadoop/etc/hadoop/core-site.xml
<configuration>
  <!-- HDFS access address -->
  <property><name>fs.defaultFS</name><value>hdfs://Master:9000</value></property>
  <!-- storage path for Hadoop's runtime temporary files -->
  <property><name>hadoop.tmp.dir</name><value>/hadoop/tmp</value></property>
  <!-- Hadoop authorization -->
  <property><name>hadoop.security.authorization</name><value>false</value></property>
  <!-- proxy-user hosts; the user here is root, customize as needed -->
  <property><name>hadoop.proxyuser.root.hosts</name><value>*</value></property>
  <!-- proxy-user groups; the group here is root, customize as needed -->
  <property><name>hadoop.proxyuser.root.groups</name><value>*</value></property>
</configuration>

Configuration file:
linux$:] vi /soft/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
  <!-- local storage path for NameNode metadata -->
  <property><name>dfs.namenode.name.dir</name><value>/hadoop/namenodedata</value></property>
  <!-- block size -->
  <property><name>dfs.blocksize</name><value>256M</value></property>
  <!-- number of NameNode threads handling DataNode requests -->
  <property><name>dfs.namenode.handler.count</name><value>100</value></property>
  <!-- local storage path for DataNode blocks -->
  <property><name>dfs.datanode.data.dir</name><value>/hadoop/datanodedata</value></property>
  <property><name>dfs.replication</name><value>3</value></property>
  <!-- hosts excluded when HDFS starts -->
  <property><name>dfs.hosts.exclude</name><value>/soft/hadoop/etc/hadoop/workers.exclude</value></property>
  <!-- Secondary NameNode server; if unspecified it defaults to the same host as the NameNode -->
  <property><name>dfs.secondary.http.address</name><value>Secondary:50070</value></property>
  <!-- HDFS permission checking -->
  <property><name>dfs.permissions</name><value>false</value></property>
</configuration>

Configuration file:
linux$:] vi /soft/hadoop/etc/hadoop/mapred-site.xml
<configuration>
  <property><name>mapreduce.framework.name</name><value>yarn</value></property>
  <property><name>mapreduce.map.memory.mb</name><value>125</value></property>
  <property><name>mapreduce.map.java.opts</name><value>-Xmx512M</value></property>
  <property><name>mapreduce.reduce.memory.mb</name><value>512</value></property>
  <property><name>mapreduce.reduce.java.opts</name><value>-Xmx512M</value></property>
  <property><name>mapreduce.task.io.sort.mb</name><value>125</value></property>
  <property><name>mapreduce.task.io.sort.factor</name><value>100</value></property>
  <property><name>mapreduce.reduce.shuffle.parallelcopies</name><value>50</value></property>
  <property><name>mapreduce.jobhistory.address</name><value>Master:10020</value></property>
  <property><name>mapreduce.jobhistory.webapp.address</name><value>Master:19888</value></property>
  <property><name>mapreduce.jobhistory.intermediate-done-dir</name><value>/hadoop/hislog</value></property>
  <property><name>mapreduce.jobhistory.done-dir</name><value>/hadoop/hisloging</value></property>
</configuration>

Configuration file:
linux$:] vi /soft/hadoop/etc/hadoop/yarn-site.xml
<configuration>
  <property><name>yarn.acl.enable</name><value>false</value></property>
  <property><name>yarn.admin.acl</name><value>*</value></property>
  <property><name>yarn.log-aggregation-enable</name><value>true</value></property>
  <property><name>yarn.resourcemanager.address</name><value>Master:8032</value></property>
  <property><name>yarn.resourcemanager.scheduler.address</name><value>Master:8030</value></property>
  <property><name>yarn.resourcemanager.resource-tracker.address</name><value>Master:8031</value></property>
  <property><name>yarn.resourcemanager.admin.address</name><value>Master:8033</value></property>
  <property><name>yarn.resourcemanager.webapp.address</name><value>Master:8088</value></property>
  <property><name>yarn.resourcemanager.hostname</name><value>Master</value></property>
  <property><name>yarn.resourcemanager.scheduler.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value></property>
  <property><name>yarn.scheduler.minimum-allocation-mb</name><value>4</value></property>
  <property><name>yarn.scheduler.maximum-allocation-mb</name><value>125</value></property>
  <property><name>yarn.nodemanager.resource.memory-mb</name><value>2048</value></property>
  <property><name>yarn.nodemanager.vmem-pmem-ratio</name><value>2.1</value></property>
  <property><name>yarn.nodemanager.local-dirs</name><value>/hadoop/temppackage</value></property>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
  <property><name>yarn.nodemanager.env-whitelist</name><value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_HOME,PATH,LANG,TZ,HADOOP_MAPRED_HOME</value></property>
  <property><name>yarn.log-aggregation.retain-seconds</name><value>-1</value></property>
  <property><name>yarn.log-aggregation.retain-check-interval-seconds</name><value>-1</value></property>
  <property><name>yarn.resourcemanager.nodes.exclude-path</name><value>/soft/hadoop/etc/hadoop/workers.exclude</value></property>
</configuration>

II. Starting the Hadoop Cluster
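One caution about the commands below: `hdfs namenode -format` must run only once; reformatting an existing cluster gives the NameNode a new cluster ID and the DataNodes will refuse to rejoin. A guard sketch using a marker file (the marker path under dfs.namenode.name.dir is an assumption):

```shell
#!/bin/sh
# Run the one-time NameNode format only if a marker file is absent.
NAME_DIR=${NAME_DIR:-/hadoop/namenodedata}   # dfs.namenode.name.dir from hdfs-site.xml

format_once() {
  if [ -f "$NAME_DIR/.formatted" ]; then
    echo "already formatted; skipping"
  else
    echo "formatting namenode"   # stand-in for: $HADOOP_HOME/bin/hdfs namenode -format
    mkdir -p "$NAME_DIR" 2>/dev/null && touch "$NAME_DIR/.formatted"
  fi
}

format_once
```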
$HADOOP_HOME/bin/hdfs namenode -format
start-all.sh
$HADOOP_HOME/bin/yarn --daemon start proxyserver
$HADOOP_HOME/bin/mapred --daemon start historyserver
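The daemons take a few seconds to open their listening ports; before visiting the web UIs below, one can poll a port. A sketch using bash's /dev/tcp redirection (hosts and ports are the ones configured above):

```shell
#!/bin/bash
# Poll host:port until a TCP connection succeeds or the timeout (seconds) expires.
wait_port() {
  host=$1; port=$2; timeout=${3:-30}
  for _ in $(seq "$timeout"); do
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      echo "up: $host:$port"
      return 0
    fi
    sleep 1
  done
  echo "timeout: $host:$port"
  return 1
}

# e.g. wait_port Master 9870 60   # NameNode web UI
```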
III. Web UI Access
HDFS:
http://Master:9870/
YARN:
http://Master:8088/

IV. Installing Hive
1. Installing MySQL
linux$:] touch /etc/yum.repos.d/mysql.repo
linux$:] cat > /etc/yum.repos.d/mysql.repo << EOF
[mysql57-community]
name=MySQL 5.7 Community Server
baseurl=https://mirrors.cloud.tencent.com/mysql/yum/mysql-5.7-community-el7-x86_64/
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
EOF
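A typo in the repo file (a dropped `=`, for instance) makes yum fail later with an unrelated-looking error. A sketch that counts malformed lines in a .repo file (the sample file stands in for /etc/yum.repos.d/mysql.repo):

```shell
#!/bin/sh
# Count lines that are neither a [section] header, a key=value pair, nor blank.
repo_bad_lines() {
  grep -cEv '^(\[.*\]|[A-Za-z0-9_-]+=.*|[[:space:]]*)$' "$1"
}

sample=$(mktemp)
printf '[mysql57-community]\nenabled=1\ngpgcheck=0\n' > "$sample"
bad=$(repo_bad_lines "$sample")
echo "malformed lines: $bad"   # prints: malformed lines: 0
```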
linux$:] yum clean all
linux$:] yum makecache
linux$:] yum -y install mysql-community-server
linux$:] systemctl start mysqld
linux$:] systemctl enable mysqld
linux$:] grep 'temporary password is generated' /var/log/mysqld.log
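The grep above prints the whole log line; the password itself is its last field. A parsing sketch (the sample line imitates the mysqld.log format, and the password in it is made up):

```shell
#!/bin/sh
# Extract the temporary root password, which mysqld logs as the last
# field of its "temporary password" line.
extract_temp_password() {
  grep 'temporary password' | awk '{print $NF}'
}

sample='2023-01-01T00:00:00.000000Z 1 [Note] A temporary password is generated for root@localhost: Abc123xyz'
echo "$sample" | extract_temp_password   # prints: Abc123xyz
```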
linux$:] mysql -uroot -p
For MySQL 5.7.6 and later, initialize the root password with:
SQL> ALTER USER USER() IDENTIFIED BY 'Twcx2023';
SQL> FLUSH PRIVILEGES;
linux$:] systemctl restart mysqld
linux$:] systemctl enable mysqld

2. Installing Hive
linux$:] cd /soft
linux$:] wget https://mirrors.tuna.tsinghua.edu.cn/apache/hive/hive-3.1.3/apache-hive-3.1.3-bin.tar.gz
linux$:] tar -zxvf apache-hive-3.1.3-bin.tar.gz
linux$:] mv apache-hive-3.1.3-bin hive
linux$:] cd /soft/hive/conf
linux$:] mv hive-env.sh.template hive-env.sh
linux$:] echo > hive-env.sh
linux$:] mv hive-default.xml.template hive-site.xml
linux$:] echo > hive-site.xml
(echo > empties each file; the contents are written from scratch below)

Resolve the guava jar conflict between the Hadoop and Hive packages:
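Background on this conflict: Hive 3.1.3 ships guava-19.0 while Hadoop 3.3.1 ships guava-27.0-jre, and the older jar on Hive's classpath causes a NoSuchMethodError at startup. A sketch that pulls the version out of a guava jar filename, handy for confirming what each tree actually ships (the paths are illustrative):

```shell
#!/bin/sh
# Extract the version component from a guava jar filename.
guava_version() {
  basename "$1" .jar | sed 's/^guava-//; s/-jre$//'
}

guava_version /soft/hive/lib/guava-19.0.jar                             # prints: 19.0
guava_version /soft/hadoop/share/hadoop/common/lib/guava-27.0-jre.jar   # prints: 27.0
```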
linux$:] cd /soft/hive/lib
linux$:] rm -rf guava-19.0.jar
linux$:] cp /soft/hadoop/share/hadoop/common/lib/guava-27.0-jre.jar ./

Install the MySQL driver dependency:
MySQL driver downloads: https://dev.mysql.com/downloads/connector/j/
MySQL 8.0 driver:
linux$:] wget https://downloads.mysql.com/archives/get/p/3/file/mysql-connector-java-8.0.11.tar.gz
linux$:] tar -zxvf mysql-connector-java-8.0.11.tar.gz
linux$:] cd mysql-connector-java-8.0.11
linux$:] cp mysql-connector-java-8.0.11.jar /soft/hive/lib/

Driver for MySQL 5.7 [the one used in this guide]:
linux$:] wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/6.0.6/mysql-connector-java-6.0.6.jar
linux$:] cp mysql-connector-java-6.0.6.jar /soft/hive/lib/

3. Configuring Hive
Configuration file:
linux$:] vi /soft/hive/conf/hive-env.sh
export HADOOP_HOME=/soft/hadoop
export HIVE_CONF_DIR=/soft/hive/conf
export HIVE_AUX_JARS_PATH=/soft/hive/lib

Log configuration (the level can be changed to DEBUG for troubleshooting):
linux$:] cd /soft/hive/conf
linux$:] cp hive-log4j2.properties.template hive-log4j2.properties
linux$:] vi hive-log4j2.properties
property.hive.log.dir = /user/hive/log

Configuration file:
Note: even if a character set is specified in the MySQL connection, MySQL still initializes the metastore with latin1.
linux$:] vi /soft/hive/conf/hive-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property><name>hive.metastore.warehouse.dir</name><value>/user/hive/warehouse</value></property>
  <property><name>javax.jdo.option.ConnectionURL</name><value>jdbc:mysql://Master:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value></property>
  <property><name>javax.jdo.option.ConnectionDriverName</name><value>com.mysql.cj.jdbc.Driver</value></property>
  <property><name>javax.jdo.option.ConnectionUserName</name><value>pyroot</value></property>
  <property><name>javax.jdo.option.ConnectionPassword</name><value>Twcx2023</value></property>
  <property><name>hive.metastore.uris</name><value>thrift://Master:9083</value></property>
  <property><name>hive.metastore.event.db.notification.api.auth</name><value>false</value></property>
  <property><name>hive.metastore.schema.verification</name><value>false</value></property>
  <property><name>hive.server2.thrift.bind.host</name><value>Master</value></property>
  <property><name>hive.server2.thrift.port</name><value>10000</value></property>
  <property><name>hive.cli.print.header</name><value>true</value></property>
  <property><name>hive.cli.print.current.db</name><value>true</value></property>
  <property><name>beeline.hs2.connection.user</name><value>root</value></property>
  <property><name>beeline.hs2.connection.password</name><value>root</value></property>
</configuration>

4. Starting Hive
Notes:
- bin/hive, the command-line shell client, is not recommended.
- bin/beeline, the JDBC client, is strongly recommended; it can be used embedded or remote, and connects to HiveServer2, which in turn reaches the metastore and the Hive metadata in MySQL.
- HiveServer2 supports concurrent clients and authentication, and is designed to give API clients such as JDBC and ODBC better support.
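For the beeline client recommended above, the connection string packs the HiveServer2 host and port from hive-site.xml into a JDBC URL. A sketch of how that URL is assembled (host and port mirror the configuration above; the database name is an assumption):

```shell
#!/bin/sh
# Build the jdbc:hive2 URL that beeline expects from host, port, and database.
hs2_url() {
  host=$1; port=${2:-10000}; db=${3:-default}
  echo "jdbc:hive2://$host:$port/$db"
}

hs2_url Master 10000 default   # prints: jdbc:hive2://Master:10000/default
```

It can then be used as `beeline -u "$(hs2_url Master)"`.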
Restart HDFS:
linux$:] stop-all.sh
linux$:] start-all.sh

Initialize the Hive metadata into MySQL:
linux$:] schematool -dbType mysql -initSchema   # initialize the schema, then check MySQL for the hive database and its 74 tables
linux$:] mysql -uroot -p
SQL> show databases;
SQL> use hive;
SQL> show tables;

Start the metastore:
linux$:] mkdir -p /soft/hive/metastorelog
linux$:] cd /soft/hive/metastorelog
linux$:] nohup hive --service metastore --hiveconf hive.root.logger=DEBUG,console &

Start HiveServer2:
linux$:] mkdir -p /soft/hive/hiveserver2log
linux$:] cd /soft/hive/hiveserver2log
linux$:] nohup $HIVE_HOME/bin/hive --service hiveserver2 &

5. Remote testing of the metastore and HiveServer2 [a client can be set up on the Datanode host]
Install the Hive software:
linux$:] cd /soft
linux$:] wget https://mirrors.tuna.tsinghua.edu.cn/apache/hive/hive-3.1.3/apache-hive-3.1.3-bin.tar.gz
linux$:] tar -zxvf apache-hive-3.1.3-bin.tar.gz
linux$:] mv apache-hive-3.1.3-bin hive

Resolve the guava jar conflict between the Hadoop and Hive packages:
linux$:] cd /soft/hive/lib
linux$:] rm -rf guava-19.0.jar
linux$:] cp /soft/hadoop/share/hadoop/common/lib/guava-27.0-jre.jar ./

Driver deployment (not needed for the remote client):
linux$:] wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/6.0.6/mysql-connector-java-6.0.6.jar
linux$:] cp mysql-connector-java-6.0.6.jar /soft/hive/lib/配置Hive文件
配置文件
linux$:] vi /soft/hive/conf/hive-env.sh
export HADOOP_HOME=/soft/hadoop
export HIVE_CONF_DIR=/soft/hive/conf
export HIVE_AUX_JARS_PATH=/soft/hive/lib

Configuration file:
linux$:] vi /soft/hive/conf/hive-site.xml
<configuration>
  <property><name>hive.metastore.warehouse.dir</name><value>/user/hive/warehouse</value></property>
  <property><name>hive.metastore.uris</name><value>thrift://Master:9083</value></property>
</configuration>

Test the metastore (with no host or IP given, the connection defaults to the metastore's exposed port, 9083):
linux$:] beeline -u jdbc:hive2://
SQL> show databases;

Test HiveServer2 (port 10000 is the port HiveServer2 exposes):
linux$:] beeline -u jdbc:hive2://Master:10000
SQL> show databases;

Other tests:
On Windows, download DBeaver and connect through port 10000. The user defaults to hive; the password is empty or hive.

6. Web UI access
http://Master:10002/