Hadoop High-Availability Cluster Deployment

Background

In production, a Hadoop distributed cluster can have single points of failure: if the NameNode goes down, or is taken offline for a hardware or software upgrade, the cluster becomes unusable. A high-availability setup is therefore built to eliminate these single points of failure.

Hadoop Overview

A Hadoop cluster ordinarily has a single NameNode and a single ResourceManager. In a real production environment, if the node hosting the NameNode or the ResourceManager fails, the entire Hadoop cluster goes down, because the NameNode is the core node of HDFS and the ResourceManager handles resource management and allocation for the whole system.

To eliminate this single point of failure, a high-availability mechanism was introduced in Hadoop 2, supporting one active and one standby NameNode and ResourceManager. Hadoop 3 improved on this further and supports one active node with multiple standbys. High availability (HA) means uninterrupted 24/7 service with no single point of failure.

Strictly speaking, Hadoop HA consists of separate HA mechanisms per component: HDFS HA and YARN HA. Both are implemented by configuring multiple NameNodes and ResourceManagers (Active/Standby) as hot standbys in the cluster.

Environment Preparation

Three machines are used: hadoop, k8s-2, and k8s-3. Each node runs the following processes: NameNode, DataNode, ResourceManager, NodeManager, JournalNode, DFSZKFailoverController, and QuorumPeerMain.

  • Operating system: CentOS 8
  • Memory: 4 GB
  • Java version: JDK 8

HDFS and YARN HA Cluster Setup

3.1 Download the Hadoop package

Download the Hadoop 3.3.0 package from the official site https://hadoop.apache.org/ and extract it to /usr/local on all three machines. The configuration files to modify are under /usr/local/hadoop/etc/hadoop.
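A possible download-and-extract sequence on each machine (the archive URL and the symlink are assumptions for illustration; the article only requires that the tree end up at /usr/local/hadoop):

wget https://archive.apache.org/dist/hadoop/common/hadoop-3.3.0/hadoop-3.3.0.tar.gz
tar -zxvf hadoop-3.3.0.tar.gz -C /usr/local
ln -s /usr/local/hadoop-3.3.0 /usr/local/hadoop   # assumed layout, so the /usr/local/hadoop paths below resolve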

3.2 hadoop-env.sh

export JAVA_HOME=/usr/local/jdk # JDK path
# add the following two lines
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root
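
If the daemons are started as root, as the two exports above suggest, the Hadoop 3 start scripts typically also require user variables for the remaining daemons. A possible addition, assuming a root-run setup like this one:

export HDFS_NAMENODE_USER=root        # assumption: NameNode runs as root
export HDFS_DATANODE_USER=root        # assumption: DataNode runs as root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root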

3.3 core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <!-- Logical name (URI) of the HDFS distributed filesystem -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns</value>
    </property>
    <!-- Base directory for NameNode and DataNode data -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
    </property>
    <!-- Directory where JournalNode data is stored -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/usr/local/hadoop/tmp/jn</value>
    </property>
    <!-- host:port pairs of the machines running the ZooKeeper service -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop:2181,k8s-2:2181,k8s-3:2181</value>
    </property>
</configuration>
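
Once the cluster is up, a quick way to confirm that clients resolve the logical nameservice ns (rather than a single NameNode address) is a simple read/write against the hdfs://ns URI; the test path below is only an example:

hdfs dfs -mkdir -p hdfs://ns/tmp/ha-smoke-test   # hypothetical test directory
hdfs dfs -ls hdfs://ns/tmp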

3.4 hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at
    http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/usr/local/hadoop/dfs/journalnode</value>
        <description>The path where the JournalNode daemon will store its local state.</description>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>ns</value>
        <description>The logical name for this new nameservice.</description>
    </property>
    <property>
        <name>dfs.ha.namenodes.ns</name>
        <value>nn1,nn2,nn3</value>
        <description>Unique identifiers for each NameNode in the nameservice.</description>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns.nn1</name>
        <value>hadoop:8020</value>
        <description>The fully-qualified RPC address for nn1 to listen on.</description>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns.nn2</name>
        <value>k8s-2:8020</value>
        <description>The fully-qualified RPC address for nn2 to listen on.</description>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns.nn3</name>
        <value>k8s-3:8020</value>
        <description>The fully-qualified RPC address for nn3 to listen on.</description>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns.nn1</name>
        <value>hadoop:9870</value>
        <description>The fully-qualified HTTP address for nn1 to listen on.</description>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns.nn2</name>
        <value>k8s-2:9870</value>
        <description>The fully-qualified HTTP address for nn2 to listen on.</description>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns.nn3</name>
        <value>k8s-3:9870</value>
        <description>The fully-qualified HTTP address for nn3 to listen on.</description>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop:8485;k8s-3:8485;k8s-2:8485/ns</value>
        <description>The URI which identifies the group of JNs where the NameNodes will write/read edits.</description>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.ns</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        <description>The Java class that HDFS clients use to contact the Active NameNode.</description>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
            shell(/bin/true)
        </value>
        <description>
            A list of scripts or Java classes which will be used to fence the Active NameNode during a failover.
            sshfence - SSH to the Active NameNode and kill the process
            shell - run an arbitrary shell command to fence the Active NameNode
        </description>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
        <description>Set SSH private key file.</description>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
        <description>Automatic failover.</description>
    </property>
</configuration>
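
The sshfence method above relies on passwordless root SSH between the NameNode hosts using /root/.ssh/id_rsa. A sketch of setting that up (hostnames taken from this article):

ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa         # skip if the key already exists
for h in hadoop k8s-2 k8s-3; do
    ssh-copy-id -i /root/.ssh/id_rsa.pub root@$h      # every NameNode host must be reachable for fencing
done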

3.5 mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
    </property>
    <property>
        <name>mapreduce.map.env</name>
        <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
    </property>
    <property>
        <name>mapreduce.reduce.env</name>
        <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
    </property>
</configuration>

3.6 yarn-site.xml

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at
    http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>
<property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
    <description>Enable RM HA.</description>
</property>
<property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yrc</value>
    <description>Identifies the cluster.</description>
</property>
<property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2,rm3</value>
    <description>List of logical IDs for the RMs. e.g., "rm1,rm2".</description>
</property>
<property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>hadoop</value>
    <description>Set rm1 service addresses.</description>
</property>
<property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>k8s-2</value>
    <description>Set rm2 service addresses.</description>
</property>
<property>
    <name>yarn.resourcemanager.hostname.rm3</name>
    <value>k8s-3</value>
    <description>Set rm3 service addresses.</description>
</property>
<property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>hadoop:8088</value>
    <description>Set rm1 web application addresses.</description>
</property>
<property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>k8s-2:8088</value>
    <description>Set rm2 web application addresses.</description>
</property>
<property>
    <name>yarn.resourcemanager.webapp.address.rm3</name>
    <value>k8s-3:8088</value>
    <description>Set rm3 web application addresses.</description>
</property>
<property>
    <name>hadoop.zk.address</name>
    <value>hadoop:2181,k8s-2:2181,k8s-3:2181</value>
    <description>Address of the ZK-quorum.</description>
</property>
</configuration>

3.7 workers

hadoop
k8s-2
k8s-3
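
All of the configuration files above must be identical on the three machines. One way to keep them in sync, assuming they are edited on the master node first (hostnames are the ones used in this article):

for h in k8s-2 k8s-3; do
    scp -r /usr/local/hadoop/etc/hadoop/* root@$h:/usr/local/hadoop/etc/hadoop/
done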

Installing ZooKeeper

Version: zookeeper-3.6.4

Download the package from https://www.apache.org/dyn/closer.lua/zookeeper/zookeeper-3.6.4/apache-zookeeper-3.6.4-bin.tar.gz, then extract, configure, and install it on all three machines.

echo "1" > /data/zookeeperdata/myid # each machine uses a different id

zoo.cfg is configured as follows:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeperdata #data directory
dataLogDir=/data/zookeeperdata/logs #log directory
clientPort=2181 #client port
server.1=192.xxx.xxx.128:2888:3888
server.2=192.xxx.xxx.132:2888:3888
server.3=192.xxx.xxx.131:2888:3888
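
Each machine's myid must match its server.N line in zoo.cfg, and the data and log directories must exist before start-up. A per-host sketch (the install path is the one used later in this article):

mkdir -p /data/zookeeperdata/logs
echo "2" > /data/zookeeperdata/myid                        # "1" on server.1, "2" on server.2, "3" on server.3
/data/apache-zookeeper-3.6.4-bin/bin/zkServer.sh start
/data/apache-zookeeper-3.6.4-bin/bin/zkServer.sh status    # expect one leader and two followers across the quorum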

Environment Variable Configuration

vi /etc/profile

export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export HADOOP_CLASSPATH=`hadoop classpath`
source /etc/profile
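
The start/stop scripts used below (start-dfs.sh, start-yarn.sh, start-all.sh) live in $HADOOP_HOME/sbin, so to invoke them without full paths the PATH line can be extended, for example:

export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin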

Starting the Cluster

On all nodes, run rm -rf /usr/local/hadoop/dfs to delete the previously created storage directories, and on the master node run mkdir -p /usr/local/hadoop/dfs/name /usr/local/hadoop/dfs/data /usr/local/hadoop/dfs/journalnode to recreate them.

On all nodes, run rm -rf /usr/local/hadoop/tmp /usr/local/hadoop/logs && mkdir -p /usr/local/hadoop/tmp /usr/local/hadoop/logs to reset the temporary and log directories.

With the steps above, the Hadoop HA cluster is fully configured. When starting the HA cluster for the first time, run the following commands in order:

$ZOOKEEPER_HOME/bin/zkServer.sh start # start the ZooKeeper process (run on all nodes)
$HADOOP_HOME/bin/hdfs --daemon start journalnode # start the JournalNode processes that host the NameNode edit log (run on all nodes)
$HADOOP_HOME/bin/hdfs namenode -format # format the NameNode (run on the master node)
scp -r /usr/local/hadoop/dfs k8s-2:/usr/local/hadoop  # copy the formatted directory to k8s-2 (run on the master node)
scp -r /usr/local/hadoop/dfs k8s-3:/usr/local/hadoop  # copy the formatted directory to k8s-3 (run on the master node)
$HADOOP_HOME/bin/hdfs zkfc -formatZK # format the ZooKeeper Failover Controller state (run on the master node)
start-dfs.sh && start-yarn.sh # start the HDFS and YARN clusters (run on the master node)
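
As an aside, instead of copying the formatted dfs directory with scp, the standby NameNodes can be initialized with Hadoop's built-in bootstrap command, run once on k8s-2 and k8s-3 while the JournalNodes and the freshly formatted NameNode are running:

$HADOOP_HOME/bin/hdfs namenode -bootstrapStandby   # alternative to the scp steps above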

If it is not the first start of the HA cluster (a routine start), only the following commands are needed:

$ZOOKEEPER_HOME/bin/zkServer.sh start    # start the ZooKeeper process (run on all nodes)
start-all.sh # start HDFS and YARN (run on the master node); alternatively: $HADOOP_HOME/sbin/start-dfs.sh && $HADOOP_HOME/sbin/start-yarn.sh

After startup, running jps on each node should show 8 processes: NameNode, DataNode, ResourceManager, NodeManager, JournalNode, DFSZKFailoverController, QuorumPeerMain, and Jps.
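
To exercise YARN together with the MapReduce settings from mapred-site.xml, a small example job can be submitted from the master node. Note that the reduce phase depends on the NodeManager shuffle auxiliary service (yarn.nodemanager.aux-services set to mapreduce_shuffle), which the yarn-site.xml above does not configure, so this is a sketch under that assumption; the jar path assumes a standard Hadoop 3.3.0 layout.

hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.0.jar pi 2 10   # estimates pi with 2 maps x 10 samples each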

Check the web UIs:

http://192.xxx.xxx.128:9870/

http://192.xxx.xxx.128:8088/cluster/nodes

HDFS HA Verification

6.1 Check the NameNode state on each node

On the master node, run hdfs haadmin -getAllServiceState to list the state (active or standby) of each NameNode.

6.2 Verify HDFS high availability

With the HA cluster configured and successfully started, use hdfs haadmin -getAllServiceState on the master node to check the NameNode state on each node, then stop the NameNode process on the node that is currently active.

The active NameNode role automatically fails over to another node and the cluster remains available.

Then restart the NameNode process on that node and check the states again: HDFS HA works correctly, and no fail-back preemption occurs.
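
The same verification expressed as commands (a sketch; run the stop/start on whichever host hdfs haadmin reports as active, and note that hdfs --daemon is the Hadoop 3 way of controlling a single daemon):

hdfs haadmin -getAllServiceState          # find the active NameNode
hdfs --daemon stop namenode               # on the active node
hdfs haadmin -getAllServiceState          # the active role has moved to another NameNode
hdfs --daemon start namenode              # the restarted NameNode rejoins as standby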

YARN HA Verification

On the master node, use yarn rmadmin -getAllServiceState to check the ResourceManager state on each node, then stop the ResourceManager process on the active node. The active ResourceManager role automatically moves to another node and the cluster remains available. Then restart the ResourceManager process on that node and check the states again: the failed state recovers to standby.
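
The equivalent command sequence for YARN (again a sketch; stop and start the daemon on whichever host is reported as active):

yarn rmadmin -getAllServiceState          # find the active ResourceManager
yarn --daemon stop resourcemanager        # on the active node
yarn rmadmin -getAllServiceState          # another ResourceManager becomes active
yarn --daemon start resourcemanager       # the restarted ResourceManager rejoins as standby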

To shut down the cluster, run $HADOOP_HOME/sbin/stop-yarn.sh && $HADOOP_HOME/sbin/stop-dfs.sh (or stop-all.sh) on the master node, then stop ZooKeeper by running /data/apache-zookeeper-3.6.4-bin/bin/zkServer.sh stop on each of the three machines.
