Versions used in this tutorial: hadoop-2.8.1.tar.gz, zookeeper-3.4.10.tar.gz; Linux: CentOS 7 x64 (CentOS-7-x86_64-DVD-1708)

Preparation:

Cluster plan:

| Hostname | IP | Installed software | Running processes |
| --- | --- | --- | --- |
| node1 | 192.168.66.3 | jdk, hadoop | NameNode, DFSZKFailoverController (zkfc) |
| node2 | 192.168.66.4 | jdk, hadoop | NameNode, DFSZKFailoverController (zkfc) |
| node3 | 192.168.66.5 | jdk, hadoop | ResourceManager |
| node4 | 192.168.66.6 | jdk, hadoop | ResourceManager |
| node5 | 192.168.66.7 | jdk, hadoop, zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |
| node6 | 192.168.66.8 | jdk, hadoop, zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |
| node7 | 192.168.66.9 | jdk, hadoop, zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |

Set the hostname

Run this on every node, substituting that node's name (node1 here):
$hostnamectl set-hostname node1
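To confirm the change took effect (log out and back in to refresh the shell prompt):

hostnamectl status # "Static hostname" should now show node1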

Configure the IP

$cd /etc/sysconfig/network-scripts
Edit ifcfg-ens33 (the interface here is ens33, matching NAME/DEVICE below):


TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPADDR=192.168.66.3 # static IP
GATEWAY=192.168.66.2 # default gateway
NETMASK=255.255.255.0 # netmask
DNS1=192.168.66.2
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="1a7840e2-4b8c-4770-9537-c1aa9ddf42e2"
DEVICE="ens33"
ONBOOT="yes"
service network restart
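After the restart, a quick sanity check (interface name and addresses match the config above):

ip addr show ens33 # should list inet 192.168.66.3/24
ping -c 3 192.168.66.2 # the gateway should answer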

Edit /etc/hosts

$vi /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.66.3 node1
192.168.66.4 node2
192.168.66.5 node3
192.168.66.6 node4
192.168.66.7 node5
192.168.66.8 node6
192.168.66.9 node7
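Every node needs the same entries; a minimal sketch for pushing the file out from node1 (scp will ask for a password until SSH keys are set up below):

for h in node2 node3 node4 node5 node6 node7; do
  scp /etc/hosts root@"$h":/etc/hosts
done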

Disable the firewall

systemctl stop firewalld.service # stop firewalld
systemctl disable firewalld.service # keep firewalld from starting at boot
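To confirm on each node:

firewall-cmd --state # should print "not running"
systemctl is-enabled firewalld # should print "disabled"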

Passwordless SSH login

  1. First configure passwordless login from node1 to node2, node3, node4, node5, node6, and node7. Generate a key pair on node1:
ssh-keygen -t rsa
  2. Copy the public key to every node, including node1 itself:
ssh-copy-id node1
ssh-copy-id node2
ssh-copy-id node3
ssh-copy-id node4
ssh-copy-id node5
ssh-copy-id node6
ssh-copy-id node7
  3. Configure passwordless login from node3 to node4, node5, node6, and node7. Generate a key pair on node3:
ssh-keygen -t rsa

Copy the public key to the other nodes:

ssh-copy-id node4
ssh-copy-id node5
ssh-copy-id node6
ssh-copy-id node7

**Note:** the two NameNodes must have passwordless SSH to each other, so don't forget to configure login from node2 to node1. Generate a key pair on node2 and copy it over:

ssh-keygen -t rsa
ssh-copy-id node1
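To verify, a quick loop from node1 (repeat from node3 and node2 for their targets); each ssh should print the remote hostname without a password prompt:

for h in node1 node2 node3 node4 node5 node6 node7; do
  ssh "$h" hostname
done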

Install the JDK & configure environment variables

Download jdk-8u152-linux-x64.tar.gz (Linux x64, 180.99 MB); the JAVA_HOME below assumes it was extracted under /home/ning/app.

$vi /etc/profile
JAVA_HOME=/home/ning/app/jdk1.8.0_152
JRE_HOME=$JAVA_HOME/jre
HADOOP_HOME=/home/ning/app/hadoop-2.8.1
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME
export JRE_HOME
export HADOOP_HOME
export PATH
export CLASSPATH
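Note that HADOOP_HOME must be assigned before PATH references it. Reload the profile and verify:

source /etc/profile
java -version # should report 1.8.0_152
echo $HADOOP_HOME # should print /home/ning/app/hadoop-2.8.1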

Configure ZooKeeper

Install and configure the ZooKeeper cluster (on node5 first)

  1. Extract:
tar -zxvf zookeeper-3.4.10.tar.gz -C /home/ning/app/
  2. Edit the configuration:
cd /home/ning/app/zookeeper-3.4.10/conf/
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg

Set dataDir=/home/ning/app/zookeeper-3.4.10/tmp, then append at the end:

server.1=node5:2888:3888
server.2=node6:2888:3888
server.3=node7:2888:3888
  3. Save and exit.
  4. Create the tmp directory and write this node's id:

    mkdir /home/ning/app/zookeeper-3.4.10/tmp
    echo 1 > /home/ning/app/zookeeper-3.4.10/tmp/myid
  5. Copy the configured zookeeper to node6 and node7 (make sure /home/ning/app exists on both first):
scp -r /home/ning/app/zookeeper-3.4.10/ node6:/home/ning/app/
scp -r /home/ning/app/zookeeper-3.4.10/ node7:/home/ning/app/

Note: update /home/ning/app/zookeeper-3.4.10/tmp/myid on node6 and node7 so every node has a unique id. On node6:

echo 2 > /home/ning/app/zookeeper-3.4.10/tmp/myid

On node7:

echo 3 > /home/ning/app/zookeeper-3.4.10/tmp/myid
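A quick sanity check that each node got a unique id (a sketch relying on the passwordless SSH set up earlier):

for h in node5 node6 node7; do
  echo "$h: $(ssh "$h" cat /home/ning/app/zookeeper-3.4.10/tmp/myid)" # expect 1, 2, 3
done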

Install and configure the Hadoop cluster (on node1)

  1. Extract:
tar -zxvf hadoop-2.8.1.tar.gz -C /home/ning/app/
  2. Configure HDFS (in Hadoop 2.x all configuration files live under $HADOOP_HOME/etc/hadoop). Add hadoop to the environment if you skipped that earlier:
  vi /etc/profile
  export JAVA_HOME=/home/ning/app/jdk1.8.0_152
  export HADOOP_HOME=/home/ning/app/hadoop-2.8.1
  export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
  3. Edit hadoop-env.sh and set export JAVA_HOME=/home/ning/app/jdk1.8.0_152

core-site.xml

<configuration>
	<!-- Set the HDFS nameservice to ns1 -->
	<property>
		<name>fs.defaultFS</name>
		<value>hdfs://ns1/</value>
	</property>
	<!-- Hadoop temp directory -->
	<property>
		<name>hadoop.tmp.dir</name>
		<value>/home/ning/app/allAppFiles/hadoop/tmpDir/</value>
	</property>
	<!-- ZooKeeper quorum address -->
	<property>
		<name>ha.zookeeper.quorum</name>
		<value>node5:2181,node6:2181,node7:2181</value>
	</property>
</configuration>

hdfs-site.xml

<configuration>
	<!-- HDFS nameservice; must match fs.defaultFS in core-site.xml -->
	<property>
		<name>dfs.nameservices</name>
		<value>ns1</value>
	</property>
	<!-- ns1 has two NameNodes: nn1 and nn2 -->
	<property>
		<name>dfs.ha.namenodes.ns1</name>
		<value>nn1,nn2</value>
	</property>
	
	<!-- RPC address of nn1 -->
	<property>
		<name>dfs.namenode.rpc-address.ns1.nn1</name>
		<value>node1:9000</value>
	</property>
	<!-- HTTP address of nn1 -->
	<property>
		<name>dfs.namenode.http-address.ns1.nn1</name>
		<value>node1:50070</value>
	</property>
	
	<!-- RPC address of nn2 -->
	<property>
		<name>dfs.namenode.rpc-address.ns1.nn2</name>
		<value>node2:9000</value>
	</property>
	<!-- HTTP address of nn2 -->
	<property>
		<name>dfs.namenode.http-address.ns1.nn2</name>
		<value>node2:50070</value>
	</property>
	
	<!-- Where the NameNode edits metadata is stored on the JournalNodes -->
	<property>
		<name>dfs.namenode.shared.edits.dir</name>
		<value>qjournal://node5:8485;node6:8485;node7:8485/ns1</value>
	</property>
	<!-- Where each JournalNode keeps its data on local disk -->
	<property>
		<name>dfs.journalnode.edits.dir</name>
		<value>/home/ning/app/allAppFiles/journalData</value>
	</property>
	
	<!-- Enable automatic NameNode failover -->
	<property>
		<name>dfs.ha.automatic-failover.enabled</name>
		<value>true</value>
	</property>
	<!-- Failover proxy provider clients use to find the active NameNode -->
	<property>
		<name>dfs.client.failover.proxy.provider.ns1</name>
		<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
	</property>
	<!-- Fencing methods, newline-separated: one method per line -->
	<property>
		<name>dfs.ha.fencing.methods</name>
		<value>
		sshfence
		shell(/bin/true)
		</value>
	</property>
	<!-- sshfence needs passwordless SSH; point to the private key -->
	<property>
		<name>dfs.ha.fencing.ssh.private-key-files</name>
		<value>/home/ning/.ssh/id_rsa</value>
	</property>
	<!-- sshfence connection timeout: 30s -->
	<property>
		<name>dfs.ha.fencing.ssh.connect-timeout</name>
		<value>30000</value>
	</property>
</configuration>

mapred-site.xml

<configuration>
	<!-- Run MapReduce on YARN -->
	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
	</property>
</configuration>

yarn-site.xml

<configuration>
	<!-- Site specific YARN configuration properties -->
	<!-- Enable ResourceManager HA -->
	<property>
		<name>yarn.resourcemanager.ha.enabled</name>
		<value>true</value>
	</property>
	<!-- RM cluster id -->
	<property>
		<name>yarn.resourcemanager.cluster-id</name>
		<value>yrc</value>
	</property>
	<!-- Logical ids of the two RMs -->
	<property>
		<name>yarn.resourcemanager.ha.rm-ids</name>
		<value>rm1,rm2</value>
	</property>
	<!-- Hostname of each RM -->
	<property>
		<name>yarn.resourcemanager.hostname.rm1</name>
		<value>node3</value>
	</property>
	<property>
		<name>yarn.resourcemanager.hostname.rm2</name>
		<value>node4</value>
	</property>
	<!-- ZooKeeper quorum address -->
	<property>
		<name>yarn.resourcemanager.zk-address</name>
		<value>node5:2181,node6:2181,node7:2181</value>
	</property>
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>
</configuration>

Edit slaves

Because HDFS is started from node1 and YARN from node3, the slaves file on node1 specifies where the DataNodes run, while the slaves file on node3 specifies where the NodeManagers run:

node5
node6
node7
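The configured hadoop directory must also exist on every other node before anything is started; a minimal sketch for distributing it from node1 (assuming the same /home/ning/app layout everywhere):

for h in node2 node3 node4 node5 node6 node7; do
  scp -r /home/ning/app/hadoop-2.8.1/ "$h":/home/ning/app/
done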

Startup

  1. Start the ZooKeeper cluster (on node5, node6, and node7):
cd /home/ning/app/zookeeper-3.4.10/bin/
./zkServer.sh start

Check the status: there should be one leader and two followers.

./zkServer.sh status
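To check all three in one go (the Mode line should read leader on exactly one node and follower on the other two):

for h in node5 node6 node7; do
  ssh "$h" /home/ning/app/zookeeper-3.4.10/bin/zkServer.sh status
done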
  2. Start the JournalNodes (on node5, node6, and node7):
cd /home/ning/app/hadoop-2.8.1
sbin/hadoop-daemon.sh start journalnode

Run jps to verify: node5, node6, and node7 should each now show a JournalNode process.
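Or check all three from node1 (assuming jps is on the PATH of a non-interactive shell on each node):

for h in node5 node6 node7; do
  echo "$h:"; ssh "$h" jps | grep JournalNode
done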

  3. Format HDFS. On node1, run:

    hdfs namenode -format
    

    Formatting generates files under the hadoop.tmp.dir configured in core-site.xml (here /home/ning/app/allAppFiles/hadoop/tmpDir/). Copy that directory to the same location on node2 so the standby NameNode starts from the same metadata:

    scp -r /home/ning/app/allAppFiles/hadoop/tmpDir/ node2:/home/ning/app/allAppFiles/hadoop/
    

    Alternatively, and recommended: run hdfs namenode -bootstrapStandby on node2 instead of copying.

  4. Format ZKFC (run once, on node1):

hdfs zkfc -formatZK
  5. Start HDFS (on node1):
sbin/start-dfs.sh
  6. Start YARN. Note: run start-yarn.sh on node3. The NameNode and ResourceManager are kept on separate machines for performance, since both consume a lot of resources, and being on separate machines they must each be started on their own host:
sbin/start-yarn.sh

On node4, start the standby ResourceManager manually:

yarn-daemon.sh start resourcemanager

Verification

Open in a browser:

http://node1:50070 shows NameNode ‘node1:9000’ (active)
http://node2:50070 shows NameNode ‘node2:9000’ (standby)
http://node4:8088 shows the YARN ResourceManager web UI
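Two command-line checks round out the verification (a sketch; the examples jar path matches the stock hadoop-2.8.1 layout):

hdfs haadmin -getServiceState nn1 # prints active or standby
hdfs haadmin -getServiceState nn2
hadoop jar /home/ning/app/hadoop-2.8.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.1.jar pi 2 5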