core-site.xml
1) fs.defaultFS — sets the default filesystem URI (hdfs://.....)
2) ha.zookeeper.quorum — sets the ZooKeeper quorum for HA (hadoop1:2181,hadoop2:2181,...)
3) hadoop.tmp.dir — sets the temporary file path (/opt/hadoop)
hdfs-site.xml
1) dfs.nameservices — sets the nameservice ID (any name you like)
2) dfs.ha.namenodes... — sets the HA NameNode aliases for the nameservice (e.g. nn1)
3) dfs.namenode.rpc-address....nn1 — sets the HA RPC address (hadoop1:8020)
4) Same as above, but for nn2
5) dfs.namenode.http-address......nn1 — sets the HA web address (hadoop1:50070)
6) dfs.namenode.shared.edits.dir — specifies where the NameNode edit logs are stored on the JournalNodes (qjournal://hadoop2:8485;....;..../sxt)
7) dfs.client.failover.proxy.provider.sxt — specifies the Java class HDFS clients use to connect to the active NameNode (org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider)
8) dfs.ha.fencing.methods — configures SSH as the fencing mechanism to prevent split-brain (sshfence)
9) dfs.ha.fencing.ssh.private-key-files — specifies the private-key location (/root/.ssh/id_dsa)
10) dfs.journalnode.edits.dir — specifies the JournalNode edit-log storage path (/opt/hadoop/data)
11) dfs.ha.automatic-failover.enabled — enables automatic failover (true)
yarn-site.xml
1) yarn.nodemanager.aux-services (mapreduce_shuffle)
2) Enable ResourceManager HA (the default is false)
yarn.resourcemanager.ha.enabled (true)
3) Declare the addresses of the two ResourceManagers
yarn.resourcemanager.cluster-id (rmcluster)
yarn.resourcemanager.ha.rm-ids (rm1,rm2)
yarn.resourcemanager.hostname.rm1 (master)
yarn.resourcemanager.hostname.rm2 (master2)
4) Specify the ZooKeeper cluster address
yarn.resourcemanager.zk-address (slave1:2181,slave2:2181,slave3:2181)
5) Enable automatic recovery: if the RM fails while jobs are running, their state can be recovered (the default is false)
yarn.resourcemanager.recovery.enabled (true)
6) Store the ResourceManager state in the ZooKeeper cluster (by default it is stored in the FileSystem)
yarn.resourcemanager.store.class (org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore)
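All of the key/value pairs listed above take the same shape in Hadoop's XML configuration files. As a minimal sketch (the property names come from this document; `to_hadoop_xml` is an illustrative helper, not a Hadoop API), this renders such pairs into the `<property>` blocks shown in the sections below:

```python
import xml.etree.ElementTree as ET

def to_hadoop_xml(props):
    """Render a dict of Hadoop settings as a <configuration> XML string."""
    conf = ET.Element("configuration")
    for name, value in props.items():
        prop = ET.SubElement(conf, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = value
    return ET.tostring(conf, encoding="unicode")

print(to_hadoop_xml({
    "fs.defaultFS": "hdfs://ns1",
    "ha.zookeeper.quorum": "slave1:2181,slave2:2181,slave3:2181",
}))
```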
Edit core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://ns1</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/modules/App/hadoop-2.5.0/data/tmp</value>
</property>
<property>
<name>hadoop.http.staticuser.user</name>
<value>hadoop</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>slave1:2181,slave2:2181,slave3:2181</value>
</property>
</configuration>
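To sanity-check a file like the one above before restarting the cluster, the properties can be read back with Python's standard `xml.etree` (a sketch; `read_conf` is an illustrative helper and the inline `CORE_SITE` string stands in for the real file):

```python
import xml.etree.ElementTree as ET

CORE_SITE = """\
<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://ns1</value></property>
  <property><name>ha.zookeeper.quorum</name><value>slave1:2181,slave2:2181,slave3:2181</value></property>
</configuration>"""

def read_conf(xml_text):
    """Parse Hadoop-style configuration XML into a {name: value} dict."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value") for p in root.iter("property")}

conf = read_conf(CORE_SITE)
print(conf["fs.defaultFS"])  # hdfs://ns1
```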
Edit hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>ns1</value>
</property>
<property>
<name>dfs.blocksize</name>
<value>134217728</value>
</property>
<property>
<name>dfs.ha.namenodes.ns1</name>
<value>nn1,nn2</value>
</property>
<!-- RPC address of nn1 (the host where nn1 runs) -->
<property>
<name>dfs.namenode.rpc-address.ns1.nn1</name>
<value>master:8020</value>
</property>
<!-- HTTP address of nn1 (for external web access) -->
<property>
<name>dfs.namenode.http-address.ns1.nn1</name>
<value>master:50070</value>
</property>
<!-- RPC address of nn2 (the host where nn2 runs) -->
<property>
<name>dfs.namenode.rpc-address.ns1.nn2</name>
<value>master2:8020</value>
</property>
<!-- HTTP address of nn2 (for external web access) -->
<property>
<name>dfs.namenode.http-address.ns1.nn2</name>
<value>master2:50070</value>
</property>
<!-- Where the NameNode edit logs are stored on the JournalNodes (usually co-located with ZooKeeper) -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://slave1:8485;slave2:8485;slave3:8485/ns1</value>
</property>
<!-- Local disk path where each JournalNode stores its data -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/opt/modules/hadoop-2.5.0-cdh5.3.6/data/journal</value>
</property>
<!-- Proxy class through which HDFS clients access the filesystem; it determines which NameNode is currently active -->
<property>
<name>dfs.client.failover.proxy.provider.ns1</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing method used during failover; several methods exist (see the official documentation). This one logs in over SSH and kills the old active NameNode -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<!-- Passwordless SSH key, needed only when using the sshfence mechanism -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hadoop/.ssh/id_rsa</value>
</property>
<!-- Timeout for the sshfence mechanism; like the property above, it can be omitted if you fail over with a script instead -->
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
<!-- Enables automatic failover; can be left unset if you do not need it -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
</configuration>
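The HA properties above compose: `dfs.ha.namenodes.ns1` lists the NameNode IDs, and `dfs.namenode.rpc-address.ns1.<id>` maps each ID to a host:port. This is essentially the lookup the ConfiguredFailoverProxyProvider performs on the client side before trying each candidate in turn. A sketch under those assumptions (`namenode_rpc_addresses` and the plain dict are illustrative, not Hadoop APIs):

```python
def namenode_rpc_addresses(conf, nameservice):
    """List the candidate NameNode RPC addresses for an HA nameservice."""
    nn_ids = conf[f"dfs.ha.namenodes.{nameservice}"].split(",")
    return [conf[f"dfs.namenode.rpc-address.{nameservice}.{nn_id.strip()}"]
            for nn_id in nn_ids]

hdfs_conf = {
    "dfs.nameservices": "ns1",
    "dfs.ha.namenodes.ns1": "nn1,nn2",
    "dfs.namenode.rpc-address.ns1.nn1": "master:8020",
    "dfs.namenode.rpc-address.ns1.nn2": "master2:8020",
}
print(namenode_rpc_addresses(hdfs_conf, "ns1"))  # ['master:8020', 'master2:8020']
```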
Rename mapred-site.xml.template to mapred-site.xml and edit it
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
</configuration>
Configure yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Site specific YARN configuration properties -->
<!-- Enable ResourceManager HA (the default is false) -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<!-- Declare the addresses of the two ResourceManagers -->
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>rmcluster</value>
</property>
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>master</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>master2</value>
</property>
<!-- Specify the ZooKeeper cluster address -->
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>slave1:2181,slave2:2181,slave3:2181</value>
</property>
<!-- Enable automatic recovery: if the RM fails while jobs are running, their state can be recovered; the default is false -->
<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property>
<!-- Store the ResourceManager state in the ZooKeeper cluster; by default it is stored in the FileSystem -->
<property>
<name>yarn.resourcemanager.store.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
</configuration>
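The RM HA properties compose the same way as the HDFS ones: `yarn.resourcemanager.ha.rm-ids` lists the RM IDs, and `yarn.resourcemanager.hostname.<id>` maps each to a host; clients try these in turn until they reach the active ResourceManager. A sketch of that resolution (`rm_addresses` and the plain dict are illustrative, not YARN APIs):

```python
def rm_addresses(conf):
    """Resolve the ResourceManager hostnames declared for RM HA."""
    rm_ids = conf["yarn.resourcemanager.ha.rm-ids"].split(",")
    return {rm_id: conf[f"yarn.resourcemanager.hostname.{rm_id}"]
            for rm_id in rm_ids}

yarn_conf = {
    "yarn.resourcemanager.ha.enabled": "true",
    "yarn.resourcemanager.ha.rm-ids": "rm1,rm2",
    "yarn.resourcemanager.hostname.rm1": "master",
    "yarn.resourcemanager.hostname.rm2": "master2",
}
print(rm_addresses(yarn_conf))  # {'rm1': 'master', 'rm2': 'master2'}
```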