1. Create the /home/hadoop/tools directory on the master node.
[hadoop@master ~]$ mkdir /home/hadoop/tools
[hadoop@master ~]$ cd /home/hadoop/tools
2. Upload the local script files to the /home/hadoop/tools directory.
[hadoop@master tools]$ rz deploy.conf
[hadoop@master tools]$ rz deploy.sh //shell script for distributing files
[hadoop@master tools]$ rz runRemoteCmd.sh //script for running commands remotely
[hadoop@master tools]$ ls
deploy.conf deploy.sh runRemoteCmd.sh
3. Review the contents of the deploy.conf configuration file.
[hadoop@master tools]$ cat deploy.conf
master,all,namenode,zookeeper,resourcemanager,
node1,all,slave,namenode,zookeeper,resourcemanager,
node2,all,slave,datanode,zookeeper,
Here all is the tag for every node, and slave is the tag for node1 and node2.
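The scripts below select hosts by wrapping the tag in commas before grepping, which is why every line in deploy.conf ends with a trailing comma. A minimal local sketch of that matching, using a temporary copy of the file above:

```shell
# write a temporary copy of the deploy.conf shown above
cat > /tmp/deploy.conf <<'EOF'
master,all,namenode,zookeeper,resourcemanager,
node1,all,slave,namenode,zookeeper,resourcemanager,
node2,all,slave,datanode,zookeeper,
EOF

# the scripts match ','tag',' so that e.g. "slave" cannot accidentally
# match a longer tag like "slave2"; this prints the hosts tagged slave
grep -v '^#' /tmp/deploy.conf | grep ',slave,' | awk -F',' '{print $1}'
```

This prints node1 and node2, the two hosts carrying the slave tag.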
4. Review the deploy.sh remote file-copy script.
[hadoop@master tools]$ cat deploy.sh
#!/bin/bash
#set -x
if [ $# -lt 3 ]
then
    echo "Usage: ./deploy.sh srcFile(or Dir) destFile(or Dir) MachineTag"
    echo "Usage: ./deploy.sh srcFile(or Dir) destFile(or Dir) MachineTag confFile"
    exit 1
fi

src=$1
dest=$2
tag=$3
if [ "a$4a" == "aa" ]
then
    confFile=/home/hadoop/tools/deploy.conf
else
    confFile=$4
fi

if [ -f "$confFile" ]
then
    if [ -f "$src" ]
    then
        for server in `cat $confFile | grep -v '^#' | grep ','$tag',' | awk -F',' '{print $1}'`
        do
            scp "$src" $server":"${dest}
        done
    elif [ -d "$src" ]
    then
        for server in `cat $confFile | grep -v '^#' | grep ','$tag',' | awk -F',' '{print $1}'`
        do
            scp -r "$src" $server":"${dest}
        done
    else
        echo "Error: source file $src does not exist"
    fi
else
    echo "Error: please specify a config file, or run deploy.sh with deploy.conf in the same directory"
fi
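The `[ "a$4a" == "aa" ]` test is a default-argument idiom: when no fourth argument is given, the two literal `a` characters meet and the script falls back to the bundled deploy.conf. A small standalone sketch of just that idiom (the three simulated arguments are hypothetical):

```shell
# simulate calling the script with only three arguments; $4 stays unset
set -- zookeeper /home/hadoop/app/ slave

# same default-config idiom as in deploy.sh
if [ "a$4a" = "aa" ]
then
    confFile=/home/hadoop/tools/deploy.conf   # default when $4 is empty
else
    confFile=$4
fi
echo "$confFile"    # prints /home/hadoop/tools/deploy.conf
```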
5. Review the runRemoteCmd.sh remote command execution script.
[hadoop@master tools]$ cat runRemoteCmd.sh
#!/bin/bash
#set -x
if [ $# -lt 2 ]
then
    echo "Usage: ./runRemoteCmd.sh Command MachineTag"
    echo "Usage: ./runRemoteCmd.sh Command MachineTag confFile"
    exit 1
fi

cmd=$1
tag=$2
if [ "a$3a" == "aa" ]
then
    confFile=/home/hadoop/tools/deploy.conf
else
    confFile=$3
fi

if [ -f "$confFile" ]
then
    for server in `cat $confFile | grep -v '^#' | grep ','$tag',' | awk -F',' '{print $1}'`
    do
        echo "*******************$server***************************"
        ssh $server "source /etc/profile; $cmd"
    done
else
    echo "Error: please specify a config file, or run runRemoteCmd.sh with deploy.conf in the same directory"
fi
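Because runRemoteCmd.sh fans a command out over ssh, a dry run that only prints the ssh invocations is a safe way to check which hosts a tag selects before executing anything. This sketch reuses the same tag-matching pipeline against a temporary copy of deploy.conf (the jps command and zookeeper tag are example choices):

```shell
# temporary copy of the deploy.conf shown earlier
confFile=/tmp/deploy.conf
cat > "$confFile" <<'EOF'
master,all,namenode,zookeeper,resourcemanager,
node1,all,slave,namenode,zookeeper,resourcemanager,
node2,all,slave,datanode,zookeeper,
EOF

cmd="jps"
tag="zookeeper"
# print, rather than execute, the ssh command for each matching host
for server in $(grep -v '^#' "$confFile" | grep ",$tag," | awk -F',' '{print $1}')
do
    echo "ssh $server 'source /etc/profile; $cmd'"
done
```

All three hosts carry the zookeeper tag, so three ssh commands are printed.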
6. Grant the owner execute permission on the script files in the current directory.
[hadoop@master tools]$ chmod u+x deploy.sh
[hadoop@master tools]$ chmod u+x runRemoteCmd.sh
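chmod u+x sets only the owner's execute bit. A throwaway sanity check on a scratch file (under a typical umask) shows the effect:

```shell
# create a scratch file and add the owner execute bit, as done above
touch /tmp/demo.sh
chmod u+x /tmp/demo.sh
ls -l /tmp/demo.sh | cut -c1-4    # owner triplet now includes x
```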
7. Add the /home/hadoop/tools directory to the PATH.
[hadoop@master tools]$ su root
Password:
[root@master tools]# vi /etc/profile
PATH=/home/hadoop/tools:$PATH
export PATH
[root@master tools]# source /etc/profile
[root@master tools]# exit
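The /etc/profile change takes effect in new login shells; the same two lines can also be applied directly to the current shell. A minimal sketch:

```shell
# prepend the tools directory for the current shell session only;
# the /etc/profile edit above makes it permanent for login shells
PATH=/home/hadoop/tools:$PATH
export PATH
echo "$PATH" | cut -d: -f1    # first PATH entry is now the tools dir
```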
8. Test run.
① On the master node, use the runRemoteCmd.sh script to create the software installation directory /home/hadoop/app on all nodes in one step.
[hadoop@master tools]$ runRemoteCmd.sh "mkdir /home/hadoop/app" all
② Distribute the zookeeper directory from the master node to node1 and node2.
[hadoop@master app]$ deploy.sh zookeeper /home/hadoop/app/ slave