
A Simple HiBench Tutorial

麹飞航
2023-12-01

1. Preparation

Hibench-7.0 https://github.com/Intel-bigdata/HiBench/archive/HiBench-7.0.tar.gz

Hadoop-2.7.5 https://github.com/apache/hadoop/archive/rel/release-2.7.5.tar.gz

spark-2.3.0-bin-hadoop2.7 https://archive.apache.org/dist/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz

2. Setup

(1) Build HiBench
mvn -Phadoopbench -Psparkbench -Dspark=2.1 -Dscala=2.11 clean package
(2) Edit hadoop.conf under HiBench/conf
cat hadoop.conf.template > hadoop.conf  # materialize the config from its template
vim hadoop.conf

Set hibench.hadoop.home to the hadoop-2.7.5 install path
Set hibench.hdfs.master to the HDFS address and port, e.g. hdfs://localhost:8020
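The two edits above just point two template keys at local paths. A minimal self-contained sketch of that flow, where a scratch directory stands in for HiBench/conf and both the placeholder and the install path /app/hadoop-2.7.5 are examples, not your real layout:

```shell
#!/bin/sh
# Scratch dir stands in for HiBench/conf; paths below are examples only.
set -e
dir=$(mktemp -d)
cat > "$dir/hadoop.conf.template" <<'EOF'
hibench.hadoop.home      /PATH/TO/YOUR/HADOOP/HOME
hibench.hdfs.master      hdfs://localhost:8020
EOF
# Same redirection step as above: template -> working config
cat "$dir/hadoop.conf.template" > "$dir/hadoop.conf"
# Point hibench.hadoop.home at the real install (example path):
sed -i 's|/PATH/TO/YOUR/HADOOP/HOME|/app/hadoop-2.7.5|' "$dir/hadoop.conf"
grep hibench.hadoop.home "$dir/hadoop.conf"
```

In practice you would make the same edit in vim; the sed line only makes the change explicit.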

(3) Edit spark.conf under HiBench/conf
cat spark.conf.template > spark.conf
vim spark.conf

Set hibench.spark.home to the spark-2.3.0-bin-hadoop2.7 path
Set hibench.spark.master to your cluster's master URL, e.g. mesos://localhost:1080 (only an example; adjust to your setup)
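For reference, the edited section of spark.conf typically ends up looking like this (the install path and master URL are examples; substitute your own):

```
hibench.spark.home      /app/spark-2.3.0-bin-hadoop2.7
hibench.spark.master    mesos://localhost:1080
```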

(4) Edit the files under hadoop-2.7.5/etc/hadoop

4.1 core-site.xml

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/app/hadoop-2.7.5/tmp</value>
    </property>
</configuration>

4.2 yarn-site.xml

<configuration>
<property>
   <name>yarn.resourcemanager.hostname</name>
   <value>hostname</value>
</property>
<property>
   <name>yarn.nodemanager.aux-services</name>
   <value>mapreduce_shuffle</value>
</property>
<property>
   <name>yarn.nodemanager.disk-health-checker.min-healthy-disks</name>
   <value>0.0</value>
</property>
<property>
    <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
    <value>100.0</value>
</property>
</configuration>

4.3 hadoop-env.sh

export JAVA_HOME=/usr/local/jdk1.8.0_191

4.4 mapred-site.xml

<configuration>
<property>
   <name>mapreduce.framework.name</name>
   <value>yarn</value>
</property>
</configuration>

4.5 hdfs-site.xml

<configuration>
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
</configuration>

4.6 Configure passwordless SSH login for Hadoop
ssh-copy-id username@host-ip
4.7 Configure /etc/profile

export JAVA_HOME=/usr/java/jdk1.8.0_191
export HADOOP_HOME=/app/hadoop-2.7.5
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

4.8 Start Hadoop
hadoop namenode -format  # only needed once, before the first start, to format HDFS

cd $HADOOP_HOME/sbin
./start-all.sh

4.9 Create a user directory

hadoop fs -mkdir /user
hadoop fs -mkdir /user/lemaker

4.10 Leave safe mode

bin/hadoop dfsadmin -safemode leave

3. Testing

bin/workloads/micro/wordcount/prepare/prepare.sh
bin/workloads/micro/wordcount/spark/run.sh
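After run.sh finishes, HiBench appends one summary line per run to report/hibench.report under the HiBench root. A self-contained sketch of reading it, using a temp file with made-up numbers in place of the real report (the column layout matches the report's header):

```shell
#!/bin/sh
# Temp file stands in for <HiBench>/report/hibench.report; values are made up.
set -e
report=$(mktemp)
cat > "$report" <<'EOF'
Type Date Time Input_data_size Duration(s) Throughput(bytes/s) Throughput/node
ScalaSparkWordcount 2023-12-01 10:00:00 32000 12.3 2601 2601
EOF
# Print workload name and throughput for each run (skip the header line):
awk 'NR>1 {print $1, $6}' "$report"
```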