Giraph Configuration Tutorial
I. Setting up a Hadoop 2.7 distributed cluster
1. See: Hadoop 2.7 distributed cluster environment setup
2. Add the following properties inside the <configuration> element of mapred-site.xml:
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>4</value>
</property>
<property>
  <name>mapred.map.tasks</name>
  <value>4</value>
</property>
Note: by default, Hadoop allows only two map tasks to run at the same time, but Giraph's code assumes that four mappers can run simultaneously. For this single-node, pseudo-distributed deployment we therefore add the two properties above to mapred-site.xml to reflect that requirement; otherwise some of the unit tests will fail.
Add the following to hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop/tmp/dfs/data</value>
  </property>
</configuration>
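If this is a fresh deployment and the directories configured above have never been used, the NameNode has to be formatted once before the first start. A minimal sketch (this wipes any existing HDFS metadata under dfs.namenode.name.dir, so only run it on a new cluster):
hdfs namenode -format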
3. Start Hadoop
start-all.sh
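To confirm that the daemons came up, jps should list at least NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager on this single-node setup:
jps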
II. Installing Maven and adding a mirror
See the blog post: Installing Maven on Linux and configuring the Aliyun mirror
III. Giraph configuration
1. Download Giraph
cd /usr/local
git clone https://github.com/apache/giraph.git
2. Build
cd giraph
mvn -Phadoop_2 -Dhadoop.version=2.7.6 -DskipTests clean package
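If the build succeeds, the examples jar used in the run commands below should appear under giraph-examples/target. A quick check (the exact file name depends on the Giraph snapshot version built):
ls giraph-examples/target/*-for-hadoop-2.7.6-jar-with-dependencies.jar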
IV. Running Giraph
1. Create tiny_graph.txt with the following contents:
[0,0,[[1,1],[3,3]]]
[1,0,[[0,1],[2,2],[3,1]]]
[2,0,[[1,2],[4,4]]]
[3,0,[[0,3],[1,1],[4,4]]]
[4,0,[[3,4],[2,4]]]
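Each line follows the JSON layout expected by JsonLongDoubleFloatDoubleVertexInputFormat, roughly:
[source_id, source_value, [[dest_id, edge_value], ...]]
For example, the first line describes vertex 0 with value 0 and outgoing edges to vertices 1 and 3 with weights 1 and 3.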
2. Copy the file to HDFS:
cd /usr/local
hdfs dfs -put tiny_graph.txt /
hdfs dfs -ls /    # check that the copy succeeded
3. Submit the job
Single-node run:
hadoop jar /usr/local/giraph/giraph-examples/target/giraph-examples-1.3.0-SNAPSHOT-for-hadoop-2.7.6-jar-with-dependencies.jar \
  org.apache.giraph.GiraphRunner org.apache.giraph.examples.SimpleShortestPathsComputation \
  -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat -vip /tiny_graph.txt \
  -vof org.apache.giraph.io.formats.IdWithValueTextOutputFormat -op /shortestpaths \
  -w 1 -ca giraph.SplitMasterWorker=false
Cluster run:
hadoop jar /usr/local/giraph/giraph-examples/target/giraph-examples-1.3.0-SNAPSHOT-for-hadoop-2.7.6-jar-with-dependencies.jar \
  org.apache.giraph.GiraphRunner org.apache.giraph.examples.SimpleShortestPathsComputation \
  -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat -vip /tiny_graph.txt \
  -vof org.apache.giraph.io.formats.IdWithValueTextOutputFormat -op /shortestpaths \
  -w 6 -ca mapred.job.tracker=192.168.1.1
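For reference, both commands pass the same GiraphRunner options: -vif/-vof select the vertex input/output format classes, -vip is the input path in HDFS, -op is the output directory, -w is the number of workers, and -ca passes a custom configuration argument (here disabling the master/worker split for the single-node run, and pointing at the job tracker address for the cluster run).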
4. View the results
hdfs dfs -cat /shortestpaths/p* | less
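With the default source vertex of SimpleShortestPathsComputation (vertex id 1), the output should look roughly like the following (vertex id, then the shortest distance from vertex 1):
0	1.0
1	0.0
2	2.0
3	1.0
4	5.0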
Appendix
Log port: master:8082