Question:

Hadoop NameNode, DataNode, and SecondaryNameNode fail to start

吕嘉荣
2023-03-14

I just downloaded the Hadoop-0.20 tarball and extracted it. I set JAVA_HOME and HADOOP_HOME, and modified core-site.xml, hdfs-site.xml, and mapred-site.xml.

I started the services, then checked with jps. It showed only:

 Jps
 JobTracker
 TaskTracker

I checked the logs. The NameNode log says:

 2015-02-11 18:07:52,278 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:

 /************************************************************
 STARTUP_MSG: Starting NameNode
 STARTUP_MSG:   host = scspn0022420004.lab.eng.btc.netapp.in/10.72.40.68
 STARTUP_MSG:   args = []
 STARTUP_MSG:   version = 0.20.0
 STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.20 -r 763504; compiled by 'ndaley' on Thu Apr  9 05:18:40 UTC 2009
 ************************************************************/
 2015-02-11 18:07:52,341 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.NullPointerException
    at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:175)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:955)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:964)

 2015-02-11 18:07:52,346 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
 /************************************************************
 SHUTDOWN_MSG: Shutting down NameNode at scspn0022420004.lab.eng.btc.netapp.in/10.72.40.68
 ************************************************************/
My core-site.xml:

 <configuration>
  <property>
   <name>fs.defaultFS</name>
   <value>hdfs://localhost:8020</value>
  </property>
 </configuration>
My hdfs-site.xml:

 <configuration>
  <property>
   <name>dfs.replication</name>
   <value>1</value>
  </property>
  <!-- Immediately exit safemode as soon as one DataNode checks in.
       On a multi-node cluster, these configurations must be removed. -->
  <property>
   <name>dfs.safemode.extension</name>
   <value>0</value>
  </property>
  <property>
   <name>dfs.safemode.min.datanodes</name>
   <value>1</value>
  </property>
  <property>
   <name>hadoop.tmp.dir</name>
   <value>/var/lib/hadoop-hdfs/cache/${user.name}</value>
  </property>
  <property>
   <name>dfs.namenode.name.dir</name>
   <value>file:///var/lib/hadoop-hdfs/cache/${user.name}/dfs/name</value>
  </property>
  <property>
   <name>dfs.namenode.checkpoint.dir</name>
   <value>file:///var/lib/hadoop-hdfs/cache/${user.name}/dfs/namesecondary</value>
  </property>
  <property>
   <name>dfs.datanode.data.dir</name>
   <value>file:///var/lib/hadoop-hdfs/cache/${user.name}/dfs/data</value>
  </property>
 </configuration>
My mapred-site.xml:

 <configuration>
  <property>
   <name>mapred.job.tracker</name>
   <value>localhost:8021</value>
  </property>
 </configuration>
The SecondaryNameNode fails the same way on startup:

 localhost: starting secondarynamenode, logging to /root/hadoop/hadoop-0.20.0/bin/../logs/hadoop-root-secondarynamenode-hostname.out
 localhost: Exception in thread "main" java.lang.NullPointerException
 localhost:      at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
 localhost:      at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
 localhost:      at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)
 localhost:      at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:131)
 localhost:      at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:115)
 localhost:      at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:469)

1 answer

岳晟
2023-03-14

I think you have not set up your Hadoop cluster correctly. The NullPointerException in NetUtils.createSocketAddr means the NameNode address resolved to null: Hadoop 0.20 reads that address from fs.default.name, but your core-site.xml only sets the newer fs.defaultFS key, which 0.20 ignores.
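As a quick check, you can grep your config for the key that 0.20 actually reads (a small sketch; the conf path assumes the 0.20 layout under /root/hadoop/hadoop-0.20.0 that appears in your log):

 # Hadoop 0.20 resolves the NameNode address from fs.default.name.
 # If only fs.defaultFS is defined, the address stays null and
 # NetUtils.createSocketAddr throws the NullPointerException above.
 grep -n "fs.default" /root/hadoop/hadoop-0.20.0/conf/core-site.xml

If that grep shows only fs.defaultFS, that confirms the diagnosis. The steps below set the cluster up with the 0.20-era key names: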

Step 1: Start by setting up .bashrc:

vi $HOME/.bashrc

Put the following lines at the end of the file (change HADOOP_HOME to your own path):

# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop

# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/lib/jvm/java-6-sun

# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"

# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
    hadoop fs -cat $1 | lzop -dc | head -1000 | less
}

# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin
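After saving the file, reload the shell and run a quick sanity check (a minimal sketch, assuming the paths set above):

 # Reload the environment and confirm the hadoop binary resolves.
 source $HOME/.bashrc
 which hadoop     # should print $HADOOP_HOME/bin/hadoop
 hadoop version   # should print the 0.20.x build information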
Step 2: In conf/hadoop-env.sh, set JAVA_HOME there as well:

 # The java implementation to use.  Required.
 export JAVA_HOME=/usr/lib/jvm/java-6-sun
Step 3: Create the directory that hadoop.tmp.dir will point to and hand it over to your Hadoop user:

 $ sudo mkdir -p /app/hadoop/tmp
 $ sudo chown hduser:hadoop /app/hadoop/tmp
 # ...and if you want to tighten up security, chmod from 755 to 750...
 $ sudo chmod 750 /app/hadoop/tmp
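Optionally, confirm the permissions took effect (a sketch, assuming the hduser account used above):

 # Probe that hduser can actually write into hadoop.tmp.dir.
 sudo -u hduser sh -c 'touch /app/hadoop/tmp/probe && rm /app/hadoop/tmp/probe' \
   && echo "hduser can write to /app/hadoop/tmp"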
Step 4: Add the following properties to conf/core-site.xml (note the 0.20-era key fs.default.name, not fs.defaultFS):

 <property>
   <name>hadoop.tmp.dir</name>
   <value>/app/hadoop/tmp</value>
 </property>

 <property>
   <name>fs.default.name</name>
   <value>hdfs://localhost:54310</value>
 </property>
In conf/mapred-site.xml:

 <property>
   <name>mapred.job.tracker</name>
   <value>localhost:54311</value>
 </property>
And in conf/hdfs-site.xml:

 <property>
   <name>dfs.replication</name>
   <value>1</value>
 </property>
Step 5: Format the HDFS filesystem via the NameNode:

 $ /usr/local/hadoop/bin/hadoop namenode -format
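Then start the daemons and check jps again (a sketch; on a healthy pseudo-distributed node all five daemons should appear):

 $ /usr/local/hadoop/bin/start-all.sh
 $ jps
 # Expect NameNode, DataNode, SecondaryNameNode,
 # JobTracker, TaskTracker, and Jps in the listing.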