Question:

No Namenode, Datanode, or Secondary Namenode to stop

秦锐
2023-03-14

I installed Hadoop on my Ubuntu 12.04 machine, following the steps in the link below.

http://www.bogotobogo.com/Hadoop/BigData_hadoop_Install_on_ubuntu_single_node_cluster.php

Everything installed successfully, but when I run start-all.sh only some of the services start.

wanderer@wanderer-Lenovo-IdeaPad-S510p:~$ su - hduse
Password:

hduse@wanderer-Lenovo-IdeaPad-S510p:~$ cd /usr/local/hadoop/sbin

hduse@wanderer-Lenovo-IdeaPad-S510p:/usr/local/hadoop/sbin$ start-all.sh

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
hduse@localhost's password: 
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduse-namenode-wanderer-Lenovo-IdeaPad-S510p.out
hduse@localhost's password: 
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduse-datanode-wanderer-Lenovo-IdeaPad-S510p.out
Starting secondary namenodes [0.0.0.0]
hduse@0.0.0.0's password: 
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduse-secondarynamenode-wanderer-Lenovo-IdeaPad-S510p.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduse-resourcemanager-wanderer-Lenovo-IdeaPad-S510p.out
hduse@localhost's password: 
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduse-nodemanager-wanderer-Lenovo-IdeaPad-S510p.out

hduse@wanderer-Lenovo-IdeaPad-S510p:/usr/local/hadoop/sbin$ jps
7940 Jps
7545 ResourceManager
7885 NodeManager

Once I stop the services by running the stop-all.sh script:

hduse@wanderer-Lenovo-IdeaPad-S510p:/usr/local/hadoop/sbin$ stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [localhost]
hduse@localhost's password: 
localhost: no namenode to stop
hduse@localhost's password: 
localhost: no datanode to stop
Stopping secondary namenodes [0.0.0.0]
hduse@0.0.0.0's password: 
0.0.0.0: no secondarynamenode to stop
stopping yarn daemons
stopping resourcemanager
hduse@localhost's password: 
localhost: stopping nodemanager
no proxyserver to stop

My configuration files


  • Edit the .bashrc file

    vi ~/.bashrc
    
    #HADOOP VARIABLES START
    export JAVA_HOME=/usr/lib/jvm/java-8-oracle/
    export HADOOP_INSTALL=/usr/local/hadoop
    export PATH=$PATH:$HADOOP_INSTALL/bin
    export PATH=$PATH:$HADOOP_INSTALL/sbin
    export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
    export HADOOP_COMMON_HOME=$HADOOP_INSTALL
    export HADOOP_HDFS_HOME=$HADOOP_INSTALL
    export YARN_HOME=$HADOOP_INSTALL
    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
    export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
    export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
    #HADOOP VARIABLES END
    

    hdfs-site.xml

    vi /usr/local/hadoop/etc/hadoop/hdfs-site.xml
    
    <configuration>
     <property>
      <name>dfs.replication</name>
      <value>1</value>
      <description>Default block replication.
      The actual number of replications can be specified when the file is created.
      The default is used if replication is not specified in create time.
      </description>
     </property>
     <property>
       <name>dfs.namenode.name.dir</name>
       <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
     </property>
     <property>
       <name>dfs.datanode.data.dir</name>
       <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
     </property>
    </configuration>
    

    hadoop-env.sh

    vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh
    
    export JAVA_HOME=/usr/lib/jvm/java-8-oracle/
    export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}
    
    for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
      if [ "$HADOOP_CLASSPATH" ]; then
        export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
      else
        export HADOOP_CLASSPATH=$f
      fi
    done
    
    export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
    export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
    export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"
    
    export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"
    
    export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
    export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"
    
    # The following applies to multiple commands (fs, dfs, fsck, distcp etc)
    export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
    export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}
    
    export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}
    export HADOOP_PID_DIR=${HADOOP_PID_DIR}
    export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}
    
    # A string representing this instance of hadoop. $USER by default.
    export HADOOP_IDENT_STRING=$USER
    

    core-site.xml

    vi /usr/local/hadoop/etc/hadoop/core-site.xml
    <configuration>
     <property>
      <name>hadoop.tmp.dir</name>
      <value>/app/hadoop/tmp</value>
      <description>A base for other temporary directories.</description>
     </property>
    
     <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:54310</value>
      <description>The name of the default file system.  A URI whose
      scheme and authority determine the FileSystem implementation.  The
      uri's scheme determines the config property (fs.SCHEME.impl) naming
      the FileSystem implementation class.  The uri's authority is used to
      determine the host, port, etc. for a filesystem.</description>
     </property>
    </configuration>
    

    mapred-site.xml

    vi /usr/local/hadoop/etc/hadoop/mapred-site.xml
    <configuration>
     <property>
      <name>mapred.job.tracker</name>
      <value>localhost:54311</value>
      <description>The host and port that the MapReduce job tracker runs
      at.  If "local", then jobs are run in-process as a single map
      and reduce task.
      </description>
     </property>
    </configuration>
    

    $ javac -version

    javac 1.8.0_66
    

    $ java -version

    java version "1.8.0_66"  
    Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
    Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)
    

    I am new to Hadoop and cannot figure out what is wrong. Where can I find the log files for the JobTracker and NameNode so that I can track down the services?

  • 3 answers

    司空元凯
    2023-03-14

    You must set up passwordless SSH authentication. The hduse user should be able to ssh into localhost without being asked for a password.
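
    A minimal sketch of what that setup could look like, assuming OpenSSH is installed and the default key paths are used (run as the hduse user):

    # make sure the .ssh directory exists with the right permissions
    mkdir -p ~/.ssh && chmod 700 ~/.ssh

    # generate an RSA key pair with an empty passphrase (skip if a key already exists)
    ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa

    # authorize that key for logins to localhost
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys

    # verify: this should log in without prompting for a password
    ssh localhost exit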

    狄灵均
    2023-03-14

    If you look carefully at the start-all.sh output, you can easily see the log file paths. After each service starts, check the log it writes to:

    localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduse-namenode-wanderer-Lenovo-IdeaPad-S510p.out
    localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduse-datanode-wanderer-Lenovo-IdeaPad-S510p.out
    0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduse-secondarynamenode-wanderer-Lenovo-IdeaPad-S510p.out
    starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduse-resourcemanager-wanderer-Lenovo-IdeaPad-S510p.out
    localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduse-nodemanager-wanderer-Lenovo-IdeaPad-S510p.out
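
    For example, the namenode's detailed log can be inspected directly, assuming the log directory shown above (Hadoop daemons write both a .out file and a more detailed .log file there):

    # show the last lines of the namenode log to see why the daemon exited
    tail -n 100 /usr/local/hadoop/logs/hadoop-hduse-namenode-*.log

    # or follow it live while re-running start-dfs.sh
    tail -f /usr/local/hadoop/logs/hadoop-hduse-namenode-*.log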
    
    白和泽
    2023-03-14

    If it is not an SSH problem, try the following:


    Check whether the hduser has permission to write to the hadoop_store/hdfs/namenode and datanode directories (use ls -ld on each directory).

    You can open up the permissions with sudo chmod 777 /hadoop_store/hdfs/namenode/.
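
    A rough sketch of checking and fixing the ownership instead, assuming the paths from hdfs-site.xml above and that the daemons run as hduse (chown is usually preferable to a world-writable 777; the user and group names here are assumptions):

    # check the current owner and permissions of the HDFS directories
    ls -ld /usr/local/hadoop_store/hdfs/namenode /usr/local/hadoop_store/hdfs/datanode

    # hand ownership to the daemon user
    sudo chown -R hduse:hadoop /usr/local/hadoop_store/hdfs

    # if the namenode directory is empty, reformat it before starting again
    # (warning: this erases any existing HDFS metadata)
    hdfs namenode -format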
