Question:

When I run the WordCount example, the output folder contains no output

司空思聪
2023-03-14
17/11/29 19:32:31 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
17/11/29 19:32:31 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
17/11/29 19:32:31 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
17/11/29 19:32:31 INFO mapred.LocalJobRunner: Waiting for map tasks
17/11/29 19:32:31 INFO mapred.LocalJobRunner: Starting task: attempt_local2072208822_0001_m_000000_0
17/11/29 19:32:31 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
17/11/29 19:32:31 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
17/11/29 19:32:31 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
17/11/29 19:32:32 INFO mapred.Task:  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@1106b4d9
17/11/29 19:32:32 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/input/file02.txt:0+27
17/11/29 19:32:32 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
17/11/29 19:32:32 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
17/11/29 19:32:32 INFO mapred.MapTask: soft limit at 83886080
17/11/29 19:32:32 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
17/11/29 19:32:32 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
17/11/29 19:32:32 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
17/11/29 19:32:32 INFO mapred.LocalJobRunner:
17/11/29 19:32:32 INFO mapred.MapTask: Starting flush of map output
17/11/29 19:32:32 INFO mapred.MapTask: Spilling map output
17/11/29 19:32:32 INFO mapred.MapTask: bufstart = 0; bufend = 44; bufvoid = 104857600
17/11/29 19:32:32 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214384(104857536); length = 13/6553600
17/11/29 19:32:32 INFO mapred.MapTask: Finished spill 0
17/11/29 19:32:32 INFO mapred.Task: Task:attempt_local2072208822_0001_m_000000_0 is done. And is in the process of committing
17/11/29 19:32:32 INFO mapred.LocalJobRunner: map
17/11/29 19:32:32 INFO mapred.Task: Task 'attempt_local2072208822_0001_m_000000_0' done.
17/11/29 19:32:32 INFO mapred.LocalJobRunner: Finishing task: attempt_local2072208822_0001_m_000000_0
17/11/29 19:32:32 INFO mapred.LocalJobRunner: Starting task: attempt_local2072208822_0001_m_000001_0
17/11/29 19:32:32 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
17/11/29 19:32:32 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
17/11/29 19:32:32 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
17/11/29 19:32:32 INFO mapred.Task:  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@16def9af
17/11/29 19:32:32 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/input/file01.txt:0+21
17/11/29 19:32:32 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
17/11/29 19:32:32 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
17/11/29 19:32:32 INFO mapred.MapTask: soft limit at 83886080
17/11/29 19:32:32 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
17/11/29 19:32:32 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
17/11/29 19:32:32 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
17/11/29 19:32:32 INFO mapred.LocalJobRunner:
17/11/29 19:32:32 INFO mapred.MapTask: Starting flush of map output
17/11/29 19:32:32 INFO mapred.MapTask: Spilling map output
17/11/29 19:32:32 INFO mapred.MapTask: bufstart = 0; bufend = 38; bufvoid = 104857600
17/11/29 19:32:32 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214384(104857536); length = 13/6553600
17/11/29 19:32:32 INFO mapred.MapTask: Finished spill 0
17/11/29 19:32:32 INFO mapred.Task: Task:attempt_local2072208822_0001_m_000001_0 is done. And is in the process of committing
17/11/29 19:32:32 INFO mapred.LocalJobRunner: map
17/11/29 19:32:32 INFO mapred.Task: Task 'attempt_local2072208822_0001_m_000001_0' done.
17/11/29 19:32:32 INFO mapred.LocalJobRunner: Finishing task: attempt_local2072208822_0001_m_000001_0
17/11/29 19:32:32 INFO mapred.LocalJobRunner: map task executor complete.
17/11/29 19:32:32 INFO mapred.LocalJobRunner: Waiting for reduce tasks
17/11/29 19:32:32 INFO mapred.LocalJobRunner: Starting task: attempt_local2072208822_0001_r_000000_0
17/11/29 19:32:32 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
17/11/29 19:32:32 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
17/11/29 19:32:32 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
17/11/29 19:32:32 INFO mapred.Task:  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@42a3c42
17/11/29 19:32:32 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@afbba44
17/11/29 19:32:32 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=334338464, maxSingleShuffleLimit=83584616, mergeThreshold=220663392, ioSortFactor=10, memToMemMergeOutputsThreshold=10
17/11/29 19:32:32 INFO reduce.EventFetcher: attempt_local2072208822_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
17/11/29 19:32:32 INFO mapred.LocalJobRunner: reduce task executor complete.
17/11/29 19:32:32 INFO mapreduce.Job: Job job_local2072208822_0001 running in uber mode : false
17/11/29 19:32:32 INFO mapreduce.Job:  map 100% reduce 0%
17/11/29 19:32:32 WARN mapred.LocalJobRunner: job_local2072208822_0001
java.lang.Exception: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in localfetcher#1
        at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:489)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:556)
Caused by: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in localfetcher#1
        at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
        at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:346)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.FileNotFoundException: D:/tmp/hadoop-Semab%20Ali/mapred/local/localRunner/Semab%20Ali/jobcache/job_local2072208822_0001/attempt_local2072208822_0001_m_000001_0/output/file.out.index
        at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:212)
        at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:786)
        at org.apache.hadoop.io.SecureIOUtils.openFSDataInputStream(SecureIOUtils.java:155)
        at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:70)
        at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:62)
        at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:57)
        at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.copyMapOutput(LocalFetcher.java:125)
        at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.doCopy(LocalFetcher.java:103)
        at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.run(LocalFetcher.java:86)
17/11/29 19:32:33 INFO mapreduce.Job: Job job_local2072208822_0001 failed with state FAILED due to: NA
17/11/29 19:32:33 INFO mapreduce.Job: Counters: 23
        File System Counters
                FILE: Number of bytes read=6947
                FILE: Number of bytes written=658098
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=75
                HDFS: Number of bytes written=0
                HDFS: Number of read operations=12
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Map-Reduce Framework
                Map input records=2
                Map output records=8
                Map output bytes=82
                Map output materialized bytes=85
                Input split bytes=216
                Combine input records=8
                Combine output records=6
                Spilled Records=6
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=599261184
        File Input Format Counters
                Bytes Read=48

I am a Windows user. Below is my yarn-site.xml configuration. One more thing: before running this project, I manually start only the DataNode and NameNode rather than using the start-all.cmd command. Is there anything else I need to start, such as the ResourceManager? That is just my guess.

yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>

1 answer

马丰
2023-03-14

This error occurs because your username contains a space, which is percent-encoded as %20 in the directory path.
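The %20 in the failing path is simply URL-style percent-encoding of the space character. A minimal sketch (the username value is illustrative) shows how a name with a space becomes the encoded form seen in the stack trace:

```python
from urllib.parse import quote, unquote

# A Windows account name containing a space (illustrative value)
user = "Semab Ali"

# Percent-encoding replaces the space with %20,
# matching the form that appears in the failing path
encoded = quote(user)
print(encoded)            # Semab%20Ali

# Decoding recovers the original account name
print(unquote(encoded))   # Semab Ali
```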

To resolve the problem, follow these steps:

  • Open a command prompt (Win+R -> type "cmd" -> click "Run")
  • Enter netplwiz
  • Select the account and click the "Properties" button
  • Enter a new name for the account (without spaces)

Then remove the old output directory before re-running the job:

                        hdfs dfs -rm -r /path/to/directory
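If renaming the Windows account is not an option, an alternative workaround (an assumption of mine, not part of the steps above) is to point Hadoop's local scratch space at a path that contains no spaces. The failing path D:/tmp/hadoop-Semab%20Ali/... is derived from hadoop.tmp.dir, whose default is /tmp/hadoop-${user.name}, so overriding it in core-site.xml keeps the username out of the path. The directory value below is a hypothetical example:

```xml
<!-- Fragment for core-site.xml, inside <configuration>:
     overrides the default /tmp/hadoop-${user.name} scratch directory -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>D:/hadoop/tmp</value>
</property>
```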