Question:

Amazon EMR job fails: Shut down as step failed

淳于哲
2023-03-14

I am writing a simple streaming MapReduce job in Python to run on Amazon EMR. It is basically an aggregator of user records, grouping together the entries for each user ID.

Mapper:

#!/usr/bin/env python
import sys

def main(argv):
    line = sys.stdin.readline()
    try:
        while line:
            line = line.rstrip()
            elements = line.split()
            # key = user id, value = the next two fields as a tuple
            print '%s\t%s' % (elements[0], (elements[1], elements[2]))
            line = sys.stdin.readline()
    except EOFError:  # readline() returns '' at EOF, so this is only a guard
        return None

if __name__ == '__main__':
    main(sys.argv)
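
For example, given a whitespace-separated input record (the data here is made up for illustration), the mapper keys the line by its first field and emits the next two fields as a tuple:

$ echo 'user1 2013-01-01 pageview' | python Mapper.py
user1	('2013-01-01', 'pageview')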

Reducer:

#!/usr/bin/env python
import sys

def main(argv):
    users = dict()
    for line in sys.stdin:
        elements = line.split('\t', 1)
        if elements[0] in users:
            users[elements[0]].append(elements[1])
        else:
            users[elements[0]] = [elements[1]]  # start a list so later appends work

    for user in users:
        print '%s\t%s' % (user, users[user])

if __name__ == '__main__':
    main(sys.argv)
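
Streaming scripts like these can be sanity-checked locally before submitting the EMR step, by reproducing Hadoop's map, shuffle-sort, reduce pipeline with a shell pipe (sample.txt here is a stand-in for one of the input files; sort plays the role of the shuffle, which groups identical keys together):

$ cat sample.txt | python Mapper.py | sort | python Reducer.py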

This job is supposed to run over a directory containing five text files. The arguments to the EMR job (roughly equivalent to the Hadoop Streaming invocation sketched after this list) are:

Input: [bucket name]/[input folder name]

Output: [bucket name]/Output

Mapper: [bucket name]/Mapper.py

Reducer: [bucket name]/Reducer.py
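
For reference, those four arguments correspond to a Hadoop Streaming invocation along these lines (the jar path and the s3n:// URI scheme are assumptions based on 2013-era EMR AMIs; the bracketed placeholders are kept as in the question):

hadoop jar /home/hadoop/contrib/streaming/hadoop-streaming.jar \
    -input s3n://[bucket name]/[input folder name] \
    -output s3n://[bucket name]/Output \
    -mapper s3n://[bucket name]/Mapper.py \
    -reducer s3n://[bucket name]/Reducer.py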

The job keeps failing with: Shut down as step failed. Here is a copy of the log:

2013-01-01 12:06:16,270 INFO org.apache.hadoop.mapred.JobClient (main): Default number of map tasks: null
2013-01-01 12:06:16,271 INFO org.apache.hadoop.mapred.JobClient (main): Setting default number of map tasks based on cluster size to : 8
2013-01-01 12:06:16,271 INFO org.apache.hadoop.mapred.JobClient (main): Default number of reduce tasks: 3
2013-01-01 12:06:18,392 INFO org.apache.hadoop.security.ShellBasedUnixGroupsMapping (main): add hadoop to shell userGroupsCache
2013-01-01 12:06:18,393 INFO org.apache.hadoop.mapred.JobClient (main): Setting group to hadoop
2013-01-01 12:06:18,647 INFO com.hadoop.compression.lzo.GPLNativeCodeLoader (main): Loaded native gpl library
2013-01-01 12:06:18,670 WARN com.hadoop.compression.lzo.LzoCodec (main): Could not find build properties file with revision hash
2013-01-01 12:06:18,670 INFO com.hadoop.compression.lzo.LzoCodec (main): Successfully loaded & initialized native-lzo library [hadoop-lzo rev UNKNOWN]
2013-01-01 12:06:18,695 WARN org.apache.hadoop.io.compress.snappy.LoadSnappy (main): Snappy native library is available
2013-01-01 12:06:18,695 INFO org.apache.hadoop.io.compress.snappy.LoadSnappy (main): Snappy native library loaded
2013-01-01 12:06:19,050 INFO org.apache.hadoop.mapred.FileInputFormat (main): Total input paths to process : 5
2013-01-01 12:06:20,688 INFO org.apache.hadoop.streaming.StreamJob (main): getLocalDirs(): [/mnt/var/lib/hadoop/mapred]
2013-01-01 12:06:20,688 INFO org.apache.hadoop.streaming.StreamJob (main): Running job: job_201301011204_0001
2013-01-01 12:06:20,688 INFO org.apache.hadoop.streaming.StreamJob (main): To kill this job, run:
2013-01-01 12:06:20,688 INFO org.apache.hadoop.streaming.StreamJob (main): /home/hadoop/bin/hadoop job  -Dmapred.job.tracker=10.255.131.225:9001 -kill job_201301011204_0001
2013-01-01 12:06:20,689 INFO org.apache.hadoop.streaming.StreamJob (main): Tracking URL: http://domU-12-31-39-01-7C-13.compute-1.internal:9100/jobdetails.jsp?jobid=job_201301011204_0001
2013-01-01 12:06:21,696 INFO org.apache.hadoop.streaming.StreamJob (main):  map 0%  reduce 0%
2013-01-01 12:08:02,238 INFO org.apache.hadoop.streaming.StreamJob (main):  map 100%  reduce 100%
2013-01-01 12:08:02,239 INFO org.apache.hadoop.streaming.StreamJob (main): To kill this job, run:
2013-01-01 12:08:02,240 INFO org.apache.hadoop.streaming.StreamJob (main): /home/hadoop/bin/hadoop job  -Dmapred.job.tracker=10.255.131.225:9001 -kill job_201301011204_0001
2013-01-01 12:08:02,240 INFO org.apache.hadoop.streaming.StreamJob (main): Tracking URL: http://domU-12-31-39-01-7C-13.compute-1.internal:9100/jobdetails.jsp?jobid=job_201301011204_0001
2013-01-01 12:08:02,240 ERROR org.apache.hadoop.streaming.StreamJob (main): Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201301011204_0001_m_000002
2013-01-01 12:08:02,240 INFO org.apache.hadoop.streaming.StreamJob (main): killJob...

What am I doing wrong?

1 Answer

微生恩
2023-03-14

Solved. Make sure there are no comments at the start of the Python scripts, and that the first line of each script is #!/usr/bin/env python.
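
A shebang only takes effect when it is the very first bytes of the file; if anything precedes it, Hadoop Streaming cannot execute the script directly, every map attempt dies, and you get the "# of failed Map Tasks exceeded allowed limit" error seen in the log above. A minimal sketch of how both Mapper.py and Reducer.py should start:

#!/usr/bin/env python
# Nothing may appear above the shebang -- no comments, blank lines,
# or byte-order mark -- or the task runner cannot launch the script.
import sys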
