Question:

Why is HDFS throwing LeaseExpiredException in the Hadoop cluster (AWS EMR)?

班承恩
2023-03-14

tail -f /var/log/hadoop-hdfs/hadoop-hdfs-namenode-ip-172-30-2-148.log

2016-09-21 11:54:14,533 INFO BlockStateChange (IPC Server handler 10 on 8020): BLOCK* InvalidateBlocks: add blk_107374750_6677 to 172.30.2.189:50010
2016-09-21 11:54:14,534 INFO org.apache.hadoop.ipc.Server (IPC Server handler 31 on 8020): IPC Server handler 31 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol from 172.30.2.189:37674 Call#34 Retry... LeaseExpiredException: /tmp/hive/hadoop/_tez_session_dir/1e4f71f0-9f29-468D-980e-9f19690bf849/.tez/application_1474442135017_0114/recovery/1/summary (inode 26350): File does not exist. Holder DFSClient_NONMAPREDUCE_-143782605_1 does not have any open files.
2016-09-21 11:54:15,557 INFO org.apache.hadoop.hdfs.StateChange (IPC Server handler 0 on 8020): BLOCK* allocate blk_1073747503_6679{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-86592BA7-C51A-431D-8019-9E362D721B28:NORMAL:172.30.2.189:50010|RBW]]} for ...0114/ip-172-30-2-122.us-west-2.compute.internal_8041.tmp

Also, some Hive queries are failing, I guess because of the problem above.

tail -f /var/log/hive/hive-server2.log

2016-09-21T11:59:35,126 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: ql.Driver (Driver.java:execute(1477)) - Executing command(queryId=hive_20160921115934_c56d9c91-640b-4f5d-b490-34549a4258c7): 
INSERT INTO TABLE validation_logs 
SELECT 
"18364", 
"TABLE_VALIDATION",
error.code,
error.validator,
get_json_object(key, '$.table_name'),
NULL,
NULL,
error.failure_msg,
FROM_UNIXTIME(UNIX_TIMESTAMP('20160921','yyyyMMdd')), 
from_unixtime(unix_timestamp())
FROM
(SELECT 
MAP(concat("{\"table_name\" : \"", table_name , "\"}"), error) AS err_map
FROM table_level_validation_result
) AS res
LATERAL VIEW EXPLODE(res.err_map) tmp AS key, error WHERE error IS NOT NULL AND (error.code="error" OR error.code="warn")

2016-09-21T11:59:35,126 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: ql.Driver (SessionState.java:printInfo(1054)) - Query ID = hive_20160921115934_c56d9c91-640b-4f5d-b490-34549a4258c7
2016-09-21T11:59:35,126 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: ql.Driver (SessionState.java:printInfo(1054)) - Total jobs = 1
2016-09-21T11:59:35,127 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: ql.Driver (SessionState.java:printInfo(1054)) - Launching Job 1 out of 1
2016-09-21T11:59:35,127 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: ql.Driver (Driver.java:launchTask(1856)) - Starting task [Stage-1:MAPRED] in serial mode
2016-09-21T11:59:35,127 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: tez.TezSessionPoolManager (TezSessionPoolManager.java:canWorkWithSameSession(404)) - The current user: hadoop, session user: hadoop
2016-09-21T11:59:35,127 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: tez.TezSessionPoolManager (TezSessionPoolManager.java:canWorkWithSameSession(421)) - Current queue name is null incoming queue name is null
2016-09-21T11:59:35,173 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: ql.Context (Context.java:getMRScratchDir(340)) - New scratch dir is hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/65cf7f02-a7d3-40ba-a93f-ff5214afbdfc/hive_2016-09-21_11-59-34_474_5003281239065359634-127
2016-09-21T11:59:35,174 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: exec.Task (TezTask.java:updateSession(279)) - Session is already open
2016-09-21T11:59:35,175 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: tez.DagUtils (DagUtils.java:createLocalResource(758)) - Resource modification time: 1474459142291 for hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/_tez_session_dir/85d36c12-c629-44a8-b23c-c628898a79b7/commons-vfs2-2.0.jar
2016-09-21T11:59:35,176 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: tez.DagUtils (DagUtils.java:createLocalResource(758)) - Resource modification time: 1474459142320 for hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/_tez_session_dir/85d36c12-c629-44a8-b23c-c628898a79b7/emr-ddb-hive.jar
2016-09-21T11:59:35,177 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: tez.DagUtils (DagUtils.java:createLocalResource(758)) - Resource modification time: 1474459142353 for hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/_tez_session_dir/85d36c12-c629-44a8-b23c-c628898a79b7/emr-hive-goodies.jar
2016-09-21T11:59:35,178 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: tez.DagUtils (DagUtils.java:createLocalResource(758)) - Resource modification time: 1474459142389 for hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/_tez_session_dir/85d36c12-c629-44a8-b23c-c628898a79b7/emr-kinesis-hive.jar
2016-09-21T11:59:35,178 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: tez.DagUtils (DagUtils.java:createLocalResource(758)) - Resource modification time: 1474459142423 for hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/_tez_session_dir/85d36c12-c629-44a8-b23c-c628898a79b7/hive-contrib-2.1.0-amzn-0.jar
2016-09-21T11:59:35,179 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: tez.DagUtils (DagUtils.java:createLocalResource(758)) - Resource modification time: 1474459142496 for hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/_tez_session_dir/85d36c12-c629-44a8-b23c-c628898a79b7/hive-plugins-0.0.1-emr-upgrade-20160919.070538-1.jar
2016-09-21T11:59:35,179 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: exec.Task (TezTask.java:build(321)) - Dag name: INSERT INTO TABLE valid...error.code="warn")(Stage-1)
2016-09-21T11:59:35,180 INFO  [HiveServer2-Background-Pool: Thread-3883([])]: ql.Context (Context.java:getMRScratchDir(340)) - New scratch dir is hdfs://ip-172-30-2-148.us-west-2.compute.internal:8020/tmp/hive/hadoop/65cf7f02-a7d3-40ba-a93f-ff5214afbdfc/hive_2016-09-21_11-59-34_474_5003281239065359634-127
2016-09-21T11:59:35,223 INFO  [HiveServer2-Background-Pool: Thread-3881([])]: impl.YarnClientImpl (YarnClientImpl.java:submitApplication(273)) - Submitted application application_1474442135017_0147
2016-09-21T11:59:35,224 INFO  [HiveServer2-Background-Pool: Thread-3881([])]: client.TezClient (TezClient.java:start(477)) - The url to track the Tez Session: http://ip-172-30-2-148.us-west-2.compute.internal:20888/proxy/application_1474442135017_0147/
2016-09-21T11:59:35,391 INFO  [HiveServer2-Background-Pool: Thread-3429([])]: SessionState (SessionState.java:printInfo(1054)) - Map 1: 0(+0,-4)/1  
2016-09-21T11:59:35,446 ERROR [HiveServer2-Background-Pool: Thread-3429([])]: SessionState (SessionState.java:printError(1063)) - Status: Failed
2016-09-21T11:59:35,447 ERROR [HiveServer2-Background-Pool: Thread-3429([])]: SessionState (SessionState.java:printError(1063)) - Vertex failed, vertexName=Map 1, vertexId=vertex_1474442135017_0134_2_00, diagnostics=[Task failed, taskId=task_1474442135017_0134_2_00_000000, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : attempt_1474442135017_0134_2_00_000000_0:java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:198)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:160)
    at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
    at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
    at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
    at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
    at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
    at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206)
    at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:152)
    at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:62)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:360)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:172)
    ... 14 more
Caused by: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
    at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
    at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:299)
    at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203)
    ... 19 more
Caused by: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
    at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.getFileStatus(S3NativeFileSystem.java:818)
    at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.open(S3NativeFileSystem.java:1193)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:771)
    at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.open(EmrFileSystem.java:168)
    at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
    at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:297)
    ... 20 more
], TaskAttempt 1 failed, info=[Error: Error while running task ( failure ) : attempt_1474442135017_0134_2_00_000000_1:java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:198)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:160)
    at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
    at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
    at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
    at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
    at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
    at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206)
    at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:152)
    at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:62)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:360)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:172)
    ... 14 more
Caused by: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
    at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
    at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:299)
    at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203)
    ... 19 more
Caused by: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
    at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.getFileStatus(S3NativeFileSystem.java:818)
    at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.open(S3NativeFileSystem.java:1193)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:771)
    at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.open(EmrFileSystem.java:168)
    at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
    at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:297)
    ... 20 more
], TaskAttempt 2 failed, info=[Error: Error while running task ( failure ) : attempt_1474442135017_0134_2_00_000000_2:java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:198)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:160)
    at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
    at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
    at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
    at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
    at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
    at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206)
    at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:152)
    at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:62)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:360)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:172)
    ... 14 more
Caused by: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
    at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
    at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:299)
    at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203)
    ... 19 more
Caused by: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
    at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.getFileStatus(S3NativeFileSystem.java:818)
    at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.open(S3NativeFileSystem.java:1193)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:771)
    at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.open(EmrFileSystem.java:168)
    at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
    at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:297)
    ... 20 more
], TaskAttempt 3 failed, info=[Error: Error while running task ( failure ) : attempt_1474442135017_0134_2_00_000000_3:java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:198)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:160)
    at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
    at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
    at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
    at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
    at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.io.IOException: java.io.FileNotFoundException: No such file or directory 's3://data-platform-insights/data-platform/internal_test_automation/2016/09/21/18364/logs/validations/table_col_aggregate_validation_result/.hive-staging_hive_2016-09-21_11-57-58_703_5106478639780932144-1/_tmp.-ext-10000/000000_0.gz'
    at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206)
    at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:152)
    at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:62)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:360)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:172)
    ... 14 more

Hive logs with debug mode enabled -

Also, the following Hive configuration properties are set before executing the query -

set hive.mapjoin.smalltable.filesize = 2000000000
set mapreduce.map.speculative = false
set mapreduce.output.fileoutputformat.compress = true
set hive.exec.compress.output = true
set mapreduce.task.timeout = 6000000
set hive.optimize.bucketmapjoin.sortedmerge = true
set io.compression.codecs = org.apache.hadoop.io.compress.GzipCodec
set hive.auto.convert.sortmerge.join.noconditionaltask = false
set hive.optimize.bucketmapjoin = true
set hive.exec.compress.intermediate = true
set hive.enforce.bucketmapjoin = true
set mapred.output.compress = true
set mapreduce.map.output.compress = true
set hive.auto.convert.sortmerge.join = false
set hive.auto.convert.join = false
set mapreduce.reduce.speculative = false
set mapred.output.compression.codec = org.apache.hadoop.io.compress.GzipCodec
set hive.cache.expr.evaluation=false
set mapred.output.compress=true
set hive.exec.compress.output=true
set mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec
set io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec
set hive.exec.compress.intermediate=true
set mapreduce.map.output.compress=true
set hive.auto.convert.join=false
set mapreduce.map.speculative=false
set mapreduce.reduce.speculative=false

Cluster details -

  1. One data node with 32 GB of disk space.
  2. Hive 2.1.0, execution engine Tez 0.8.3
  3. Hadoop 2.7.2

Questions -

  1. Why is the LeaseExpiredException being thrown?
  2. Are the Hive query failures related to the LeaseExpiredException?
  3. Is it because of wrong Hive configuration properties?

Update 1 -

Based on this answer - LeaseExpiredException: No lease error on HDFS (Failed to close file) - I set:

SET hive.exec.max.dynamic.partitions=100000; 
SET hive.exec.max.dynamic.partitions.pernode=100000;

But it still shows the same exception.

1 answer in total

罗心思
2023-03-14

I solved the problem. Let me explain in detail.

Exceptions that were being thrown -

  1. LeaseExpiredException - from the HDFS side.
  2. FileNotFoundException - from the Hive side (when the Tez execution engine was executing the DAG)

Then, I increased the count for the following settings from 1000 to 100000 -

SET hive.exec.max.dynamic.partitions=100000;
SET hive.exec.max.dynamic.partitions.pernode=100000;

That did not work either.

Also, in our case, 2 concurrent jobs were running an "INSERT OVERWRITE" on the same table, which caused the metadata file of one of the mappers to be deleted, and that is what caused this problem.
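As a minimal illustration of that failure mode (validation_logs is the target table from the query above; validation_logs_staging is a hypothetical source table assumed to have a matching schema): each statement creates its own .hive-staging_* directory inside the same table location on S3, and when one job finishes and overwrites/cleans up the table directory, the other job's staging files disappear, which then shows up as the FileNotFoundException above.

-- Session A
INSERT OVERWRITE TABLE validation_logs SELECT * FROM validation_logs_staging;

-- Session B, started while Session A is still running, writing to the same table
INSERT OVERWRITE TABLE validation_logs SELECT * FROM validation_logs_staging;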

Resolution -

  1. Moved the metadata/staging file location outside the table's location (the table lives on S3); see the configuration sketch after this list.
  2. Disabled HDFS encryption (as mentioned in the description of the stagingdir property).
  3. Changed the application-layer code to avoid the concurrency issue.
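For point 1, a minimal configuration sketch, assuming the goal is to stop Hive from creating its staging directory inside the S3 table location. hive.exec.stagingdir is an existing Hive property (its default, .hive-staging, is created inside the output table/partition location, which is why its description also mentions HDFS encryption), but the path below is only an example and the behaviour should be verified on your Hive version:

-- Show the current staging directory setting
SET hive.exec.stagingdir;

-- Hypothetical override: an absolute path so staging files are no longer
-- nested inside the table directory on S3
SET hive.exec.stagingdir=/tmp/hive/.hive-staging;

Keeping staging output outside the shared table prefix means one job's overwrite/cleanup no longer removes temporary files that a concurrent job still needs.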