
Hive error: Application xxx submitted by user hadoop to unknown queue: default

萧和平
2023-12-01

Today I was running Hive through beeline. Simple statements succeeded, but more complex ones failed. I'm recording the cause here so I don't fall into the same trap again.

The SELECT statement returns data

Running
select c_13930,c_45365,c_std,c_cv,c_22599,c_opp,c_cs,c_is_outnet
from devtest.yangtest001 a;
returns data normally.

The CREATE statement fails

Running
create table devtest.t1 as select c_13930,c_45365,c_std,c_cv,c_22599,c_opp,c_cs,c_is_outnet from devtest.yangtest001 a;
throws an error, and the message points at a problem with the queue name.

The error message is as follows:

ERROR : Job Submission failed with exception 'java.io.IOException(org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1624689709041_1440 to YARN : Application application_1624689709041_1440 submitted by user hadoop to unknown queue: default)'
java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1624689709041_1440 to YARN : Application application_1624689709041_1440 submitted by user hadoop to unknown queue: default
	at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:242)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1338)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1338)
	at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:575)
	at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:570)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
	at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:570)
	at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:561)
	at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:411)
	at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:151)
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183)
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839)
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1232)
	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:255)
	at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:348)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:362)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1624689709041_1440 to YARN : Application application_1624689709041_1440 submitted by user hadoop to unknown queue: default
	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:276)
	at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:296)
	at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:301)
	... 35 more

Cause

My job was being submitted to the default queue default, which is not a queue this YARN cluster knows about; it needs to be pointed at the queue assigned to this Hive instance.

Solution

After connecting to Hive, run the following command:

set mapreduce.job.queuename=test001;
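A small sketch of the session commands around this fix (the queue name test001 comes from this post; the tez.queue.name line applies only if your Hive runs on the Tez engine rather than MapReduce):

```sql
-- Route subsequent MapReduce jobs to the queue this cluster expects
set mapreduce.job.queuename=test001;

-- Echo the current value back to verify it took effect
set mapreduce.job.queuename;

-- If the execution engine is Tez, the equivalent property is:
-- set tez.queue.name=test001;
```

Note that a set command only lasts for the current session; to make the queue permanent, put the property in hive-site.xml or pass it at connect time.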

Other notes

When using the Spark engine (connecting on port 10001), the queue name is already configured, so there is no need to set it.
When using the Hive thriftserver (connecting on port 10000), the default queue is used by default. Since each queue has different permissions, specify the queue name explicitly when needed.
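If you are unsure which queue names actually exist on the cluster, YARN can list them. A sketch, assuming shell access to a gateway node (the ResourceManager address rm-host:8088 is a placeholder, not from this post):

```shell
# List the queues the scheduler knows about, with state and capacity
mapred queue -list

# Or query the ResourceManager REST API for the full scheduler layout
curl -s http://rm-host:8088/ws/v1/cluster/scheduler
```

Any queue printed here in state RUNNING is a valid value for mapreduce.job.queuename.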

As for why the SELECT statement worked while the CREATE failed: a simple column projection can be answered by Hive's fetch task, which reads HDFS directly without submitting any YARN application, so the queue is never checked. create table ... as select, however, must launch a MapReduce job to write the new table's data, and it is at job submission that YARN rejects the unknown queue.
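One way to confirm this distinction from a Hive session is to inspect the fetch-task setting and the query plan; a sketch of the commands:

```sql
-- When this is 'more' (the usual default in recent Hive versions),
-- simple projections and filters are served by a fetch task alone,
-- with no YARN job submitted
set hive.fetch.task.conversion;

-- EXPLAIN shows whether a statement launches a job: the plan for the
-- failing CTAS contains a Map Reduce (or Tez) stage, while the plan
-- for the plain SELECT above is a single Fetch stage
explain create table devtest.t1 as
select c_13930,c_45365 from devtest.yangtest001;
```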
