
ClassNotFoundException: org.apache.htrace.core.HTraceConfiguration and TableInputFormatBase

何晗昱
2023-12-01

Problem Background


An error occurred while integrating Apache Spark 2 with HBase 2.

Problem Description


Two errors occurred.

First error:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/mapreduce/TableInputFormatBase

Second error:
2021-04-02 17:49:01,986 ERROR thriftserver.SparkSQLDriver: Failed in [select * from table]
org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.NoClassDefFoundError: org/apache/htrace/core/HTraceConfiguration


Problem Analysis


The first error occurs because the HBase client jars are missing from Spark's classpath, so they need to be added.
The second error is caused by one specific missing jar: htrace-core4, which provides org.apache.htrace.core.HTraceConfiguration.
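
When a NoClassDefFoundError names a class you do not recognize, it helps to find out which jar actually ships it before copying anything. Below is a minimal sketch (assuming unzip is installed and $HBASE_HOME points at the HBase installation) that scans the HBase lib directory for the class from the second stack trace:

# Print every jar under $HBASE_HOME/lib that contains the missing class file.
for jar in $(find $HBASE_HOME/lib -name '*.jar'); do
  if unzip -l "$jar" 2>/dev/null | grep -q 'org/apache/htrace/core/HTraceConfiguration.class'; then
    echo "$jar"
  fi
done

On an HBase 2 installation this should point at the htrace-core4 jar under lib/client-facing-thirdparty, which is the jar copied in the second fix below.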


Solution


Error message

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/mapreduce/TableInputFormatBase

On every worker node, run the following commands:

scp $HBASE_HOME/lib/hbase-*  $SPARK_HOME/jars/
scp $HBASE_HOME/lib/client-facing-thirdparty/htrace-core4-4.2.0-incubating.jar $SPARK_HOME/jars/
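
Copying into $SPARK_HOME/jars makes the HBase classes visible to every Spark application on that node. If you would rather not modify the Spark installation, a per-application alternative (a sketch, assuming the job is launched with spark-submit; the class name and application jar below are placeholders) is to pass the same jars with --jars:

# Hypothetical submit command; replace com.example.MyHBaseJob and my-hbase-job.jar with your own.
spark-submit \
  --class com.example.MyHBaseJob \
  --jars "$(ls $HBASE_HOME/lib/hbase-*.jar $HBASE_HOME/lib/client-facing-thirdparty/htrace-core4-4.2.0-incubating.jar | paste -sd, -)" \
  my-hbase-job.jar

--jars distributes the listed jars to the driver and executors for that job only, so nothing needs to be copied onto the worker nodes.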

Error message

2021-04-02 17:49:01,986 ERROR thriftserver.SparkSQLDriver: Failed in [select * from table]
org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.NoClassDefFoundError: org/apache/htrace/core/HTraceConfiguration
at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.translateException(RpcRetryingCallerImpl.java:221)
at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:194)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:267)
at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:435)
at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:310)
at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:595)
at org.apache.hadoop.hbase.MetaTableAccessor.scanMeta(MetaTableAccessor.java:800)
at org.apache.hadoop.hbase.MetaTableAccessor.scanMeta(MetaTableAccessor.java:768)
at org.apache.hadoop.hbase.MetaTableAccessor.scanMeta(MetaTableAccessor.java:721)
at org.apache.hadoop.hbase.MetaTableAccessor.scanMetaForTableRegions(MetaTableAccessor.java:716)
at org.apache.hadoop.hbase.client.HRegionLocator.listRegionLocations(HRegionLocator.java:114)
at org.apache.hadoop.hbase.client.HRegionLocator.getAllRegionLocations(HRegionLocator.java:78)
at org.apache.hadoop.hbase.mapreduce.RegionSizeCalculator.getRegionServersOfTable(RegionSizeCalculator.java:103)
at org.apache.hadoop.hbase.mapreduce.RegionSizeCalculator.init(RegionSizeCalculator.java:79)
at org.apache.hadoop.hbase.mapreduce.RegionSizeCalculator.<init>(RegionSizeCalculator.java:61)
at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.createRegionSizeCalculator(TableInputFormatBase.java)
at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat$2.run(HiveHBaseTableInputFormat.java:269)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getSplits(HiveHBaseTableInputFormat.java:269)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:204)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:273)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:269)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:273)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:269)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:273)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:269)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:273)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:269)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:273)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:269)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:269)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:299)
at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:326)
at org.apache.spark.sql.execution.QueryExecution.hiveResultString(QueryExecution.scala:128)
at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver$$anonfun$run$1.apply(SparkSQLDriver.scala:64)
at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver$$anonfun$run$1.apply(SparkSQLDriver.scala:64)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80)
at org.apache.spark.sql.execution.SQLExecution.withSQLConfPropagated(SQLExecution.scala:127)
at org.apache.spark.sql.execution.SQLExecution.withNewExecutionId(SQLExecution.scala:75)
at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:63)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:371)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala:274)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala:929)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.addCallsForCurrentReplica(ScannerCallableWithReplicas.java:329)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:191)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
… 80 more
Caused by: java.lang.ClassNotFoundException: org.apache.htrace.core.HTraceConfiguration
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
… 85 more

On every worker node, run the following command:

scp $HBASE_HOME/lib/client-facing-thirdparty/htrace-core4-4.2.0-incubating.jar $SPARK_HOME/jars/
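
After copying, a quick sanity check (assuming unzip is available) confirms that the jar is now in Spark's jars directory and really contains the class named in the "Caused by" line:

ls $SPARK_HOME/jars/ | grep htrace
unzip -l $SPARK_HOME/jars/htrace-core4-4.2.0-incubating.jar | grep HTraceConfiguration

If the second command prints org/apache/htrace/core/HTraceConfiguration.class, restarting the Spark application (or the thrift server) should clear the error, since jars in $SPARK_HOME/jars are only picked up when the JVM starts.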

