Question:

How do I resolve the Guava dependency conflict when submitting an uber jar to Google Dataproc?

元英朗
2023-03-14

I am building an uber jar with the maven shade plugin so that I can submit it as a job to a Google Dataproc cluster. Google has already installed Apache Spark 2.0.2 and Apache Hadoop 2.7.3 on the cluster.

Apache Spark 2.0.2 uses com.google.guava 14.0.1 and Apache Hadoop 2.7.3 uses 11.0.2, and both should already be on the classpath.

<plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>3.0.0</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                    <!--  
                        <artifactSet>
                            <includes>
                                <include>com.google.guava:guava:jar:19.0</include>
                            </includes>
                        </artifactSet>
                    -->
                        <artifactSet>
                            <excludes>
                                <exclude>com.google.guava:guava:*</exclude>                                 
                            </excludes>
                        </artifactSet>
                    </configuration>
                </execution>
            </executions>
        </plugin>

When I include the guava 16.0.1 jar in the shade plugin, I get the following exception:

Exception in thread "main" java.io.IOException: Failed to open native connection to Cassandra at {10.148.0.3}:9042
at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:163)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$3.apply(CassandraConnector.scala:149)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$3.apply(CassandraConnector.scala:149)
at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:31)
at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:56)
at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:82)
at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:110)
at com.datastax.spark.connector.cql.CassandraConnector.withClusterDo(CassandraConnector.scala:121)
at com.datastax.spark.connector.cql.Schema$.fromCassandra(Schema.scala:322)
at com.datastax.spark.connector.cql.Schema$.tableFromCassandra(Schema.scala:342)
at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.tableDef(CassandraTableRowReaderProvider.scala:50)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.tableDef$lzycompute(CassandraTableScanRDD.scala:60)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.tableDef(CassandraTableScanRDD.scala:60)
at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.verify(CassandraTableRowReaderProvider.scala:137)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.verify(CassandraTableScanRDD.scala:60)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.getPartitions(CassandraTableScanRDD.scala:232)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1913)
at org.apache.spark.rdd.RDD.count(RDD.scala:1134)
at com.test.scala.CreateVirtualTable$.main(CreateVirtualTable.scala:47)
at com.test.scala.CreateVirtualTable.main(CreateVirtualTable.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:736)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NoSuchMethodError: com.google.common.util.concurrent.Futures.withFallback(Lcom/google/common/util/concurrent/ListenableFuture;Lcom/google/common/util/concurrent/FutureFallback;Ljava/util/concurrent/Executor;)Lcom/google/common/util/concurrent/ListenableFuture;
at com.datastax.driver.core.Connection.initAsync(Connection.java:177)
at com.datastax.driver.core.Connection$Factory.open(Connection.java:731)
at com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:251)
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:199)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1414)
at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:393)
at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:156)

... 32 more
17/05/10 09:07:36 INFO                           

If I exclude Guava 16.0.1 instead, it throws this exception:

Exception in thread "main" java.lang.NoClassDefFoundError: com/google/common/reflect/TypeParameter
at com.datastax.driver.core.SanityChecks.checkGuava(SanityChecks.java:50)
at com.datastax.driver.core.SanityChecks.check(SanityChecks.java:36)
at com.datastax.driver.core.Cluster.<clinit>(Cluster.java:67)
at com.datastax.spark.connector.cql.DefaultConnectionFactory$.clusterBuilder(CassandraConnectionFactory.scala:35)
at com.datastax.spark.connector.cql.DefaultConnectionFactory$.createCluster(CassandraConnectionFactory.scala:92)
at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:154)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$3.apply(CassandraConnector.scala:149)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$3.apply(CassandraConnector.scala:149)
at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:31)
at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:56)
at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:82)
at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:110)
at com.datastax.spark.connector.cql.CassandraConnector.withClusterDo(CassandraConnector.scala:121)
at com.datastax.spark.connector.cql.Schema$.fromCassandra(Schema.scala:322)
at com.datastax.spark.connector.cql.Schema$.tableFromCassandra(Schema.scala:342)
at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.tableDef(CassandraTableRowReaderProvider.scala:50)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.tableDef$lzycompute(CassandraTableScanRDD.scala:60)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.tableDef(CassandraTableScanRDD.scala:60)
at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.verify(CassandraTableRowReaderProvider.scala:137)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.verify(CassandraTableScanRDD.scala:60)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.getPartitions(CassandraTableScanRDD.scala:232)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1913)
at org.apache.spark.rdd.RDD.count(RDD.scala:1134)
at com.test.scala.CreateVirtualTable$.main(CreateVirtualTable.scala:47)
at com.test.scala.CreateVirtualTable.main(CreateVirtualTable.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:736)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: com.google.common.reflect.TypeParameter
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 38 more
17/05/11 08:24:00 INFO org.spark_project.jetty.server.ServerConnector: Stopped ServerConnector@edc6a5d{HTTP/1.1}{0.0.0.0:4040}
17/05/11 08:24:00 INFO com.datastax.spark.connector.util.SerialShutdownHooks: Successfully executed shutdown hook: Clearing session cache for C* connector

So what is going wrong here? Is the classloader on Dataproc picking up Guava 11.0.2 from Hadoop? Guava 11.0.2 does not contain the com/google/common/reflect/TypeParameter class. Any Google Dataproc developers following this tag, please help.
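
To test that suspicion, here is a minimal diagnostic sketch (the object name is mine, and it assumes the check runs in the same JVM as the Spark driver) that prints which jar each relevant Guava class is loaded from; if the output points at a Hadoop or Spark lib directory, the cluster's older Guava is shadowing the one bundled in the uber jar:

object GuavaClasspathCheck {
  // Returns the jar (or directory) a class was loaded from, or a marker if it
  // is missing or came from the bootstrap classloader.
  private def locationOf(className: String): String =
    try {
      val source = Class.forName(className).getProtectionDomain.getCodeSource
      if (source != null) source.getLocation.toString else "bootstrap/unknown"
    } catch {
      case _: ClassNotFoundException => "not on the classpath"
    }

  def main(args: Array[String]): Unit = {
    println("Futures       -> " + locationOf("com.google.common.util.concurrent.Futures"))
    println("TypeParameter -> " + locationOf("com.google.common.reflect.TypeParameter"))
  }
}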

1 answer

范承望
2023-03-14

Edit: see https://cloud.google.com/blog/products/data-analytics/managing-java-dependencies-apache-spark-applications-cloud-dataproc for complete examples with both Maven and SBT.

Original answer: when I run uber JARs on Hadoop/Spark/Dataproc, I regularly bundle whatever version of Guava suits my needs and then use shade relocation, which lets the different versions coexist without problems:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.3</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
      <artifactSet>
          <includes>
            <include>com.google.guava:*</include>
          </includes>
      </artifactSet>
      <minimizeJar>false</minimizeJar>
      <relocations>
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>repackaged.com.google.common</shadedPattern>
          </relocation>
      </relocations>
      <shadedArtifactAttached>true</shadedArtifactAttached>
      </configuration>
    </execution>
  </executions>
</plugin>
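
With this configuration the application code keeps importing Guava under its normal package name; at package time the shade plugin rewrites those references in the bytecode to repackaged.com.google.common, and because shadedArtifactAttached is true the relocated jar is attached with the -shaded classifier, so that is the artifact to submit to Dataproc. A sketch of hypothetical job code (the names are mine, not from the question) that relies on the bundled Guava:

import com.google.common.base.Joiner
import com.google.common.reflect.TypeToken

object RelocatedGuavaCheck {
  def main(args: Array[String]): Unit = {
    // Joiner exists in every Guava version involved here, so this call is safe either way.
    println(Joiner.on(", ").join("spark", "cassandra", "dataproc"))

    // TypeToken lives in com.google.common.reflect, the package Guava 11.0.2
    // lacks (the NoClassDefFoundError above); after relocation it resolves
    // against the copy packed inside the shaded jar.
    val listType = new TypeToken[java.util.List[String]]() {}.getType
    println("Resolved type: " + listType)
  }
}

Listing the shaded jar (for example with jar tf) should then show the Guava classes under repackaged/com/google/common/ rather than com/google/common/, which confirms the relocation took effect.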