Question:

Custom log4j appender in the Spark executor

淳于凯
2023-03-14

I am trying to use a custom log4j appender in the Spark executors so that all logs are forwarded to Apache Kafka.
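The executor-side log4j configuration is along these lines (a minimal sketch; the broker list, topic, and layout are assumed values, not the original poster's exact file — the property names follow the Kafka 0.8-era appender named in the error below):

# log4j.properties shipped to the executors (hypothetical values)
log4j.rootLogger=INFO, stdout, KAFKA

# Kafka 0.8-era log4j appender from the core kafka artifact
log4j.appender.KAFKA=kafka.producer.KafkaLog4jAppender
log4j.appender.KAFKA.brokerList=kafka1.dev:9092
log4j.appender.KAFKA.topic=spark-logs
log4j.appender.KAFKA.layout=org.apache.log4j.PatternLayout
log4j.appender.KAFKA.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} [%t] %p %c: %m%n

# Console appender so executor logs still reach stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} [%t] %p %c: %m%n

When the executor JVM starts, log4j cannot find the appender class and fails: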

log4j:ERROR Could not instantiate class [kafka.producer.KafkaLog4jAppender].
java.lang.ClassNotFoundException: kafka.producer.KafkaLog4jAppender
    at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:260)
    at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
    at org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
    at org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
    at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
    at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
    at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
    at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
    at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
    at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
    at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
    at org.apache.spark.Logging$class.initializeLogging(Logging.scala:122)
    at org.apache.spark.Logging$class.initializeIfNecessary(Logging.scala:107)
    at org.apache.spark.Logging$class.log(Logging.scala:51)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$.log(CoarseGrainedExecutorBackend.scala:126)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:137)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:235)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
log4j:ERROR Could not instantiate appender named "KAFKA".
2015-09-29 13:10:43 [driverPropsFetcher-akka.actor.default-dispatcher-4] INFO akka.event.slf4j.Slf4jLogger: Slf4jLogger started
2015-09-29 13:10:43 [driverPropsFetcher-akka.actor.default-dispatcher-4] INFO Remoting: Starting remoting
2015-09-29 13:10:43 [driverPropsFetcher-akka.actor.default-dispatcher-4] INFO Remoting: Remoting started; listening on addresses :[akka.tcp://driverPropsFetcher@gin3.dev:36918]
2015-09-29 13:10:43 [driverPropsFetcher-akka.actor.default-dispatcher-4] INFO Remoting: Remoting now listens on addresses: [akka.tcp://driverPropsFetcher@gin3.dev:36918]
2015-09-29 13:10:44 [driverPropsFetcher-akka.actor.default-dispatcher-4] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
2015-09-29 13:10:44 [driverPropsFetcher-akka.actor.default-dispatcher-4] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
2015-09-29 13:10:44 [sparkExecutor-akka.actor.default-dispatcher-3] INFO akka.event.slf4j.Slf4jLogger: Slf4jLogger started
2015-09-29 13:10:44 [sparkExecutor-akka.actor.default-dispatcher-2] INFO Remoting: Starting remoting
2015-09-29 13:10:44 [sparkExecutor-akka.actor.default-dispatcher-2] INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkExecutor@gin3.dev:40067]
2015-09-29 13:10:44 [sparkExecutor-akka.actor.default-dispatcher-2] INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkExecutor@gin3.dev:40067]
2015-09-29 13:10:44 [driverPropsFetcher-akka.actor.default-dispatcher-5] INFO Remoting: Remoting shut down
....

大卫

1 answer

窦夜洛
2023-03-14

In the end I submitted an extra jar containing the logging dependencies and loaded it ahead of the user classpath:

# ${THISDIR} and ${LOG4J_CONF} are defined earlier in the script;
# LOG4J_CONF points at the log4j.properties file to distribute.
LOG_JAR="${THISDIR}/../lib/logging.jar"

# --files ships both the config file and the jar to the working directory
# of each driver/executor container, which is why the bare file name
# (basename) resolves for -Dlog4j.configuration and extraClassPath.
spark-submit ...... \
  --files "${LOG4J_CONF},${LOG_JAR}" \
  --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=`basename ${LOG4J_CONF}`" \
  --conf "spark.driver.extraClassPath=`basename ${LOG_JAR}`" \
  --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=`basename ${LOG4J_CONF}`" \
  --conf "spark.executor.extraClassPath=`basename ${LOG_JAR}`" \
  ...

https://issues.apache.org/jira/browse/spark-10881?filter=-2
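The stack trace above shows why this is necessary: log4j is initialized while CoarseGrainedExecutorBackend starts up, before the executor has added the application's own jars, so an appender class shipped only with the application (e.g. via --jars) is not yet visible. Putting logging.jar on spark.executor.extraClassPath makes it part of the executor's system classpath from the start. Note that logging.jar must actually bundle the appender class: for kafka.producer.KafkaLog4jAppender as named in the error, that presumably means the 0.8-era core Kafka artifact (e.g. org.apache.kafka:kafka_2.10) together with its dependencies.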
