I have a Kafka Streams application that reads from a topic, `data`, which is populated by a console producer. The application goes through a number of steps that build up two KTables, which I then want to join. Each KTable is produced successfully; I can even call `toStream` and then `peek` at the values to check each one on its own. But as soon as I try to join the KTables, the application won't even start: introducing the line `bar.join(qux).toStream()` causes the crash below. The KTables `bar` and `qux` themselves do appear to be produced correctly.
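For reference, here is a minimal sketch of the shape of the topology. The grouping and `count` steps are simplified placeholders for the real aggregation logic; only the source topic `data`, the table names `bar`/`qux`, and the final join line correspond to what the actual application does:

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;

public class JoinTopologySketch {

    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();

        // Source topic written to by the console producer.
        final KStream<String, String> source =
                builder.stream("data", Consumed.with(Serdes.String(), Serdes.String()));

        // Two independent aggregations, each producing a KTable.
        // The grouping keys and count() are placeholders for the real steps.
        final KTable<String, Long> bar = source
                .groupBy((key, value) -> value, Grouped.with(Serdes.String(), Serdes.String()))
                .count(Materialized.as("bar-store"));

        final KTable<String, Long> qux = source
                .groupBy((key, value) -> key, Grouped.with(Serdes.String(), Serdes.String()))
                .count(Materialized.as("qux-store"));

        // Each table works fine on its own, e.g.:
        // bar.toStream().peek((k, v) -> System.out.println(k + " -> " + v));

        // Introducing this join is what makes the application fail on startup.
        bar.join(qux, (barValue, quxValue) -> barValue + ":" + quxValue)
           .toStream()
           .peek((k, v) -> System.out.println(k + " -> " + v));

        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "foo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        final KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```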
Here is the output I get as the error message:
2020-02-14 15:56:28.599 INFO AssignorConfiguration - stream-thread [foo-d2f546ef-f7eb-4088-ae04-1943ed71f7a4-StreamThread-1-consumer] Cooperative rebalancing enabled now
2020-02-14 15:56:28.630 WARN ConsumerConfig - The configuration 'admin.retries' was supplied but isn't a known config.
2020-02-14 15:56:28.630 WARN ConsumerConfig - The configuration 'admin.retry.backoff.ms' was supplied but isn't a known config.
2020-02-14 15:56:28.630 INFO AppInfoParser - Kafka version: 2.4.0
2020-02-14 15:56:28.630 INFO AppInfoParser - Kafka commitId: 77a89fcf8d7fa018
2020-02-14 15:56:28.630 INFO AppInfoParser - Kafka startTimeMs: 1581695788630
2020-02-14 15:56:28.636 INFO KafkaStreams - stream-client [foo-d2f546ef-f7eb-4088-ae04-1943ed71f7a4] State transition from CREATED to REBALANCING
2020-02-14 15:56:28.636 INFO StreamThread - stream-thread [foo-d2f546ef-f7eb-4088-ae04-1943ed71f7a4-StreamThread-1] Starting
2020-02-14 15:56:28.636 INFO StreamThread - stream-thread [foo-d2f546ef-f7eb-4088-ae04-1943ed71f7a4-StreamThread-1] State transition from CREATED to STARTING
2020-02-14 15:56:28.637 INFO KafkaConsumer - [Consumer clientId=foo-d2f546ef-f7eb-4088-ae04-1943ed71f7a4-StreamThread-1-consumer, groupId=foo] Subscribed to pattern: 'foo-KSTREAM-AGGREGATE-STATE-STORE-0000000009-repartition|foo-KSTREAM-AGGREGATE-STATE-STORE-0000000016-repartition|foo-KSTREAM-AGGREGATE-STATE-STORE-0000000022-repartition|foo-KSTREAM-AGGREGATE-STATE-STORE-0000000029-repartition|data'
2020-02-14 15:56:28.906 INFO Metadata - [Consumer clientId=foo-d2f546ef-f7eb-4088-ae04-1943ed71f7a4-StreamThread-1-consumer, groupId=foo] Cluster ID: ghhNsZUZRSGD984ra7fXRg
2020-02-14 15:56:28.907 INFO AbstractCoordinator - [Consumer clientId=foo-d2f546ef-f7eb-4088-ae04-1943ed71f7a4-StreamThread-1-consumer, groupId=foo] Discovered group coordinator 10.1.36.24:9092 (id: 2147483647 rack: null)
2020-02-14 15:56:28.915 INFO AbstractCoordinator - [Consumer clientId=foo-d2f546ef-f7eb-4088-ae04-1943ed71f7a4-StreamThread-1-consumer, groupId=foo] (Re-)joining group
2020-02-14 15:56:28.920 INFO AbstractCoordinator - [Consumer clientId=foo-d2f546ef-f7eb-4088-ae04-1943ed71f7a4-StreamThread-1-consumer, groupId=foo] (Re-)joining group
2020-02-14 15:56:28.925 ERROR StreamThread - stream-thread [foo-d2f546ef-f7eb-4088-ae04-1943ed71f7a4-StreamThread-1] Encountered the following error during processing:
java.lang.IllegalArgumentException: Number of partitions must be at least 1.
at org.apache.kafka.streams.processor.internals.InternalTopicConfig.setNumberOfPartitions(InternalTopicConfig.java:62) ~[kafka-streams-2.4.0.jar:?]
at org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.assign(StreamsPartitionAssignor.java:473) ~[kafka-streams-2.4.0.jar:?]
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:548) ~[kafka-clients-2.4.0.jar:?]
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:650) ~[kafka-clients-2.4.0.jar:?]
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$1300(AbstractCoordinator.java:111) ~[kafka-clients-2.4.0.jar:?]
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:572) ~[kafka-clients-2.4.0.jar:?]
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:555) ~[kafka-clients-2.4.0.jar:?]
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:1026) ~[kafka-clients-2.4.0.jar:?]
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:1006) ~[kafka-clients-2.4.0.jar:?]
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204) ~[kafka-clients-2.4.0.jar:?]
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167) ~[kafka-clients-2.4.0.jar:?]
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127) ~[kafka-clients-2.4.0.jar:?]
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:599) ~[kafka-clients-2.4.0.jar:?]
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:409) ~[kafka-clients-2.4.0.jar:?]
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:294) ~[kafka-clients-2.4.0.jar:?]
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233) ~[kafka-clients-2.4.0.jar:?]
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:212) ~[kafka-clients-2.4.0.jar:?]
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:400) ~[kafka-clients-2.4.0.jar:?]
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:340) ~[kafka-clients-2.4.0.jar:?]
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:471) ~[kafka-clients-2.4.0.jar:?]
at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1267) ~[kafka-clients-2.4.0.jar:?]
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1231) ~[kafka-clients-2.4.0.jar:?]
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1211) ~[kafka-clients-2.4.0.jar:?]
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:843) ~[kafka-streams-2.4.0.jar:?]
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:743) ~[kafka-streams-2.4.0.jar:?]
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:698) ~[kafka-streams-2.4.0.jar:?]
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:671) ~[kafka-streams-2.4.0.jar:?]
2020-02-14 15:56:28.925 INFO StreamThread - stream-thread [foo-d2f546ef-f7eb-4088-ae04-1943ed71f7a4-StreamThread-1] State transition from STARTING to PENDING_SHUTDOWN
2020-02-14 15:56:28.925 INFO StreamThread - stream-thread [foo-d2f546ef-f7eb-4088-ae04-1943ed71f7a4-StreamThread-1] Shutting down
2020-02-14 15:56:28.925 INFO KafkaConsumer - [Consumer clientId=foo-d2f546ef-f7eb-4088-ae04-1943ed71f7a4-StreamThread-1-restore-consumer, groupId=null] Unsubscribed all topics or patterns and assigned partitions
2020-02-14 15:56:28.932 INFO StreamThread - stream-thread [foo-d2f546ef-f7eb-4088-ae04-1943ed71f7a4-StreamThread-1] State transition from PENDING_SHUTDOWN to DEAD
2020-02-14 15:56:28.932 INFO KafkaStreams - stream-client [foo-d2f546ef-f7eb-4088-ae04-1943ed71f7a4] State transition from REBALANCING to ERROR
2020-02-14 15:56:28.932 ERROR KafkaStreams - stream-client [foo-d2f546ef-f7eb-4088-ae04-1943ed71f7a4] All stream threads have died. The instance will be in error state and should be closed.
2020-02-14 15:56:28.932 INFO StreamThread - stream-thread [foo-d2f546ef-f7eb-4088-ae04-1943ed71f7a4-StreamThread-1] Shutdown complete
2020-02-14 15:56:28.934 INFO KafkaStreams - stream-client [foo-d2f546ef-f7eb-4088-ae04-1943ed71f7a4] State transition from ERROR to PENDING_SHUTDOWN
2020-02-14 15:56:28,935 kafka-streams-close-thread WARN [AsyncContext@18b4aac2] Ignoring log event after log4j was shut down.
2020-02-14 15:56:28,936 kafka-streams-close-thread WARN Ignoring log event after log4j was shut down
2020-02-14 15:56:28,938 kafka-streams-close-thread WARN [AsyncContext@18b4aac2] Ignoring log event after log4j was shut down.
2020-02-14 15:56:28,939 kafka-streams-close-thread WARN Ignoring log event after log4j was shut down
2020-02-14 15:56:28,939 Thread-1 WARN [AsyncContext@18b4aac2] Ignoring log event after log4j was shut down.
2020-02-14 15:56:28,939 Thread-1 WARN Ignoring log event after log4j was shut down
What is causing this? Do I need to include some magic configuration to handle the extra state store created by the join I'm trying to introduce?
Downgrading to v2.3.1 fixed the issue, as described here: issues.apache.org/jira/browse/KAFKA-9335
Thanks to Spasoje Petronijević.