-- Solved by changing the state backend from filesystem to RocksDB --
Running Flink 1.9 on AWS EMR. The Flink application uses one Kinesis stream as input and another Kinesis stream as output. Recently the checkpoint size has grown to about 1 GB (due to increased data volume). From time to time, while a checkpoint is being taken, the application starts consuming all available CPU (this happens a few times a day).
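For reference, a minimal sketch of the kind of Kinesis-in / Kinesis-out job described above; stream names, region, serialization schema and the checkpoint interval are placeholders, not taken from this post:

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;

public class KinesisJobSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000); // checkpoint interval is an assumption

        Properties props = new Properties();
        props.setProperty(ConsumerConfigConstants.AWS_REGION, "us-east-1"); // placeholder region

        // Kinesis source -> processing (windowing etc.) -> Kinesis sink
        FlinkKinesisConsumer<String> source =
                new FlinkKinesisConsumer<>("input-stream", new SimpleStringSchema(), props);
        FlinkKinesisProducer<String> sink =
                new FlinkKinesisProducer<>(new SimpleStringSchema(), props);
        sink.setDefaultStream("output-stream");
        sink.setDefaultPartition("0");

        env.addSource(source)
           // business logic / windowing omitted
           .addSink(sink);

        env.execute("kinesis-events-job"); // job name is a placeholder
    }
}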
Metrics:
Load average (EMR EC2 core node running the job/task managers)
Run loop time - Kinesis consumer
Records per fetch - Kinesis consumer
Task manager GC
JobManager logs
{"level":"INFO","timestamp":"2020-08-25 04:55:27,399","thread":"Checkpoint Timer","file":"CheckpointCoordinator.java","line":"617","message":"Triggering checkpoint 1232 @ 1598331327244 for job 0039825bafae26bc34db88e037a1dae3."}
{"level":"INFO","timestamp":"2020-08-25 04:58:24,509","thread":"flink-akka.actor.default-dispatcher-7010","file":"ResourceManager.java","line":"1144","message":"The heartbeat of TaskManager with id container_1597960565773_0003_01_000002 timed out."}
{"level":"INFO","timestamp":"2020-08-25 04:58:24,510","thread":"flink-akka.actor.default-dispatcher-7010","file":"ResourceManager.java","line":"805","message":"Closing TaskExecutor connection container_1597960565773_0003_01_000002 because: The heartbeat of TaskManager with id container_1597960565773_0003_01_000002 timed out."}
{"level":"INFO","timestamp":"2020-08-25 04:58:24,514","thread":"flink-akka.actor.default-dispatcher-7015","file":"Execution.java","line":"1493","message":"Sink: kinesis-events-sink (1/1) (573401e241fe0a0ac0a8a54c81c4eefd) switched from RUNNING to FAILED."}
java.util.concurrent.TimeoutException: The heartbeat of TaskManager with id container_1597960565773_0003_01_000002 timed out.
at org.apache.flink.runtime.resourcemanager.ResourceManager$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(ResourceManager.java:1146)
at org.apache.flink.runtime.heartbeat.HeartbeatMonitorImpl.run(HeartbeatMonitorImpl.java:109)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:397)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:190)
at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
at akka.actor.ActorCell.invoke(ActorCell.scala:561)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
at akka.dispatch.Mailbox.run(Mailbox.scala:225)
at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
{"level":"INFO","timestamp":"2020-08-25 04:58:24,515","thread":"flink-akka.actor.default-dispatcher-7015","file":"ExecutionGraph.java","line":"1324","message":"Job JOB_NAME_HIDDEN (0039825bafae26bc34db88e037a1dae3) switched from state RUNNING to FAILING."}
java.util.concurrent.TimeoutException: The heartbeat of TaskManager with id container_1597960565773_0003_01_000002 timed out.
at org.apache.flink.runtime.resourcemanager.ResourceManager$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(ResourceManager.java:1146)
at org.apache.flink.runtime.heartbeat.HeartbeatMonitorImpl.run(HeartbeatMonitorImpl.java:109)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:397)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:190)
at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
at akka.actor.ActorCell.invoke(ActorCell.scala:561)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
at akka.dispatch.Mailbox.run(Mailbox.scala:225)
at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
TaskManager logs
{"level":"INFO","timestamp":"2020-08-25 04:58:17,504","thread":"kpl-daemon-0003","file":"LogInputStreamReader.java","line":"59","message":"[2020-08-25 04:58:02.469832] [0x00007394][0x00007f2673d70700] [info] [processing_statistics_logger.cc:127] (sensing-events-test) Aver
age Processing Time: -nan ms"}
{"level":"INFO","timestamp":"2020-08-25 04:58:17,504","thread":"kpl-daemon-0003","file":"LogInputStreamReader.java","line":"59","message":"[2020-08-25 04:58:17.469922] [0x00007394][0x00007f2673d70700] [info] [processing_statistics_logger.cc:109] Stage 1 Triggers: { stream
: 'sensing-events-test', manual: 0, count: 0, size: 0, matches: 0, timed: 0, UserRecords: 0, KinesisRecords: 0 }"}
{"level":"INFO","timestamp":"2020-08-25 04:58:17,504","thread":"kpl-daemon-0003","file":"LogInputStreamReader.java","line":"59","message":"[2020-08-25 04:58:17.469977] [0x00007394][0x00007f2673d70700] [info] [processing_statistics_logger.cc:112] Stage 2 Triggers: { stream
: 'sensing-events-test', manual: 0, count: 0, size: 0, matches: 0, timed: 0, KinesisRecords: 0, PutRecords: 0 }"}
{"level":"INFO","timestamp":"2020-08-25 04:58:17,504","thread":"kpl-daemon-0003","file":"LogInputStreamReader.java","line":"59","message":"[2020-08-25 04:58:17.469992] [0x00007394][0x00007f2673d70700] [info] [processing_statistics_logger.cc:127] (sensing-events-test) Aver
age Processing Time: -nan ms"}
{"level":"ERROR","timestamp":"2020-08-25 04:58:28,535","thread":"flink-akka.actor.default-dispatcher-628","file":"FatalExitExceptionHandler.java","line":"40","message":"FATAL: Thread 'flink-akka.actor.default-dispatcher-628' produced an uncaught exception. Stopping the pr
ocess..."}
java.lang.OutOfMemoryError: GC overhead limit exceeded
at sun.reflect.UTF8.encode(UTF8.java:36)
at sun.reflect.ClassFileAssembler.emitConstantPoolUTF8(ClassFileAssembler.java:103)
at sun.reflect.MethodAccessorGenerator.generate(MethodAccessorGenerator.java:331)
at sun.reflect.MethodAccessorGenerator.generateSerializationConstructor(MethodAccessorGenerator.java:112)
at sun.reflect.ReflectionFactory.generateConstructor(ReflectionFactory.java:398)
at sun.reflect.ReflectionFactory.newConstructorForSerialization(ReflectionFactory.java:360)
at java.io.ObjectStreamClass.getSerializableConstructor(ObjectStreamClass.java:1588)
at java.io.ObjectStreamClass.access$1500(ObjectStreamClass.java:79)
at java.io.ObjectStreamClass$3.run(ObjectStreamClass.java:519)
at java.io.ObjectStreamClass$3.run(ObjectStreamClass.java:494)
at java.security.AccessController.doPrivileged(Native Method)
at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:494)
at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:391)
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:681)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1941)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1807)
at java.io.ObjectInputStream.readClass(ObjectInputStream.java:1770)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1595)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:464)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
at org.apache.flink.runtime.rpc.messages.RemoteRpcInvocation$MethodInvocation.readObject(RemoteRpcInvocation.java:204)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1184)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2234)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2125)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1624)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:464)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:576)
UPD. flink-conf.yaml:
state.backend.fs.checkpointdir: s3a://s3-bucket-with-checkpoints/flink-checkpoints
taskmanager.numberOfTaskSlots: 1
state.backend: filesystem
taskmanager.heap.size: 3057m
state.checkpoints.dir: s3a://s3-bucket-with-checkpoints/external-checkpoints
Flink checkpoints
I think this may be related to the SlidingEventTimeWindow. As I understand from the checkpoint screenshot, it is a window of 2 minutes in size with a 2-second slide. Flink creates a copy of each element for every window it belongs to, so in your sliding-window setup it creates about 60 copies of each element (120 s / 2 s = 60 overlapping windows), which makes the state roughly 60 times larger than it would be with a tumbling window of the same size.
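A small, self-contained sketch of the window assignment being described; only the 2-minute size and 2-second slide come from the screenshot, while the source, key selector and reduce function below are hypothetical:

import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor;
import org.apache.flink.streaming.api.windowing.assigners.SlidingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class SlidingWindowSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        // Placeholder source; the real job reads from Kinesis.
        DataStream<Long> events = env
                .fromElements(1L, 2L, 3L)
                .assignTimestampsAndWatermarks(new AscendingTimestampExtractor<Long>() {
                    @Override
                    public long extractAscendingTimestamp(Long element) {
                        return element; // treat the value itself as event-time millis
                    }
                });

        // size / slide = 120 s / 2 s = 60: every element is assigned to ~60
        // overlapping windows, so a heap state backend buffers ~60 copies of it.
        events
                .keyBy(new KeySelector<Long, Long>() {
                    @Override
                    public Long getKey(Long value) {
                        return value % 10; // hypothetical key
                    }
                })
                .window(SlidingEventTimeWindows.of(Time.minutes(2), Time.seconds(2)))
                .reduce(Long::sum)
                .print();

        env.execute("sliding-window-sketch");
    }
}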
I guess that on a checkpoint Flink tries to serialize the state, does not have enough memory, so the GC kicks in and eventually the heap is exhausted (the "GC overhead limit exceeded" error above).
Solved the problem by changing the state backend from filesystem to RocksDB.
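A sketch of one way to make that switch, assuming the backend is set programmatically; the S3 URI is the checkpoint directory from the flink-conf.yaml above, the incremental-checkpointing flag is an optional extra not mentioned in the original, and the flink-statebackend-rocksdb dependency is required:

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDbBackendSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // RocksDB keeps working state on local disk instead of the JVM heap,
        // so checkpointing large state no longer requires serializing it all
        // out of heap memory at once.
        RocksDBStateBackend backend = new RocksDBStateBackend(
                "s3a://s3-bucket-with-checkpoints/flink-checkpoints",
                true); // incremental checkpoints (optional)
        env.setStateBackend(backend);

        // Trivial placeholder pipeline; the real job wires the Kinesis source/sink here.
        env.fromElements("a", "b", "c").print();

        env.execute("rocksdb-backend-sketch");
    }
}

Equivalently, changing state.backend from filesystem to rocksdb in flink-conf.yaml applies the same backend cluster-wide without touching the job code.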