I've run into strange behavior with Hazelcast Jet. I'm starting many jobs at the same time (~30; some are triggered slightly before the others). However, once my Hazelcast Jet job count reaches 26 (a magic number?), all processing gets stuck.
In the thread dumps I see the following:
"hz._hzInstance_1_jet.cached.thread-1" #37 prio=5 os_prio=0 cpu=1093.29ms elapsed=393.62s tid=0x00007f95dc007000 nid=0x6bfc in Object.wait() [0x00007f95e6af4000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(java.base@11.0.2/Native Method)
- waiting on <no object reference available>
at com.hazelcast.spi.impl.AbstractCompletableFuture.get(AbstractCompletableFuture.java:229)
- waiting to re-lock in wait() <0x00000007864b7040> (a com.hazelcast.internal.util.SimpleCompletableFuture)
at com.hazelcast.spi.impl.AbstractCompletableFuture.get(AbstractCompletableFuture.java:191)
at com.hazelcast.spi.impl.operationservice.impl.InvokeOnPartitions.invoke(InvokeOnPartitions.java:88)
at com.hazelcast.spi.impl.operationservice.impl.OperationServiceImpl.invokeOnAllPartitions(OperationServiceImpl.java:385)
at com.hazelcast.map.impl.proxy.MapProxySupport.clearInternal(MapProxySupport.java:1016)
at com.hazelcast.map.impl.proxy.MapProxyImpl.clearInternal(MapProxyImpl.java:109)
at com.hazelcast.map.impl.proxy.MapProxyImpl.clear(MapProxyImpl.java:698)
at com.hazelcast.jet.impl.JobRepository.clearSnapshotData(JobRepository.java:464)
at com.hazelcast.jet.impl.MasterJobContext.tryStartJob(MasterJobContext.java:233)
at com.hazelcast.jet.impl.JobCoordinationService.tryStartJob(JobCoordinationService.java:776)
at com.hazelcast.jet.impl.JobCoordinationService.lambda$submitJob$0(JobCoordinationService.java:200)
at com.hazelcast.jet.impl.JobCoordinationService$$Lambda$634/0x00000008009ce840.run(Unknown Source)
"hz._hzInstance_1_jet.async.thread-2" #81 prio=5 os_prio=0 cpu=0.00ms elapsed=661.98s tid=0x0000025bb23ef000 nid=0x43bc in Object.wait() [0x0000005d492fe000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(java.base@11/Native Method)
- waiting on <no object reference available>
at com.hazelcast.spi.impl.AbstractCompletableFuture.get(AbstractCompletableFuture.java:229)
- waiting to re-lock in wait() <0x0000000725600100> (a com.hazelcast.internal.util.SimpleCompletableFuture)
at com.hazelcast.spi.impl.AbstractCompletableFuture.get(AbstractCompletableFuture.java:191)
at com.hazelcast.spi.impl.operationservice.impl.InvokeOnPartitions.invoke(InvokeOnPartitions.java:88)
at com.hazelcast.spi.impl.operationservice.impl.OperationServiceImpl.invokeOnAllPartitions(OperationServiceImpl.java:385)
at com.hazelcast.map.impl.proxy.MapProxySupport.removeAllInternal(MapProxySupport.java:619)
at com.hazelcast.map.impl.proxy.MapProxyImpl.removeAll(MapProxyImpl.java:285)
at com.hazelcast.jet.impl.JobRepository.deleteJob(JobRepository.java:332)
at com.hazelcast.jet.impl.JobRepository.completeJob(JobRepository.java:316)
at com.hazelcast.jet.impl.JobCoordinationService.completeJob(JobCoordinationService.java:576)
at com.hazelcast.jet.impl.MasterJobContext.lambda$finalizeJob$13(MasterJobContext.java:620)
at com.hazelcast.jet.impl.MasterJobContext$$Lambda$783/0x0000000800b26840.run(Unknown Source)
at com.hazelcast.jet.impl.MasterJobContext.finalizeJob(MasterJobContext.java:632)
at com.hazelcast.jet.impl.MasterJobContext.onCompleteExecutionCompleted(MasterJobContext.java:564)
at com.hazelcast.jet.impl.MasterJobContext.lambda$invokeCompleteExecution$6(MasterJobContext.java:544)
at com.hazelcast.jet.impl.MasterJobContext$$Lambda$779/0x0000000800b27840.accept(Unknown Source)
at com.hazelcast.jet.impl.MasterContext.lambda$invokeOnParticipants$0(MasterContext.java:242)
at com.hazelcast.jet.impl.MasterContext$$Lambda$726/0x0000000800a1c040.accept(Unknown Source)
at com.hazelcast.jet.impl.util.Util$2.onResponse(Util.java:172)
at com.hazelcast.spi.impl.AbstractInvocationFuture$1.run(AbstractInvocationFuture.java:256)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11/ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11/ThreadPoolExecutor.java:628)
at java.lang.Thread.run(java.base@11/Thread.java:834)
at com.hazelcast.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:64)
at com.hazelcast.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:80)
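Both dumps show coordinator threads parked inside AbstractCompletableFuture.get(), i.e. blocking on a future from within a pooled thread. If enough such threads block on results that only other pool threads could produce, the pool starves and everything appears "stuck". A self-contained sketch of that general failure mode, using plain java.util.concurrent (the class and pool sizes here are hypothetical illustrations, not Jet's actual internals):

```java
import java.util.concurrent.*;

public class PoolStarvationDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CountDownLatch done = new CountDownLatch(1); // intentionally never counted down

        // Each "coordination task" blocks waiting for a result that only
        // another task on the same pool could produce.
        for (int i = 0; i < 2; i++) {
            pool.submit(() -> {
                try { done.await(); } catch (InterruptedException ignored) { }
            });
        }

        // This task can never start: both worker threads are already parked.
        Future<String> f = pool.submit(() -> "finished");
        try {
            f.get(500, TimeUnit.MILLISECONDS);
            System.out.println("completed");
        } catch (TimeoutException e) {
            System.out.println("stuck: all pool threads are blocked");
        }
        pool.shutdownNow(); // interrupts the parked tasks so the JVM can exit
    }
}
```

With a pool of 2 and both workers blocked, the third task is queued forever and the timed get() times out, which mirrors the symptom of jobs piling up until a threshold and then stalling.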
My setup:
- Java 11
- Hazelcast 3.12-SNAPSHOT
- Hazelcast Jet 3.0-SNAPSHOT (I can't go back to an earlier release; that would break my logic, since I need the n:m join that will be added in 3.1)
- CPU cores: 4
- RAM: 7 GB
- Jet mode: server, connecting to another cluster as a client to insert the final data
Has anyone run into a similar problem? The trouble is that it can't be reproduced reliably, which makes it hard to file an issue with the Hazelcast team. Only the thread dumps and the general behavior hint at what is going on.
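Until the underlying bug is fixed, one possible mitigation is to throttle how many jobs are submitted concurrently so the count never approaches the stall threshold. A minimal sketch with plain java.util.concurrent (the ThrottledSubmit class and the limit of 4 are hypothetical choices, not anything prescribed by Jet):

```java
import java.util.concurrent.*;

public class ThrottledSubmit {
    // Hypothetical throttle: allow at most maxInFlight jobs running at once.
    private final Semaphore permits;
    final ExecutorService exec = Executors.newCachedThreadPool();

    ThrottledSubmit(int maxInFlight) {
        permits = new Semaphore(maxInFlight);
    }

    Future<?> submit(Runnable job) throws InterruptedException {
        permits.acquire(); // blocks the submitter while maxInFlight jobs are running
        return exec.submit(() -> {
            try { job.run(); } finally { permits.release(); }
        });
    }

    public static void main(String[] args) throws Exception {
        ThrottledSubmit t = new ThrottledSubmit(4);
        CountDownLatch all = new CountDownLatch(30);
        for (int i = 0; i < 30; i++) {
            t.submit(all::countDown); // stand-in for a real job submission
        }
        all.await();
        System.out.println("all 30 jobs completed");
        t.exec.shutdown();
    }
}
```

The same pattern can wrap the real job-submission call: acquire a permit before submitting and release it from the job's completion callback.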
This was an issue during 3.0-SNAPSHOT development, and it was fixed in the 3.0 release.