Question:

Timeout messages logged in the Infinispan cache and a retry storm after upgrading from WildFly 20 to 23.0.1

蓬宾白
2023-03-14

We just upgraded from WildFly 20 to 23 and are now seeing a problem where Infinispan logs errors and retries in a loop. It happens roughly 100 times per second after startup and only stops when one node of the cluster is shut down.

We get the error below, which repeats indefinitely and consumes roughly 30 Mb/s of bandwidth between the servers, where it is normally around 10-30 Kb/s. The confusing part of the error is that node1 receives an exception from node2, while node2's exception is a timeout waiting for node1. I have already tried moving from the udp to the tcp stack but still see the same problem (it is a 2-node cluster).

I increased the remote timeout from the default 10 seconds to 30 seconds, and saw the same error again almost immediately.
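For reference, the timeout change described above was applied via the `remote-timeout` attribute on the replicated cache in the Infinispan subsystem. This is a sketch of what that looks like; the surrounding attributes are copied from the cache configuration shown further down, and the 30000 ms value reflects the 30-second change mentioned here:

```xml
<replicated-cache name="bServiceCache" remote-timeout="30000" statistics-enabled="true">
    <locking isolation="NONE"/>
    <transaction mode="NONE"/>
    <expiration lifespan="1800000"/>
</replicated-cache>
```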

Is there a new setting required in WildFly 23, have I made some other mistake on my side, or have I hit a new bug?

Here is the JGroups configuration:

                <stack name="udp" statistics-enabled="true">
                    <transport type="UDP" shared="false" socket-binding="jgroups-udp" statistics-enabled="true">
                        <property name="log_discard_msgs">
                            false
                        </property>
                        <property name="port_range">
                            50
                        </property>
                    </transport>
                    <protocol type="PING" module="org.jgroups" statistics-enabled="true"/>
                    <protocol type="MERGE3" module="org.jgroups" statistics-enabled="true"/>
                    <socket-protocol type="FD_SOCK" module="org.jgroups" socket-binding="jgroups-udp-fd" statistics-enabled="true"/>
                    <protocol type="FD_ALL" module="org.jgroups" statistics-enabled="true"/>
                    <protocol type="VERIFY_SUSPECT" module="org.jgroups" statistics-enabled="true"/>
                    <protocol type="pbcast.NAKACK2" module="org.jgroups" statistics-enabled="true"/>
                    <protocol type="UNICAST3" module="org.jgroups" statistics-enabled="true"/>
                    <protocol type="pbcast.STABLE" module="org.jgroups" statistics-enabled="true"/>
                    <protocol type="pbcast.GMS" module="org.jgroups" statistics-enabled="true"/>
                    <protocol type="UFC" module="org.jgroups" statistics-enabled="true"/>
                    <protocol type="MFC" module="org.jgroups" statistics-enabled="true"/>
                    <protocol type="FRAG3"/>
                </stack>

And the Infinispan configuration:

                <cache-container name="localsite-cachecontainer" default-cache="epi-localsite-default" statistics-enabled="true">
                    <transport lock-timeout="60000" channel="localsite-appCache"/>
                    <replicated-cache name="bServiceCache" statistics-enabled="true">
                        <locking isolation="NONE"/>
                        <transaction mode="NONE"/>
                        <expiration lifespan="1800000"/>
                    </replicated-cache>
                </cache-container>

22:47:52,823 WARN  [org.infinispan.CLUSTER] (thread-223,application-localsite,node1) ISPN000071: Caught exception when handling command SingleRpcCommand{cacheName='application-bServiceCache', 
command=PutKeyValueCommand{key=SimpleKey [XXXX,2021-05-06,1412.0,75.0,null], value=[YYYY[pp=4 Pay,PaymentDue=2021-05-28], ppAvaliablity[firstPaymentDue=2021-05-28], ppAvaliablity[firstPaymentDue=2021-05-28]], flags=[], commandInvocationId=CommandInvocation:node2:537,
 putIfAbsent=true, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{version=null, lifespan=1800000, maxIdle=-1}, successful=true, topologyId=18}}: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from node2, see cause for remote stack trace
        at org.infinispan.remoting.transport.ResponseCollectors.wrapRemoteException(ResponseCollectors.java:25)
        at org.infinispan.remoting.transport.impl.MapResponseCollector.addException(MapResponseCollector.java:64)
        at org.infinispan.remoting.transport.impl.MapResponseCollector$IgnoreLeavers.addException(MapResponseCollector.java:102)
        at org.infinispan.remoting.transport.ValidResponseCollector.addResponse(ValidResponseCollector.java:29)
        at org.infinispan.remoting.transport.impl.MultiTargetRequest.onResponse(MultiTargetRequest.java:93)
        at org.infinispan.remoting.transport.impl.RequestRepository.addResponse(RequestRepository.java:52)
        at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1402)
        at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1305)
        at org.infinispan.remoting.transport.jgroups.JGroupsTransport.access$300(JGroupsTransport.java:131)
        at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1445)
        at org.jgroups.JChannel.up(JChannel.java:784)
        at org.jgroups.fork.ForkProtocolStack.up(ForkProtocolStack.java:135)
        at org.jgroups.stack.Protocol.up(Protocol.java:309)
        at org.jgroups.protocols.FORK.up(FORK.java:142)
        at org.jgroups.protocols.FRAG3.up(FRAG3.java:165)
        at org.jgroups.protocols.FlowControl.up(FlowControl.java:343)
        at org.jgroups.protocols.pbcast.GMS.up(GMS.java:876)
        at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:243)
        at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1049)
        at org.jgroups.protocols.UNICAST3.addMessage(UNICAST3.java:772)
        at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:753)
        at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:405)
        at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:592)
        at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:132)
        at org.jgroups.protocols.FD.up(FD.java:227)
        at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:254)
        at org.jgroups.protocols.MERGE3.up(MERGE3.java:281)
        at org.jgroups.protocols.Discovery.up(Discovery.java:300)
        at org.jgroups.protocols.TP.passMessageUp(TP.java:1396)
        at org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:87)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at org.jboss.as.clustering.context.ContextReferenceExecutor.execute(ContextReferenceExecutor.java:49)
        at org.jboss.as.clustering.context.ContextualExecutor$1.run(ContextualExecutor.java:70)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 4485 from node1
        at sun.reflect.GeneratedConstructorAccessor551.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.infinispan.marshall.exts.ThrowableExternalizer.readGenericThrowable(ThrowableExternalizer.java:282)
        at org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:259)
        at org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:42)
        at org.infinispan.marshall.core.GlobalMarshaller.readWithExternalizer(GlobalMarshaller.java:728)
        at org.infinispan.marshall.core.GlobalMarshaller.readNonNullableObject(GlobalMarshaller.java:709)
        at org.infinispan.marshall.core.GlobalMarshaller.readNullableObject(GlobalMarshaller.java:358)
        at org.infinispan.marshall.core.BytesObjectInput.readObject(BytesObjectInput.java:32)
        at org.infinispan.remoting.responses.ExceptionResponse$Externalizer.readObject(ExceptionResponse.java:49)
        at org.infinispan.remoting.responses.ExceptionResponse$Externalizer.readObject(ExceptionResponse.java:41)
        at org.infinispan.marshall.core.GlobalMarshaller.readWithExternalizer(GlobalMarshaller.java:728)
        at org.infinispan.marshall.core.GlobalMarshaller.readNonNullableObject(GlobalMarshaller.java:709)
        at org.infinispan.marshall.core.GlobalMarshaller.readNullableObject(GlobalMarshaller.java:358)
        at org.infinispan.marshall.core.GlobalMarshaller.objectFromObjectInput(GlobalMarshaller.java:192)
        at org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221)
        at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1394)
        ... 28 more

1 Answer

宗项禹
2023-03-14

Could you also attach the configuration of the "localsite-appCache" channel? And could you attach a code snippet showing how the cache is referenced in the application?
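For context, a snippet of the kind being requested typically looks something like the following in a WildFly deployment. This is only a sketch: the JNDI lookup string, class name, and field names are assumptions for illustration, not taken from the question, and the exact binding depends on how the deployment declares its cache resource:

```java
import javax.annotation.Resource;
import org.infinispan.Cache;

public class BServiceCacheClient {

    // Injects the replicated cache by its WildFly JNDI binding.
    // The lookup name here is illustrative only; the real one
    // depends on the deployment's resource configuration.
    @Resource(lookup = "java:jboss/infinispan/cache/localsite-cachecontainer/bServiceCache")
    private Cache<Object, Object> bServiceCache;

    public void cacheValue(Object key, Object value) {
        // putIfAbsent matches the PutKeyValueCommand visible in the log above
        bServiceCache.putIfAbsent(key, value);
    }
}
```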
