I noticed that Firebase executes some queries very slowly, even on a small amount of data. I set the log level to Logger.Level.DEBUG to see whether I could find anything useful in the logs. There, in logcat, I found a large number of garbage-collector runs before the Firebase query even starts:
W/art: Suspending all threads took: 119.246ms
W/art: Suspending all threads took: 15.993ms
I/art: Background partial concurrent mark sweep GC freed 178740(5MB) AllocSpace objects, 6(1210KB) LOS objects, 20% free, 61MB/77MB, paused 20.261ms total 246.362ms
I/art: Background sticky concurrent mark sweep GC freed 421369(15MB) AllocSpace objects, 1(30KB) LOS objects, 11% free, 63MB/71MB, paused 3.699ms total 150.208ms
W/art: Suspending all threads took: 38.838ms
I/art: Background sticky concurrent mark sweep GC freed 234760(8MB) AllocSpace objects, 0(0B) LOS objects, 11% free, 63MB/71MB, paused 3.761ms total 124.938ms
I/art: Background sticky concurrent mark sweep GC freed 177455(6MB) AllocSpace objects, 0(0B) LOS objects, 8% free, 65MB/71MB, paused 4.031ms total 109.770ms
I/art: Background sticky concurrent mark sweep GC freed 165051(6MB) AllocSpace objects, 0(0B) LOS objects, 7% free, 65MB/71MB, paused 3.902ms total 105.397ms
W/art: Suspending all threads took: 19.577ms
W/art: Suspending all threads took: 30.319ms
I/art: Background sticky concurrent mark sweep GC freed 133737(5MB) AllocSpace objects, 0(0B) LOS objects, 6% free, 66MB/71MB, paused 34.150ms total 116.078ms
W/art: Suspending all threads took: 58.275ms
W/art: Suspending all threads took: 31.344ms
I/art: Background partial concurrent mark sweep GC freed 239886(8MB) AllocSpace objects, 12(391KB) LOS objects, 19% free, 65MB/81MB, paused 4.279ms total 308.630ms
W/art: Suspending all threads took: 19.896ms
W/art: Suspending all threads took: 17.442ms
I/art: Background sticky concurrent mark sweep GC freed 413361(15MB) AllocSpace objects, 0(0B) LOS objects, 10% free, 67MB/75MB, paused 21.411ms total 169.188ms
I/art: Background sticky concurrent mark sweep GC freed 230648(8MB) AllocSpace objects, 0(0B) LOS objects, 10% free, 67MB/75MB, paused 4.088ms total 127.504ms
I/art: Background sticky concurrent mark sweep GC freed 214282(8MB) AllocSpace objects, 0(0B) LOS objects, 9% free, 68MB/75MB, paused 3.783ms total 121.986ms
W/art: Suspending all threads took: 8.059ms
W/art: Suspending all threads took: 8.102ms
W/art: Suspending all threads took: 25.963ms
I/art: Background sticky concurrent mark sweep GC freed 160942(6MB) AllocSpace objects, 0(0B) LOS objects, 7% free, 69MB/75MB, paused 33.338ms total 135.224ms
I/art: Background sticky concurrent mark sweep GC freed 158679(5MB) AllocSpace objects, 0(0B) LOS objects, 7% free, 69MB/75MB, paused 4.614ms total 119.459ms
I/art: Background sticky concurrent mark sweep GC freed 149782(5MB) AllocSpace objects, 0(0B) LOS objects, 6% free, 70MB/75MB, paused 3.840ms total 100.132ms
W/art: Suspending all threads took: 59.880ms
I/art: Background sticky concurrent mark sweep GC freed 86107(3MB) AllocSpace objects, 9(221KB) LOS objects, 2% free, 73MB/75MB, paused 5.562ms total 66.501ms
W/art: Suspending all threads took: 16.000ms
I/art: Background sticky concurrent mark sweep GC freed 68706(2MB) AllocSpace objects, 8(204KB) LOS objects, 2% free, 73MB/75MB, paused 5.063ms total 68.463ms
I/art: Background sticky concurrent mark sweep GC freed 66336(2MB) AllocSpace objects, 10(242KB) LOS objects, 1% free, 73MB/75MB, paused 5.038ms total 69.289ms
I/art: Background sticky concurrent mark sweep GC freed 60331(2MB) AllocSpace objects, 7(177KB) LOS objects, 1% free, 74MB/75MB, paused 5.181ms total 54.786ms
W/art: Suspending all threads took: 27.598ms
I/art: Background sticky concurrent mark sweep GC freed 64552(2MB) AllocSpace objects, 7(155KB) LOS objects, 1% free, 74MB/75MB, paused 17.137ms total 81.332ms
I/art: Background sticky concurrent mark sweep GC freed 48948(1803KB) AllocSpace objects, 7(158KB) LOS objects, 1% free, 74MB/75MB, paused 6.488ms total 63.935ms
I/art: Background sticky concurrent mark sweep GC freed 46732(1694KB) AllocSpace objects, 3(59KB) LOS objects, 1% free, 74MB/75MB, paused 6.090ms total 62.455ms
I/art: Background sticky concurrent mark sweep GC freed 39782(1456KB) AllocSpace objects, 2(46KB) LOS objects, 1% free, 74MB/75MB, paused 5.956ms total 63.859ms
I/art: Background partial concurrent mark sweep GC freed 339007(11MB) AllocSpace objects, 119(3MB) LOS objects, 18% free, 70MB/86MB, paused 4.144ms total 287.479ms
I/art: Background sticky concurrent mark sweep GC freed 481371(17MB) AllocSpace objects, 83(2MB) LOS objects, 10% free, 71MB/79MB, paused 3.836ms total 136.776ms
I/art: Background sticky concurrent mark sweep GC freed 306454(10MB) AllocSpace objects, 52(1234KB) LOS objects, 10% free, 70MB/78MB, paused 5.471ms total 103.283ms
I/art: Background sticky concurrent mark sweep GC freed 186051(7MB) AllocSpace objects, 142(3MB) LOS objects, 10% free, 69MB/77MB, paused 4.630ms total 112.830ms
I/art: WaitForGcToComplete blocked for 45.513ms for cause HeapTrim
W/art: Suspending all threads took: 45.069ms
I/art: Background sticky concurrent mark sweep GC freed 214400(7MB) AllocSpace objects, 31(749KB) LOS objects, 7% free, 70MB/76MB, paused 3.877ms total 105.103ms
D/Connection: conn_1 - Sending data: {d={a=q, r=2, b={h=bHJ1mWrKhIU9GTHaIkUYeaP43Bo=, p=/, ch={hs=[bALDIOju3KaZT2xWiQLaLKH0SBo=, qNZ6/TpSUd3ssEdVt8dTAg0tkyA=, qjHFAPcW191k7pdiKTTYKcoM4jA=, YuhYIGzgf5OgmLXVSuzPPXECeWc=, KuI8A5+9hRky9NolHYvkl7bilNU=, 55dGIfQzMgxtbAU/zpVoKfl3LuA=, 9VoUeg4sRIYfhTJugtwIANcdPH0=, b4/sKMEhxWFOqBH3uMbpiWAwH30=, s5xLa798i1518s9djUrjgyb18XE=, xw69BGenjYvwg7RILHBPtwXhVyQ=, aHCtTSD3ox+WXaryBp3uu3Nfz/g=, tC0zEvCLJljHSpbfgGWIZIcYdLo=, bcOAzAwkWR8yUVLYEdTpiNXxvxE=, rbaiPzhMDh7ahB1aSsJ8cf63MZg=, IUDU7S7GA0tL3UGjxmI45ofEX24=, NzdrheN8DXqBUfY+RM3L3q7hG4Q=, ekjUn4lnbwdGwje6+cNES2VKJxE=, fI1b1+/CRsQIMT3mt/i50nbQ8Y8=, E9mvQdFfiStRtZ8eJkBHhVAqnvo=, BOF1YmJJFg2Lusty1JgUnMEw3xI=, lbwxveJ9cFhXSfT2D0WOmWrmYMM=, clZotfKSAOhwccTaUYycSTgj15M=, xkoLdSPI8W+e+plppywrVOFFb5s=, INp7uMxgaFNCX4kkBx3GRNHsRAQ=, euVHoK0S/jbYsnD61W5Wje7W7cQ=, GAvJN0eMxpQ8qKVq8hF67Rf5dWo=, HVAHhdpBFbSocGKEVpZDHhW4Ub0=, +8UAZrKp2fa2LjCNvx3ZufTEz98=, P6npjI8doaqjVHRMmP+s4jpMnEw=, s63dTSOlc9KnJwkT919JwkZcEDc=, BTwTf/vfbWhrLurgoDfAKuPiBQc=,
2L2cggm9LZpebi96iAPnNZPvrrs=, 49jkc562TuijtnlzWffNZDcc37o=, anUpNiBJRBG/eRVS+0Ltx2I6xoI=, IpeSX9YxH1qFH31lCFcgFkcmlXg=, bsYBuq040IDqT819iN2hKksZigA=, hOMsvBVfgF7ZKjbhyXakfW58v+g=, P6xez6jyWnPfinDxJiqNgMleoY4=, lmR8RESBCH6Cbjx/eAvgtw1nGy4=, LDDo8VgDjES9bAfYf0JuKa4G3us=, 9WuZH2BjjTZX4FcEZI6Og2+wglY=, eigyHjZr3DrFM/MjVtJbtf9WWcM=, 3Ode5pkh9nlg0NY6RE1f+IwiccI=, F4xesQWVS+0V5+h/W5Xd3/mUh1E=, eSRLc9btLel7qW9sMdk6iq1YA8o=, DD5MeL9tChYUuFfK0fyLlcS13vo=, 9NvnQzTkuCEXa5rlDehkdyJ3QkI=, ISlqMDakB8yeehWI+w0sAHLDYL8=, d2nv4J6j9KeNJyf/23cMsFGnxp0=, JQu5Sf3mwF/gdFxntP1sX6go+Wc=, jdZ7Ng/wSQDJC85dOMb6kHJaZVA=, OyRG9SYCDDEGfwmYYK7EnwBVfD0=, mV1cnouAhXHAg1gFL+oM6vDk0K4=, 2G7PhMcrabq8bm8ucu1rc++++jyfo=, 04r7kkwUEMCwxLl82W5lEZUFzwQ=, kxZiCIsxmYNmsn+5S3hXqKAQ1B0=, kjqNcnc87Roxx4jK5w9IfajXv18=, 5W5ltmvmCH2uJ7UzrPOMGlWhdNg=, 6BVCbfSQvEyliNsbJLksH1phDlA=, eYWv+u8Yp4vT6REToDi3ZFkCnkU=, b6b5kBTfHWT0Jv3fbn/DtHe1Two=, cR0wggN5fhYFJApgFKZMAFu9FYY=, obFhp1LRsmeYXnw6q39tcEbvIZk=, vD1zXBhHBT0u+KaR7dtPsNltkis=, /J1hZE92X92aC0GlECnsm+hF4tc=, fIDgCue4+UBUAiakpq8DrKKlb+o=, L5s5EOOBawNXDq1Qzne8+FCkP4M=, v8EVZoo16NlGwOjAwynL43pwa0Y=, OVfiRxAfMGtuLTi7WKEWN19vJ8E=, Fu+dPgUjDVDHW4CD9hBG6EjbsH4=, /hHBwp3jn88eHqWt2DtMDS2H8KQ=, NsIvujNzn18uwRkN06vhYcjTaSc=, dfASQGt5Qm3CQlIDcp1OeGRF6s4=, Sp/G1d3RFAKgMSR+//Uf+nR+Fug=, Vt3llCH3A/NECmtcbqAHVRnnXhk=, O56WoHNNa1rUEFzua7lbH5y4anA=, k1eZSSHDNS5aN9zsJ4vlE0je7cY=, 6Mn9ZqFp8hBSfZkfjEiR1dQ0oVU=, ctpxP/HfuNIGka2pVbvQee2k2eo=, h+jHwJ6Ppvb72+rmywuf4EK8gH4=, 512xoVjuUH7Rly+in9iQdlrU7nk=, vp57AKFRErjrDWG2NCGtbNlQIR8=, mqbYrJleKYpEdvT9ZDM5aUi2l/k=, vufp8Y6EXDKo/rJO0aVOL40WAWM=, 5gcElGrxGZJK7yiHEouhcz+QXh4=, JKoArtNvTCr0ET4yQhhPhuDJuqU=, 81stTPQmj0f5GkxCPrOLrOZIy7Y=, 1YluFfPncdYV1B+3Nyqksw22biI=, /T+je45nNFrVLLdD2okxP287D5U=, 26cmyM8dltqPRkeyd9kZay2z59I=, mHYxlmUbe1cwYsB9a87gxHRNb+U=, V1VGKgVsAsvBLG4kIGJ0ToMPXwk=, jhhxkLPVeJTSn9xVFEeOpwnn5+c=, I4hXdr/rhxm3rrwT8gec7+Gb+O8=, B/ke+zpb7n7zqm331PnZpzcINM0=, P1wbPYVaQK41TRLkvkYmVajrqkA=, 3pq/9cS67apkoGLJs09WDu3O6Ik=, 
y+TcFLKef1qyRAaexBfGfdl4awI=, qsZz8e8JKqKCfTACpmORWbKFWAU=, oIJS8GuJttgoOzTLOaLm0OeSQ0I=, p+bmPWYcjilFwxrbSL/l/qxhH2c=, ln3JW5ne3QxvyzCYRnzj+GaUnNg=, lFBj9a300XyDAK4WYA9xHGX1DSc=, pYdnAY/t9HjyXA9mCgNL9kKUAls=, lglAuBlXwGCDSM/xwq7mSnonoeM=, vyYQPIZb7ae828neRvMedglkQXM=, HHRnuDQuSnJIkOzgDbOw1pXwikk=, 1/TaNEsxa9eSK/l9Hd2vo0eUUuc=, zEsK0X3hzYRU7tCtAui7p20SgNQ=, Kxz1KBPUFjm73FDuaIxqHYUWcrQ=, M4T3sLsKhIjBF2n5rqcRQ3Gezy4=, ehE/3obD0woLxhxFRZImXRbLl14=, 5t5l2oAHI0se/BuavanDU8Xy+eo=, /ma9Aa7lwwmgPRIpZnK+L7r4Kkk=, VftPH6GBC+kNJ0mvgpB4KL07fv0=, Pq08isedk230HSbHIcuvtMi5aHI=, ++pnsz+wbybdUnZpaol44bhYds4=, mtRz1v6J7SbZQAI2z703JELlG74=, Q9bJeDtpf/B4O4NB2Uo5FDKmysc=, AfMe4uVvrK2cFHIdx2N/LcKwC+4=, EKaXGSUB2GGYIThPKFt3AKvZ3rU=, jqjJfCkF9aLo7eAyc8UJ2u4uOsA=, +hft27daYSvuWZ8dsgwhC7+VQj4=, dwjaXCITJqIZH1LwPplb7LNq8Ho=, Pbz4g+DcxprOt2ef8aUCNRyzfFE=, UB9ILOgzQZ+fA3G6I2Rcj9HyVmU=, ht3jvak0cAeSkuX9c7+LNzPCopk=, 7bEDyPX/QjUJFnkdoQ2Qubq8ang=, RHn75CDN80Jz3UQIVXGQPyIrgbg=, BXay0TBI3PlUH+bSsbd0UtvBd8I=, W8PxyIuVGta61mBe5JzdqLS7+Zg=, HzHQx4oxAirWHzbx5Atg3YsgdvU=, Swu1s+1wuaIPJTG67S
This causes relatively long delays.

How can I solve this?
From the log output you provided (thanks for that!) it looks like you are actually listening to your entire Firebase database (the "p=/" in the last line indicates you are listening at /). This reads the entire contents of the offline cache and performs some processing on it, so that we can send a hash of the cached data to the server to minimize the bandwidth used. Depending on the size of the data, this may well trigger a fair amount of GC activity.
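A hedged sketch of the usual remedy, using the Firebase Android SDK (the `messages` path and the limit of 50 are made-up examples): attach the listener below the root and bound the result set, so only a small slice of the cache has to be read and hashed.

```java
import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.FirebaseDatabase;
import com.google.firebase.database.Query;
import com.google.firebase.database.ValueEventListener;

public class ScopedListener {
    public static void listen() {
        // Listening at "/" (which is what p=/ in the log shows) syncs the
        // whole cache. Instead, scope the listener to one subtree and cap
        // the number of children it returns:
        Query recent = FirebaseDatabase.getInstance()
                .getReference("messages")   // hypothetical path
                .limitToLast(50);
        recent.addValueEventListener(new ValueEventListener() {
            @Override public void onDataChange(DataSnapshot snapshot) {
                // only the bounded subset arrives here
            }
            @Override public void onCancelled(DatabaseError error) { }
        });
    }
}
```

The narrower the listened-to path and the tighter the query limits, the less cached data the SDK has to hash before the first server round-trip.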
Java 15 makes ZGC, the Z Garbage Collector, a standard feature; before Java 15 it was experimental. It is a low-latency, highly scalable garbage collector. ZGC was introduced in Java 11 as an experimental feature because the developer community considered it too big to release earlier. ZGC performs well and works efficiently even for applications that handle massive amounts of data, such as machine-learning workloads, and it ensures there are no long pauses due to garbage collection while data is being processed. It supports Linux and Windows. Since its introduction, many improvements have been made to this garbage collector, for example: concurrent class unloading, uncommitting unused memory, support for class-data sharing, NUMA awareness, multi-threaded heap pre-touch, and an increase of the maximum heap size limit from 4 TB.
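As a small illustration (the class name is mine), the collectors the JVM is actually running with can be inspected through the standard management beans. Run with `java -XX:+UseZGC GcBeans` on JDK 15+ (on JDK 11-14, additionally pass `-XX:+UnlockExperimentalVMOptions`) and the ZGC beans appear instead of the default collector's:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcBeans {
    public static void main(String[] args) {
        // Each bean corresponds to one collector (or collection type)
        // the running JVM uses.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + ": " + gc.getCollectionCount()
                    + " collections, " + gc.getCollectionTime() + " ms total");
        }
    }
}
```

Without any GC flag the output lists the default collector's beans (e.g. G1's on recent JDKs), which makes this a quick way to confirm which collector a deployment actually selected.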
In a BPM-based application currently running in production (deployed on JBoss 4.2.3), we have noticed performance problems caused by long GC pause cycles during peak load. Analyzing further, I got the following output from the jstat utility against the running JVM instance:
/usr/jdk1.6.0-x64/bin/jstat -gccapacity 5583
NGCMN NGCMX NGC S0C S1C EC OGCMN OGCMX OGC OC
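For reference, `-gccapacity` (as used above) shows generation sizing; pairing it with `-gcutil` sampling shows which generation fills up when the pauses hit. A sketch, reusing the PID 5583 from the question (the 1000 ms interval is my choice):

```shell
/usr/jdk1.6.0-x64/bin/jstat -gccapacity 5583      # generation capacities
/usr/jdk1.6.0-x64/bin/jstat -gcutil 5583 1000     # utilization and GC time, sampled every 1000 ms
```

If the old generation (O column) climbs toward 100% before each long pause, the pauses are likely full collections driven by old-gen exhaustion rather than young-gen churn.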
The role of the Kubernetes garbage collector is to delete objects that once had an owner but no longer do. Note: garbage collection is a beta feature and is enabled by default in Kubernetes 1.4 and later. Owners and dependents: some Kubernetes objects are owners of other objects. For example, a ReplicaSet is the owner of a set of Pods. An object that has an owner is called a dependent of that owner.
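As a sketch of how this ownership is recorded, a dependent carries its owner in `metadata.ownerReferences`; the garbage collector follows these references when deciding what to delete. The names and uid below are placeholders, and the ReplicaSet API group shown is the one used by 1.4-era clusters:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-repset-abcde                # placeholder Pod name
  ownerReferences:
  - apiVersion: extensions/v1beta1     # ReplicaSet group/version in Kubernetes 1.4-era clusters
    kind: ReplicaSet
    name: my-repset                    # placeholder owner name
    uid: 00000000-0000-0000-0000-000000000000   # placeholder; set by the API server
    controller: true
```

When the ReplicaSet named in the reference is deleted, the garbage collector sees the Pod's owner is gone and deletes the Pod (subject to the chosen deletion propagation policy).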
I'm running into a problem where a JNI program randomly runs out of memory. It is a 32-bit Java program that reads a file and does some image processing, typically using 250 MB to 1 GB. All of those objects are then discarded, and the program makes a series of calls into a JNI program that typically needs 100-250 MB. When run interactively, I have never seen the problem. However, when running batch operations over many files consecutively, the JNI program randomly runs out of memory. It may have a memory problem for one or two files, then run fine for the next 10 files, and then fail again.
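A common cause in this shape of workload is that the Java heap grows to fill much of the 32-bit address space while the discarded objects wait to be collected, leaving too little address space for the native side. Capping `-Xmx` helps; so does releasing native-side memory deterministically instead of relying on finalization. A minimal sketch, with a direct buffer standing in for the real JNI allocation and all names made up:

```java
import java.nio.ByteBuffer;

// Hypothetical wrapper: releasing the native allocation in close() ties
// its lifetime to the batch step, not to whenever the GC runs.
class NativeScratch implements AutoCloseable {
    private ByteBuffer buf;
    NativeScratch(int bytes) { buf = ByteBuffer.allocateDirect(bytes); }
    ByteBuffer buffer() { return buf; }
    @Override public void close() { buf = null; } // real code would free the JNI memory here
}

public class Batch {
    static boolean processOne(int size) {
        // try-with-resources guarantees close() runs before the next file starts
        try (NativeScratch s = new NativeScratch(size)) {
            return s.buffer().capacity() == size;  // stand-in for the image-processing call
        }
    }
    public static void main(String[] args) {
        System.out.println(processOne(1 << 20) ? "ok" : "failed"); // prints: ok
    }
}
```

With real JNI code, the same pattern means exposing an explicit native `free` entry point and calling it from `close()`, rather than freeing in a finalizer.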
In our Kafka broker setup, GC takes 20 ms on average but randomly spikes to 1-2 seconds; extreme cases last 9 seconds. This happens fairly randomly, about 15 times a day on average. I have tried GCEasy, but it gave no insights. My memory usage is at 20%, yet the process still uses swap even though memory is available. Any input on how to minimize this is appreciated.
JVM options:
GC log:
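The symptom described (swap in use while memory is free) often means heap pages were swapped out and a GC that touches them stalls on disk I/O, which would explain multi-second outliers from a 20 ms baseline. A hedged sketch of commonly suggested mitigations, using standard Linux and JVM knobs (heap sizes below are placeholders, not tuned values for this broker):

```shell
# Make the kernel reluctant to swap the broker's heap out
# (the Kafka docs suggest a low but nonzero vm.swappiness):
sysctl vm.swappiness=1          # persist the setting in /etc/sysctl.conf

# JVM options for the broker process:
#   -Xms6g -Xmx6g               # equal min/max heap so it is committed once
#   -XX:+AlwaysPreTouch         # touch all heap pages at startup, not during GC
#   -Xlog:gc* (JDK 9+) or -XX:+PrintGCDetails (JDK 8)  # keep GC logs to correlate spikes
```

If the spikes persist with swapping ruled out, the GC log timestamps can then be correlated with other host activity (page cache flushes, log segment deletion) rather than heap pressure.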