Connecting to a local Redis, Lettuce takes nearly 5000 ms, while Jedis only needs about 30 ms. I am referring to the following example (ConnectToRedis):
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import lombok.extern.slf4j.Slf4j;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;
import org.springframework.util.StopWatch;

@Component
@Slf4j
class LettuceRunner implements CommandLineRunner {
    @Override
    public void run(String... args) throws Exception {
        StopWatch watch = new StopWatch();
        RedisClient redisClient = RedisClient.create("redis://localhost:6379");
        // Time only the connect() call itself.
        watch.start();
        StatefulRedisConnection<String, String> connection = redisClient.connect();
        watch.stop();
        log.info("lettuce : {} ms", watch.getLastTaskTimeMillis());
        connection.close();
        redisClient.shutdown();
    }
}
import lombok.extern.slf4j.Slf4j;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;
import org.springframework.util.StopWatch;
import redis.clients.jedis.Jedis;

@Component
@Slf4j
class JedisRunner implements CommandLineRunner {
    @Override
    public void run(String... args) throws Exception {
        StopWatch watch = new StopWatch();
        // Time both the socket creation and one command round-trip.
        watch.start();
        Jedis jedis = new Jedis("localhost");
        jedis.get("redis_key");
        watch.stop();
        log.info("jedis : {} ms", watch.getLastTaskTimeMillis());
        jedis.close();
    }
}
2020-08-14 17:02:28.236  INFO 21760 --- [           main] com.example.demo.JedisRunner             : jedis : 27 ms
2020-08-14 17:02:33.318  INFO 21760 --- [           main] com.example.demo.LettuceRunner           : lettuce : 4815 ms
Because Lettuce is built on Netty, most of that time goes into bootstrapping Netty's infrastructure.
Checking the DEBUG log confirms this: the bulk of the time is spent inside the io.netty packages (a short demo follows the log):
2020-08-15 00:54:06.030 DEBUG 728 --- [ main] i.l.c.r.DefaultEventLoopGroupProvider : Creating executor io.netty.util.concurrent.DefaultEventExecutorGroup
2020-08-15 00:54:06.031 DEBUG 728 --- [ main] io.lettuce.core.RedisClient : Trying to get a Redis connection for: RedisURI [host='localhost', port=6379]
2020-08-15 00:54:06.120 DEBUG 728 --- [ main] io.lettuce.core.EpollProvider : Starting without optional epoll library
2020-08-15 00:54:06.122 DEBUG 728 --- [ main] io.lettuce.core.KqueueProvider : Starting without optional kqueue library
2020-08-15 00:54:06.123 DEBUG 728 --- [ main] i.l.c.r.DefaultEventLoopGroupProvider : Allocating executor io.netty.channel.nio.NioEventLoopGroup
2020-08-15 00:54:06.123 DEBUG 728 --- [ main] i.l.c.r.DefaultEventLoopGroupProvider : Creating executor io.netty.channel.nio.NioEventLoopGroup
2020-08-15 00:54:06.124 DEBUG 728 --- [ main] i.n.channel.MultithreadEventLoopGroup : -Dio.netty.eventLoopThreads: 12
2020-08-15 00:54:06.129 DEBUG 728 --- [ main] io.netty.channel.nio.NioEventLoop : -Dio.netty.noKeySetOptimization: false
2020-08-15 00:54:06.129 DEBUG 728 --- [ main] io.netty.channel.nio.NioEventLoop : -Dio.netty.selectorAutoRebuildThreshold: 512
2020-08-15 00:54:06.421 DEBUG 728 --- [ main] i.l.c.r.DefaultEventLoopGroupProvider : Adding reference to io.netty.channel.nio.NioEventLoopGroup@7c59cf66, existing ref count 0
2020-08-15 00:54:06.431 DEBUG 728 --- [ main] io.lettuce.core.RedisClient : Resolved SocketAddress localhost:6379 using RedisURI [host='localhost', port=6379]
2020-08-15 00:54:06.432 DEBUG 728 --- [ main] io.lettuce.core.RedisClient : Connecting to Redis at localhost:6379
2020-08-15 00:54:06.435 DEBUG 728 --- [ main] io.netty.channel.DefaultChannelId : -Dio.netty.processId: 728 (auto-detected)
2020-08-15 00:54:06.437 DEBUG 728 --- [ main] io.netty.util.NetUtil : -Djava.net.preferIPv4Stack: false
2020-08-15 00:54:06.437 DEBUG 728 --- [ main] io.netty.util.NetUtil : -Djava.net.preferIPv6Addresses: false
2020-08-15 00:54:06.659 DEBUG 728 --- [ main] io.netty.util.NetUtil : Loopback interface: lo (Software Loopback Interface 1, 127.0.0.1)
2020-08-15 00:54:06.660 DEBUG 728 --- [ main] io.netty.util.NetUtil : Failed to get SOMAXCONN from sysctl and file \proc\sys\net\core\somaxconn. Default: 200
2020-08-15 00:54:06.898 DEBUG 728 --- [ main] io.netty.channel.DefaultChannelId : -Dio.netty.machineId: 00:50:56:ff:fe:c0:00:08 (auto-detected)
2020-08-15 00:54:06.911 DEBUG 728 --- [ main] io.netty.buffer.ByteBufUtil : -Dio.netty.allocator.type: pooled
2020-08-15 00:54:06.912 DEBUG 728 --- [ main] io.netty.buffer.ByteBufUtil : -Dio.netty.threadLocalDirectBufferSize: 0
2020-08-15 00:54:06.912 DEBUG 728 --- [ main] io.netty.buffer.ByteBufUtil : -Dio.netty.maxThreadLocalCharBufferSize: 16384
2020-08-15 00:54:06.928 DEBUG 728 --- [ioEventLoop-8-1] io.netty.util.Recycler : -Dio.netty.recycler.maxCapacityPerThread: 4096
2020-08-15 00:54:06.928 DEBUG 728 --- [ioEventLoop-8-1] io.netty.util.Recycler : -Dio.netty.recycler.maxSharedCapacityFactor: 2
2020-08-15 00:54:06.928 DEBUG 728 --- [ioEventLoop-8-1] io.netty.util.Recycler : -Dio.netty.recycler.linkCapacity: 16
2020-08-15 00:54:06.928 DEBUG 728 --- [ioEventLoop-8-1] io.netty.util.Recycler : -Dio.netty.recycler.ratio: 8
2020-08-15 00:54:06.928 DEBUG 728 --- [ioEventLoop-8-1] io.netty.util.Recycler : -Dio.netty.recycler.delayedQueue.ratio: 8
2020-08-15 00:54:06.933 DEBUG 728 --- [ioEventLoop-8-1] io.netty.buffer.AbstractByteBuf : -Dio.netty.buffer.checkAccessible: true
2020-08-15 00:54:06.933 DEBUG 728 --- [ioEventLoop-8-1] io.netty.buffer.AbstractByteBuf : -Dio.netty.buffer.checkBounds: true
2020-08-15 00:54:06.933 DEBUG 728 --- [ioEventLoop-8-1] i.n.util.ResourceLeakDetectorFactory : Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@20e9fc6c
2020-08-15 00:54:06.950 DEBUG 728 --- [ioEventLoop-8-1] io.lettuce.core.protocol.CommandHandler : [channel=0x1ced470d, [id: 0x7bd077d9] (inactive), chid=0x1] channelRegistered()
2020-08-15 00:54:06.953 DEBUG 728 --- [ioEventLoop-8-1] io.lettuce.core.protocol.CommandHandler : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, chid=0x1] channelActive()
2020-08-15 00:54:06.954 DEBUG 728 --- [ioEventLoop-8-1] i.lettuce.core.protocol.DefaultEndpoint : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, epid=0x1] activateEndpointAndExecuteBufferedCommands 0 command(s) buffered
2020-08-15 00:54:06.954 DEBUG 728 --- [ioEventLoop-8-1] i.lettuce.core.protocol.DefaultEndpoint : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, epid=0x1] activating endpoint
2020-08-15 00:54:06.954 DEBUG 728 --- [ioEventLoop-8-1] i.lettuce.core.protocol.DefaultEndpoint : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, epid=0x1] flushCommands()
2020-08-15 00:54:06.954 DEBUG 728 --- [ioEventLoop-8-1] i.lettuce.core.protocol.DefaultEndpoint : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, epid=0x1] flushCommands() Flushing 0 commands
2020-08-15 00:54:06.954 DEBUG 728 --- [ioEventLoop-8-1] i.l.core.protocol.ConnectionWatchdog : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, last known addr=localhost/127.0.0.1:6379] channelActive()
2020-08-15 00:54:06.954 DEBUG 728 --- [ioEventLoop-8-1] io.lettuce.core.protocol.CommandHandler : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, chid=0x1] channelActive() done
2020-08-15 00:54:06.955 DEBUG 728 --- [ioEventLoop-8-1] io.lettuce.core.RedisClient : Connecting to Redis at localhost:6379: Success
2020-08-15 00:54:06.956 INFO 728 --- [ main] c.h.s.c.c.CacheStudyApplicationTests : lettuce : 925 ms
2020-08-15 00:54:06.956 DEBUG 728 --- [ main] io.lettuce.core.RedisChannelHandler : close()
2020-08-15 00:54:06.956 DEBUG 728 --- [ main] io.lettuce.core.RedisChannelHandler : closeAsync()
2020-08-15 00:54:06.956 DEBUG 728 --- [ main] i.lettuce.core.protocol.DefaultEndpoint : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, epid=0x1] closeAsync()
2020-08-15 00:54:06.957 DEBUG 728 --- [ioEventLoop-8-1] i.l.core.protocol.ConnectionWatchdog : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, last known addr=localhost/127.0.0.1:6379] userEventTriggered(ctx, io.lettuce.core.ConnectionEvents$Activated@1cda757f)
2020-08-15 00:54:06.958 DEBUG 728 --- [ main] io.lettuce.core.RedisClient : Initiate shutdown (0, 2, SECONDS)
2020-08-15 00:54:06.959 DEBUG 728 --- [ioEventLoop-8-1] io.lettuce.core.protocol.CommandHandler : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, chid=0x1] channelInactive()
2020-08-15 00:54:06.959 DEBUG 728 --- [ioEventLoop-8-1] i.lettuce.core.protocol.DefaultEndpoint : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, epid=0x1] deactivating endpoint handler
2020-08-15 00:54:06.960 DEBUG 728 --- [ioEventLoop-8-1] io.lettuce.core.protocol.CommandHandler : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, chid=0x1] channelInactive() done
2020-08-15 00:54:06.960 DEBUG 728 --- [ioEventLoop-8-1] i.l.core.protocol.ConnectionWatchdog : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, last known addr=localhost/127.0.0.1:6379] channelInactive()
2020-08-15 00:54:06.960 DEBUG 728 --- [ioEventLoop-8-1] i.l.core.protocol.ConnectionWatchdog : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, last known addr=localhost/127.0.0.1:6379] Reconnect scheduling disabled
2020-08-15 00:54:06.960 DEBUG 728 --- [ioEventLoop-8-1] io.lettuce.core.protocol.CommandHandler : [channel=0x1ced470d, /127.0.0.1:2106 -> localhost/127.0.0.1:6379, chid=0x1] channelUnregistered()
2020-08-15 00:54:06.961 DEBUG 728 --- [ main] i.l.c.resource.DefaultClientResources : Initiate shutdown (0, 2, SECONDS)
2020-08-15 00:54:06.963 DEBUG 728 --- [ main] i.l.c.r.DefaultEventLoopGroupProvider : Initiate shutdown (0, 2, SECONDS)
2020-08-15 00:54:06.963 DEBUG 728 --- [ main] i.l.c.r.DefaultEventLoopGroupProvider : Release executor io.netty.channel.nio.NioEventLoopGroup@7c59cf66
2020-08-15 00:54:06.965 DEBUG 728 --- [ioEventLoop-8-1] io.netty.buffer.PoolThreadCache : Freed 1 thread-local buffer(s) from thread: lettuce-nioEventLoop-8-1
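The log suggests the cost is a one-time event-loop bootstrap, not a per-connection cost. As a minimal sketch (assuming a local Redis on the default port; the class name and StopWatch task names here are illustrative, not from the original post), timing a second connect() on the same RedisClient should show it completing in a few milliseconds, since the Netty event loops already exist:

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import org.springframework.util.StopWatch;

class LettuceWarmupDemo {
    public static void main(String[] args) {
        RedisClient redisClient = RedisClient.create("redis://localhost:6379");
        StopWatch watch = new StopWatch();

        watch.start("first connect");   // pays the Netty bootstrap cost (event loops, channel setup)
        StatefulRedisConnection<String, String> first = redisClient.connect();
        watch.stop();

        watch.start("second connect");  // reuses the already-initialized event loop group
        StatefulRedisConnection<String, String> second = redisClient.connect();
        watch.stop();

        System.out.println(watch.prettyPrint());
        first.close();
        second.close();
        redisClient.shutdown();
    }
}

If the second connect is fast, the 4815 ms above is mostly the fixed startup price of the Netty stack, not something paid on every connection.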
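A related mitigation (my assumption based on the DefaultClientResources and DefaultEventLoopGroupProvider lines in the log, not something the original post tested): create the ClientResources yourself and share them, so multiple clients reuse one set of Netty event loops instead of each paying its own bootstrap:

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;

class SharedResourcesDemo {
    public static void main(String[] args) {
        // One shared set of Netty event loops/timers for the whole application.
        ClientResources resources = DefaultClientResources.create();

        RedisClient clientA = RedisClient.create(resources, "redis://localhost:6379");
        RedisClient clientB = RedisClient.create(resources, "redis://localhost:6379");

        StatefulRedisConnection<String, String> a = clientA.connect();
        StatefulRedisConnection<String, String> b = clientB.connect(); // no second Netty bootstrap
        a.sync().ping();
        b.sync().ping();

        a.close();
        b.close();
        clientA.shutdown();
        clientB.shutdown();
        // Clients created with shared ClientResources do not shut them down; do it explicitly.
        resources.shutdown();
    }
}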