Question:

Kubernetes on Ubuntu: problem with microservices talking to other hosts via Consul

齐才艺
2023-03-14

I have been going around in circles for a few weeks now without making any progress on the following problem:

This video sums it up: https://www.youtube.com/watch?v=48gb1HBHuC8

The code/scripts themselves have been updated since then, though. There are various shell scripts.

The microservice application is written in Micronaut, and it works fine when executed in the documented way outside of Kubernetes. (So we know it does work.)

Now I am trying to get it working through Kubernetes, and I end up with the following:

kubectl get svc
NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                   AGE
billing                      ClusterIP   10.104.228.223   <none>        8085/TCP                                                                  3h
front                        ClusterIP   10.107.198.62    <none>        8080/TCP                                                                  8m
kafka-service                ClusterIP   None             <none>        9093/TCP                                                                  3h
kind-cheetah-consul-dns      ClusterIP   10.101.52.36     <none>        53/TCP,53/UDP                                                             3h
kind-cheetah-consul-server   ClusterIP   None             <none>        8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP   3h
kind-cheetah-consul-ui       ClusterIP   10.97.158.51     <none>        80/TCP                                                                    3h
kubernetes                   ClusterIP   10.96.0.1        <none>        443/TCP                                                                   3h
mongodb                      ClusterIP   10.104.205.91    <none>        27017/TCP                                                                 3h
react                        ClusterIP   10.106.74.166    <none>        3000/TCP                                                                  3h
stock                        ClusterIP   10.109.203.36    <none>        8083/TCP                                                                  9m
waiter                       ClusterIP   10.107.166.108   <none>        8084/TCP                                                                  3h
zipkin-deployment            NodePort    10.108.102.81    <none>        9411:31919/TCP                                                            3h
zk-cs                        ClusterIP   10.100.139.233   <none>        2181/TCP                                                                  3h
zk-hs                        ClusterIP   None             <none>        2888/TCP,3888/TCP                                                         3h

Note the service names front and stock; these are the two we will focus on.

These were previously called front-deployment and stock-deployment as services. They were renamed because, as you can see, according to Consul:

stock-675d778b7d-bg98c:8083
stock:8083

These are resolvable names: in this case the stock deployment resolves to the IP 10.109.203.36, and is referred to below simply as stock:

We have the following pods:

kubectl get pod
NAME                                 READY   STATUS    RESTARTS   AGE
billing-59b66cb85d-24mnz             1/1     Running   13         3h
curl-775f9567b5-vzclh                1/1     Running   2          27m
front-7c6d588fd4-ftk7n               1/1     Running   2          18m
kafka-0                              1/1     Running   13         3h
kind-cheetah-consul-server-0         1/1     Running   4          3h
kind-cheetah-consul-wgwfk            1/1     Running   4          3h
mongodb-744f8f5d4-9mgh2              1/1     Running   4          3h
react-6b7f565d96-h5khb               1/1     Running   4          3h
stock-675d778b7d-bg98c               1/1     Running   2          18m
waiter-584b466754-bzs7s              1/1     Running   13         3h
zipkin-deployment-5bf954f879-tbhdf   1/1     Running   4          3h
zk-0    

If I run:

kubectl attach curl-775f9567b5-vzclh -c curl -i -t
If you don't see a command prompt, try pressing enter.
[ root@curl-775f9567b5-vzclh:/ ]$ nslookup stock
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      stock
Address 1: 10.109.203.36 stock.default.svc.cluster.local
[ root@curl-775f9567b5-vzclh:/ ]$ nslookup front
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      front
Address 1: 10.107.198.62 front.default.svc.cluster.local

If I run:

kubectl exec front-7c6d588fd4-ftk7n -- nslookup stock
nslookup: can't resolve '(null)': Name does not resolve

Name:      stock
Address 1: 10.109.203.36 stock.default.svc.cluster.local


$ kubectl exec stock-675d778b7d-bg98c -- nslookup front
nslookup: can't resolve '(null)': Name does not resolve

Name:      front
Address 1: 10.107.198.62 front.default.svc.cluster.local

Using any of these approaches, DNS appears to be working fine.
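For what it's worth, the same lookup can be reproduced from inside the JVM, which would rule out any difference between the pod's resolver and the application's. A minimal sketch (using `localhost` as a placeholder so it runs anywhere; inside the cluster you would pass `stock` or `front`):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsCheck {
    // Resolve a hostname the same way the in-pod nslookup does.
    // Inside the cluster you'd pass a service name like "stock";
    // "localhost" is used here only so the sketch runs anywhere.
    static String resolve(String host) {
        try {
            return InetAddress.getByName(host).getHostAddress();
        } catch (UnknownHostException e) {
            return "unresolvable: " + host;
        }
    }

    public static void main(String[] args) {
        System.out.println(resolve("localhost")); // 127.0.0.1 on most systems
    }
}
```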

If I run

minikube ssh
                         _             _            
            _         _ ( )           ( )           
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __  
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ curl 10.109.203.36:8083/stock/lookup/Budweiser
{"name":"Budweiser","bottles":1000,"barrels":2.0,"availablePints":654.636}$ 

The problem lies here:

 curl 10.107.198.62:8080/lookup/Budweiser
{"message":"Internal Server Error: The source Publisher is empty"}$ 
$ 

The curl above invokes the beer front-end application's GatewayController method lookup, which calls stockControllerClient.find; that in turn calls the StockController in the beer stock application:

@Get("/lookup/{name}")
@ContinueSpan
public Maybe<BeerStock> lookup(@SpanTag("gateway.beerLookup") @NotBlank String name) {
    System.out.println("Looking up beer for "+name+" "+new Date());
    return stockControllerClient.find(name)
            .onErrorReturnItem(new BeerStock());
}
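Incidentally, `onErrorReturnItem` only fires on an error signal; a `Maybe` that completes empty sails straight past it, and forcing a value out of the empty stream is what raises the `NoSuchElementException: The source Publisher is empty` that shows up in the logs. A minimal sketch of the empty-vs-error distinction, using `java.util.Optional` as a stand-in for an empty `Maybe` (names here are illustrative, not from the project):

```java
import java.util.NoSuchElementException;
import java.util.Optional;

public class EmptyVsError {
    // An empty Maybe is like Optional.empty(): no error was signalled,
    // so an error-only handler such as onErrorReturnItem never fires,
    // and forcing a value out raises NoSuchElementException -- the same
    // exception class seen in the front app's stack trace.
    static String forceValue(Optional<String> maybe) {
        return maybe.get(); // throws NoSuchElementException when empty
    }

    static String withEmptyFallback(Optional<String> maybe) {
        // The RxJava analogue would be defaultIfEmpty(new BeerStock())
        return maybe.orElse("fallback BeerStock");
    }

    public static void main(String[] args) {
        try {
            forceValue(Optional.empty());
        } catch (NoSuchElementException e) {
            System.out.println("empty source: " + e.getClass().getSimpleName());
        }
        System.out.println(withEmptyFallback(Optional.empty()));
    }
}
```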

I can see that it is trying to call the client:

 kubectl logs front-7c6d588fd4-ftk7n
11:54:27.629 [main] INFO  i.m.context.env.DefaultEnvironment - Established active environments: [cloud, k8s]
11:54:31.662 [main] INFO  io.micronaut.runtime.Micronaut - Startup completed in 4023ms. Server Running: http://front-7c6d588fd4-ftk7n:8080
11:54:32.168 [nioEventLoopGroup-1-3] INFO  i.m.d.registration.AutoRegistration - Registered service [gateway] with Consul
Looking up beer for Budweiser Tue Nov 27 12:13:38 GMT 2018
12:13:38.851 [nioEventLoopGroup-1-14] ERROR i.m.h.s.netty.RoutingInBoundHandler - Unexpected error occurred: The source Publisher is empty
java.util.NoSuchElementException: The source Publisher is empty

But none of the actual client methods seem to be able to connect to the remote services.

The main problem is that I am not sure at what point things go wrong here: the HTTP client simply cannot connect to the remote services. (When Consul was incorrectly configured, the applications failed to register themselves with it and also failed to start up at all.)

Versions:

 kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}


 $ helm version
    Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
    Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}


$ minikube version
minikube version: v0.30.0

The following port forwards to localhost are in place:

ps auwx|grep kubectl
xxx       6916  0.0  0.1  50584  9952 pts/4    Sl   11:51   0:00 kubectl port-forward kind-cheetah-consul-server-0 8500:8500
xxx       7332  0.0  0.1  49524  9936 pts/4    Sl   11:52   0:00 kubectl port-forward react-6b7f565d96-h5khb 3000:3000
xxx       8704  0.0  0.1  49524  9644 pts/4    Sl   11:55   0:00 kubectl port-forward front-7c6d588fd4-ftk7n 8080:8080

As a point of interest, I enabled HTTP client tracing and hit the front application's current ip:8080/stock; these are the logs produced:

 09:34:27.929 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.discovery.event.ServiceStartedEvent] of candidate Definition: io.micronaut.health.HeartbeatTask
09:34:27.929 [pool-1-thread-1] TRACE i.m.context.DefaultBeanContext - Existing bean io.micronaut.health.HeartbeatTask@363a3d15 does not match qualifier <HeartbeatEvent> for type io.micronaut.context.event.ApplicationEventListener
09:34:27.929 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.runtime.server.event.ServerStartupEvent] of candidate Definition: io.micronaut.discovery.consul.ConsulServiceInstanceList
09:34:27.929 [pool-1-thread-1] TRACE i.m.context.DefaultBeanContext - Existing bean io.micronaut.discovery.consul.ConsulServiceInstanceList@5d01ea21 does not match qualifier <HeartbeatEvent> for type io.micronaut.context.event.ApplicationEventListener
09:34:27.929 [pool-1-thread-1] DEBUG i.m.context.DefaultBeanContext - Qualifying bean [io.micronaut.context.event.ApplicationEventListener] from candidates [Definition: io.micronaut.discovery.consul.ConsulServiceInstanceList, Definition: io.micronaut.discovery.consul.registration.ConsulAutoRegistration, Definition: io.micronaut.http.client.scope.ClientScope, Definition: io.micronaut.health.HeartbeatTask, Definition: io.micronaut.runtime.context.scope.refresh.RefreshScope] for qualifier: <HeartbeatEvent> 
09:34:27.930 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.runtime.server.event.ServerStartupEvent] of candidate Definition: io.micronaut.discovery.consul.ConsulServiceInstanceList
09:34:27.930 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.runtime.context.scope.refresh.RefreshEvent] of candidate Definition: io.micronaut.http.client.scope.ClientScope
09:34:27.930 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.discovery.event.ServiceStartedEvent] of candidate Definition: io.micronaut.health.HeartbeatTask
09:34:27.930 [pool-1-thread-1] TRACE i.m.i.q.TypeArgumentQualifier - Bean type interface io.micronaut.context.event.ApplicationEventListener is not compatible with candidate generic types [class io.micronaut.runtime.context.scope.refresh.RefreshEvent] of candidate Definition: io.micronaut.runtime.context.scope.refresh.RefreshScope
09:34:27.930 [pool-1-thread-1] DEBUG i.m.context.DefaultBeanContext - Found 1 beans for type [<HeartbeatEvent> io.micronaut.context.event.ApplicationEventListener]: [io.micronaut.discovery.consul.registration.ConsulAutoRegistration@3402b4c9] 
09:34:27.930 [pool-1-thread-1] TRACE i.m.c.e.ApplicationEventPublisher - Established event listeners [io.micronaut.discovery.consul.registration.ConsulAutoRegistration@3402b4c9] for event: io.micronaut.health.HeartbeatEvent[source=io.micronaut.http.server.netty.NettyEmbeddedServerInstance@3f1ddac2]
09:34:27.930 [pool-1-thread-1] TRACE i.m.c.e.ApplicationEventPublisher - Invoking event listener [io.micronaut.discovery.consul.registration.ConsulAutoRegistration@3402b4c9] for event: io.micronaut.health.HeartbeatEvent[source=io.micronaut.http.server.netty.NettyEmbeddedServerInstance@3f1ddac2]
09:34:27.930 [pool-1-thread-1] TRACE i.m.c.e.PropertySourcePropertyResolver - No value found for property: vcap.application.instance_id
09:34:27.931 [pool-1-thread-1] TRACE i.m.aop.chain.InterceptorChain - Intercepted method [Publisher pass(String checkId,String note)] invocation on target: io.micronaut.discovery.consul.client.v1.AbstractConsulClient$Intercepted@47b179d7
09:34:27.931 [pool-1-thread-1] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.retry.intercept.RecoveryInterceptor@280d9edc] in chain for method invocation: Publisher pass(String checkId,String note)
09:34:27.931 [pool-1-thread-1] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.http.client.interceptor.HttpClientIntroductionAdvice@6a282fdd] in chain for method invocation: Publisher pass(String checkId,String note)
09:34:27.938 [nioEventLoopGroup-1-4] DEBUG i.m.d.registration.AutoRegistration - Successfully reported passing state to Consul
09:34:30.602 [nioEventLoopGroup-1-12] DEBUG i.m.h.server.netty.NettyHttpServer - Server waiter-7dd7998f77-bfkbt:8084 Received Request: GET /waiter/beer/a
09:34:30.602 [nioEventLoopGroup-1-12] DEBUG i.m.h.s.netty.RoutingInBoundHandler - Matching route GET - /waiter/beer/a
09:34:30.604 [nioEventLoopGroup-1-12] DEBUG i.m.h.s.netty.RoutingInBoundHandler - Matched route GET - /waiter/beer/a to controller class micronaut.demo.beer.$WaiterControllerDefinition$Intercepted
09:34:30.606 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Intercepted method [Single serveBeerToCustomer(String customerName)] invocation on target: micronaut.demo.beer.$WaiterControllerDefinition$Intercepted@a624fe7
09:34:30.606 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.validation.ValidatingInterceptor@6642e95d] in chain for method invocation: Single serveBeerToCustomer(String customerName)
09:34:30.607 [nioEventLoopGroup-1-12] TRACE o.h.v.i.e.c.SimpleConstraintTree - Validating value a against constraint defined by ConstraintDescriptorImpl{annotation=j.v.c.NotBlank, payloads=[], hasComposingConstraints=true, isReportAsSingleInvalidConstraint=false, elementType=PARAMETER, definedOn=DEFINED_IN_HIERARCHY, groups=[interface javax.validation.groups.Default], attributes={groups=[Ljava.lang.Class;@71cccd2d, message={javax.validation.constraints.NotBlank.message}, payload=[Ljava.lang.Class;@5044372c}, constraintType=GENERIC, valueUnwrapping=DEFAULT}.
09:34:30.608 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.aop.chain.InterceptorChain$$Lambda$449/1045761764@6d4672c0] in chain for method invocation: Single serveBeerToCustomer(String customerName)
09:34:30.608 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Intercepted method [HttpResponse addBeerToCustomerBill(BeerItem beer,String customerName)] invocation on target: micronaut.demo.beer.client.TicketControllerClient$Intercepted@eaba75d
09:34:30.608 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.retry.intercept.RecoveryInterceptor@280d9edc] in chain for method invocation: HttpResponse addBeerToCustomerBill(BeerItem beer,String customerName)
09:34:30.608 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.http.client.interceptor.HttpClientIntroductionAdvice@6a282fdd] in chain for method invocation: HttpResponse addBeerToCustomerBill(BeerItem beer,String customerName)
09:34:30.609 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Intercepted method [Flowable getInstances(String serviceId)] invocation on target: compositeDiscoveryClient(consul,kubernetes)
09:34:30.610 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.cache.interceptor.CacheInterceptor@2b772100] in chain for method invocation: Flowable getInstances(String serviceId)
09:34:30.610 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.aop.chain.InterceptorChain$$Lambda$449/1045761764@19a66abd] in chain for method invocation: Flowable getInstances(String serviceId)
09:34:30.610 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Intercepted method [Publisher getHealthyServices(String service,Boolean passing,String tag,String dc)] invocation on target: io.micronaut.discovery.consul.client.v1.AbstractConsulClient$Intercepted@47b179d7
09:34:30.611 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.retry.intercept.RecoveryInterceptor@280d9edc] in chain for method invocation: Publisher getHealthyServices(String service,Boolean passing,String tag,String dc)
09:34:30.611 [nioEventLoopGroup-1-12] TRACE i.m.aop.chain.InterceptorChain - Proceeded to next interceptor [io.micronaut.http.client.interceptor.HttpClientIntroductionAdvice@6a282fdd] in chain for method invocation: Publisher getHealthyServices(String service,Boolean passing,String tag,String dc)
09:34:30.691 [nioEventLoopGroup-1-12] ERROR i.m.r.intercept.RecoveryInterceptor - Type [micronaut.demo.beer.client.TicketControllerClient$Intercepted] executed with error: Empty body
io.micronaut.http.client.exceptions.HttpClientResponseException: Empty body
    at io.micronaut.http.client.HttpClient.lambda$null$0(HttpClient.java:161)
    at java.util.Optional.orElseThrow(Optional.java:290)
    at io.micronaut.http.client.HttpClient.lambda$retrieve$1(HttpClient.java:161)
    at io.micronaut.core.async.publisher.Publishers$1.doOnNext(Publishers.java:143)
    at io.micronaut.core.async.subscriber.CompletionAwareSubscriber.onNext(CompletionAwareSubscriber.java:53)
    at io.reactivex.internal.util.HalfSerializer.onNext(HalfSerializer.java:45)
    at io.reactivex.internal.subscribers.StrictSubscriber.onNext(StrictSubscriber.java:97)
    at io.reactivex.internal.operators.flowable.FlowableSwitchMap$SwitchMapSubscriber.drain(FlowableSwitchMap.java:307)
    at io.reactivex.internal.operators.flowable.FlowableSwitchMap$SwitchMapInnerSubscriber.onNext(FlowableSwitchMap.java:391)
    at io.reactivex.internal.operators.flowable.FlowableSubscribeOn$SubscribeOnSubscriber.onNext(FlowableSubscribeOn.java:97)
    at io.reactivex.internal.operators.flowable.FlowableOnErrorNext$OnErrorNextSubscriber.onNext(FlowableOnErrorNext.java:79)
    at io.reactivex.internal.operators.flowable.FlowableTimeoutTimed$TimeoutSubscriber.onNext(FlowableTimeoutTimed.java:99)
    at io.micronaut.http.client.filters.ClientServerRequestTracingPublisher$1.lambda$onNext$1(ClientServerRequestTracingPublisher.java:60)
    at io.micronaut.http.context.ServerRequestContext.with(ServerRequestContext.java:53)
    at io.micronaut.http.client.filters.ClientServerRequestTracingPublisher$1.onNext(ClientServerRequestTracingPublisher.java:60)
    at io.micronaut.http.client.filters.ClientServerRequestTracingPublisher$1.onNext(ClientServerRequestTracingPublisher.java:52)
    at io.reactivex.internal.util.HalfSerializer.onNext(HalfSerializer.java:45)
    at io.reactivex.internal.subscribers.StrictSubscriber.onNext(StrictSubscriber.java:97)
    at io.reactivex.internal.operators.flowable.FlowableCreate$NoOverflowBaseAsyncEmitter.onNext(FlowableCreate.java:403)
    at io.micronaut.http.client.DefaultHttpClient$10.channelRead0(DefaultHttpClient.java:1773)
    at io.micronaut.http.client.DefaultHttpClient$10.channelRead0(DefaultHttpClient.java:1705)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.micronaut.http.netty.stream.HttpStreamsHandler.channelRead(HttpStreamsHandler.java:186)
    at io.micronaut.http.netty.stream.HttpStreamsClientHandler.channelRead(HttpStreamsClientHandler.java:181)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297)
    at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
    at io.micronaut.tracing.instrument.util.TracingRunnable.run(TracingRunnable.java:54)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
09:34:30.692 [nioEventLoopGroup-1-12] DEBUG i.m.r.intercept.RecoveryInterceptor - Type [micronaut.demo.beer.client.TicketControllerClient$Intercepted] resolved fallback: HttpResponse addBeerToCustomerBill(BeerItem beer,String customerName)
09:34:30.692 [nioEventLoopGroup-1-12] TRACE i.m.context.DefaultBeanContext - Looking up existing bean for key: @Fallback micronaut.demo.beer.client.TicketControllerClient
09:34:30.692 [nioEventLoopGroup-1-12] TRACE i.m.context.DefaultBeanContext - No existing bean found for bean key: @Fallback micronaut.demo.beer.client.TicketControllerClient
09:34:30.693 [nioEventLoopGroup-1-12] DEBUG i.m.context.DefaultBeanContext - Resolving beans for type: <RecoveryInterceptor|HttpClientIntroductionAdvice> io.micronaut.aop.Interceptor 
09:34:30.693 [nioEventLoopGroup-1-12] TRACE i.m.context.DefaultBeanContext - Looking up existing beans for key: <RecoveryInterceptor|HttpClientIntroductionAdvice> io.micronaut.aop.Interceptor
09:34:30.693 [nioEventLoopGroup-1-12] TRACE i.m.context.DefaultBeanContext - Found 2 existing beans for type [<RecoveryInterceptor|HttpClientIntroductionAdvice> io.micronaut.aop.Interceptor]: [io.micronaut.retry.intercept.RecoveryInterceptor@280d9edc, io.micronaut.http.client.interceptor.HttpClientIntroductionAdvice@6a282fdd] 
09:34:30.694 [nioEventLoopGroup-1-12] DEBUG i.m.context.DefaultBeanContext - Created bean [micronaut.demo.beer.client.NoCostTicket$Intercepted@77053015] from definition [Definition: micronaut.demo.beer.client.NoCostTicket$Intercepted] with qualifier [@Fallback]
 Blank beer from fall back being served
09:34:30.695 [nioEventLoopGroup-1-12] DEBUG i.m.h.s.netty.RoutingInBoundHandler - Encoding emitted response object [micronaut.demo.beer.Beer@5caca659] using codec: io.micronaut.jackson.codec.JsonMediaTypeCodec@2ba33e2c
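The tail of that trace shows Micronaut's RecoveryInterceptor catching the failed client call and resolving the @Fallback bean, which is where the "Blank beer from fall back being served" line comes from. A rough, framework-free sketch of that recovery shape (names are illustrative):

```java
import java.util.function.Supplier;

public class RecoverySketch {
    // Rough shape of what RecoveryInterceptor does in the trace above:
    // invoke the declarative HTTP client, and if it fails, resolve the
    // @Fallback implementation instead of propagating the error.
    static String callWithFallback(Supplier<String> client, Supplier<String> fallback) {
        try {
            return client.get();
        } catch (RuntimeException e) {
            return fallback.get();
        }
    }

    public static void main(String[] args) {
        String result = callWithFallback(
            () -> { throw new RuntimeException("Empty body"); },
            () -> "Blank beer from fall back being served");
        System.out.println(result);
    }
}
```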

Any help would be greatly appreciated. The project is linked above; there are various shell scripts, and getting it all set up and running is fairly involved, so watching some of the video may be more practical.

UPDATE: I have mostly stepped away from this, but I genuinely cannot make progress. I have since upgraded to the latest Consul Helm chart v0.5.0 and Micronaut 1.0.4, but I still face the same issue. I am also not sure whether this is normal:

09:34:27.930 [pool-1-thread-1] TRACE i.m.c.e.PropertySourcePropertyResolver - No value found for property: vcap.application.instance_id

I eventually made a very basic version based on 2 applications on this branch.

There is a newer, more complete log (found here), produced after a fresh install by running ./install-minikube.sh (if anyone else runs that script, the Docker username needs modifying).

1 Answer

祖麻雀
2023-03-14

It looks like your beer app cannot connect to Consul, which is defined as a headless service. You will notice that the kind-cheetah-consul-server service has no ClusterIP. Could you try connecting directly as "kind-cheetah-consul-server-0.[headless service fqdn]", or just as "kind-cheetah-consul-server-0"? Since your Consul runs as a StatefulSet, you get a stable pod name and DNS entry.
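For reference, the stable per-pod name the answer refers to follows the standard StatefulSet pattern `<pod>.<headless-service>.<namespace>.svc.cluster.local`. A tiny sketch assembling it, assuming the release name and `default` namespace shown in the question:

```java
public class StatefulSetDns {
    // A StatefulSet pod's stable DNS name has the form
    // <pod-name>.<headless-service>.<namespace>.svc.cluster.local
    static String podFqdn(String pod, String headlessService, String namespace) {
        return pod + "." + headlessService + "." + namespace + ".svc.cluster.local";
    }

    public static void main(String[] args) {
        System.out.println(podFqdn("kind-cheetah-consul-server-0",
                                   "kind-cheetah-consul-server",
                                   "default"));
    }
}
```

The resulting name is what you would point the Micronaut Consul client at instead of the headless service name.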
