I'm testing a Kubernetes setup on AWS EKS with Fargate and running into a problem when the container starts up.
It's a Java application that uses Hibernate. On startup it appears unable to connect to the MySQL server and fails with a "Communications link failure" error. The database server is up and running on AWS RDS, and the Docker image runs as expected locally.
I'm wondering whether this is because MySQL port 3306 isn't configured correctly on the container/node/service. Could you take a look and see where the problem is? Please don't hesitate to point out any misconfiguration. Many thanks.
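To rule out plain network reachability, I was going to test the RDS endpoint from inside the cluster with something along these lines (just a sketch — <rds-endpoint> and <db-user> are placeholders for my actual values):

kubectl run -n test mysql-client --rm -it --restart=Never --image=mysql:8.0 -- \
  mysql -h <rds-endpoint> -P 3306 -u <db-user> -p

If that also hangs and times out, I assume it points at networking/security groups rather than the application itself.

Startup log from the pod: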
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.3.1.RELEASE)
2020-08-13 11:39:39.930 INFO 1 --- [ main] com.example.demo.DemoApplication : The following profiles are active: prod
2020-08-13 11:39:58.536 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFERRED mode.
...
......
2020-08-13 11:41:27.606 ERROR 1 --- [ task-1] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization.
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:836) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:456) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:246) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:197) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138) ~[HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:358) ~[HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206) ~[HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:477) [HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:560) [HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115) [HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112) [HikariCP-3.4.5.jar!/:na]
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator$ConnectionProviderJdbcConnectionAccess.obtainConnection(JdbcEnvironmentInitiator.java:180) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator.initiateService(JdbcEnvironmentInitiator.java:68) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator.initiateService(JdbcEnvironmentInitiator.java:35) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.boot.registry.internal.StandardServiceRegistryImpl.initiateService(StandardServiceRegistryImpl.java:101) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.createService(AbstractServiceRegistryImpl.java:263) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:237) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:214) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.id.factory.internal.DefaultIdentifierGeneratorFactory.injectServices(DefaultIdentifierGeneratorFactory.java:152) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.injectDependencies(AbstractServiceRegistryImpl.java:286) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:243) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:214) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.boot.internal.InFlightMetadataCollectorImpl.<init>(InFlightMetadataCollectorImpl.java:176) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.boot.model.process.spi.MetadataBuildingProcess.complete(MetadataBuildingProcess.java:118) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.metadata(EntityManagerFactoryBuilderImpl.java:1224) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:1255) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.springframework.orm.jpa.vendor.SpringHibernateJpaPersistenceProvider.createContainerEntityManagerFactory(SpringHibernateJpaPersistenceProvider.java:58) [spring-orm-5.2.7.RELEASE.jar!/:5.2.7.RELEASE]
at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:365) [spring-orm-5.2.7.RELEASE.jar!/:5.2.7.RELEASE]
at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.buildNativeEntityManagerFactory(AbstractEntityManagerFactoryBean.java:391) [spring-orm-5.2.7.RELEASE.jar!/:5.2.7.RELEASE]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_212]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_212]
...
......
patricks-mbp:test patrick$ kubectl get services -n test
NAME   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
test   NodePort   10.100.160.22   <none>        80:31176/TCP   4h57m
service.yaml
kind: Service
apiVersion: v1
metadata:
  name: test
  namespace: test
spec:
  selector:
    app: test
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
patricks-mbp:test patrick$ kubectl get deployments -n test
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
test   0/1     1            0           4h42m
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  namespace: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  strategy: {}
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: <image location>
          ports:
            - containerPort: 8080
          resources: {}
patricks-mbp:test patrick$ kubectl get pods -n test
NAME                   READY   STATUS    RESTARTS   AGE
test-8648f7959-4gdvm   1/1     Running   6          21m
patricks-mbp:test patrick$ kubectl describe pod test-8648f7959-4gdvm -n test
Name: test-8648f7959-4gdvm
Namespace: test
Priority: 2000001000
Priority Class Name: system-node-critical
Node: fargate-ip-192-168-123-170.ec2.internal/192.168.123.170
Start Time: Thu, 13 Aug 2020 21:29:07 +1000
Labels: app=test
eks.amazonaws.com/fargate-profile=fp-1a0330f1
pod-template-hash=8648f7959
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 192.168.123.170
IPs:
IP: 192.168.123.170
Controlled By: ReplicaSet/test-8648f7959
Containers:
test:
Container ID: containerd://a1517a13d66274e1d7f8efcea950d0fe3d944d1f7208d057494e208223a895a7
Image: <image location>
Image ID: <image ID>
Port: 8080/TCP
Host Port: 0/TCP
State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 13 Aug 2020 21:48:07 +1000
Finished: Thu, 13 Aug 2020 21:50:28 +1000
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 13 Aug 2020 21:43:04 +1000
Finished: Thu, 13 Aug 2020 21:45:22 +1000
Ready: False
Restart Count: 6
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5hdzd (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-5hdzd:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5hdzd
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> fargate-scheduler Successfully assigned test/test-8648f7959-4gdvm to fargate-ip-192-168-123-170.ec2.internal
Normal Pulling 21m kubelet, fargate-ip-192-168-123-170.ec2.internal Pulling image "174304792831.dkr.ecr.us-east-1.amazonaws.com/test:v2"
Normal Pulled 21m kubelet, fargate-ip-192-168-123-170.ec2.internal Successfully pulled image "174304792831.dkr.ecr.us-east-1.amazonaws.com/test:v2"
Normal Created 11m (x5 over 21m) kubelet, fargate-ip-192-168-123-170.ec2.internal Created container test
Normal Started 11m (x5 over 21m) kubelet, fargate-ip-192-168-123-170.ec2.internal Started container test
Normal Pulled 11m (x4 over 19m) kubelet, fargate-ip-192-168-123-170.ec2.internal Container image "174304792831.dkr.ecr.us-east-1.amazonaws.com/test:v2" already present on machine
Warning BackOff 11s (x27 over 17m) kubelet, fargate-ip-192-168-123-170.ec2.internal Back-off restarting failed container
patricks-mbp:~ patrick$ kubectl describe ing -n test test
Name: test
Namespace: test
Address: <ALB public address>
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
*
/ test:80 (192.168.72.15:8080)
Annotations:
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"alb.ingress.kubernetes.io/scheme":"internet-facing","alb.ingress.kubernetes.io/target-type":"ip","kubernetes.io/ingress.class":"alb"},"name":"test","namespace":"test"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"test","servicePort":80},"path":"/"}]}}]}}
kubernetes.io/ingress.class: alb
Events: <none>
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  namespace: test
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: test
              servicePort: 80
Permissions for the ALB ingress controller to communicate with the cluster -
Creating an ingress controller that uses an ALB -
To allow the Fargate pods to connect to RDS, you need to open up the security group.
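For example (a sketch only — the security group IDs below are placeholders), you would add an inbound rule to the RDS instance's security group allowing TCP 3306 from the security group used by the Fargate pods, which on EKS Fargate is normally the cluster security group:

aws ec2 authorize-security-group-ingress \
  --group-id <rds-security-group-id> \
  --protocol tcp \
  --port 3306 \
  --source-group <eks-cluster-security-group-id>

Note that the Service and Ingress only handle inbound HTTP traffic to the pod; the outbound connection from the pod to RDS is governed by the VPC security groups, so nothing should need to change in service.yaml or ingress.yaml for port 3306.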