I have a running Zookeeper instance from the https://hub.docker.com/r/debezium/Zookeeper:1.2 image, with this Compose file:
version: "3.7"
services:
  zookeeper:
    image: debezium/zookeeper:1.2
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    networks:
      common:
    volumes:
      - "~/dev/docker/projects/debezium/volumes/zookeeper/data:/zookeeper/data"
      - "~/dev/docker/projects/debezium/volumes/zookeeper/txns:/zookeeper/txns"
      - "~/dev/docker/projects/debezium/volumes/zookeeper/conf:/zookeeper/conf"
      - "~/dev/docker/projects/debezium/volumes/zookeeper/logs:/zookeeper/logs"
    environment:
      HOST_USER_ID: ${CURRENT_UID}
      HOST_GROUP_ID: ${CURRENT_GID}
    deploy:
      replicas: 1
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
        window: 10s
    healthcheck:
      test: curl --fail http://localhost:2181 || exit 1
      interval: 1m
      timeout: 3s
      retries: 3
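A side note on this health check: `curl --fail` only turns HTTP error statuses (>= 400) into a failure exit code (22); connection-level failures already exit non-zero on their own (e.g. 7 for "couldn't connect"), and ZooKeeper's client port does not speak HTTP at all, so an HTTP probe is a poor fit here. A quick illustration against a port with no listener (using port 1 here is an assumption that nothing listens there):

```shell
# curl exits 7 ("couldn't connect") when nothing accepts the TCP
# connection, regardless of --fail, which only governs HTTP >= 400.
curl --fail --silent --max-time 2 http://127.0.0.1:1/
echo "curl exit: $?"
```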
docker ps shows the container up, with the health check still in the starting state:
5f7860484b48   debezium/zookeeper:1.2   "/docker-entrypoint.…"   About a minute ago   Up About a minute (health: starting)   2181/tcp, 2888/tcp, 3888/tcp, 8778/tcp, 9779/tcp   debezium_zookeeper.1.hmdxswlsmqdebqkqvkjqzxnlc
debezium_zookeeper.1.j841dv1adeab@stephane-pc | 2020-10-12 08:45:15,032 - INFO [main:ContextHandler@825] - Started o.e.j.s.ServletContextHandler@3f197a46{/,null,AVAILABLE}
debezium_zookeeper.1.j841dv1adeab@stephane-pc | 2020-10-12 08:45:15,111 - INFO [main:AbstractConnector@330] - Started ServerConnector@4278a03f{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
debezium_zookeeper.1.j841dv1adeab@stephane-pc | 2020-10-12 08:45:15,112 - INFO [main:Server@399] - Started @5046ms
debezium_zookeeper.1.j841dv1adeab@stephane-pc | 2020-10-12 08:45:15,113 - INFO [main:JettyAdminServer@112] - Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands
debezium_zookeeper.1.j841dv1adeab@stephane-pc | 2020-10-12 08:45:15,135 - INFO [main:ServerCnxnFactory@135] - Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory
debezium_zookeeper.1.j841dv1adeab@stephane-pc | 2020-10-12 08:45:15,145 - INFO [main:NIOServerCnxnFactory@673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers.
debezium_zookeeper.1.j841dv1adeab@stephane-pc | 2020-10-12 08:45:15,150 - INFO [main:NIOServerCnxnFactory@686] - binding to port 0.0.0.0/0.0.0.0:2181
debezium_zookeeper.1.j841dv1adeab@stephane-pc | 2020-10-12 08:45:15,224 - INFO [main:ZKDatabase@117] - zookeeper.snapshotSizeFactor = 0.33
debezium_zookeeper.1.j841dv1adeab@stephane-pc | 2020-10-12 08:45:15,320 - INFO [main:FileSnap@83] - Reading snapshot /zookeeper/data/version-2/snapshot.0
debezium_zookeeper.1.j841dv1adeab@stephane-pc | 2020-10-12 08:45:15,340 - INFO [main:FileTxnSnapLog@404] - Snapshotting: 0x0 to /zookeeper/data/version-2/snapshot.0
debezium_zookeeper.1.j841dv1adeab@stephane-pc | 2020-10-12 08:45:15,408 - INFO [main:ContainerManager@64] - Using checkIntervalMs=60000 maxPerMinute=10000
debezium_zookeeper.1.j841dv1adeab@stephane-pc | 2020-10-12 08:46:10,244 - WARN [NIOWorkerThread-1:NIOServerCnxn@370] - Exception causing close of session 0x0: Len error 1195725856
debezium_zookeeper.1.j841dv1adeab@stephane-pc | 2020-10-12 08:47:10,884 - WARN [NIOWorkerThread-2:NIOServerCnxn@370] - Exception causing close of session 0x0: Len error 1195725856
debezium_zookeeper.1.j841dv1adeab@stephane-pc | 2020-10-12 08:48:12,599 - WARN [NIOWorkerThread-3:NIOServerCnxn@370] - Exception causing close of session 0x0: Len error 1195725856
stephane@stephane-pc:~$ nmap -p 2181 localhost
Starting Nmap 7.80 ( https://nmap.org ) at 2020-10-12 10:50 CEST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00033s latency).
PORT STATE SERVICE
2181/tcp closed eforward
The port is, of course, allowed through the firewall:
sudo ufw allow from any to any port 2181;
stephane@stephane-pc:~$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip
To Action From
-- ------ ----
8500 ALLOW IN 127.0.0.0
8500 ALLOW IN Anywhere
18630 ALLOW IN Anywhere
2181 ALLOW IN 127.0.0.0
2181 ALLOW IN Anywhere
9092 ALLOW IN Anywhere
8500 (v6) ALLOW IN Anywhere (v6)
18630 (v6) ALLOW IN Anywhere (v6)
2181 (v6) ALLOW IN Anywhere (v6)
9092 (v6) ALLOW IN Anywhere (v6)
I can connect to the running container:
docker-exec debezium_zookeeper.1.app5h7goosa2cpn4g06azp2xt
and check the server status:
[zookeeper@2f57d1a84ce7 ~]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: standalone
debezium_zookeeper.1.u0koxm2njw2i@stephane-pc | 2020-10-12 09:03:47,829 - WARN [NIOWorkerThread-2:NIOServerCnxn@370] - Exception causing close of session 0x0: Len error 1195725856
This seems to be a known issue: 1195725856 is 0x47455420, the ASCII bytes "GET " — ZooKeeper is receiving the curl health check's HTTP request and misreading its first four bytes as a length prefix.
Could this be related to my closed port problem?
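The mapping from the logged length to the HTTP verb can be checked directly (a small sketch using GNU coreutils' od; the --endian option is a GNU extension):

```shell
# Interpret the first four bytes of an HTTP request line ("GET ") as a
# big-endian 32-bit unsigned integer, the way ZooKeeper's
# length-prefixed wire protocol does. Prints 1195725856.
printf 'GET ' | od -An --endian=big -t u4
```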
Update: I also tried adding the hostname property as hostname: zookeeper to the Compose file, and adding a loopback entry for zookeeper to the /etc/hosts file, but the nmap -p 2181 zookeeper command still shows a closed port.
I had to change the way I do the health check:
healthcheck:
  test: /zookeeper/bin/zkServer.sh print-cmd || exit 1
  interval: 1m
  timeout: 3s
  retries: 3
  start_period: 15s
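For context on the `|| exit 1` suffix: Docker treats health-check exit status 0 as healthy and 1 as unhealthy (other codes are reserved), so the guard normalizes any failure, including a missing binary (status 127), to 1. A small sketch (the command name is made up):

```shell
# 'nonexistent-zk-check' is a hypothetical command that is not
# installed; the shell returns 127 for it, and '|| exit 1' maps that
# to exit status 1, the value Docker expects for "unhealthy".
sh -c 'nonexistent-zk-check || exit 1' 2>/dev/null
echo "health-check exit: $?"
```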
The full Compose file:
version: "3.7"
services:
  zookeeper:
    image: debezium/zookeeper:1.2
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    networks:
      common:
    volumes:
      - "~/dev/docker/projects/debezium/volumes/zookeeper/data:/zookeeper/data"
      - "~/dev/docker/projects/debezium/volumes/zookeeper/txns:/zookeeper/txns"
      - "~/dev/docker/projects/debezium/volumes/zookeeper/conf:/zookeeper/conf"
      - "~/dev/docker/projects/debezium/volumes/zookeeper/logs:/zookeeper/logs"
    environment:
      HOST_USER_ID: ${CURRENT_UID}
      HOST_GROUP_ID: ${CURRENT_GID}
    deploy:
      resources:
        limits:
          cpus: "0.1"
          memory: 256M
      replicas: 1
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
        window: 10s
    healthcheck:
      test: /zookeeper/bin/zkServer.sh print-cmd || exit 1
      interval: 1m
      timeout: 3s
      retries: 3
      start_period: 15s
  kafka:
    image: debezium/kafka:1.2
    ports:
      - "9092:9092"
    networks:
      common:
    volumes:
      - "~/dev/docker/projects/debezium/volumes/kafka/data:/kafka/data"
      - "~/dev/docker/projects/debezium/volumes/kafka/logs:/kafka/logs"
    environment:
      ZOOKEEPER_CONNECT: zookeeper:2181
      HOST_USER_ID: ${CURRENT_UID}
      HOST_GROUP_ID: ${CURRENT_GID}
    depends_on:
      - zookeeper
    deploy:
      replicas: 1
      restart_policy:
        condition: any
        delay: 5s
        max_attempts: 3
        window: 10s
    healthcheck:
      test: /kafka/bin/kafka-topics.sh --list --zookeeper zookeeper:2181 || exit 1
      interval: 1m
      timeout: 15s
      retries: 3
      start_period: 15s
networks:
  common:
    external: true
    name: common
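Another option for the ZooKeeper health check is a protocol-level probe with the ruok four-letter word. This is a sketch, not tested against this image: it assumes nc is present in debezium/zookeeper, and note that ZooKeeper 3.5+ only answers four-letter words listed in 4lw.commands.whitelist:

```yaml
healthcheck:
  # A healthy server answers "imok" to the "ruok" four-letter word.
  test: echo ruok | nc -w 2 localhost 2181 | grep imok || exit 1
  interval: 1m
  timeout: 3s
  retries: 3
  start_period: 15s
```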