
How to set up two machines for a cluster with two nodes on each machine

楚宪
2023-03-14
Question

I have two dedicated machines for ES (2.2.0). Both machines have the same specs. Each server runs Windows Server 2012 R2 and has 128GB of RAM. For ES, I plan to have two nodes of the cluster on each machine.

I have been looking at elasticsearch.yml to figure out how to configure each node so that they form a cluster.

The two machines are on the same network, with the following server names and IP addresses:

SRC01, 172.21.0.21
SRC02, 172.21.0.22

I am looking at elasticsearch.yml but am not sure how to set it up. I think I need to set appropriate values in the Network and Discovery sections of elasticsearch.yml:

# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
# network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
# --------------------------------- Discovery ----------------------------------
#
# Elasticsearch nodes will find each other via unicast, by default.
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#

I searched Google and SO hoping to find a complete configuration example to start from, but could not find one.

Any input or pointers would be much appreciated.

Update

With Val's help, here is the minimal elasticsearch.yml I ended up with for the four nodes (two per machine) after testing:

#----------SRC01, node 1---------
cluster.name: elastic
node.name: elastic_src01_1
network.host: 172.21.0.21
discovery.zen.ping.unicast.hosts: ["172.21.0.21","172.21.0.22"]


#----------SRC01, node 2---------
cluster.name: elastic
node.name: elastic_src01_2
network.host: 172.21.0.21
discovery.zen.ping.unicast.hosts: ["172.21.0.21","172.21.0.22"]


#----------SRC02, node 1---------
cluster.name: elastic
node.name: elastic_src02_1
network.host: 172.21.0.22
discovery.zen.ping.unicast.hosts: ["172.21.0.21","172.21.0.22"]


#----------SRC02, node 2---------
cluster.name: elastic
node.name: elastic_src02_2
network.host: 172.21.0.22
discovery.zen.ping.unicast.hosts: ["172.21.0.21","172.21.0.22"]

Here are the problems I ran into:

  1. I started node elastic_src01_1 first and then node elastic_src01_2, which are on the same machine. When elastic_src01_2 started, I could see the detected_master message below generated by ES.

Log excerpt:

[2016-02-28 12:38:33,155][INFO ][node                     ] [elastic_src01_2] version[2.2.0], pid[4620], build[8ff36d1/2016-01-27T13:32:39Z]
[2016-02-28 12:38:33,155][INFO ][node                     ] [elastic_src01_2] initializing ...
[2016-02-28 12:38:33,546][INFO ][plugins                  ] [elastic_src01_2] modules [lang-expression, lang-groovy], plugins [], sites []
[2016-02-28 12:38:33,562][INFO ][env                      ] [elastic_src01_2] using [1] data paths, mounts [[Data (E:)]], net usable_space [241.7gb], net total_space [249.9gb], spins? [unknown], types [NTFS]
[2016-02-28 12:38:33,562][INFO ][env                      ] [elastic_src01_2] heap size [1.9gb], compressed ordinary object pointers [true]
[2016-02-28 12:38:35,077][INFO ][node                     ] [elastic_src01_2] initialized
[2016-02-28 12:38:35,077][INFO ][node                     ] [elastic_src01_2] starting ...
[2016-02-28 12:38:35,218][INFO ][transport                ] [elastic_src01_2] publish_address {172.21.0.21:9302}, bound_addresses {172.21.0.21:9302}
[2016-02-28 12:38:35,218][INFO ][discovery                ] [elastic_src01_2] elastic/N8r-gD9WQSSvAYMOlJzmIg
[2016-02-28 12:38:39,796][INFO ][cluster.service          ] [elastic_src01_2] detected_master {elastic_src01_1}{UWGAo0BKTQm2f650nyDKYg}{172.21.0.21}{172.21.0.21:9300}, added {{elastic_src01_1}{UWGAo0BKTQm2f650nyDKYg}{172.21.0.21}{172.21.0.21:9300},{elastic_src01_1}{qNDQjkmsRjiIVjZ88JsX4g}{172.21.0.21}{172.21.0.21:9301},}, reason: zen-disco-receive(from master [{elastic_src01_1}{UWGAo0BKTQm2f650nyDKYg}{172.21.0.21}{172.21.0.21:9300}])
[2016-02-28 12:38:39,843][INFO ][http                     ] [elastic_src01_2] publish_address {172.21.0.21:9202}, bound_addresses {172.21.0.21:9202}
[2016-02-28 12:38:39,843][INFO ][node                     ] [elastic_src01_2] started

However, when I started node 1 on the SRC02 machine, I did not see a detected_master message. Here is what ES generated:

[2016-02-28 12:22:52,256][INFO ][node                     ] [elastic_src02_1] version[2.2.0], pid[6432], build[8ff36d1/2016-01-27T13:32:39Z]
[2016-02-28 12:22:52,256][INFO ][node                     ] [elastic_src02_1] initializing ...
[2016-02-28 12:22:52,662][INFO ][plugins                  ] [elastic_src02_1] modules [lang-expression, lang-groovy], plugins [], sites []
[2016-02-28 12:22:52,693][INFO ][env                      ] [elastic_src02_1] using [1] data paths, mounts [[Data (E:)]], net usable_space [241.6gb], net total_space [249.8gb], spins? [unknown], types [NTFS]
[2016-02-28 12:22:52,693][INFO ][env                      ] [elastic_src02_1] heap size [910.5mb], compressed ordinary object pointers [true]
[2016-02-28 12:22:54,193][INFO ][node                     ] [elastic_src02_1] initialized
[2016-02-28 12:22:54,193][INFO ][node                     ] [elastic_src02_1] starting ...
[2016-02-28 12:22:54,334][INFO ][transport                ] [elastic_src02_1] publish_address {172.21.0.22:9300}, bound_addresses {172.21.0.22:9300}
[2016-02-28 12:22:54,334][INFO ][discovery                ] [elastic_src02_1] elastic/SNvuAfnxQV-RW430zLF6Vg
[2016-02-28 12:22:58,912][INFO ][cluster.service          ] [elastic_src02_1] new_master {elastic_src02_1}{SNvuAfnxQV-RW430zLF6Vg}{172.21.0.22}{172.21.0.22:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-02-28 12:22:58,943][INFO ][gateway                  ] [elastic_src02_1] recovered [0] indices into cluster_state
[2016-02-28 12:22:58,959][INFO ][http                     ] [elastic_src02_1] publish_address {172.21.0.22:9200}, bound_addresses {172.21.0.22:9200}
[2016-02-28 12:22:58,959][INFO ][node                     ] [elastic_src02_1] started

Do the nodes on the SRC02 machine actually form a single cluster with the nodes on the SRC01 machine?
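A quick way to check is to ask either machine for its view of the cluster through the _cat/nodes API. This is only a sketch: it assumes the default HTTP port 9200 for the first node on each host (the second node auto-increments, e.g. 9202 in the log above) and that the ports are reachable through the firewall.

curl "http://172.21.0.21:9200/_cat/nodes?v"
curl "http://172.21.0.22:9200/_cat/nodes?v"

If SRC01 and SRC02 really joined one cluster, both calls list the same four nodes (elastic_src01_1/_2 and elastic_src02_1/_2); if each call only shows the nodes running on its own machine, the two machines have formed two separate clusters.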

  2. On the same machine (SRC01), if I add

discovery.zen.minimum_master_nodes: 3

to the elasticsearch.yml files of nodes elastic_src01_1 and elastic_src01_2, then when I start the second node, elastic_src01_2, on machine SRC01, I do not see the detected_master message in the output generated by ES.

Does this mean that elastic_src01_1 and elastic_src01_2 do not form a cluster?

Thanks for the help!

Update 2

The SRC01 and SRC02 machines can see each other. Here is the result of pinging SRC01 from SRC02:

C:\Users\Administrator>ping 172.21.0.21

Pinging 172.21.0.21 with 32 bytes of data:
Reply from 172.21.0.21: bytes=32 time<1ms TTL=128
Reply from 172.21.0.21: bytes=32 time<1ms TTL=128
Reply from 172.21.0.21: bytes=32 time<1ms TTL=128
Reply from 172.21.0.21: bytes=32 time<1ms TTL=128

Update 3

The problem is solved. The reason my setup was not working before is that the servers' firewall was blocking ports 9300/9200, so the nodes could not communicate.
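For reference, one way to open those ports in the Windows firewall is a netsh rule on each server. This is only a sketch; adjust the rule name and port range to your environment (9200-9299 covers the HTTP ports, 9300-9399 the transport ports).

rem allow inbound TCP on the ES HTTP and transport port ranges
netsh advfirewall firewall add rule name="Elasticsearch" dir=in action=allow protocol=TCP localport=9200-9399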


Answer:

Basically, you just need to configure the network settings so that all nodes can see each other on the network. In addition, since you are running two nodes on the same physical machine and still want high availability, you want to prevent a primary shard and its replica from ending up on the same physical machine.

Finally, since your cluster has four nodes in total, you need to guard against split-brain situations, so you also need to set discovery.zen.minimum_master_nodes to a majority of the nodes (total number of nodes / 2 + 1, i.e. 4 / 2 + 1 = 3).

Nodes 1/2 on SRC01:

# cluster name
cluster.name: Name_of_your_cluster

# Give each node a different name (optional but good practice if you don't know Marvel characters)
node.name: SRC01_Node1/2

# The IP that this node will bind to and publish
network.host: 172.21.0.21

# The IP of the other nodes
discovery.zen.ping.unicast.hosts: ["172.21.0.22"]

# prevent split brain
discovery.zen.minimum_master_nodes: 3

# to prevent primary/replica shards to be on the same physical host 
# see why at http://stackoverflow.com/questions/35677741/proper-value-of-es-heap-size-for-a-dedicated-machine-with-two-nodes-in-a-cluster
cluster.routing.allocation.same_shard.host: true

# prevent memory swapping
bootstrap.mlockall: true

Nodes 1/2 on SRC02:

# cluster name
cluster.name: Name_of_your_cluster

# Give each node a different name (optional but good practice if you don't know Marvel characters)
node.name: SRC02_Node1/2

# The IP that this node will bind to and publish
network.host: 172.21.0.22

# The IP of the other nodes
discovery.zen.ping.unicast.hosts: ["172.21.0.21"]

# prevent split brain
discovery.zen.minimum_master_nodes: 3

# to prevent primary/replica shards to be on the same physical host 
# see why at http://stackoverflow.com/questions/35677741/proper-value-of-es-heap-size-for-a-dedicated-machine-with-two-nodes-in-a-cluster
cluster.routing.allocation.same_shard.host: true

# prevent memory swapping
bootstrap.mlockall: true
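Because two node instances run on each machine, each instance also needs its own node.name and, if both instances share the same installation, its own data path; the HTTP and transport ports auto-increment for the second instance (9201/9301), but they can also be pinned explicitly. Below is a minimal sketch of the per-instance overrides on SRC01, with the data paths as placeholders for your own layout:

# node 1 on SRC01
node.name: SRC01_Node1
path.data: E:\elasticsearch\data\node1
http.port: 9200
transport.tcp.port: 9300

# node 2 on SRC01
node.name: SRC01_Node2
path.data: E:\elasticsearch\data\node2
http.port: 9201
transport.tcp.port: 9301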

