As of November 7th, 2018, I've decided to end my commitment to maintaining this repo and related projects.
It's been three years since I last used Elasticsearch, so I no longer have the motivation it takes to maintain and evolve this project. Also, other projects need all the attention I can give.
It was a great run, thank you all.
Elasticsearch cluster on top of Kubernetes made easy.
Elasticsearch best practices recommend separating nodes into three roles:
- Master nodes - intended for cluster management only, no data, no HTTP API
- Data nodes - intended for client usage and data
- Ingest nodes - intended for document pre-processing during ingestion

Given this, I'm going to demonstrate how to provision a production-grade scenario consisting of 3 master, 2 data and 2 ingest nodes.
Elasticsearch pods need an init container to run in privileged mode, so it can set some VM options. For that to happen, the kubelet should be running with the argument --allow-privileged, otherwise the init container will fail to run.
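For illustration, such an init container usually boils down to a privileged sysctl call; a minimal sketch, assuming a busybox image and the vm.max_map_count value Elasticsearch requires (check the actual descriptors in this repo for the exact definition):

initContainers:
# Privileged init container that raises vm.max_map_count on the host,
# as required by Elasticsearch's mmapfs directory store.
- name: init-sysctl
  image: busybox
  command: ["sysctl", "-w", "vm.max_map_count=262144"]
  securityContext:
    privileged: true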
By default, ES_JAVA_OPTS is set to -Xms256m -Xmx256m. This is a very low value, but many users, e.g. minikube users, were having issues with pods getting killed because hosts were out of memory. One can change this in the deployment descriptors available in this repository.
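For example, to double the heap one could override the variable in a pod descriptor as follows (the 512m figure is merely illustrative; heap sizing depends on one's workload):

# Xms and Xmx should always be set to the same value.
- name: "ES_JAVA_OPTS"
  value: "-Xms512m -Xmx512m"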
At the moment, the Kubernetes pod descriptors use an emptyDir for storing data in each data node container. This is meant for the sake of simplicity and should be adapted according to one's storage needs. The stateful directory contains an example which deploys the data pods as a StatefulSet. These use a volumeClaimTemplates to provision persistent storage for each pod.
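A minimal sketch of such a template, assuming a storage class named standard and an illustrative size (adapt both to one's environment, and see the stateful directory for the actual manifest):

volumeClaimTemplates:
- metadata:
    name: storage            # must match the volumeMount name in the data container
  spec:
    storageClassName: standard
    accessModes: [ ReadWriteOnce ]
    resources:
      requests:
        storage: 10Gi        # illustrative; size according to expected data volume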
By default, PROCESSORS is set to 1. This may not be enough for some deployments, especially at startup time. Adjust resources.limits.cpu and/or livenessProbe accordingly if required. Note that resources.limits.cpu must be an integer.
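One way to keep the two values in sync is to derive PROCESSORS from the CPU limit through the downward API; a sketch, not necessarily how this repo's descriptors wire it:

env:
# PROCESSORS follows the container's CPU limit automatically.
- name: PROCESSORS
  valueFrom:
    resourceFieldRef:
      resource: limits.cpu
resources:
  limits:
    cpu: 2   # must be an integer, as noted above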
kubectl configured to access the Kubernetes API.

Providing one's own version of the images automatically built from this repository will not be supported. This is an optional step. One has been warned.
kubectl create -f es-discovery-svc.yaml
kubectl create -f es-svc.yaml
kubectl create -f es-master.yaml
kubectl rollout status -f es-master.yaml
kubectl create -f es-ingest-svc.yaml
kubectl create -f es-ingest.yaml
kubectl rollout status -f es-ingest.yaml
kubectl create -f es-data.yaml
kubectl rollout status -f es-data.yaml
Let's check if everything is working properly:
kubectl get svc,deployment,pods -l component=elasticsearch
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/elasticsearch ClusterIP 10.100.243.196 <none> 9200/TCP 3m
service/elasticsearch-discovery ClusterIP None <none> 9300/TCP 3m
service/elasticsearch-ingest ClusterIP 10.100.76.74 <none> 9200/TCP 2m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.extensions/es-data 2 2 2 2 1m
deployment.extensions/es-ingest 2 2 2 2 2m
deployment.extensions/es-master 3 3 3 3 3m
NAME READY STATUS RESTARTS AGE
pod/es-data-56f8ff8c97-642bq 1/1 Running 0 1m
pod/es-data-56f8ff8c97-h6hpc 1/1 Running 0 1m
pod/es-ingest-6ddd5fc689-b4s94 1/1 Running 0 2m
pod/es-ingest-6ddd5fc689-d8rtj 1/1 Running 0 2m
pod/es-master-68bf8f86c4-bsfrx 1/1 Running 0 3m
pod/es-master-68bf8f86c4-g8nph 1/1 Running 0 3m
pod/es-master-68bf8f86c4-q5khn 1/1 Running 0 3m
As we can assert, the cluster seems to be up and running. Easy, wasn't it?
Don't forget that services in Kubernetes are only accessible from containers in the cluster. For different behavior, one should configure the creation of an external load balancer. While it's supported within this example service descriptor, its usage is out of scope of this document, for now.
Note: if you are using one of the cloud providers which support external load balancers, setting the type field to "LoadBalancer" will provision a load balancer for your Service. You can uncomment the field in es-svc.yaml.
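In the service descriptor this amounts to a single field under spec; a sketch with the rest of the definition elided (see es-svc.yaml for the full service):

spec:
  type: LoadBalancer   # provisions an external load balancer on supported cloud providers
  ...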
kubectl get svc elasticsearch
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch ClusterIP 10.100.243.196 <none> 9200/TCP 3m
From any host on the Kubernetes cluster (that's running kube-proxy
or similar), run:
curl http://10.100.243.196:9200
One should see something similar to the following:
{
  "name" : "es-data-56f8ff8c97-642bq",
  "cluster_name" : "myesdb",
  "cluster_uuid" : "RkRkTl26TDOE7o0FhCcW_g",
  "version" : {
    "number" : "6.3.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "053779d",
    "build_date" : "2018-07-20T05:20:23.451332Z",
    "build_snapshot" : false,
    "lucene_version" : "7.3.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
Or if one wants to see cluster information:
curl http://10.100.243.196:9200/_cluster/health?pretty
One should see something similar to the following:
{
  "cluster_name" : "myesdb",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 7,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
One of the main advantages of running Elasticsearch on top of Kubernetes is how resilient the cluster becomes, particularly during node restarts. However, if all data pods are scheduled onto the same node(s), this advantage decreases significantly and may even result in no data pods being available.

It is then highly recommended, in the context of the solution described in this repository, that one adopts pod anti-affinity in order to guarantee that two data pods will never run on the same node.
Here's an example:
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: component
              operator: In
              values:
              - elasticsearch
            - key: role
              operator: In
              values:
              - data
          topologyKey: kubernetes.io/hostname
  containers:
  - (...)
If one wants to ensure that no more than n Elasticsearch nodes will be unavailable at a time, one can optionally (change and) apply the following manifests:
kubectl create -f es-master-pdb.yaml
kubectl create -f es-data-pdb.yaml
Note: This is an advanced subject and one should only put it in practice if one understands clearly what it means both in the Kubernetes and Elasticsearch contexts. For more information, please consult Pod Disruptions.
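For reference, such a manifest typically looks like the sketch below; the maxUnavailable value is illustrative, and the labels are assumed to match the data pods in this repo (component=elasticsearch, role=data):

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: es-data-pdb
spec:
  maxUnavailable: 1   # at most one data pod may be voluntarily disrupted at a time
  selector:
    matchLabels:
      component: elasticsearch
      role: data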
WARNING: The Helm chart is maintained by someone else in the community and may not be up-to-date with this repo.
Helm charts for a basic (non-stateful) Elasticsearch deployment are maintained at https://github.com/clockworksoul/helm-elasticsearch. With Helm properly installed and configured, standing up a complete cluster is almost trivial:
git clone https://github.com/clockworksoul/helm-elasticsearch.git
helm install helm-elasticsearch
Various parameters of the cluster, including replica count and memory allocations, can be adjusted by editing the helm-elasticsearch/values.yaml file. For information about Helm, please consult the complete Helm documentation.
The image used in this repo is very minimalist. However, one can install additional plug-ins at will by simply specifying the ES_PLUGINS_INSTALL environment variable in the desired pod descriptors. For instance, to install the Google Cloud Storage and S3 plug-ins, it would look as follows:
- name: "ES_PLUGINS_INSTALL"
value: "repository-gcs,repository-s3"
Note: The X-Pack plugin does not currently work with the quay.io/pires/docker-elasticsearch-kubernetes image. See Issue #102.
Additionally, one can run a CronJob that will periodically run Curator to clean up indices (or do other actions on the Elasticsearch cluster).
kubectl create -f es-curator-config.yaml
kubectl create -f es-curator.yaml
Please confirm the job has been created.
kubectl get cronjobs
NAME SCHEDULE SUSPEND ACTIVE LAST-SCHEDULE
curator 1 0 * * * False 0 <none>
The job is configured to run once a day at 1 minute past midnight and delete indices that are older than 3 days.
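To give an idea of the format, a Curator action file implementing exactly that policy could look like the following sketch (not necessarily the literal contents of es-curator-config.yaml in this repo):

actions:
  1:
    action: delete_indices
    description: Delete indices older than 3 days
    options:
      ignore_empty_list: True
    filters:
    # Select indices whose creation date is more than 3 days in the past.
    - filtertype: age
      source: creation_date
      direction: older
      unit: days
      unit_count: 3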
Notes:
- The schedule may be changed by editing the cron notation in es-curator.yaml.
- The action (e.g. delete indices older than 3 days) may be changed by editing es-curator-config.yaml.
- The definition of action_file.yaml is quite self-explaining for simple set-ups. For more advanced configuration options, please consult the Curator Documentation.

If one wants to remove the curator job, just run:
kubectl delete cronjob curator
kubectl delete configmap curator-config
WARNING: The Kibana section is maintained by someone else in the community and may not be up-to-date with this repo.
If Kibana defaults are not enough, one may want to customize kibana.yaml through a ConfigMap. Please refer to Configuring Kibana for all available attributes.
kubectl create -f kibana-cm.yaml
kubectl create -f kibana-svc.yaml
kubectl create -f kibana.yaml
Kibana will become available through service kibana, and one will be able to access it from within the cluster, or proxy it through the Kubernetes API as follows:
curl https://<API_SERVER_URL>/api/v1/namespaces/default/services/kibana:http/proxy
One can also create an Ingress to expose the service publicly, or simply use the service nodeport. In case one proceeds to do so, one must change the SERVER_BASEPATH environment variable to match one's environment.
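For instance, when proxying through the Kubernetes API as shown above, the variable could be set in the Kibana pod descriptor roughly as follows (the exact path depends on one's namespace and service name):

- name: "SERVER_BASEPATH"
  value: "/api/v1/namespaces/default/services/kibana:http/proxy"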
Why does NUMBER_OF_MASTERS differ from the number of master replicas?

The default value for this environment variable is 2, meaning a cluster will need a minimum of 2 master nodes to operate. If a cluster has 3 masters and one dies, the cluster still works. The minimum number of master nodes is usually n/2 + 1, where n is the number of master nodes in a cluster. If a cluster has 5 master nodes, one should have a minimum of 3; with fewer than that, the cluster stops working. If one scales the number of masters, make sure to update the minimum number of master nodes through the Elasticsearch API, as setting the environment variable will only work on cluster setup. More info: https://www.elastic.co/guide/en/elasticsearch/guide/1.x/_important_configuration_changes.html#_minimum_master_nodes
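As a sketch, updating the setting through the API could look like this for the 5-master example (replace the IP with one's own elasticsearch service address):

curl -XPUT http://10.100.243.196:9200/_cluster/settings \
  -H 'Content-Type: application/json' \
  -d '{ "persistent": { "discovery.zen.minimum_master_nodes": 3 } }'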
How can one customize elasticsearch.yaml?

Read a different config file by setting the env var ES_PATH_CONF=/path/to/my/config/ (see the Elasticsearch docs for more). Another option would be to build one's own image from this repository.
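In a pod descriptor that would look roughly like this, with the custom configuration typically mounted at that path from a ConfigMap (the path is the placeholder used above):

- name: "ES_PATH_CONF"
  value: "/path/to/my/config/"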
One of the errors one may come across when running the setup is the following:
[2016-11-29T01:28:36,515][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalArgumentException: No up-and-running site-local (private) addresses found, got [name:lo (lo), name:eth0 (eth0)]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:116) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:103) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:96) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.cli.Command.main(Command.java:62) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:73) ~[elasticsearch-5.0.1.jar:5.0.1]
Caused by: java.lang.IllegalArgumentException: No up-and-running site-local (private) addresses found, got [name:lo (lo), name:eth0 (eth0)]
at org.elasticsearch.common.network.NetworkUtils.getSiteLocalAddresses(NetworkUtils.java:187) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.common.network.NetworkService.resolveInternal(NetworkService.java:246) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.common.network.NetworkService.resolveInetAddresses(NetworkService.java:220) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.common.network.NetworkService.resolveBindHostAddresses(NetworkService.java:130) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.transport.TcpTransport.bindServer(TcpTransport.java:575) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.transport.netty4.Netty4Transport.doStart(Netty4Transport.java:182) ~[?:?]
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:68) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.transport.TransportService.doStart(TransportService.java:182) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:68) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.node.Node.start(Node.java:525) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:211) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:288) ~[elasticsearch-5.0.1.jar:5.0.1]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:112) ~[elasticsearch-5.0.1.jar:5.0.1]
... 6 more
[2016-11-29T01:28:37,448][INFO ][o.e.n.Node ] [kIEYQSE] stopping ...
[2016-11-29T01:28:37,451][INFO ][o.e.n.Node ] [kIEYQSE] stopped
[2016-11-29T01:28:37,452][INFO ][o.e.n.Node ] [kIEYQSE] closing ...
[2016-11-29T01:28:37,464][INFO ][o.e.n.Node ] [kIEYQSE] closed
This is related to how the container binds to network ports (it defaults to _local_). It will need to match the actual node network interface name, which depends on what OS and infrastructure provider one uses. For instance, if the primary interface on the node is p1p1, then that is the value that needs to be set for the NETWORK_HOST environment variable. Please see the documentation for a reference of the available options.
To work around this, set the NETWORK_HOST environment variable in the pod descriptors as follows:
- name: "NETWORK_HOST"
value: "_eth0_" #_p1p1_ if interface name is p1p1, _ens4_ if interface name is ens4, and so on.
Intermittent failures occur when the local network interface has both IPv4 and IPv6 addresses, and Elasticsearch tries to bind to the IPv6 address first. If the IPv4 address is chosen first, Elasticsearch starts correctly.
To work around this, set the NETWORK_HOST environment variable in the pod descriptors as follows:
- name: "NETWORK_HOST"
value: "_eth0:ipv4_" #_p1p1:ipv4_ if interface name is p1p1, _ens4:ipv4_ if interface name is ens4, and so on.