Question:

Root user in Elasticsearch 2.4.0 in a Docker container

欧阳睿范
2023-03-14

I'm running an ELK stack with Docker for log management, currently configured with ES 1.7, Logstash 1.5.4, and Kibana 4.1.4. Now I'm trying to upgrade Elasticsearch to 2.4.0 with Docker, using the tar.gz file from https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.4.0/elasticsearch-2.4.0.tar.gz. Since ES 2.x does not allow running as the root user, I used the

-Des.insecure.allow.root=true

option, but my container does not start. The logs do not mention any problem.
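For reference, the startup command the container runs (shown in full in the Dockerfile under EDIT 4 below) is effectively:

${ES_DIR}/bin/elasticsearch -Des.insecure.allow.root=true

The logs from the failed start: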

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   874  100   874    0     0   874k      0 --:--:-- --:--:-- --:--:--  853k
//opt//log-management//elasticsearch/bin/elasticsearch: line 134: hostname: command not found

> Scheduler@0.0.0 start /opt/log-management/Scheduler
> node scheduler-app.js

> ESExportWrapper@0.0.0 start /opt/log-management/ESExportWrapper
> node app.js
Jobs are registered
[2016-09-28 09:04:24,646][INFO ][bootstrap ] max_open_files [1048576]
[2016-09-28 09:04:24,686][WARN ][bootstrap ] running as ROOT user. this is a bad idea!
Native thread-sleep not available.
This will result in much slower performance, but it will still work.
You should re-install spawn-sync or upgrade to the lastest version of node if possible.
Check /opt/log-management/ESExportWrapper/node_modules/sync-request/node_modules/spawn-sync/error.log for more details
[2016-09-28 09:04:24,874][INFO ][node ] [Kismet Deadly] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-09-28 09:04:24,874][INFO ][node ] [Kismet Deadly] initializing ...
Wed, 28 Sep 2016 09:04:24 GMT express deprecated app.configure: Check app.get('env') in an if statement at lib/express/index.js:60:5
Wed, 28 Sep 2016 09:04:24 GMT connect deprecated multipart: use parser (multiparty, busboy, formidable) npm module instead at node_modules/express/node_modules/connect/lib/middleware/bodyParser.js:56:20
Wed, 28 Sep 2016 09:04:24 GMT connect deprecated limit: Restrict request size at location of read at node_modules/express/node_modules/connect/lib/middleware/multipart.js:86:15
[2016-09-28 09:04:25,399][INFO ][plugins ] [Kismet Deadly] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-09-28 09:04:25,423][INFO ][env ] [Kismet Deadly] using [1] data paths, mounts [[/data (/dev/mapper/platform-data)]], net usable_space [1tb], net total_space [1tb], spins? [possibly], types [xfs]
[2016-09-28 09:04:25,423][INFO ][env ] [Kismet Deadly] heap size [7.8gb], compressed ordinary object pointers [true]
[2016-09-28 09:04:25,455][WARN ][threadpool ] [Kismet Deadly] requested thread pool size [60] for [index] is too large; setting to maximum [24] instead
[2016-09-28 09:04:27,575][INFO ][node ] [Kismet Deadly] initialized
[2016-09-28 09:04:27,575][INFO ][node ] [Kismet Deadly] starting ...
[2016-09-28 09:04:27,695][INFO ][transport ] [Kismet Deadly] publish_address {10.240.118.68:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2016-09-28 09:04:27,700][INFO ][discovery ] [Kismet Deadly] ccs-elasticsearch/q2Sv4FUFROGIdIWJrNENVA

Any clues would be greatly appreciated.

EDIT 1: Since //opt//log-management//elasticsearch/bin/elasticsearch: line 134: hostname: command not found is an error, and the Docker image has no hostname utility, I tried using the uname -n command in ES to get the HOSTNAME. It no longer throws the hostname error, but the problem remains: it does not start. Is this a correct substitute to use?
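To clarify the substitution (a sketch; the exact content of line 134 in bin/elasticsearch may differ from this):

# before -- fails because the image has no hostname binary:
export HOSTNAME=`hostname`
# after -- uname -n prints the kernel node name, which normally matches:
export HOSTNAME=`uname -n`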

One more doubt: the ES 1.7 that is currently up and running also lacks the hostname utility, yet it runs without any problem. Very confusing. Logs after switching to uname -n:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1083  100  1083    0     0  1093k      0 --:--:-- --:--:-- --:--:-- 1057k

> ESExportWrapper@0.0.0 start /opt/log-management/ESExportWrapper
> node app.js


> Scheduler@0.0.0 start /opt/log-management/Scheduler
> node scheduler-app.js

Jobs are registered
[2016-09-30 10:10:37,785][INFO ][bootstrap                ] max_open_files [1048576]
[2016-09-30 10:10:37,822][WARN ][bootstrap                ] running as ROOT user. this is a bad idea!
Native thread-sleep not available.
This will result in much slower performance, but it will still work.
You should re-install spawn-sync or upgrade to the lastest version of node if possible.
Check /opt/log-management/ESExportWrapper/node_modules/sync-request/node_modules/spawn-sync/error.log for more details
[2016-09-30 10:10:37,993][INFO ][node                     ] [Helleyes] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-09-30 10:10:37,993][INFO ][node                     ] [Helleyes] initializing ...
Fri, 30 Sep 2016 10:10:38 GMT express deprecated app.configure: Check app.get('env') in an if statement at lib/express/index.js:60:5
Fri, 30 Sep 2016 10:10:38 GMT connect deprecated multipart: use parser (multiparty, busboy, formidable) npm module instead at node_modules/express/node_modules/connect/lib/middleware/bodyParser.js:56:20
Fri, 30 Sep 2016 10:10:38 GMT connect deprecated limit: Restrict request size at location of read at node_modules/express/node_modules/connect/lib/middleware/multipart.js:86:15
[2016-09-30 10:10:38,435][INFO ][plugins                  ] [Helleyes] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-09-30 10:10:38,455][INFO ][env                      ] [Helleyes] using [1] data paths, mounts [[/data (/dev/mapper/platform-data)]], net usable_space [1tb], net total_space [1tb], spins? [possibly], types [xfs]
[2016-09-30 10:10:38,456][INFO ][env                      ] [Helleyes] heap size [7.8gb], compressed ordinary object pointers [true]
[2016-09-30 10:10:38,483][WARN ][threadpool               ] [Helleyes] requested thread pool size [60] for [index] is too large; setting to maximum [24] instead
[2016-09-30 10:10:40,151][INFO ][node                     ] [Helleyes] initialized
[2016-09-30 10:10:40,152][INFO ][node                     ] [Helleyes] starting ...
[2016-09-30 10:10:40,278][INFO ][transport                ] [Helleyes] publish_address {10.240.118.68:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2016-09-30 10:10:40,283][INFO ][discovery                ] [Helleyes] ccs-elasticsearch/wvVGkhxnTqaa_wS5GGjZBQ
[2016-09-30 10:10:40,360][WARN ][transport.netty          ] [Helleyes] exception caught on transport layer [[id: 0x329b2977, /172.17.0.15:53388 => /10.240.118.69:9300]], closing connection
java.lang.NullPointerException
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:40,360][WARN ][transport.netty          ] [Helleyes] exception caught on transport layer [[id: 0xdf31e5e6, /172.17.0.15:46846 => /10.240.118.70:9300]], closing connection
java.lang.NullPointerException
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:41,798][WARN ][transport.netty          ] [Helleyes] exception caught on transport layer [[id: 0xcff0b2b6, /172.17.0.15:46958 => /10.240.118.70:9300]], closing connection
java.lang.NullPointerException
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:41,800][WARN ][transport.netty          ] [Helleyes] exception caught on transport layer [[id: 0xb47caaf6, /172.17.0.15:53501 => /10.240.118.69:9300]], closing connection
java.lang.NullPointerException
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:43,302][WARN ][transport.netty          ] [Helleyes] exception caught on transport layer [[id: 0x6247aa3f, /172.17.0.15:47057 => /10.240.118.70:9300]], closing connection
java.lang.NullPointerException
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:43,303][WARN ][transport.netty          ] [Helleyes] exception caught on transport layer [[id: 0x1d266aa0, /172.17.0.15:53598 => /10.240.118.69:9300]], closing connection
java.lang.NullPointerException
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:44,807][INFO ][cluster.service          ] [Helleyes] new_master {Helleyes}{wvVGkhxnTqaa_wS5GGjZBQ}{10.240.118.68}{10.240.118.68:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-09-30 10:10:44,852][INFO ][http                     ] [Helleyes] publish_address {10.240.118.68:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2016-09-30 10:10:44,852][INFO ][node                     ] [Helleyes] started
[2016-09-30 10:10:44,984][INFO ][gateway                  ] [Helleyes] recovered [32] indices into cluster_state

Error after the deployment fails:

failed: [10.240.118.68] (item={u'url': u'http://10.240.118.68:9200'}) => {"content": "", "failed": true, "item": {"url": "http://10.240.118.68:9200"}, "msg": "Status code was not [200]: Request failed: <urlopen error [Errno 111] Connection refused>", "redirected": false, "status": -1, "url": "http://10.240.118.68:9200"}

EDIT 2: Even with the hostname utility installed and working, the container does not start. The logs are the same as in EDIT 1.

EDIT 3: The container does start, but it is unreachable at http://nodeip:9200. Of the 3 nodes, only 1 has 2.4; the other 2 still have 1.7, and the 2.4 node is not part of the cluster. Inside the container running 2.4, a curl to localhost:9200 returns the running Elasticsearch response, but it is unreachable from outside.
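To illustrate (the node IP is the publish address from the logs above):

# inside the 2.4 container -- returns the Elasticsearch JSON banner:
curl http://localhost:9200
# from outside the node -- unreachable:
curl http://10.240.118.68:9200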

EDIT 4: I tried running a basic installation of ES 2.4 on the cluster where, in the same setup, ES 1.7 runs fine. I ran the ES migration plugin to check whether the cluster could run ES 2.4, and it came back green. Basic installation details below.

Dockerfile

#Pulling SLES12 thin base image
FROM private-registry-1

#Author
MAINTAINER XYZ

# Pre-requisite - Adding repositories
RUN zypper ar private-registry-2

RUN zypper --no-gpg-checks -n refresh

#Install required packages and dependencies
RUN zypper -n in  net-tools-1.60-764.185 wget-1.14-7.1 python-2.7.9-14.1 python-base-2.7.9-14.1 tar-1.27.1-7.1 

#Downloading elasticsearch executable
ENV ES_VERSION=2.4.0
ENV ES_DIR="//opt//log-management//elasticsearch"
ENV ES_CONFIG_PATH="${ES_DIR}//config"
ENV ES_REST_PORT=9200
ENV ES_INTERNAL_COM_PORT=9300

WORKDIR /opt/log-management
RUN wget private-registry-3/elasticsearch/elasticsearch/${ES_VERSION}.tar/elasticsearch-${ES_VERSION}.tar.gz --no-check-certificate
RUN tar -xzvf ${ES_DIR}-${ES_VERSION}.tar.gz \
&& rm ${ES_DIR}-${ES_VERSION}.tar.gz \
&& mv ${ES_DIR}-${ES_VERSION} ${ES_DIR} 

#Exposing elasticsearch server container port to the HOST
EXPOSE ${ES_REST_PORT} ${ES_INTERNAL_COM_PORT}

#Removing binary files which are not needed
RUN zypper -n rm wget

# Removing zypper repos
RUN zypper rr caspiancs_common

#Running elasticsearch executable
WORKDIR ${ES_DIR}
ENTRYPOINT ${ES_DIR}/bin/elasticsearch -Des.insecure.allow.root=true

Built with:

docker build -t es-test .

1) When running docker run -d --name elasticsearch --net=host -p 9200:9200 -p 9300:9300 es-test, as suggested in one of the comments, and doing curl localhost:9200 inside the container or on the node running it, I get the correct response. I still cannot reach the other nodes of the cluster on port 9200.

2) When running docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 es-test and doing curl localhost:9200 inside the container, it works fine, but on the node it does not, giving me the error

curl: (56) Recv failure: Connection reset by peer

I still cannot reach the other nodes of the cluster on port 9200.

EDIT 5: Using this answer to this question, I got three out of three containers running ES 2.4, but ES fails to form a cluster across them. The network configuration is network.host: 0.0.0.0 and http.port: 9200, plus:

#configure elasticsearch.yml for clustering
echo 'discovery.zen.ping.unicast.hosts: [ELASTICSEARCH_IPS] ' >> ${ES_CONFIG_PATH}/elasticsearch.yml
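For clarity, the resulting elasticsearch.yml (a sketch, with ELASTICSEARCH_IPS already substituted with the node addresses from my setup) looks roughly like this:

network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.240.118.68", "10.240.118.69", "10.240.118.70"]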

The logs obtained with docker logs elasticsearch are as follows:

[2016-10-06 12:31:28,887][WARN ][bootstrap                ] running as ROOT user. this is a bad idea!
[2016-10-06 12:31:29,080][INFO ][node                     ] [Screech] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-10-06 12:31:29,081][INFO ][node                     ] [Screech] initializing ...
[2016-10-06 12:31:29,652][INFO ][plugins                  ] [Screech] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-10-06 12:31:29,684][INFO ][env                      ] [Screech] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [8.7gb], net total_space [9.7gb], spins? [unknown], types [rootfs]
[2016-10-06 12:31:29,684][INFO ][env                      ] [Screech] heap size [989.8mb], compressed ordinary object pointers [true]
[2016-10-06 12:31:29,720][WARN ][threadpool               ] [Screech] requested thread pool size [60] for [index] is too large; setting to maximum [5] instead
[2016-10-06 12:31:31,387][INFO ][node                     ] [Screech] initialized
[2016-10-06 12:31:31,387][INFO ][node                     ] [Screech] starting ...
[2016-10-06 12:31:31,456][INFO ][transport                ] [Screech] publish_address {172.17.0.16:9300}, bound_addresses {[::]:9300}
[2016-10-06 12:31:31,465][INFO ][discovery                ] [Screech] ccs-elasticsearch/YeO41MBIR3uqzZzISwalmw
[2016-10-06 12:31:34,500][WARN ][discovery.zen            ] [Screech] failed to connect to master [{Bobster}{Gh-6yBggRIypr7OuW1tXhA}{172.17.0.15}{172.17.0.15:9300}], retrying...
ConnectTransportException[[Bobster][172.17.0.15:9300] connect_timeout[30s]]; nested: ConnectException[Connection refused: /172.17.0.15:9300];
    at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:1002)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:937)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:911)
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:260)
    at org.elasticsearch.discovery.zen.ZenDiscovery.joinElectedMaster(ZenDiscovery.java:444)
    at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:396)
    at org.elasticsearch.discovery.zen.ZenDiscovery.access$4400(ZenDiscovery.java:96)
    at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1296)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: /172.17.0.15:9300
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)

Whenever I set network.host to the IP address of the host the container runs on, I end up back in the old situation, i.e., only one container running ES 2.4 and the other two running 1.7.

Just saw that docker-proxy is listening on 9300, or at least I think it is:

elasticsearch-server/src/main/docker # netstat -nlp | grep 9300
tcp        0      0 :::9300                 :::*                    LISTEN      6656/docker-proxy   

Any clues?

3 answers in total

许曦
2023-03-14

Try mapping the ports when you start the container, using the -p flag.

Neither EXPOSE nor --expose depends on the host in any way; by default, these rules do not make ports accessible from the host. Given the limitations of the EXPOSE instruction, as a Dockerfile author you should generally include an EXPOSE rule only as a hint about which ports provide services. It is up to the operator of the container to specify further networking rules.

Try mapping the ports when doing docker run, e.g., docker run -p 9200:9200 -p 9300:9300
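For example, with the image built in the question (EXPOSE in the Dockerfile is only a hint; the -p flags do the actual publishing):

docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 es-test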

洪琦
2023-03-14

According to the Elasticsearch 2.x documentation, network.host binds to localhost by default.

You need to explicitly set network.host: 0.0.0.0, as shown below:

Example:

ENTRYPOINT ${ES_DIR}/bin/elasticsearch -Des.insecure.allow.root=true -Des.network.host=0.0.0.0 
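Equivalently (a sketch), the same setting can go into the config file instead of the command line, reusing ES_CONFIG_PATH from the question's Dockerfile:

echo 'network.host: 0.0.0.0' >> ${ES_CONFIG_PATH}/elasticsearch.yml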

邵宜年
2023-03-14

I was able to form the cluster with the following settings:

network.publish_host=CONTAINER_HOST_ADDRESS (i.e., the address of the node on which the container runs)
network.bind_host=0.0.0.0
transport.publish_port=9300
transport.publish_host=CONTAINER_HOST_ADDRESS

transport.publish_port is important when running ES behind a proxy/load balancer such as nginx or HAProxy.
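For example, a sketch reusing the ENTRYPOINT from the question (CONTAINER_HOST_ADDRESS is a placeholder for the node's address, passed in however your build injects it):

ENTRYPOINT ${ES_DIR}/bin/elasticsearch -Des.insecure.allow.root=true \
    -Des.network.bind_host=0.0.0.0 \
    -Des.network.publish_host=${CONTAINER_HOST_ADDRESS} \
    -Des.transport.publish_port=9300 \
    -Des.transport.publish_host=${CONTAINER_HOST_ADDRESS}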
