https://github.com/AliyunContainerService/log-pilot
docker run -itd \
--name log-pilot \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /:/host:ro \
-e PILOT_TYPE=filebeat \
-e LOGGING_OUTPUT=logstash \
-e LOGSTASH_HOST=localhost \
-e LOGSTASH_PORT=5044 \
--privileged \
--restart=always \
--net=host \
registry.cn-hangzhou.aliyuncs.com/acs-sample/log-pilot:latest
docker run -it --rm -p 10080:8080 \
-v /usr/local/tomcat/logs \
--label aliyun.logs.catalina=stdout \
--label aliyun.logs.access=/usr/local/tomcat/logs/localhost_access_log.*.txt \
tomcat
When starting Tomcat, we declare the following two labels, which tell log-pilot where this container's logs live:
--label aliyun.logs.catalina=stdout
--label aliyun.logs.access=/usr/local/tomcat/logs/localhost_access_log.*.txt
You can add more labels to an application container:
aliyun.logs.$name = $path
The variable name is the log name; it may contain only the characters 0-9, a-z, A-Z, _ and -.
The variable path is the path of the log to collect. It must point to a file, not just a directory, although the file-name part may contain wildcards: /var/log/he.log and /var/log/*.log are both valid values, but /var/log is not, because it stops at the directory. stdout is a special value meaning standard output.
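The name rule can be checked with a small shell sketch (valid_name is a hypothetical helper for illustration, not part of log-pilot):

```shell
# Hypothetical checker mirroring log-pilot's rule for $name:
# only 0-9, a-z, A-Z, _ and - are allowed.
valid_name() { printf '%s' "$1" | grep -Eq '^[0-9A-Za-z_-]+$'; }

valid_name access     && echo "access: ok"
valid_name access-log && echo "access-log: ok"
valid_name "app.log"  || echo "app.log: rejected (dot not allowed)"
```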
aliyun.logs.$name.format: the log format. Currently supported values:
none: plain text with no structure
json: JSON format, one complete JSON string per line
csv: CSV format
aliyun.logs.$name.tags:
Extra fields added when the logs are shipped, written as k1=v1,k2=v2 with the key-value pairs separated by commas. For example,
aliyun.logs.access.tags="name=hello,stage=test" makes a name field and a stage field appear in the stored logs.
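The k1=v1,k2=v2 tag string splits cleanly on commas; a plain-shell sketch (not log-pilot's own code):

```shell
# Split an aliyun.logs.$name.tags value into one key=value field per line.
tags="name=hello,stage=test"
printf '%s\n' "$tags" | tr ',' '\n'
```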
When Elasticsearch is the log store, the target tag has a special meaning: it names the index in Elasticsearch.
If the application carries an aliyun.logs.tags label and the tags include target, that target value is used as the Elasticsearch index. Otherwise, the XXX part of the aliyun.logs.XXX label is used as the index.
In the Tomcat example above we did not use the aliyun.logs.tags label, so access and catalina became the indexes by default. Let's create the access index first.
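The index-selection rule above can be sketched in shell (pick_index is a hypothetical helper; log-pilot applies this logic internally):

```shell
# Pick the Elasticsearch index as described: use the value of the target tag
# when present, otherwise fall back to the log name from aliyun.logs.$name.
pick_index() {
  name=$1; tags=$2
  target=$(printf '%s' "$tags" | tr ',' '\n' | sed -n 's/^target=//p')
  printf '%s\n' "${target:-$name}"
}

pick_index access "from=tomcat,target=tomcat_access_log"   # -> tomcat_access_log
pick_index catalina ""                                     # -> catalina
```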
version: '3.6'
volumes:
  esdata:
    driver: local
  # beatdata:
  #   driver: local
networks:
  esnet:
    driver: overlay
    # attachable: true
configs:
  logstash_conf:
    file: ./logstash/logstash.conf
  kibana_config:
    file: ./kibana/kibana.yml
  es_proxy_config:
    file: ./es_proxy/nginx.conf
  # filebeat_config:
  #   file: ./filebeat/filebeat.yml
services:
  elasticsearch:
    image: hub.c.163.com/muxiyue/elasticsearch:6.4.0
    # hostname: elasticsearch
    environment:
      - "cluster.name=es-cluster"
      - "bootstrap.memory_lock=true"
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
      - "network.host=0.0.0.0"
      - "discovery.zen.minimum_master_nodes=2"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
      - "ELASTIC_PASSWORD=elastic"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - esnet
    volumes:
      - esdata:/usr/share/elasticsearch/data
      - /etc/localtime:/etc/localtime:ro
    deploy:
      mode: global
      placement:
        constraints:
          - node.labels.elasticsearch == elasticsearch
      restart_policy:
        condition: on-failure
      endpoint_mode: dnsrr
    # ports:
    #   - "9200:9200"
    #   - "9300:9300"
  logstash:
    image: hub.c.163.com/muxiyue/logstash:6.4.0
    hostname: logstash
    environment:
      - "xpack.monitoring.elasticsearch.url=http://elasticsearch:9200"
      - "xpack.monitoring.enabled=true"
      - "xpack.monitoring.elasticsearch.username=elastic"
      - "xpack.monitoring.elasticsearch.password=elastic"
      - "LS_JAVA_OPTS=-Xmx2g"
    volumes:
      - /etc/localtime:/etc/localtime:ro
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 4096M
      placement:
        constraints:
          - node.labels.logstash == logstash
      mode: replicated
      replicas: 1
      restart_policy:
        condition: on-failure
    ports:
      - 5044:5044
    networks:
      - esnet
    configs:
      - source: logstash_conf
        target: /usr/share/logstash/pipeline/logstash.conf
    depends_on:
      - elasticsearch
  kibana:
    image: hub.c.163.com/muxiyue/kibana:6.4.0
    hostname: kibana
    environment:
      - "ELASTICSEARCH_URL=http://elasticsearch:9200"
    ports:
      - "5601:5601"
    volumes:
      - /etc/localtime:/etc/localtime:ro
    deploy:
      placement:
        constraints:
          - node.role == manager
          - node.labels.kibana == kibana
      restart_policy:
        condition: on-failure
    depends_on:
      - elasticsearch
    networks:
      - esnet
    configs:
      - source: kibana_config
        target: /usr/share/kibana/config/kibana.yml
  es_proxy:
    image: hub.c.163.com/library/nginx:1.13.0
    ports:
      - "9200:80"
    depends_on:
      - elasticsearch
    networks:
      - esnet
    volumes:
      - /etc/localtime:/etc/localtime:ro
    deploy:
      replicas: 1
      resources:
        limits:
          cpus: '1'
          memory: 1024M
      update_config:
        parallelism: 1
        delay: 5s
      placement:
        constraints:
          - node.role != manager
      restart_policy:
        condition: on-failure
    configs:
      - source: es_proxy_config
        target: /etc/nginx/nginx.conf
The max_map_count file limits the number of VMAs (virtual memory areas) a process may own. A VMA is a contiguous region of virtual address space; such regions are created over a process's lifetime whenever the program maps a file into memory, links to a shared memory segment, or allocates heap space. Tuning this value caps the number of VMAs a process can own. That cap can make applications fail: when a process reaches its VMA ceiling but can release only a little memory back to other kernel processes, the operating system raises an out-of-memory error. If your system uses only a small amount of memory in the NORMAL zone, lowering this value can free memory for the kernel.
Elasticsearch requires a minimum of 262144; with a smaller value it will not start.
Run the following commands on each of the three ES nodes; sysctl -w raises the limit immediately, and the /etc/sysctl.conf entry makes it persist across reboots:
sysctl -w vm.max_map_count=262144
echo -e 'vm.max_map_count=262144' >> /etc/sysctl.conf
sysctl -p
more /proc/sys/vm/max_map_count
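The check can also be scripted (check_max_map_count is a hypothetical helper; 262144 is the Elasticsearch minimum from above):

```shell
# Warn when vm.max_map_count is below Elasticsearch's required minimum.
check_max_map_count() {
  if [ "$1" -ge 262144 ]; then
    echo "ok ($1)"
  else
    echo "too low ($1): run sysctl -w vm.max_map_count=262144"
  fi
}

check_max_map_count "$(cat /proc/sys/vm/max_map_count)"
```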
In swarm mode the compose-file ulimits setting has no effect, so extra handling is needed.
Edit the Docker daemon's systemd unit and set LimitMEMLOCK=infinity:
vi /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
LimitMEMLOCK=infinity
systemctl daemon-reload
service docker restart
echo '* soft nproc 65536
* hard nproc 65536
* soft nofile 65536
* hard nofile 65536
* soft memlock unlimited
* hard memlock unlimited' >> /etc/security/limits.conf
cat /etc/security/limits.conf
sudo swapoff -a   # turn swap off now; to disable it permanently, also edit /etc/fstab
## show partition information
sfdisk -l
## turn swap back on
swapon -a
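To keep swap off after a reboot, comment out the swap entries in /etc/fstab. A sketch against a throwaway demo file with hypothetical device names (on a real node, apply the same sed to /etc/fstab after backing it up):

```shell
# Demo file standing in for /etc/fstab.
printf '/dev/sda1 / ext4 defaults 0 1\n/dev/sda2 swap swap defaults 0 0\n' > fstab.demo
# Comment out every uncommented line that mounts a swap area.
sed -ri 's|^([^#].*[[:space:]]swap[[:space:]].*)$|# \1|' fstab.demo
cat fstab.demo
```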
sudo sysctl vm.swappiness=0
echo -e 'vm.swappiness=0' >> /etc/sysctl.conf
sysctl -p
more /proc/sys/vm/swappiness
docker node update --label-add elk=elk --label-add logstash=logstash node146
docker node update --label-add elk=elk --label-add elasticsearch=elasticsearch node136
docker node update --label-add elk=elk --label-add elasticsearch=elasticsearch node137
docker node update --label-add elk=elk --label-add elasticsearch=elasticsearch node135
docker node update --label-add elk=elk --label-add kibana=kibana node191
docker node update --label-add elk=elk --label-add apmserver=apmserver node190
docker stack deploy -c /root/elk/docker-compose.yml elk
docker service create --name tomcat-logs-test --replicas=2 \
--publish 10080:8080 \
--mount type=volume,destination=/usr/local/tomcat/logs \
--container-label aliyun.logs.catalina=stdout \
--container-label aliyun.logs.access=/usr/local/tomcat/logs/localhost_access_log.*.txt \
--container-label aliyun.logs.access.tags="from=tomcat,target=tomcat_access_log" \
tomcat
Visit http://xx.xx.xx.xx:10080
Visit http://xx.xx.xx.xx:5601
Add an index pattern in Kibana and the collected log entries will show up.
Kibana reference: https://www.elastic.co/guide/cn/kibana/current/index.html