No logs in Kibana when running the API under Docker

缑赤岩
2023-03-14
Question

Good day. I want to view my logs in Kibana. To do that I use Serilog and run my application, Elasticsearch, and Kibana in Docker. Unfortunately, the logs do not show up in Kibana, and I cannot find the lett-api index in Kibana either.

Here is my Program file:

using System;
using System.Globalization;
using System.IO;
using Microsoft.AspNetCore.Hosting;
using Serilog;
using Serilog.Events;
using Serilog.Formatting.Json;
using Serilog.Sinks.Elasticsearch;
using Serilog.Sinks.File;

public class Program
    {
        public static int Main(string[] args)
        {
            CultureInfo.DefaultThreadCurrentCulture = new CultureInfo("en-GB");
            Log.Logger = new LoggerConfiguration()
                .MinimumLevel.Verbose()
                .Enrich.FromLogContext()
                .MinimumLevel.Override("Microsoft", LogEventLevel.Information)
                .Enrich.WithProperty("app", "Lett.Api")
                .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(
                    new Uri("http://elasticsearch:9200"))
                {
                    AutoRegisterTemplate = true,
                    IndexFormat = "lett-api",
                    FailureCallback = e => Console.WriteLine("Unable to submit event " + e.MessageTemplate),
                    EmitEventFailure = EmitEventFailureHandling.WriteToSelfLog |
                                       EmitEventFailureHandling.WriteToFailureSink |
                                       EmitEventFailureHandling.RaiseCallback,
                    FailureSink = new FileSink("./failures.txt", new JsonFormatter(), null)
                })
                .CreateLogger();


            try
            {
                BuildWebHost(args).Run();
                return 0;
            }
            finally
            {
                Log.CloseAndFlush();
            }

        }

        private static IWebHost BuildWebHost(string[] args)
        {
            return new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseSerilog()
                .ConfigureAppConfiguration((ctx, builder) =>
                {
                    builder
                        .SetBasePath(ctx.HostingEnvironment.ContentRootPath)
                        .AddJsonFile("appsettings.json", true)
                        .AddEnvironmentVariables("Docker:");
                })
                .UseStartup<Startup>()
                .Build();
        }
    }
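A note on the failure handling above: EmitEventFailureHandling.WriteToSelfLog only produces visible output if Serilog's SelfLog has been enabled somewhere; otherwise the sink's own error messages are silently discarded. A minimal sketch, placed at the top of Main before the logger is built:

// Route Serilog's internal diagnostics to stderr so that sink errors
// (e.g. failed connections to http://elasticsearch:9200) show up in
// `docker logs lett-api`.
Serilog.Debugging.SelfLog.Enable(Console.Error);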

My docker-compose file:

version: '3.7'

services: 
  postgres:
    container_name: postgresql
    image: postgres:alpine
    environment:
      - POSTGRES_PASSWORD=12345
      - POSTGRES_USER=postgres
    ports:
      - 5432:5432

  api:
    container_name: lett-api
    image: lett:latest
    restart: on-failure
    build:
      context: .
      dockerfile: ./Lett.Api.Dockerfile
    depends_on:
      - postgres
      - elasticsearch
    ports:
      - 5000:80
    environment:
      Docker:ConnectionString: "Host=postgres;Username=postgres;Password=12345;Database=Lett"

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.zen.minimum_master_nodes=1
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "9200:9200"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    networks:
      - docker-network

  kibana:
    image: docker.elastic.co/kibana/kibana:7.2.0
    container_name: kibana
    depends_on:
      - elasticsearch
    environment:
      elasticsearch.url: "http://elasticsearch:9200"
      elasticsearch.hosts: "http://elasticsearch:9200"
      xpack.security.enabled: "false"
      xpack.monitoring.enabled: "false"
      xpack.ml.enabled: "false"
      xpack.graph.enabled: "false"
      xpack.reporting.enabled: "false"
      xpack.grokdebugger.enabled: "false"
    ports:
      - "5601:5601"
    networks:
      - docker-network

volumes:
  elasticsearch-data:
    driver: local

networks:
  docker-network:
    driver: bridge

However, when I run the application locally (with the Elasticsearch URI set to http://localhost:9200), the lett-api index does appear and the logs are written.

Does anyone know what the problem might be?

Thanks!

Update: I checked the docker output and found the following:

lett-api         | Unable to submit event {HostingRequestStartingLog:l}
lett-api         | Unable to submit event {HostingRequestFinishedLog:l}
lett-api         | Unable to submit event {HostingRequestStartingLog:l}
lett-api         | Unable to submit event {HostingRequestFinishedLog:l}
lett-api         | Unable to submit event {HostingRequestStartingLog:l}
lett-api         | Unable to submit event CORS policy execution successful.
lett-api         | Unable to submit event Route matched with {RouteData}. Executing controller action with signature {MethodInfo} on controller {Controller} ({AssemblyName}).
lett-api         | Unable to submit event Executing action method {ActionName} - Validation state: {ValidationState}
lett-api         | Unable to submit event Executed action method {ActionName}, returned result {ActionResult} in {ElapsedMilliseconds}ms.
lett-api         | Unable to submit event Executing ObjectResult, writing value of type '{Type}'.
lett-api         | Unable to submit event Executed action {ActionName} in {ElapsedMilliseconds}ms
lett-api         | Unable to submit event {HostingRequestFinishedLog:l}
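The FailureCallback only prints the event's message template, so these lines do not say why submission fails. One way to narrow it down is to check from inside the API container whether the elasticsearch hostname is reachable at all (a sketch, assuming getent and curl are available in the image; otherwise use whatever HTTP client the image ships with):

# Does the API container resolve the "elasticsearch" service name?
docker exec lett-api getent hosts elasticsearch

# Can it reach the Elasticsearch HTTP endpoint?
docker exec lett-api curl -s http://elasticsearch:9200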

Answer:

The logs never reach Elasticsearch (and therefore never show up in Kibana) because the lett-api container is not attached to docker-network, so it cannot resolve the elasticsearch hostname.
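This is easy to confirm by inspecting the networks: only elasticsearch and kibana are attached to docker-network, while lett-api sits on the project's default network (a sketch; the actual network names are prefixed with the compose project name, so substitute <project> accordingly):

# Which containers are attached to the user-defined network?
docker network inspect <project>_docker-network

# lett-api appears here instead, on the project's default network:
docker network inspect <project>_default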

Here is the corrected docker-compose file (postgres joins docker-network as well, so the API can still reach the database once it leaves the default network):

version: '3.7'

services: 
  postgres:
    container_name: postgresql
    image: postgres:alpine
    environment:
      - POSTGRES_PASSWORD=12345
      - POSTGRES_USER=postgres
    ports:
      - 5432:5432
    networks:
      - docker-network

  api:
    container_name: lett-api
    image: lett:latest
    restart: on-failure
    build:
      context: .
      dockerfile: ./Lett.Api.Dockerfile
    depends_on:
      - postgres
      - elasticsearch
    ports:
      - 5000:80
    environment:
      Docker:ConnectionString: "Host=postgres;Username=postgres;Password=12345;Database=Lett"
    networks:
      - docker-network

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.zen.minimum_master_nodes=1
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "9200:9200"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    networks:
      - docker-network

  kibana:
    image: docker.elastic.co/kibana/kibana:7.2.0
    container_name: kibana
    depends_on:
      - elasticsearch
    environment:
      elasticsearch.url: "http://elasticsearch:9200"
      elasticsearch.hosts: "http://elasticsearch:9200"
      xpack.security.enabled: "false"
      xpack.monitoring.enabled: "false"
      xpack.ml.enabled: "false"
      xpack.graph.enabled: "false"
      xpack.reporting.enabled: "false"
      xpack.grokdebugger.enabled: "false"
    ports:
      - "5601:5601"
    networks:
      - docker-network

volumes:
  elasticsearch-data:
    driver: local

networks:
  docker-network:
    driver: bridge
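After recreating the containers with the file above, the lett-api index should appear after a few requests. One way to verify from the host, since port 9200 is published (a sketch; the wildcard also covers any date-suffixed index names):

# Recreate the API container so it joins docker-network
docker-compose up -d --force-recreate api

# Check that the index now exists in Elasticsearch
curl 'http://localhost:9200/_cat/indices/lett-api*'

In Kibana itself, an index pattern matching lett-api* still has to be created under Management before the documents show up in Discover.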

