Kibana single sign-on with OpenID and Keycloak. I have configured the setup following the Open Distro documentation: https://opendistro.github.io/for-elasticsearch-docs/docs/security-configuration/openid-connect/
docker-compose.yml
version: '3'
services:
  elasticsearch:
    image: amazon/opendistro-for-elasticsearch:0.7.0
    container_name: odfe-elasticsearch
    environment:
      - discovery.type=single-node
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - odfe-data1:/usr/share/elasticsearch/data
      - ./elastisearch-opendistro-sec/config.yml:/usr/share/elasticsearch/plugins/opendistro_security/securityconfig/config.yml
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - odfe-net
  kibana:
    image: amazon/opendistro-for-elasticsearch-kibana:0.7.0
    container_name: odfe-kibana
    ports:
      - 5601:5601
    volumes:
      - ./kibana-opendistro-sec/kibana.yml:/usr/share/kibana/config/kibana.yml
    expose:
      - "5601"
    environment:
      ELASTICSEARCH_URL: https://odfe-elasticsearch:9200
      ELASTICSEARCH_HOSTS: https://odfe-elasticsearch:9200
    networks:
      - odfe-net
volumes:
  odfe-data1:
networks:
  odfe-net:
keycloak-compose.yml
version: '3'
services:
  mysql:
    image: mysql:5.7
    volumes:
      - mysql_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: keycloak
      MYSQL_USER: keycloak
      MYSQL_PASSWORD: password
    networks:
      - odfe-net
  keycloak:
    image: jboss/keycloak
    environment:
      DB_VENDOR: MYSQL
      DB_ADDR: mysql
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_PASSWORD: password
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
    networks:
      - odfe-net
    ports:
      - 8080:8080
    depends_on:
      - mysql
volumes:
  mysql_data:
networks:
  odfe-net:
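One detail worth double-checking with this two-file setup: each compose file that declares `odfe-net` under its own top-level `networks:` key gets a separate, project-prefixed network (for example `odfe_odfe-net` vs. `keycloak_odfe-net`), so the Keycloak container may not actually be reachable from Kibana at all, despite both files naming the network identically. A minimal sketch of one way to share the network, assuming the ODFE stack is started first (the network name below is an assumption; check the real name with `docker network ls`):

```yaml
# keycloak-compose.yml -- join the network created by the ODFE compose
# project instead of creating a second, isolated one. Requires compose
# file format 3.5+ for the "name" key; on older formats use
# "external: { name: ... }" instead.
networks:
  odfe-net:
    external: true
    name: odfe_odfe-net  # hypothetical; replace with the actual network name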
config.yml
opendistro_security:
  dynamic:
    authc:
      basic_internal_auth_domain:
        enabled: true
        order: 0
        http_authenticator:
          type: basic
          challenge: false
        authentication_backend:
          type: internal
      openid_auth_domain:
        enabled: true
        order: 1
        http_authenticator:
          type: openid
          challenge: false
          config:
            subject_key: preferred_username
            roles_key: roles
            openid_connect_url: http://172.29.0.3:8080/auth/realms/master/.well-known/openid-configuration
        authentication_backend:
          type: noop
kibana.yml
opendistro_security.auth.type: "openid"
opendistro_security.openid.connect_url: "http://172.29.0.3:8080/auth/realms/master/.well-known/openid-configuration"
opendistro_security.openid.client_id: "kibana-sso"
opendistro_security.openid.client_secret: "841d796a-bc3a-4cc8-9fb9-bed6221f66b4"
elasticsearch.url: "https://odfe-elasticsearch:9200"
elasticsearch.username: "kibanaserver"
elasticsearch.password: "kibanaserver"
elasticsearch.ssl.verificationMode: none
elasticsearch.requestHeadersWhitelist: ["Authorization", "security_tenant"]
"Client request error: connect ECONNREFUSED 127.0.0.1:8080"}
odfe-kibana | /usr/share/kibana/plugins/opendistro_security/lib/auth/types/openid/OpenId.js:151
odfe-kibana | throw new Error('Failed when trying to obtain the endpoints from your IdP');
Kibana is running at localhost:5601, but when I try to load the page in the browser I get ERR_EMPTY_RESPONSE.
Here are the logs:
odfe-kibana | {"type":"log","@timestamp":"2019-06-27T23:00:54Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: https://odfe-elasticsearch:9200/"}
odfe-kibana | {"type":"log","@timestamp":"2019-06-27T23:00:54Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
odfe-elasticsearch | [2019-06-27T23:00:54,613][INFO ][c.a.o.e.p.h.c.PerformanceAnalyzerConfigAction] [8EPY7C_] PerformanceAnalyzer Enabled: true
odfe-elasticsearch | Registering Handler
odfe-elasticsearch | [2019-06-27T23:00:54,687][INFO ][o.e.n.Node ] [8EPY7C_] initialized
odfe-elasticsearch | [2019-06-27T23:00:54,687][INFO ][o.e.n.Node ] [8EPY7C_] starting ...
odfe-elasticsearch | [2019-06-27T23:00:54,918][INFO ][o.e.t.TransportService ] [8EPY7C_] publish_address {172.29.0.5:9300}, bound_addresses {0.0.0.0:9300}
odfe-elasticsearch | [2019-06-27T23:00:54,967][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [8EPY7C_] Check if .opendistro_security index exists ...
odfe-elasticsearch | [2019-06-27T23:00:55,064][INFO ][c.a.o.s.h.OpenDistroSecurityHttpServerTransport] [8EPY7C_] publish_address {172.29.0.5:9200}, bound_addresses {0.0.0.0:9200}
odfe-elasticsearch | [2019-06-27T23:00:55,067][INFO ][o.e.n.Node ] [8EPY7C_] started
odfe-elasticsearch | [2019-06-27T23:00:55,070][INFO ][c.a.o.s.OpenDistroSecurityPlugin] [8EPY7C_] 4 Open Distro Security modules loaded so far: [Module [type=AUDITLOG, implementing class=com.amazon.opendistroforelasticsearch.security.auditlog.impl.AuditLogImpl], Module [type=MULTITENANCY, implementing class=com.amazon.opendistroforelasticsearch.security.configuration.PrivilegesInterceptorImpl], Module [type=DLSFLS, implementing class=com.amazon.opendistroforelasticsearch.security.configuration.OpenDistroSecurityFlsDlsIndexSearcherWrapper], Module [type=REST_MANAGEMENT_API, implementing class=com.amazon.opendistroforelasticsearch.security.dlic.rest.api.OpenDistroSecurityRestApiActions]]
odfe-elasticsearch | [2019-06-27T23:00:55,558][INFO ][o.e.g.GatewayService ] [8EPY7C_] recovered [2] indices into cluster_state
odfe-elasticsearch | [2019-06-27T23:00:56,394][INFO ][o.e.c.r.a.AllocationService] [8EPY7C_] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.opendistro_security][0]] ...]).
odfe-elasticsearch | [2019-06-27T23:00:56,684][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [8EPY7C_] Node '8EPY7C_' initialized
odfe-kibana | {"type":"log","@timestamp":"2019-06-27T23:00:57Z","tags":["status","plugin:elasticsearch@6.5.4","info"],"pid":1,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at https://odfe-elasticsearch:9200/."}
odfe-kibana | {"type":"log","@timestamp":"2019-06-27T23:00:57Z","tags":["listening","info"],"pid":1,"message":"Server running at http://localhost:5601"}
I would not use "localhost" with Keycloak and Docker, especially when running Docker on a Mac.
The error you are seeing ("connect ECONNREFUSED 127.0.0.1:8080") means Kibana is trying to connect to itself (its own Docker container) on port 8080. This can be confusing, but "localhost" has a very specific meaning on every machine, and remember that each Docker container is its own machine. Instead, you want Kibana to connect from the Docker network to your host on port 8080.
To do that, I suggest using "127.0.0.1.xip.io" (look up xip.io to see what it does) as the domain name. You may also need to configure this address with "extra_hosts" in your docker-compose file.
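A minimal sketch of the two changes this implies, assuming the host is reachable from containers at 192.168.65.2 (a hypothetical Docker-for-Mac gateway address; substitute the address of your Docker host as seen from inside a container):

```yaml
# docker-compose.yml -- map the xip.io name to the host's address
# so the kibana container resolves it to your machine, not to itself.
  kibana:
    extra_hosts:
      - "127.0.0.1.xip.io:192.168.65.2"  # hypothetical host IP

# kibana.yml -- point the OpenID discovery URL at that name instead of
# a container-internal IP; config.yml's openid_connect_url should match.
# opendistro_security.openid.connect_url: "http://127.0.0.1.xip.io:8080/auth/realms/master/.well-known/openid-configuration"
```

The same hostname should then also be used when you open Keycloak in the browser, so that the redirect URLs Keycloak issues resolve both from your browser and from inside the containers.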