To do this, I construct a SharedIndexInformer to properly watch the resource events emitted by the cluster. I'll use Pods as the example resource here. The SharedIndexInformer is constructed with the following code:
SharedIndexInformer<Pod> sharedIndexInformer = kubernetesClient.informers().sharedIndexInformerFor(
objectClass,
objectClassList,
10 * 60 * 1000);
After that there is quite a bit of code to attach event handlers, start the informer, run a reconciliation loop, and so on, roughly along the lines of the sketch below.
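Simplified sketch of the wiring (same construction as above; the reconciliation loop itself is omitted, this is just the shape of the code, not the exact code):

import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodList;
import io.fabric8.kubernetes.client.informers.ResourceEventHandler;
import io.fabric8.kubernetes.client.informers.SharedIndexInformer;
import io.fabric8.kubernetes.client.informers.SharedInformerFactory;

// Keep a handle on the factory so the registered informer can be started afterwards.
SharedInformerFactory sharedInformerFactory = kubernetesClient.informers();
SharedIndexInformer<Pod> sharedIndexInformer = sharedInformerFactory.sharedIndexInformerFor(
    Pod.class,
    PodList.class,
    10 * 60 * 1000);

sharedIndexInformer.addEventHandler(new ResourceEventHandler<Pod>() {
    @Override
    public void onAdd(Pod pod) {
        System.out.println("ADDED: " + pod.getMetadata().getNamespace() + "/" + pod.getMetadata().getName());
    }

    @Override
    public void onUpdate(Pod oldPod, Pod newPod) {
        // enqueue the key for the reconciliation loop (omitted here)
    }

    @Override
    public void onDelete(Pod pod, boolean deletedFinalStateUnknown) {
        // enqueue the key for the reconciliation loop (omitted here)
    }
});

sharedInformerFactory.startAllRegisteredInformers();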
When started from my local machine, the informer works perfectly and I see all the pods of the cluster listed. However, when I run it in a pod inside the cluster (with RBAC correctly defined), I only see the pods of the namespace the pod is running in.
Using kubectl from within the pod, I explicitly checked that the service account involved is able to list all the pods of the cluster, not only those of the current namespace.
What am I missing?
Thanks in advance for your help!
I think this is caused by the different ways KubernetesClient creates its Config outside a Kubernetes cluster versus inside a Pod. In the former case, KubernetesClient usually reads ~/.kube/config, and connection information such as the token and the namespace is picked up from the current context of the ~/.kube/config file.
However, when KubernetesClient is inside a Pod, it picks up the connection Config from the mounted ServiceAccount, see Config.java. The bearer token is picked up from /var/run/secrets/kubernetes.io/serviceaccount/token, and the default namespace used for namespaced API operations is picked up from /var/run/secrets/kubernetes.io/serviceaccount/namespace. You can find more about this in the Kubernetes docs: Accessing the API from a Pod. I think KubernetesClient is picking up this namespace while loading the Config.
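As a quick sanity check, you can print which namespace the client actually resolved; a minimal sketch (this is what the k8s.getConfiguration().getNamespace() line in the log further below corresponds to):

import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

try (KubernetesClient client = new DefaultKubernetesClient()) {
    // Outside the cluster this comes from the current kubeconfig context;
    // inside a Pod it is read from /var/run/secrets/kubernetes.io/serviceaccount/namespace.
    System.out.println("Resolved namespace: " + client.getConfiguration().getNamespace());
}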
I don't think KubernetesClient is handling this case correctly; it should be fixed there. There is already an issue filed for it: https://github.com/fabric8io/kubernetes-client/issues/2514
I'm not sure whether informers can currently detect whether they are running inside or outside the cluster (that is only known once we load the Config). For now, informers offer a way to specify the namespace using an OperationContext:
SharedInformerFactory sharedInformerFactory = client.informers();
SharedIndexInformer<Pod> podInformer = sharedInformerFactory.sharedIndexInformerFor(
Pod.class,
PodList.class,
new OperationContext().withNamespace("default"),
30 * 1000L);
Perhaps, in order to override the namespace loaded from the ServiceAccount, we could allow setting a null namespace:
SharedIndexInformer<Pod> podInformer = sharedInformerFactory.sharedIndexInformerFor(
Pod.class,
PodList.class,
new OperationContext().withNamespace(null), // -> Doesn't work; Ideally should Watch in all namespaces,
30 * 1000L);
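Until that is fixed, one thing that might be worth trying (I have not verified that informers honor it) is building the Config yourself and clearing the namespace that was auto-detected from the ServiceAccount before creating the client; just a sketch:

import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

// Unverified workaround sketch: start from the auto-detected Config and drop its namespace,
// so that the client is not pinned to the Pod's namespace.
Config config = new ConfigBuilder(Config.autoConfigure(null))
        .withNamespace(null)
        .build();
KubernetesClient client = new DefaultKubernetesClient(config);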
Update:
fabric8-kubernetes-java-informers-in-pod : $ mvn k8s:log
[INFO] Scanning for projects...
[INFO]
[INFO] --------< org.example:fabric8-kubernetes-java-informers-in-pod >--------
[INFO] Building fabric8-kubernetes-java-informers-in-pod 1.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- kubernetes-maven-plugin:1.0.2:log (default-cli) @ fabric8-kubernetes-java-informers-in-pod ---
[INFO] k8s: Using Kubernetes at https://192.168.39.24:8443/ in namespace default with manifest /home/rohaan/work/repos/fabric8-kubernetes-java-informers-in-pod/target/classes/META-INF/jkube/kubernetes.yml
[INFO] k8s: Using namespace: default
[INFO] k8s: Watching pods with selector LabelSelector(matchExpressions=[], matchLabels={app=fabric8-kubernetes-java-informers-in-pod, provider=jkube, group=org.example}, additionalProperties={}) waiting for a running pod...
[INFO] k8s: [NEW] fabric8-kubernetes-java-informers-in-pod-6f957b6b59-tpbgd status: Running Ready
[INFO] k8s: [NEW] Tailing log of pod: fabric8-kubernetes-java-informers-in-pod-6f957b6b59-tpbgd
[INFO] k8s: [NEW] Press Ctrl-C to stop tailing the log
[INFO] k8s: [NEW]
[INFO] k8s: Starting the Java application using /opt/jboss/container/java/run/run-java.sh ...
[INFO] k8s: INFO exec java -javaagent:/usr/share/java/jolokia-jvm-agent/jolokia-jvm.jar=config=/opt/jboss/container/jolokia/etc/jolokia.properties -javaagent:/usr/share/java/prometheus-jmx-exporter/jmx_prometheus_javaagent.jar=9779:/opt/jboss/container/prometheus/etc/jmx-exporter-config.yaml -XX:+UseParallelOldGC -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -XX:MaxMetaspaceSize=100m -XX:+ExitOnOutOfMemoryError -cp "." -jar /deployments/fabric8-kubernetes-java-informers-in-pod-1.0-SNAPSHOT-jar-with-dependencies.jar
[INFO] k8s: SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
[INFO] k8s: SLF4J: Defaulting to no-operation (NOP) logger implementation
[INFO] k8s: SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
[INFO] k8s: WARNING: An illegal reflective access operation has occurred
[INFO] k8s: WARNING: Illegal reflective access by org.jolokia.util.ClassUtil (file:/usr/share/java/jolokia-jvm-agent/jolokia-jvm.jar) to constructor sun.security.x509.X500Name(java.lang.String,java.lang.String,java.lang.String,java.lang.String,java.lang.String,java.lang.String)
[INFO] k8s: WARNING: Please consider reporting this to the maintainers of org.jolokia.util.ClassUtil
[INFO] k8s: WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
[INFO] k8s: WARNING: All illegal access operations will be denied in a future release
[INFO] k8s: Nov 10, 2020 5:37:50 PM io.fabric8.testing.SimpleSharedInformerRun main
[INFO] k8s: INFO: k8s.getConfiguration().getNamespace(): default
[INFO] k8s: I> No access restrictor found, access to any MBean is allowed
[INFO] k8s: Jolokia: Agent started with URL https://172.17.0.6:8778/jolokia/
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onAdd
[INFO] k8s: INFO: ADDED: default/fabric8-kubernetes-java-informers-in-pod-6f957b6b59-tpbgd
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onAdd
[INFO] k8s: INFO: ADDED: istio-system/istio-ingressgateway-64cfb9d44b-kk5ft
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onAdd
[INFO] k8s: INFO: ADDED: istio-system/istiod-7684b696d6-fhzwt
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onAdd
[INFO] k8s: INFO: ADDED: kube-system/coredns-f9fd979d6-g4htj
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onAdd
[INFO] k8s: INFO: ADDED: kube-system/etcd-minikube
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onAdd
[INFO] k8s: INFO: ADDED: kube-system/kube-apiserver-minikube
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onAdd
[INFO] k8s: INFO: ADDED: kube-system/kube-controller-manager-minikube
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onAdd
[INFO] k8s: INFO: ADDED: kube-system/kube-proxy-tpsrg
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onAdd
[INFO] k8s: INFO: ADDED: kube-system/kube-scheduler-minikube
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onAdd
[INFO] k8s: INFO: ADDED: kube-system/metrics-server-d9b576748-4w6jz
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onAdd
[INFO] k8s: INFO: ADDED: kube-system/storage-provisioner
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onAdd
[INFO] k8s: INFO: ADDED: rokumar/multi-container-pod
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onUpdate
[INFO] k8s: INFO: UPDATED: default/fabric8-kubernetes-java-informers-in-pod-6f957b6b59-tpbgd
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onUpdate
[INFO] k8s: INFO: UPDATED: istio-system/istio-ingressgateway-64cfb9d44b-kk5ft
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onUpdate
[INFO] k8s: INFO: UPDATED: istio-system/istiod-7684b696d6-fhzwt
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onUpdate
[INFO] k8s: INFO: UPDATED: kube-system/coredns-f9fd979d6-g4htj
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onUpdate
[INFO] k8s: INFO: UPDATED: kube-system/etcd-minikube
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onUpdate
[INFO] k8s: INFO: UPDATED: kube-system/kube-apiserver-minikube
[INFO] k8s: [NEW] fabric8-kubernetes-java-informers-in-pod-6f957b6b59-tpbgd status: Running Ready
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onUpdate
[INFO] k8s: INFO: UPDATED: kube-system/kube-controller-manager-minikube
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onUpdate
[INFO] k8s: INFO: UPDATED: kube-system/kube-proxy-tpsrg
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onUpdate
[INFO] k8s: INFO: UPDATED: kube-system/kube-scheduler-minikube
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onUpdate
[INFO] k8s: INFO: UPDATED: kube-system/metrics-server-d9b576748-4w6jz
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onUpdate
[INFO] k8s: INFO: UPDATED: kube-system/storage-provisioner
[INFO] k8s: Nov 10, 2020 5:37:52 PM io.fabric8.testing.SimpleSharedInformerRun$1 onUpdate
[INFO] k8s: INFO: UPDATED: rokumar/multi-container-pod