I have the following setup:
A docker image omg/telperion on Docker Hub.
A Kubernetes cluster (4 nodes, each with ~50GB RAM) and plenty of resources.
I followed a tutorial to pull the image from Docker Hub into Kubernetes:
SERVICE_NAME=telperion
DOCKER_SERVER="https://index.docker.io/v1/"
DOCKER_USERNAME=username
DOCKER_PASSWORD=password
DOCKER_EMAIL="omg@whatever.com"
# Create secret
kubectl create secret docker-registry dockerhub --docker-server=$DOCKER_SERVER --docker-username=$DOCKER_USERNAME --docker-password=$DOCKER_PASSWORD --docker-email=$DOCKER_EMAIL
# Create service yaml
cat > $SERVICE_NAME.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: ${SERVICE_NAME}
spec:
  containers:
  - name: ${SERVICE_NAME}
    image: omg/${SERVICE_NAME}
    imagePullPolicy: Always
    command: [ "echo", "done deploying ${SERVICE_NAME}" ]
  imagePullSecrets:
  - name: dockerhub
EOF
# Deploy to kubernetes
kubectl create -f $SERVICE_NAME.yaml
This results in the pod entering CrashLoopBackOff.
docker run -it -p 8080:9546 omg/telperion works fine.
So my question is: can this be debugged, and if so, how?
Some logs:
kubectl get nodes
NAME STATUS AGE VERSION
k8s-agent-adb12ed9-0 Ready 22h v1.6.6
k8s-agent-adb12ed9-1 Ready 22h v1.6.6
k8s-agent-adb12ed9-2 Ready 22h v1.6.6
k8s-master-adb12ed9-0 Ready,SchedulingDisabled 22h v1.6.6
.
kubectl get pods
NAME READY STATUS RESTARTS AGE
telperion 0/1 CrashLoopBackOff 10 28m
.
kubectl describe pod telperion
Name: telperion
Namespace: default
Node: k8s-agent-adb12ed9-2/10.240.0.4
Start Time: Wed, 21 Jun 2017 10:18:23 +0000
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.244.1.4
Controllers: <none>
Containers:
telperion:
Container ID: docker://c2dd021b3d619d1d4e2afafd7a71070e1e43132563fdc370e75008c0b876d567
Image: omg/telperion
Image ID: docker-pullable://omg/telperion@sha256:c7e3beb0457b33cd2043c62ea7b11ae44a5629a5279a88c086ff4853828a6d96
Port:
Command:
echo
done deploying telperion
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 21 Jun 2017 10:19:25 +0000
Finished: Wed, 21 Jun 2017 10:19:25 +0000
Ready: False
Restart Count: 3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-n7ll0 (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-n7ll0:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-n7ll0
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 default-scheduler Normal Scheduled Successfully assigned telperion to k8s-agent-adb12ed9-2
1m 1m 1 kubelet, k8s-agent-adb12ed9-2 spec.containers{telperion} Normal Created Created container with id d9aa21fd16b682698235e49adf80366f90d02628e7ed5d40a6e046aaaf7bf774
1m 1m 1 kubelet, k8s-agent-adb12ed9-2 spec.containers{telperion} Normal Started Started container with id d9aa21fd16b682698235e49adf80366f90d02628e7ed5d40a6e046aaaf7bf774
1m 1m 1 kubelet, k8s-agent-adb12ed9-2 spec.containers{telperion} Normal Started Started container with id c6c8f61016b06d0488e16bbac0c9285fed744b933112fd5d116e3e41c86db919
1m 1m 1 kubelet, k8s-agent-adb12ed9-2 spec.containers{telperion} Normal Created Created container with id c6c8f61016b06d0488e16bbac0c9285fed744b933112fd5d116e3e41c86db919
1m 1m 2 kubelet, k8s-agent-adb12ed9-2 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "telperion" with CrashLoopBackOff: "Back-off 10s restarting failed container=telperion pod=telperion_default(f4e36a12-566a-11e7-99a6-000d3aa32f49)"
1m 1m 1 kubelet, k8s-agent-adb12ed9-2 spec.containers{telperion} Normal Started Started container with id 3b911f1273518b380bfcbc71c9b7b770826c0ce884ac876fdb208e7c952a4631
1m 1m 1 kubelet, k8s-agent-adb12ed9-2 spec.containers{telperion} Normal Created Created container with id 3b911f1273518b380bfcbc71c9b7b770826c0ce884ac876fdb208e7c952a4631
1m 1m 2 kubelet, k8s-agent-adb12ed9-2 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "telperion" with CrashLoopBackOff: "Back-off 20s restarting failed container=telperion pod=telperion_default(f4e36a12-566a-11e7-99a6-000d3aa32f49)"
1m 50s 4 kubelet, k8s-agent-adb12ed9-2 spec.containers{telperion} Normal Pulling pulling image "omg/telperion"
47s 47s 1 kubelet, k8s-agent-adb12ed9-2 spec.containers{telperion} Normal Started Started container with id c2dd021b3d619d1d4e2afafd7a71070e1e43132563fdc370e75008c0b876d567
1m 47s 4 kubelet, k8s-agent-adb12ed9-2 spec.containers{telperion} Normal Pulled Successfully pulled image "omg/telperion"
47s 47s 1 kubelet, k8s-agent-adb12ed9-2 spec.containers{telperion} Normal Created Created container with id c2dd021b3d619d1d4e2afafd7a71070e1e43132563fdc370e75008c0b876d567
1m 9s 8 kubelet, k8s-agent-adb12ed9-2 spec.containers{telperion} Warning BackOff Back-off restarting failed container
46s 9s 4 kubelet, k8s-agent-adb12ed9-2 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "telperion" with CrashLoopBackOff: "Back-off 40s restarting failed container=telperion pod=telperion_default(f4e36a12-566a-11e7-99a6-000d3aa32f49)"
Edit: Errors reported by kubelet on the master:
journalctl -u kubelet
.
Jun 21 10:28:49 k8s-master-ADB12ED9-0 docker[1622]: E0621 10:28:49.798140 1809 fsHandler.go:121] failed to collect filesystem stats - rootDiskErr: du command failed on /var/lib/docker/overlay/5cfff16d670f2df6520360595d7858fb5d16607b6999a88e5dcbc09e1e7ab9ce with output
Jun 21 10:28:49 k8s-master-ADB12ED9-0 docker[1622]: , stderr: du: cannot access '/var/lib/docker/overlay/5cfff16d670f2df6520360595d7858fb5d16607b6999a88e5dcbc09e1e7ab9ce/merged/proc/13122/task/13122/fd/4': No such file or directory
Jun 21 10:28:49 k8s-master-ADB12ED9-0 docker[1622]: du: cannot access '/var/lib/docker/overlay/5cfff16d670f2df6520360595d7858fb5d16607b6999a88e5dcbc09e1e7ab9ce/merged/proc/13122/task/13122/fdinfo/4': No such file or directory
Jun 21 10:28:49 k8s-master-ADB12ED9-0 docker[1622]: du: cannot access '/var/lib/docker/overlay/5cfff16d670f2df6520360595d7858fb5d16607b6999a88e5dcbc09e1e7ab9ce/merged/proc/13122/fd/3': No such file or directory
Jun 21 10:28:49 k8s-master-ADB12ED9-0 docker[1622]: du: cannot access '/var/lib/docker/overlay/5cfff16d670f2df6520360595d7858fb5d16607b6999a88e5dcbc09e1e7ab9ce/merged/proc/13122/fdinfo/3': No such file or directory
Jun 21 10:28:49 k8s-master-ADB12ED9-0 docker[1622]: - exit status 1, rootInodeErr: <nil>, extraDiskErr: <nil>
Edit 2: More logs:
kubectl logs $SERVICE_NAME -p
done deploying telperion
I wrote an open source tool for this (robusta.dev). Here is example output in Slack (it can also send to other destinations):
More or less, it does the following:
I've simplified the code a little, but here is the Python code that implements the above:
@action
def restart_loop_reporter(event: PodEvent, config: RestartLoopParams):
    """
    When a pod is in a restart loop, debug the issue, fetch the logs,
    and send useful information on the restart
    """
    pod = event.get_pod()
    crashed_container_statuses = get_crashing_containers(pod.status, config)

    # this callback runs on every pod update, so filter out ones without crashing pods
    if len(crashed_container_statuses) == 0:
        return  # no matched containers

    pod_name = pod.metadata.name

    # don't run this too frequently for the same crashing pod
    if not RateLimiter.mark_and_test(
        "restart_loop_reporter", pod_name + pod.metadata.namespace, config.rate_limit
    ):
        return

    # this is the data we send to Slack / other destinations
    blocks: List[BaseBlock] = []
    for container_status in crashed_container_statuses:
        blocks.append(
            MarkdownBlock(
                f"*{container_status.name}* restart count: {container_status.restartCount}"
            )
        )
        ...
        container_log = pod.get_logs(container_status.name, previous=True)
        if container_log:
            blocks.append(FileBlock(f"{pod_name}.txt", container_log))
        else:
            blocks.append(
                MarkdownBlock(
                    f"Container logs unavailable for container: {container_status.name}"
                )
            )

    event.add_enrichment(blocks)
You can see the actual code, without my simplifications, on GitHub.
CrashLoopBackOff tells you that the pod crashes right after it starts. Kubernetes tries to start the pod again, it crashes again, and this goes into a loop.
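Note that this also happens when the container exits cleanly: your describe output shows "Reason: Completed, Exit Code: 0". As a hypothetical minimal example (not your exact manifest), a pod whose command finishes immediately will be restarted forever under the default restartPolicy and show up as CrashLoopBackOff:

```yaml
# Hypothetical minimal pod: the container runs `echo` and exits at once.
# With restartPolicy: Always (the default for bare pods), kubelet keeps
# restarting it, which is reported as CrashLoopBackOff even on exit code 0.
apiVersion: v1
kind: Pod
metadata:
  name: exits-immediately
spec:
  restartPolicy: Always
  containers:
  - name: main
    image: busybox
    command: ["echo", "done"]
```

A long-running process (or restartPolicy: OnFailure / Never for one-shot workloads) avoids the loop.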
You can check the pod's logs for any errors with kubectl logs <pod-name> --previous.
--previous shows you the logs of the container's previous instantiation.
Next, check the "State Reason", "Last State Reason", and "Events" sections in the output of kubectl describe pod <pod-name> -n <namespace>.
Sometimes the problem is simply that the application is given too little memory or CPU.
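If resource starvation is the suspect, setting explicit requests and limits makes the cause easier to spot in kubectl describe output. A sketch, with placeholder values you would tune for your app:

```yaml
# Hypothetical fragment: explicit CPU/memory requests and limits.
# If the container is killed for exceeding its memory limit,
# "Last State Reason" in `kubectl describe pod` will show OOMKilled.
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: omg/telperion
    resources:
      requests:
        memory: "256Mi"
        cpu: "250m"
      limits:
        memory: "512Mi"
        cpu: "500m"
```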
You can access the pod's logs via:
kubectl logs [podname] -p
The -p option reads the logs of the previous (crashed) instance.
If the crash comes from the application itself, you should find useful logs there.