I followed this guide to configure a pod with minikube and pull an image from a private repository hosted on hub.docker.com.
When I try to set up a pod that pulls the image, I see "CrashLoopBackOff".
Pod config:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: ha/prod:latest
  imagePullSecrets:
  - name: regsecret
Output of "get pod":
kubectl get pod private-reg
NAME READY STATUS RESTARTS AGE
private-reg 0/1 CrashLoopBackOff 5 4m
As far as I can tell, there is no problem with the images themselves: if I pull them manually and run them, they work.
(You can see Successfully pulled image "ha/prod:latest" in the events below.)
This problem also occurs if I push a generic image such as centos to the repository and try to pull and run it with a pod.
Also, the secret seems to work fine: I can see the pulls being counted on the private repository.
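One quick check worth showing explicitly (a sketch; requires Docker on the host and assumes the image's default entrypoint): run the image directly and inspect the exit code, to confirm whether the main process simply runs to completion and exits.

```shell
# Run the image's default entrypoint in the foreground, then
# print its exit code. An exit code of 0 means the process
# finished cleanly -- which Kubernetes still treats as a crash
# loop, because it expects the container to keep running.
docker run --rm ha/prod:latest
echo $?
```

If this prints 0 almost immediately, the container is not failing to start; it is exiting successfully, which matches the `Reason: Completed` / `Exit Code: 0` seen in the describe output below.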
Here is the output of the command:
[~]$ kubectl describe pods private-reg
Name: private-reg
Namespace: default
Node: minikube/192.168.99.100
Start Time: Thu, 22 Jun 2017 17:13:24 +0300
Labels: <none>
Annotations: <none>
Status: Running
IP: 172.17.0.5
Controllers: <none>
Containers:
private-reg-container:
Container ID: docker://1aad64750d0ba9ba826fe4f12c8814f7db77293078f8047feec686fcd8f90132
Image: ha/prod:latest
Image ID: docker://sha256:7335859e2071af518bcd0e2f373f57c1da643bb37c7e6bbc125d171ff98f71c0
Port:
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 01 Jan 0001 00:00:00 +0000
Finished: Thu, 22 Jun 2017 17:20:04 +0300
Ready: False
Restart Count: 6
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-bhvgz (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-bhvgz:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-bhvgz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
9m 9m 1 default-scheduler Normal Scheduled Successfully assigned private-reg to minikube
8m 8m 1 kubelet, minikube spec.containers{private-reg-container} Normal Created Created container with id 431fecfd1d2ca03d29fd88fd6c663e66afb59dc5e86487409002dd8e9987945c
8m 8m 1 kubelet, minikube spec.containers{private-reg-container} Normal Started Started container with id 431fecfd1d2ca03d29fd88fd6c663e66afb59dc5e86487409002dd8e9987945c
8m 8m 1 kubelet, minikube spec.containers{private-reg-container} Normal Started Started container with id 223e6af99bb950570a27056d7401137ff9f3dc895f4f313a36e73ef6489eb61a
8m 8m 1 kubelet, minikube spec.containers{private-reg-container} Normal Created Created container with id 223e6af99bb950570a27056d7401137ff9f3dc895f4f313a36e73ef6489eb61a
8m 8m 2 kubelet, minikube Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 10s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)"
8m 8m 1 kubelet, minikube spec.containers{private-reg-container} Normal Started Started container with id a98377f9aedc5947fe1dd006caddb11fb48fa2fd0bb06c20667e0c8b83a3ab6a
8m 8m 1 kubelet, minikube spec.containers{private-reg-container} Normal Created Created container with id a98377f9aedc5947fe1dd006caddb11fb48fa2fd0bb06c20667e0c8b83a3ab6a
8m 8m 2 kubelet, minikube Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 20s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)"
8m 8m 1 kubelet, minikube spec.containers{private-reg-container} Normal Started Started container with id 261f430a80ff5a312bdbdee78558091a9ae7bc9fc6a9e0676207922f1a576841
8m 8m 1 kubelet, minikube spec.containers{private-reg-container} Normal Created Created container with id 261f430a80ff5a312bdbdee78558091a9ae7bc9fc6a9e0676207922f1a576841
8m 7m 3 kubelet, minikube Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 40s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)"
7m 7m 1 kubelet, minikube spec.containers{private-reg-container} Normal Created Created container with id 7251ab76853d4178eff59c10bb41e52b2b1939fbee26e546cd564e2f6b4a1478
7m 7m 1 kubelet, minikube spec.containers{private-reg-container} Normal Started Started container with id 7251ab76853d4178eff59c10bb41e52b2b1939fbee26e546cd564e2f6b4a1478
7m 5m 7 kubelet, minikube Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)"
5m 5m 1 kubelet, minikube spec.containers{private-reg-container} Normal Created Created container with id 347868d03fc9730417cf234e4c96195bb9b45a6cc9d9d97973855801d52e2a02
5m 5m 1 kubelet, minikube spec.containers{private-reg-container} Normal Started Started container with id 347868d03fc9730417cf234e4c96195bb9b45a6cc9d9d97973855801d52e2a02
5m 3m 12 kubelet, minikube Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)"
9m 2m 7 kubelet, minikube spec.containers{private-reg-container} Normal Pulling pulling image "ha/prod:latest"
2m 2m 1 kubelet, minikube spec.containers{private-reg-container} Normal Started Started container with id 1aad64750d0ba9ba826fe4f12c8814f7db77293078f8047feec686fcd8f90132
8m 2m 7 kubelet, minikube spec.containers{private-reg-container} Normal Pulled Successfully pulled image "ha/prod:latest"
2m 2m 1 kubelet, minikube spec.containers{private-reg-container} Normal Created Created container with id 1aad64750d0ba9ba826fe4f12c8814f7db77293078f8047feec686fcd8f90132
8m <invalid> 40 kubelet, minikube spec.containers{private-reg-container} Warning BackOff Back-off restarting failed container
2m <invalid> 14 kubelet, minikube Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)"
Here is the output of the command:
I0622 17:35:01.043739 15981 cached_discovery.go:71] returning cached discovery info from /home/demo/.kube/cache/discovery/192.168.99.100_8443/apps/v1beta1/serverresources.json
I0622 17:35:01.043951 15981 cached_discovery.go:71] returning cached discovery info from /home/demo/.kube/cache/discovery/192.168.99.100_8443/v1/serverresources.json
I0622 17:35:01.045061 15981 cached_discovery.go:118] returning cached discovery info from /home/demo/.kube/cache/discovery/192.168.99.100_8443/servergroups.json
I0622 17:35:01.045175 15981 round_trippers.go:395] GET https://192.168.99.100:8443/api/v1/namespaces/default/pods/private-reg
I0622 17:35:01.045182 15981 round_trippers.go:402] Request Headers:
I0622 17:35:01.045187 15981 round_trippers.go:405] Accept: application/json, */*
I0622 17:35:01.045191 15981 round_trippers.go:405] User-Agent: kubectl/v1.6.6 (linux/amd64) kubernetes/7fa1c17
I0622 17:35:01.072863 15981 round_trippers.go:420] Response Status: 200 OK in 27 milliseconds
I0622 17:35:01.072900 15981 round_trippers.go:423] Response Headers:
I0622 17:35:01.072921 15981 round_trippers.go:426] Content-Type: application/json
I0622 17:35:01.072930 15981 round_trippers.go:426] Content-Length: 2216
I0622 17:35:01.072936 15981 round_trippers.go:426] Date: Thu, 22 Jun 2017 14:35:31 GMT
I0622 17:35:01.072994 15981 request.go:991] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"private-reg","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/private-reg","uid":"f4340638-5754-11e7-978a-08002773375c","resourceVersion":"3070","creationTimestamp":"2017-06-22T14:13:24Z"},"spec":{"volumes":[{"name":"default-token-bhvgz","secret":{"secretName":"default-token-bhvgz","defaultMode":420}}],"containers":[{"name":"private-reg-container","image":"ha/prod:latest","resources":{},"volumeMounts":[{"name":"default-token-bhvgz","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"minikube","securityContext":{},"imagePullSecrets":[{"name":"regsecret"}],"schedulerName":"default-scheduler"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2017-06-22T14:13:24Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2017-06-22T14:13:24Z","reason":"ContainersNotReady","message":"containers with unready status: [private-reg-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2017-06-22T14:13:24Z"}],"hostIP":"192.168.99.100","podIP":"172.17.0.5","startTime":"2017-06-22T14:13:24Z","containerStatuses":[{"name":"private-reg-container","state":{"waiting":{"reason":"CrashLoopBackOff","message":"Back-off 5m0s restarting failed container=private-reg-container 
pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)"}},"lastState":{"terminated":{"exitCode":0,"reason":"Completed","startedAt":null,"finishedAt":"2017-06-22T14:30:36Z","containerID":"docker://a4cb436a79b0b21bb385e544d424b2444a80ca66160ef21af30ab69ed2e23b32"}},"ready":false,"restartCount":8,"image":"ha/prod:latest","imageID":"docker://sha256:7335859e2071af518bcd0e2f373f57c1da643bb37c7e6bbc125d171ff98f71c0","containerID":"docker://a4cb436a79b0b21bb385e544d424b2444a80ca66160ef21af30ab69ed2e23b32"}],"qosClass":"BestEffort"}}
I0622 17:35:01.074108 15981 round_trippers.go:395] GET https://192.168.99.100:8443/api/v1/namespaces/default/pods/private-reg/log
I0622 17:35:01.074126 15981 round_trippers.go:402] Request Headers:
I0622 17:35:01.074132 15981 round_trippers.go:405] Accept: application/json, */*
I0622 17:35:01.074137 15981 round_trippers.go:405] User-Agent: kubectl/v1.6.6 (linux/amd64) kubernetes/7fa1c17
I0622 17:35:01.079257 15981 round_trippers.go:420] Response Status: 200 OK in 5 milliseconds
I0622 17:35:01.079289 15981 round_trippers.go:423] Response Headers:
I0622 17:35:01.079299 15981 round_trippers.go:426] Content-Type: text/plain
I0622 17:35:01.079307 15981 round_trippers.go:426] Content-Length: 0
I0622 17:35:01.079315 15981 round_trippers.go:426] Date: Thu, 22 Jun 2017 14:35:31 GMT
How can I debug this problem?
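For reference, these are the commands that usually narrow down a CrashLoopBackOff (a sketch; pod names are the ones from this question, and all of these require access to the cluster):

```shell
# Logs from the previous (crashed) container instance,
# not the one currently being restarted
kubectl logs private-reg --previous

# Events and state transitions for the pod
kubectl describe pod private-reg

# Only the last termination state: exit code, reason, timestamps
kubectl get pod private-reg \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'
```

In this case the last termination state shows `exitCode: 0` with `reason: Completed`, which points at the container's main process exiting rather than failing.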
Update
The output of:
kubectl --v=8 logs ps-agent-2028336249-3pk43 --namespace=default -p
I0625 11:30:01.569903 13420 round_trippers.go:395] GET https://192.168.99.100:8443/api/v1/namespaces/default/pods/ps-agent-2028336249-3pk43
I0625 11:30:01.569920 13420 round_trippers.go:402] Request Headers:
I0625 11:30:01.569927 13420 round_trippers.go:405] User-Agent: kubectl/v1.6.6 (linux/amd64) kubernetes/7fa1c17
I0625 11:30:01.569934 13420 round_trippers.go:405] Accept: application/json, */*
I0625 11:30:01.599026 13420 round_trippers.go:420] Response Status: 200 OK in 29 milliseconds
I0625 11:30:01.599048 13420 round_trippers.go:423] Response Headers:
I0625 11:30:01.599056 13420 round_trippers.go:426] Date: Sun, 25 Jun 2017 08:30:01 GMT
I0625 11:30:01.599062 13420 round_trippers.go:426] Content-Type: application/json
I0625 11:30:01.599069 13420 round_trippers.go:426] Content-Length: 2794
I0625 11:30:01.599264 13420 request.go:991] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"ps-agent-2028336249-3pk43","generateName":"ps-agent-2028336249-","namespace":"default","uid":"87c69072-597e-11e7-83cd-08002773375c", ...},"spec":{"volumes":[{"name":"default-token-bhvgz","secret":{"secretName":"default-token-bhvgz","defaultMode":420}}],"containers":[{"name":"ps-agent","image":"ha/prod:ps-agent-latest","imagePullPolicy":"IfNotPresent", ...}],"restartPolicy":"Always","nodeName":"minikube", ...},"status":{"phase":"Running","conditions":[{"type":"Ready","status":"False","reason":"ContainersNotReady","message":"containers with unready status: [ps-agent]"}, ...],"hostIP":"192.168.99.100","podIP":"172.17.0.5","containerStatuses":[{"name":"ps-agent","state":{"waiting":{"reason":"CrashLoopBackOff","message":"Back-off 5m0s restarting failed container=ps-agent pod=ps-agent-2028336249-3pk43_default(87c69072-597e-11e7-83cd-08002773375c)"}},"lastState":{"terminated":{"exitCode":0,"reason":"Completed","startedAt":null,"finishedAt":"2017-06-25T08:27:17Z", ...}},"ready":false,"image":"ha/prod:ps-agent-latest", ...}],"qosClass":"BestEffort"}}
I0625 11:30:01.600727 13420 round_trippers.go:395] GET https://192.168.99.100:8443/api/v1/namespaces/default/pods/ps-agent-2028336249-3pk43/log?previous=true
I0625 11:30:01.600747 13420 round_trippers.go:402] Request Headers:
I0625 11:30:01.600757 13420 round_trippers.go:405] Accept: application/json, */*
I0625 11:30:01.600766 13420 round_trippers.go:405] User-Agent: kubectl/v1.6.6 (linux/amd64) kubernetes/7fa1c17
I0625 11:30:01.632473 13420 round_trippers.go:420] Response Status: 200 OK in 31 milliseconds
I0625 11:30:01.632545 13420 round_trippers.go:423] Response Headers:
I0625 11:30:01.632569 13420 round_trippers.go:426] Date: Sun, 25 Jun 2017 08:30:01 GMT
I0625 11:30:01.632592 13420 round_trippers.go:426] Content-Type: text/plain
I0625 11:30:01.632615 13420 round_trippers.go:426] Content-Length: 0
The problem is that the Docker container exits as soon as its "start" process finishes. I added a command that runs forever and it worked. This issue is mentioned here.
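A minimal sketch of that fix, assuming the image's own start script (called `my-start-script` here as a placeholder) cannot itself be made long-running: override the container's command so the main process blocks instead of exiting.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: ha/prod:latest
    # Keep PID 1 alive after the start script finishes, so the
    # container does not exit with code 0 and get restarted.
    # "my-start-script" is a placeholder for the image's own entrypoint.
    command: ["/bin/sh", "-c", "my-start-script && sleep infinity"]
  imagePullSecrets:
  - name: regsecret
```

The cleaner long-term fix is to make the image's entrypoint run its service in the foreground, but the `sleep infinity` override is a quick way to confirm the diagnosis.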
I ran into the same "CrashLoopBackOff" problem. When I debugged it by getting the pods and the pod logs, I discovered that my command arguments were wrong.
I ran into the same error.
NAME         READY   STATUS             RESTARTS   AGE
pod/webapp   0/1     CrashLoopBackOff   5          47h
My problem was that I was trying to run two different pods with the same metadata name.
kind: Pod
metadata:
  name: webapp
  labels: ...
To find the names of all your pods, run: kubectl get pods
NAME     READY   STATUS    RESTARTS   AGE
webapp   1/1     Running   15         47h
I then changed the name of the conflicting pod and everything worked fine.
NAME                READY   STATUS    RESTARTS   AGE
webapp              1/1     Running   17         2d
webapp-release-0-5  1/1     Running   0          13m