One of our pods won't start and keeps restarting; it is stuck in a CrashLoopBackOff state:
NAME                                                        READY   STATUS             RESTARTS   AGE
quasar-api-staging-14c385ccaff2519688add0c2cb0144b2-3r7v4   0/1     CrashLoopBackOff   72         5h
Describing the pod gives this (just the events):
FirstSeen   LastSeen   Count   From   SubObjectPath   Reason   Message
57m 57m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 7515ced7f49c
57m 57m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 7515ced7f49c
52m 52m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 2efe8885ad49
52m 52m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 2efe8885ad49
46m 46m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id a4361ebc3c06
46m 46m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id a4361ebc3c06
41m 41m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 99bc3a8b01ad
41m 41m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 99bc3a8b01ad
36m 36m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 3e873c664cde
36m 36m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 3e873c664cde
31m 31m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 97680dac2e12
31m 31m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 97680dac2e12
26m 26m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 42ef4b0eea73
26m 26m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 42ef4b0eea73
21m 21m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 7dbd65668733
21m 21m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 7dbd65668733
15m 15m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id d372cb279fff
15m 15m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id d372cb279fff
10m 10m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id bc7f5a0fe5d4
10m 10m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id bc7f5a0fe5d4
5m 5m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id b545a71af1d2
5m 5m 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id b545a71af1d2
3h 25s 43 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Pulled Container image "us.gcr.io/skywatch-app/quasar-api-staging:15.0" already present on machine
25s 25s 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Started Started with docker id 3e4087281881
25s 25s 1 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Created Created with docker id 3e4087281881
3h 5s 1143 {kubelet gke-skywatch-cf86c224-node-21bm} spec.containers{quasar-api-staging} Backoff Back-off restarting failed docker container
The pod logs don't show much either:
Pod "quasar-api-staging-14c385ccaff2519688add0c2cb0144b2-3r7v4" in namespace "default": container "quasar-api-staging" is in waiting state.
I was able to run the container locally and it seems to work fine. I'm not sure what else to check or try. Any help or troubleshooting steps would be greatly appreciated!
You could try running kubectl logs <podid> --previous
to see the logs from the previous instance of the container.
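For example, using the pod name from your output (the log message above suggests it is in the default namespace, so no -n flag should be needed):

kubectl logs quasar-api-staging-14c385ccaff2519688add0c2cb0144b2-3r7v4 --previous

Because the container keeps crashing and restarting, a plain kubectl logs only shows the current (waiting) container; the --previous flag returns the output of the last terminated container, which usually contains the error that caused the crash.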