
Kubernetes dashboard error: "Metric client health check failed: the server could not find the requested resource (get services heapster)."

I am new to the Kubernetes world, so forgive me if I make a mistake. I am trying to deploy the Kubernetes dashboard.

My cluster consists of three masters and 3 workers; the workers are drained and unschedulable so that the dashboard gets installed on the master nodes:

[root@pp-tmp-test20 ~]# kubectl get nodes

NAME            STATUS                     ROLES    AGE    VERSION
pp-tmp-test20   Ready                      master   2d2h   v1.15.2
pp-tmp-test21   Ready                      master   37h    v1.15.2
pp-tmp-test22   Ready                      master   37h    v1.15.2
pp-tmp-test23   Ready,SchedulingDisabled   worker   36h    v1.15.2
pp-tmp-test24   Ready,SchedulingDisabled   worker   36h    v1.15.2
pp-tmp-test25   Ready,SchedulingDisabled   worker   36h    v1.15.2
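A node layout like this is usually produced with `kubectl drain`; and since the pod description further down shows a `dashboard=true` node selector, the masters presumably also carry that label. A hedged sketch of the commands that would lead to this state (an assumption about how the cluster was prepared, using the node names from the output above):

```shell
# Evict workloads from the workers and mark them unschedulable
# (this produces the Ready,SchedulingDisabled status shown above).
for node in pp-tmp-test23 pp-tmp-test24 pp-tmp-test25; do
  kubectl drain "$node" --ignore-daemonsets
done

# Label the masters so that a pod with nodeSelector "dashboard=true"
# can only be scheduled on them.
for node in pp-tmp-test20 pp-tmp-test21 pp-tmp-test22; do
  kubectl label node "$node" dashboard=true
done
```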

I am trying to deploy the kubernetes dashboard from this URL:

[root@pp-tmp-test20 ~]# kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
  • After that, a pod kubernetes-dashboard-5698d5bc9-ql6q8 is scheduled on my master node pp-tmp-test20/172.31.68.220

  • the pod

kube-system   kubernetes-dashboard-5698d5bc9-ql6q8  /1     Running   1          7m11s   10.244.0.7      pp-tmp-test20   <none>        <none>
  • the pod logs
[root@pp-tmp-test20 ~]# kubectl logs kubernetes-dashboard-5698d5bc9-ql6q8 -n kube-system

2019/08/14 10:14:57 Starting overwatch
2019/08/14 10:14:57 Using in-cluster config to connect to apiserver
2019/08/14 10:14:57 Using service account token for csrf signing
2019/08/14 10:14:58 Successful initial request to the apiserver, version: v1.15.2
2019/08/14 10:14:58 Generating JWE encryption key
2019/08/14 10:14:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2019/08/14 10:14:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2019/08/14 10:14:59 Initializing JWE encryption key from synchronized object
2019/08/14 10:14:59 Creating in-cluster Heapster client
2019/08/14 10:14:59 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 10:14:59 Auto-generating certificates
2019/08/14 10:14:59 Successfully created certificates
2019/08/14 10:14:59 Serving securely on HTTPS port: 8443
2019/08/14 10:15:29 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 10:15:59 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
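The repeated line above means the dashboard is probing for a Service named `heapster` in the `kube-system` namespace; this check is non-fatal (the UI still serves on 8443), it only disables the CPU/memory graphs. A quick way to confirm the Service is indeed missing (hedged; exact output depends on the cluster):

```shell
# Dashboard v1.10.1 looks up its metrics backend as the Service
# "heapster" in kube-system. If Heapster was never deployed,
# this returns NotFound, matching the log message above.
kubectl get service heapster -n kube-system
```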
  • the pod description
[root@pp-tmp-test20 ~]# kubectl describe pod kubernetes-dashboard-5698d5bc9-ql6q8 -n kube-system

Name:           kubernetes-dashboard-5698d5bc9-ql6q8
Namespace:      kube-system
Priority:       0
Node:           pp-tmp-test20/172.31.68.220
Start Time:     Wed, 14 Aug 2019 16:58:39 +0200
Labels:         k8s-app=kubernetes-dashboard
                pod-template-hash=5698d5bc9
Annotations:    <none>
Status:         Running
IP:             10.244.0.7
Controlled By:  ReplicaSet/kubernetes-dashboard-5698d5bc9
Containers:
  kubernetes-dashboard:
    Container ID:  docker://40edddf7a9102d15e3b22f4bc6f08b3a07a19e4841f09360daefbce0486baf0e
    Image:         k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
    Image ID:      docker-pullable://k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --auto-generate-certificates
    State:          Running
      Started:      Wed, 14 Aug 2019 16:58:43 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 14 Aug 2019 16:58:41 +0200
      Finished:     Wed, 14 Aug 2019 16:58:42 +0200
    Ready:          True
    Restart Count:  1
    Liveness:       http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-ptw78 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kubernetes-dashboard-token-ptw78:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-ptw78
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  dashboard=true
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age                    From                                                     Message
  ----    ------     ----                   ----                                                     -------
  Normal  Scheduled  2m41s                  default-scheduler                                        Successfully assigned kube-system/kubernetes-dashboard-5698d5bc9-ql6q8 to pp-tmp-test20.tec.prj.in.phm.education.gouv.fr
  Normal  Pulled     2m38s (x2 over 2m40s)  kubelet, pp-tmp-test20  Container image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" already present on machine
  Normal  Created    2m37s (x2 over 2m39s)  kubelet, pp-tmp-test20  Created container kubernetes-dashboard
  Normal  Started    2m37s (x2 over 2m39s)  kubelet, pp-tmp-test20  Started container kubernetes-dashboard
  • the dashboard service description
[root@pp-tmp-test20 ~]# kubectl describe svc/kubernetes-dashboard -n kube-system

Name:              kubernetes-dashboard
Namespace:         kube-system
Labels:            k8s-app=kubernetes-dashboard
Annotations:       <none>
Selector:          k8s-app=kubernetes-dashboard
Type:              ClusterIP
IP:                10.110.236.88
Port:              <unset>  443/TCP
TargetPort:        8443/TCP
Endpoints:         10.244.0.7:8443
Session Affinity:  None
Events:            <none>
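The Service endpoint (10.244.0.7:8443) matches the pod IP, so the Service itself looks healthy. As an alternative to `kubectl proxy`, the dashboard can also be reached through a port-forward, which tunnels via the kubelet on the node instead of requiring the API server to route to the pod IP; a hedged sketch:

```shell
# Forward local port 8443 to the dashboard Service (service port 443).
# The UI is then reachable at https://localhost:8443/.
# Because the stream goes API server -> kubelet -> pod, it can work
# even when direct routes to the pod network are broken.
kubectl port-forward -n kube-system service/kubernetes-dashboard 8443:443
```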
  • docker ps on my master running the pod
[root@pp-tmp-test20 ~]# docker ps

CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
40edddf7a910        f9aed6605b81           "/dashboard --inse..."   7 minutes ago       Up 7 minutes                            k8s_kubernetes-dashboard_kubernetes-dashboard-5698d5bc9-ql6q8_kube-system_f785d4bd-2e67-4daa-9f6c-19f98582fccb_1
e7f3820f1cf2        k8s.gcr.io/pause:3.1   "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kubernetes-dashboard-5698d5bc9-ql6q8_kube-system_f785d4bd-2e67-4daa-9f6c-19f98582fccb_0

[root@pp-tmp-test20 ~]# docker logs 40edddf7a910
2019/08/14 14:58:43 Starting overwatch
2019/08/14 14:58:43 Using in-cluster config to connect to apiserver
2019/08/14 14:58:43 Using service account token for csrf signing
2019/08/14 14:58:44 Successful initial request to the apiserver, version: v1.15.2
2019/08/14 14:58:44 Generating JWE encryption key
2019/08/14 14:58:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2019/08/14 14:58:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2019/08/14 14:58:44 Initializing JWE encryption key from synchronized object
2019/08/14 14:58:44 Creating in-cluster Heapster client
2019/08/14 14:58:44 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 14:58:44 Auto-generating certificates
2019/08/14 14:58:44 Successfully created certificates
2019/08/14 14:58:44 Serving securely on HTTPS port: 8443
2019/08/14 14:59:14 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 14:59:44 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 15:00:14 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.

1/ On my master I start the proxy

[root@pp-tmp-test20 ~]# kubectl proxy
Starting to serve on 127.0.0.1:8001

2/ I launch Firefox with X11 forwarding from my master and enter this URL

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

this is the error message I get in the browser

Error: 'dial tcp 10.244.0.7:8443: connect: no route to host'
Trying to reach: 'https://10.244.0.7:8443/'
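This `dial tcp ... no route to host` is raised by the API server itself while proxying to the pod IP, so it points at the pod network (e.g. the overlay or iptables rules on the node) rather than at the dashboard. A hedged way to narrow this down from the master that runs the pod:

```shell
# From the node hosting the pod, try the pod IP directly.
# If this also fails with "no route to host", the node's overlay
# network or firewall is the culprit, not the dashboard itself.
# (-k because the dashboard serves a self-signed certificate.)
curl -k https://10.244.0.7:8443/
```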

At the same time I got these errors in the console where I started the proxy

I0814 16:10:05.836114   20240 log.go:172] http: proxy error: context canceled
I0814 16:10:06.198701   20240 log.go:172] http: proxy error: context canceled
I0814 16:13:21.708190   20240 log.go:172] http: proxy error: unexpected EOF
I0814 16:13:21.708229   20240 log.go:172] http: proxy error: unexpected EOF
I0814 16:13:21.708270   20240 log.go:172] http: proxy error: unexpected EOF
I0814 16:13:39.335483   20240 log.go:172] http: proxy error: context canceled
I0814 16:13:39.716360   20240 log.go:172] http: proxy error: context canceled

but after refreshing the browser n times (at random), I am able to reach the login screen and enter the token (created beforehand)

Dashboard_login

But... the same error occurs again

Dashboard_login_error

After pressing the "Sign in" button n times, I can get to the dashboard... for a few seconds.

dashboard_interface_1

dashboard_interface_2

after that, the dashboard starts producing the same errors while I browse the interface:

dashboard_interface_error_1

dashboard_interface_error_2

I looked at the pod logs; we can see some traffic:

[root@pp-tmp-test20 ~]# kubectl logs kubernetes-dashboard-5698d5bc9-ql6q8  -n kube-system
2019/08/14 14:16:56 Getting list of all services in the cluster
2019/08/14 14:16:56 [2019-08-14T14:16:56Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:01 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 GET /api/v1/login/status request from 10.244.0.1:56140: {}
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 GET /api/v1/csrftoken/token request from 10.244.0.1:56140: {}
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 POST /api/v1/token/refresh request from 10.244.0.1:56140: { contents hidden }
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 GET /api/v1/settings/global/cani request from 10.244.0.1:56140: {}
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 GET /api/v1/settings/global request from 10.244.0.1:56140: {}
2019/08/14 14:17:22 Cannot find settings config map: configmaps "kubernetes-dashboard-settings" not found

and the pod logs again

[root@pp-tmp-test20 ~]# kubectl logs kubernetes-dashboard-5698d5bc9-ql6q8  -n kube-system
Error from server: Get https://172.31.68.220:10250/containerLogs/kube-system/kubernetes-dashboard-5698d5bc9-ql6q8/kubernetes-dashboard: Forbidden
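This `Forbidden` comes from the kubelet on 172.31.68.220 rejecting the API server's `containerLogs` request, which usually means the API server's identity is not authorized against the kubelet API. One commonly used fix is to bind that identity to the built-in `system:kubelet-api-admin` ClusterRole; the user name below is an assumption that must match the CN in the apiserver's kubelet client certificate on this cluster:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: apiserver-kubelet-api-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kubelet-api-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes   # assumption: CN of the apiserver's kubelet client certificate
```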

What am I doing wrong? Could you please point me to a way to investigate?

EDIT:

the service account I used

# cat dashboard-adminuser.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

# cat dashboard-adminuser-ClusterRoleBinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
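Once both files are applied, the login token used above can be read from the account's auto-generated secret; a hedged sketch (the secret name is generated, hence the lookup):

```shell
kubectl apply -f dashboard-adminuser.yaml
kubectl apply -f dashboard-adminuser-ClusterRoleBinding.yaml

# The token secret name is generated (admin-user-token-xxxxx), so look it up:
kubectl -n kube-system describe secret \
  "$(kubectl -n kube-system get secret | awk '/admin-user-token/ {print $1}')"
```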
Vincent

It seems that the kubernetes-dashboard serviceaccount does not have access to all kubernetes resources, because it was bound to the kubernetes-dashboard-minimal role. If you bind the service account to the cluster-admin role, you will not get such issues. The YAML file below can be used to achieve this.

    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: kubernetes-dashboard
      labels:
        k8s-app: kubernetes-dashboard
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: kubernetes-dashboard
      namespace: kube-system
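A side note on this answer: `rbac.authorization.k8s.io/v1beta1` already has a stable `v1` equivalent in Kubernetes 1.15 (and `v1beta1` was later removed in 1.22), so the same binding can also be written as:

```yaml
apiVersion: rbac.authorization.k8s.io/v1   # v1 is stable since Kubernetes 1.8
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
```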

It seems that Heapster is deprecated in kubernetes in favor of metrics-server: Support metrics API #2986 & Heapster Deprecation Timeline.

I had deployed a dashboard that uses heapster. That version of the dashboard is not compatible with my kubernetes version (1.15). So a possible way to fix the problem: install dashboard v2.0.0-beta

# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta3/aio/deploy/recommended.yaml
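Note that the v2 manifest installs into its own `kubernetes-dashboard` namespace rather than `kube-system`, and ships its own metrics scraper, so no heapster Service is needed. A hedged sketch of the follow-up checks:

```shell
# v2.0.0-beta3 creates its own namespace, so check there:
kubectl get pods -n kubernetes-dashboard

# With kubectl proxy running, the v2 UI is served under the new namespace:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
kubectl proxy
```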
Vincent