I am new to Kubernetes.
The goal is to get the Kubernetes cluster dashboard working.
The Kubernetes cluster was deployed with Kubespray: github.com/kubernetes-incubator/kubespray
Versions:
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.6", GitCommit:"4bc5e7f9a6c25dc4c03d4d656f2cefd21540e28c", GitTreeState:"clean", BuildDate:"2017-09-15T08:51:21Z", GoVersion:"go1.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3+coreos.0", GitCommit:"42de91f04e456f7625941a6c4aaedaa69708be1b", GitTreeState:"clean", BuildDate:"2017-08-07T19:44:31Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
When I run kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml --validate=false
as described here,
I get:
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": secrets "kubernetes-dashboard-certs" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": serviceaccounts "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": roles.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": rolebindings.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": deployments.extensions "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": services "kubernetes-dashboard" already exists
When I run kubectl get services --namespace kube-system, I get:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns 10.233.0.3 <none> 53/UDP,53/TCP 10d
kubernetes-dashboard 10.233.28.132 <none> 80/TCP 9d
When I try to reach the kubernetes dashboard in the cluster, I get Connection refused.
kubectl logs --namespace=kube-system kubernetes-dashboard-4167803980-1dz53
output:
2017/09/27 10:54:11 Using in-cluster config to connect to apiserver
2017/09/27 10:54:11 Using service account token for csrf signing
2017/09/27 10:54:11 No request provided. Skipping authorization
2017/09/27 10:54:11 Starting overwatch
2017/09/27 10:54:11 Successful initial request to the apiserver, version: v1.7.3+coreos.0
2017/09/27 10:54:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2017/09/27 10:54:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2017/09/27 10:54:11 Initializing secret synchronizer synchronously using secret kubernetes-dashboard-key-holder from namespace kube-system
2017/09/27 10:54:11 Initializing JWE encryption key from synchronized object
2017/09/27 10:54:11 Creating in-cluster Heapster client
2017/09/27 10:54:11 Serving securely on HTTPS port: 8443
2017/09/27 10:54:11 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
Other outputs:
kubectl get pods --namespace=kube-system:
NAME READY STATUS RESTARTS AGE
calico-node-bqckz 1/1 Running 0 12d
calico-node-r9svd 1/1 Running 2 12d
calico-node-w3tps 1/1 Running 0 12d
kube-apiserver-kubetest1 1/1 Running 0 12d
kube-apiserver-kubetest2 1/1 Running 0 12d
kube-controller-manager-kubetest1 1/1 Running 2 12d
kube-controller-manager-kubetest2 1/1 Running 2 12d
kube-dns-3888408129-n0m8d 3/3 Running 0 12d
kube-dns-3888408129-z8xx3 3/3 Running 0 12d
kube-proxy-kubetest1 1/1 Running 0 12d
kube-proxy-kubetest2 1/1 Running 0 12d
kube-proxy-kubetest3 1/1 Running 0 12d
kube-scheduler-kubetest1 1/1 Running 2 12d
kube-scheduler-kubetest2 1/1 Running 2 12d
kubedns-autoscaler-1629318612-sd924 1/1 Running 0 12d
kubernetes-dashboard-4167803980-1dz53 1/1 Running 0 1d
nginx-proxy-kubetest3 1/1 Running 0 12d
kubectl proxy:
Starting to serve on 127.0.0.1:8001
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x2692f20]
goroutine 1 [running]:
k8s.io/kubernetes/pkg/kubectl.(*ProxyServer).ServeOnListener(0x0, 0x3a95a60, 0xc420114110, 0x17, 0xc4208b7c28)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubectl/proxy_server.go:201 +0x70
k8s.io/kubernetes/pkg/kubectl/cmd.RunProxy(0x3aa5ec0, 0xc42074e960, 0x3a7f1e0, 0xc42000c018, 0xc4201d7200, 0x0, 0x0)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/proxy.go:156 +0x774
k8s.io/kubernetes/pkg/kubectl/cmd.NewCmdProxy.func1(0xc4201d7200, 0xc4203586e0, 0x0, 0x2)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/proxy.go:79 +0x4f
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc4201d7200, 0xc420358500, 0x2, 0x2, 0xc4201d7200, 0xc420358500)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:603 +0x234
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc4202e4240, 0x5000107, 0x0, 0xffffffffffffffff)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:689 +0x2fe
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(0xc4202e4240, 0xc42074e960, 0x3a7f1a0)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:648 +0x2b
k8s.io/kubernetes/cmd/kubectl/app.Run(0x0, 0x0)
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubectl/app/kubectl.go:39 +0xd5
main.main()
/private/tmp/kubernetes-cli-20170915-41661-iccjh1/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:26 +0x22
kubectl top nodes:
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)
kubectl get svc --namespace=kube-system:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns 10.233.0.3 <none> 53/UDP,53/TCP 12d
kubernetes-dashboard 10.233.28.132 <none> 80/TCP 11d
curl http://localhost:8001/ui:
curl: (7) Failed to connect to 10.2.3.211 port 8001: Connection refused
How can I get the dashboard working? I appreciate your help.
You may be installing version 1.7 of the dashboard. Try installing the well-tested version 1.6.3:
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.6.3/src/deploy/kubernetes-dashboard.yaml
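Once that is applied, it can be worth confirming the new pod actually comes up before retrying the UI (just a quick check; the pod name suffix will differ in your cluster):
kubectl get pods --namespace=kube-system | grep dashboard
kubectl describe service kubernetes-dashboard --namespace=kube-system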
Update 10/2/17: Can you try this: delete the dashboard and install version 1.6.3.
kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.6.3/src/deploy/kubernetes-dashboard.yaml
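You can also verify which dashboard objects exist in kube-system at any point, since leftover secrets, roles and services are exactly what produced your AlreadyExists errors (a rough check, assuming everything lives in kube-system as in your output):
kubectl get secrets,serviceaccounts,roles,rolebindings,deployments,services --namespace=kube-system | grep dashboard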
I think the kubernetes dashboard is already available by default if you deploy via GCP or Azure. The first error already explains this. To check, you can type the following commands to look for the pods/services in the kube-system namespace.
>kubectl get pods --namespace=kube-system
>kubectl get svc --namespace=kube-system
From the commands above, you should find your kubernetes dashboard available, so you do not need to deploy it again. To access the dashboard, you can type the following command.
>kubectl proxy
This will make the dashboard available at http://localhost:8001/ui on the machine where you type this command.
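Note that kubectl proxy listens on 127.0.0.1 by default, so that URL only works from the machine running the proxy; your curl error, which shows a connection attempt to 10.2.3.211, suggests the request may not be reaching the local proxy at all. If you really need to reach it from another host, a sketch (not recommended outside a trusted network, since it exposes the unauthenticated proxy) would be:
>kubectl proxy --address=0.0.0.0 --accept-hosts='.*' --port=8001
Otherwise, run the curl or browser on the same machine where kubectl proxy is running.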
But to better understand your problem, may I know which version of kubernetes and which environment you are using right now? Also, it would be great if you could show me the output of these two commands.
>kubectl get pods --namespace=kube-system
>kubectl top nodes