Couldn't create a master node in Kubernetes

Discussion in 'Masters Program - Customers only' started by Pushparaj Naik, Feb 11, 2020.

    Pushparaj Naik
    Hi,
    My Kubernetes (K8s) setup got broken somehow, and I haven't been able to bring it back up since. The logs with the error details are attached below. Could you kindly point me to where things are going wrong?
    ============================================

    systemctl status kubelet
    ● kubelet.service - kubelet: The Kubernetes Node Agent
    Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
    └─10-kubeadm.conf
    Active: active (running) since Tue 2020-02-11 03:56:53 UTC; 17min ago
    Docs: https://kubernetes.io/docs/home/
    Main PID: 1395 (kubelet)
    Tasks: 15
    Memory: 101.7M
    CPU: 10.385s
    CGroup: /system.slice/kubelet.service
    └─1395 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf

    Feb 11 04:14:42 kmaster kubelet[1395]: E0211 04:14:42.258683 1395 kubelet.go:2248] node "kmaster" not found
    Feb 11 04:14:42 kmaster kubelet[1395]: E0211 04:14:42.358831 1395 kubelet.go:2248] node "kmaster" not found
    Feb 11 04:14:42 kmaster kubelet[1395]: E0211 04:14:42.365392 1395 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Faile
    Feb 11 04:14:42 kmaster kubelet[1395]: E0211 04:14:42.459029 1395 kubelet.go:2248] node "kmaster" not found
    Feb 11 04:14:42 kmaster kubelet[1395]: E0211 04:14:42.559224 1395 kubelet.go:2248] node "kmaster" not found
    Feb 11 04:14:42 kmaster kubelet[1395]: E0211 04:14:42.565707 1395 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Faile
    Feb 11 04:14:42 kmaster kubelet[1395]: E0211 04:14:42.659415 1395 kubelet.go:2248] node "kmaster" not found
    Feb 11 04:14:42 kmaster kubelet[1395]: E0211 04:14:42.759615 1395 kubelet.go:2248] node "kmaster" not found
    Feb 11 04:14:42 kmaster kubelet[1395]: E0211 04:14:42.766065 1395 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed t
    Feb 11 04:14:42 kmaster kubelet[1395]: E0211 04:14:42.860283 1395 kubelet.go:2248] node "kmaster" not found

    -----------------

    systemctl status docker
    ● docker.service - Docker Application Container Engine
    Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
    Active: active (running) since Tue 2020-02-11 03:56:59 UTC; 17min ago
    Docs: https://docs.docker.com
    Main PID: 1538 (dockerd)
    Tasks: 17
    Memory: 111.1M
    CPU: 2.292s
    CGroup: /system.slice/docker.service
    ├─1538 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
    └─2173 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.17.0.2 -container-port 80

    Feb 11 03:56:59 kmaster dockerd[1538]: time="2020-02-11T03:56:59.797072574Z" level=warning msg="failed to retrieve runc version: unknown o
    Feb 11 03:56:59 kmaster dockerd[1538]: time="2020-02-11T03:56:59.860641301Z" level=info msg="Docker daemon" commit=2d0083d graphdriver(s)=
    Feb 11 03:56:59 kmaster dockerd[1538]: time="2020-02-11T03:56:59.867100109Z" level=info msg="Daemon has completed initialization"
    Feb 11 03:56:59 kmaster dockerd[1538]: time="2020-02-11T03:56:59.890683590Z" level=info msg="API listen on /var/run/docker.sock"
    Feb 11 03:56:59 kmaster systemd[1]: Started Docker Application Container Engine.
    Feb 11 03:56:59 kmaster dockerd[1538]: time="2020-02-11T03:56:59.921734391Z" level=warning msg="failed to retrieve runc version: unknown o
    Feb 11 03:56:59 kmaster dockerd[1538]: time="2020-02-11T03:56:59.939354318Z" level=warning msg="failed to retrieve runc version: unknown o
    Feb 11 03:57:00 kmaster dockerd[1538]: time="2020-02-11T03:57:00.174044716Z" level=warning msg="failed to retrieve runc version: unknown o
    Feb 11 03:57:00 kmaster dockerd[1538]: time="2020-02-11T03:57:00.189495732Z" level=warning msg="failed to retrieve runc version: unknown o
    Feb 11 03:57:00 kmaster dockerd[1538]: time="2020-02-11T03:57:00.353421678Z" level=warning msg="failed to retrieve runc version: unknown o
    root@kmaster:~#
    --------------------------------------

    kubeadm reset
    [reset] Reading configuration from the cluster...
    [reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    W0211 04:18:04.407856 4166 reset.go:98] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get https://172.31.16.91:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config: dial tcp 172.31.16.91:6443: connect: connection refused
    [reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
    [reset] Are you sure you want to proceed? [y/N]: y
    [preflight] Running pre-flight checks
    W0211 04:18:05.740803 4166 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
    [reset] Stopping the kubelet service
    [reset] Unmounting mounted directories in "/var/lib/kubelet"
    [reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
    [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
    [reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]

    The reset process does not reset or clean up iptables rules or IPVS tables.
    If you wish to reset iptables, you must do so manually.
    For example:
    iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

    If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
    to reset your system's IPVS tables.

    The reset process does not clean your kubeconfig files and you must remove them manually.
    Please, check the contents of the $HOME/.kube/config file.

    rm $HOME/.kube/config
    rm: cannot remove '/root/.kube/config': No such file or directory

    rm /var/lib/etcd/ -rvf
    removed directory '/var/lib/etcd/'

    kubeadm init --apiserver-advertise-address $(hostname -i)
    I0211 04:18:33.062730 4190 version.go:248] remote version is much newer: v1.17.2; falling back to: stable-1.15
    [init] Using Kubernetes version: v1.15.9
    [preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Activating the kubelet service
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [kmaster localhost] and IPs [172.31.16.91 127.0.0.1 ::1]
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [kmaster localhost] and IPs [172.31.16.91 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [kmaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.16.91]
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [kubelet-check] Initial timeout of 40s passed.
    --------------------
    Unfortunately, an error has occurred:
    timed out waiting for the condition

    This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
    Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'
    error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
    root@kmaster:~#
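
    The `docker ps -a | grep kube | grep -v pause` filter suggested in the error text keeps the Kubernetes application containers while dropping their `pause` sandbox containers. A quick self-contained illustration of what that filter does, on made-up `docker ps` lines (the IDs and container names here are invented for the example):

    ```shell
    # Two fake 'docker ps -a' rows: a kube-apiserver container and its
    # pause sandbox. Both match 'kube', but only the sandbox mentions
    # 'pause', so the second grep drops it.
    printf '%s\n' \
      'abc123  k8s.gcr.io/kube-apiserver   k8s_kube-apiserver_kube-apiserver-kmaster_kube-system' \
      'def456  k8s.gcr.io/pause:3.1        k8s_POD_kube-apiserver-kmaster_kube-system' \
      | grep kube | grep -v pause
    # -> prints only the kube-apiserver line
    ```

    On a real host, the surviving lines are the control-plane containers whose logs (`docker logs CONTAINERID`) usually name the actual failure.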
    ---------------------------
    Logs from the node/VM:

    Feb 10 16:32:22 kmaster kubelet[3733]: E0210 16:32:22.607859 3733 kubelet.go:2248] node "kmaster" not found
    Feb 10 16:32:22 kmaster kubelet[3733]: E0210 16:32:22.693376 3733 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed t
    Feb 10 16:32:22 kmaster kubelet[3733]: E0210 16:32:22.707989 3733 kubelet.go:2248] node "kmaster" not found
    Feb 10 16:32:22 kmaster kubelet[3733]: E0210 16:32:22.808144 3733 kubelet.go:2248] node "kmaster" not found
    Feb 10 16:32:22 kmaster kubelet[3733]: E0210 16:32:22.893328 3733 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed t
    Feb 10 16:32:22 kmaster kubelet[3733]: E0210 16:32:22.908302 3733 kubelet.go:2248] node "kmaster" not found
    Feb 10 16:32:23 kmaster kubelet[3733]: E0210 16:32:23.008431 3733 kubelet.go:2248] node "kmaster" not found
    Feb 10 16:32:23 kmaster kubelet[3733]: E0210 16:32:23.093809 3733 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:4
    Feb 10 16:32:23 kmaster kubelet[3733]: E0210 16:32:23.108599 3733 kubelet.go:2248] node "kmaster" not found
    Feb 10 16:32:23 kmaster kubelet[3733]: E0210 16:32:23.208728 3733 kubelet.go:2248] node "kmaster" not found
    Feb 10 16:32:23 kmaster kubelet[3733]: E0210 16:32:23.293805 3733 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Faile
    Feb 10 16:32:23 kmaster kubelet[3733]: E0210 16:32:23.308855 3733 kubelet.go:2248] node "kmaster" not found
    Feb 10 16:32:23 kmaster kubelet[3733]: E0210 16:32:23.408985 3733 kubelet.go:2248] node "kmaster" not found
    Feb 10 16:32:23 kmaster kubelet[3733]: E0210 16:32:23.493820 3733 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Faile
    Feb 10 16:32:23 kmaster kubelet[3733]: E0210 16:32:23.509121 3733 kubelet.go:2248] node "kmaster" not found
    Feb 10 16:32:23 kmaster kubelet[3733]: E0210 16:32:23.556004 3733 controller.go:125] failed to ensure node lease exists, will retry in
    Feb 10 16:32:23 kmaster kubelet[3733]: E0210 16:32:23.609262 3733 kubelet.go:2248] node "kmaster" not found
    Feb 10 16:32:23 kmaster kubelet[3733]: E0210 16:32:23.693813 3733 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed t
    Feb 10 16:32:23 kmaster kubelet[3733]: E0210 16:32:23.709392 3733 kubelet.go:2248] node "kmaster" not found
    Feb 10 16:32:23 kmaster kubelet[3733]: E0210 16:32:23.809535 3733 kubelet.go:2248] node "kmaster" not found
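
    One likely culprit is the `[WARNING IsDockerSystemdCheck]` line in the `kubeadm init` output: the kubelet defaults to expecting the `systemd` cgroup driver while Docker here is using `cgroupfs`, and that mismatch can keep the control-plane static pods from starting. A minimal sketch of the commonly recommended alignment, assuming Docker is the runtime and `/etc/docker/daemon.json` does not already exist (back it up first if it does):

    ```shell
    # Switch Docker to the systemd cgroup driver so it matches the
    # kubelet's recommended setting.
    cat <<'EOF' > /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF

    systemctl daemon-reload
    systemctl restart docker

    # 'docker info | grep -i cgroup' should now report: Cgroup Driver: systemd
    # Then reset and re-run the same init:
    kubeadm reset -f
    kubeadm init --apiserver-advertise-address $(hostname -i)
    ```

    This is a sketch of the fix described at https://kubernetes.io/docs/setup/cri/ (which the warning itself points to), not a guaranteed cure; if the init still times out, the `docker logs` of the failing control-plane container is the next place to look.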
     