This repository was archived by the owner on Oct 6, 2022. It is now read-only.
kubelet isn't running. #7
Open
Description
I installed Concourse and then ran the example job from the README. It seems the kubelet is not running. The pipeline I used:
```yaml
jobs:
- name: kind
  plan:
  - in_parallel:
    - get: k8s-git
    - get: kind-on-c
    - get: kind-release
      params:
        globs:
        - kind-linux-amd64
  - task: run-kind
    privileged: true
    file: kind-on-c/kind.yaml
    params:
      KIND_TESTS: |
        # your actual tests go here!
        kubectl get nodes -o wide

resources:
- name: k8s-git
  type: git
  source:
    uri: https://github.com/kubernetes/kubernetes
- name: kind-release
  type: github-release
  source:
    owner: kubernetes-sigs
    repository: kind
    access_token: <some github token>
    pre_release: true
- name: kind-on-c
  type: git
  source:
    uri: https://github.com/pivotal-k8s/kind-on-c
```
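For what it's worth, I set and triggered the pipeline with `fly` roughly like this (the target name, pipeline name, and Concourse URL are just placeholders for my setup):

```shell
# log in to the Concourse target (URL is a placeholder)
fly -t my-target login -c https://my-concourse.example.com

# set the pipeline from the config above and unpause it
fly -t my-target set-pipeline -p kind-on-c -c pipeline.yml
fly -t my-target unpause-pipeline -p kind-on-c

# trigger the job and watch its output
fly -t my-target trigger-job -j kind-on-c/kind --watch
```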
The logs are:
```
[INF] Setting up Docker environment...
[INF] Starting Docker...
[INF] Waiting 60 seconds for Docker to be available...
[INF] Docker available after 2 seconds.
[INF] /tmp/build/dd1bc04d/bin/kind: v0.5.0
[INF] will use kind upstream's node image
[INF] /tmp/build/dd1bc04d/bin/kubectl: Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"d647ddbd755faf07169599a625faf302ffc34458", GitTreeState:"clean", BuildDate:"2019-10-02T17:01:15Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
[INF] kmsg-linker starting in the background
DEBU[18:19:45] Running: /bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\t{{.Label "io.k8s.sigs.kind.cluster"}}]
Creating cluster "kind" ...
DEBU[18:19:45] Running: /bin/docker [docker inspect --type=image kindest/node:v1.15.3]
INFO[18:19:45] Pulling image: kindest/node:v1.15.3 ...
DEBU[18:19:45] Running: /bin/docker [docker pull kindest/node:v1.15.3]
✓ Ensuring node image (kindest/node:v1.15.3) 🖼
DEBU[18:20:33] Running: /bin/docker [docker info --format '{{json .SecurityOptions}}']
DEBU[18:20:33] Running: /bin/docker [docker info --format '{{json .SecurityOptions}}']
DEBU[18:20:33] Running: /bin/docker [docker info --format '{{json .SecurityOptions}}']
DEBU[18:20:33] Running: /bin/docker [docker run --detach --tty --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run --volume /var --volume /lib/modules:/lib/modules:ro --hostname kind-worker2 --name kind-worker2 --label io.k8s.sigs.kind.cluster=kind --label io.k8s.sigs.kind.role=worker kindest/node:v1.15.3@sha256:27e388752544890482a86b90d8ac50fcfa63a2e8656a96ec5337b902ec8e5157]
DEBU[18:20:33] Running: /bin/docker [docker run --detach --tty --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run --volume /var --volume /lib/modules:/lib/modules:ro --hostname kind-worker --name kind-worker --label io.k8s.sigs.kind.cluster=kind --label io.k8s.sigs.kind.role=worker kindest/node:v1.15.3@sha256:27e388752544890482a86b90d8ac50fcfa63a2e8656a96ec5337b902ec8e5157]
DEBU[18:20:33] Running: /bin/docker [docker run --detach --tty --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run --volume /var --volume /lib/modules:/lib/modules:ro --hostname kind-control-plane --name kind-control-plane --label io.k8s.sigs.kind.cluster=kind --label io.k8s.sigs.kind.role=control-plane --expose 43587 --publish=127.0.0.1:43587:6443/TCP kindest/node:v1.15.3@sha256:27e388752544890482a86b90d8ac50fcfa63a2e8656a96ec5337b902ec8e5157]
✓ Preparing nodes 📦📦📦
DEBU[18:21:16] Running: /bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\t{{.Label "io.k8s.sigs.kind.cluster"}} --filter label=io.k8s.sigs.kind.cluster=kind]
DEBU[18:21:16] Running: /bin/docker [docker inspect -f {{index .Config.Labels "io.k8s.sigs.kind.role"}} kind-control-plane]
DEBU[18:21:16] Running: /bin/docker [docker inspect -f {{index .Config.Labels "io.k8s.sigs.kind.role"}} kind-worker2]
DEBU[18:21:16] Running: /bin/docker [docker inspect -f {{index .Config.Labels "io.k8s.sigs.kind.role"}} kind-worker]
DEBU[18:21:16] Running: /bin/docker [docker exec --privileged kind-control-plane cat /kind/version]
DEBU[18:21:17] Running: /bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-control-plane]
DEBU[18:21:17] Running: /bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-worker]
DEBU[18:21:17] Running: /bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-control-plane]
DEBU[18:21:17] Running: /bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-worker2]
DEBU[18:21:18] Configuration Input data: {kind v1.15.3 172.17.0.3:6443 6443 127.0.0.1 false 172.17.0.4 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}
DEBU[18:21:18] Configuration generated:
# config generated by kind
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
metadata:
name: config
kubernetesVersion: v1.15.3
clusterName: "kind"
controlPlaneEndpoint: "172.17.0.3:6443"
# on docker for mac we have to expose the api server via port forward,
# so we need to ensure the cert is valid for localhost so we can talk
# to the cluster after rewriting the kubeconfig to point to localhost
apiServer:
certSANs: [localhost, "127.0.0.1"]
controllerManager:
extraArgs:
enable-hostpath-provisioner: "true"
# configure ipv6 default addresses for IPv6 clusters
scheduler:
extraArgs:
# configure ipv6 default addresses for IPv6 clusters
networking:
podSubnet: "10.244.0.0/16"
serviceSubnet: "10.96.0.0/12"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
metadata:
name: config
# we use a well know token for TLS bootstrap
bootstrapTokens:
- token: "abcdef.0123456789abcdef"
# we use a well know port for making the API server discoverable inside docker network.
# from the host machine such port will be accessible via a random local port instead.
localAPIEndpoint:
advertiseAddress: "172.17.0.4"
bindPort: 6443
nodeRegistration:
criSocket: "/run/containerd/containerd.sock"
kubeletExtraArgs:
fail-swap-on: "false"
node-ip: "172.17.0.4"
---
# no-op entry that exists solely so it can be patched
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
metadata:
name: config
nodeRegistration:
criSocket: "/run/containerd/containerd.sock"
kubeletExtraArgs:
fail-swap-on: "false"
node-ip: "172.17.0.4"
discovery:
bootstrapToken:
apiServerEndpoint: "172.17.0.3:6443"
token: "abcdef.0123456789abcdef"
unsafeSkipCAVerification: true
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
metadata:
name: config
# configure ipv6 addresses in IPv6 mode
# disable disk resource management by default
# kubelet will see the host disk that the inner container runtime
# is ultimately backed by and attempt to recover disk space. we don't want that.
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
---
# no-op entry that exists solely so it can be patched
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
metadata:
name: config
DEBU[18:21:18] Configuration Input data: {kind v1.15.3 172.17.0.3:6443 6443 127.0.0.1 false 172.17.0.2 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}
DEBU[18:21:18] Configuration generated:
# config generated by kind
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
metadata:
name: config
kubernetesVersion: v1.15.3
clusterName: "kind"
controlPlaneEndpoint: "172.17.0.3:6443"
# on docker for mac we have to expose the api server via port forward,
# so we need to ensure the cert is valid for localhost so we can talk
# to the cluster after rewriting the kubeconfig to point to localhost
apiServer:
certSANs: [localhost, "127.0.0.1"]
controllerManager:
extraArgs:
enable-hostpath-provisioner: "true"
# configure ipv6 default addresses for IPv6 clusters
scheduler:
extraArgs:
# configure ipv6 default addresses for IPv6 clusters
networking:
podSubnet: "10.244.0.0/16"
serviceSubnet: "10.96.0.0/12"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
metadata:
name: config
# we use a well know token for TLS bootstrap
bootstrapTokens:
- token: "abcdef.0123456789abcdef"
# we use a well know port for making the API server discoverable inside docker network.
# from the host machine such port will be accessible via a random local port instead.
localAPIEndpoint:
advertiseAddress: "172.17.0.2"
bindPort: 6443
nodeRegistration:
criSocket: "/run/containerd/containerd.sock"
kubeletExtraArgs:
fail-swap-on: "false"
node-ip: "172.17.0.2"
---
# no-op entry that exists solely so it can be patched
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
metadata:
name: config
nodeRegistration:
criSocket: "/run/containerd/containerd.sock"
kubeletExtraArgs:
fail-swap-on: "false"
node-ip: "172.17.0.2"
discovery:
bootstrapToken:
apiServerEndpoint: "172.17.0.3:6443"
token: "abcdef.0123456789abcdef"
unsafeSkipCAVerification: true
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
metadata:
name: config
# configure ipv6 addresses in IPv6 mode
# disable disk resource management by default
# kubelet will see the host disk that the inner container runtime
# is ultimately backed by and attempt to recover disk space. we don't want that.
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
---
# no-op entry that exists solely so it can be patched
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
metadata:
name: config
DEBU[18:21:18] Configuration Input data: {kind v1.15.3 172.17.0.3:6443 6443 127.0.0.1 true 172.17.0.3 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}}
DEBU[18:21:18] Configuration generated:
# config generated by kind
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
metadata:
name: config
kubernetesVersion: v1.15.3
clusterName: "kind"
controlPlaneEndpoint: "172.17.0.3:6443"
# on docker for mac we have to expose the api server via port forward,
# so we need to ensure the cert is valid for localhost so we can talk
# to the cluster after rewriting the kubeconfig to point to localhost
apiServer:
certSANs: [localhost, "127.0.0.1"]
controllerManager:
extraArgs:
enable-hostpath-provisioner: "true"
# configure ipv6 default addresses for IPv6 clusters
scheduler:
extraArgs:
# configure ipv6 default addresses for IPv6 clusters
networking:
podSubnet: "10.244.0.0/16"
serviceSubnet: "10.96.0.0/12"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
metadata:
name: config
# we use a well know token for TLS bootstrap
bootstrapTokens:
- token: "abcdef.0123456789abcdef"
# we use a well know port for making the API server discoverable inside docker network.
# from the host machine such port will be accessible via a random local port instead.
localAPIEndpoint:
advertiseAddress: "172.17.0.3"
bindPort: 6443
nodeRegistration:
criSocket: "/run/containerd/containerd.sock"
kubeletExtraArgs:
fail-swap-on: "false"
node-ip: "172.17.0.3"
---
# no-op entry that exists solely so it can be patched
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
metadata:
name: config
controlPlane:
localAPIEndpoint:
advertiseAddress: "172.17.0.3"
bindPort: 6443
nodeRegistration:
criSocket: "/run/containerd/containerd.sock"
kubeletExtraArgs:
fail-swap-on: "false"
node-ip: "172.17.0.3"
discovery:
bootstrapToken:
apiServerEndpoint: "172.17.0.3:6443"
token: "abcdef.0123456789abcdef"
unsafeSkipCAVerification: true
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
metadata:
name: config
# configure ipv6 addresses in IPv6 mode
# disable disk resource management by default
# kubelet will see the host disk that the inner container runtime
# is ultimately backed by and attempt to recover disk space. we don't want that.
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
---
# no-op entry that exists solely so it can be patched
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
metadata:
name: config
DEBU[18:21:19] Using kubeadm config:
apiServer:
certSANs:
- localhost
- 127.0.0.1
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: kind
controlPlaneEndpoint: 172.17.0.3:6443
controllerManager:
extraArgs:
enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.15.3
networking:
podSubnet: 10.244.0.0/16
serviceSubnet: 10.96.0.0/12
scheduler:
extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 172.17.0.2
bindPort: 6443
nodeRegistration:
criSocket: /run/containerd/containerd.sock
kubeletExtraArgs:
fail-swap-on: "false"
node-ip: 172.17.0.2
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
bootstrapToken:
apiServerEndpoint: 172.17.0.3:6443
token: abcdef.0123456789abcdef
unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
criSocket: /run/containerd/containerd.sock
kubeletExtraArgs:
fail-swap-on: "false"
node-ip: 172.17.0.2
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
imagefs.available: 0%
nodefs.available: 0%
nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
DEBU[18:21:19] Running: /bin/docker [docker exec --privileged kind-worker2 mkdir -p /kind]
DEBU[18:21:19] Using kubeadm config:
apiServer:
certSANs:
- localhost
- 127.0.0.1
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: kind
controlPlaneEndpoint: 172.17.0.3:6443
controllerManager:
extraArgs:
enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.15.3
networking:
podSubnet: 10.244.0.0/16
serviceSubnet: 10.96.0.0/12
scheduler:
extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 172.17.0.3
bindPort: 6443
nodeRegistration:
criSocket: /run/containerd/containerd.sock
kubeletExtraArgs:
fail-swap-on: "false"
node-ip: 172.17.0.3
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
localAPIEndpoint:
advertiseAddress: 172.17.0.3
bindPort: 6443
discovery:
bootstrapToken:
apiServerEndpoint: 172.17.0.3:6443
token: abcdef.0123456789abcdef
unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
criSocket: /run/containerd/containerd.sock
kubeletExtraArgs:
fail-swap-on: "false"
node-ip: 172.17.0.3
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
imagefs.available: 0%
nodefs.available: 0%
nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
DEBU[18:21:19] Running: /bin/docker [docker exec --privileged kind-control-plane mkdir -p /kind]
DEBU[18:21:19] Using kubeadm config:
apiServer:
certSANs:
- localhost
- 127.0.0.1
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: kind
controlPlaneEndpoint: 172.17.0.3:6443
controllerManager:
extraArgs:
enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.15.3
networking:
podSubnet: 10.244.0.0/16
serviceSubnet: 10.96.0.0/12
scheduler:
extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 172.17.0.4
bindPort: 6443
nodeRegistration:
criSocket: /run/containerd/containerd.sock
kubeletExtraArgs:
fail-swap-on: "false"
node-ip: 172.17.0.4
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
bootstrapToken:
apiServerEndpoint: 172.17.0.3:6443
token: abcdef.0123456789abcdef
unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
criSocket: /run/containerd/containerd.sock
kubeletExtraArgs:
fail-swap-on: "false"
node-ip: 172.17.0.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
imagefs.available: 0%
nodefs.available: 0%
nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
DEBU[18:21:19] Running: /bin/docker [docker exec --privileged kind-worker mkdir -p /kind]
DEBU[18:21:24] Running: /bin/docker [docker exec --privileged -i kind-worker2 cp /dev/stdin /kind/kubeadm.conf]
DEBU[18:21:24] Running: /bin/docker [docker exec --privileged -i kind-worker cp /dev/stdin /kind/kubeadm.conf]
DEBU[18:21:24] Running: /bin/docker [docker exec --privileged -i kind-control-plane cp /dev/stdin /kind/kubeadm.conf]
⠈⠁ Creating kubeadm config 📜 [INF] kmsg-linker successful, shutting down
✓ Creating kubeadm config 📜
DEBU[18:21:27] Running: /bin/docker [docker exec --privileged kind-control-plane kubeadm init --ignore-preflight-errors=all --config=/kind/kubeadm.conf --skip-token-print --v=6]
DEBU[18:23:36] I1014 18:21:28.578980 82 initconfiguration.go:189] loading configuration from "/kind/kubeadm.conf"
I1014 18:21:28.587691 82 feature_gate.go:216] feature gates: &{map[]}
[WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
I1014 18:21:28.588302 82 checks.go:581] validating Kubernetes and kubeadm version
I1014 18:21:28.588342 82 checks.go:172] validating if the firewall is enabled and active
[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfiguration
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
I1014 18:21:28.981511 82 checks.go:209] validating availability of port 6443
I1014 18:21:28.982062 82 checks.go:209] validating availability of port 10251
I1014 18:21:28.982256 82 checks.go:209] validating availability of port 10252
I1014 18:21:28.982557 82 checks.go:292] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I1014 18:21:28.982867 82 checks.go:292] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I1014 18:21:28.983087 82 checks.go:292] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I1014 18:21:28.983181 82 checks.go:292] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I1014 18:21:28.983325 82 checks.go:439] validating if the connectivity type is via proxy or direct
I1014 18:21:28.983484 82 checks.go:475] validating http connectivity to first IP address in the CIDR
I1014 18:21:28.983921 82 checks.go:475] validating http connectivity to first IP address in the CIDR
I1014 18:21:28.984031 82 checks.go:105] validating the container runtime
I1014 18:21:30.543266 82 checks.go:382] validating the presence of executable crictl
I1014 18:21:30.543727 82 checks.go:341] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I1014 18:21:30.656489 82 checks.go:341] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1014 18:21:30.784627 82 checks.go:653] validating whether swap is enabled or not
I1014 18:21:30.785055 82 checks.go:382] validating the presence of executable ip
I1014 18:21:30.825742 82 checks.go:382] validating the presence of executable iptables
I1014 18:21:30.830747 82 checks.go:382] validating the presence of executable mount
I1014 18:21:30.831014 82 checks.go:382] validating the presence of executable nsenter
I1014 18:21:30.959989 82 checks.go:382] validating the presence of executable ebtables
I1014 18:21:30.960602 82 checks.go:382] validating the presence of executable ethtool
I1014 18:21:30.960791 82 checks.go:382] validating the presence of executable socat
I1014 18:21:30.961037 82 checks.go:382] validating the presence of executable tc
I1014 18:21:30.961232 82 checks.go:382] validating the presence of executable touch
I1014 18:21:30.970741 82 checks.go:524] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1051-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.15.0-1051-aws/modules.dep.bin'\nmodprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1051-aws\n", err: exit status 1
I1014 18:21:31.147227 82 checks.go:412] checking whether the given node name is reachable using net.LookupHost
I1014 18:21:31.170004 82 checks.go:622] validating kubelet version
I1014 18:21:31.597558 82 checks.go:131] validating if the service is enabled and active
I1014 18:21:31.703504 82 checks.go:209] validating availability of port 10250
I1014 18:21:31.703815 82 checks.go:209] validating availability of port 2379
I1014 18:21:31.703992 82 checks.go:209] validating availability of port 2380
I1014 18:21:31.704178 82 checks.go:254] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1014 18:21:32.057312 82 checks.go:842] image exists: k8s.gcr.io/kube-apiserver:v1.15.3
I1014 18:21:32.071703 82 checks.go:842] image exists: k8s.gcr.io/kube-controller-manager:v1.15.3
I1014 18:21:32.084891 82 checks.go:842] image exists: k8s.gcr.io/kube-scheduler:v1.15.3
I1014 18:21:32.091814 82 checks.go:842] image exists: k8s.gcr.io/kube-proxy:v1.15.3
I1014 18:21:32.100361 82 checks.go:842] image exists: k8s.gcr.io/pause:3.1
I1014 18:21:32.106837 82 checks.go:842] image exists: k8s.gcr.io/etcd:3.3.10
I1014 18:21:32.114283 82 checks.go:842] image exists: k8s.gcr.io/coredns:1.3.1
I1014 18:21:32.114593 82 kubelet.go:61] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1014 18:21:32.151542 82 kubelet.go:79] Starting the kubelet
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I1014 18:21:32.817114 82 certs.go:104] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
I1014 18:21:36.402593 82 certs.go:104] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.17.0.3 172.17.0.3 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I1014 18:21:37.468092 82 certs.go:104] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
I1014 18:21:38.247122 82 certs.go:70] creating a new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1014 18:21:38.446304 82 kubeconfig.go:79] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I1014 18:21:38.799561 82 kubeconfig.go:79] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I1014 18:21:39.030137 82 kubeconfig.go:79] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1014 18:21:39.102549 82 kubeconfig.go:79] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I1014 18:21:39.263012 82 manifests.go:115] [control-plane] getting StaticPodSpecs
I1014 18:21:39.350543 82 manifests.go:131] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I1014 18:21:39.356141 82 manifests.go:115] [control-plane] getting StaticPodSpecs
I1014 18:21:39.379764 82 manifests.go:131] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I1014 18:21:39.389670 82 manifests.go:115] [control-plane] getting StaticPodSpecs
I1014 18:21:39.390652 82 manifests.go:131] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1014 18:21:39.405349 82 local.go:60] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I1014 18:21:39.405622 82 waitcontrolplane.go:80] [wait-control-plane] Waiting for the API server to be healthy
I1014 18:21:39.406957 82 loader.go:359] Config loaded from file: /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1014 18:21:39.418315 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:39.924496 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:40.424106 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:40.924056 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:41.423989 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:41.924116 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:42.424104 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:42.924058 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:43.423971 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:43.924120 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:44.424030 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:44.924132 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:45.424130 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:45.937824 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:46.424131 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:46.924095 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:47.424053 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:47.924090 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:48.424072 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:48.924102 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:49.424124 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:49.924065 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:50.424075 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:50.923981 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:51.424053 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:51.924046 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:52.424027 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:52.924088 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:53.424100 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:53.925096 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:54.424117 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:54.924121 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:55.430311 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:55.924114 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
I1014 18:21:56.424090 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 0 milliseconds
[... identical healthz probes repeated every ~500ms, trimmed for brevity ...]
[kubelet-check] Initial timeout of 40s passed.
I1014 18:22:19.474587 82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s in 24 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[... identical healthz probes repeated every ~500ms, trimmed for brevity ...]
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[... identical healthz probes repeated every ~500ms, trimmed for brevity ...]
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[... identical healthz probes repeated every ~500ms, trimmed for brevity ...]
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[... identical healthz probes repeated every ~500ms, trimmed for brevity ...]
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
✗ Starting control-plane 🕹️
Error: failed to create cluster: failed to init node with kubeadm: exit status 1
[ERR] Build failed (1), not stopping docker.```
Can you help me with this issue?