Result      | FAILURE
Tests       | 1 failed / 12 succeeded
Started     |
Elapsed     | 1h35m
Revision    | 92fe578a254a1f0db7e6d252aa1b53119f42b8bf
Refs        | 122
job-version | v1.19.0-beta.2.149+5ace14fac7d669
revision    | v1.19.0-beta.2.149+5ace14fac7d669
error during ./hack/ginkgo-e2e.sh --node-os-distro=windows --ginkgo.focus=\[Conformance\]|\[NodeConformance\]|\[sig-windows\]|\[sig-apps\].CronJob|\[sig-api-machinery\].ResourceQuota|\[sig-scheduling\].SchedulerPreemption --ginkgo.skip=\[LinuxOnly\]|\[Serial\]|Guestbook.application.should.create.and.stop.a.working.application --report-dir=/logs/artifacts --disable-log-dump=true: exit status 1
from junit_runner.xml
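The failing command recorded above can, in principle, be re-run outside the CI container with the same focus/skip expressions. The sketch below is an assumption-laden local reproduction, not part of the original job output: it presumes a kubernetes source checkout containing hack/ginkgo-e2e.sh, cluster credentials for the Windows test cluster already configured via KUBECONFIG, and a writable artifact directory substituted for the CI path /logs/artifacts.

  # Hypothetical local re-run of the same suite selection (sketch only).
  # Assumes a built kubernetes tree and working cluster credentials.
  export KUBECONFIG="$HOME/.kube/config"   # assumption: cluster access is configured here
  ./hack/ginkgo-e2e.sh \
    --node-os-distro=windows \
    --ginkgo.focus='\[Conformance\]|\[NodeConformance\]|\[sig-windows\]|\[sig-apps\].CronJob|\[sig-api-machinery\].ResourceQuota|\[sig-scheduling\].SchedulerPreemption' \
    --ginkgo.skip='\[LinuxOnly\]|\[Serial\]|Guestbook.application.should.create.and.stop.a.working.application' \
    --report-dir="$PWD/_artifacts" \
    --disable-log-dump=true

Whether the Windows node prerequisites of the original cluster are met locally is outside the scope of this log.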
Filter through log files
Build
Check APIReachability
Deferred TearDown
DumpClusterLogs
IsUp
TearDown
TearDown Previous
Timeout
Up
kubectl version
list nodes
test setup
... skipping 179 lines ... {"log":"service/kube-dns created\n","stream":"stdout","time":"2020-07-13T23:22:39.340807133Z"} {"log":"serviceaccount/coredns-autoscaler created\n","stream":"stdout","time":"2020-07-13T23:22:39.340812133Z"} {"log":"clusterrole.rbac.authorization.k8s.io/coredns-autoscaler created\n","stream":"stdout","time":"2020-07-13T23:22:39.340817333Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/coredns-autoscaler created\n","stream":"stdout","time":"2020-07-13T23:22:39.340822533Z"} {"log":"deployment.apps/coredns-autoscaler created\n","stream":"stdout","time":"2020-07-13T23:22:39.340827233Z"} {"log":"configmap/azure-ip-masq-agent-config created\n","stream":"stdout","time":"2020-07-13T23:22:39.340831833Z"} {"log":"error: unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stdout","time":"2020-07-13T23:22:39.340836433Z"} {"log":"INFO: == Kubernetes addon ensure completed at 2020-07-13T23:22:39+00:00 ==\n","stream":"stdout","time":"2020-07-13T23:22:39.345265533Z"} {"log":"INFO: == Reconciling with deprecated label ==\n","stream":"stdout","time":"2020-07-13T23:22:39.345288433Z"} {"log":"error: unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stderr","time":"2020-07-13T23:22:46.852868133Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-cloud-provider created\n","stream":"stdout","time":"2020-07-13T23:22:46.852914133Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-cloud-provider created\n","stream":"stdout","time":"2020-07-13T23:22:46.909552133Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-persistent-volume-binder created\n","stream":"stdout","time":"2020-07-13T23:22:46.909559033Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-persistent-volume-binder created\n","stream":"stdout","time":"2020-07-13T23:22:46.909564033Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-cloud-provider-secret-getter created\n","stream":"stdout","time":"2020-07-13T23:22:46.909580333Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-cloud-provider-secret-getter created\n","stream":"stdout","time":"2020-07-13T23:22:46.909584533Z"} ... skipping 11 lines ... 
{"log":"rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created\n","stream":"stdout","time":"2020-07-13T23:22:46.909637333Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created\n","stream":"stdout","time":"2020-07-13T23:22:46.909641633Z"} {"log":"service/metrics-server created\n","stream":"stdout","time":"2020-07-13T23:22:46.909646033Z"} {"log":"deployment.apps/metrics-server created\n","stream":"stdout","time":"2020-07-13T23:22:46.909649933Z"} {"log":"apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created\n","stream":"stdout","time":"2020-07-13T23:22:46.909654333Z"} {"log":"INFO: == Reconciling with addon-manager label ==\n","stream":"stdout","time":"2020-07-13T23:22:46.909660733Z"} {"log":"error: unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stderr","time":"2020-07-13T23:22:52.375462133Z"} {"log":"daemonset.apps/azure-cni-networkmonitor created\n","stream":"stdout","time":"2020-07-13T23:22:52.377021433Z"} {"log":"podsecuritypolicy.policy/privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:22:52.377035833Z"} {"log":"clusterrole.rbac.authorization.k8s.io/psp:privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:22:52.377041533Z"} {"log":"clusterrole.rbac.authorization.k8s.io/psp:restricted unchanged\n","stream":"stdout","time":"2020-07-13T23:22:52.377046333Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/default:restricted unchanged\n","stream":"stdout","time":"2020-07-13T23:22:52.377051733Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/default:privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:22:52.377077433Z"} ... skipping 3 lines ... 
{"log":"clusterrole.rbac.authorization.k8s.io/secretproviderclasses-role created\n","stream":"stdout","time":"2020-07-13T23:22:52.377098233Z"} {"log":"customresourcedefinition.apiextensions.k8s.io/secretproviderclasses.secrets-store.csi.x-k8s.io created\n","stream":"stdout","time":"2020-07-13T23:22:52.377103933Z"} {"log":"daemonset.apps/csi-secrets-store created\n","stream":"stdout","time":"2020-07-13T23:22:52.377116933Z"} {"log":"daemonset.apps/csi-secrets-store-provider-azure created\n","stream":"stdout","time":"2020-07-13T23:22:52.377121433Z"} {"log":"INFO: == Kubernetes addon reconcile completed at 2020-07-13T23:22:52+00:00 ==\n","stream":"stdout","time":"2020-07-13T23:22:52.381166033Z"} {"log":"INFO: Leader is k8s-master-89242181-0_45256d2d-b9cd-46ec-bfad-09f3a73b56c2\n","stream":"stdout","time":"2020-07-13T23:23:33.546358233Z"} {"log":"Error from server (Invalid): error when creating \"/etc/kubernetes/addons/coredns.yaml\": Service \"kube-dns\" is invalid: spec.clusterIP: Invalid value: \"10.0.0.10\": provided IP is already allocated\n","stream":"stdout","time":"2020-07-13T23:23:35.581871833Z"} {"log":"unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stdout","time":"2020-07-13T23:23:35.581894933Z"} {"log":"INFO: == Kubernetes addon ensure completed at 2020-07-13T23:23:35+00:00 ==\n","stream":"stdout","time":"2020-07-13T23:23:35.583728233Z"} {"log":"INFO: == Reconciling with deprecated label ==\n","stream":"stdout","time":"2020-07-13T23:23:35.583889133Z"} {"log":"error: unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stderr","time":"2020-07-13T23:23:37.387243333Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-cloud-provider unchanged\n","stream":"stdout","time":"2020-07-13T23:23:37.390017233Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-cloud-provider unchanged\n","stream":"stdout","time":"2020-07-13T23:23:37.390030633Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-persistent-volume-binder unchanged\n","stream":"stdout","time":"2020-07-13T23:23:37.390052633Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-persistent-volume-binder unchanged\n","stream":"stdout","time":"2020-07-13T23:23:37.390058233Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-cloud-provider-secret-getter unchanged\n","stream":"stdout","time":"2020-07-13T23:23:37.390063933Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-cloud-provider-secret-getter unchanged\n","stream":"stdout","time":"2020-07-13T23:23:37.390069333Z"} ... skipping 11 lines ... 
{"log":"rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged\n","stream":"stdout","time":"2020-07-13T23:23:37.390114833Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator unchanged\n","stream":"stdout","time":"2020-07-13T23:23:37.390118333Z"} {"log":"service/metrics-server unchanged\n","stream":"stdout","time":"2020-07-13T23:23:37.390129133Z"} {"log":"deployment.apps/metrics-server unchanged\n","stream":"stdout","time":"2020-07-13T23:23:37.390132133Z"} {"log":"apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged\n","stream":"stdout","time":"2020-07-13T23:23:37.390135033Z"} {"log":"INFO: == Reconciling with addon-manager label ==\n","stream":"stdout","time":"2020-07-13T23:23:37.392047433Z"} {"log":"error: unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stderr","time":"2020-07-13T23:23:39.112469133Z"} {"log":"daemonset.apps/azure-cni-networkmonitor unchanged\n","stream":"stdout","time":"2020-07-13T23:23:39.113662433Z"} {"log":"podsecuritypolicy.policy/privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:23:39.113675633Z"} {"log":"clusterrole.rbac.authorization.k8s.io/psp:privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:23:39.113680333Z"} {"log":"clusterrole.rbac.authorization.k8s.io/psp:restricted unchanged\n","stream":"stdout","time":"2020-07-13T23:23:39.113685033Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/default:restricted unchanged\n","stream":"stdout","time":"2020-07-13T23:23:39.113689333Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/default:privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:23:39.113694033Z"} {"log":"csidriver.storage.k8s.io/secrets-store.csi.k8s.io unchanged\n","stream":"stdout","time":"2020-07-13T23:23:39.113713833Z"} {"log":"serviceaccount/secrets-store-csi-driver unchanged\n","stream":"stdout","time":"2020-07-13T23:23:39.113718233Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/secretproviderclasses-rolebinding unchanged\n","stream":"stdout","time":"2020-07-13T23:23:39.113722733Z"} {"log":"clusterrole.rbac.authorization.k8s.io/secretproviderclasses-role unchanged\n","stream":"stdout","time":"2020-07-13T23:23:39.113727233Z"} {"log":"daemonset.apps/csi-secrets-store unchanged\n","stream":"stdout","time":"2020-07-13T23:23:39.113731833Z"} {"log":"INFO: == Kubernetes addon reconcile completed at 2020-07-13T23:23:39+00:00 ==\n","stream":"stdout","time":"2020-07-13T23:23:39.115032033Z"} {"log":"INFO: Leader is k8s-master-89242181-0_45256d2d-b9cd-46ec-bfad-09f3a73b56c2\n","stream":"stdout","time":"2020-07-13T23:24:33.206899933Z"} {"log":"Error from server (Invalid): error when creating \"/etc/kubernetes/addons/coredns.yaml\": Service \"kube-dns\" is invalid: spec.clusterIP: Invalid value: \"10.0.0.10\": provided IP is already allocated\n","stream":"stdout","time":"2020-07-13T23:24:34.828117933Z"} {"log":"unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stdout","time":"2020-07-13T23:24:34.828166933Z"} {"log":"INFO: == Kubernetes addon ensure completed at 2020-07-13T23:24:34+00:00 ==\n","stream":"stdout","time":"2020-07-13T23:24:34.829534433Z"} {"log":"INFO: == Reconciling with deprecated label ==\n","stream":"stdout","time":"2020-07-13T23:24:34.829549733Z"} {"log":"error: unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches 
for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stderr","time":"2020-07-13T23:24:36.285959133Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-cloud-provider unchanged\n","stream":"stdout","time":"2020-07-13T23:24:36.287400733Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-cloud-provider unchanged\n","stream":"stdout","time":"2020-07-13T23:24:36.287415733Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-persistent-volume-binder unchanged\n","stream":"stdout","time":"2020-07-13T23:24:36.287421133Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-persistent-volume-binder unchanged\n","stream":"stdout","time":"2020-07-13T23:24:36.287426533Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-cloud-provider-secret-getter unchanged\n","stream":"stdout","time":"2020-07-13T23:24:36.287447233Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-cloud-provider-secret-getter unchanged\n","stream":"stdout","time":"2020-07-13T23:24:36.287461633Z"} ... skipping 11 lines ... {"log":"rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged\n","stream":"stdout","time":"2020-07-13T23:24:36.287511333Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator unchanged\n","stream":"stdout","time":"2020-07-13T23:24:36.287515433Z"} {"log":"service/metrics-server unchanged\n","stream":"stdout","time":"2020-07-13T23:24:36.287519733Z"} {"log":"deployment.apps/metrics-server unchanged\n","stream":"stdout","time":"2020-07-13T23:24:36.287523933Z"} {"log":"apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged\n","stream":"stdout","time":"2020-07-13T23:24:36.287527933Z"} {"log":"INFO: == Reconciling with addon-manager label ==\n","stream":"stdout","time":"2020-07-13T23:24:36.288457233Z"} {"log":"error: unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stderr","time":"2020-07-13T23:24:37.917081333Z"} {"log":"daemonset.apps/azure-cni-networkmonitor unchanged\n","stream":"stdout","time":"2020-07-13T23:24:37.918120833Z"} {"log":"podsecuritypolicy.policy/privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:24:37.918134033Z"} {"log":"clusterrole.rbac.authorization.k8s.io/psp:privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:24:37.918165333Z"} {"log":"clusterrole.rbac.authorization.k8s.io/psp:restricted unchanged\n","stream":"stdout","time":"2020-07-13T23:24:37.918171733Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/default:restricted unchanged\n","stream":"stdout","time":"2020-07-13T23:24:37.918175033Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/default:privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:24:37.918178133Z"} {"log":"csidriver.storage.k8s.io/secrets-store.csi.k8s.io unchanged\n","stream":"stdout","time":"2020-07-13T23:24:37.918181333Z"} {"log":"serviceaccount/secrets-store-csi-driver unchanged\n","stream":"stdout","time":"2020-07-13T23:24:37.918184333Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/secretproviderclasses-rolebinding unchanged\n","stream":"stdout","time":"2020-07-13T23:24:37.918187233Z"} {"log":"clusterrole.rbac.authorization.k8s.io/secretproviderclasses-role unchanged\n","stream":"stdout","time":"2020-07-13T23:24:37.918190233Z"} {"log":"daemonset.apps/csi-secrets-store unchanged\n","stream":"stdout","time":"2020-07-13T23:24:37.918193333Z"} {"log":"INFO: == Kubernetes 
addon reconcile completed at 2020-07-13T23:24:37+00:00 ==\n","stream":"stdout","time":"2020-07-13T23:24:37.919586533Z"} {"log":"INFO: Leader is k8s-master-89242181-0_45256d2d-b9cd-46ec-bfad-09f3a73b56c2\n","stream":"stdout","time":"2020-07-13T23:25:34.008269733Z"} {"log":"Error from server (Invalid): error when creating \"/etc/kubernetes/addons/coredns.yaml\": Service \"kube-dns\" is invalid: spec.clusterIP: Invalid value: \"10.0.0.10\": provided IP is already allocated\n","stream":"stdout","time":"2020-07-13T23:25:35.630287333Z"} {"log":"unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stdout","time":"2020-07-13T23:25:35.630313933Z"} {"log":"INFO: == Kubernetes addon ensure completed at 2020-07-13T23:25:35+00:00 ==\n","stream":"stdout","time":"2020-07-13T23:25:35.631877133Z"} {"log":"INFO: == Reconciling with deprecated label ==\n","stream":"stdout","time":"2020-07-13T23:25:35.631914033Z"} {"log":"error: unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stderr","time":"2020-07-13T23:25:37.083918533Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-cloud-provider unchanged\n","stream":"stdout","time":"2020-07-13T23:25:37.085123633Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-cloud-provider unchanged\n","stream":"stdout","time":"2020-07-13T23:25:37.085138433Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-persistent-volume-binder unchanged\n","stream":"stdout","time":"2020-07-13T23:25:37.085144733Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-persistent-volume-binder unchanged\n","stream":"stdout","time":"2020-07-13T23:25:37.085149833Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-cloud-provider-secret-getter unchanged\n","stream":"stdout","time":"2020-07-13T23:25:37.085154633Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-cloud-provider-secret-getter unchanged\n","stream":"stdout","time":"2020-07-13T23:25:37.085159333Z"} ... skipping 11 lines ... 
{"log":"rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged\n","stream":"stdout","time":"2020-07-13T23:25:37.085219533Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator unchanged\n","stream":"stdout","time":"2020-07-13T23:25:37.085224733Z"} {"log":"service/metrics-server unchanged\n","stream":"stdout","time":"2020-07-13T23:25:37.085229833Z"} {"log":"deployment.apps/metrics-server unchanged\n","stream":"stdout","time":"2020-07-13T23:25:37.085234533Z"} {"log":"apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged\n","stream":"stdout","time":"2020-07-13T23:25:37.085239033Z"} {"log":"INFO: == Reconciling with addon-manager label ==\n","stream":"stdout","time":"2020-07-13T23:25:37.085436333Z"} {"log":"error: unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stderr","time":"2020-07-13T23:25:38.635895033Z"} {"log":"daemonset.apps/azure-cni-networkmonitor unchanged\n","stream":"stdout","time":"2020-07-13T23:25:38.636981333Z"} {"log":"podsecuritypolicy.policy/privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:25:38.636996433Z"} {"log":"clusterrole.rbac.authorization.k8s.io/psp:privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:25:38.637001333Z"} {"log":"clusterrole.rbac.authorization.k8s.io/psp:restricted unchanged\n","stream":"stdout","time":"2020-07-13T23:25:38.637005633Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/default:restricted unchanged\n","stream":"stdout","time":"2020-07-13T23:25:38.637010433Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/default:privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:25:38.637015133Z"} {"log":"csidriver.storage.k8s.io/secrets-store.csi.k8s.io unchanged\n","stream":"stdout","time":"2020-07-13T23:25:38.637019333Z"} {"log":"serviceaccount/secrets-store-csi-driver unchanged\n","stream":"stdout","time":"2020-07-13T23:25:38.637023633Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/secretproviderclasses-rolebinding unchanged\n","stream":"stdout","time":"2020-07-13T23:25:38.637027833Z"} {"log":"clusterrole.rbac.authorization.k8s.io/secretproviderclasses-role unchanged\n","stream":"stdout","time":"2020-07-13T23:25:38.637032133Z"} {"log":"daemonset.apps/csi-secrets-store unchanged\n","stream":"stdout","time":"2020-07-13T23:25:38.637036633Z"} {"log":"INFO: == Kubernetes addon reconcile completed at 2020-07-13T23:25:38+00:00 ==\n","stream":"stdout","time":"2020-07-13T23:25:38.638781233Z"} {"log":"INFO: Leader is k8s-master-89242181-0_45256d2d-b9cd-46ec-bfad-09f3a73b56c2\n","stream":"stdout","time":"2020-07-13T23:26:33.777864333Z"} {"log":"Error from server (Invalid): error when creating \"/etc/kubernetes/addons/coredns.yaml\": Service \"kube-dns\" is invalid: spec.clusterIP: Invalid value: \"10.0.0.10\": provided IP is already allocated\n","stream":"stdout","time":"2020-07-13T23:26:35.380297685Z"} {"log":"unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stdout","time":"2020-07-13T23:26:35.380317686Z"} {"log":"INFO: == Kubernetes addon ensure completed at 2020-07-13T23:26:35+00:00 ==\n","stream":"stdout","time":"2020-07-13T23:26:35.381834189Z"} {"log":"INFO: == Reconciling with deprecated label ==\n","stream":"stdout","time":"2020-07-13T23:26:35.38184809Z"} {"log":"error: unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches 
for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stderr","time":"2020-07-13T23:26:36.962972196Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-cloud-provider unchanged\n","stream":"stdout","time":"2020-07-13T23:26:36.964109371Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-cloud-provider unchanged\n","stream":"stdout","time":"2020-07-13T23:26:36.964122972Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-persistent-volume-binder unchanged\n","stream":"stdout","time":"2020-07-13T23:26:36.964153574Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-persistent-volume-binder unchanged\n","stream":"stdout","time":"2020-07-13T23:26:36.964159475Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-cloud-provider-secret-getter unchanged\n","stream":"stdout","time":"2020-07-13T23:26:36.964164875Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-cloud-provider-secret-getter unchanged\n","stream":"stdout","time":"2020-07-13T23:26:36.964170175Z"} ... skipping 11 lines ... {"log":"rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged\n","stream":"stdout","time":"2020-07-13T23:26:36.96424508Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator unchanged\n","stream":"stdout","time":"2020-07-13T23:26:36.964248981Z"} {"log":"service/metrics-server unchanged\n","stream":"stdout","time":"2020-07-13T23:26:36.964253481Z"} {"log":"deployment.apps/metrics-server unchanged\n","stream":"stdout","time":"2020-07-13T23:26:36.964257381Z"} {"log":"apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged\n","stream":"stdout","time":"2020-07-13T23:26:36.964261881Z"} {"log":"INFO: == Reconciling with addon-manager label ==\n","stream":"stdout","time":"2020-07-13T23:26:36.964460095Z"} {"log":"error: unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stderr","time":"2020-07-13T23:26:38.54501766Z"} {"log":"daemonset.apps/azure-cni-networkmonitor unchanged\n","stream":"stdout","time":"2020-07-13T23:26:38.546072425Z"} {"log":"podsecuritypolicy.policy/privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:26:38.546085326Z"} {"log":"clusterrole.rbac.authorization.k8s.io/psp:privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:26:38.546090326Z"} {"log":"clusterrole.rbac.authorization.k8s.io/psp:restricted unchanged\n","stream":"stdout","time":"2020-07-13T23:26:38.546094627Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/default:restricted unchanged\n","stream":"stdout","time":"2020-07-13T23:26:38.546098827Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/default:privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:26:38.546101927Z"} {"log":"csidriver.storage.k8s.io/secrets-store.csi.k8s.io unchanged\n","stream":"stdout","time":"2020-07-13T23:26:38.546105227Z"} {"log":"serviceaccount/secrets-store-csi-driver unchanged\n","stream":"stdout","time":"2020-07-13T23:26:38.546108427Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/secretproviderclasses-rolebinding unchanged\n","stream":"stdout","time":"2020-07-13T23:26:38.546111328Z"} {"log":"clusterrole.rbac.authorization.k8s.io/secretproviderclasses-role unchanged\n","stream":"stdout","time":"2020-07-13T23:26:38.546114428Z"} {"log":"daemonset.apps/csi-secrets-store unchanged\n","stream":"stdout","time":"2020-07-13T23:26:38.546118328Z"} {"log":"INFO: == Kubernetes addon 
reconcile completed at 2020-07-13T23:26:38+00:00 ==\n","stream":"stdout","time":"2020-07-13T23:26:38.547548117Z"} {"log":"INFO: Leader is k8s-master-89242181-0_45256d2d-b9cd-46ec-bfad-09f3a73b56c2\n","stream":"stdout","time":"2020-07-13T23:27:33.64304651Z"} {"log":"Error from server (Invalid): error when creating \"/etc/kubernetes/addons/coredns.yaml\": Service \"kube-dns\" is invalid: spec.clusterIP: Invalid value: \"10.0.0.10\": provided IP is already allocated\n","stream":"stdout","time":"2020-07-13T23:27:35.215787201Z"} {"log":"unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stdout","time":"2020-07-13T23:27:35.215840605Z"} {"log":"INFO: == Kubernetes addon ensure completed at 2020-07-13T23:27:35+00:00 ==\n","stream":"stdout","time":"2020-07-13T23:27:35.219466729Z"} {"log":"INFO: == Reconciling with deprecated label ==\n","stream":"stdout","time":"2020-07-13T23:27:35.219524433Z"} {"log":"error: unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stderr","time":"2020-07-13T23:27:36.799445498Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-cloud-provider unchanged\n","stream":"stdout","time":"2020-07-13T23:27:36.800700273Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-cloud-provider unchanged\n","stream":"stdout","time":"2020-07-13T23:27:36.800714074Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-persistent-volume-binder unchanged\n","stream":"stdout","time":"2020-07-13T23:27:36.800719174Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-persistent-volume-binder unchanged\n","stream":"stdout","time":"2020-07-13T23:27:36.800724174Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-cloud-provider-secret-getter unchanged\n","stream":"stdout","time":"2020-07-13T23:27:36.800728575Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-cloud-provider-secret-getter unchanged\n","stream":"stdout","time":"2020-07-13T23:27:36.800748376Z"} ... skipping 11 lines ... 
{"log":"rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged\n","stream":"stdout","time":"2020-07-13T23:27:36.800792378Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator unchanged\n","stream":"stdout","time":"2020-07-13T23:27:36.800795979Z"} {"log":"service/metrics-server unchanged\n","stream":"stdout","time":"2020-07-13T23:27:36.800798979Z"} {"log":"deployment.apps/metrics-server unchanged\n","stream":"stdout","time":"2020-07-13T23:27:36.800801979Z"} {"log":"apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged\n","stream":"stdout","time":"2020-07-13T23:27:36.800805779Z"} {"log":"INFO: == Reconciling with addon-manager label ==\n","stream":"stdout","time":"2020-07-13T23:27:36.801082396Z"} {"log":"error: unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stderr","time":"2020-07-13T23:27:38.376160084Z"} {"log":"daemonset.apps/azure-cni-networkmonitor unchanged\n","stream":"stdout","time":"2020-07-13T23:27:38.377518161Z"} {"log":"podsecuritypolicy.policy/privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:27:38.377537162Z"} {"log":"clusterrole.rbac.authorization.k8s.io/psp:privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:27:38.377542162Z"} {"log":"clusterrole.rbac.authorization.k8s.io/psp:restricted unchanged\n","stream":"stdout","time":"2020-07-13T23:27:38.377546863Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/default:restricted unchanged\n","stream":"stdout","time":"2020-07-13T23:27:38.377551763Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/default:privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:27:38.377556163Z"} {"log":"csidriver.storage.k8s.io/secrets-store.csi.k8s.io unchanged\n","stream":"stdout","time":"2020-07-13T23:27:38.377560663Z"} {"log":"serviceaccount/secrets-store-csi-driver unchanged\n","stream":"stdout","time":"2020-07-13T23:27:38.377578164Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/secretproviderclasses-rolebinding unchanged\n","stream":"stdout","time":"2020-07-13T23:27:38.377583065Z"} {"log":"clusterrole.rbac.authorization.k8s.io/secretproviderclasses-role unchanged\n","stream":"stdout","time":"2020-07-13T23:27:38.377587465Z"} {"log":"daemonset.apps/csi-secrets-store unchanged\n","stream":"stdout","time":"2020-07-13T23:27:38.377591865Z"} {"log":"INFO: == Kubernetes addon reconcile completed at 2020-07-13T23:27:38+00:00 ==\n","stream":"stdout","time":"2020-07-13T23:27:38.379124652Z"} {"log":"INFO: Leader is k8s-master-89242181-0_45256d2d-b9cd-46ec-bfad-09f3a73b56c2\n","stream":"stdout","time":"2020-07-13T23:28:33.483147103Z"} {"log":"Error from server (Invalid): error when creating \"/etc/kubernetes/addons/coredns.yaml\": Service \"kube-dns\" is invalid: spec.clusterIP: Invalid value: \"10.0.0.10\": provided IP is already allocated\n","stream":"stdout","time":"2020-07-13T23:28:35.163910399Z"} {"log":"unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stdout","time":"2020-07-13T23:28:35.1639297Z"} {"log":"INFO: == Kubernetes addon ensure completed at 2020-07-13T23:28:35+00:00 ==\n","stream":"stdout","time":"2020-07-13T23:28:35.165329915Z"} {"log":"INFO: == Reconciling with deprecated label ==\n","stream":"stdout","time":"2020-07-13T23:28:35.165428916Z"} {"log":"error: unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches 
for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stderr","time":"2020-07-13T23:28:36.784642443Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-cloud-provider unchanged\n","stream":"stdout","time":"2020-07-13T23:28:36.785725655Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-cloud-provider unchanged\n","stream":"stdout","time":"2020-07-13T23:28:36.785742055Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-persistent-volume-binder unchanged\n","stream":"stdout","time":"2020-07-13T23:28:36.785747455Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-persistent-volume-binder unchanged\n","stream":"stdout","time":"2020-07-13T23:28:36.785752455Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-cloud-provider-secret-getter unchanged\n","stream":"stdout","time":"2020-07-13T23:28:36.785757955Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-cloud-provider-secret-getter unchanged\n","stream":"stdout","time":"2020-07-13T23:28:36.785762955Z"} ... skipping 11 lines ... {"log":"rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged\n","stream":"stdout","time":"2020-07-13T23:28:36.785824456Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator unchanged\n","stream":"stdout","time":"2020-07-13T23:28:36.785829256Z"} {"log":"service/metrics-server unchanged\n","stream":"stdout","time":"2020-07-13T23:28:36.785833356Z"} {"log":"deployment.apps/metrics-server unchanged\n","stream":"stdout","time":"2020-07-13T23:28:36.785837256Z"} {"log":"apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged\n","stream":"stdout","time":"2020-07-13T23:28:36.785841456Z"} {"log":"INFO: == Reconciling with addon-manager label ==\n","stream":"stdout","time":"2020-07-13T23:28:36.786126659Z"} {"log":"error: unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stderr","time":"2020-07-13T23:28:38.366010559Z"} {"log":"daemonset.apps/azure-cni-networkmonitor unchanged\n","stream":"stdout","time":"2020-07-13T23:28:38.367147571Z"} {"log":"podsecuritypolicy.policy/privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:28:38.367162471Z"} {"log":"clusterrole.rbac.authorization.k8s.io/psp:privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:28:38.367167571Z"} {"log":"clusterrole.rbac.authorization.k8s.io/psp:restricted unchanged\n","stream":"stdout","time":"2020-07-13T23:28:38.367172171Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/default:restricted unchanged\n","stream":"stdout","time":"2020-07-13T23:28:38.367176971Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/default:privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:28:38.367196072Z"} {"log":"csidriver.storage.k8s.io/secrets-store.csi.k8s.io unchanged\n","stream":"stdout","time":"2020-07-13T23:28:38.367200272Z"} {"log":"serviceaccount/secrets-store-csi-driver unchanged\n","stream":"stdout","time":"2020-07-13T23:28:38.367203872Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/secretproviderclasses-rolebinding unchanged\n","stream":"stdout","time":"2020-07-13T23:28:38.367208072Z"} {"log":"clusterrole.rbac.authorization.k8s.io/secretproviderclasses-role unchanged\n","stream":"stdout","time":"2020-07-13T23:28:38.367213172Z"} {"log":"daemonset.apps/csi-secrets-store unchanged\n","stream":"stdout","time":"2020-07-13T23:28:38.367217672Z"} {"log":"INFO: == Kubernetes 
addon reconcile completed at 2020-07-13T23:28:38+00:00 ==\n","stream":"stdout","time":"2020-07-13T23:28:38.368594387Z"} {"log":"INFO: Leader is k8s-master-89242181-0_45256d2d-b9cd-46ec-bfad-09f3a73b56c2\n","stream":"stdout","time":"2020-07-13T23:29:33.468738225Z"} {"log":"Error from server (Invalid): error when creating \"/etc/kubernetes/addons/coredns.yaml\": Service \"kube-dns\" is invalid: spec.clusterIP: Invalid value: \"10.0.0.10\": provided IP is already allocated\n","stream":"stdout","time":"2020-07-13T23:29:35.114222255Z"} {"log":"unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stdout","time":"2020-07-13T23:29:35.114245155Z"} {"log":"INFO: == Kubernetes addon ensure completed at 2020-07-13T23:29:35+00:00 ==\n","stream":"stdout","time":"2020-07-13T23:29:35.11566477Z"} {"log":"INFO: == Reconciling with deprecated label ==\n","stream":"stdout","time":"2020-07-13T23:29:35.115776772Z"} {"log":"error: unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stderr","time":"2020-07-13T23:29:36.579763124Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-cloud-provider unchanged\n","stream":"stdout","time":"2020-07-13T23:29:36.580893837Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-cloud-provider unchanged\n","stream":"stdout","time":"2020-07-13T23:29:36.580909437Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-persistent-volume-binder unchanged\n","stream":"stdout","time":"2020-07-13T23:29:36.580914437Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-persistent-volume-binder unchanged\n","stream":"stdout","time":"2020-07-13T23:29:36.580919037Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-cloud-provider-secret-getter unchanged\n","stream":"stdout","time":"2020-07-13T23:29:36.580923637Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-cloud-provider-secret-getter unchanged\n","stream":"stdout","time":"2020-07-13T23:29:36.580941737Z"} ... skipping 11 lines ... 
{"log":"rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged\n","stream":"stdout","time":"2020-07-13T23:29:36.580990838Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator unchanged\n","stream":"stdout","time":"2020-07-13T23:29:36.580995338Z"} {"log":"service/metrics-server unchanged\n","stream":"stdout","time":"2020-07-13T23:29:36.581000438Z"} {"log":"deployment.apps/metrics-server unchanged\n","stream":"stdout","time":"2020-07-13T23:29:36.581004838Z"} {"log":"apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged\n","stream":"stdout","time":"2020-07-13T23:29:36.581010138Z"} {"log":"INFO: == Reconciling with addon-manager label ==\n","stream":"stdout","time":"2020-07-13T23:29:36.581274641Z"} {"log":"error: unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stderr","time":"2020-07-13T23:29:38.172830984Z"} {"log":"daemonset.apps/azure-cni-networkmonitor unchanged\n","stream":"stdout","time":"2020-07-13T23:29:38.174197099Z"} {"log":"podsecuritypolicy.policy/privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:29:38.174211599Z"} {"log":"clusterrole.rbac.authorization.k8s.io/psp:privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:29:38.174216199Z"} {"log":"clusterrole.rbac.authorization.k8s.io/psp:restricted unchanged\n","stream":"stdout","time":"2020-07-13T23:29:38.174220899Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/default:restricted unchanged\n","stream":"stdout","time":"2020-07-13T23:29:38.1742334Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/default:privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:29:38.1742379Z"} {"log":"csidriver.storage.k8s.io/secrets-store.csi.k8s.io unchanged\n","stream":"stdout","time":"2020-07-13T23:29:38.1742553Z"} {"log":"serviceaccount/secrets-store-csi-driver unchanged\n","stream":"stdout","time":"2020-07-13T23:29:38.1742595Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/secretproviderclasses-rolebinding unchanged\n","stream":"stdout","time":"2020-07-13T23:29:38.1742637Z"} {"log":"clusterrole.rbac.authorization.k8s.io/secretproviderclasses-role unchanged\n","stream":"stdout","time":"2020-07-13T23:29:38.174268Z"} {"log":"daemonset.apps/csi-secrets-store unchanged\n","stream":"stdout","time":"2020-07-13T23:29:38.1742728Z"} {"log":"INFO: == Kubernetes addon reconcile completed at 2020-07-13T23:29:38+00:00 ==\n","stream":"stdout","time":"2020-07-13T23:29:38.175306511Z"} {"log":"INFO: Leader is k8s-master-89242181-0_45256d2d-b9cd-46ec-bfad-09f3a73b56c2\n","stream":"stdout","time":"2020-07-13T23:30:33.28374376Z"} {"log":"Error from server (Invalid): error when creating \"/etc/kubernetes/addons/coredns.yaml\": Service \"kube-dns\" is invalid: spec.clusterIP: Invalid value: \"10.0.0.10\": provided IP is already allocated\n","stream":"stdout","time":"2020-07-13T23:30:34.851632758Z"} {"log":"unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stdout","time":"2020-07-13T23:30:34.851673158Z"} {"log":"INFO: == Kubernetes addon ensure completed at 2020-07-13T23:30:34+00:00 ==\n","stream":"stdout","time":"2020-07-13T23:30:34.853209175Z"} {"log":"INFO: == Reconciling with deprecated label ==\n","stream":"stdout","time":"2020-07-13T23:30:34.853388577Z"} {"log":"error: unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind 
\"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stderr","time":"2020-07-13T23:30:36.370833625Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-cloud-provider unchanged\n","stream":"stdout","time":"2020-07-13T23:30:36.371981038Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-cloud-provider unchanged\n","stream":"stdout","time":"2020-07-13T23:30:36.371995138Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-persistent-volume-binder unchanged\n","stream":"stdout","time":"2020-07-13T23:30:36.372000238Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-persistent-volume-binder unchanged\n","stream":"stdout","time":"2020-07-13T23:30:36.372005338Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-cloud-provider-secret-getter unchanged\n","stream":"stdout","time":"2020-07-13T23:30:36.372038338Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-cloud-provider-secret-getter unchanged\n","stream":"stdout","time":"2020-07-13T23:30:36.372044338Z"} ... skipping 11 lines ... {"log":"rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged\n","stream":"stdout","time":"2020-07-13T23:30:36.372120239Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator unchanged\n","stream":"stdout","time":"2020-07-13T23:30:36.372123139Z"} {"log":"service/metrics-server unchanged\n","stream":"stdout","time":"2020-07-13T23:30:36.372126239Z"} {"log":"deployment.apps/metrics-server unchanged\n","stream":"stdout","time":"2020-07-13T23:30:36.372129239Z"} {"log":"apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged\n","stream":"stdout","time":"2020-07-13T23:30:36.372132139Z"} {"log":"INFO: == Reconciling with addon-manager label ==\n","stream":"stdout","time":"2020-07-13T23:30:36.372302041Z"} {"log":"error: unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stderr","time":"2020-07-13T23:30:37.973540203Z"} {"log":"daemonset.apps/azure-cni-networkmonitor unchanged\n","stream":"stdout","time":"2020-07-13T23:30:37.974995019Z"} {"log":"podsecuritypolicy.policy/privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:30:37.975007319Z"} {"log":"clusterrole.rbac.authorization.k8s.io/psp:privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:30:37.975010919Z"} {"log":"clusterrole.rbac.authorization.k8s.io/psp:restricted unchanged\n","stream":"stdout","time":"2020-07-13T23:30:37.975014719Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/default:restricted unchanged\n","stream":"stdout","time":"2020-07-13T23:30:37.975017919Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/default:privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:30:37.975021019Z"} {"log":"csidriver.storage.k8s.io/secrets-store.csi.k8s.io unchanged\n","stream":"stdout","time":"2020-07-13T23:30:37.97502432Z"} {"log":"serviceaccount/secrets-store-csi-driver unchanged\n","stream":"stdout","time":"2020-07-13T23:30:37.97502732Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/secretproviderclasses-rolebinding unchanged\n","stream":"stdout","time":"2020-07-13T23:30:37.97503022Z"} {"log":"clusterrole.rbac.authorization.k8s.io/secretproviderclasses-role unchanged\n","stream":"stdout","time":"2020-07-13T23:30:37.97503332Z"} {"log":"daemonset.apps/csi-secrets-store unchanged\n","stream":"stdout","time":"2020-07-13T23:30:37.97503642Z"} {"log":"INFO: == Kubernetes addon reconcile 
completed at 2020-07-13T23:30:37+00:00 ==\n","stream":"stdout","time":"2020-07-13T23:30:37.976618337Z"} {"log":"INFO: Leader is k8s-master-89242181-0_45256d2d-b9cd-46ec-bfad-09f3a73b56c2\n","stream":"stdout","time":"2020-07-13T23:31:34.073743786Z"} {"log":"Error from server (Invalid): error when creating \"/etc/kubernetes/addons/coredns.yaml\": Service \"kube-dns\" is invalid: spec.clusterIP: Invalid value: \"10.0.0.10\": provided IP is already allocated\n","stream":"stdout","time":"2020-07-13T23:31:35.821216654Z"} {"log":"unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stdout","time":"2020-07-13T23:31:35.821237454Z"} {"log":"INFO: == Kubernetes addon ensure completed at 2020-07-13T23:31:35+00:00 ==\n","stream":"stdout","time":"2020-07-13T23:31:35.82266887Z"} {"log":"INFO: == Reconciling with deprecated label ==\n","stream":"stdout","time":"2020-07-13T23:31:35.82272907Z"} {"log":"error: unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stderr","time":"2020-07-13T23:31:37.476888021Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-cloud-provider unchanged\n","stream":"stdout","time":"2020-07-13T23:31:37.478043033Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-cloud-provider unchanged\n","stream":"stdout","time":"2020-07-13T23:31:37.478065133Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-persistent-volume-binder unchanged\n","stream":"stdout","time":"2020-07-13T23:31:37.478070134Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-persistent-volume-binder unchanged\n","stream":"stdout","time":"2020-07-13T23:31:37.478074334Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-cloud-provider-secret-getter unchanged\n","stream":"stdout","time":"2020-07-13T23:31:37.478078834Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-cloud-provider-secret-getter unchanged\n","stream":"stdout","time":"2020-07-13T23:31:37.478083334Z"} ... skipping 11 lines ... 
{"log":"rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged\n","stream":"stdout","time":"2020-07-13T23:31:37.478133734Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator unchanged\n","stream":"stdout","time":"2020-07-13T23:31:37.478156334Z"} {"log":"service/metrics-server unchanged\n","stream":"stdout","time":"2020-07-13T23:31:37.478162735Z"} {"log":"deployment.apps/metrics-server unchanged\n","stream":"stdout","time":"2020-07-13T23:31:37.478166835Z"} {"log":"apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged\n","stream":"stdout","time":"2020-07-13T23:31:37.478170835Z"} {"log":"INFO: == Reconciling with addon-manager label ==\n","stream":"stdout","time":"2020-07-13T23:31:37.478449638Z"} {"log":"error: unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stderr","time":"2020-07-13T23:31:39.155478738Z"} {"log":"daemonset.apps/azure-cni-networkmonitor unchanged\n","stream":"stdout","time":"2020-07-13T23:31:39.156973154Z"} {"log":"podsecuritypolicy.policy/privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:31:39.156986054Z"} {"log":"clusterrole.rbac.authorization.k8s.io/psp:privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:31:39.156990654Z"} {"log":"clusterrole.rbac.authorization.k8s.io/psp:restricted unchanged\n","stream":"stdout","time":"2020-07-13T23:31:39.156994354Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/default:restricted unchanged\n","stream":"stdout","time":"2020-07-13T23:31:39.156998654Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/default:privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:31:39.157002754Z"} {"log":"csidriver.storage.k8s.io/secrets-store.csi.k8s.io unchanged\n","stream":"stdout","time":"2020-07-13T23:31:39.157006754Z"} {"log":"serviceaccount/secrets-store-csi-driver unchanged\n","stream":"stdout","time":"2020-07-13T23:31:39.157010954Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/secretproviderclasses-rolebinding unchanged\n","stream":"stdout","time":"2020-07-13T23:31:39.157034255Z"} {"log":"clusterrole.rbac.authorization.k8s.io/secretproviderclasses-role unchanged\n","stream":"stdout","time":"2020-07-13T23:31:39.157039555Z"} {"log":"daemonset.apps/csi-secrets-store unchanged\n","stream":"stdout","time":"2020-07-13T23:31:39.157135756Z"} {"log":"INFO: == Kubernetes addon reconcile completed at 2020-07-13T23:31:39+00:00 ==\n","stream":"stdout","time":"2020-07-13T23:31:39.15936078Z"} {"log":"INFO: Leader is k8s-master-89242181-0_45256d2d-b9cd-46ec-bfad-09f3a73b56c2\n","stream":"stdout","time":"2020-07-13T23:32:33.247628836Z"} {"log":"Error from server (Invalid): error when creating \"/etc/kubernetes/addons/coredns.yaml\": Service \"kube-dns\" is invalid: spec.clusterIP: Invalid value: \"10.0.0.10\": provided IP is already allocated\n","stream":"stdout","time":"2020-07-13T23:32:34.882602085Z"} {"log":"unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stdout","time":"2020-07-13T23:32:34.882621385Z"} {"log":"INFO: == Kubernetes addon ensure completed at 2020-07-13T23:32:34+00:00 ==\n","stream":"stdout","time":"2020-07-13T23:32:34.884058001Z"} {"log":"INFO: == Reconciling with deprecated label ==\n","stream":"stdout","time":"2020-07-13T23:32:34.884178302Z"} {"log":"error: unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches 
for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stderr","time":"2020-07-13T23:32:36.553907031Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-cloud-provider unchanged\n","stream":"stdout","time":"2020-07-13T23:32:36.555033343Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-cloud-provider unchanged\n","stream":"stdout","time":"2020-07-13T23:32:36.555048044Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-persistent-volume-binder unchanged\n","stream":"stdout","time":"2020-07-13T23:32:36.555053444Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-persistent-volume-binder unchanged\n","stream":"stdout","time":"2020-07-13T23:32:36.555058144Z"} {"log":"clusterrole.rbac.authorization.k8s.io/system:azure-cloud-provider-secret-getter unchanged\n","stream":"stdout","time":"2020-07-13T23:32:36.555077044Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/system:azure-cloud-provider-secret-getter unchanged\n","stream":"stdout","time":"2020-07-13T23:32:36.555082644Z"} ... skipping 11 lines ... {"log":"rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged\n","stream":"stdout","time":"2020-07-13T23:32:36.555143345Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator unchanged\n","stream":"stdout","time":"2020-07-13T23:32:36.555155945Z"} {"log":"service/metrics-server unchanged\n","stream":"stdout","time":"2020-07-13T23:32:36.555160745Z"} {"log":"deployment.apps/metrics-server unchanged\n","stream":"stdout","time":"2020-07-13T23:32:36.555165045Z"} {"log":"apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged\n","stream":"stdout","time":"2020-07-13T23:32:36.555169445Z"} {"log":"INFO: == Reconciling with addon-manager label ==\n","stream":"stdout","time":"2020-07-13T23:32:36.555351547Z"} {"log":"error: unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stderr","time":"2020-07-13T23:32:38.327693096Z"} {"log":"daemonset.apps/azure-cni-networkmonitor unchanged\n","stream":"stdout","time":"2020-07-13T23:32:38.328871309Z"} {"log":"podsecuritypolicy.policy/privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:32:38.328883709Z"} {"log":"clusterrole.rbac.authorization.k8s.io/psp:privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:32:38.328887509Z"} {"log":"clusterrole.rbac.authorization.k8s.io/psp:restricted unchanged\n","stream":"stdout","time":"2020-07-13T23:32:38.328890909Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/default:restricted unchanged\n","stream":"stdout","time":"2020-07-13T23:32:38.328894009Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/default:privileged unchanged\n","stream":"stdout","time":"2020-07-13T23:32:38.328897209Z"} {"log":"csidriver.storage.k8s.io/secrets-store.csi.k8s.io unchanged\n","stream":"stdout","time":"2020-07-13T23:32:38.328900309Z"} {"log":"serviceaccount/secrets-store-csi-driver unchanged\n","stream":"stdout","time":"2020-07-13T23:32:38.328903309Z"} {"log":"clusterrolebinding.rbac.authorization.k8s.io/secretproviderclasses-rolebinding unchanged\n","stream":"stdout","time":"2020-07-13T23:32:38.328906209Z"} {"log":"clusterrole.rbac.authorization.k8s.io/secretproviderclasses-role unchanged\n","stream":"stdout","time":"2020-07-13T23:32:38.328909209Z"} {"log":"daemonset.apps/csi-secrets-store unchanged\n","stream":"stdout","time":"2020-07-13T23:32:38.328912209Z"} {"log":"INFO: == Kubernetes 
addon reconcile completed at 2020-07-13T23:32:38+00:00 ==\n","stream":"stdout","time":"2020-07-13T23:32:38.330926031Z"} {"log":"INFO: Leader is k8s-master-89242181-0_45256d2d-b9cd-46ec-bfad-09f3a73b56c2\n","stream":"stdout","time":"2020-07-13T23:33:33.539151069Z"} {"log":"Error from server (Invalid): error when creating \"/etc/kubernetes/addons/coredns.yaml\": Service \"kube-dns\" is invalid: spec.clusterIP: Invalid value: \"10.0.0.10\": provided IP is already allocated\n","stream":"stdout","time":"2020-07-13T23:33:35.516354162Z"} {"log":"unable to recognize \"/etc/kubernetes/addons/audit-policy.yaml\": no matches for kind \"Policy\" in version \"audit.k8s.io/v1\"\n","stream":"stdout","time":"2020-07-13T23:33:35.516378962Z"} {"log":"INFO: == Kubernetes addon ensure completed at 2020-07-13T23:33:35+00:00 ==\n","stream":"stdout","time":"2020-07-13T23:33:35.517766978Z"} {"log":"INFO: == Reconciling with deprecated label ==\n","stream":"stdout","time":"2020-07-13T23:33:35.517781078Z"} Dumping kube-apiserver-k8s-master-89242181-0_kube-system_kube-apiserver-d62f93f8a625d816a62023c8c79a8316e007dc0c5d80dd81614a1f8e407ec38b.log {"log":"Flag --insecure-port has been deprecated, This flag will be removed in a future version.\n","stream":"stderr","time":"2020-07-13T23:22:09.443133133Z"} {"log":"I0713 23:22:09.443109 1 flags.go:59] FLAG: --add-dir-header=\"false\"\n","stream":"stderr","time":"2020-07-13T23:22:09.443227733Z"} ... skipping 202 lines ... {"log":"Trace[588598594]: ---\"Resource version extracted\" 0ms (23:22:00.974)\n","stream":"stderr","time":"2020-07-13T23:22:09.980572033Z"} {"log":"Trace[588598594]: ---\"Objects extracted\" 5ms (23:22:00.980)\n","stream":"stderr","time":"2020-07-13T23:22:09.980577133Z"} {"log":"Trace[588598594]: ---\"SyncWith done\" 0ms (23:22:00.980)\n","stream":"stderr","time":"2020-07-13T23:22:09.980581633Z"} {"log":"Trace[588598594]: ---\"Resource version updated\" 0ms (23:22:00.980)\n","stream":"stderr","time":"2020-07-13T23:22:09.980585733Z"} {"log":"Trace[588598594]: [12.5344ms] [12.5344ms] END\n","stream":"stderr","time":"2020-07-13T23:22:09.980590533Z"} {"log":"I0713 23:22:10.007957 1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc000c8ac10, {READY \u003cnil\u003e}\n","stream":"stderr","time":"2020-07-13T23:22:10.008137533Z"} {"log":"I0713 23:22:10.009125 1 controlbuf.go:508] transport: loopyWriter.run returning. 
connection error: desc = \"transport is closing\"\n","stream":"stderr","time":"2020-07-13T23:22:10.009267833Z"} {"log":"I0713 23:22:10.020649 1 store.go:1378] Monitoring customresourcedefinitions.apiextensions.k8s.io count at \u003cstorage-prefix\u003e//apiextensions.k8s.io/customresourcedefinitions\n","stream":"stderr","time":"2020-07-13T23:22:10.033091433Z"} {"log":"I0713 23:22:10.023224 1 reflector.go:243] Listing and watching *apiextensions.CustomResourceDefinition from storage/cacher.go:/apiextensions.k8s.io/customresourcedefinitions\n","stream":"stderr","time":"2020-07-13T23:22:10.033107133Z"} {"log":"I0713 23:22:10.036774 1 trace.go:201] Trace[683732654]: \"List etcd3\" key:/apiextensions.k8s.io/customresourcedefinitions,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (13-Jul-2020 23:22:00.023) (total time: 13ms):\n","stream":"stderr","time":"2020-07-13T23:22:10.036867033Z"} {"log":"Trace[683732654]: [13.4936ms] [13.4936ms] END\n","stream":"stderr","time":"2020-07-13T23:22:10.036881433Z"} {"log":"I0713 23:22:10.036824 1 cacher.go:403] cacher (*apiextensions.CustomResourceDefinition): initialized\n","stream":"stderr","time":"2020-07-13T23:22:10.036886533Z"} {"log":"I0713 23:22:10.036837 1 watch_cache.go:521] Replace watchCache (rev: 6) \n","stream":"stderr","time":"2020-07-13T23:22:10.036917733Z"} ... skipping 1967 lines ... {"log":"I0713 23:22:15.339311 1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/api/v1/services\" latency=\"780.4µs\" userAgent=\"kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"[::1]:38174\" resp=200\n","stream":"stderr","time":"2020-07-13T23:22:15.339697233Z"} {"log":"I0713 23:22:15.340778 1 trace.go:201] Trace[1192121050]: \"GuaranteedUpdate etcd3\" type:*core.RangeAllocation (13-Jul-2020 23:22:00.339) (total time: 1ms):\n","stream":"stderr","time":"2020-07-13T23:22:15.341257833Z"} {"log":"Trace[1192121050]: ---\"initial value restored\" 0ms (23:22:00.340)\n","stream":"stderr","time":"2020-07-13T23:22:15.341270633Z"} {"log":"Trace[1192121050]: ---\"Transaction prepared\" 0ms (23:22:00.340)\n","stream":"stderr","time":"2020-07-13T23:22:15.341276133Z"} {"log":"Trace[1192121050]: ---\"Transaction committed\" 0ms (23:22:00.340)\n","stream":"stderr","time":"2020-07-13T23:22:15.341280933Z"} {"log":"Trace[1192121050]: [1.2576ms] [1.2576ms] END\n","stream":"stderr","time":"2020-07-13T23:22:15.341285733Z"} {"log":"I0713 23:22:15.344481 1 healthz.go:239] healthz check failed: poststarthook/start-apiextensions-controllers,poststarthook/crd-informer-synced,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes,poststarthook/apiservice-registration-controller,autoregister-completion\n","stream":"stderr","time":"2020-07-13T23:22:15.346172533Z"} {"log":"[-]poststarthook/start-apiextensions-controllers failed: not finished\n","stream":"stderr","time":"2020-07-13T23:22:15.346185633Z"} {"log":"[-]poststarthook/crd-informer-synced failed: not finished\n","stream":"stderr","time":"2020-07-13T23:22:15.346190733Z"} {"log":"[-]poststarthook/rbac/bootstrap-roles failed: not finished\n","stream":"stderr","time":"2020-07-13T23:22:15.346195033Z"} {"log":"[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished\n","stream":"stderr","time":"2020-07-13T23:22:15.346207033Z"} {"log":"[-]poststarthook/apiservice-registration-controller failed: not finished\n","stream":"stderr","time":"2020-07-13T23:22:15.346211333Z"} {"log":"[-]autoregister-completion failed: missing APIService: [v1. 
v1.admissionregistration.k8s.io v1.apiextensions.k8s.io v1.apps v1.authentication.k8s.io v1.authorization.k8s.io v1.autoscaling v1.batch v1.certificates.k8s.io v1.coordination.k8s.io v1.events.k8s.io v1.networking.k8s.io v1.rbac.authorization.k8s.io v1.scheduling.k8s.io v1.storage.k8s.io v1beta1.admissionregistration.k8s.io v1beta1.apiextensions.k8s.io v1beta1.authentication.k8s.io v1beta1.authorization.k8s.io v1beta1.batch v1beta1.certificates.k8s.io v1beta1.coordination.k8s.io v1beta1.discovery.k8s.io v1beta1.events.k8s.io v1beta1.extensions v1beta1.networking.k8s.io v1beta1.node.k8s.io v1beta1.policy v1beta1.rbac.authorization.k8s.io v1beta1.scheduling.k8s.io v1beta1.storage.k8s.io v2beta1.autoscaling v2beta2.autoscaling]\n","stream":"stderr","time":"2020-07-13T23:22:15.346216433Z"} {"log":"I0713 23:22:15.344525 1 trace.go:201] Trace[1744253503]: \"HTTP Request\" method:GET,url:/api/v1/namespaces/kube-system,verb:get,name:kube-system,resource:namespaces,subresource:,namespace:kube-system,api-group:,api-version:v1,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.341) (total time: 3ms):\n","stream":"stderr","time":"2020-07-13T23:22:15.346223033Z"} {"log":"Trace[1744253503]: ---\"Authenticate check done\" 0ms (23:22:00.341)\n","stream":"stderr","time":"2020-07-13T23:22:15.346228433Z"} {"log":"Trace[1744253503]: ---\"Authorize check done\" 0ms (23:22:00.341)\n","stream":"stderr","time":"2020-07-13T23:22:15.346239833Z"} {"log":"Trace[1744253503]: [\"Get\" url:/api/v1/namespaces/kube-system,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 3ms (23:22:00.341)\n","stream":"stderr","time":"2020-07-13T23:22:15.346244933Z"} {"log":"Trace[1744253503]: ---\"About to Get from storage\" 0ms (23:22:00.341)]\n","stream":"stderr","time":"2020-07-13T23:22:15.346249533Z"} {"log":"Trace[1744253503]: [3.4179ms] [3.4179ms] END\n","stream":"stderr","time":"2020-07-13T23:22:15.346253833Z"} ... skipping 486 lines ... 
{"log":"Trace[1396739446]: ---\"Authenticate check done\" 0ms (23:22:00.066)\n","stream":"stderr","time":"2020-07-13T23:22:16.722506333Z"} {"log":"Trace[1396739446]: ---\"Authorize check done\" 0ms (23:22:00.066)\n","stream":"stderr","time":"2020-07-13T23:22:16.722510433Z"} {"log":"Trace[1396739446]: [\"Get\" url:/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 631ms (23:22:00.067)\n","stream":"stderr","time":"2020-07-13T23:22:16.722514733Z"} {"log":"Trace[1396739446]: ---\"About to Get from storage\" 0ms (23:22:00.067)]\n","stream":"stderr","time":"2020-07-13T23:22:16.722519733Z"} {"log":"Trace[1396739446]: [632.0408ms] [632.0408ms] END\n","stream":"stderr","time":"2020-07-13T23:22:16.722524233Z"} {"log":"I0713 23:22:16.699016 1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical\" latency=\"632.1224ms\" userAgent=\"kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"[::1]:38174\" resp=404\n","stream":"stderr","time":"2020-07-13T23:22:16.722528633Z"} {"log":"I0713 23:22:16.699664 1 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes,autoregister-completion\n","stream":"stderr","time":"2020-07-13T23:22:16.722533933Z"} {"log":"[-]poststarthook/rbac/bootstrap-roles failed: not finished\n","stream":"stderr","time":"2020-07-13T23:22:16.722538633Z"} {"log":"[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished\n","stream":"stderr","time":"2020-07-13T23:22:16.722542833Z"} {"log":"[-]autoregister-completion failed: missing APIService: [v1.networking.k8s.io v1.rbac.authorization.k8s.io v1.scheduling.k8s.io v1.storage.k8s.io v1beta1.coordination.k8s.io v1beta1.discovery.k8s.io v1beta1.events.k8s.io v1beta1.extensions v1beta1.networking.k8s.io v1beta1.node.k8s.io v1beta1.policy v1beta1.rbac.authorization.k8s.io v1beta1.scheduling.k8s.io v1beta1.storage.k8s.io]\n","stream":"stderr","time":"2020-07-13T23:22:16.722546933Z"} {"log":"I0713 23:22:16.699973 1 trace.go:201] Trace[902172329]: \"HTTP Request\" method:GET,url:/healthz,verb:get,name:,resource:,subresource:,namespace:,api-group:,api-version:,user-agent:kube-controller-manager/v1.20.0 (linux/amd64) kubernetes/240a72b/shared-informers,client:10.255.255.5 (13-Jul-2020 23:22:00.443) (total time: 1256ms):\n","stream":"stderr","time":"2020-07-13T23:22:16.722550833Z"} {"log":"Trace[902172329]: ---\"Authenticate check done\" 0ms (23:22:00.443)\n","stream":"stderr","time":"2020-07-13T23:22:16.722554633Z"} {"log":"Trace[902172329]: ---\"Authorize check done\" 0ms (23:22:00.443)\n","stream":"stderr","time":"2020-07-13T23:22:16.722557633Z"} {"log":"Trace[902172329]: [1.256406s] [1.256406s] END\n","stream":"stderr","time":"2020-07-13T23:22:16.722560633Z"} {"log":"I0713 23:22:16.700003 1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/healthz?timeout=32s\" latency=\"1.256565s\" userAgent=\"kube-controller-manager/v1.20.0 (linux/amd64) kubernetes/240a72b/shared-informers\" srcIP=\"10.255.255.5:57168\" resp=0\n","stream":"stderr","time":"2020-07-13T23:22:16.722564133Z"} {"log":"I0713 23:22:16.701044 1 trace.go:201] Trace[187818180]: \"HTTP Request\" method:POST,url:/api/v1/namespaces/kube-system/events,verb:create,name:,resource:events,subresource:,namespace:kube-system,api-group:,api-version:v1,user-agent:kube-controller-manager/v1.20.0 (linux/amd64) 
kubernetes/240a72b/kube-controller-manager,client:10.255.255.5 (13-Jul-2020 23:22:00.430) (total time: 1270ms):\n","stream":"stderr","time":"2020-07-13T23:22:16.722567933Z"} ... skipping 52 lines ... {"log":"Trace[1172119753]: [\"Get\" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.20.0 (linux/amd64) kubernetes/240a72b/leader-election,client:10.255.255.5 1272ms (23:22:00.430)\n","stream":"stderr","time":"2020-07-13T23:22:16.722746633Z"} {"log":"Trace[1172119753]: ---\"About to Get from storage\" 0ms (23:22:00.430)\n","stream":"stderr","time":"2020-07-13T23:22:16.722750333Z"} {"log":"Trace[1172119753]: ---\"About to write a response\" 1272ms (23:22:00.702)\n","stream":"stderr","time":"2020-07-13T23:22:16.722753633Z"} {"log":"Trace[1172119753]: ---\"Transformed response object\" 0ms (23:22:00.702)]\n","stream":"stderr","time":"2020-07-13T23:22:16.722756733Z"} {"log":"Trace[1172119753]: [1.2787086s] [1.2787086s] END\n","stream":"stderr","time":"2020-07-13T23:22:16.722759733Z"} {"log":"I0713 23:22:16.702521 1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s\" latency=\"1.2787684s\" userAgent=\"kube-controller-manager/v1.20.0 (linux/amd64) kubernetes/240a72b/leader-election\" srcIP=\"10.255.255.5:57168\" resp=200\n","stream":"stderr","time":"2020-07-13T23:22:16.722762733Z"} {"log":"I0713 23:22:16.702581 1 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes,autoregister-completion\n","stream":"stderr","time":"2020-07-13T23:22:16.722766633Z"} {"log":"[-]poststarthook/rbac/bootstrap-roles failed: not finished\n","stream":"stderr","time":"2020-07-13T23:22:16.722769933Z"} {"log":"[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished\n","stream":"stderr","time":"2020-07-13T23:22:16.722772833Z"} {"log":"[-]autoregister-completion failed: missing APIService: [v1.networking.k8s.io v1.rbac.authorization.k8s.io v1.scheduling.k8s.io v1.storage.k8s.io v1beta1.coordination.k8s.io v1beta1.discovery.k8s.io v1beta1.events.k8s.io v1beta1.extensions v1beta1.networking.k8s.io v1beta1.node.k8s.io v1beta1.policy v1beta1.rbac.authorization.k8s.io v1beta1.scheduling.k8s.io v1beta1.storage.k8s.io]\n","stream":"stderr","time":"2020-07-13T23:22:16.722778433Z"} {"log":"I0713 23:22:16.702664 1 trace.go:201] Trace[2054898495]: \"HTTP Request\" method:GET,url:/healthz,verb:get,name:,resource:,subresource:,namespace:,api-group:,api-version:,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.445) (total time: 1257ms):\n","stream":"stderr","time":"2020-07-13T23:22:16.722782233Z"} {"log":"Trace[2054898495]: ---\"Authenticate check done\" 0ms (23:22:00.445)\n","stream":"stderr","time":"2020-07-13T23:22:16.722785833Z"} {"log":"Trace[2054898495]: ---\"Authorize check done\" 0ms (23:22:00.445)\n","stream":"stderr","time":"2020-07-13T23:22:16.722788833Z"} {"log":"Trace[2054898495]: [1.2575059s] [1.2575059s] END\n","stream":"stderr","time":"2020-07-13T23:22:16.722791733Z"} {"log":"I0713 23:22:16.703013 1 cacher.go:780] cacher (*apiregistration.APIService): 1 objects queued in incoming channel.\n","stream":"stderr","time":"2020-07-13T23:22:16.722794633Z"} {"log":"I0713 23:22:16.704455 1 available_controller.go:445] Adding v1beta1.coordination.k8s.io\n","stream":"stderr","time":"2020-07-13T23:22:16.722797733Z"} ... skipping 207 lines ... 
{"log":"Trace[850104165]: ---\"initial value restored\" 0ms (23:22:00.734)\n","stream":"stderr","time":"2020-07-13T23:22:16.765303933Z"} {"log":"Trace[850104165]: ---\"Transaction prepared\" 0ms (23:22:00.735)\n","stream":"stderr","time":"2020-07-13T23:22:16.765308333Z"} {"log":"Trace[850104165]: ---\"Transaction committed\" 26ms (23:22:00.762)]\n","stream":"stderr","time":"2020-07-13T23:22:16.765312933Z"} {"log":"Trace[850104165]: ---\"Object stored in database\" 0ms (23:22:00.762)]\n","stream":"stderr","time":"2020-07-13T23:22:16.765317333Z"} {"log":"Trace[850104165]: [33.6957ms] [33.6957ms] END\n","stream":"stderr","time":"2020-07-13T23:22:16.765322033Z"} {"log":"I0713 23:22:16.765370 1 httplog.go:89] \"HTTP\" verb=\"PUT\" URI=\"/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s\" latency=\"34.0968ms\" userAgent=\"kube-controller-manager/v1.20.0 (linux/amd64) kubernetes/240a72b/leader-election\" srcIP=\"10.255.255.5:57168\" resp=200\n","stream":"stderr","time":"2020-07-13T23:22:16.765425733Z"} {"log":"I0713 23:22:16.766134 1 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes,autoregister-completion\n","stream":"stderr","time":"2020-07-13T23:22:16.766222833Z"} {"log":"[-]poststarthook/rbac/bootstrap-roles failed: not finished\n","stream":"stderr","time":"2020-07-13T23:22:16.766235633Z"} {"log":"[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished\n","stream":"stderr","time":"2020-07-13T23:22:16.766240833Z"} {"log":"[-]autoregister-completion failed: missing APIService: [v1beta1.storage.k8s.io]\n","stream":"stderr","time":"2020-07-13T23:22:16.766245533Z"} {"log":"I0713 23:22:16.766383 1 trace.go:201] Trace[566633860]: \"HTTP Request\" method:GET,url:/healthz,verb:get,name:,resource:,subresource:,namespace:,api-group:,api-version:,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.756) (total time: 10ms):\n","stream":"stderr","time":"2020-07-13T23:22:16.766480433Z"} {"log":"Trace[566633860]: ---\"Authenticate check done\" 0ms (23:22:00.756)\n","stream":"stderr","time":"2020-07-13T23:22:16.766491733Z"} {"log":"Trace[566633860]: ---\"Authorize check done\" 0ms (23:22:00.756)\n","stream":"stderr","time":"2020-07-13T23:22:16.766496833Z"} {"log":"Trace[566633860]: [10.1992ms] [10.1992ms] END\n","stream":"stderr","time":"2020-07-13T23:22:16.766501133Z"} {"log":"I0713 23:22:16.766463 1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/healthz\" latency=\"10.3208ms\" userAgent=\"kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"[::1]:38174\" resp=0\n","stream":"stderr","time":"2020-07-13T23:22:16.766524133Z"} {"log":"I0713 23:22:16.767066 1 trace.go:201] Trace[436934477]: \"HTTP Request\" method:GET,url:/api/v1/namespaces/default,verb:get,name:default,resource:namespaces,subresource:,namespace:default,api-group:,api-version:v1,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.756) (total time: 10ms):\n","stream":"stderr","time":"2020-07-13T23:22:16.767131833Z"} ... skipping 276 lines ... 
{"log":"Trace[746091186]: ---\"Conversion done\" 0ms (23:22:00.857)\n","stream":"stderr","time":"2020-07-13T23:22:16.859603633Z"} {"log":"Trace[746091186]: ---\"About to store object in database\" 0ms (23:22:00.857)\n","stream":"stderr","time":"2020-07-13T23:22:16.859608133Z"} {"log":"Trace[746091186]: ---\"Object stored in database\" 1ms (23:22:00.859)]\n","stream":"stderr","time":"2020-07-13T23:22:16.859612633Z"} {"log":"Trace[746091186]: [1.6428ms] [1.6428ms] END\n","stream":"stderr","time":"2020-07-13T23:22:16.859617333Z"} {"log":"I0713 23:22:16.859645 1 httplog.go:89] \"HTTP\" verb=\"POST\" URI=\"/apis/rbac.authorization.k8s.io/v1/clusterroles\" latency=\"1.8729ms\" userAgent=\"kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"[::1]:38174\" resp=201\n","stream":"stderr","time":"2020-07-13T23:22:16.859703133Z"} {"log":"I0713 23:22:16.860087 1 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer\n","stream":"stderr","time":"2020-07-13T23:22:16.860167333Z"} {"log":"I0713 23:22:16.862847 1 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles\n","stream":"stderr","time":"2020-07-13T23:22:16.862934833Z"} {"log":"[-]poststarthook/rbac/bootstrap-roles failed: not finished\n","stream":"stderr","time":"2020-07-13T23:22:16.863188433Z"} {"log":"I0713 23:22:16.863434 1 trace.go:201] Trace[381522339]: \"HTTP Request\" method:GET,url:/healthz,verb:get,name:,resource:,subresource:,namespace:,api-group:,api-version:,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.859) (total time: 3ms):\n","stream":"stderr","time":"2020-07-13T23:22:16.863523433Z"} {"log":"Trace[381522339]: ---\"Authenticate check done\" 0ms (23:22:00.859)\n","stream":"stderr","time":"2020-07-13T23:22:16.863534933Z"} {"log":"Trace[381522339]: ---\"Authorize check done\" 0ms (23:22:00.859)\n","stream":"stderr","time":"2020-07-13T23:22:16.863540133Z"} {"log":"Trace[381522339]: [3.579ms] [3.579ms] END\n","stream":"stderr","time":"2020-07-13T23:22:16.863570333Z"} {"log":"I0713 23:22:16.863479 1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/healthz\" latency=\"3.6344ms\" userAgent=\"kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"[::1]:38174\" resp=0\n","stream":"stderr","time":"2020-07-13T23:22:16.863576333Z"} {"log":"I0713 23:22:16.863658 1 trace.go:201] Trace[1477810790]: \"HTTP Request\" method:GET,url:/apis/rbac.authorization.k8s.io/v1/clusterroles/admin,verb:get,name:admin,resource:clusterroles,subresource:,namespace:,api-group:rbac.authorization.k8s.io,api-version:v1,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.860) (total time: 3ms):\n","stream":"stderr","time":"2020-07-13T23:22:16.863731333Z"} ... skipping 127 lines ... 
{"log":"Trace[1318242391]: ---\"Conversion done\" 0ms (23:22:00.938)\n","stream":"stderr","time":"2020-07-13T23:22:16.940099233Z"} {"log":"Trace[1318242391]: ---\"About to store object in database\" 0ms (23:22:00.938)\n","stream":"stderr","time":"2020-07-13T23:22:16.940103933Z"} {"log":"Trace[1318242391]: ---\"Object stored in database\" 0ms (23:22:00.939)]\n","stream":"stderr","time":"2020-07-13T23:22:16.940108233Z"} {"log":"Trace[1318242391]: [1.0984ms] [1.0984ms] END\n","stream":"stderr","time":"2020-07-13T23:22:16.940112733Z"} {"log":"I0713 23:22:16.939961 1 httplog.go:89] \"HTTP\" verb=\"POST\" URI=\"/apis/rbac.authorization.k8s.io/v1/clusterroles\" latency=\"1.1505ms\" userAgent=\"kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"[::1]:38174\" resp=201\n","stream":"stderr","time":"2020-07-13T23:22:16.940116733Z"} {"log":"I0713 23:22:16.940358 1 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view\n","stream":"stderr","time":"2020-07-13T23:22:16.940467233Z"} {"log":"I0713 23:22:16.952359 1 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles\n","stream":"stderr","time":"2020-07-13T23:22:16.953482633Z"} {"log":"[-]poststarthook/rbac/bootstrap-roles failed: not finished\n","stream":"stderr","time":"2020-07-13T23:22:16.953527233Z"} {"log":"I0713 23:22:16.952545 1 trace.go:201] Trace[1349207433]: \"HTTP Request\" method:GET,url:/apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster,verb:get,name:system:heapster,resource:clusterroles,subresource:,namespace:,api-group:rbac.authorization.k8s.io,api-version:v1,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.940) (total time: 11ms):\n","stream":"stderr","time":"2020-07-13T23:22:16.953534033Z"} {"log":"Trace[1349207433]: ---\"Authenticate check done\" 0ms (23:22:00.940)\n","stream":"stderr","time":"2020-07-13T23:22:16.953547933Z"} {"log":"Trace[1349207433]: ---\"Authorize check done\" 0ms (23:22:00.940)\n","stream":"stderr","time":"2020-07-13T23:22:16.953552933Z"} {"log":"Trace[1349207433]: [\"Get\" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 11ms (23:22:00.940)\n","stream":"stderr","time":"2020-07-13T23:22:16.953557433Z"} {"log":"Trace[1349207433]: ---\"About to Get from storage\" 0ms (23:22:00.940)]\n","stream":"stderr","time":"2020-07-13T23:22:16.953562833Z"} {"log":"Trace[1349207433]: [11.973ms] [11.973ms] END\n","stream":"stderr","time":"2020-07-13T23:22:16.953567233Z"} ... skipping 304 lines ... 
{"log":"Trace[1662067152]: ---\"Authenticate check done\" 0ms (23:22:00.038)\n","stream":"stderr","time":"2020-07-13T23:22:17.045729933Z"} {"log":"Trace[1662067152]: ---\"Authorize check done\" 0ms (23:22:00.038)\n","stream":"stderr","time":"2020-07-13T23:22:17.045735433Z"} {"log":"Trace[1662067152]: [\"Get\" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 6ms (23:22:00.038)\n","stream":"stderr","time":"2020-07-13T23:22:17.045740233Z"} {"log":"Trace[1662067152]: ---\"About to Get from storage\" 0ms (23:22:00.038)]\n","stream":"stderr","time":"2020-07-13T23:22:17.045745833Z"} {"log":"Trace[1662067152]: [6.9652ms] [6.9652ms] END\n","stream":"stderr","time":"2020-07-13T23:22:17.045750733Z"} {"log":"I0713 23:22:17.045727 1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner\" latency=\"7.1073ms\" userAgent=\"kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"[::1]:38174\" resp=404\n","stream":"stderr","time":"2020-07-13T23:22:17.046219933Z"} {"log":"I0713 23:22:17.046129 1 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles\n","stream":"stderr","time":"2020-07-13T23:22:17.046234333Z"} {"log":"[-]poststarthook/rbac/bootstrap-roles failed: not finished\n","stream":"stderr","time":"2020-07-13T23:22:17.046239533Z"} {"log":"I0713 23:22:17.046315 1 trace.go:201] Trace[1746597752]: \"HTTP Request\" method:GET,url:/healthz,verb:get,name:,resource:,subresource:,namespace:,api-group:,api-version:,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.045) (total time: 1ms):\n","stream":"stderr","time":"2020-07-13T23:22:17.046366233Z"} {"log":"Trace[1746597752]: ---\"Authenticate check done\" 0ms (23:22:00.045)\n","stream":"stderr","time":"2020-07-13T23:22:17.046376033Z"} {"log":"Trace[1746597752]: ---\"Authorize check done\" 0ms (23:22:00.045)\n","stream":"stderr","time":"2020-07-13T23:22:17.046380833Z"} {"log":"Trace[1746597752]: [1.2669ms] [1.2669ms] END\n","stream":"stderr","time":"2020-07-13T23:22:17.046385333Z"} {"log":"I0713 23:22:17.046343 1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/healthz\" latency=\"1.3082ms\" userAgent=\"kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"[::1]:38174\" resp=0\n","stream":"stderr","time":"2020-07-13T23:22:17.046409233Z"} {"log":"I0713 23:22:17.047870 1 trace.go:201] Trace[74199029]: \"HTTP Request\" method:POST,url:/apis/rbac.authorization.k8s.io/v1/clusterroles,verb:create,name:,resource:clusterroles,subresource:,namespace:,api-group:rbac.authorization.k8s.io,api-version:v1,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.046) (total time: 1ms):\n","stream":"stderr","time":"2020-07-13T23:22:17.047929533Z"} ... skipping 259 lines ... 
{"log":"Trace[2045448437]: ---\"Authenticate check done\" 0ms (23:22:00.141)\n","stream":"stderr","time":"2020-07-13T23:22:17.151436133Z"} {"log":"Trace[2045448437]: ---\"Authorize check done\" 0ms (23:22:00.142)\n","stream":"stderr","time":"2020-07-13T23:22:17.151441333Z"} {"log":"Trace[2045448437]: [\"Get\" url:/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 8ms (23:22:00.142)\n","stream":"stderr","time":"2020-07-13T23:22:17.151472933Z"} {"log":"Trace[2045448437]: ---\"About to Get from storage\" 0ms (23:22:00.142)]\n","stream":"stderr","time":"2020-07-13T23:22:17.151478533Z"} {"log":"Trace[2045448437]: [8.4827ms] [8.4827ms] END\n","stream":"stderr","time":"2020-07-13T23:22:17.151482933Z"} {"log":"I0713 23:22:17.150503 1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller\" latency=\"8.5378ms\" userAgent=\"kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"[::1]:38174\" resp=404\n","stream":"stderr","time":"2020-07-13T23:22:17.151487233Z"} {"log":"I0713 23:22:17.152761 1 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles\n","stream":"stderr","time":"2020-07-13T23:22:17.152848333Z"} {"log":"[-]poststarthook/rbac/bootstrap-roles failed: not finished\n","stream":"stderr","time":"2020-07-13T23:22:17.152859533Z"} {"log":"I0713 23:22:17.152872 1 trace.go:201] Trace[1023942151]: \"HTTP Request\" method:GET,url:/healthz,verb:get,name:,resource:,subresource:,namespace:,api-group:,api-version:,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.145) (total time: 7ms):\n","stream":"stderr","time":"2020-07-13T23:22:17.152895233Z"} {"log":"Trace[1023942151]: ---\"Authenticate check done\" 0ms (23:22:00.145)\n","stream":"stderr","time":"2020-07-13T23:22:17.152902333Z"} {"log":"Trace[1023942151]: ---\"Authorize check done\" 0ms (23:22:00.145)\n","stream":"stderr","time":"2020-07-13T23:22:17.152906933Z"} {"log":"Trace[1023942151]: [7.8306ms] [7.8306ms] END\n","stream":"stderr","time":"2020-07-13T23:22:17.152935933Z"} {"log":"I0713 23:22:17.152888 1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/healthz\" latency=\"7.8773ms\" userAgent=\"kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"[::1]:38174\" resp=0\n","stream":"stderr","time":"2020-07-13T23:22:17.152976133Z"} {"log":"I0713 23:22:17.153277 1 trace.go:201] Trace[1664520624]: \"HTTP Request\" method:POST,url:/apis/rbac.authorization.k8s.io/v1/clusterroles,verb:create,name:,resource:clusterroles,subresource:,namespace:,api-group:rbac.authorization.k8s.io,api-version:v1,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.150) (total time: 2ms):\n","stream":"stderr","time":"2020-07-13T23:22:17.153368733Z"} ... skipping 228 lines ... 
{"log":"Trace[980719915]: ---\"Conversion done\" 0ms (23:22:00.232)\n","stream":"stderr","time":"2020-07-13T23:22:17.248159633Z"} {"log":"Trace[980719915]: ---\"About to store object in database\" 0ms (23:22:00.232)\n","stream":"stderr","time":"2020-07-13T23:22:17.248163933Z"} {"log":"Trace[980719915]: ---\"Object stored in database\" 14ms (23:22:00.247)]\n","stream":"stderr","time":"2020-07-13T23:22:17.248190433Z"} {"log":"Trace[980719915]: [15.1376ms] [15.1376ms] END\n","stream":"stderr","time":"2020-07-13T23:22:17.248195933Z"} {"log":"I0713 23:22:17.248001 1 httplog.go:89] \"HTTP\" verb=\"POST\" URI=\"/apis/rbac.authorization.k8s.io/v1/clusterroles\" latency=\"15.2159ms\" userAgent=\"kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"[::1]:38174\" resp=201\n","stream":"stderr","time":"2020-07-13T23:22:17.248200233Z"} {"log":"I0713 23:22:17.248198 1 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller\n","stream":"stderr","time":"2020-07-13T23:22:17.248246533Z"} {"log":"I0713 23:22:17.248728 1 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles\n","stream":"stderr","time":"2020-07-13T23:22:17.248870333Z"} {"log":"[-]poststarthook/rbac/bootstrap-roles failed: not finished\n","stream":"stderr","time":"2020-07-13T23:22:17.248970333Z"} {"log":"I0713 23:22:17.248995 1 trace.go:201] Trace[1276082215]: \"HTTP Request\" method:GET,url:/healthz,verb:get,name:,resource:,subresource:,namespace:,api-group:,api-version:,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.245) (total time: 3ms):\n","stream":"stderr","time":"2020-07-13T23:22:17.249060833Z"} {"log":"Trace[1276082215]: ---\"Authenticate check done\" 0ms (23:22:00.245)\n","stream":"stderr","time":"2020-07-13T23:22:17.249069833Z"} {"log":"Trace[1276082215]: ---\"Authorize check done\" 0ms (23:22:00.245)\n","stream":"stderr","time":"2020-07-13T23:22:17.249074633Z"} {"log":"Trace[1276082215]: [3.9387ms] [3.9387ms] END\n","stream":"stderr","time":"2020-07-13T23:22:17.249079033Z"} {"log":"I0713 23:22:17.249028 1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/healthz\" latency=\"3.9872ms\" userAgent=\"kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"[::1]:38174\" resp=0\n","stream":"stderr","time":"2020-07-13T23:22:17.249239633Z"} {"log":"I0713 23:22:17.249658 1 trace.go:201] Trace[1165735423]: \"HTTP Request\" method:GET,url:/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder,verb:get,name:system:controller:persistent-volume-binder,resource:clusterroles,subresource:,namespace:,api-group:rbac.authorization.k8s.io,api-version:v1,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.248) (total time: 1ms):\n","stream":"stderr","time":"2020-07-13T23:22:17.249738033Z"} ... skipping 201 lines ... 
{"log":"Trace[1985442391]: ---\"Conversion done\" 0ms (23:22:00.340)\n","stream":"stderr","time":"2020-07-13T23:22:17.344114833Z"} {"log":"Trace[1985442391]: ---\"About to store object in database\" 0ms (23:22:00.340)\n","stream":"stderr","time":"2020-07-13T23:22:17.344118733Z"} {"log":"Trace[1985442391]: ---\"Object stored in database\" 2ms (23:22:00.343)]\n","stream":"stderr","time":"2020-07-13T23:22:17.344122933Z"} {"log":"Trace[1985442391]: [2.7162ms] [2.7162ms] END\n","stream":"stderr","time":"2020-07-13T23:22:17.344127533Z"} {"log":"I0713 23:22:17.343372 1 httplog.go:89] \"HTTP\" verb=\"POST\" URI=\"/apis/rbac.authorization.k8s.io/v1/clusterroles\" latency=\"2.7976ms\" userAgent=\"kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"[::1]:38174\" resp=201\n","stream":"stderr","time":"2020-07-13T23:22:17.344131233Z"} {"log":"I0713 23:22:17.343571 1 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller\n","stream":"stderr","time":"2020-07-13T23:22:17.344136633Z"} {"log":"I0713 23:22:17.346467 1 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles\n","stream":"stderr","time":"2020-07-13T23:22:17.347328233Z"} {"log":"[-]poststarthook/rbac/bootstrap-roles failed: not finished\n","stream":"stderr","time":"2020-07-13T23:22:17.347398933Z"} {"log":"I0713 23:22:17.346602 1 trace.go:201] Trace[1333022391]: \"HTTP Request\" method:GET,url:/healthz,verb:get,name:,resource:,subresource:,namespace:,api-group:,api-version:,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.345) (total time: 1ms):\n","stream":"stderr","time":"2020-07-13T23:22:17.347405633Z"} {"log":"Trace[1333022391]: ---\"Authenticate check done\" 0ms (23:22:00.345)\n","stream":"stderr","time":"2020-07-13T23:22:17.347411833Z"} {"log":"Trace[1333022391]: ---\"Authorize check done\" 0ms (23:22:00.345)\n","stream":"stderr","time":"2020-07-13T23:22:17.347416633Z"} {"log":"Trace[1333022391]: [1.5497ms] [1.5497ms] END\n","stream":"stderr","time":"2020-07-13T23:22:17.347421133Z"} {"log":"I0713 23:22:17.346618 1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/healthz\" latency=\"1.5923ms\" userAgent=\"kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"[::1]:38174\" resp=0\n","stream":"stderr","time":"2020-07-13T23:22:17.347425833Z"} {"log":"I0713 23:22:17.347821 1 trace.go:201] Trace[951037341]: \"HTTP Request\" method:GET,url:/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller,verb:get,name:system:controller:pvc-protection-controller,resource:clusterroles,subresource:,namespace:,api-group:rbac.authorization.k8s.io,api-version:v1,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.343) (total time: 4ms):\n","stream":"stderr","time":"2020-07-13T23:22:17.349168933Z"} ... skipping 227 lines ... 
{"log":"Trace[32095045]: ---\"Conversion done\" 0ms (23:22:00.443)\n","stream":"stderr","time":"2020-07-13T23:22:17.446257233Z"} {"log":"Trace[32095045]: ---\"About to store object in database\" 0ms (23:22:00.443)\n","stream":"stderr","time":"2020-07-13T23:22:17.446298033Z"} {"log":"Trace[32095045]: ---\"Object stored in database\" 1ms (23:22:00.445)]\n","stream":"stderr","time":"2020-07-13T23:22:17.446303633Z"} {"log":"Trace[32095045]: [1.7532ms] [1.7532ms] END\n","stream":"stderr","time":"2020-07-13T23:22:17.446308233Z"} {"log":"I0713 23:22:17.445583 1 httplog.go:89] \"HTTP\" verb=\"POST\" URI=\"/apis/rbac.authorization.k8s.io/v1/clusterrolebindings\" latency=\"1.8214ms\" userAgent=\"kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"[::1]:38174\" resp=201\n","stream":"stderr","time":"2020-07-13T23:22:17.446312833Z"} {"log":"I0713 23:22:17.445870 1 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller\n","stream":"stderr","time":"2020-07-13T23:22:17.446318633Z"} {"log":"I0713 23:22:17.448210 1 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles\n","stream":"stderr","time":"2020-07-13T23:22:17.450198733Z"} {"log":"[-]poststarthook/rbac/bootstrap-roles failed: not finished\n","stream":"stderr","time":"2020-07-13T23:22:17.450322233Z"} {"log":"I0713 23:22:17.448330 1 trace.go:201] Trace[991715132]: \"HTTP Request\" method:GET,url:/healthz,verb:get,name:,resource:,subresource:,namespace:,api-group:,api-version:,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.445) (total time: 2ms):\n","stream":"stderr","time":"2020-07-13T23:22:17.450337833Z"} {"log":"Trace[991715132]: ---\"Authenticate check done\" 0ms (23:22:00.446)\n","stream":"stderr","time":"2020-07-13T23:22:17.450343633Z"} {"log":"Trace[991715132]: ---\"Authorize check done\" 0ms (23:22:00.446)\n","stream":"stderr","time":"2020-07-13T23:22:17.450348333Z"} {"log":"Trace[991715132]: [2.3059ms] [2.3059ms] END\n","stream":"stderr","time":"2020-07-13T23:22:17.450352933Z"} {"log":"I0713 23:22:17.448356 1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/healthz\" latency=\"2.3486ms\" userAgent=\"kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"[::1]:38174\" resp=0\n","stream":"stderr","time":"2020-07-13T23:22:17.450357533Z"} {"log":"I0713 23:22:17.448700 1 trace.go:201] Trace[627024024]: \"HTTP Request\" method:GET,url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller,verb:get,name:system:controller:clusterrole-aggregation-controller,resource:clusterrolebindings,subresource:,namespace:,api-group:rbac.authorization.k8s.io,api-version:v1,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.446) (total time: 2ms):\n","stream":"stderr","time":"2020-07-13T23:22:17.450363333Z"} ... skipping 209 lines ... 
{"log":"Trace[1533519336]: ---\"Conversion done\" 0ms (23:22:00.538)\n","stream":"stderr","time":"2020-07-13T23:22:17.541727233Z"} {"log":"Trace[1533519336]: ---\"About to store object in database\" 0ms (23:22:00.538)\n","stream":"stderr","time":"2020-07-13T23:22:17.541731333Z"} {"log":"Trace[1533519336]: ---\"Object stored in database\" 1ms (23:22:00.540)]\n","stream":"stderr","time":"2020-07-13T23:22:17.541735433Z"} {"log":"Trace[1533519336]: [2.1793ms] [2.1793ms] END\n","stream":"stderr","time":"2020-07-13T23:22:17.541746333Z"} {"log":"I0713 23:22:17.540970 1 httplog.go:89] \"HTTP\" verb=\"POST\" URI=\"/apis/rbac.authorization.k8s.io/v1/clusterrolebindings\" latency=\"2.2617ms\" userAgent=\"kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"[::1]:38174\" resp=201\n","stream":"stderr","time":"2020-07-13T23:22:17.541750533Z"} {"log":"I0713 23:22:17.541182 1 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller\n","stream":"stderr","time":"2020-07-13T23:22:17.541755533Z"} {"log":"I0713 23:22:17.551811 1 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles\n","stream":"stderr","time":"2020-07-13T23:22:17.552966433Z"} {"log":"[-]poststarthook/rbac/bootstrap-roles failed: not finished\n","stream":"stderr","time":"2020-07-13T23:22:17.553006233Z"} {"log":"I0713 23:22:17.552252 1 trace.go:201] Trace[962213231]: \"HTTP Request\" method:GET,url:/healthz,verb:get,name:,resource:,subresource:,namespace:,api-group:,api-version:,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.545) (total time: 7ms):\n","stream":"stderr","time":"2020-07-13T23:22:17.553011433Z"} {"log":"Trace[962213231]: ---\"Authenticate check done\" 0ms (23:22:00.545)\n","stream":"stderr","time":"2020-07-13T23:22:17.553017033Z"} {"log":"Trace[962213231]: ---\"Authorize check done\" 0ms (23:22:00.545)\n","stream":"stderr","time":"2020-07-13T23:22:17.553021633Z"} {"log":"Trace[962213231]: [7.1643ms] [7.1643ms] END\n","stream":"stderr","time":"2020-07-13T23:22:17.553026133Z"} {"log":"I0713 23:22:17.552268 1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/healthz\" latency=\"7.2081ms\" userAgent=\"kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"[::1]:38174\" resp=0\n","stream":"stderr","time":"2020-07-13T23:22:17.553030733Z"} {"log":"I0713 23:22:17.552408 1 trace.go:201] Trace[1947157360]: \"HTTP Request\" method:GET,url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller,verb:get,name:system:controller:namespace-controller,resource:clusterrolebindings,subresource:,namespace:,api-group:rbac.authorization.k8s.io,api-version:v1,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.541) (total time: 11ms):\n","stream":"stderr","time":"2020-07-13T23:22:17.553036533Z"} ... skipping 189 lines ... 
{"log":"Trace[2047753961]: ---\"Conversion done\" 0ms (23:22:00.647)\n","stream":"stderr","time":"2020-07-13T23:22:17.650942133Z"} {"log":"Trace[2047753961]: ---\"About to store object in database\" 0ms (23:22:00.647)\n","stream":"stderr","time":"2020-07-13T23:22:17.650946733Z"} {"log":"Trace[2047753961]: ---\"Object stored in database\" 2ms (23:22:00.649)]\n","stream":"stderr","time":"2020-07-13T23:22:17.650951433Z"} {"log":"Trace[2047753961]: [2.8789ms] [2.8789ms] END\n","stream":"stderr","time":"2020-07-13T23:22:17.650956033Z"} {"log":"I0713 23:22:17.650164 1 httplog.go:89] \"HTTP\" verb=\"POST\" URI=\"/apis/rbac.authorization.k8s.io/v1/clusterrolebindings\" latency=\"2.974ms\" userAgent=\"kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"[::1]:38174\" resp=201\n","stream":"stderr","time":"2020-07-13T23:22:17.650960333Z"} {"log":"I0713 23:22:17.650389 1 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller\n","stream":"stderr","time":"2020-07-13T23:22:17.650966733Z"} {"log":"I0713 23:22:17.651838 1 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles\n","stream":"stderr","time":"2020-07-13T23:22:17.652344233Z"} {"log":"[-]poststarthook/rbac/bootstrap-roles failed: not finished\n","stream":"stderr","time":"2020-07-13T23:22:17.652355833Z"} {"log":"I0713 23:22:17.651972 1 trace.go:201] Trace[2010593797]: \"HTTP Request\" method:GET,url:/healthz,verb:get,name:,resource:,subresource:,namespace:,api-group:,api-version:,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.645) (total time: 6ms):\n","stream":"stderr","time":"2020-07-13T23:22:17.652391233Z"} {"log":"Trace[2010593797]: ---\"Authenticate check done\" 0ms (23:22:00.645)\n","stream":"stderr","time":"2020-07-13T23:22:17.652397833Z"} {"log":"Trace[2010593797]: ---\"Authorize check done\" 0ms (23:22:00.645)\n","stream":"stderr","time":"2020-07-13T23:22:17.652402333Z"} {"log":"Trace[2010593797]: [6.9149ms] [6.9149ms] END\n","stream":"stderr","time":"2020-07-13T23:22:17.652406933Z"} {"log":"I0713 23:22:17.651988 1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/healthz\" latency=\"6.9715ms\" userAgent=\"kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"[::1]:38174\" resp=0\n","stream":"stderr","time":"2020-07-13T23:22:17.652411233Z"} {"log":"I0713 23:22:17.653117 1 trace.go:201] Trace[383478842]: \"HTTP Request\" method:GET,url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller,verb:get,name:system:controller:statefulset-controller,resource:clusterrolebindings,subresource:,namespace:,api-group:rbac.authorization.k8s.io,api-version:v1,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.650) (total time: 2ms):\n","stream":"stderr","time":"2020-07-13T23:22:17.654384233Z"} ... skipping 93 lines ... 
{"log":"Trace[1080329607]: ---\"Conversion done\" 0ms (23:22:00.714)\n","stream":"stderr","time":"2020-07-13T23:22:17.727141433Z"} {"log":"Trace[1080329607]: ---\"About to store object in database\" 0ms (23:22:00.714)\n","stream":"stderr","time":"2020-07-13T23:22:17.727145933Z"} {"log":"Trace[1080329607]: ---\"Object stored in database\" 2ms (23:22:00.717)]\n","stream":"stderr","time":"2020-07-13T23:22:17.727150433Z"} {"log":"Trace[1080329607]: [3.1705ms] [3.1705ms] END\n","stream":"stderr","time":"2020-07-13T23:22:17.727154933Z"} {"log":"I0713 23:22:17.717378 1 httplog.go:89] \"HTTP\" verb=\"POST\" URI=\"/apis/rbac.authorization.k8s.io/v1/clusterrolebindings\" latency=\"3.2499ms\" userAgent=\"kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"[::1]:38174\" resp=201\n","stream":"stderr","time":"2020-07-13T23:22:17.727159233Z"} {"log":"I0713 23:22:17.717575 1 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller\n","stream":"stderr","time":"2020-07-13T23:22:17.727165033Z"} {"log":"I0713 23:22:17.718547 1 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles\n","stream":"stderr","time":"2020-07-13T23:22:17.727169733Z"} {"log":"[-]poststarthook/rbac/bootstrap-roles failed: not finished\n","stream":"stderr","time":"2020-07-13T23:22:17.727174333Z"} {"log":"I0713 23:22:17.718691 1 trace.go:201] Trace[377862342]: \"HTTP Request\" method:GET,url:/healthz,verb:get,name:,resource:,subresource:,namespace:,api-group:,api-version:,user-agent:kube-controller-manager/v1.20.0 (linux/amd64) kubernetes/240a72b/shared-informers,client:10.255.255.5 (13-Jul-2020 23:22:00.708) (total time: 10ms):\n","stream":"stderr","time":"2020-07-13T23:22:17.727179133Z"} {"log":"Trace[377862342]: ---\"Authenticate check done\" 0ms (23:22:00.708)\n","stream":"stderr","time":"2020-07-13T23:22:17.727184733Z"} {"log":"Trace[377862342]: ---\"Authorize check done\" 0ms (23:22:00.708)\n","stream":"stderr","time":"2020-07-13T23:22:17.727189133Z"} {"log":"Trace[377862342]: [10.6303ms] [10.6303ms] END\n","stream":"stderr","time":"2020-07-13T23:22:17.727193733Z"} {"log":"I0713 23:22:17.718727 1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/healthz?timeout=32s\" latency=\"10.683ms\" userAgent=\"kube-controller-manager/v1.20.0 (linux/amd64) kubernetes/240a72b/shared-informers\" srcIP=\"10.255.255.5:57168\" resp=0\n","stream":"stderr","time":"2020-07-13T23:22:17.727198733Z"} {"log":"I0713 23:22:17.720150 1 trace.go:201] Trace[1228291255]: \"HTTP Request\" method:GET,url:/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader,verb:get,name:extension-apiserver-authentication-reader,resource:roles,subresource:,namespace:kube-system,api-group:rbac.authorization.k8s.io,api-version:v1,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.717) (total time: 2ms):\n","stream":"stderr","time":"2020-07-13T23:22:17.727204533Z"} ... skipping 71 lines ... 
{"log":"Trace[1359163284]: ---\"Authenticate check done\" 0ms (23:22:00.743)\n","stream":"stderr","time":"2020-07-13T23:22:17.787080633Z"} {"log":"Trace[1359163284]: ---\"Authorize check done\" 0ms (23:22:00.743)\n","stream":"stderr","time":"2020-07-13T23:22:17.787085333Z"} {"log":"Trace[1359163284]: [\"Get\" url:/api/v1/namespaces/kube-system/pods/kube-scheduler-k8s-master-89242181-0,user-agent:kubelet/v1.20.0 (linux/amd64) kubernetes/240a72b,client:10.255.255.5 13ms (23:22:00.744)\n","stream":"stderr","time":"2020-07-13T23:22:17.787089833Z"} {"log":"Trace[1359163284]: ---\"About to Get from storage\" 0ms (23:22:00.744)]\n","stream":"stderr","time":"2020-07-13T23:22:17.787094933Z"} {"log":"Trace[1359163284]: [14.183ms] [14.183ms] END\n","stream":"stderr","time":"2020-07-13T23:22:17.787099533Z"} {"log":"I0713 23:22:17.757663 1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/api/v1/namespaces/kube-system/pods/kube-scheduler-k8s-master-89242181-0\" latency=\"14.286ms\" userAgent=\"kubelet/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"10.255.255.5:57176\" resp=404\n","stream":"stderr","time":"2020-07-13T23:22:17.787104333Z"} {"log":"I0713 23:22:17.759429 1 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles\n","stream":"stderr","time":"2020-07-13T23:22:17.787113733Z"} {"log":"[-]poststarthook/rbac/bootstrap-roles failed: not finished\n","stream":"stderr","time":"2020-07-13T23:22:17.787118533Z"} {"log":"I0713 23:22:17.759557 1 trace.go:201] Trace[1048437866]: \"HTTP Request\" method:GET,url:/healthz,verb:get,name:,resource:,subresource:,namespace:,api-group:,api-version:,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.745) (total time: 14ms):\n","stream":"stderr","time":"2020-07-13T23:22:17.787122933Z"} {"log":"Trace[1048437866]: ---\"Authenticate check done\" 0ms (23:22:00.745)\n","stream":"stderr","time":"2020-07-13T23:22:17.787128333Z"} {"log":"Trace[1048437866]: ---\"Authorize check done\" 0ms (23:22:00.745)\n","stream":"stderr","time":"2020-07-13T23:22:17.787132933Z"} {"log":"Trace[1048437866]: [14.4822ms] [14.4822ms] END\n","stream":"stderr","time":"2020-07-13T23:22:17.787137633Z"} {"log":"I0713 23:22:17.759574 1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/healthz\" latency=\"14.5411ms\" userAgent=\"kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"[::1]:38174\" resp=0\n","stream":"stderr","time":"2020-07-13T23:22:17.787142133Z"} {"log":"I0713 23:22:17.760078 1 trace.go:201] Trace[908919453]: \"HTTP Request\" method:POST,url:/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles,verb:create,name:,resource:roles,subresource:,namespace:kube-system,api-group:rbac.authorization.k8s.io,api-version:v1,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.752) (total time: 7ms):\n","stream":"stderr","time":"2020-07-13T23:22:17.787147633Z"} ... skipping 122 lines ... 
{"log":"Trace[1944995030]: ---\"Conversion done\" 0ms (23:22:00.834)\n","stream":"stderr","time":"2020-07-13T23:22:17.841105333Z"} {"log":"Trace[1944995030]: ---\"About to store object in database\" 0ms (23:22:00.834)\n","stream":"stderr","time":"2020-07-13T23:22:17.841109833Z"} {"log":"Trace[1944995030]: ---\"Object stored in database\" 5ms (23:22:00.840)]\n","stream":"stderr","time":"2020-07-13T23:22:17.841114433Z"} {"log":"Trace[1944995030]: [6.2433ms] [6.2433ms] END\n","stream":"stderr","time":"2020-07-13T23:22:17.841118933Z"} {"log":"I0713 23:22:17.840862 1 httplog.go:89] \"HTTP\" verb=\"POST\" URI=\"/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles\" latency=\"6.3242ms\" userAgent=\"kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b\" srcIP=\"[::1]:38174\" resp=201\n","stream":"stderr","time":"2020-07-13T23:22:17.841123533Z"} {"log":"I0713 23:22:17.846294 1 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public\n","stream":"stderr","time":"2020-07-13T23:22:17.846796433Z"} {"log":"I0713 23:22:17.850281 1 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles\n","stream":"stderr","time":"2020-07-13T23:22:17.850415533Z"} {"log":"[-]poststarthook/rbac/bootstrap-roles failed: not finished\n","stream":"stderr","time":"2020-07-13T23:22:17.851113233Z"} {"log":"I0713 23:22:17.850408 1 trace.go:201] Trace[2044841476]: \"HTTP Request\" method:GET,url:/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader,verb:get,name:system::extension-apiserver-authentication-reader,resource:rolebindings,subresource:,namespace:kube-system,api-group:rbac.authorization.k8s.io,api-version:v1,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 (13-Jul-2020 23:22:00.846) (total time: 3ms):\n","stream":"stderr","time":"2020-07-13T23:22:17.851122533Z"} {"log":"Trace[2044841476]: ---\"Authenticate check done\" 0ms (23:22:00.846)\n","stream":"stderr","time":"2020-07-13T23:22:17.851129233Z"} {"log":"Trace[2044841476]: ---\"Authorize check done\" 0ms (23:22:00.846)\n","stream":"stderr","time":"2020-07-13T23:22:17.851134033Z"} {"log":"Trace[2044841476]: [\"Get\" url:/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/240a72b,client:::1 3ms (23:22:00.846)\n","stream":"stderr","time":"2020-07-13T23:22:17.851139133Z"} {"log":"Trace[2044841476]: ---\"About to Get from storage\" 0ms (23:22:00.846)]\n","stream":"stderr","time":"2020-07-13T23:22:17.851144433Z"} {"log":"Trace[2044841476]: [3.8906ms] [3.8906ms] END\n","stream":"stderr","time":"2020-07-13T23:22:17.851149033Z"} ... skipping 403 lines ... 
{"log":"Trace[1616842934]: ---\"Listed items from cache\" count:0 0ms (23:22:00.146)\n","stream":"stderr","time":"2020-07-13T23:22:18.146553433Z"} {"log":"Trace[1616842934]: ---\"Filtered items\" count:0 0ms (23:22:00.146)]\n","stream":"stderr","time":"2020-07-13T23:22:18.146557733Z"} {"log":"Trace[1616842934]: ---\"Listing from storage done\" 0ms (23:22:00.146)\n","stream":"stderr","time":"2020-07-13T23:22:18.146561633Z"} {"log":"Trace[1616842934]: ---\"Writing http response done\" count:0 0ms (23:22:00.146)]\n","stream":"stderr","time":"2020-07-13T23:22:18.146565833Z"} {"log":"Trace[1616842934]: [779.8µs] [779.8µs] END\n","stream":"stderr","time":"2020-07-13T23:22:18.146570333Z"} {"log":"I0713 23:22:18.146616 1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/apps/v1/statefulsets?limit=500\u0026resourceVersion=0\" latency=\"1.1354ms\" userAgent=\"kube-scheduler/v1.20.0 (linux/amd64) kubernetes/240a72b/scheduler\" srcIP=\"10.255.255.5:57238\" resp=200\n","stream":"stderr","time":"2020-07-13T23:22:18.146683933Z"} {"log":"I0713 23:22:18.147268 1 get.go:261] \"Starting watch\" path=\"/api/v1/pods\" resourceVersion=\"6\" labels=\"\" fields=\"status.phase!=Failed,status.phase!=Succeeded\" timeout=\"6m5s\"\n","stream":"stderr","time":"2020-07-13T23:22:18.147349433Z"} {"log":"I0713 23:22:18.148024 1 trace.go:201] Trace[991414968]: \"HTTP Request\" method:GET,url:/apis/storage.k8s.io/v1/storageclasses,verb:list,name:,resource:storageclasses,subresource:,namespace:,api-group:storage.k8s.io,api-version:v1,user-agent:kube-scheduler/v1.20.0 (linux/amd64) kubernetes/240a72b/scheduler,client:10.255.255.5 (13-Jul-2020 23:22:00.147) (total time: 0ms):\n","stream":"stderr","time":"2020-07-13T23:22:18.148127833Z"} {"log":"Trace[991414968]: ---\"Authenticate check done\" 0ms (23:22:00.147)\n","stream":"stderr","time":"2020-07-13T23:22:18.148140733Z"} {"log":"Trace[991414968]: ---\"Authorize check done\" 0ms (23:22:00.147)\n","stream":"stderr","time":"2020-07-13T23:22:18.148173233Z"} {"log":"Trace[991414968]: [\"List\" url:/apis/storage.k8s.io/v1/storageclasses,user-agent:kube-scheduler/v1.20.0 (linux/amd64) kubernetes/240a72b/scheduler,client:10.255.255.5 0ms (23:22:00.147)\n","stream":"stderr","time":"2020-07-13T23:22:18.148179033Z"} {"log":"Trace[991414968]: ---\"About to List from storage\" 0ms (23:22:00.147)\n","stream":"stderr","time":"2020-07-13T23:22:18.148184133Z"} {"log":"Trace[991414968]: [\"cacher list\" type:*storage.StorageClass 0ms (23:22:00.147)\n","stream":"stderr","time":"2020-07-13T23:22:18.148188733Z"} ... skipping 133658 lines ... 
{"log":"I0713 23:32:38.013766 1 trace.go:201] Trace[453670857]: \"HTTP Request\" method:GET,url:/apis/scheduling.k8s.io/v1beta1,verb:get,name:,resource:,subresource:,namespace:,api-group:,api-version:,user-agent:kubectl/v1.13.2 (linux/amd64) kubernetes/cff46ab,client:10.255.255.5 (13-Jul-2020 23:32:00.013) (total time: 0ms):\n","stream":"stderr","time":"2020-07-13T23:32:38.01389987Z"} {"log":"Trace[453670857]: ---\"Authenticate check done\" 0ms (23:32:00.013)\n","stream":"stderr","time":"2020-07-13T23:32:38.01391567Z"} {"log":"Trace[453670857]: ---\"Authorize check done\" 0ms (23:32:00.013)\n","stream":"stderr","time":"2020-07-13T23:32:38.01392167Z"} {"log":"Trace[453670857]: [345.504µs] [345.504µs] END\n","stream":"stderr","time":"2020-07-13T23:32:38.013926271Z"} {"log":"I0713 23:32:38.013790 1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/scheduling.k8s.io/v1beta1?timeout=32s\" latency=\"390.604µs\" userAgent=\"kubectl/v1.13.2 (linux/amd64) kubernetes/cff46ab\" srcIP=\"10.255.255.5:33494\" resp=200\n","stream":"stderr","time":"2020-07-13T23:32:38.013931071Z"} {"log":"I0713 23:32:38.022591 1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00c4c83c0, {READY \u003cnil\u003e}\n","stream":"stderr","time":"2020-07-13T23:32:38.022815068Z"} {"log":"I0713 23:32:38.023110 1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = \"transport is closing\"\n","stream":"stderr","time":"2020-07-13T23:32:38.023240572Z"} {"log":"I0713 23:32:38.039546 1 trace.go:201] Trace[1551707337]: \"HTTP Request\" method:GET,url:/apis/coordination.k8s.io/v1,verb:get,name:,resource:,subresource:,namespace:,api-group:,api-version:,user-agent:kubectl/v1.13.2 (linux/amd64) kubernetes/cff46ab,client:10.255.255.5 (13-Jul-2020 23:32:00.039) (total time: 0ms):\n","stream":"stderr","time":"2020-07-13T23:32:38.039687752Z"} {"log":"Trace[1551707337]: ---\"Authenticate check done\" 0ms (23:32:00.039)\n","stream":"stderr","time":"2020-07-13T23:32:38.039752352Z"} {"log":"Trace[1551707337]: ---\"Authorize check done\" 0ms (23:32:00.039)\n","stream":"stderr","time":"2020-07-13T23:32:38.039760253Z"} {"log":"Trace[1551707337]: [328.104µs] [328.104µs] END\n","stream":"stderr","time":"2020-07-13T23:32:38.039765053Z"} {"log":"I0713 23:32:38.039590 1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/coordination.k8s.io/v1?timeout=32s\" latency=\"387.304µs\" userAgent=\"kubectl/v1.13.2 (linux/amd64) kubernetes/cff46ab\" srcIP=\"10.255.255.5:33494\" resp=200\n","stream":"stderr","time":"2020-07-13T23:32:38.039769853Z"} {"log":"I0713 23:32:38.069766 1 trace.go:201] Trace[505427741]: \"HTTP Request\" method:GET,url:/apis/coordination.k8s.io/v1beta1,verb:get,name:,resource:,subresource:,namespace:,api-group:,api-version:,user-agent:kubectl/v1.13.2 (linux/amd64) kubernetes/cff46ab,client:10.255.255.5 (13-Jul-2020 23:32:00.069) (total time: 0ms):\n","stream":"stderr","time":"2020-07-13T23:32:38.070259786Z"} ... skipping 5346 lines ... 
{"log":"I0713 23:33:21.132155 1 httplog.go:89] \"HTTP\" verb=\"PUT\" URI=\"/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=10s\" latency=\"7.729384ms\" userAgent=\"kube-controller-manager/v1.20.0 (linux/amd64) kubernetes/240a72b/leader-election\" srcIP=\"10.255.255.5:57168\" resp=200\n","stream":"stderr","time":"2020-07-13T23:33:21.132408677Z"} {"log":"I0713 23:33:21.181399 1 client.go:360] parsed scheme: \"passthrough\"\n","stream":"stderr","time":"2020-07-13T23:33:21.181484613Z"} {"log":"I0713 23:33:21.181456 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 \u003cnil\u003e 0 \u003cnil\u003e}] \u003cnil\u003e \u003cnil\u003e}\n","stream":"stderr","time":"2020-07-13T23:33:21.181664315Z"} {"log":"I0713 23:33:21.181473 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\n","stream":"stderr","time":"2020-07-13T23:33:21.181675015Z"} {"log":"I0713 23:33:21.181539 1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00f5ad570, {CONNECTING \u003cnil\u003e}\n","stream":"stderr","time":"2020-07-13T23:33:21.181837717Z"} {"log":"I0713 23:33:21.207333 1 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00f5ad570, {READY \u003cnil\u003e}\n","stream":"stderr","time":"2020-07-13T23:33:21.207469897Z"} {"log":"I0713 23:33:21.208014 1 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = \"transport is closing\"\n","stream":"stderr","time":"2020-07-13T23:33:21.208086704Z"} {"log":"I0713 23:33:21.585778 1 trace.go:201] Trace[1477173166]: \"HTTP Request\" method:POST,url:/api/v1/namespaces/default/events,verb:create,name:,resource:events,subresource:,namespace:default,api-group:,api-version:v1,user-agent:kubelet/v1.20.0 (linux/amd64) kubernetes/240a72b,client:10.255.255.5 (13-Jul-2020 23:33:00.582) (total time: 3ms):\n","stream":"stderr","time":"2020-07-13T23:33:21.586082231Z"} {"log":"Trace[1477173166]: ---\"Authenticate check done\" 0ms (23:33:00.583)\n","stream":"stderr","time":"2020-07-13T23:33:21.586101332Z"} {"log":"Trace[1477173166]: ---\"Authorize check done\" 0ms (23:33:00.583)\n","stream":"stderr","time":"2020-07-13T23:33:21.586113732Z"} {"log":"Trace[1477173166]: [\"Create\" url:/api/v1/namespaces/default/events,user-agent:kubelet/v1.20.0 (linux/amd64) kubernetes/240a72b,client:10.255.255.5 2ms (23:33:00.583)\n","stream":"stderr","time":"2020-07-13T23:33:21.586118732Z"} {"log":"Trace[1477173166]: ---\"About to convert to expected version\" 0ms (23:33:00.583)\n","stream":"stderr","time":"2020-07-13T23:33:21.586123432Z"} {"log":"Trace[1477173166]: ---\"Conversion done\" 0ms (23:33:00.583)\n","stream":"stderr","time":"2020-07-13T23:33:21.586128632Z"} ... skipping 2329 lines ... 
{"log":"I0713 23:21:56.414681 1 tlsconfig.go:200] loaded serving cert [\"Generated self signed cert\"]: \"localhost@1594682516\" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer=\"localhost-ca@1594682515\" (2020-07-13 22:21:55 +0000 UTC to 2021-07-13 22:21:55 +0000 UTC (now=2020-07-13 23:21:56.414649333 +0000 UTC))\n","stream":"stderr","time":"2020-07-13T23:21:56.414799733Z"} {"log":"I0713 23:21:56.414943 1 named_certificates.go:53] loaded SNI cert [0/\"self-signed loopback\"]: \"apiserver-loopback-client@1594682516\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1594682516\" (2020-07-13 22:21:56 +0000 UTC to 2021-07-13 22:21:56 +0000 UTC (now=2020-07-13 23:21:56.414929433 +0000 UTC))\n","stream":"stderr","time":"2020-07-13T23:21:56.415037633Z"} {"log":"I0713 23:21:56.414982 1 secure_serving.go:197] Serving securely on [::]:10257\n","stream":"stderr","time":"2020-07-13T23:21:56.415051833Z"} {"log":"I0713 23:21:56.415401 1 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252\n","stream":"stderr","time":"2020-07-13T23:21:56.415510933Z"} {"log":"I0713 23:21:56.415450 1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager...\n","stream":"stderr","time":"2020-07-13T23:21:56.415542933Z"} {"log":"I0713 23:21:56.415775 1 tlsconfig.go:240] Starting DynamicServingCertificateController\n","stream":"stderr","time":"2020-07-13T23:21:56.415862833Z"} {"log":"E0713 23:21:56.416046 1 leaderelection.go:321] error retrieving resource lock kube-system/kube-controller-manager: Get \"https://10.255.255.5:443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s\": dial tcp 10.255.255.5:443: connect: connection refused\n","stream":"stderr","time":"2020-07-13T23:21:56.416118933Z"} {"log":"E0713 23:21:58.674828 1 leaderelection.go:321] error retrieving resource lock kube-system/kube-controller-manager: Get \"https://10.255.255.5:443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s\": dial tcp 10.255.255.5:443: connect: connection refused\n","stream":"stderr","time":"2020-07-13T23:21:58.678273133Z"} {"log":"E0713 23:22:02.854628 1 leaderelection.go:321] error retrieving resource lock kube-system/kube-controller-manager: Get \"https://10.255.255.5:443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s\": dial tcp 10.255.255.5:443: connect: connection refused\n","stream":"stderr","time":"2020-07-13T23:22:02.854780633Z"} {"log":"E0713 23:22:05.462697 1 leaderelection.go:321] error retrieving resource lock kube-system/kube-controller-manager: Get \"https://10.255.255.5:443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s\": dial tcp 10.255.255.5:443: connect: connection refused\n","stream":"stderr","time":"2020-07-13T23:22:05.462818233Z"} {"log":"E0713 23:22:08.864668 1 leaderelection.go:321] error retrieving resource lock kube-system/kube-controller-manager: Get \"https://10.255.255.5:443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s\": dial tcp 10.255.255.5:443: connect: connection refused\n","stream":"stderr","time":"2020-07-13T23:22:08.864779333Z"} {"log":"I0713 23:22:15.414324 1 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager\n","stream":"stderr","time":"2020-07-13T23:22:15.414426933Z"} {"log":"I0713 23:22:15.421584 1 event.go:291] \"Event occurred\" object=\"kube-system/kube-controller-manager\" kind=\"Endpoints\" 
apiVersion=\"v1\" type=\"Normal\" reason=\"LeaderElection\" message=\"k8s-master-89242181-0_45256d2d-b9cd-46ec-bfad-09f3a73b56c2 became leader\"\n","stream":"stderr","time":"2020-07-13T23:22:15.421724733Z"} {"log":"I0713 23:22:15.421623 1 event.go:291] \"Event occurred\" object=\"kube-system/kube-controller-manager\" kind=\"Lease\" apiVersion=\"coordination.k8s.io/v1\" type=\"Normal\" reason=\"LeaderElection\" message=\"k8s-master-89242181-0_45256d2d-b9cd-46ec-bfad-09f3a73b56c2 became leader\"\n","stream":"stderr","time":"2020-07-13T23:22:15.421748133Z"} {"log":"I0713 23:22:15.439307 1 controllermanager.go:231] using legacy client builder\n","stream":"stderr","time":"2020-07-13T23:22:15.440856533Z"} {"log":"W0713 23:22:19.065791 1 plugins.go:105] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release\n","stream":"stderr","time":"2020-07-13T23:22:19.065884833Z"} {"log":"I0713 23:22:19.066671 1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token\n","stream":"stderr","time":"2020-07-13T23:22:19.066765633Z"} ... skipping 31 lines ... {"log":"I0713 23:22:19.070587 1 reflector.go:207] Starting reflector *v1.ServiceAccount (15h51m16.701905637s) from k8s.io/client-go/informers/factory.go:134\n","stream":"stderr","time":"2020-07-13T23:22:19.070649333Z"} {"log":"I0713 23:22:19.070767 1 shared_informer.go:240] Waiting for caches to sync for tokens\n","stream":"stderr","time":"2020-07-13T23:22:19.070828233Z"} {"log":"I0713 23:22:19.070356 1 reflector.go:207] Starting reflector *v1.Secret (15h51m16.701905637s) from k8s.io/client-go/informers/factory.go:134\n","stream":"stderr","time":"2020-07-13T23:22:19.070915933Z"} {"log":"I0713 23:22:19.081524 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146\n","stream":"stderr","time":"2020-07-13T23:22:19.081602833Z"} {"log":"I0713 23:22:19.170904 1 shared_informer.go:247] Caches are synced for tokens \n","stream":"stderr","time":"2020-07-13T23:22:19.171032033Z"} {"log":"I0713 23:22:19.197279 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146\n","stream":"stderr","time":"2020-07-13T23:22:19.197385933Z"} {"log":"W0713 23:22:19.212700 1 azure_config.go:52] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets \"azure-cloud-provider\" is forbidden: User \"system:serviceaccount:kube-system:azure-cloud-provider\" cannot get resource \"secrets\" in API group \"\" in the namespace \"kube-system\", skip initializing from secret\n","stream":"stderr","time":"2020-07-13T23:22:19.212773433Z"} {"log":"I0713 23:22:19.212727 1 controllermanager.go:534] Starting \"daemonset\"\n","stream":"stderr","time":"2020-07-13T23:22:19.212783933Z"} {"log":"I0713 23:22:19.217934 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146\n","stream":"stderr","time":"2020-07-13T23:22:19.218023433Z"} {"log":"I0713 23:22:19.237094 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146\n","stream":"stderr","time":"2020-07-13T23:22:19.237178433Z"} {"log":"I0713 23:22:19.238270 1 controllermanager.go:549] Started \"daemonset\"\n","stream":"stderr","time":"2020-07-13T23:22:19.238346733Z"} {"log":"I0713 23:22:19.238288 1 controllermanager.go:534] Starting 
\"horizontalpodautoscaling\"\n","stream":"stderr","time":"2020-07-13T23:22:19.238359633Z"} {"log":"I0713 23:22:19.238399 1 daemon_controller.go:285] Starting daemon sets controller\n","stream":"stderr","time":"2020-07-13T23:22:19.238472333Z"} ... skipping 38 lines ... {"log":"I0713 23:22:19.376356 1 controllermanager.go:549] Started \"attachdetach\"\n","stream":"stderr","time":"2020-07-13T23:22:19.376448833Z"} {"log":"I0713 23:22:19.376372 1 controllermanager.go:534] Starting \"ttl-after-finished\"\n","stream":"stderr","time":"2020-07-13T23:22:19.376455133Z"} {"log":"W0713 23:22:19.376381 1 controllermanager.go:541] Skipping \"ttl-after-finished\"\n","stream":"stderr","time":"2020-07-13T23:22:19.376459733Z"} {"log":"I0713 23:22:19.376389 1 controllermanager.go:534] Starting \"root-ca-cert-publisher\"\n","stream":"stderr","time":"2020-07-13T23:22:19.376463933Z"} {"log":"W0713 23:22:19.376396 1 controllermanager.go:541] Skipping \"root-ca-cert-publisher\"\n","stream":"stderr","time":"2020-07-13T23:22:19.376468433Z"} {"log":"I0713 23:22:19.376403 1 controllermanager.go:534] Starting \"endpointslicemirroring\"\n","stream":"stderr","time":"2020-07-13T23:22:19.376472933Z"} {"log":"W0713 23:22:19.376575 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName=\"k8s-master-89242181-0\" does not exist\n","stream":"stderr","time":"2020-07-13T23:22:19.376653533Z"} {"log":"I0713 23:22:19.376660 1 attach_detach_controller.go:322] Starting attach detach controller\n","stream":"stderr","time":"2020-07-13T23:22:19.376697633Z"} {"log":"I0713 23:22:19.376721 1 shared_informer.go:240] Waiting for caches to sync for attach detach\n","stream":"stderr","time":"2020-07-13T23:22:19.376763433Z"} {"log":"I0713 23:22:19.422470 1 reflector.go:207] Starting reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146\n","stream":"stderr","time":"2020-07-13T23:22:19.422576133Z"} {"log":"I0713 23:22:19.521997 1 reflector.go:213] Stopping reflector *v1.Secret (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146\n","stream":"stderr","time":"2020-07-13T23:22:19.523398033Z"} {"log":"I0713 23:22:19.523150 1 controllermanager.go:549] Started \"endpointslicemirroring\"\n","stream":"stderr","time":"2020-07-13T23:22:19.523406133Z"} {"log":"I0713 23:22:19.523169 1 controllermanager.go:534] Starting \"bootstrapsigner\"\n","stream":"stderr","time":"2020-07-13T23:22:19.523409533Z"} ... skipping 327 lines ... 
{"log":"I0713 23:22:26.923621 1 garbagecollector.go:404] \"Processing object\" object=\"k8s-master-89242181-0\" objectUID=1afb5ba5-0071-44b5-be43-daa1b8487d4b kind=\"Node\"\n","stream":"stderr","time":"2020-07-13T23:22:26.923700033Z"} {"log":"I0713 23:22:26.926172 1 resource_quota_controller.go:434] syncing resource quota controller with updated resources from discovery: added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=persistentvolumeclaims /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets autoscaling/v1, Resource=horizontalpodautoscalers batch/v1, Resource=jobs batch/v1beta1, Resource=cronjobs coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1beta1, Resource=endpointslices events.k8s.io/v1, Resource=events extensions/v1beta1, Resource=ingresses networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies policy/v1beta1, Resource=poddisruptionbudgets rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles], removed: []\n","stream":"stderr","time":"2020-07-13T23:22:26.926247433Z"} {"log":"I0713 23:22:26.926342 1 shared_informer.go:240] Waiting for caches to sync for resource quota\n","stream":"stderr","time":"2020-07-13T23:22:26.926413833Z"} {"log":"I0713 23:22:26.926364 1 shared_informer.go:247] Caches are synced for resource quota \n","stream":"stderr","time":"2020-07-13T23:22:26.926423133Z"} {"log":"I0713 23:22:26.926371 1 resource_quota_controller.go:453] synced quota controller\n","stream":"stderr","time":"2020-07-13T23:22:26.926428833Z"} {"log":"I0713 23:22:26.926827 1 garbagecollector.go:449] object [v1/Node, namespace: , name: k8s-master-89242181-0, uid: 1afb5ba5-0071-44b5-be43-daa1b8487d4b]'s doesn't have an owner, continue on next item\n","stream":"stderr","time":"2020-07-13T23:22:26.926914333Z"} {"log":"W0713 23:22:28.748591 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName=\"8924k8s010\" does not exist\n","stream":"stderr","time":"2020-07-13T23:22:28.751848433Z"} {"log":"I0713 23:22:28.763645 1 ttl_controller.go:273] \"Changed ttl annotation\" node=\"8924k8s010\" new_ttl=\"0s\"\n","stream":"stderr","time":"2020-07-13T23:22:28.763751933Z"} {"log":"I0713 23:22:30.847950 1 controller_utils.go:122] Update ready status of pods on node [k8s-master-89242181-0]\n","stream":"stderr","time":"2020-07-13T23:22:30.850594233Z"} {"log":"I0713 23:22:31.329002 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: southcentralus:\u0000:0\n","stream":"stderr","time":"2020-07-13T23:22:31.329131433Z"} {"log":"I0713 23:22:31.329043 1 node_lifecycle_controller.go:773] Controller observed a new Node: \"8924k8s010\"\n","stream":"stderr","time":"2020-07-13T23:22:31.329167133Z"} {"log":"I0713 23:22:31.329052 1 controller_utils.go:172] Recording Registered Node 8924k8s010 in Controller event message for node 8924k8s010\n","stream":"stderr","time":"2020-07-13T23:22:31.329174833Z"} {"log":"I0713 23:22:31.329760 1 event.go:291] \"Event occurred\" object=\"8924k8s010\" kind=\"Node\" apiVersion=\"v1\" type=\"Normal\" reason=\"RegisteredNode\" 
message=\"Node 8924k8s010 event: Registered Node 8924k8s010 in Controller\"\n","stream":"stderr","time":"2020-07-13T23:22:31.329837833Z"} ... skipping 3 lines ... {"log":"I0713 23:22:36.348325 1 node_lifecycle_controller.go:872] Node 8924k8s010 is NotReady as of 2020-07-13 23:22:36.348308233 +0000 UTC m=+40.639953801. Adding it to the Taint queue.\n","stream":"stderr","time":"2020-07-13T23:22:36.348456133Z"} {"log":"I0713 23:22:38.778873 1 controller.go:708] Detected change in list of current cluster nodes. New node set: map[8924k8s010:{}]\n","stream":"stderr","time":"2020-07-13T23:22:38.779592633Z"} {"log":"I0713 23:22:38.778925 1 controller.go:716] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes\n","stream":"stderr","time":"2020-07-13T23:22:38.779611933Z"} {"log":"I0713 23:22:39.172576 1 event.go:291] \"Event occurred\" object=\"kube-system/coredns\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set coredns-7ff794b897 to 1\"\n","stream":"stderr","time":"2020-07-13T23:22:39.174114933Z"} {"log":"I0713 23:22:39.173604 1 replica_set.go:559] \"Too few replicas\" replicaSet=\"kube-system/coredns-7ff794b897\" need=1 creating=1\n","stream":"stderr","time":"2020-07-13T23:22:39.174128933Z"} {"log":"I0713 23:22:39.197996 1 event.go:291] \"Event occurred\" object=\"kube-system/coredns-7ff794b897\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: coredns-7ff794b897-6vt24\"\n","stream":"stderr","time":"2020-07-13T23:22:39.204239233Z"} {"log":"I0713 23:22:39.242727 1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"kube-system/coredns\" err=\"Operation cannot be fulfilled on deployments.apps \\\"coredns\\\": the object has been modified; please apply your changes to the latest version and try again\"\n","stream":"stderr","time":"2020-07-13T23:22:39.242804533Z"} {"log":"I0713 23:22:39.338682 1 replica_set.go:559] \"Too few replicas\" replicaSet=\"kube-system/coredns-autoscaler-87b67c5fd\" need=1 creating=1\n","stream":"stderr","time":"2020-07-13T23:22:39.338778933Z"} {"log":"I0713 23:22:39.339700 1 event.go:291] \"Event occurred\" object=\"kube-system/coredns-autoscaler\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set coredns-autoscaler-87b67c5fd to 1\"\n","stream":"stderr","time":"2020-07-13T23:22:39.339786933Z"} {"log":"I0713 23:22:39.353617 1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"kube-system/coredns\" err=\"Operation cannot be fulfilled on deployments.apps \\\"coredns\\\": the object has been modified; please apply your changes to the latest version and try again\"\n","stream":"stderr","time":"2020-07-13T23:22:39.354596333Z"} {"log":"I0713 23:22:39.394573 1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"kube-system/coredns\" err=\"Operation cannot be fulfilled on deployments.apps \\\"coredns\\\": the object has been modified; please apply your changes to the latest version and try again\"\n","stream":"stderr","time":"2020-07-13T23:22:39.396163833Z"} {"log":"I0713 23:22:39.399457 1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"kube-system/coredns-autoscaler\" err=\"Operation cannot be fulfilled on deployments.apps \\\"coredns-autoscaler\\\": the object has been modified; please apply your changes to the latest version and try 
again\"\n","stream":"stderr","time":"2020-07-13T23:22:39.399549533Z"} {"log":"I0713 23:22:40.361871 1 event.go:291] \"Event occurred\" object=\"kube-system/coredns-autoscaler-87b67c5fd\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: coredns-autoscaler-87b67c5fd-sv9l2\"\n","stream":"stderr","time":"2020-07-13T23:22:40.371608333Z"} {"log":"I0713 23:22:41.348563 1 node_lifecycle_controller.go:896] Node 8924k8s010 is healthy again, removing all taints\n","stream":"stderr","time":"2020-07-13T23:22:41.348707133Z"} {"log":"I0713 23:22:41.348600 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.\n","stream":"stderr","time":"2020-07-13T23:22:41.348740733Z"} {"log":"I0713 23:22:46.471389 1 event.go:291] \"Event occurred\" object=\"kube-system/blobfuse-flexvol-installer\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: blobfuse-flexvol-installer-rdlvb\"\n","stream":"stderr","time":"2020-07-13T23:22:46.471506333Z"} {"log":"I0713 23:22:46.507961 1 event.go:291] \"Event occurred\" object=\"kube-system/azure-ip-masq-agent\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: azure-ip-masq-agent-rx86g\"\n","stream":"stderr","time":"2020-07-13T23:22:46.508059733Z"} {"log":"I0713 23:22:46.613790 1 event.go:291] \"Event occurred\" object=\"kube-system/kube-proxy\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: kube-proxy-w9zmm\"\n","stream":"stderr","time":"2020-07-13T23:22:46.613889033Z"} {"log":"E0713 23:22:46.656552 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set \u0026v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy\", UID:\"fa94eaba-9348-48c5-9fcd-478092252be9\", ResourceVersion:\"503\", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63730279366, loc:(*time.Location)(0x6f1daa0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"addonmanager.kubernetes.io/mode\":\"Reconcile\", \"component\":\"kube-proxy\", \"k8s-app\":\"kube-proxy\", \"kubernetes.io/cluster-service\":\"true\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", 
\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\",\\\"component\\\":\\\"kube-proxy\\\",\\\"k8s-app\\\":\\\"kube-proxy\\\",\\\"kubernetes.io/cluster-service\\\":\\\"true\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kube-proxy\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"component\\\":\\\"kube-proxy\\\",\\\"k8s-app\\\":\\\"kube-proxy\\\",\\\"tier\\\":\\\"node\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"cluster-autoscaler.kubernetes.io/daemonset-pod\\\":\\\"true\\\",\\\"scheduler.alpha.kubernetes.io/critical-pod\\\":\\\"\\\"},\\\"labels\\\":{\\\"component\\\":\\\"kube-proxy\\\",\\\"k8s-app\\\":\\\"kube-proxy\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"kube-proxy\\\",\\\"--config=/var/lib/kube-proxy/config.yaml\\\"],\\\"image\\\":\\\"k8sprowinternal.azurecr.io/kube-proxy-amd64:v1.20.0-alpha.0-150-g240a72b5c0a\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"kube-proxy\\\",\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\"}},\\\"securityContext\\\":{\\\"privileged\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ssl/certs\\\",\\\"name\\\":\\\"ssl-certs-host\\\",\\\"readOnly\\\":true},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\",\\\"readOnly\\\":true},{\\\"mountPath\\\":\\\"/var/lib/kubelet/kubeconfig\\\",\\\"name\\\":\\\"kubeconfig\\\",\\\"readOnly\\\":true},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"iptableslock\\\"},{\\\"mountPath\\\":\\\"/lib/modules/\\\",\\\"name\\\":\\\"kernelmodules\\\",\\\"readOnly\\\":true},{\\\"mountPath\\\":\\\"/var/lib/kube-proxy/config.yaml\\\",\\\"name\\\":\\\"kube-proxy-config-volume\\\",\\\"readOnly\\\":true,\\\"subPath\\\":\\\"config.yaml\\\"}]}],\\\"hostNetwork\\\":true,\\\"nodeSelector\\\":{\\\"kubernetes.io/os\\\":\\\"linux\\\"},\\\"priorityClassName\\\":\\\"system-node-critical\\\",\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"key\\\":\\\"node-role.kubernetes.io/master\\\",\\\"operator\\\":\\\"Equal\\\",\\\"value\\\":\\\"true\\\"},{\\\"effect\\\":\\\"NoExecute\\\",\\\"operator\\\":\\\"Exists\\\"},{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"},{\\\"key\\\":\\\"CriticalAddonsOnly\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/usr/share/ca-certificates\\\"},\\\"name\\\":\\\"ssl-certs-host\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/var/lib/kubelet/kubeconfig\\\"},\\\"name\\\":\\\"kubeconfig\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/kubernetes\\\"},\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\"},\\\"name\\\":\\\"iptableslock\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules/\\\"},\\\"name\\\":\\\"kernelmodules\\\"},{\\\"configMap\\\":{\\\"name\\\":\\\"kube-proxy-config\\\"},\\\"name\\\":\\\"kube-proxy-config-volume\\\"}]}},\\\"updateStrategy\\\":{\\\"rollingUpdate\\\":{\\\"maxUnavailable\\\":\\\"50%\\\"},\\\"type\\\":\\\"RollingUpdate\\\"}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kubectl\", Operation:\"Update\", APIVersion:\"apps/v1\", Time:(*v1.Time)(0xc0022ac0c0), FieldsType:\"FieldsV1\", 
FieldsV1:(*v1.FieldsV1)(0xc0022ac0e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0022ac100), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"component\":\"kube-proxy\", \"k8s-app\":\"kube-proxy\", \"tier\":\"node\"}, Annotations:map[string]string{\"cluster-autoscaler.kubernetes.io/daemonset-pod\":\"true\", \"scheduler.alpha.kubernetes.io/critical-pod\":\"\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"ssl-certs-host\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0022ac120), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:\"kubeconfig\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0022ac140), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, 
v1.Volume{Name:\"etc-kubernetes\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0022ac160), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:\"iptableslock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0022ac180), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:\"kernelmodules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0022ac1a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:\"kube-proxy-config-volume\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0022a2740), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"k8sprowinternal.azurecr.io/kube-proxy-amd64:v1.20.0-alpha.0-150-g240a72b5c0a\", Command:[]string{\"kube-proxy\", \"--config=/var/lib/kube-proxy/config.yaml\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"ssl-certs-host\", ReadOnly:true, MountPath:\"/etc/ssl/certs\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"etc-kubernetes\", ReadOnly:true, MountPath:\"/etc/kubernetes\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"kubeconfig\", ReadOnly:true, MountPath:\"/var/lib/kubelet/kubeconfig\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"iptableslock\", ReadOnly:false, MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"kernelmodules\", ReadOnly:true, MountPath:\"/lib/modules/\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"kube-proxy-config-volume\", ReadOnly:true, MountPath:\"/var/lib/kube-proxy/config.yaml\", 
SubPath:\"config.yaml\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc00229a2a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc00225bfb8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"\", DeprecatedServiceAccount:\"\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000607c00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"node-role.kubernetes.io/master\", Operator:\"Equal\", Value:\"true\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoExecute\", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:\"CriticalAddonsOnly\", Operator:\"Exists\", Value:\"\", Effect:\"\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0021c0","stream":"stderr","time":"2020-07-13T23:22:46.656717033Z"} {"log":"9b8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0022c0030)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again\n","stream":"stderr","time":"2020-07-13T23:22:46.656717033Z"} {"log":"I0713 23:22:46.825661 1 event.go:291] \"Event occurred\" object=\"kube-system/metrics-server\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set metrics-server-84d5cf8ccf to 1\"\n","stream":"stderr","time":"2020-07-13T23:22:46.827247433Z"} {"log":"I0713 23:22:46.826734 1 replica_set.go:559] \"Too few replicas\" replicaSet=\"kube-system/metrics-server-84d5cf8ccf\" need=1 creating=1\n","stream":"stderr","time":"2020-07-13T23:22:46.827266833Z"} {"log":"I0713 23:22:46.928196 1 event.go:291] \"Event occurred\" object=\"kube-system/metrics-server-84d5cf8ccf\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" 
reason=\"SuccessfulCreate\" message=\"Created pod: metrics-server-84d5cf8ccf-l88lb\"\n","stream":"stderr","time":"2020-07-13T23:22:46.928674533Z"} {"log":"I0713 23:22:46.944665 1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"kube-system/metrics-server\" err=\"Operation cannot be fulfilled on deployments.apps \\\"metrics-server\\\": the object has been modified; please apply your changes to the latest version and try again\"\n","stream":"stderr","time":"2020-07-13T23:22:46.944772733Z"} {"log":"I0713 23:22:47.226449 1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"kube-system/metrics-server\" err=\"Operation cannot be fulfilled on deployments.apps \\\"metrics-server\\\": the object has been modified; please apply your changes to the latest version and try again\"\n","stream":"stderr","time":"2020-07-13T23:22:47.227344533Z"} {"log":"I0713 23:22:52.179465 1 event.go:291] \"Event occurred\" object=\"kube-system/azure-cni-networkmonitor\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: azure-cni-networkmonitor-pl76s\"\n","stream":"stderr","time":"2020-07-13T23:22:52.179563933Z"} {"log":"I0713 23:22:52.373307 1 event.go:291] \"Event occurred\" object=\"kube-system/csi-secrets-store\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: csi-secrets-store-nqq5m\"\n","stream":"stderr","time":"2020-07-13T23:22:52.373423133Z"} {"log":"I0713 23:22:52.454968 1 event.go:291] \"Event occurred\" object=\"kube-system/csi-secrets-store-provider-azure\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: csi-secrets-store-provider-azure-pxr4g\"\n","stream":"stderr","time":"2020-07-13T23:22:52.455213633Z"} {"log":"E0713 23:22:52.465676 1 daemon_controller.go:320] kube-system/csi-secrets-store failed with : error storing status for daemon set \u0026v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"csi-secrets-store\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"/apis/apps/v1/namespaces/kube-system/daemonsets/csi-secrets-store\", UID:\"f7d15005-2e57-400d-b343-0e0d0ab0fee8\", ResourceVersion:\"590\", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63730279372, loc:(*time.Location)(0x6f1daa0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"addonmanager.kubernetes.io/mode\":\"Reconcile\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", 
\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"addonmanager.kubernetes.io/mode\\\":\\\"Reconcile\\\"},\\\"name\\\":\\\"csi-secrets-store\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"csi-secrets-store\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"csi-secrets-store\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"args\\\":[\\\"--v=5\\\",\\\"--csi-address=/csi/csi.sock\\\",\\\"--kubelet-registration-path=/var/lib/kubelet/plugins/csi-secrets-store/csi.sock\\\"],\\\"env\\\":[{\\\"name\\\":\\\"KUBE_NODE_NAME\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"apiVersion\\\":\\\"v1\\\",\\\"fieldPath\\\":\\\"spec.nodeName\\\"}}}],\\\"image\\\":\\\"mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v1.2.0\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"lifecycle\\\":{\\\"preStop\\\":{\\\"exec\\\":{\\\"command\\\":[\\\"/bin/sh\\\",\\\"-c\\\",\\\"rm -rf /registration/secrets-store.csi.k8s.io-reg.sock\\\"]}}},\\\"name\\\":\\\"node-driver-registrar\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"200m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/csi\\\",\\\"name\\\":\\\"plugin-dir\\\"},{\\\"mountPath\\\":\\\"/registration\\\",\\\"name\\\":\\\"registration-dir\\\"}]},{\\\"args\\\":[\\\"--debug=true\\\",\\\"--endpoint=$(CSI_ENDPOINT)\\\",\\\"--nodeid=$(KUBE_NODE_NAME)\\\",\\\"--provider-volume=/etc/kubernetes/secrets-store-csi-providers\\\"],\\\"env\\\":[{\\\"name\\\":\\\"CSI_ENDPOINT\\\",\\\"value\\\":\\\"unix:///csi/csi.sock\\\"},{\\\"name\\\":\\\"KUBE_NODE_NAME\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"apiVersion\\\":\\\"v1\\\",\\\"fieldPath\\\":\\\"spec.nodeName\\\"}}}],\\\"image\\\":\\\"mcr.microsoft.com/k8s/csi/secrets-store/driver:v0.0.11\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"livenessProbe\\\":{\\\"failureThreshold\\\":5,\\\"httpGet\\\":{\\\"path\\\":\\\"/healthz\\\",\\\"port\\\":\\\"healthz\\\"},\\\"initialDelaySeconds\\\":30,\\\"periodSeconds\\\":15,\\\"timeoutSeconds\\\":10},\\\"name\\\":\\\"secrets-store\\\",\\\"ports\\\":[{\\\"containerPort\\\":9808,\\\"name\\\":\\\"healthz\\\",\\\"protocol\\\":\\\"TCP\\\"}],\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"200m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"50m\\\",\\\"memory\\\":\\\"100Mi\\\"}},\\\"securityContext\\\":{\\\"privileged\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/csi\\\",\\\"name\\\":\\\"plugin-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet/pods\\\",\\\"mountPropagation\\\":\\\"Bidirectional\\\",\\\"name\\\":\\\"mountpoint-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/secrets-store-csi-providers\\\",\\\"name\\\":\\\"providers-dir\\\"}]},{\\\"args\\\":[\\\"--csi-address=/csi/csi.sock\\\",\\\"--probe-timeout=3s\\\",\\\"--health-port=9808\\\"],\\\"image\\\":\\\"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v1.1.0\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"liveness-probe\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"200m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/csi\\\",\\\"name\\\":\\\"plugin-dir\\\"}]}],\\\"hostNetwork\\\":true,\\\"nodeSelector\\\":{\\\"kubernetes.io/os\\\":\\\"linux\\\"
},\\\"serviceAccountName\\\":\\\"secrets-store-csi-driver\\\",\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/var/lib/kubelet/pods\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"mountpoint-dir\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/var/lib/kubelet/plugins_registry/\\\",\\\"type\\\":\\\"Directory\\\"},\\\"name\\\":\\\"registration-dir\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/var/lib/kubelet/plugins/csi-secrets-store/\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"plugin-dir\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/kubernetes/secrets-store-csi-providers\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"providers-dir\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kubectl\", Operation:\"Update\", APIVersion:\"apps/v1\", Time:(*v1.Time)(0xc000f4f7e0), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc000f4f800)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000f4f820), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"csi-secrets-store\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"mountpoint-dir\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000f4f860), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:\"registration-dir\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000f4f880), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), 
RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:\"plugin-dir\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000f4f8a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:\"providers-dir\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000f4f8c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"node-driver-registrar\", Image:\"mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v1.2.0\", Command:[]string(nil), Args:[]string{\"--v=5\", \"--csi-address=/csi/csi.sock\", \"--kubelet-registration-path=/var/lib/kubelet/plugins/csi-secrets-store/csi.sock\"}, WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"KUBE_NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc000f4f900)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:200, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"200m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"10m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"20Mi\", Format:\"BinarySI\"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"plugin-dir\", ReadOnly:false, MountPath:\"/csi\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"registration-dir\", ReadOnly:false, MountPath:\"/registration\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(0xc001868190), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:\"secrets-store\", Image:\"mcr.microsoft.com/k8s/csi/secrets-store/driver:v0.0.11\", Command:[]string(nil), Args:[]string{\"--debug=true\", \"--endpoint=$(CSI_ENDPOINT)\", \"--nodeid=$(KUBE_NODE_NAME)\", \"--provider-volume=/etc/kubernetes/secrets-store-csi-providers\"}, WorkingDir:\"\", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:\"healthz\", HostPort:9808, ContainerPort:9808, Protocol:\"TCP\", HostIP:\"\"}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"CSI_ENDPOINT\", Value:\"unix:///csi/csi.sock\", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:\"KUBE_NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc000f4f9e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:200, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"200m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:50, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100Mi\", Format:\"BinarySI\"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"plugin-dir\", ReadOnly:false, MountPath:\"/csi\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, 
v1.VolumeMount{Name:\"mountpoint-dir\", ReadOnly:false, MountPath:\"/var/lib/kubelet/pods\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(0xc0018681c0), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"providers-dir\", ReadOnly:false, MountPath:\"/etc/kubernetes/secrets-store-csi-providers\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc00185a8a0), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc001684c00), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:\"liveness-probe\", Image:\"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v1.1.0\", Command:[]string(nil), Args:[]string{\"--csi-address=/csi/csi.sock\", \"--probe-timeout=3s\", \"--health-port=9808\"}, WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:200, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"200m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"10m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"20Mi\", Format:\"BinarySI\"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"plugin-dir\", ReadOnly:false, MountPath:\"/csi\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*i","stream":"stderr","time":"2020-07-13T23:22:52.465819033Z"} {"log":"nt64)(0xc000e900a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"secrets-store-csi-driver\", DeprecatedServiceAccount:\"secrets-store-csi-driver\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0001003f0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), 
SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0003d9618)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000e900d8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"csi-secrets-store\": the object has been modified; please apply your changes to the latest version and try again\n","stream":"stderr","time":"2020-07-13T23:22:52.465819033Z"} {"log":"E0713 23:22:57.377546 1 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request\n","stream":"stderr","time":"2020-07-13T23:22:57.378459633Z"} {"log":"I0713 23:22:57.377631 1 resource_quota_controller.go:434] syncing resource quota controller with updated resources from discovery: added: [secrets-store.csi.x-k8s.io/v1alpha1, Resource=secretproviderclasses], removed: []\n","stream":"stderr","time":"2020-07-13T23:22:57.378483133Z"} {"log":"I0713 23:22:57.377829 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for secretproviderclasses.secrets-store.csi.x-k8s.io\n","stream":"stderr","time":"2020-07-13T23:22:57.378489433Z"} {"log":"I0713 23:22:57.378064 1 reflector.go:207] Starting reflector *v1.PartialObjectMetadata (14h17m20.034293202s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90\n","stream":"stderr","time":"2020-07-13T23:22:57.378495033Z"} {"log":"I0713 23:22:57.378335 1 shared_informer.go:240] Waiting for caches to sync for resource quota\n","stream":"stderr","time":"2020-07-13T23:22:57.378506833Z"} {"log":"I0713 23:22:57.478454 1 shared_informer.go:247] Caches are synced for resource quota \n","stream":"stderr","time":"2020-07-13T23:22:57.478541933Z"} {"log":"I0713 23:22:57.478610 1 resource_quota_controller.go:453] synced quota controller\n","stream":"stderr","time":"2020-07-13T23:22:57.478657233Z"} {"log":"I0713 23:22:58.433553 1 request.go:645] Throttling request took 1.046204s, request: GET:https://10.255.255.5:443/apis/authorization.k8s.io/v1beta1?timeout=32s\n","stream":"stderr","time":"2020-07-13T23:22:58.433662233Z"} {"log":"W0713 23:22:59.234882 1 garbagecollector.go:642] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]\n","stream":"stderr","time":"2020-07-13T23:22:59.235000433Z"} {"log":"I0713 23:22:59.235155 1 garbagecollector.go:199] syncing garbage collector with updated resources from discovery (attempt 1): added: [secrets-store.csi.x-k8s.io/v1alpha1, Resource=secretproviderclasses], removed: []\n","stream":"stderr","time":"2020-07-13T23:22:59.235212833Z"} {"log":"E0713 23:22:59.250256 1 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request\n","stream":"stderr","time":"2020-07-13T23:22:59.250612333Z"} {"log":"E0713 23:22:59.736428 1 memcache.go:111] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request\n","stream":"stderr","time":"2020-07-13T23:22:59.737581433Z"} {"log":"I0713 23:22:59.737441 1 shared_informer.go:240] Waiting for caches to sync for garbage 
collector\n","stream":"stderr","time":"2020-07-13T23:22:59.737599633Z"} {"log":"I0713 23:22:59.737484 1 shared_informer.go:247] Caches are synced for garbage collector \n","stream":"stderr","time":"2020-07-13T23:22:59.737605933Z"} {"log":"I0713 23:22:59.737493 1 garbagecollector.go:240] synced garbage collector\n","stream":"stderr","time":"2020-07-13T23:22:59.737611533Z"} {"log":"E0713 23:23:27.930263 1 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request\n","stream":"stderr","time":"2020-07-13T23:23:27.930382033Z"} {"log":"I0713 23:23:31.337927 1 request.go:645] Throttling request took 1.0488087s, request: GET:https://10.255.255.5:443/apis/networking.k8s.io/v1beta1?timeout=32s\n","stream":"stderr","time":"2020-07-13T23:23:31.338216333Z"} {"log":"W0713 23:23:32.139117 1 garbagecollector.go:642] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]\n","stream":"stderr","time":"2020-07-13T23:23:32.139219133Z"} {"log":"W0713 23:23:42.284559 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName=\"8924k8s000\" does not exist\n","stream":"stderr","time":"2020-07-13T23:23:42.284684133Z"} {"log":"I0713 23:23:42.299664 1 ttl_controller.go:273] \"Changed ttl annotation\" node=\"8924k8s000\" new_ttl=\"0s\"\n","stream":"stderr","time":"2020-07-13T23:23:42.300038433Z"} {"log":"W0713 23:23:45.280599 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName=\"8924k8s001\" does not exist\n","stream":"stderr","time":"2020-07-13T23:23:45.280701433Z"} {"log":"I0713 23:23:45.290449 1 ttl_controller.go:273] \"Changed ttl annotation\" node=\"8924k8s001\" new_ttl=\"0s\"\n","stream":"stderr","time":"2020-07-13T23:23:45.290540633Z"} {"log":"I0713 23:23:46.352760 1 node_lifecycle_controller.go:773] Controller observed a new Node: \"8924k8s000\"\n","stream":"stderr","time":"2020-07-13T23:23:46.352880233Z"} {"log":"I0713 23:23:46.352780 1 controller_utils.go:172] Recording Registered Node 8924k8s000 in Controller event message for node 8924k8s000\n","stream":"stderr","time":"2020-07-13T23:23:46.352896533Z"} {"log":"I0713 23:23:46.352803 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: southcentralus:\u0000:1\n","stream":"stderr","time":"2020-07-13T23:23:46.352902133Z"} {"log":"I0713 23:23:46.352818 1 node_lifecycle_controller.go:773] Controller observed a new Node: \"8924k8s001\"\n","stream":"stderr","time":"2020-07-13T23:23:46.352907833Z"} {"log":"I0713 23:23:46.352825 1 controller_utils.go:172] Recording Registered Node 8924k8s001 in Controller event message for node 8924k8s001\n","stream":"stderr","time":"2020-07-13T23:23:46.352912833Z"} ... skipping 8 lines ... 
{"log":"I0713 23:23:49.560534 1 event.go:291] \"Event occurred\" object=\"kube-system/coredns\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set coredns-779d547755 to 1\"\n","stream":"stderr","time":"2020-07-13T23:23:49.564734933Z"} {"log":"I0713 23:23:49.577629 1 event.go:291] \"Event occurred\" object=\"kube-system/coredns-779d547755\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: coredns-779d547755-l5fn7\"\n","stream":"stderr","time":"2020-07-13T23:23:49.577710233Z"} {"log":"I0713 23:23:49.590557 1 event.go:291] \"Event occurred\" object=\"kube-system/coredns\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set coredns-7ff794b897 to 0\"\n","stream":"stderr","time":"2020-07-13T23:23:49.597982833Z"} {"log":"I0713 23:23:49.590916 1 replica_set.go:595] \"Too many replicas\" replicaSet=\"kube-system/coredns-7ff794b897\" need=0 deleting=1\n","stream":"stderr","time":"2020-07-13T23:23:49.597997633Z"} {"log":"I0713 23:23:49.590944 1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"kube-system/coredns-7ff794b897\" relatedReplicaSets=[coredns-779d547755 coredns-7ff794b897]\n","stream":"stderr","time":"2020-07-13T23:23:49.598002733Z"} {"log":"I0713 23:23:49.591007 1 controller_utils.go:604] \"Deleting pod\" controller=\"coredns-7ff794b897\" pod=\"kube-system/coredns-7ff794b897-6vt24\"\n","stream":"stderr","time":"2020-07-13T23:23:49.598018233Z"} {"log":"I0713 23:23:49.613591 1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"kube-system/coredns\" err=\"Operation cannot be fulfilled on deployments.apps \\\"coredns\\\": the object has been modified; please apply your changes to the latest version and try again\"\n","stream":"stderr","time":"2020-07-13T23:23:49.622061333Z"} {"log":"I0713 23:23:49.615414 1 event.go:291] \"Event occurred\" object=\"kube-system/coredns-7ff794b897\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: coredns-7ff794b897-6vt24\"\n","stream":"stderr","time":"2020-07-13T23:23:49.622072933Z"} {"log":"I0713 23:23:52.319219 1 controller.go:708] Detected change in list of current cluster nodes. New node set: map[8924k8s000:{} 8924k8s010:{}]\n","stream":"stderr","time":"2020-07-13T23:23:52.319354033Z"} {"log":"I0713 23:23:52.319280 1 controller.go:716] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes\n","stream":"stderr","time":"2020-07-13T23:23:52.319380233Z"} {"log":"I0713 23:23:55.328847 1 controller.go:708] Detected change in list of current cluster nodes. New node set: map[8924k8s000:{} 8924k8s001:{} 8924k8s010:{}]\n","stream":"stderr","time":"2020-07-13T23:23:55.328964133Z"} {"log":"I0713 23:23:55.328875 1 controller.go:716] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes\n","stream":"stderr","time":"2020-07-13T23:23:55.328980533Z"} {"log":"I0713 23:23:56.363584 1 node_lifecycle_controller.go:896] Node 8924k8s000 is healthy again, removing all taints\n","stream":"stderr","time":"2020-07-13T23:23:56.363673833Z"} ... skipping 313 lines ... 
Jul 13 23:21:36.904716 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:36.903454 5936 flags.go:59] FLAG: --experimental-allocatable-ignore-eviction="false" Jul 13 23:21:36.904716 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:36.903460 5936 flags.go:59] FLAG: --experimental-bootstrap-kubeconfig="" Jul 13 23:21:36.907041 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:36.903466 5936 flags.go:59] FLAG: --experimental-check-node-capabilities-before-mount="false" Jul 13 23:21:36.907041 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:36.903472 5936 flags.go:59] FLAG: --experimental-dockershim-root-directory="/var/lib/dockershim" Jul 13 23:21:36.907041 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:36.903478 5936 flags.go:59] FLAG: --experimental-kernel-memcg-notification="false" Jul 13 23:21:36.907041 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:36.903484 5936 flags.go:59] FLAG: --experimental-mounter-path="" Jul 13 23:21:36.907041 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:36.903489 5936 flags.go:59] FLAG: --fail-swap-on="true" Jul 13 23:21:36.907041 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:36.903494 5936 flags.go:59] FLAG: --feature-gates="KubeletPodResources=false,RotateKubeletServerCertificate=true" Jul 13 23:21:36.907041 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:36.903507 5936 flags.go:59] FLAG: --file-check-frequency="20s" Jul 13 23:21:36.907041 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:36.903513 5936 flags.go:59] FLAG: --global-housekeeping-interval="1m0s" Jul 13 23:21:36.907041 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:36.903519 5936 flags.go:59] FLAG: --hairpin-mode="promiscuous-bridge" Jul 13 23:21:36.907041 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:36.903529 5936 flags.go:59] FLAG: --healthz-bind-address="127.0.0.1" Jul 13 23:21:36.907041 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:36.903535 5936 flags.go:59] FLAG: --healthz-port="10248" ... skipping 142 lines ... Jul 13 23:21:36.998601 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:36.985797 5936 certificate_manager.go:282] Certificate rotation is enabled. 
Jul 13 23:21:36.998601 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:36.996913 5936 dynamic_cafile_content.go:129] Loaded a new CA Bundle and Verifier for "client-ca-bundle::/etc/kubernetes/certs/ca.crt" Jul 13 23:21:36.998601 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:36.997397 5936 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/certs/ca.crt Jul 13 23:21:36.998601 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:36.997431 5936 certificate_manager.go:491] Current certificate CN (client) does not match requested CN (system:node:k8s-master-89242181-0) Jul 13 23:21:36.998601 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:36.997442 5936 certificate_manager.go:412] Rotating certificates Jul 13 23:21:37.005028 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.004905 5936 manager.go:165] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service" Jul 13 23:21:37.018155 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:37.018084 5936 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://10.255.255.5:443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:37.054195 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.051686 5936 fs.go:127] Filesystem UUIDs: map[076255b1-d152-40eb-b89c-ea0968b694d8:/dev/sda1 25DA-4525:/dev/sdb15 2c6071d4-c6e7-4dc9-97af-9ef85fb979a3:/dev/sdc1 b0dd9d06-536e-4144-ac5f-6db8e20295b3:/dev/sdb1] Jul 13 23:21:37.054195 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.051725 5936 fs.go:128] Filesystem partitions: map[/dev/sda1:{mountpoint:/mnt major:8 minor:1 fsType:ext4 blockSize:0} /dev/sdb1:{mountpoint:/ major:8 minor:17 fsType:ext4 blockSize:0} /dev/sdc1:{mountpoint:/var/lib/etcddisk major:8 minor:33 fsType:ext4 blockSize:0} /dev/shm:{mountpoint:/dev/shm major:0 minor:25 fsType:tmpfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/lock:{mountpoint:/run/lock major:0 minor:26 fsType:tmpfs blockSize:0} /sys/fs/cgroup:{mountpoint:/sys/fs/cgroup major:0 minor:27 fsType:tmpfs blockSize:0}] Jul 13 23:21:37.054195 k8s-master-89242181-0 kubelet[5936]: W0713 23:21:37.051973 5936 nvidia.go:61] NVIDIA GPU metrics will not be available: no NVIDIA devices found Jul 13 23:21:37.055371 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.055186 5936 manager.go:213] Machine: {Timestamp:2020-07-13 23:21:37.054915933 +0000 UTC m=+0.275915501 NumCores:2 NumPhysicalCores:1 NumSockets:1 CpuFrequency:2593905 MemoryCapacity:8349155328 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:07bbad1282bc4be5913528c0d2899a28 SystemUUID:cd7aba4c-4b4d-db4a-b891-e35223ea0cb1 BootID:e1e4be2c-c0c5-4eff-8c51-b3c6b8d15826 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:25 Capacity:4174577664 Type:vfs Inodes:1019184 HasInodes:true} {Device:/run/lock DeviceMajor:0 DeviceMinor:26 Capacity:5242880 Type:vfs Inodes:1019184 HasInodes:true} {Device:/sys/fs/cgroup DeviceMajor:0 DeviceMinor:27 Capacity:4174577664 Type:vfs Inodes:1019184 HasInodes:true} {Device:/dev/sdc1 DeviceMajor:8 DeviceMinor:33 Capacity:540052721664 Type:vfs Inodes:33554432 HasInodes:true} {Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity:52574978048 Type:vfs Inodes:3276800 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:834916352 
Type:vfs Inodes:1019184 HasInodes:true} {Device:/dev/sdb1 DeviceMajor:8 DeviceMinor:17 Capacity:31036686336 Type:vfs Inodes:3870720 HasInodes:true}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:53687091200 Scheduler:mq-deadline} 8:16:{Name:sdb Major:8 Minor:16 Size:32213303296 Scheduler:mq-deadline} 8:32:{Name:sdc Major:8 Minor:32 Size:549755813888 Scheduler:mq-deadline}] NetworkDevices:[{Name:eth0 MacAddress:00:0d:3a:5e:cb:37 Speed:50000 Mtu:1500}] Topology:[{Id:0 Memory:8349155328 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0 1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:1048576 Type:Unified Level:2}] SocketID:0}] Caches:[{Size:37486592 Type:Unified Level:3}]}] CloudProvider:Azure InstanceType:Unknown InstanceID:cd7aba4c-4b4d-db4a-b891-e35223ea0cb1} Jul 13 23:21:37.055494 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.055372 5936 manager_no_libpfm.go:28] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jul 13 23:21:37.072112 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.072068 5936 manager.go:229] Version: {KernelVersion:5.3.0-1031-azure ContainerOsVersion:Ubuntu 18.04.4 LTS DockerVersion:3.0.13+azure DockerAPIVersion:1.40 CadvisorVersion: CadvisorRevision:} ... skipping 7 lines ... Jul 13 23:21:37.074618 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.072969 5936 client.go:77] Connecting to docker on unix:///var/run/docker.sock Jul 13 23:21:37.074618 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.072985 5936 client.go:94] Start docker client with request timeout=2m0s Jul 13 23:21:37.083410 k8s-master-89242181-0 kubelet[5936]: W0713 23:21:37.082068 5936 docker_service.go:564] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth" Jul 13 23:21:37.083410 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.082096 5936 docker_service.go:241] Hairpin mode set to "hairpin-veth" Jul 13 23:21:37.358131 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.358098 5936 plugins.go:168] Loaded network plugin "cni" Jul 13 23:21:37.358304 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.358167 5936 docker_service.go:256] Docker cri networking managed by cni Jul 13 23:21:37.366264 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.366179 5936 docker_service.go:261] Docker Info: &{ID:SQKO:SJDE:JB3O:7NU5:FLNG:MYJM:6YTY:5P3R:MCTE:W6SK:XOHA:TI6I Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:85 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2020-07-13T23:21:37.359252833Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.3.0-1031-azure OperatingSystem:Ubuntu 18.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc000951d50 NCPU:2 MemTotal:8349155328 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:k8s-master-89242181-0 Labels:[] 
ExperimentalBuild:false ServerVersion:3.0.13+azure ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:true Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support]} Jul 13 23:21:37.366264 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.366261 5936 docker_service.go:274] Setting cgroupDriver to cgroupfs Jul 13 23:21:37.366486 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.366336 5936 kubelet_dockershim.go:67] Starting the GRPC server for the docker CRI shim. Jul 13 23:21:37.366486 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.366357 5936 docker_server.go:61] Start dockershim grpc server Jul 13 23:21:37.377827 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.377806 5936 remote_runtime.go:59] parsed scheme: "" Jul 13 23:21:37.377827 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.377828 5936 remote_runtime.go:59] scheme "" not registered, fallback to default scheme Jul 13 23:21:37.378031 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.377863 5936 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>} ... skipping 9 lines ... Jul 13 23:21:37.378031 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.378029 5936 kubelet.go:273] Watching apiserver Jul 13 23:21:37.380888 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.380866 5936 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00056c040, {CONNECTING <nil>} Jul 13 23:21:37.381076 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.381063 5936 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00056c5f0, {CONNECTING <nil>} Jul 13 23:21:37.381411 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.381393 5936 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00056c040, {READY <nil>} Jul 13 23:21:37.381493 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.381435 5936 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc00056c5f0, {READY <nil>} Jul 13 23:21:37.390569 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.390546 5936 reflector.go:207] Starting reflector *v1.Pod (0s) from k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46 Jul 13 23:21:37.390938 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:37.390919 5936 reflector.go:127] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.255.255.5:443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-master-89242181-0&limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:37.391128 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.391110 5936 reflector.go:207] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134 Jul 13 23:21:37.391407 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:37.391386 5936 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: 
Get "https://10.255.255.5:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:37.391558 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.391498 5936 reflector.go:207] Starting reflector *v1.Node (0s) from k8s.io/kubernetes/pkg/kubelet/kubelet.go:438 Jul 13 23:21:37.391812 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:37.391792 5936 reflector.go:127] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.255.255.5:443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-master-89242181-0&limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:37.563988 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:37.563952 5936 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated. Jul 13 23:21:37.563988 k8s-master-89242181-0 kubelet[5936]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors Jul 13 23:21:37.564489 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.564467 5936 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token Jul 13 23:21:37.571485 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.571462 5936 kuberuntime_manager.go:208] Container runtime docker initialized, version: 3.0.13+azure, apiVersion: 1.40.0 Jul 13 23:21:37.571726 k8s-master-89242181-0 kubelet[5936]: W0713 23:21:37.571709 5936 probe.go:268] Flexvolume plugin directory at /etc/kubernetes/volumeplugins does not exist. Recreating. Jul 13 23:21:37.571901 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.571865 5936 plugins.go:631] Loaded volume plugin "kubernetes.io/gce-pd" ... skipping 20 lines ... Jul 13 23:21:37.572205 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.572137 5936 plugins.go:631] Loaded volume plugin "kubernetes.io/portworx-volume" Jul 13 23:21:37.572205 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.572151 5936 plugins.go:631] Loaded volume plugin "kubernetes.io/scaleio" Jul 13 23:21:37.572205 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.572161 5936 plugins.go:631] Loaded volume plugin "kubernetes.io/local-volume" Jul 13 23:21:37.572205 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.572172 5936 plugins.go:631] Loaded volume plugin "kubernetes.io/storageos" Jul 13 23:21:37.572205 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.572203 5936 plugins.go:631] Loaded volume plugin "kubernetes.io/csi" Jul 13 23:21:37.572870 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.572475 5936 server.go:1147] Started kubelet Jul 13 23:21:37.573192 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:37.573111 5936 kubelet.go:1211] Image garbage collection failed once. 
Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache Jul 13 23:21:37.573787 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:37.573763 5936 event.go:273] Unable to write event: 'Post "https://10.255.255.5:443/api/v1/namespaces/default/events": dial tcp 10.255.255.5:443: connect: connection refused' (may retry after sleeping) Jul 13 23:21:37.574465 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.574075 5936 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer Jul 13 23:21:37.576564 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.576541 5936 volume_manager.go:263] The desired_state_of_world populator starts Jul 13 23:21:37.576564 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.576559 5936 volume_manager.go:265] Starting Kubelet Volume Manager Jul 13 23:21:37.576798 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.576779 5936 reflector.go:207] Starting reflector *v1.CSIDriver (0s) from k8s.io/client-go/informers/factory.go:134 Jul 13 23:21:37.577074 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.577043 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:37.577074 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.574105 5936 server.go:150] Starting to listen on 0.0.0.0:10250 Jul 13 23:21:37.577865 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.577845 5936 server.go:416] Adding debug handlers to kubelet server. Jul 13 23:21:37.579935 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.579437 5936 desired_state_of_world_populator.go:139] Desired state populator starts to run Jul 13 23:21:37.580499 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:37.580463 5936 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:37.586651 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:37.586621 5936 controller.go:136] failed to ensure node lease exists, will retry in 200ms, error: Get "https://10.255.255.5:443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8s-master-89242181-0?timeout=10s": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:37.605456 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.605436 5936 factory.go:55] Registering systemd factory Jul 13 23:21:37.615015 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.614959 5936 factory.go:370] Registering Docker factory Jul 13 23:21:37.618196 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.615497 5936 factory.go:101] Registering Raw factory Jul 13 23:21:37.618196 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.616701 5936 manager.go:1195] Started watching for new ooms in manager Jul 13 23:21:37.618196 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.617181 5936 manager.go:301] Starting recovery of all containers Jul 13 23:21:37.645134 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.645112 5936 manager.go:306] Recovery completed ... skipping 24 lines ... 
Jul 13 23:21:37.741156 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.741152 5936 cpu_manager.go:185] [cpumanager] reconciling every 10s Jul 13 23:21:37.741277 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.741170 5936 state_mem.go:36] [cpumanager] initializing new in-memory state store Jul 13 23:21:37.745977 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.745969 5936 status_manager.go:158] Starting to sync pod status with apiserver Jul 13 23:21:37.746044 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.745990 5936 kubelet.go:1727] Starting kubelet main sync loop. Jul 13 23:21:37.746093 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:37.746036 5936 kubelet.go:1751] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful] Jul 13 23:21:37.751772 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.751749 5936 reflector.go:207] Starting reflector *v1beta1.RuntimeClass (0s) from k8s.io/client-go/informers/factory.go:134 Jul 13 23:21:37.752623 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:37.752597 5936 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.RuntimeClass: failed to list *v1beta1.RuntimeClass: Get "https://10.255.255.5:443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:37.787207 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:37.787179 5936 controller.go:136] failed to ensure node lease exists, will retry in 400ms, error: Get "https://10.255.255.5:443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8s-master-89242181-0?timeout=10s": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:37.789364 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:37.789348 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:37.792740 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.792725 5936 policy_none.go:43] [cpumanager] none policy: Start Jul 13 23:21:37.805247 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.805227 5936 manager.go:235] Starting Device Plugin manager Jul 13 23:21:37.805328 k8s-master-89242181-0 kubelet[5936]: W0713 23:21:37.805264 5936 manager.go:596] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found Jul 13 23:21:37.805392 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.805361 5936 manager.go:277] Serving device plugin registration server on "/var/lib/kubelet/device-plugins/kubelet.sock" Jul 13 23:21:37.805438 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.805419 5936 plugin_watcher.go:54] Plugin Watcher Start at /var/lib/kubelet/plugins_registry Jul 13 23:21:37.805489 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.805484 5936 plugin_manager.go:112] The desired_state_of_world populator (plugin watcher) starts Jul 13 23:21:37.805531 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.805491 5936 plugin_manager.go:114] Starting Kubelet Plugin Manager Jul 13 23:21:37.805808 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:37.805792 5936 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "k8s-master-89242181-0" not found Jul 13 23:21:37.806159 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.806128 5936 container_manager_linux.go:492] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service Jul 13 23:21:37.846221 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.846184 
5936 kubelet.go:1813] SyncLoop (ADD, "file"): "kube-addon-manager-k8s-master-89242181-0_kube-system(5ddf9fa33d2dcb615b0173624c2621da), kube-apiserver-k8s-master-89242181-0_kube-system(921e251ad9ac3df5e4a6fed98b1e313e), kube-controller-manager-k8s-master-89242181-0_kube-system(dbf3fc11cd6694dd893ee9fecfa6bd0e), kube-scheduler-k8s-master-89242181-0_kube-system(3b0a6be9b8cd2beeffa5dc5bb3baa251)" Jul 13 23:21:37.846338 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.846234 5936 topology_manager.go:233] [topologymanager] Topology Admit Handler Jul 13 23:21:37.870464 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.870396 5936 kubelet_node_status.go:334] Setting node annotation to enable volume controller attach/detach Jul 13 23:21:37.870464 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.870428 5936 kubelet_node_status.go:382] Adding node label from cloud provider: beta.kubernetes.io/instance-type=Standard_D2_v3 Jul 13 23:21:37.870464 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.870438 5936 kubelet_node_status.go:384] Adding node label from cloud provider: node.kubernetes.io/instance-type=Standard_D2_v3 ... skipping 24 lines ... Jul 13 23:21:37.893526 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.893453 5936 kubelet_node_status.go:397] Adding node label from cloud provider: topology.kubernetes.io/zone=0 Jul 13 23:21:37.893526 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.893459 5936 kubelet_node_status.go:401] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=southcentralus Jul 13 23:21:37.893526 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.893466 5936 kubelet_node_status.go:403] Adding node label from cloud provider: topology.kubernetes.io/region=southcentralus Jul 13 23:21:37.912742 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.912724 5936 kubelet_node_status.go:526] Recording NodeHasSufficientMemory event message for node k8s-master-89242181-0 Jul 13 23:21:37.912742 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.912747 5936 kubelet_node_status.go:526] Recording NodeHasNoDiskPressure event message for node k8s-master-89242181-0 Jul 13 23:21:37.912742 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.912758 5936 kubelet_node_status.go:526] Recording NodeHasSufficientPID event message for node k8s-master-89242181-0 Jul 13 23:21:37.917980 k8s-master-89242181-0 kubelet[5936]: W0713 23:21:37.915752 5936 status_manager.go:556] Failed to get status for pod "kube-addon-manager-k8s-master-89242181-0_kube-system(5ddf9fa33d2dcb615b0173624c2621da)": Get "https://10.255.255.5:443/api/v1/namespaces/kube-system/pods/kube-addon-manager-k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:37.922411 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.922394 5936 kubelet_node_status.go:526] Recording NodeHasSufficientMemory event message for node k8s-master-89242181-0 Jul 13 23:21:37.922411 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.922420 5936 kubelet_node_status.go:526] Recording NodeHasNoDiskPressure event message for node k8s-master-89242181-0 Jul 13 23:21:37.922568 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.922431 5936 kubelet_node_status.go:526] Recording NodeHasSufficientPID event message for node k8s-master-89242181-0 Jul 13 23:21:37.922670 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.922573 5936 topology_manager.go:233] [topologymanager] Topology Admit Handler Jul 13 23:21:37.922836 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.922821 5936 
kubelet_node_status.go:334] Setting node annotation to enable volume controller attach/detach Jul 13 23:21:37.922913 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.922844 5936 kubelet_node_status.go:382] Adding node label from cloud provider: beta.kubernetes.io/instance-type=Standard_D2_v3 ... skipping 9 lines ... Jul 13 23:21:37.934658 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.934547 5936 kubelet_node_status.go:397] Adding node label from cloud provider: topology.kubernetes.io/zone=0 Jul 13 23:21:37.934658 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.934554 5936 kubelet_node_status.go:401] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=southcentralus Jul 13 23:21:37.934658 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.934561 5936 kubelet_node_status.go:403] Adding node label from cloud provider: topology.kubernetes.io/region=southcentralus Jul 13 23:21:37.935915 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.935901 5936 kubelet_node_status.go:526] Recording NodeHasSufficientMemory event message for node k8s-master-89242181-0 Jul 13 23:21:37.936013 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.935921 5936 kubelet_node_status.go:526] Recording NodeHasNoDiskPressure event message for node k8s-master-89242181-0 Jul 13 23:21:37.936013 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.935932 5936 kubelet_node_status.go:526] Recording NodeHasSufficientPID event message for node k8s-master-89242181-0 Jul 13 23:21:37.938566 k8s-master-89242181-0 kubelet[5936]: W0713 23:21:37.938516 5936 status_manager.go:556] Failed to get status for pod "kube-apiserver-k8s-master-89242181-0_kube-system(921e251ad9ac3df5e4a6fed98b1e313e)": Get "https://10.255.255.5:443/api/v1/namespaces/kube-system/pods/kube-apiserver-k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:37.939432 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.939416 5936 kubelet_node_status.go:334] Setting node annotation to enable volume controller attach/detach Jul 13 23:21:37.939527 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.939439 5936 kubelet_node_status.go:382] Adding node label from cloud provider: beta.kubernetes.io/instance-type=Standard_D2_v3 Jul 13 23:21:37.939527 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.939447 5936 kubelet_node_status.go:384] Adding node label from cloud provider: node.kubernetes.io/instance-type=Standard_D2_v3 Jul 13 23:21:37.939527 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.939456 5936 kubelet_node_status.go:395] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=0 Jul 13 23:21:37.939527 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.939464 5936 kubelet_node_status.go:397] Adding node label from cloud provider: topology.kubernetes.io/zone=0 Jul 13 23:21:37.939734 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.939532 5936 kubelet_node_status.go:401] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=southcentralus ... skipping 52 lines ... 
Jul 13 23:21:37.990378 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.990343 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "addons" (UniqueName: "kubernetes.io/host-path/5ddf9fa33d2dcb615b0173624c2621da-addons") pod "kube-addon-manager-k8s-master-89242181-0" (UID: "5ddf9fa33d2dcb615b0173624c2621da") Jul 13 23:21:37.990746 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.990468 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "etc-kubernetes" (UniqueName: "kubernetes.io/host-path/5ddf9fa33d2dcb615b0173624c2621da-etc-kubernetes") pod "kube-addon-manager-k8s-master-89242181-0" (UID: "5ddf9fa33d2dcb615b0173624c2621da") Jul 13 23:21:37.990746 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.990482 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "msi" (UniqueName: "kubernetes.io/host-path/5ddf9fa33d2dcb615b0173624c2621da-msi") pod "kube-addon-manager-k8s-master-89242181-0" (UID: "5ddf9fa33d2dcb615b0173624c2621da") Jul 13 23:21:37.997087 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.997068 5936 kubelet_node_status.go:526] Recording NodeHasSufficientMemory event message for node k8s-master-89242181-0 Jul 13 23:21:37.997186 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.997108 5936 kubelet_node_status.go:526] Recording NodeHasNoDiskPressure event message for node k8s-master-89242181-0 Jul 13 23:21:37.997186 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:37.997118 5936 kubelet_node_status.go:526] Recording NodeHasSufficientPID event message for node k8s-master-89242181-0 Jul 13 23:21:38.001487 k8s-master-89242181-0 kubelet[5936]: W0713 23:21:38.001454 5936 status_manager.go:556] Failed to get status for pod "kube-controller-manager-k8s-master-89242181-0_kube-system(dbf3fc11cd6694dd893ee9fecfa6bd0e)": Get "https://10.255.255.5:443/api/v1/namespaces/kube-system/pods/kube-controller-manager-k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:38.014361 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.014341 5936 kubelet_node_status.go:526] Recording NodeHasSufficientMemory event message for node k8s-master-89242181-0 Jul 13 23:21:38.014361 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.014364 5936 kubelet_node_status.go:526] Recording NodeHasNoDiskPressure event message for node k8s-master-89242181-0 Jul 13 23:21:38.014506 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.014373 5936 kubelet_node_status.go:526] Recording NodeHasSufficientPID event message for node k8s-master-89242181-0 Jul 13 23:21:38.018538 k8s-master-89242181-0 kubelet[5936]: W0713 23:21:38.018507 5936 status_manager.go:556] Failed to get status for pod "kube-scheduler-k8s-master-89242181-0_kube-system(3b0a6be9b8cd2beeffa5dc5bb3baa251)": Get "https://10.255.255.5:443/api/v1/namespaces/kube-system/pods/kube-scheduler-k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:38.089572 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:38.089554 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:38.090422 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.090385 5936 reconciler.go:269] operationExecutor.MountVolume started for volume "etc-kubernetes" (UniqueName: "kubernetes.io/host-path/921e251ad9ac3df5e4a6fed98b1e313e-etc-kubernetes") pod "kube-apiserver-k8s-master-89242181-0" (UID: "921e251ad9ac3df5e4a6fed98b1e313e") Jul 13 23:21:38.090858 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.090842 5936 operation_generator.go:663] 
MountVolume.SetUp succeeded for volume "etc-kubernetes" (UniqueName: "kubernetes.io/host-path/921e251ad9ac3df5e4a6fed98b1e313e-etc-kubernetes") pod "kube-apiserver-k8s-master-89242181-0" (UID: "921e251ad9ac3df5e4a6fed98b1e313e") Jul 13 23:21:38.090943 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.090898 5936 reconciler.go:269] operationExecutor.MountVolume started for volume "sock" (UniqueName: "kubernetes.io/host-path/921e251ad9ac3df5e4a6fed98b1e313e-sock") pod "kube-apiserver-k8s-master-89242181-0" (UID: "921e251ad9ac3df5e4a6fed98b1e313e") Jul 13 23:21:38.091002 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.090991 5936 reconciler.go:269] operationExecutor.MountVolume started for volume "etc-kubernetes" (UniqueName: "kubernetes.io/host-path/3b0a6be9b8cd2beeffa5dc5bb3baa251-etc-kubernetes") pod "kube-scheduler-k8s-master-89242181-0" (UID: "3b0a6be9b8cd2beeffa5dc5bb3baa251") Jul 13 23:21:38.091056 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.091044 5936 reconciler.go:269] operationExecutor.MountVolume started for volume "var-lib-kubelet" (UniqueName: "kubernetes.io/host-path/921e251ad9ac3df5e4a6fed98b1e313e-var-lib-kubelet") pod "kube-apiserver-k8s-master-89242181-0" (UID: "921e251ad9ac3df5e4a6fed98b1e313e") ... skipping 13 lines ... Jul 13 23:21:38.092004 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.091988 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "msi" (UniqueName: "kubernetes.io/host-path/3b0a6be9b8cd2beeffa5dc5bb3baa251-msi") pod "kube-scheduler-k8s-master-89242181-0" (UID: "3b0a6be9b8cd2beeffa5dc5bb3baa251") Jul 13 23:21:38.092094 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.092071 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "etc-ssl" (UniqueName: "kubernetes.io/host-path/dbf3fc11cd6694dd893ee9fecfa6bd0e-etc-ssl") pod "kube-controller-manager-k8s-master-89242181-0" (UID: "dbf3fc11cd6694dd893ee9fecfa6bd0e") Jul 13 23:21:38.092182 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.092151 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "var-lib-kubelet" (UniqueName: "kubernetes.io/host-path/dbf3fc11cd6694dd893ee9fecfa6bd0e-var-lib-kubelet") pod "kube-controller-manager-k8s-master-89242181-0" (UID: "dbf3fc11cd6694dd893ee9fecfa6bd0e") Jul 13 23:21:38.092244 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.092231 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "msi" (UniqueName: "kubernetes.io/host-path/dbf3fc11cd6694dd893ee9fecfa6bd0e-msi") pod "kube-controller-manager-k8s-master-89242181-0" (UID: "dbf3fc11cd6694dd893ee9fecfa6bd0e") Jul 13 23:21:38.092330 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.092305 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "var-lib-kubelet" (UniqueName: "kubernetes.io/host-path/3b0a6be9b8cd2beeffa5dc5bb3baa251-var-lib-kubelet") pod "kube-scheduler-k8s-master-89242181-0" (UID: "3b0a6be9b8cd2beeffa5dc5bb3baa251") Jul 13 23:21:38.092415 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.092384 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "msi" (UniqueName: "kubernetes.io/host-path/921e251ad9ac3df5e4a6fed98b1e313e-msi") pod "kube-apiserver-k8s-master-89242181-0" (UID: "921e251ad9ac3df5e4a6fed98b1e313e") Jul 13 23:21:38.187749 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:38.187659 5936 controller.go:136] failed to ensure node lease exists, will retry in 800ms, error: Get 
"https://10.255.255.5:443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8s-master-89242181-0?timeout=10s": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:38.189681 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:38.189662 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:38.217917 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.217882 5936 kuberuntime_manager.go:419] No sandbox for pod "kube-addon-manager-k8s-master-89242181-0_kube-system(5ddf9fa33d2dcb615b0173624c2621da)" can be found. Need to start a new one Jul 13 23:21:38.245251 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.245226 5936 kuberuntime_manager.go:419] No sandbox for pod "kube-apiserver-k8s-master-89242181-0_kube-system(921e251ad9ac3df5e4a6fed98b1e313e)" can be found. Need to start a new one Jul 13 23:21:38.245490 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:38.245473 5936 reflector.go:127] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.255.255.5:443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-master-89242181-0&limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:38.289812 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:38.289790 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:38.301432 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.301396 5936 kuberuntime_manager.go:419] No sandbox for pod "kube-controller-manager-k8s-master-89242181-0_kube-system(dbf3fc11cd6694dd893ee9fecfa6bd0e)" can be found. Need to start a new one Jul 13 23:21:38.318423 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.318404 5936 kuberuntime_manager.go:419] No sandbox for pod "kube-scheduler-k8s-master-89242181-0_kube-system(3b0a6be9b8cd2beeffa5dc5bb3baa251)" can be found. Need to start a new one Jul 13 23:21:38.359366 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.359346 5936 kubelet_node_status.go:334] Setting node annotation to enable volume controller attach/detach Jul 13 23:21:38.359487 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.359380 5936 kubelet_node_status.go:382] Adding node label from cloud provider: beta.kubernetes.io/instance-type=Standard_D2_v3 Jul 13 23:21:38.359487 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.359392 5936 kubelet_node_status.go:384] Adding node label from cloud provider: node.kubernetes.io/instance-type=Standard_D2_v3 ... skipping 5 lines ... 
Jul 13 23:21:38.472509 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.375677 5936 kubelet_node_status.go:526] Recording NodeHasNoDiskPressure event message for node k8s-master-89242181-0 Jul 13 23:21:38.472509 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.375688 5936 kubelet_node_status.go:526] Recording NodeHasSufficientPID event message for node k8s-master-89242181-0 Jul 13 23:21:38.472509 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.375709 5936 kubelet_node_status.go:70] Attempting to register node k8s-master-89242181-0 Jul 13 23:21:38.472509 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:38.376026 5936 kubelet_node_status.go:92] Unable to register node "k8s-master-89242181-0" with API server: Post "https://10.255.255.5:443/api/v1/nodes": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:38.472509 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:38.389906 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:38.490048 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:38.490026 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:38.577625 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:38.577588 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:38.590212 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:38.590173 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:38.599729 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:38.599708 5936 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.255.255.5:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:38.690468 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:38.690396 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:38.790605 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:38.790521 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:38.854697 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:38.854672 5936 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.RuntimeClass: failed to list *v1beta1.RuntimeClass: Get "https://10.255.255.5:443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:38.890674 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:38.890651 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:38.988448 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:38.988404 5936 controller.go:136] failed to ensure node lease exists, will retry in 1.6s, error: Get "https://10.255.255.5:443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8s-master-89242181-0?timeout=10s": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:38.990776 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:38.990741 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:38.991106 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:38.991091 5936 reflector.go:127] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.255.255.5:443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-master-89242181-0&limit=500&resourceVersion=0": dial tcp 
10.255.255.5:443: connect: connection refused Jul 13 23:21:39.090885 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:39.090864 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:39.176197 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:39.176159 5936 kubelet_node_status.go:334] Setting node annotation to enable volume controller attach/detach Jul 13 23:21:39.176197 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:39.176203 5936 kubelet_node_status.go:382] Adding node label from cloud provider: beta.kubernetes.io/instance-type=Standard_D2_v3 Jul 13 23:21:39.176197 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:39.176212 5936 kubelet_node_status.go:384] Adding node label from cloud provider: node.kubernetes.io/instance-type=Standard_D2_v3 Jul 13 23:21:39.176478 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:39.176224 5936 kubelet_node_status.go:395] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=0 Jul 13 23:21:39.176478 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:39.176241 5936 kubelet_node_status.go:397] Adding node label from cloud provider: topology.kubernetes.io/zone=0 Jul 13 23:21:39.176478 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:39.176248 5936 kubelet_node_status.go:401] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=southcentralus Jul 13 23:21:39.176478 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:39.176256 5936 kubelet_node_status.go:403] Adding node label from cloud provider: topology.kubernetes.io/region=southcentralus Jul 13 23:21:39.191056 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:39.191028 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:39.191298 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:39.191278 5936 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:39.195002 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:39.194726 5936 kubelet_node_status.go:526] Recording NodeHasSufficientMemory event message for node k8s-master-89242181-0 Jul 13 23:21:39.195002 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:39.194755 5936 kubelet_node_status.go:526] Recording NodeHasNoDiskPressure event message for node k8s-master-89242181-0 Jul 13 23:21:39.195002 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:39.194768 5936 kubelet_node_status.go:526] Recording NodeHasSufficientPID event message for node k8s-master-89242181-0 Jul 13 23:21:39.195002 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:39.194793 5936 kubelet_node_status.go:70] Attempting to register node k8s-master-89242181-0 Jul 13 23:21:39.203938 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:39.203882 5936 certificate_manager.go:412] Rotating certificates Jul 13 23:21:39.206900 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:39.206882 5936 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://10.255.255.5:443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:39.291135 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:39.291089 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:39.391044 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:39.390961 5936 
kubelet_node_status.go:92] Unable to register node "k8s-master-89242181-0" with API server: Post "https://10.255.255.5:443/api/v1/nodes": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:39.391243 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:39.391227 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:39.491382 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:39.491354 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:39.591103 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:39.591073 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:39.591496 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:39.591477 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:39.691682 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:39.691601 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:39.791851 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:39.791789 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:39.892004 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:39.891958 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:39.979125 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:39.979022 5936 reflector.go:127] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.255.255.5:443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-master-89242181-0&limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:39.992101 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:39.992079 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:40.092238 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:40.092209 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:40.192387 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:40.192351 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:40.293958 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:40.292442 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:40.392584 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:40.392559 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:40.468403 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:40.468350 5936 kubelet.go:1845] SyncLoop (PLEG): "kube-addon-manager-k8s-master-89242181-0_kube-system(5ddf9fa33d2dcb615b0173624c2621da)", event: &pleg.PodLifecycleEvent{ID:"5ddf9fa33d2dcb615b0173624c2621da", Type:"ContainerDied", Data:"5c62bd3d47c3ad3c4ca80540865b409c76a2760e184465a542d53df8da127a09"} ... skipping 44 lines ... 
Jul 13 23:21:40.539378 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:40.539221 5936 kubelet_node_status.go:401] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=southcentralus Jul 13 23:21:40.539378 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:40.539228 5936 kubelet_node_status.go:403] Adding node label from cloud provider: topology.kubernetes.io/region=southcentralus Jul 13 23:21:40.555901 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:40.555847 5936 kubelet_node_status.go:526] Recording NodeHasSufficientMemory event message for node k8s-master-89242181-0 Jul 13 23:21:40.555901 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:40.555873 5936 kubelet_node_status.go:526] Recording NodeHasNoDiskPressure event message for node k8s-master-89242181-0 Jul 13 23:21:40.555901 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:40.555882 5936 kubelet_node_status.go:526] Recording NodeHasSufficientPID event message for node k8s-master-89242181-0 Jul 13 23:21:40.555901 k8s-master-89242181-0 kubelet[5936]: W0713 23:21:40.555902 5936 pod_container_deletor.go:79] Container "0cbc32edcdfe315aded1d4ae97ad1dc4c4ee26569ed0ffb03fa0b5eab5053eab" not found in pod's containers Jul 13 23:21:40.577861 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:40.577838 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:40.588785 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:40.588757 5936 controller.go:136] failed to ensure node lease exists, will retry in 3.2s, error: Get "https://10.255.255.5:443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8s-master-89242181-0?timeout=10s": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:40.592782 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:40.592764 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:40.692913 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:40.692890 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:40.792112 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:40.792086 5936 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.RuntimeClass: failed to list *v1beta1.RuntimeClass: Get "https://10.255.255.5:443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:40.793024 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:40.793007 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:40.893197 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:40.893119 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:40.958046 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:40.958024 5936 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.255.255.5:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:40.991119 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:40.991090 5936 kubelet_node_status.go:334] Setting node annotation to enable volume controller attach/detach Jul 13 23:21:40.991119 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:40.991132 5936 kubelet_node_status.go:382] Adding node label from cloud provider: beta.kubernetes.io/instance-type=Standard_D2_v3 Jul 13 
23:21:40.991119 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:40.991141 5936 kubelet_node_status.go:384] Adding node label from cloud provider: node.kubernetes.io/instance-type=Standard_D2_v3 Jul 13 23:21:40.991396 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:40.991153 5936 kubelet_node_status.go:395] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=0 Jul 13 23:21:40.991396 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:40.991160 5936 kubelet_node_status.go:397] Adding node label from cloud provider: topology.kubernetes.io/zone=0 Jul 13 23:21:40.991396 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:40.991167 5936 kubelet_node_status.go:401] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=southcentralus ... skipping 21 lines ... Jul 13 23:21:41.514950 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:41.514940 5936 kubelet_node_status.go:401] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=southcentralus Jul 13 23:21:41.515041 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:41.515031 5936 kubelet_node_status.go:403] Adding node label from cloud provider: topology.kubernetes.io/region=southcentralus Jul 13 23:21:41.518545 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:41.518514 5936 kubelet.go:1845] SyncLoop (PLEG): "kube-scheduler-k8s-master-89242181-0_kube-system(3b0a6be9b8cd2beeffa5dc5bb3baa251)", event: &pleg.PodLifecycleEvent{ID:"3b0a6be9b8cd2beeffa5dc5bb3baa251", Type:"ContainerStarted", Data:"17caced715d2a597df3b418b9b0886f2c4d893d95a9e1f8aa033048a2a3a44fd"} Jul 13 23:21:41.529698 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:41.529678 5936 kubelet_node_status.go:526] Recording NodeHasSufficientMemory event message for node k8s-master-89242181-0 Jul 13 23:21:41.529791 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:41.529707 5936 kubelet_node_status.go:526] Recording NodeHasNoDiskPressure event message for node k8s-master-89242181-0 Jul 13 23:21:41.529791 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:41.529718 5936 kubelet_node_status.go:526] Recording NodeHasSufficientPID event message for node k8s-master-89242181-0 Jul 13 23:21:41.530411 k8s-master-89242181-0 kubelet[5936]: W0713 23:21:41.530388 5936 status_manager.go:556] Failed to get status for pod "kube-addon-manager-k8s-master-89242181-0_kube-system(5ddf9fa33d2dcb615b0173624c2621da)": Get "https://10.255.255.5:443/api/v1/namespaces/kube-system/pods/kube-addon-manager-k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:41.577517 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:41.577486 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:41.594914 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:41.594897 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:41.617636 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:41.617609 5936 reflector.go:127] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.255.255.5:443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-master-89242181-0&limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:41.695145 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:41.695082 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 
23:21:41.795220 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:41.795194 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:41.895447 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:41.895387 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:41.949945 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:41.949875 5936 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:41.995549 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:41.995527 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:42.095752 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:42.095668 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:42.195921 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:42.195854 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:42.296253 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:42.296154 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:42.397588 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:42.396296 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:42.496570 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:42.496523 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found ... skipping 4 lines ... Jul 13 23:21:42.521338 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:42.521249 5936 kubelet_node_status.go:397] Adding node label from cloud provider: topology.kubernetes.io/zone=0 Jul 13 23:21:42.521338 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:42.521256 5936 kubelet_node_status.go:401] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=southcentralus Jul 13 23:21:42.521338 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:42.521263 5936 kubelet_node_status.go:403] Adding node label from cloud provider: topology.kubernetes.io/region=southcentralus Jul 13 23:21:42.536579 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:42.536557 5936 kubelet_node_status.go:526] Recording NodeHasSufficientMemory event message for node k8s-master-89242181-0 Jul 13 23:21:42.536579 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:42.536587 5936 kubelet_node_status.go:526] Recording NodeHasNoDiskPressure event message for node k8s-master-89242181-0 Jul 13 23:21:42.536729 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:42.536597 5936 kubelet_node_status.go:526] Recording NodeHasSufficientPID event message for node k8s-master-89242181-0 Jul 13 23:21:42.577581 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:42.577531 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:42.597022 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:42.596999 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:42.697229 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:42.697137 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:42.797258 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:42.797232 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:42.897531 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:42.897470 5936 kubelet.go:2163] node 
"k8s-master-89242181-0" not found Jul 13 23:21:42.997826 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:42.997680 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:43.097957 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:43.097905 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:43.142800 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:43.142763 5936 event.go:273] Unable to write event: 'Post "https://10.255.255.5:443/api/v1/namespaces/default/events": dial tcp 10.255.255.5:443: connect: connection refused' (may retry after sleeping) Jul 13 23:21:43.198288 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:43.198204 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:43.298463 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:43.298415 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:43.398610 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:43.398568 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:43.403826 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:43.403804 5936 certificate_manager.go:412] Rotating certificates Jul 13 23:21:43.406523 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:43.406241 5936 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://10.255.255.5:443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:43.498724 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:43.498700 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:43.577775 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:43.577680 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:43.598842 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:43.598818 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:43.698996 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:43.698949 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:43.789289 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:43.789256 5936 controller.go:136] failed to ensure node lease exists, will retry in 6.4s, error: Get "https://10.255.255.5:443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8s-master-89242181-0?timeout=10s": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:43.799205 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:43.799158 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:43.899480 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:43.899403 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:43.999513 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:43.999476 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:44.023033 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:44.023002 5936 reflector.go:127] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.255.255.5:443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-master-89242181-0&limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:44.099748 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:44.099689 5936 kubelet.go:2163] node 
"k8s-master-89242181-0" not found Jul 13 23:21:44.200025 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:44.199935 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:44.218636 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:44.218618 5936 kubelet_node_status.go:334] Setting node annotation to enable volume controller attach/detach Jul 13 23:21:44.218747 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:44.218650 5936 kubelet_node_status.go:382] Adding node label from cloud provider: beta.kubernetes.io/instance-type=Standard_D2_v3 Jul 13 23:21:44.218747 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:44.218660 5936 kubelet_node_status.go:384] Adding node label from cloud provider: node.kubernetes.io/instance-type=Standard_D2_v3 Jul 13 23:21:44.218747 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:44.218672 5936 kubelet_node_status.go:395] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=0 ... skipping 5 lines ... Jul 13 23:21:44.245978 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:44.245513 5936 kubelet_node_status.go:526] Recording NodeHasSufficientPID event message for node k8s-master-89242181-0 Jul 13 23:21:44.245978 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:44.245535 5936 kubelet_node_status.go:70] Attempting to register node k8s-master-89242181-0 Jul 13 23:21:44.245978 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:44.245893 5936 kubelet_node_status.go:92] Unable to register node "k8s-master-89242181-0" with API server: Post "https://10.255.255.5:443/api/v1/nodes": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:44.300188 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:44.300125 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:44.400437 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:44.400387 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:44.500876 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:44.500790 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:44.577465 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:44.577431 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:44.601034 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:44.600974 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:44.701224 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:44.701124 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:44.707823 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:44.707793 5936 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.255.255.5:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:44.801305 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:44.801266 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:44.901414 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:44.901385 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:45.001673 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:45.001643 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:45.101816 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:45.101721 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found 
Jul 13 23:21:45.201816 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:45.201790 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:45.301982 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:45.301941 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:45.402095 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:45.402024 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:45.502218 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:45.502148 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:45.577506 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:45.577473 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:45.602284 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:45.602261 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:45.608961 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:45.608940 5936 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.RuntimeClass: failed to list *v1beta1.RuntimeClass: Get "https://10.255.255.5:443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:45.702494 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:45.702424 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:45.802520 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:45.802492 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:45.902631 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:45.902604 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:46.002778 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:46.002703 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:46.102795 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:46.102772 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:46.202959 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:46.202927 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:46.303073 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:46.303012 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:46.349732 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:46.349706 5936 kube_docker_client.go:347] Stop pulling image "k8sprowinternal.azurecr.io/kube-controller-manager-amd64:v1.20.0-alpha.0-150-g240a72b5c0a": "Status: Downloaded newer image for k8sprowinternal.azurecr.io/kube-controller-manager-amd64:v1.20.0-alpha.0-150-g240a72b5c0a" Jul 13 23:21:46.403407 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:46.403342 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:46.503459 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:46.503426 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:46.577403 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:46.577331 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:46.603564 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:46.603538 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:46.703796 k8s-master-89242181-0 
kubelet[5936]: E0713 23:21:46.703736 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:46.803920 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:46.803883 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:46.904055 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:46.903948 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:47.004122 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:47.004094 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:47.104253 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:47.104227 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:47.204409 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:47.204337 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:47.304497 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:47.304470 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:47.404620 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:47.404599 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:47.462887 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:47.462805 5936 reflector.go:127] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.255.255.5:443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-master-89242181-0&limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:47.504833 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:47.504809 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:47.538864 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:47.538845 5936 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:47.577469 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:47.577435 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:47.604896 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:47.604875 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:47.704995 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:47.704968 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:47.805299 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:47.805273 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:47.806064 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:47.806039 5936 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "k8s-master-89242181-0" not found Jul 13 23:21:47.905415 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:47.905389 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:48.005525 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:48.005499 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:48.105646 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:48.105551 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:48.205669 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:48.205639 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:48.305816 
k8s-master-89242181-0 kubelet[5936]: E0713 23:21:48.305784 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:48.406291 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:48.406186 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:48.506367 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:48.506323 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:48.577584 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:48.577533 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:48.606491 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:48.606462 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:48.706678 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:48.706584 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:48.806730 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:48.806700 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:48.906856 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:48.906821 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:49.007051 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:49.006952 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:49.107109 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:49.107078 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:49.207229 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:49.207199 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:49.307335 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:49.307307 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:49.407460 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:49.407427 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:49.507584 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:49.507553 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:49.577603 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:49.577513 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:49.607689 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:49.607668 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:49.707807 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:49.707779 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:49.807921 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:49.807893 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:49.908057 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:49.907956 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:50.008151 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:50.008085 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:50.108454 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:50.108350 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:50.189868 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:50.189776 5936 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: Get 
"https://10.255.255.5:443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8s-master-89242181-0?timeout=10s": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:50.208653 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:50.208612 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:50.308843 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:50.308785 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:50.408949 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:50.408927 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:50.464910 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:50.464841 5936 reflector.go:127] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.255.255.5:443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-master-89242181-0&limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:50.509082 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:50.509053 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:50.577565 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:50.577536 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:50.609197 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:50.609176 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:50.646066 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:50.646013 5936 kubelet_node_status.go:334] Setting node annotation to enable volume controller attach/detach Jul 13 23:21:50.646381 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:50.646350 5936 kubelet_node_status.go:382] Adding node label from cloud provider: beta.kubernetes.io/instance-type=Standard_D2_v3 Jul 13 23:21:50.646381 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:50.646381 5936 kubelet_node_status.go:384] Adding node label from cloud provider: node.kubernetes.io/instance-type=Standard_D2_v3 Jul 13 23:21:50.646513 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:50.646394 5936 kubelet_node_status.go:395] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=0 Jul 13 23:21:50.646513 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:50.646403 5936 kubelet_node_status.go:397] Adding node label from cloud provider: topology.kubernetes.io/zone=0 ... skipping 10 lines ... 
Jul 13 23:21:51.010108 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:51.010043 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:51.110394 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:51.110234 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:51.210575 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:51.210512 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:51.310779 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:51.310688 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:51.410919 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:51.410839 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:51.436920 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:51.436871 5936 certificate_manager.go:412] Rotating certificates Jul 13 23:21:51.439335 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:51.439319 5936 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://10.255.255.5:443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:51.511070 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:51.511041 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:51.577662 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:51.577634 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:51.611313 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:51.611259 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:51.711511 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:51.711446 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:51.811791 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:51.811763 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:51.911877 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:51.911838 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:52.011999 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:52.011929 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:52.112301 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:52.112219 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:52.212451 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:52.212366 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:52.312579 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:52.312559 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:52.412868 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:52.412843 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:52.513177 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:52.513157 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:52.577517 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:52.577448 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:52.613469 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:52.613446 5936 kubelet.go:2163] node "k8s-master-89242181-0" not 
found Jul 13 23:21:52.713759 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:52.713737 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.162396 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:52.751327 5936 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.RuntimeClass: failed to list *v1beta1.RuntimeClass: Get "https://10.255.255.5:443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:55.162396 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:52.814019 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.162396 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:52.914115 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.162396 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:53.014173 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.162396 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:53.114267 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.162396 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:53.143277 5936 event.go:273] Unable to write event: 'Post "https://10.255.255.5:443/api/v1/namespaces/default/events": dial tcp 10.255.255.5:443: connect: connection refused' (may retry after sleeping) Jul 13 23:21:55.162396 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:53.214362 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.162396 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:53.314451 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.162396 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:53.414548 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.162396 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:53.514639 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.162396 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:53.577540 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:55.162396 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:53.614728 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.162396 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:53.714824 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.162396 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:53.814915 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.162396 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:53.915001 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.162396 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:54.015078 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.162396 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:54.115169 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.164524 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:54.215263 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.164524 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:54.315357 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.164524 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:54.415448 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 
23:21:55.164524 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:54.515538 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.164524 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:54.577541 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:55.164524 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:54.615629 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.164524 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:54.715721 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.164524 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:54.815811 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.164524 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:54.915900 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.164524 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:55.015990 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.164524 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:55.116083 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.216223 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:55.216193 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.316343 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:55.316316 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.416522 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:55.416438 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.516587 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:55.516557 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.577514 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:55.577488 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:55.616705 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:55.616685 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.717273 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:55.717179 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.817363 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:55.817301 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:55.917470 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:55.917440 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:56.017637 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:56.017547 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:56.117663 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:56.117628 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:56.217766 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:56.217737 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:56.319554 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:56.319453 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:56.419666 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:56.419585 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:56.519692 k8s-master-89242181-0 kubelet[5936]: E0713 
23:21:56.519653 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:56.577693 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:56.577593 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:56.585256 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:56.585226 5936 kubelet.go:1845] SyncLoop (PLEG): "kube-controller-manager-k8s-master-89242181-0_kube-system(dbf3fc11cd6694dd893ee9fecfa6bd0e)", event: &pleg.PodLifecycleEvent{ID:"dbf3fc11cd6694dd893ee9fecfa6bd0e", Type:"ContainerStarted", Data:"07adda2db6ef824a98e47abdbfb55be00fdcc1b1479b684eb9a33d7c243a6722"} Jul 13 23:21:56.585375 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:56.585306 5936 kubelet_node_status.go:334] Setting node annotation to enable volume controller attach/detach Jul 13 23:21:56.585375 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:56.585329 5936 kubelet_node_status.go:382] Adding node label from cloud provider: beta.kubernetes.io/instance-type=Standard_D2_v3 Jul 13 23:21:56.585375 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:56.585338 5936 kubelet_node_status.go:384] Adding node label from cloud provider: node.kubernetes.io/instance-type=Standard_D2_v3 Jul 13 23:21:56.585375 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:56.585349 5936 kubelet_node_status.go:395] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=0 Jul 13 23:21:56.585375 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:56.585356 5936 kubelet_node_status.go:397] Adding node label from cloud provider: topology.kubernetes.io/zone=0 Jul 13 23:21:56.585375 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:56.585362 5936 kubelet_node_status.go:401] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=southcentralus Jul 13 23:21:56.585375 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:56.585369 5936 kubelet_node_status.go:403] Adding node label from cloud provider: topology.kubernetes.io/region=southcentralus Jul 13 23:21:56.601820 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:56.601802 5936 kubelet_node_status.go:526] Recording NodeHasSufficientMemory event message for node k8s-master-89242181-0 Jul 13 23:21:56.601820 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:56.601825 5936 kubelet_node_status.go:526] Recording NodeHasNoDiskPressure event message for node k8s-master-89242181-0 Jul 13 23:21:56.601982 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:56.601835 5936 kubelet_node_status.go:526] Recording NodeHasSufficientPID event message for node k8s-master-89242181-0 Jul 13 23:21:56.602683 k8s-master-89242181-0 kubelet[5936]: W0713 23:21:56.602652 5936 status_manager.go:556] Failed to get status for pod "kube-controller-manager-k8s-master-89242181-0_kube-system(dbf3fc11cd6694dd893ee9fecfa6bd0e)": Get "https://10.255.255.5:443/api/v1/namespaces/kube-system/pods/kube-controller-manager-k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:56.619867 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:56.619832 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:56.719986 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:56.719959 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:56.820127 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:56.820086 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found 
Jul 13 23:21:56.920305 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:56.920217 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:57.020377 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:57.020337 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:57.069507 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:57.069471 5936 kube_docker_client.go:344] Pulling image "k8sprowinternal.azurecr.io/kube-apiserver-amd64:v1.20.0-alpha.0-150-g240a72b5c0a": "40a41497c026: Downloading [===========> ] 12.96MB/56.23MB" Jul 13 23:21:57.120518 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:57.120478 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:57.190323 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:57.190247 5936 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: Get "https://10.255.255.5:443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8s-master-89242181-0?timeout=10s": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:57.220628 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:57.220600 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:57.320780 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:57.320732 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:57.344443 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:57.344422 5936 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.255.255.5:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:57.420904 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:57.420877 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:57.521131 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:57.521042 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:57.577514 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:57.577479 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:57.593125 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:57.592888 5936 kubelet_node_status.go:334] Setting node annotation to enable volume controller attach/detach Jul 13 23:21:57.593125 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:57.592923 5936 kubelet_node_status.go:382] Adding node label from cloud provider: beta.kubernetes.io/instance-type=Standard_D2_v3 Jul 13 23:21:57.593125 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:57.592933 5936 kubelet_node_status.go:384] Adding node label from cloud provider: node.kubernetes.io/instance-type=Standard_D2_v3 Jul 13 23:21:57.593125 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:57.592943 5936 kubelet_node_status.go:395] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=0 Jul 13 23:21:57.593125 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:57.592951 5936 kubelet_node_status.go:397] Adding node label from cloud provider: topology.kubernetes.io/zone=0 Jul 13 23:21:57.593125 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:57.592958 5936 kubelet_node_status.go:401] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=southcentralus ... skipping 12 lines ... 
Jul 13 23:21:57.675146 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:57.675127 5936 kubelet_node_status.go:526] Recording NodeHasSufficientMemory event message for node k8s-master-89242181-0 Jul 13 23:21:57.675146 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:57.675150 5936 kubelet_node_status.go:526] Recording NodeHasNoDiskPressure event message for node k8s-master-89242181-0 Jul 13 23:21:57.675298 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:57.675164 5936 kubelet_node_status.go:526] Recording NodeHasSufficientPID event message for node k8s-master-89242181-0 Jul 13 23:21:57.675298 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:57.675186 5936 kubelet_node_status.go:70] Attempting to register node k8s-master-89242181-0 Jul 13 23:21:57.675533 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:57.675505 5936 kubelet_node_status.go:92] Unable to register node "k8s-master-89242181-0" with API server: Post "https://10.255.255.5:443/api/v1/nodes": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:57.721258 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:57.721234 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:57.806259 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:57.806219 5936 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "k8s-master-89242181-0" not found Jul 13 23:21:57.821387 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:57.821345 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:57.921554 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:57.921516 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:57.965094 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:57.965068 5936 reflector.go:127] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.255.255.5:443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-master-89242181-0&limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:58.021701 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:58.021651 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:58.121887 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:58.121793 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:58.221966 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:58.221934 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:58.322313 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:58.322264 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:58.422477 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:58.422331 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:58.522550 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:58.522452 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:58.577698 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:58.577646 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:58.622564 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:58.622536 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:58.722725 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:58.722633 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:58.822851 
k8s-master-89242181-0 kubelet[5936]: E0713 23:21:58.822822 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:58.922963 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:58.922881 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:59.023173 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:59.023086 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:59.099812 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:59.099785 5936 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:59.123157 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:59.123132 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:59.223284 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:59.223254 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:59.323579 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:59.323411 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:59.423658 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:59.423562 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:59.523801 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:59.523773 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:59.577711 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:59.577615 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:21:59.623919 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:59.623892 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:59.724035 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:59.723955 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:59.824117 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:59.824087 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:21:59.897895 k8s-master-89242181-0 kubelet[5936]: I0713 23:21:59.897808 5936 kube_docker_client.go:347] Stop pulling image "k8sprowinternal.azurecr.io/kube-apiserver-amd64:v1.20.0-alpha.0-150-g240a72b5c0a": "Status: Downloaded newer image for k8sprowinternal.azurecr.io/kube-apiserver-amd64:v1.20.0-alpha.0-150-g240a72b5c0a" Jul 13 23:21:59.924245 k8s-master-89242181-0 kubelet[5936]: E0713 23:21:59.924218 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:00.024372 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:00.024340 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:00.124497 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:00.124467 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:00.224707 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:00.224614 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:00.324777 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:00.324743 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:00.424933 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:00.424882 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:00.525123 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:00.525027 5936 kubelet.go:2163] node "k8s-master-89242181-0" not 
found Jul 13 23:22:00.577559 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:00.577526 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:22:00.625197 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:00.625154 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:00.725348 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:00.725303 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:00.825465 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:00.825432 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:00.925596 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:00.925567 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:01.025733 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:01.025701 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:01.125932 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:01.125849 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:01.225996 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:01.225968 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:01.326162 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:01.326107 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:01.426324 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:01.426231 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:01.526406 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:01.526354 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:01.577518 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:01.577486 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:22:01.626512 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:01.626484 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:01.726719 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:01.726624 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:01.826801 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:01.826765 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:01.926931 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:01.926900 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:02.027109 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:02.027026 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:02.127254 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:02.127212 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:02.227404 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:02.227366 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:02.327519 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:02.327488 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:02.427652 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:02.427623 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:02.527779 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:02.527752 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:02.577636 k8s-master-89242181-0 kubelet[5936]: 
I0713 23:22:02.577551 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:22:02.628074 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:02.628052 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:02.728197 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:02.728168 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:02.828376 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:02.828288 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:02.928390 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:02.928363 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:03.028464 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:03.028422 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:03.128671 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:03.128569 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:03.143834 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:03.143791 5936 event.go:273] Unable to write event: 'Post "https://10.255.255.5:443/api/v1/namespaces/default/events": dial tcp 10.255.255.5:443: connect: connection refused' (may retry after sleeping) Jul 13 23:22:03.228750 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:03.228712 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:03.328892 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:03.328864 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:03.429074 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:03.428988 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:03.529177 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:03.529139 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:03.577576 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:03.577532 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:22:03.629301 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:03.629275 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:03.729668 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:03.729578 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:03.829718 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:03.829693 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:03.929838 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:03.929812 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:04.030031 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:04.029935 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:04.130092 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:04.130059 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:04.190759 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:04.190711 5936 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: Get "https://10.255.255.5:443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8s-master-89242181-0?timeout=10s": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:22:04.230204 k8s-master-89242181-0 kubelet[5936]: E0713 
23:22:04.230180 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:04.330322 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:04.330296 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:04.430446 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:04.430412 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:04.530573 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:04.530532 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:04.577538 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:04.577509 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:22:04.630709 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:04.630640 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:04.675662 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:04.675641 5936 kubelet_node_status.go:334] Setting node annotation to enable volume controller attach/detach Jul 13 23:22:04.675780 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:04.675678 5936 kubelet_node_status.go:382] Adding node label from cloud provider: beta.kubernetes.io/instance-type=Standard_D2_v3 Jul 13 23:22:04.675780 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:04.675690 5936 kubelet_node_status.go:384] Adding node label from cloud provider: node.kubernetes.io/instance-type=Standard_D2_v3 Jul 13 23:22:04.675780 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:04.675702 5936 kubelet_node_status.go:395] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=0 Jul 13 23:22:04.675780 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:04.675710 5936 kubelet_node_status.go:397] Adding node label from cloud provider: topology.kubernetes.io/zone=0 ... skipping 10 lines ... 
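The repeated csi_plugin.go:948 messages above are the kubelet polling the apiserver for its own CSINode object while the control plane is still coming up, and getting "connection refused" back. Below is a minimal client-go sketch of the equivalent request, assuming a recent client-go (context-taking client methods); the kubeconfig path and node name are taken from the log for illustration, and none of this is the kubelet's own code.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the kubelet on this master uses
	// /var/lib/kubelet/kubeconfig per the addon-manager pod spec above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same request the kubelet logs as failing:
	// GET /apis/storage.k8s.io/v1/csinodes/<node-name>.
	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "k8s-master-89242181-0", metav1.GetOptions{})
	if err != nil {
		// While the apiserver is unreachable this returns the same
		// "connection refused" seen in the log lines above.
		fmt.Println("CSINode not available yet:", err)
		return
	}
	fmt.Println("CSINode drivers:", csiNode.Spec.Drivers)
}
```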
Jul 13 23:22:05.031019 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:05.030982 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:05.131176 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:05.131137 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:05.231363 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:05.231269 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:05.331451 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:05.331420 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:05.432286 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:05.432254 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:05.532474 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:05.532387 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:05.577540 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:05.577509 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:22:05.632531 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:05.632506 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:05.732705 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:05.732660 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:05.832824 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:05.832790 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:05.932957 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:05.932924 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:06.033058 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:06.033028 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:06.133418 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:06.133324 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:06.233738 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:06.233710 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:06.333863 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:06.333833 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:06.434225 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:06.434033 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:06.534215 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:06.534186 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:06.577565 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:06.577532 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:22:06.634322 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:06.634300 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:06.734504 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:06.734413 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:06.834557 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:06.834530 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:06.934674 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:06.934647 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:07.034846 k8s-master-89242181-0 kubelet[5936]: E0713 
23:22:07.034762 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:07.134906 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:07.134880 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:07.234987 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:07.234956 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:07.335100 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:07.335075 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:07.435221 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:07.435191 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:07.535346 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:07.535315 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:07.577517 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:07.577481 5936 csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:22:07.635495 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:07.635421 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:07.735558 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:07.735531 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:07.806443 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:07.806410 5936 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "k8s-master-89242181-0" not found Jul 13 23:22:07.835657 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:07.835638 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:07.935826 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:07.935749 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:07.993873 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:07.993849 5936 certificate_manager.go:412] Rotating certificates Jul 13 23:22:07.996348 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:07.996329 5936 certificate_manager.go:437] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post "https://10.255.255.5:443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:22:07.996348 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:07.996351 5936 certificate_manager.go:318] Reached backoff limit, still unable to rotate certs: timed out waiting for the condition Jul 13 23:22:08.035880 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:08.035860 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:08.135999 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:08.135971 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:08.236185 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:08.236091 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:08.336255 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:08.336225 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:08.436374 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:08.436343 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:08.536512 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:08.536415 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:08.577599 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:08.577551 5936 
csi_plugin.go:948] Failed to contact API server when waiting for CSINode publishing: Get "https://10.255.255.5:443/apis/storage.k8s.io/v1/csinodes/k8s-master-89242181-0": dial tcp 10.255.255.5:443: connect: connection refused Jul 13 23:22:08.636521 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:08.636491 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:08.736627 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:08.736587 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:08.836749 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:08.836725 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:08.936871 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:08.936849 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:09.036989 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:09.036965 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:09.137312 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:09.137223 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found ... skipping 89 lines ... Jul 13 23:22:15.318243 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:14.757831 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:15.318243 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:14.857908 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:15.318243 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:14.958034 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:15.318243 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:15.058169 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:15.318243 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:15.186185 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:15.318243 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:15.288567 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:15.335464 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:15.335428 5936 controller.go:228] failed to get node "k8s-master-89242181-0" when trying to set owner ref to the node lease: nodes "k8s-master-89242181-0" not found Jul 13 23:22:15.339150 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:15.338992 5936 nodeinfomanager.go:403] Failed to publish CSINode: nodes "k8s-master-89242181-0" not found Jul 13 23:22:15.357735 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:15.357708 5936 nodeinfomanager.go:403] Failed to publish CSINode: nodes "k8s-master-89242181-0" not found Jul 13 23:22:15.390713 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:15.390622 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:15.424473 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:15.424447 5936 kubelet_node_status.go:73] Successfully registered node k8s-master-89242181-0 Jul 13 23:22:15.490763 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:15.490738 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:15.590962 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:15.590865 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:15.691025 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:15.690999 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:15.791161 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:15.791131 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found ... skipping 37 lines ... 
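The long run of kubelet.go:2163 "node ... not found" lines simply repeats until the Node object is created; it stops at the "Successfully registered node k8s-master-89242181-0" line at 23:22:15. A small sketch, assuming a working *kubernetes.Clientset, of waiting out that same window the way an external readiness script might (interval and timeout are illustrative assumptions):

```go
package nodewait

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNode polls until the Node object exists, mirroring the window in
// which the kubelet above logs "node ... not found" before registration.
func waitForNode(cs *kubernetes.Clientset, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, nil // not registered yet, keep polling
		}
		if err != nil {
			return false, nil // apiserver still unreachable (connection refused), keep polling
		}
		return true, nil
	})
}
```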
Jul 13 23:22:17.689546 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:17.689537 5936 kubelet_node_status.go:403] Adding node label from cloud provider: topology.kubernetes.io/region=southcentralus Jul 13 23:22:17.705320 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:17.705289 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:17.744501 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:17.742856 5936 kubelet_node_status.go:526] Recording NodeHasSufficientMemory event message for node k8s-master-89242181-0 Jul 13 23:22:17.744501 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:17.742893 5936 kubelet_node_status.go:526] Recording NodeHasNoDiskPressure event message for node k8s-master-89242181-0 Jul 13 23:22:17.744501 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:17.742904 5936 kubelet_node_status.go:526] Recording NodeHasSufficientPID event message for node k8s-master-89242181-0 Jul 13 23:22:17.807606 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:17.807476 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:17.807933 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:17.807907 5936 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "k8s-master-89242181-0" not found Jul 13 23:22:17.908399 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:17.908362 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:17.939630 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:17.939522 5936 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"k8s-master-89242181-0.162173485f43d67d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"k8s-master-89242181-0", UID:"k8s-master-89242181-0", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node k8s-master-89242181-0 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"k8s-master-89242181-0"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfbb59006c0c0c7d, ext:959986701, loc:(*time.Location)(0x72bdb80)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfbb590074c3e219, ext:1106252101, loc:(*time.Location)(0x72bdb80)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
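The rejected NodeHasSufficientPID event above fails only because the kubelet records node events in the "default" namespace (visible in the Event object's Namespace field) before cluster bootstrap has created that namespace; hence "'namespaces \"default\" not found' (will not retry!)". A hedged client-go sketch of checking for that precondition, assuming a configured clientset:

```go
package nscheck

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// defaultNamespaceExists reports whether the "default" namespace has been
// created yet. Until it exists, node events like the one above are rejected
// by the apiserver exactly as logged.
func defaultNamespaceExists(cs *kubernetes.Clientset) (bool, error) {
	_, err := cs.CoreV1().Namespaces().Get(context.TODO(), metav1.NamespaceDefault, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, nil
	}
	if err != nil {
		return false, err
	}
	return true, nil
}
```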
Jul 13 23:22:18.008479 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:18.008444 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:18.108704 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:18.108567 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:18.208751 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:18.208717 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found Jul 13 23:22:18.308838 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:18.308797 5936 kubelet.go:2163] node "k8s-master-89242181-0" not found ... skipping 50 lines ... Jul 13 23:22:30.778264 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:30.778245 5936 kubelet.go:1823] SyncLoop (REMOVE, "file"): "kube-addon-manager-k8s-master-89242181-0_kube-system(5ddf9fa33d2dcb615b0173624c2621da)" Jul 13 23:22:30.783541 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:30.783519 5936 kubelet.go:1813] SyncLoop (ADD, "file"): "kube-addon-manager-k8s-master-89242181-0_kube-system(55a812f9b87f89eb3e7fd0c59fef6a6f)" Jul 13 23:22:30.783634 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:30.783553 5936 topology_manager.go:233] [topologymanager] Topology Admit Handler Jul 13 23:22:30.783773 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:30.783731 5936 kubelet_pods.go:1247] Killing unwanted pod "kube-addon-manager-k8s-master-89242181-0" Jul 13 23:22:30.783984 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:30.783968 5936 kuberuntime_container.go:635] Killing container "docker://23fd3e4ed45dd5e9ac349f60c883286638bf89bd5ccb9cd117da5bc2e864a5a7" with a 30 second grace period Jul 13 23:22:30.815216 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:30.815184 5936 kubelet.go:1823] SyncLoop (REMOVE, "file"): "kube-addon-manager-k8s-master-89242181-0_kube-system(55a812f9b87f89eb3e7fd0c59fef6a6f)" Jul 13 23:22:30.815366 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:30.815232 5936 kubelet.go:2011] Failed to delete pod "kube-addon-manager-k8s-master-89242181-0_kube-system(55a812f9b87f89eb3e7fd0c59fef6a6f)", err: pod not found Jul 13 23:22:30.815366 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:30.815249 5936 kubelet.go:1813] SyncLoop (ADD, "file"): "kube-addon-manager-k8s-master-89242181-0_kube-system(3592092f8228b2ad53cc3cbd9380195c)" Jul 13 23:22:30.815366 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:30.815269 5936 topology_manager.go:233] [topologymanager] Topology Admit Handler Jul 13 23:22:30.864467 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:30.864428 5936 status_manager.go:563] Pod "kube-addon-manager-k8s-master-89242181-0_kube-system(13f578d6-f9ff-4765-9c9a-1d02fc46bba1)" was deleted and then recreated, skipping status update; old UID "3592092f8228b2ad53cc3cbd9380195c", new UID "13f578d6-f9ff-4765-9c9a-1d02fc46bba1" Jul 13 23:22:30.864642 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:30.864588 5936 kubelet.go:1813] SyncLoop (ADD, "api"): "kube-addon-manager-k8s-master-89242181-0_kube-system(13f578d6-f9ff-4765-9c9a-1d02fc46bba1)" Jul 13 23:22:30.864841 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:30.864814 5936 kuberuntime_manager.go:419] No sandbox for pod "kube-addon-manager-k8s-master-89242181-0_kube-system(55a812f9b87f89eb3e7fd0c59fef6a6f)" can be found. 
Need to start a new one Jul 13 23:22:30.935201 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:30.935175 5936 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "var-lib-kubelet" (UniqueName: "kubernetes.io/host-path/3592092f8228b2ad53cc3cbd9380195c-var-lib-kubelet") pod "kube-addon-manager-k8s-master-89242181-0" (UID: "3592092f8228b2ad53cc3cbd9380195c") Jul 13 23:22:30.935201 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:30.935212 5936 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "addons" (UniqueName: "kubernetes.io/host-path/3592092f8228b2ad53cc3cbd9380195c-addons") pod "kube-addon-manager-k8s-master-89242181-0" (UID: "3592092f8228b2ad53cc3cbd9380195c") Jul 13 23:22:31.020697 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:30.935236 5936 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "msi" (UniqueName: "kubernetes.io/host-path/3592092f8228b2ad53cc3cbd9380195c-msi") pod "kube-addon-manager-k8s-master-89242181-0" (UID: "3592092f8228b2ad53cc3cbd9380195c") Jul 13 23:22:31.020697 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:30.935258 5936 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-kubernetes" (UniqueName: "kubernetes.io/host-path/3592092f8228b2ad53cc3cbd9380195c-etc-kubernetes") pod "kube-addon-manager-k8s-master-89242181-0" (UID: "3592092f8228b2ad53cc3cbd9380195c") Jul 13 23:22:31.026571 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:31.023088 5936 kubelet.go:1569] Failed creating a mirror pod for "kube-addon-manager-k8s-master-89242181-0_kube-system(3592092f8228b2ad53cc3cbd9380195c)": pods "kube-addon-manager-k8s-master-89242181-0" already exists Jul 13 23:22:31.031103 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:31.031050 5936 kubelet.go:1829] SyncLoop (DELETE, "api"): "kube-addon-manager-k8s-master-89242181-0_kube-system(13f578d6-f9ff-4765-9c9a-1d02fc46bba1)" Jul 13 23:22:31.035440 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:31.035424 5936 reconciler.go:269] operationExecutor.MountVolume started for volume "var-lib-kubelet" (UniqueName: "kubernetes.io/host-path/3592092f8228b2ad53cc3cbd9380195c-var-lib-kubelet") pod "kube-addon-manager-k8s-master-89242181-0" (UID: "3592092f8228b2ad53cc3cbd9380195c") Jul 13 23:22:31.035552 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:31.035460 5936 reconciler.go:269] operationExecutor.MountVolume started for volume "addons" (UniqueName: "kubernetes.io/host-path/3592092f8228b2ad53cc3cbd9380195c-addons") pod "kube-addon-manager-k8s-master-89242181-0" (UID: "3592092f8228b2ad53cc3cbd9380195c") Jul 13 23:22:31.035552 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:31.035487 5936 reconciler.go:269] operationExecutor.MountVolume started for volume "msi" (UniqueName: "kubernetes.io/host-path/3592092f8228b2ad53cc3cbd9380195c-msi") pod "kube-addon-manager-k8s-master-89242181-0" (UID: "3592092f8228b2ad53cc3cbd9380195c") Jul 13 23:22:31.035552 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:31.035513 5936 reconciler.go:269] operationExecutor.MountVolume started for volume "etc-kubernetes" (UniqueName: "kubernetes.io/host-path/3592092f8228b2ad53cc3cbd9380195c-etc-kubernetes") pod "kube-addon-manager-k8s-master-89242181-0" (UID: "3592092f8228b2ad53cc3cbd9380195c") Jul 13 23:22:31.035700 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:31.035572 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "etc-kubernetes" (UniqueName: 
"kubernetes.io/host-path/3592092f8228b2ad53cc3cbd9380195c-etc-kubernetes") pod "kube-addon-manager-k8s-master-89242181-0" (UID: "3592092f8228b2ad53cc3cbd9380195c") Jul 13 23:22:31.035700 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:31.035624 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "var-lib-kubelet" (UniqueName: "kubernetes.io/host-path/3592092f8228b2ad53cc3cbd9380195c-var-lib-kubelet") pod "kube-addon-manager-k8s-master-89242181-0" (UID: "3592092f8228b2ad53cc3cbd9380195c") Jul 13 23:22:31.035700 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:31.035664 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "addons" (UniqueName: "kubernetes.io/host-path/3592092f8228b2ad53cc3cbd9380195c-addons") pod "kube-addon-manager-k8s-master-89242181-0" (UID: "3592092f8228b2ad53cc3cbd9380195c") Jul 13 23:22:31.035791 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:31.035702 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "msi" (UniqueName: "kubernetes.io/host-path/3592092f8228b2ad53cc3cbd9380195c-msi") pod "kube-addon-manager-k8s-master-89242181-0" (UID: "3592092f8228b2ad53cc3cbd9380195c") Jul 13 23:22:31.326993 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:31.326681 5936 kuberuntime_manager.go:419] No sandbox for pod "kube-addon-manager-k8s-master-89242181-0_kube-system(3592092f8228b2ad53cc3cbd9380195c)" can be found. Need to start a new one Jul 13 23:22:31.762247 k8s-master-89242181-0 kubelet[5936]: W0713 23:22:31.762225 5936 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/55a812f9b87f89eb3e7fd0c59fef6a6f/volumes" does not exist Jul 13 23:22:31.857394 k8s-master-89242181-0 kubelet[5936]: W0713 23:22:31.857363 5936 pod_container_deletor.go:79] Container "a581b944fa7f0797463425686cd0b51ca031f69b34a40e4869b452191b4894f3" not found in pod's containers Jul 13 23:22:31.867206 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:31.867186 5936 kuberuntime_manager.go:798] container &Container{Name:kube-addon-manager,Image:mcr.microsoft.com/oss/kubernetes/kube-addon-manager:v9.1.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBECONFIG,Value:/var/lib/kubelet/kubeconfig,ValueFrom:nil,},EnvVar{Name:ADDON_PATH,Value:,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-addon-manager-k8s-master-89242181-0_kube-system(55a812f9b87f89eb3e7fd0c59fef6a6f): CreateContainerConfigError: open /var/lib/kubelet/pods/55a812f9b87f89eb3e7fd0c59fef6a6f/etc-hosts: no such file or directory Jul 13 23:22:31.867497 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:31.867479 5936 pod_workers.go:191] Error syncing pod 55a812f9b87f89eb3e7fd0c59fef6a6f ("kube-addon-manager-k8s-master-89242181-0_kube-system(55a812f9b87f89eb3e7fd0c59fef6a6f)"), skipping: failed to "StartContainer" for "kube-addon-manager" with CreateContainerConfigError: "open /var/lib/kubelet/pods/55a812f9b87f89eb3e7fd0c59fef6a6f/etc-hosts: no such file or directory" Jul 13 23:22:31.935737 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:31.935704 5936 kubelet.go:1845] SyncLoop (PLEG): "kube-addon-manager-k8s-master-89242181-0_kube-system(3592092f8228b2ad53cc3cbd9380195c)", 
event: &pleg.PodLifecycleEvent{ID:"3592092f8228b2ad53cc3cbd9380195c", Type:"ContainerDied", Data:"30fc28ad0dfb0b7dde8ae7706e9261d3bf58b1f3b5fa90c44557c482b877721a"} Jul 13 23:22:31.935948 k8s-master-89242181-0 kubelet[5936]: W0713 23:22:31.935779 5936 pod_container_deletor.go:79] Container "30fc28ad0dfb0b7dde8ae7706e9261d3bf58b1f3b5fa90c44557c482b877721a" not found in pod's containers Jul 13 23:22:32.945500 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:32.945465 5936 kubelet.go:1845] SyncLoop (PLEG): "kube-addon-manager-k8s-master-89242181-0_kube-system(3592092f8228b2ad53cc3cbd9380195c)", event: &pleg.PodLifecycleEvent{ID:"3592092f8228b2ad53cc3cbd9380195c", Type:"ContainerStarted", Data:"30fc28ad0dfb0b7dde8ae7706e9261d3bf58b1f3b5fa90c44557c482b877721a"} Jul 13 23:22:32.946049 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:32.946020 5936 kubelet.go:1845] SyncLoop (PLEG): "kube-addon-manager-k8s-master-89242181-0_kube-system(3592092f8228b2ad53cc3cbd9380195c)", event: &pleg.PodLifecycleEvent{ID:"3592092f8228b2ad53cc3cbd9380195c", Type:"ContainerStarted", Data:"a4269972beb8d318a8be96ed8f7f3bd0e7ffab74ec06c1641402c90d23319ad8"} Jul 13 23:22:32.946167 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:32.945676 5936 kubelet.go:1552] Trying to delete pod kube-addon-manager-k8s-master-89242181-0_kube-system 13f578d6-f9ff-4765-9c9a-1d02fc46bba1 Jul 13 23:22:32.946274 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:32.946261 5936 mirror_client.go:125] Deleting a mirror pod "kube-addon-manager-k8s-master-89242181-0_kube-system" (uid (*types.UID)(0xc000279060)) ... skipping 18 lines ... Jul 13 23:22:39.364196 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:39.364179 5936 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-cqwqr" (UniqueName: "kubernetes.io/secret/a244b257-53f1-4c95-b823-e5a5d0d80173-coredns-token-cqwqr") pod "coredns-7ff794b897-6vt24" (UID: "a244b257-53f1-4c95-b823-e5a5d0d80173") Jul 13 23:22:39.466481 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:39.465158 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a244b257-53f1-4c95-b823-e5a5d0d80173-config-volume") pod "coredns-7ff794b897-6vt24" (UID: "a244b257-53f1-4c95-b823-e5a5d0d80173") Jul 13 23:22:39.466481 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:39.465274 5936 reconciler.go:269] operationExecutor.MountVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a244b257-53f1-4c95-b823-e5a5d0d80173-config-volume") pod "coredns-7ff794b897-6vt24" (UID: "a244b257-53f1-4c95-b823-e5a5d0d80173") Jul 13 23:22:39.466481 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:39.465335 5936 reconciler.go:269] operationExecutor.MountVolume started for volume "config-custom" (UniqueName: "kubernetes.io/configmap/a244b257-53f1-4c95-b823-e5a5d0d80173-config-custom") pod "coredns-7ff794b897-6vt24" (UID: "a244b257-53f1-4c95-b823-e5a5d0d80173") Jul 13 23:22:39.466481 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:39.465369 5936 reconciler.go:269] operationExecutor.MountVolume started for volume "coredns-token-cqwqr" (UniqueName: "kubernetes.io/secret/a244b257-53f1-4c95-b823-e5a5d0d80173-coredns-token-cqwqr") pod "coredns-7ff794b897-6vt24" (UID: "a244b257-53f1-4c95-b823-e5a5d0d80173") Jul 13 23:22:39.466481 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:39.465706 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "config-custom" (UniqueName: 
"kubernetes.io/configmap/a244b257-53f1-4c95-b823-e5a5d0d80173-config-custom") pod "coredns-7ff794b897-6vt24" (UID: "a244b257-53f1-4c95-b823-e5a5d0d80173") Jul 13 23:22:39.499875 k8s-master-89242181-0 kubelet[5936]: W0713 23:22:39.499826 5936 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-r09637960406546e2ada0a4bad7d24c12.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-r09637960406546e2ada0a4bad7d24c12.scope: no such file or directory Jul 13 23:22:39.501188 k8s-master-89242181-0 kubelet[5936]: W0713 23:22:39.501161 5936 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-r09637960406546e2ada0a4bad7d24c12.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-r09637960406546e2ada0a4bad7d24c12.scope: no such file or directory Jul 13 23:22:39.505310 k8s-master-89242181-0 kubelet[5936]: W0713 23:22:39.505267 5936 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-r09637960406546e2ada0a4bad7d24c12.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-r09637960406546e2ada0a4bad7d24c12.scope: no such file or directory Jul 13 23:22:39.505692 k8s-master-89242181-0 kubelet[5936]: W0713 23:22:39.505669 5936 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-r09637960406546e2ada0a4bad7d24c12.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-r09637960406546e2ada0a4bad7d24c12.scope: no such file or directory Jul 13 23:22:39.506655 k8s-master-89242181-0 kubelet[5936]: W0713 23:22:39.506109 5936 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-r09637960406546e2ada0a4bad7d24c12.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-r09637960406546e2ada0a4bad7d24c12.scope: no such file or directory Jul 13 23:22:39.508417 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:39.508401 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "coredns-token-cqwqr" (UniqueName: "kubernetes.io/secret/a244b257-53f1-4c95-b823-e5a5d0d80173-coredns-token-cqwqr") pod "coredns-7ff794b897-6vt24" (UID: "a244b257-53f1-4c95-b823-e5a5d0d80173") Jul 13 23:22:39.589999 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:39.589920 5936 kuberuntime_manager.go:419] No sandbox for pod "coredns-7ff794b897-6vt24_kube-system(a244b257-53f1-4c95-b823-e5a5d0d80173)" can be found. Need to start a new one Jul 13 23:22:39.997765 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:39.997708 5936 certificate_manager.go:412] Rotating certificates Jul 13 23:22:40.048120 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:40.048097 5936 reflector.go:207] Starting reflector *v1.CertificateSigningRequest (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jul 13 23:22:40.050552 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:40.050526 5936 csr.go:249] certificate signing request csr-k9vpn is approved, waiting to be issued Jul 13 23:22:40.051486 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:40.051464 5936 csr.go:245] certificate signing request csr-k9vpn is issued ... skipping 6 lines ... 
Jul 13 23:22:40.593793 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:40.593721 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "coredns-autoscaler-token-g96b2" (UniqueName: "kubernetes.io/secret/2854622c-dc90-413b-a30f-2598345ae488-coredns-autoscaler-token-g96b2") pod "coredns-autoscaler-87b67c5fd-sv9l2" (UID: "2854622c-dc90-413b-a30f-2598345ae488") Jul 13 23:22:40.702353 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:40.702315 5936 kuberuntime_manager.go:419] No sandbox for pod "coredns-autoscaler-87b67c5fd-sv9l2_kube-system(2854622c-dc90-413b-a30f-2598345ae488)" can be found. Need to start a new one Jul 13 23:22:41.615294 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:41.052635 5936 certificate_manager.go:556] Certificate expiration is 2021-07-13 23:17:40 +0000 UTC, rotation deadline is 2021-04-14 11:54:34.82115519 +0000 UTC Jul 13 23:22:41.615294 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:41.052673 5936 certificate_manager.go:288] Waiting 6588h31m53.768487857s for next certificate rotation Jul 13 23:22:42.860317 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:42.860268 5936 kubelet.go:1845] SyncLoop (PLEG): "coredns-7ff794b897-6vt24_kube-system(a244b257-53f1-4c95-b823-e5a5d0d80173)", event: &pleg.PodLifecycleEvent{ID:"a244b257-53f1-4c95-b823-e5a5d0d80173", Type:"ContainerDied", Data:"997cd75b4ed3fc45376b0b5f3f0caf21b2cef0cb3b2127390316d1946f342573"} Jul 13 23:22:42.860948 k8s-master-89242181-0 kubelet[5936]: W0713 23:22:42.860354 5936 pod_container_deletor.go:79] Container "997cd75b4ed3fc45376b0b5f3f0caf21b2cef0cb3b2127390316d1946f342573" not found in pod's containers Jul 13 23:22:42.922353 k8s-master-89242181-0 kubelet[5936]: W0713 23:22:42.922313 5936 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/ntp-systemd-netif.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/ntp-systemd-netif.service: no such file or directory Jul 13 23:22:44.083534 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:44.081204 5936 kubelet.go:1845] SyncLoop (PLEG): "coredns-7ff794b897-6vt24_kube-system(a244b257-53f1-4c95-b823-e5a5d0d80173)", event: &pleg.PodLifecycleEvent{ID:"a244b257-53f1-4c95-b823-e5a5d0d80173", Type:"ContainerStarted", Data:"997cd75b4ed3fc45376b0b5f3f0caf21b2cef0cb3b2127390316d1946f342573"} Jul 13 23:22:44.090110 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:44.090059 5936 kubelet.go:1845] SyncLoop (PLEG): "coredns-autoscaler-87b67c5fd-sv9l2_kube-system(2854622c-dc90-413b-a30f-2598345ae488)", event: &pleg.PodLifecycleEvent{ID:"2854622c-dc90-413b-a30f-2598345ae488", Type:"ContainerStarted", Data:"b16fc90de5278786264215d4aa4f30ab5c02e4ee82822830c7444bd0ce100799"} Jul 13 23:22:45.110219 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:45.110172 5936 kubelet.go:1845] SyncLoop (PLEG): "coredns-autoscaler-87b67c5fd-sv9l2_kube-system(2854622c-dc90-413b-a30f-2598345ae488)", event: &pleg.PodLifecycleEvent{ID:"2854622c-dc90-413b-a30f-2598345ae488", Type:"ContainerStarted", Data:"e8180ad69b2dba9bc691bcd9b89fa330a558d8db14d545d616af035f610f4697"} Jul 13 23:22:45.124069 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:45.124017 5936 kubelet.go:1845] SyncLoop (PLEG): "coredns-7ff794b897-6vt24_kube-system(a244b257-53f1-4c95-b823-e5a5d0d80173)", event: &pleg.PodLifecycleEvent{ID:"a244b257-53f1-4c95-b823-e5a5d0d80173", Type:"ContainerStarted", Data:"c43e782436db006ded56c581a2fde17c1a32029418425eb94a904166cfc114f2"} Jul 13 23:22:46.480396 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:46.479513 
5936 kubelet.go:1813] SyncLoop (ADD, "api"): "blobfuse-flexvol-installer-rdlvb_kube-system(29b4ece1-1483-43ef-b668-e9d637b7b7a6)" Jul 13 23:22:46.480396 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:46.479587 5936 topology_manager.go:233] [topologymanager] Topology Admit Handler ... skipping 50 lines ... Jul 13 23:22:46.997103 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:46.997078 5936 transport.go:132] certificate rotation detected, shutting down client connections to start using new credentials Jul 13 23:22:46.998767 k8s-master-89242181-0 kubelet[5936]: W0713 23:22:46.998739 5936 reflector.go:424] object-"kube-system"/"azure-ip-masq-agent-config": watch of *v1.ConfigMap ended with: very short watch: object-"kube-system"/"azure-ip-masq-agent-config": Unexpected watch close - watch lasted less than a second and no items received Jul 13 23:22:46.999062 k8s-master-89242181-0 kubelet[5936]: W0713 23:22:46.999036 5936 reflector.go:424] object-"kube-system"/"metrics-server-token-np9tg": watch of *v1.Secret ended with: very short watch: object-"kube-system"/"metrics-server-token-np9tg": Unexpected watch close - watch lasted less than a second and no items received Jul 13 23:22:46.999475 k8s-master-89242181-0 kubelet[5936]: W0713 23:22:46.999457 5936 reflector.go:424] object-"kube-system"/"default-token-c2zcp": watch of *v1.Secret ended with: very short watch: object-"kube-system"/"default-token-c2zcp": Unexpected watch close - watch lasted less than a second and no items received Jul 13 23:22:46.999692 k8s-master-89242181-0 kubelet[5936]: W0713 23:22:46.999678 5936 reflector.go:424] object-"kube-system"/"kube-proxy-config": watch of *v1.ConfigMap ended with: very short watch: object-"kube-system"/"kube-proxy-config": Unexpected watch close - watch lasted less than a second and no items received Jul 13 23:22:47.091204 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:47.091181 5936 reconciler.go:269] operationExecutor.MountVolume started for volume "metrics-server-token-np9tg" (UniqueName: "kubernetes.io/secret/5e256c24-b603-4fc6-8d1a-91d1f637d3bc-metrics-server-token-np9tg") pod "metrics-server-84d5cf8ccf-l88lb" (UID: "5e256c24-b603-4fc6-8d1a-91d1f637d3bc") Jul 13 23:22:47.122339 k8s-master-89242181-0 kubelet[5936]: W0713 23:22:47.122294 5936 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-r29adb25783284e6bb675d3d6e9d6c638.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-r29adb25783284e6bb675d3d6e9d6c638.scope: no such file or directory Jul 13 23:22:47.122339 k8s-master-89242181-0 kubelet[5936]: W0713 23:22:47.122344 5936 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-r29adb25783284e6bb675d3d6e9d6c638.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-r29adb25783284e6bb675d3d6e9d6c638.scope: no such file or directory Jul 13 23:22:47.122541 k8s-master-89242181-0 kubelet[5936]: W0713 23:22:47.122371 5936 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-r29adb25783284e6bb675d3d6e9d6c638.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-r29adb25783284e6bb675d3d6e9d6c638.scope: no such file or directory Jul 13 23:22:47.122541 k8s-master-89242181-0 kubelet[5936]: W0713 23:22:47.122390 5936 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-r29adb25783284e6bb675d3d6e9d6c638.scope": 0x40000100 == 
IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-r29adb25783284e6bb675d3d6e9d6c638.scope: no such file or directory Jul 13 23:22:47.122541 k8s-master-89242181-0 kubelet[5936]: W0713 23:22:47.122416 5936 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-r29adb25783284e6bb675d3d6e9d6c638.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-r29adb25783284e6bb675d3d6e9d6c638.scope: no such file or directory Jul 13 23:22:47.122892 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:47.122869 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "metrics-server-token-np9tg" (UniqueName: "kubernetes.io/secret/5e256c24-b603-4fc6-8d1a-91d1f637d3bc-metrics-server-token-np9tg") pod "metrics-server-84d5cf8ccf-l88lb" (UID: "5e256c24-b603-4fc6-8d1a-91d1f637d3bc") Jul 13 23:22:47.191602 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:47.191568 5936 kuberuntime_manager.go:419] No sandbox for pod "metrics-server-84d5cf8ccf-l88lb_kube-system(5e256c24-b603-4fc6-8d1a-91d1f637d3bc)" can be found. Need to start a new one Jul 13 23:22:48.815963 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:48.815926 5936 provider.go:102] Refreshing cache for provider: *azure.acrProvider Jul 13 23:22:49.232276 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:49.231898 5936 kubelet.go:1845] SyncLoop (PLEG): "azure-ip-masq-agent-rx86g_kube-system(a5fdc294-bd1d-47c8-98d2-4e1af9bacce7)", event: &pleg.PodLifecycleEvent{ID:"a5fdc294-bd1d-47c8-98d2-4e1af9bacce7", Type:"ContainerDied", Data:"3db4905a2f1095fcf08c87dd9ffe72c102e5a88e5665810e8d69cd5d3074660b"} Jul 13 23:22:49.232276 k8s-master-89242181-0 kubelet[5936]: W0713 23:22:49.231994 5936 pod_container_deletor.go:79] Container "3db4905a2f1095fcf08c87dd9ffe72c102e5a88e5665810e8d69cd5d3074660b" not found in pod's containers Jul 13 23:22:49.240063 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:49.240023 5936 kubelet.go:1845] SyncLoop (PLEG): "kube-proxy-w9zmm_kube-system(73a6a591-25c0-4373-bfb8-64085747a713)", event: &pleg.PodLifecycleEvent{ID:"73a6a591-25c0-4373-bfb8-64085747a713", Type:"ContainerDied", Data:"04ab14435e0eab0a99791158dc55f56a29fec53f87598bd1d8466ab872f620ba"} ... skipping 5 lines ... 
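The recurring watcher.go:87 warnings are cAdvisor racing short-lived systemd run-*.scope cgroups: the directory is created and removed again before inotify_add_watch runs, so the ENOENT is typically benign noise rather than a failure. A trivial standard-library sketch of the same existence race (the path is taken from the log; this is illustrative and not cAdvisor's code):

```go
package cgroupcheck

import (
	"fmt"
	"os"
)

// scopeStillExists checks whether a transient cgroup scope directory is still
// present; by the time cAdvisor tries to watch these run-*.scope paths the
// runtime has often already removed them, producing the warnings above.
func scopeStillExists(path string) bool {
	_, err := os.Stat(path)
	return err == nil
}

func Example() {
	p := "/sys/fs/cgroup/memory/system.slice/run-r29adb25783284e6bb675d3d6e9d6c638.scope"
	fmt.Println(p, "exists:", scopeStillExists(p))
}
```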
Jul 13 23:22:51.451425 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:51.451391 5936 kubelet.go:1845] SyncLoop (PLEG): "blobfuse-flexvol-installer-rdlvb_kube-system(29b4ece1-1483-43ef-b668-e9d637b7b7a6)", event: &pleg.PodLifecycleEvent{ID:"29b4ece1-1483-43ef-b668-e9d637b7b7a6", Type:"ContainerStarted", Data:"b630e33357562d8b3afd34dcd0311156db07837db8246d117ba7385c3ebc110a"} Jul 13 23:22:51.455886 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:51.455853 5936 kubelet.go:1845] SyncLoop (PLEG): "azure-ip-masq-agent-rx86g_kube-system(a5fdc294-bd1d-47c8-98d2-4e1af9bacce7)", event: &pleg.PodLifecycleEvent{ID:"a5fdc294-bd1d-47c8-98d2-4e1af9bacce7", Type:"ContainerStarted", Data:"3db4905a2f1095fcf08c87dd9ffe72c102e5a88e5665810e8d69cd5d3074660b"} Jul 13 23:22:51.456008 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:51.455891 5936 kubelet.go:1845] SyncLoop (PLEG): "azure-ip-masq-agent-rx86g_kube-system(a5fdc294-bd1d-47c8-98d2-4e1af9bacce7)", event: &pleg.PodLifecycleEvent{ID:"a5fdc294-bd1d-47c8-98d2-4e1af9bacce7", Type:"ContainerStarted", Data:"fbb7cafaf7a7512427f90755e063eed303b4458ca27e196904cac230fba2fae8"} Jul 13 23:22:51.461573 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:51.461535 5936 kubelet.go:1845] SyncLoop (PLEG): "kube-proxy-w9zmm_kube-system(73a6a591-25c0-4373-bfb8-64085747a713)", event: &pleg.PodLifecycleEvent{ID:"73a6a591-25c0-4373-bfb8-64085747a713", Type:"ContainerStarted", Data:"04ab14435e0eab0a99791158dc55f56a29fec53f87598bd1d8466ab872f620ba"} Jul 13 23:22:51.473678 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:51.473636 5936 kubelet.go:1845] SyncLoop (PLEG): "metrics-server-84d5cf8ccf-l88lb_kube-system(5e256c24-b603-4fc6-8d1a-91d1f637d3bc)", event: &pleg.PodLifecycleEvent{ID:"5e256c24-b603-4fc6-8d1a-91d1f637d3bc", Type:"ContainerStarted", Data:"99da49f179875072e06b8090ca7cabc2236a4dc5b72d59ad6bcf3b98062ada9c"} Jul 13 23:22:51.541118 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:51.541081 5936 plugins.go:647] Loaded volume plugin "azure/blobfuse" Jul 13 23:22:52.044676 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:52.044636 5936 prober.go:117] Readiness probe for "coredns-7ff794b897-6vt24_kube-system(a244b257-53f1-4c95-b823-e5a5d0d80173):coredns" failed (failure): HTTP probe failed with statuscode: 503 Jul 13 23:22:52.184838 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:52.184557 5936 kubelet.go:1813] SyncLoop (ADD, "api"): "azure-cni-networkmonitor-pl76s_kube-system(769c65e5-820b-42f5-a72e-e31108d619eb)" Jul 13 23:22:52.184838 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:52.184616 5936 topology_manager.go:233] [topologymanager] Topology Admit Handler Jul 13 23:22:52.310264 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:52.310169 5936 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "telemetry" (UniqueName: "kubernetes.io/host-path/769c65e5-820b-42f5-a72e-e31108d619eb-telemetry") pod "azure-cni-networkmonitor-pl76s" (UID: "769c65e5-820b-42f5-a72e-e31108d619eb") Jul 13 23:22:52.310264 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:52.310214 5936 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-c2zcp" (UniqueName: "kubernetes.io/secret/769c65e5-820b-42f5-a72e-e31108d619eb-default-token-c2zcp") pod "azure-cni-networkmonitor-pl76s" (UID: "769c65e5-820b-42f5-a72e-e31108d619eb") Jul 13 23:22:52.310264 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:52.310245 5936 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume 
"ebtables-rule-repo" (UniqueName: "kubernetes.io/host-path/769c65e5-820b-42f5-a72e-e31108d619eb-ebtables-rule-repo") pod "azure-cni-networkmonitor-pl76s" (UID: "769c65e5-820b-42f5-a72e-e31108d619eb") Jul 13 23:22:52.310264 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:52.310268 5936 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "log" (UniqueName: "kubernetes.io/host-path/769c65e5-820b-42f5-a72e-e31108d619eb-log") pod "azure-cni-networkmonitor-pl76s" (UID: "769c65e5-820b-42f5-a72e-e31108d619eb") Jul 13 23:22:52.402828 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:52.402787 5936 kubelet.go:1813] SyncLoop (ADD, "api"): "csi-secrets-store-nqq5m_kube-system(5fd00258-0804-4bcd-ada5-18c1d7a0e424)" Jul 13 23:22:52.403020 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:52.402888 5936 topology_manager.go:233] [topologymanager] Topology Admit Handler Jul 13 23:22:52.403020 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:52.402944 5936 reflector.go:207] Starting reflector *v1.Secret (0s) from object-"kube-system"/"secrets-store-csi-driver-token-nt52h" Jul 13 23:22:52.405906 k8s-master-89242181-0 kubelet[5936]: E0713 23:22:52.405875 5936 reflector.go:127] object-"kube-system"/"secrets-store-csi-driver-token-nt52h": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "secrets-store-csi-driver-token-nt52h" is forbidden: User "system:node:k8s-master-89242181-0" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'k8s-master-89242181-0' and this object Jul 13 23:22:52.410883 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:52.410855 5936 reconciler.go:269] operationExecutor.MountVolume started for volume "telemetry" (UniqueName: "kubernetes.io/host-path/769c65e5-820b-42f5-a72e-e31108d619eb-telemetry") pod "azure-cni-networkmonitor-pl76s" (UID: "769c65e5-820b-42f5-a72e-e31108d619eb") Jul 13 23:22:52.410986 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:52.410899 5936 reconciler.go:269] operationExecutor.MountVolume started for volume "default-token-c2zcp" (UniqueName: "kubernetes.io/secret/769c65e5-820b-42f5-a72e-e31108d619eb-default-token-c2zcp") pod "azure-cni-networkmonitor-pl76s" (UID: "769c65e5-820b-42f5-a72e-e31108d619eb") Jul 13 23:22:52.410986 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:52.410942 5936 reconciler.go:269] operationExecutor.MountVolume started for volume "log" (UniqueName: "kubernetes.io/host-path/769c65e5-820b-42f5-a72e-e31108d619eb-log") pod "azure-cni-networkmonitor-pl76s" (UID: "769c65e5-820b-42f5-a72e-e31108d619eb") Jul 13 23:22:52.410986 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:52.410972 5936 reconciler.go:269] operationExecutor.MountVolume started for volume "ebtables-rule-repo" (UniqueName: "kubernetes.io/host-path/769c65e5-820b-42f5-a72e-e31108d619eb-ebtables-rule-repo") pod "azure-cni-networkmonitor-pl76s" (UID: "769c65e5-820b-42f5-a72e-e31108d619eb") Jul 13 23:22:52.411133 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:52.411054 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "ebtables-rule-repo" (UniqueName: "kubernetes.io/host-path/769c65e5-820b-42f5-a72e-e31108d619eb-ebtables-rule-repo") pod "azure-cni-networkmonitor-pl76s" (UID: "769c65e5-820b-42f5-a72e-e31108d619eb") Jul 13 23:22:52.411133 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:52.411116 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "telemetry" (UniqueName: 
"kubernetes.io/host-path/769c65e5-820b-42f5-a72e-e31108d619eb-telemetry") pod "azure-cni-networkmonitor-pl76s" (UID: "769c65e5-820b-42f5-a72e-e31108d619eb") ... skipping 20 lines ... Jul 13 23:22:52.863330 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:52.615069 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "mountpoint-dir" (UniqueName: "kubernetes.io/host-path/5fd00258-0804-4bcd-ada5-18c1d7a0e424-mountpoint-dir") pod "csi-secrets-store-nqq5m" (UID: "5fd00258-0804-4bcd-ada5-18c1d7a0e424") Jul 13 23:22:52.863330 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:52.716041 5936 reconciler.go:269] operationExecutor.MountVolume started for volume "providervol" (UniqueName: "kubernetes.io/host-path/ff423d99-d968-4de0-9257-e5c8fa34127d-providervol") pod "csi-secrets-store-provider-azure-pxr4g" (UID: "ff423d99-d968-4de0-9257-e5c8fa34127d") Jul 13 23:22:52.863330 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:52.716096 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "providervol" (UniqueName: "kubernetes.io/host-path/ff423d99-d968-4de0-9257-e5c8fa34127d-providervol") pod "csi-secrets-store-provider-azure-pxr4g" (UID: "ff423d99-d968-4de0-9257-e5c8fa34127d") Jul 13 23:22:52.863632 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:52.716118 5936 reconciler.go:269] operationExecutor.MountVolume started for volume "default-token-c2zcp" (UniqueName: "kubernetes.io/secret/ff423d99-d968-4de0-9257-e5c8fa34127d-default-token-c2zcp") pod "csi-secrets-store-provider-azure-pxr4g" (UID: "ff423d99-d968-4de0-9257-e5c8fa34127d") Jul 13 23:22:52.864101 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:52.864078 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "plugin-dir" (UniqueName: "kubernetes.io/host-path/5fd00258-0804-4bcd-ada5-18c1d7a0e424-plugin-dir") pod "csi-secrets-store-nqq5m" (UID: "5fd00258-0804-4bcd-ada5-18c1d7a0e424") Jul 13 23:22:52.864357 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:52.864341 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "providers-dir" (UniqueName: "kubernetes.io/host-path/5fd00258-0804-4bcd-ada5-18c1d7a0e424-providers-dir") pod "csi-secrets-store-nqq5m" (UID: "5fd00258-0804-4bcd-ada5-18c1d7a0e424") Jul 13 23:22:52.908632 k8s-master-89242181-0 kubelet[5936]: W0713 23:22:52.908598 5936 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-ra1d4c8fa9e874311b77b99fb478e525a.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-ra1d4c8fa9e874311b77b99fb478e525a.scope: no such file or directory Jul 13 23:22:52.910571 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:52.910552 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "default-token-c2zcp" (UniqueName: "kubernetes.io/secret/ff423d99-d968-4de0-9257-e5c8fa34127d-default-token-c2zcp") pod "csi-secrets-store-provider-azure-pxr4g" (UID: "ff423d99-d968-4de0-9257-e5c8fa34127d") Jul 13 23:22:53.164269 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:53.164156 5936 kuberuntime_manager.go:419] No sandbox for pod "csi-secrets-store-provider-azure-pxr4g_kube-system(ff423d99-d968-4de0-9257-e5c8fa34127d)" can be found. 
Need to start a new one Jul 13 23:22:53.594818 k8s-master-89242181-0 kubelet[5936]: W0713 23:22:53.594660 5936 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-ra8622bc82bd2444c8d70d1fe539d4573.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-ra8622bc82bd2444c8d70d1fe539d4573.scope: no such file or directory Jul 13 23:22:53.598190 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:53.596224 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "secrets-store-csi-driver-token-nt52h" (UniqueName: "kubernetes.io/secret/5fd00258-0804-4bcd-ada5-18c1d7a0e424-secrets-store-csi-driver-token-nt52h") pod "csi-secrets-store-nqq5m" (UID: "5fd00258-0804-4bcd-ada5-18c1d7a0e424") Jul 13 23:22:53.715946 k8s-master-89242181-0 kubelet[5936]: I0713 23:22:53.715364 5936 kuberuntime_manager.go:419] No sandbox for pod "csi-secrets-store-nqq5m_kube-system(5fd00258-0804-4bcd-ada5-18c1d7a0e424)" can be found. Need to start a new one Jul 13 23:23:00.025539 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:00.025501 5936 kube_docker_client.go:344] Pulling image "k8sprowinternal.azurecr.io/kube-proxy-amd64:v1.20.0-alpha.0-150-g240a72b5c0a": "69e4ecaaf634: Pull complete " Jul 13 23:23:00.275052 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:00.274986 5936 kube_docker_client.go:347] Stop pulling image "k8sprowinternal.azurecr.io/kube-proxy-amd64:v1.20.0-alpha.0-150-g240a72b5c0a": "Status: Downloaded newer image for k8sprowinternal.azurecr.io/kube-proxy-amd64:v1.20.0-alpha.0-150-g240a72b5c0a" Jul 13 23:23:01.521664 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:01.521370 5936 kubelet.go:1845] SyncLoop (PLEG): "csi-secrets-store-nqq5m_kube-system(5fd00258-0804-4bcd-ada5-18c1d7a0e424)", event: &pleg.PodLifecycleEvent{ID:"5fd00258-0804-4bcd-ada5-18c1d7a0e424", Type:"ContainerDied", Data:"da69629d22f30b48b7efb13b8d3323b9be7a282c865f961982f48e44be258c61"} Jul 13 23:23:01.521664 k8s-master-89242181-0 kubelet[5936]: W0713 23:23:01.521486 5936 pod_container_deletor.go:79] Container "da69629d22f30b48b7efb13b8d3323b9be7a282c865f961982f48e44be258c61" not found in pod's containers Jul 13 23:23:01.526222 k8s-master-89242181-0 kubelet[5936]: E0713 23:23:01.525995 5936 remote_runtime.go:329] ContainerStatus "aa2349f64aea0369142531b62e4915ec8558a6982a5e41f915b979f626a721a9" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: aa2349f64aea0369142531b62e4915ec8558a6982a5e41f915b979f626a721a9 Jul 13 23:23:01.526222 k8s-master-89242181-0 kubelet[5936]: E0713 23:23:01.526026 5936 kuberuntime_manager.go:949] getPodContainerStatuses for pod "azure-cni-networkmonitor-pl76s_kube-system(769c65e5-820b-42f5-a72e-e31108d619eb)" failed: rpc error: code = Unknown desc = Error: No such container: aa2349f64aea0369142531b62e4915ec8558a6982a5e41f915b979f626a721a9 Jul 13 23:23:01.638376 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:01.638329 5936 kubelet.go:1845] SyncLoop (PLEG): "csi-secrets-store-provider-azure-pxr4g_kube-system(ff423d99-d968-4de0-9257-e5c8fa34127d)", event: &pleg.PodLifecycleEvent{ID:"ff423d99-d968-4de0-9257-e5c8fa34127d", Type:"ContainerDied", Data:"92d4a79a096faea4f1b77ea7d5452fb6bd497edf2a876014695ba315d704aee8"} Jul 13 23:23:01.638612 k8s-master-89242181-0 kubelet[5936]: W0713 23:23:01.638417 5936 pod_container_deletor.go:79] Container "92d4a79a096faea4f1b77ea7d5452fb6bd497edf2a876014695ba315d704aee8" not found in pod's containers Jul 13 23:23:02.045767 k8s-master-89242181-0 kubelet[5936]: 
I0713 23:23:02.044416 5936 prober.go:117] Readiness probe for "coredns-7ff794b897-6vt24_kube-system(a244b257-53f1-4c95-b823-e5a5d0d80173):coredns" failed (failure): HTTP probe failed with statuscode: 503 Jul 13 23:23:02.654276 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:02.654076 5936 kubelet.go:1845] SyncLoop (PLEG): "kube-proxy-w9zmm_kube-system(73a6a591-25c0-4373-bfb8-64085747a713)", event: &pleg.PodLifecycleEvent{ID:"73a6a591-25c0-4373-bfb8-64085747a713", Type:"ContainerStarted", Data:"55b20d07e4985997e0a979fd730743d7a6db37c88ed95de7a474000fb194543a"} Jul 13 23:23:02.838504 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:02.838453 5936 kubelet.go:1845] SyncLoop (PLEG): "csi-secrets-store-provider-azure-pxr4g_kube-system(ff423d99-d968-4de0-9257-e5c8fa34127d)", event: &pleg.PodLifecycleEvent{ID:"ff423d99-d968-4de0-9257-e5c8fa34127d", Type:"ContainerStarted", Data:"92d4a79a096faea4f1b77ea7d5452fb6bd497edf2a876014695ba315d704aee8"} Jul 13 23:23:03.139060 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:03.138961 5936 kubelet.go:1845] SyncLoop (PLEG): "csi-secrets-store-nqq5m_kube-system(5fd00258-0804-4bcd-ada5-18c1d7a0e424)", event: &pleg.PodLifecycleEvent{ID:"5fd00258-0804-4bcd-ada5-18c1d7a0e424", Type:"ContainerStarted", Data:"da69629d22f30b48b7efb13b8d3323b9be7a282c865f961982f48e44be258c61"} Jul 13 23:23:03.154729 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:03.154649 5936 kubelet.go:1845] SyncLoop (PLEG): "azure-cni-networkmonitor-pl76s_kube-system(769c65e5-820b-42f5-a72e-e31108d619eb)", event: &pleg.PodLifecycleEvent{ID:"769c65e5-820b-42f5-a72e-e31108d619eb", Type:"ContainerStarted", Data:"67087da7b54879a406226678fd3066d2b6045678e824a97d107d6685b013708c"} Jul 13 23:23:03.178088 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:03.178059 5936 reconciler.go:196] operationExecutor.UnmountVolume started for volume "etc-kubernetes" (UniqueName: "kubernetes.io/host-path/5ddf9fa33d2dcb615b0173624c2621da-etc-kubernetes") pod "5ddf9fa33d2dcb615b0173624c2621da" (UID: "5ddf9fa33d2dcb615b0173624c2621da") Jul 13 23:23:03.178338 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:03.178321 5936 reconciler.go:196] operationExecutor.UnmountVolume started for volume "var-lib-kubelet" (UniqueName: "kubernetes.io/host-path/5ddf9fa33d2dcb615b0173624c2621da-var-lib-kubelet") pod "5ddf9fa33d2dcb615b0173624c2621da" (UID: "5ddf9fa33d2dcb615b0173624c2621da") ... skipping 9 lines ... 
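The prober.go:117 lines show the kubelet's httpGet readiness probe for the coredns pod returning 503 while the cluster network is still converging; the kubelet treats any status outside the 200–399 range as a probe failure. A minimal sketch of that style of check follows; the URL is an assumption for illustration only, since the actual probe endpoint comes from the CoreDNS pod spec, which is not shown in this log.

```go
package probecheck

import (
	"fmt"
	"net/http"
	"time"
)

// httpReady performs a kubelet-style httpGet readiness check: a status code
// in [200, 400) counts as ready, anything else (e.g. the 503 above) does not.
func httpReady(url string) (bool, error) {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode >= 200 && resp.StatusCode < 400, nil
}

func Example() {
	// Hypothetical pod IP and port; substitute the values from the real
	// CoreDNS pod spec and status.
	ok, err := httpReady("http://10.240.0.10:8181/ready")
	fmt.Println(ok, err)
}
```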
Jul 13 23:23:03.179077 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:03.179003 5936 reconciler.go:319] Volume detached for volume "var-lib-kubelet" (UniqueName: "kubernetes.io/host-path/5ddf9fa33d2dcb615b0173624c2621da-var-lib-kubelet") on node "k8s-master-89242181-0" DevicePath "" Jul 13 23:23:03.749709 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:03.749622 5936 kubelet_pods.go:1247] Killing unwanted pod "kube-addon-manager-k8s-master-89242181-0" Jul 13 23:23:03.757269 k8s-master-89242181-0 kubelet[5936]: W0713 23:23:03.757246 5936 kubelet_getters.go:300] Path "/var/lib/kubelet/pods/5ddf9fa33d2dcb615b0173624c2621da/volumes" does not exist Jul 13 23:23:04.165767 k8s-master-89242181-0 kubelet[5936]: W0713 23:23:04.165741 5936 pod_container_deletor.go:79] Container "5c62bd3d47c3ad3c4ca80540865b409c76a2760e184465a542d53df8da127a09" not found in pod's containers Jul 13 23:23:04.170664 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:04.170628 5936 kubelet.go:1845] SyncLoop (PLEG): "azure-cni-networkmonitor-pl76s_kube-system(769c65e5-820b-42f5-a72e-e31108d619eb)", event: &pleg.PodLifecycleEvent{ID:"769c65e5-820b-42f5-a72e-e31108d619eb", Type:"ContainerStarted", Data:"aa2349f64aea0369142531b62e4915ec8558a6982a5e41f915b979f626a721a9"} Jul 13 23:23:04.177759 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:04.177724 5936 kubelet.go:1845] SyncLoop (PLEG): "csi-secrets-store-provider-azure-pxr4g_kube-system(ff423d99-d968-4de0-9257-e5c8fa34127d)", event: &pleg.PodLifecycleEvent{ID:"ff423d99-d968-4de0-9257-e5c8fa34127d", Type:"ContainerStarted", Data:"33a3821b75c4f27a0a944fb38d39cf297f13d325d20ec3bd098a1aba9de7807d"} Jul 13 23:23:04.182103 k8s-master-89242181-0 kubelet[5936]: E0713 23:23:04.182077 5936 remote_runtime.go:329] ContainerStatus "89abae1e7476e6e7eee3673a9ea68514ba7017f403b4dca6dac7c74387ec4ded" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 89abae1e7476e6e7eee3673a9ea68514ba7017f403b4dca6dac7c74387ec4ded Jul 13 23:23:04.182299 k8s-master-89242181-0 kubelet[5936]: E0713 23:23:04.182129 5936 kuberuntime_manager.go:949] getPodContainerStatuses for pod "csi-secrets-store-nqq5m_kube-system(5fd00258-0804-4bcd-ada5-18c1d7a0e424)" failed: rpc error: code = Unknown desc = Error: No such container: 89abae1e7476e6e7eee3673a9ea68514ba7017f403b4dca6dac7c74387ec4ded Jul 13 23:23:05.195831 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:05.195788 5936 kubelet.go:1845] SyncLoop (PLEG): "csi-secrets-store-nqq5m_kube-system(5fd00258-0804-4bcd-ada5-18c1d7a0e424)", event: &pleg.PodLifecycleEvent{ID:"5fd00258-0804-4bcd-ada5-18c1d7a0e424", Type:"ContainerStarted", Data:"79253fd972315c494c67f117941540fb7a89695d532f3379cc1f4baddb3b57b5"} Jul 13 23:23:05.316095 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:05.196397 5936 kubelet.go:1845] SyncLoop (PLEG): "csi-secrets-store-nqq5m_kube-system(5fd00258-0804-4bcd-ada5-18c1d7a0e424)", event: &pleg.PodLifecycleEvent{ID:"5fd00258-0804-4bcd-ada5-18c1d7a0e424", Type:"ContainerStarted", Data:"89abae1e7476e6e7eee3673a9ea68514ba7017f403b4dca6dac7c74387ec4ded"} Jul 13 23:23:07.444670 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:07.444625 5936 kubelet.go:1845] SyncLoop (PLEG): "csi-secrets-store-nqq5m_kube-system(5fd00258-0804-4bcd-ada5-18c1d7a0e424)", event: &pleg.PodLifecycleEvent{ID:"5fd00258-0804-4bcd-ada5-18c1d7a0e424", Type:"ContainerStarted", Data:"0c65d81f3f84db11575b0099822b74b1ef8bbdddb06e9046fee429744893fc1b"} Jul 13 23:23:08.119244 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:08.119210 5936 
plugin_watcher.go:199] Adding socket path or updating timestamp /var/lib/kubelet/plugins_registry/secrets-store.csi.k8s.io-reg.sock to desired state cache Jul 13 23:23:08.859201 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:08.859137 5936 reconciler.go:156] operationExecutor.RegisterPlugin started for plugin at "/var/lib/kubelet/plugins_registry/secrets-store.csi.k8s.io-reg.sock" (plugin details: &{/var/lib/kubelet/plugins_registry/secrets-store.csi.k8s.io-reg.sock 2020-07-13 23:23:08.119236833 +0000 UTC m=+91.340236401 <nil> }) Jul 13 23:23:08.860069 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:08.859265 5936 operation_generator.go:181] parsed scheme: "" ... skipping 7 lines ... Jul 13 23:23:08.861561 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:08.861473 5936 clientconn.go:106] parsed scheme: "" Jul 13 23:23:08.861701 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:08.861564 5936 clientconn.go:106] scheme "" not registered, fallback to default scheme Jul 13 23:23:08.861810 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:08.861793 5936 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/lib/kubelet/plugins/csi-secrets-store/csi.sock <nil> 0 <nil>}] <nil> <nil>} Jul 13 23:23:08.861810 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:08.861812 5936 clientconn.go:948] ClientConn switching balancer to "pick_first" Jul 13 23:23:08.862011 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:08.861994 5936 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc001aa2900, {CONNECTING <nil>} Jul 13 23:23:08.862587 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:08.862562 5936 balancer_conn_wrappers.go:78] pickfirstBalancer: HandleSubConnStateChange: 0xc001aa2900, {READY <nil>} Jul 13 23:23:08.863897 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:08.863877 5936 controlbuf.go:508] transport: loopyWriter.run returning. connection error: desc = "transport is closing" Jul 13 23:23:08.881496 k8s-master-89242181-0 kubelet[5936]: E0713 23:23:08.881475 5936 nodeinfomanager.go:568] Invalid attach limit value 0 cannot be added to CSINode object for "secrets-store.csi.k8s.io" Jul 13 23:23:08.884923 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:08.884909 5936 controlbuf.go:508] transport: loopyWriter.run returning. 
connection error: desc = "transport is closing" Jul 13 23:23:12.044588 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:12.044556 5936 prober.go:117] Readiness probe for "coredns-7ff794b897-6vt24_kube-system(a244b257-53f1-4c95-b823-e5a5d0d80173):coredns" failed (failure): HTTP probe failed with statuscode: 503 Jul 13 23:23:14.763801 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:14.756042 5936 kubelet.go:1813] SyncLoop (ADD, "api"): "kube-controller-manager-k8s-master-89242181-0_kube-system(78e8d158-aa4f-4e4f-9677-ee04805bee3c)" Jul 13 23:23:16.761784 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:16.761750 5936 kubelet.go:1813] SyncLoop (ADD, "api"): "kube-apiserver-k8s-master-89242181-0_kube-system(03247c70-b63d-4ddb-8821-5fb54256d331)" Jul 13 23:23:33.697261 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:33.697212 5936 kubelet.go:1845] SyncLoop (PLEG): "metrics-server-84d5cf8ccf-l88lb_kube-system(5e256c24-b603-4fc6-8d1a-91d1f637d3bc)", event: &pleg.PodLifecycleEvent{ID:"5e256c24-b603-4fc6-8d1a-91d1f637d3bc", Type:"ContainerDied", Data:"cba06d8624219f0da7136572b9884af9efd0b90897bd88e4db296803b3c66e3c"} Jul 13 23:23:33.697868 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:33.697615 5936 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: cba06d8624219f0da7136572b9884af9efd0b90897bd88e4db296803b3c66e3c Jul 13 23:23:34.720493 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:34.720393 5936 kubelet.go:1845] SyncLoop (PLEG): "metrics-server-84d5cf8ccf-l88lb_kube-system(5e256c24-b603-4fc6-8d1a-91d1f637d3bc)", event: &pleg.PodLifecycleEvent{ID:"5e256c24-b603-4fc6-8d1a-91d1f637d3bc", Type:"ContainerStarted", Data:"bfbb7dc8c7784fd717e375f262cdd7f7873a8d9abab7ddd4ca7f51dcae30ce28"} Jul 13 23:23:37.580196 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:37.580151 5936 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-apiserver-k8s-master-89242181-0" status=Running ... skipping 11 lines ... 
Jul 13 23:23:49.711434 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:49.711360 5936 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-custom" (UniqueName: "kubernetes.io/configmap/2008aece-e54d-4645-9b8a-53a53ac0c834-config-custom") pod "coredns-779d547755-l5fn7" (UID: "2008aece-e54d-4645-9b8a-53a53ac0c834") Jul 13 23:23:49.811815 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:49.811788 5936 reconciler.go:269] operationExecutor.MountVolume started for volume "coredns-token-cqwqr" (UniqueName: "kubernetes.io/secret/2008aece-e54d-4645-9b8a-53a53ac0c834-coredns-token-cqwqr") pod "coredns-779d547755-l5fn7" (UID: "2008aece-e54d-4645-9b8a-53a53ac0c834") Jul 13 23:23:49.868489 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:49.811839 5936 reconciler.go:269] operationExecutor.MountVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2008aece-e54d-4645-9b8a-53a53ac0c834-config-volume") pod "coredns-779d547755-l5fn7" (UID: "2008aece-e54d-4645-9b8a-53a53ac0c834") Jul 13 23:23:49.868489 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:49.811864 5936 reconciler.go:269] operationExecutor.MountVolume started for volume "config-custom" (UniqueName: "kubernetes.io/configmap/2008aece-e54d-4645-9b8a-53a53ac0c834-config-custom") pod "coredns-779d547755-l5fn7" (UID: "2008aece-e54d-4645-9b8a-53a53ac0c834") Jul 13 23:23:49.868489 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:49.867429 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "config-custom" (UniqueName: "kubernetes.io/configmap/2008aece-e54d-4645-9b8a-53a53ac0c834-config-custom") pod "coredns-779d547755-l5fn7" (UID: "2008aece-e54d-4645-9b8a-53a53ac0c834") Jul 13 23:23:49.872295 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:49.872168 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2008aece-e54d-4645-9b8a-53a53ac0c834-config-volume") pod "coredns-779d547755-l5fn7" (UID: "2008aece-e54d-4645-9b8a-53a53ac0c834") Jul 13 23:23:49.896446 k8s-master-89242181-0 kubelet[5936]: W0713 23:23:49.895811 5936 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-r538ed7a9a9724f058af5231f228eaf5e.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-r538ed7a9a9724f058af5231f228eaf5e.scope: no such file or directory Jul 13 23:23:49.896446 k8s-master-89242181-0 kubelet[5936]: W0713 23:23:49.895868 5936 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-r538ed7a9a9724f058af5231f228eaf5e.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-r538ed7a9a9724f058af5231f228eaf5e.scope: no such file or directory Jul 13 23:23:49.896446 k8s-master-89242181-0 kubelet[5936]: W0713 23:23:49.895897 5936 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-r538ed7a9a9724f058af5231f228eaf5e.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-r538ed7a9a9724f058af5231f228eaf5e.scope: no such file or directory Jul 13 23:23:49.896446 k8s-master-89242181-0 kubelet[5936]: W0713 23:23:49.896018 5936 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-r538ed7a9a9724f058af5231f228eaf5e.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-r538ed7a9a9724f058af5231f228eaf5e.scope: no such file or 
directory Jul 13 23:23:49.896446 k8s-master-89242181-0 kubelet[5936]: W0713 23:23:49.896049 5936 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-r538ed7a9a9724f058af5231f228eaf5e.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-r538ed7a9a9724f058af5231f228eaf5e.scope: no such file or directory Jul 13 23:23:49.896446 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:49.896388 5936 operation_generator.go:663] MountVolume.SetUp succeeded for volume "coredns-token-cqwqr" (UniqueName: "kubernetes.io/secret/2008aece-e54d-4645-9b8a-53a53ac0c834-coredns-token-cqwqr") pod "coredns-779d547755-l5fn7" (UID: "2008aece-e54d-4645-9b8a-53a53ac0c834") Jul 13 23:23:49.966213 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:49.966188 5936 kuberuntime_manager.go:419] No sandbox for pod "coredns-779d547755-l5fn7_kube-system(2008aece-e54d-4645-9b8a-53a53ac0c834)" can be found. Need to start a new one Jul 13 23:23:51.024472 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:51.024432 5936 kubelet.go:1845] SyncLoop (PLEG): "coredns-779d547755-l5fn7_kube-system(2008aece-e54d-4645-9b8a-53a53ac0c834)", event: &pleg.PodLifecycleEvent{ID:"2008aece-e54d-4645-9b8a-53a53ac0c834", Type:"ContainerDied", Data:"0086a0e653eb028ebf4f43eb84d72f14343682b442a2d4bb1fdf7c89d2e8e74a"} Jul 13 23:23:51.024994 k8s-master-89242181-0 kubelet[5936]: W0713 23:23:51.024508 5936 pod_container_deletor.go:79] Container "0086a0e653eb028ebf4f43eb84d72f14343682b442a2d4bb1fdf7c89d2e8e74a" not found in pod's containers Jul 13 23:23:51.031332 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:51.031303 5936 kubelet.go:1845] SyncLoop (PLEG): "coredns-7ff794b897-6vt24_kube-system(a244b257-53f1-4c95-b823-e5a5d0d80173)", event: &pleg.PodLifecycleEvent{ID:"a244b257-53f1-4c95-b823-e5a5d0d80173", Type:"ContainerDied", Data:"c43e782436db006ded56c581a2fde17c1a32029418425eb94a904166cfc114f2"} Jul 13 23:23:51.031489 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:51.031389 5936 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: c43e782436db006ded56c581a2fde17c1a32029418425eb94a904166cfc114f2 Jul 13 23:23:51.117437 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:51.117414 5936 reconciler.go:196] operationExecutor.UnmountVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a244b257-53f1-4c95-b823-e5a5d0d80173-config-volume") pod "a244b257-53f1-4c95-b823-e5a5d0d80173" (UID: "a244b257-53f1-4c95-b823-e5a5d0d80173") Jul 13 23:23:51.117557 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:51.117455 5936 reconciler.go:196] operationExecutor.UnmountVolume started for volume "config-custom" (UniqueName: "kubernetes.io/configmap/a244b257-53f1-4c95-b823-e5a5d0d80173-config-custom") pod "a244b257-53f1-4c95-b823-e5a5d0d80173" (UID: "a244b257-53f1-4c95-b823-e5a5d0d80173") Jul 13 23:23:51.117557 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:51.117502 5936 reconciler.go:196] operationExecutor.UnmountVolume started for volume "coredns-token-cqwqr" (UniqueName: "kubernetes.io/secret/a244b257-53f1-4c95-b823-e5a5d0d80173-coredns-token-cqwqr") pod "a244b257-53f1-4c95-b823-e5a5d0d80173" (UID: "a244b257-53f1-4c95-b823-e5a5d0d80173") Jul 13 23:23:51.117844 k8s-master-89242181-0 kubelet[5936]: W0713 23:23:51.117826 5936 empty_dir.go:453] Warning: Failed to clear quota on /var/lib/kubelet/pods/a244b257-53f1-4c95-b823-e5a5d0d80173/volumes/kubernetes.io~configmap/config-volume: ClearQuota called, but quotas disabled Jul 13 23:23:51.118300 
k8s-master-89242181-0 kubelet[5936]: I0713 23:23:51.118272 5936 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a244b257-53f1-4c95-b823-e5a5d0d80173-config-volume" (OuterVolumeSpecName: "config-volume") pod "a244b257-53f1-4c95-b823-e5a5d0d80173" (UID: "a244b257-53f1-4c95-b823-e5a5d0d80173"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 13 23:23:51.118555 k8s-master-89242181-0 kubelet[5936]: W0713 23:23:51.118539 5936 empty_dir.go:453] Warning: Failed to clear quota on /var/lib/kubelet/pods/a244b257-53f1-4c95-b823-e5a5d0d80173/volumes/kubernetes.io~configmap/config-custom: ClearQuota called, but quotas disabled Jul 13 23:23:51.118923 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:51.118897 5936 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a244b257-53f1-4c95-b823-e5a5d0d80173-config-custom" (OuterVolumeSpecName: "config-custom") pod "a244b257-53f1-4c95-b823-e5a5d0d80173" (UID: "a244b257-53f1-4c95-b823-e5a5d0d80173"). InnerVolumeSpecName "config-custom". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 13 23:23:51.119971 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:51.119905 5936 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a244b257-53f1-4c95-b823-e5a5d0d80173-coredns-token-cqwqr" (OuterVolumeSpecName: "coredns-token-cqwqr") pod "a244b257-53f1-4c95-b823-e5a5d0d80173" (UID: "a244b257-53f1-4c95-b823-e5a5d0d80173"). InnerVolumeSpecName "coredns-token-cqwqr". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 13 23:23:51.268888 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:51.217934 5936 reconciler.go:319] Volume detached for volume "coredns-token-cqwqr" (UniqueName: "kubernetes.io/secret/a244b257-53f1-4c95-b823-e5a5d0d80173-coredns-token-cqwqr") on node "k8s-master-89242181-0" DevicePath "" Jul 13 23:23:51.268888 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:51.217999 5936 reconciler.go:319] Volume detached for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a244b257-53f1-4c95-b823-e5a5d0d80173-config-volume") on node "k8s-master-89242181-0" DevicePath "" Jul 13 23:23:51.268888 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:51.218037 5936 reconciler.go:319] Volume detached for volume "config-custom" (UniqueName: "kubernetes.io/configmap/a244b257-53f1-4c95-b823-e5a5d0d80173-config-custom") on node "k8s-master-89242181-0" DevicePath "" Jul 13 23:23:52.040836 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:52.040739 5936 kubelet.go:1845] SyncLoop (PLEG): "coredns-7ff794b897-6vt24_kube-system(a244b257-53f1-4c95-b823-e5a5d0d80173)", event: &pleg.PodLifecycleEvent{ID:"a244b257-53f1-4c95-b823-e5a5d0d80173", Type:"ContainerDied", Data:"997cd75b4ed3fc45376b0b5f3f0caf21b2cef0cb3b2127390316d1946f342573"} Jul 13 23:23:52.052054 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:52.052021 5936 kubelet.go:1829] SyncLoop (DELETE, "api"): "coredns-7ff794b897-6vt24_kube-system(a244b257-53f1-4c95-b823-e5a5d0d80173)" Jul 13 23:23:52.055929 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:52.055899 5936 kubelet.go:1845] SyncLoop (PLEG): "coredns-779d547755-l5fn7_kube-system(2008aece-e54d-4645-9b8a-53a53ac0c834)", event: &pleg.PodLifecycleEvent{ID:"2008aece-e54d-4645-9b8a-53a53ac0c834", Type:"ContainerStarted", Data:"0086a0e653eb028ebf4f43eb84d72f14343682b442a2d4bb1fdf7c89d2e8e74a"} Jul 13 23:23:52.056052 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:52.055941 5936 
kubelet.go:1845] SyncLoop (PLEG): "coredns-779d547755-l5fn7_kube-system(2008aece-e54d-4645-9b8a-53a53ac0c834)", event: &pleg.PodLifecycleEvent{ID:"2008aece-e54d-4645-9b8a-53a53ac0c834", Type:"ContainerStarted", Data:"34ecdfc95e3633c86255a474be43ee5360014cf1ca46d2457917dc28ca0e3932"} Jul 13 23:23:52.060462 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:52.059927 5936 kubelet.go:1823] SyncLoop (REMOVE, "api"): "coredns-7ff794b897-6vt24_kube-system(a244b257-53f1-4c95-b823-e5a5d0d80173)" Jul 13 23:23:52.060462 k8s-master-89242181-0 kubelet[5936]: I0713 23:23:52.059971 5936 kubelet.go:2011] Failed to delete pod "coredns-7ff794b897-6vt24_kube-system(a244b257-53f1-4c95-b823-e5a5d0d80173)", err: pod not found Jul 13 23:24:37.580568 k8s-master-89242181-0 kubelet[5936]: I0713 23:24:37.580499 5936 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-apiserver-k8s-master-89242181-0" status=Running Jul 13 23:24:37.580568 k8s-master-89242181-0 kubelet[5936]: I0713 23:24:37.580567 5936 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-scheduler-k8s-master-89242181-0" status=Running Jul 13 23:24:37.580568 k8s-master-89242181-0 kubelet[5936]: I0713 23:24:37.580607 5936 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-addon-manager-k8s-master-89242181-0" status=Running Jul 13 23:24:37.581318 k8s-master-89242181-0 kubelet[5936]: I0713 23:24:37.580621 5936 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-controller-manager-k8s-master-89242181-0" status=Running Jul 13 23:25:37.580932 k8s-master-89242181-0 kubelet[5936]: I0713 23:25:37.580899 5936 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-addon-manager-k8s-master-89242181-0" status=Running Jul 13 23:25:37.580932 k8s-master-89242181-0 kubelet[5936]: I0713 23:25:37.580940 5936 kubelet_getters.go:176] "Pod status updated" pod="kube-system/kube-controller-manager-k8s-master-89242181-0" status=Running ... skipping 64 lines ... Jul 13 23:20:08.122513 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:08.122354900Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 13 23:20:08.122513 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:08.122376100Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc Jul 13 23:20:08.122513 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:08.122389800Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 13 23:20:10.557648 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:10.557595200Z" level=info msg="starting containerd" revision=7ad184331fa3e55e52b890ea95e65ba581ae3429 version=v1.2.13 Jul 13 23:20:10.558110 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:10.558089900Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1 Jul 13 23:20:10.558325 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:10.558301700Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." 
type=io.containerd.snapshotter.v1 Jul 13 23:20:10.558715 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:10.558686300Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" Jul 13 23:20:10.558838 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:10.558821900Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1 Jul 13 23:20:10.597539 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:10.597512600Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1 Jul 13 23:20:10.597861 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:10.597842700Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1 Jul 13 23:20:10.604271 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:10.604249700Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1 Jul 13 23:20:10.628908 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:10.628875300Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1 Jul 13 23:20:10.628908 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:10.628906600Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1 Jul 13 23:20:10.629044 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:10.628939700Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" Jul 13 23:20:10.629044 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:10.628968000Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" Jul 13 23:20:10.684073 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:10.683477000Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1 Jul 13 23:20:10.684073 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:10.683538600Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1 Jul 13 23:20:10.684073 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:10.683596100Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1 Jul 13 23:20:10.684073 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:10.683614200Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1 Jul 13 23:20:10.684073 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:10.683631400Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1 Jul 13 23:20:10.684073 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:10.683650500Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1 ... skipping 41 lines ... 
Jul 13 23:20:21.629957 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:21.629704042Z" level=info msg="Docker daemon" commit=77e06fda0c graphdriver(s)=overlay2 version=3.0.13+azure Jul 13 23:20:21.669052 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:21.669019142Z" level=info msg="Daemon has completed initialization" Jul 13 23:20:21.840224 k8s-master-89242181-0 systemd[1]: Started Docker Application Container Engine. Jul 13 23:20:21.848027 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:20:21.838260942Z" level=info msg="API listen on /var/run/docker.sock" Jul 13 23:21:16.525615 k8s-master-89242181-0 systemd[1]: Stopping Docker Application Container Engine... Jul 13 23:21:16.526468 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:21:16.525765633Z" level=info msg="Processing signal 'terminated'" Jul 13 23:21:16.536549 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:21:16.536507533Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby Jul 13 23:21:16.537004 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:21:16.536832233Z" level=info msg="Daemon shutdown complete" Jul 13 23:21:16.537004 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:21:16.536838633Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd Jul 13 23:21:16.537004 k8s-master-89242181-0 dockerd[1484]: time="2020-07-13T23:21:16.536974133Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby Jul 13 23:21:17.540879 k8s-master-89242181-0 systemd[1]: Stopped Docker Application Container Engine. Jul 13 23:21:17.547335 k8s-master-89242181-0 systemd[1]: Starting Docker Application Container Engine... Jul 13 23:21:18.078619 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.078131533Z" level=info msg="Starting up" Jul 13 23:21:18.083256 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.083213533Z" level=info msg="libcontainerd: started new containerd process" pid=5342 Jul 13 23:21:18.083376 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.083275733Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 13 23:21:18.083376 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.083287933Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 13 23:21:18.083376 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.083314433Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc Jul 13 23:21:18.083376 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.083330433Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 13 23:21:18.117158 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.117103633Z" level=info msg="starting containerd" revision=7ad184331fa3e55e52b890ea95e65ba581ae3429 version=v1.2.13 Jul 13 23:21:18.117514 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.117496033Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1 Jul 13 23:21:18.117600 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.117531833Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." 
type=io.containerd.snapshotter.v1 Jul 13 23:21:18.117828 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.117793933Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" Jul 13 23:21:18.117828 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.117819933Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1 Jul 13 23:21:18.119079 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.119056533Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1 Jul 13 23:21:18.119179 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.119092133Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1 Jul 13 23:21:18.119238 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.119226833Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1 Jul 13 23:21:18.119487 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.119468533Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1 Jul 13 23:21:18.119487 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.119486533Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1 Jul 13 23:21:18.119605 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.119507133Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" Jul 13 23:21:18.119605 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.119552533Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" Jul 13 23:21:18.119754 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.119721433Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1 Jul 13 23:21:18.119807 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.119761633Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1 Jul 13 23:21:18.119857 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.119822433Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1 Jul 13 23:21:18.119901 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.119858433Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1 Jul 13 23:21:18.119901 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.119874833Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1 Jul 13 23:21:18.119985 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:21:18.119910533Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1 ... skipping 69 lines ... 
Jul 13 23:22:50.153899 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:22:50.153848433Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fbb7cafaf7a7512427f90755e063eed303b4458ca27e196904cac230fba2fae8/shim.sock" debug=false pid=9950 Jul 13 23:22:50.643658 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:22:50.643616133Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cba06d8624219f0da7136572b9884af9efd0b90897bd88e4db296803b3c66e3c/shim.sock" debug=false pid=10006 Jul 13 23:22:50.990343 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:22:50.990242433Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2765459dd10b346589893d836b72c5ca93369fe27c216b3be78c4ea6e06cfec6/shim.sock" debug=false pid=10065 Jul 13 23:22:53.721075 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:22:53.721016333Z" level=warning msg="Published ports are discarded when using host network mode" Jul 13 23:23:00.186563 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:23:00.186525133Z" level=warning msg="Published ports are discarded when using host network mode" Jul 13 23:23:00.685503 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:23:00.685469533Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/67087da7b54879a406226678fd3066d2b6045678e824a97d107d6685b013708c/shim.sock" debug=false pid=10414 Jul 13 23:23:00.870499 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:23:00.870225933Z" level=info msg="Container 23fd3e4ed45dd5e9ac349f60c883286638bf89bd5ccb9cd117da5bc2e864a5a7 failed to exit within 30 seconds of signal 15 - using the force" Jul 13 23:23:00.991568 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:23:00.991465933Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap." Jul 13 23:23:01.003766 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:23:01.003724533Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/92d4a79a096faea4f1b77ea7d5452fb6bd497edf2a876014695ba315d704aee8/shim.sock" debug=false pid=10482 Jul 13 23:23:01.054746 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:23:01.054667933Z" level=info msg="shim reaped" id=23fd3e4ed45dd5e9ac349f60c883286638bf89bd5ccb9cd117da5bc2e864a5a7 Jul 13 23:23:01.076288 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:23:01.076244733Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/da69629d22f30b48b7efb13b8d3323b9be7a282c865f961982f48e44be258c61/shim.sock" debug=false pid=10505 Jul 13 23:23:01.098161 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:23:01.095663633Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jul 13 23:23:01.127342 k8s-master-89242181-0 dockerd[5311]: time="2020-07-13T23:23:01.127306133Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/55b20d07e4985997e0a979fd730743d7a6db37c88ed95de7a474000fb194543a/shim.sock" debug=false pid=10530 ... skipping 140 lines ... 
ssh_exchange_identification: Connection closed by remote host
2020/07/13 23:55:34 process.go:155: Step 'bash -c /root/tmp641327099/win-ci-logs-collector.sh kubetest-047c5f1b-c559-11ea-8304-024250a65fb8.southcentralus.cloudapp.azure.com /root/tmp641327099 /etc/ssh-key-secret/ssh-private' finished in 21m50.19387882s
2020/07/13 23:55:34 aksengine.go:1155: Deleting resource group: kubetest-047c5f1b-c559-11ea-8304-024250a65fb8.
2020/07/14 00:07:10 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2020/07/14 00:07:10 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
2020/07/14 00:07:28 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 18.302466775s
2020/07/14 00:07:28 main.go:312: Something went wrong: encountered 1 errors: [error during ./hack/ginkgo-e2e.sh --node-os-distro=windows --ginkgo.focus=\[Conformance\]|\[NodeConformance\]|\[sig-windows\]|\[sig-apps\].CronJob|\[sig-api-machinery\].ResourceQuota|\[sig-scheduling\].SchedulerPreemption --ginkgo.skip=\[LinuxOnly\]|\[Serial\]|Guestbook.application.should.create.and.stop.a.working.application --report-dir=/logs/artifacts --disable-log-dump=true: exit status 1]
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
4563bcbd7763
... skipping 4 lines ...