Result: FAILURE
Tests: 1 failed / 12 succeeded
Started: 2019-11-06 22:44
Elapsed: 1h40m
Revision:
Builder: ed6d7467-00e6-11ea-918b-16773c0c7aed
infra-commit: e6c36a09c
job-version: v1.18.0-alpha.0.351+9d708b02031b5c-dirty
repo: k8s.io/kubernetes
repo-commit: 9d708b02031b5c5698359e99556f96fb5a786936
repos: {u'k8s.io/kubernetes': u'master', u'sigs.k8s.io/azuredisk-csi-driver': u'master:b7c03af8eae7388e84115ace30cf2ff3285b0dc9,180:8870695e879adc3090624658b847398415b748f0'}
revision: v1.18.0-alpha.0.351+9d708b02031b5c-dirty

Test Failures


Test (1h3m)

error during make e2e-test: exit status 2
(from junit_runner.xml)


Error lines from build-log.txt

... skipping 435 lines ...
I1106 23:13:41.225] STEP: Waiting for a default service account to be provisioned in namespace
I1106 23:13:41.371] STEP: setting up the StorageClass
I1106 23:13:41.372] STEP: creating a StorageClass 
I1106 23:13:41.449] STEP: setting up the PVC and PV
I1106 23:13:41.450] STEP: creating a PVC
I1106 23:13:41.536] STEP: deploying the pod
I1106 23:13:41.616] STEP: checking that the pods command exits with no error
I1106 23:13:41.616] Nov  6 23:13:41.616: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-t2ctt" in namespace "azuredisk-7356" to be "success or failure"
I1106 23:13:41.689] Nov  6 23:13:41.689: INFO: Pod "azuredisk-volume-tester-t2ctt": Phase="Pending", Reason="", readiness=false. Elapsed: 73.391275ms
I1106 23:13:43.765] Nov  6 23:13:43.765: INFO: Pod "azuredisk-volume-tester-t2ctt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149042628s
I1106 23:13:45.840] Nov  6 23:13:45.840: INFO: Pod "azuredisk-volume-tester-t2ctt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.224023175s
I1106 23:13:47.915] Nov  6 23:13:47.914: INFO: Pod "azuredisk-volume-tester-t2ctt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.298549418s
I1106 23:13:50.012] Nov  6 23:13:50.012: INFO: Pod "azuredisk-volume-tester-t2ctt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.395767237s
... skipping 421 lines ...
I1106 23:28:29.993] Nov  6 23:28:29.993: INFO: Pod "azuredisk-volume-tester-t2ctt": Phase="Pending", Reason="", readiness=false. Elapsed: 14m48.377203533s
I1106 23:28:32.073] Nov  6 23:28:32.073: INFO: Pod "azuredisk-volume-tester-t2ctt": Phase="Pending", Reason="", readiness=false. Elapsed: 14m50.457215496s
I1106 23:28:34.154] Nov  6 23:28:34.153: INFO: Pod "azuredisk-volume-tester-t2ctt": Phase="Pending", Reason="", readiness=false. Elapsed: 14m52.537551462s
I1106 23:28:36.234] Nov  6 23:28:36.233: INFO: Pod "azuredisk-volume-tester-t2ctt": Phase="Pending", Reason="", readiness=false. Elapsed: 14m54.617454024s
I1106 23:28:38.313] Nov  6 23:28:38.313: INFO: Pod "azuredisk-volume-tester-t2ctt": Phase="Pending", Reason="", readiness=false. Elapsed: 14m56.696866583s
I1106 23:28:40.393] Nov  6 23:28:40.393: INFO: Pod "azuredisk-volume-tester-t2ctt": Phase="Pending", Reason="", readiness=false. Elapsed: 14m58.776972246s
I1106 23:28:42.394] Nov  6 23:28:42.393: INFO: Unexpected error occurred: Gave up after waiting 15m0s for pod "azuredisk-volume-tester-t2ctt" to be "success or failure"
I1106 23:28:42.394] Nov  6 23:28:42.394: INFO: deleting Pod "azuredisk-7356"/"azuredisk-volume-tester-t2ctt"
I1106 23:28:42.476] Nov  6 23:28:42.476: INFO: Pod azuredisk-volume-tester-t2ctt has the following logs: 
I1106 23:28:42.476] STEP: Deleting pod azuredisk-volume-tester-t2ctt in namespace azuredisk-7356
I1106 23:28:42.659] STEP: validating provisioned PV
I1106 23:28:42.659] Nov  6 23:28:42.659: INFO: Unexpected error occurred: resource name may not be empty
I1106 23:28:42.659] Nov  6 23:28:42.659: INFO: deleting StorageClass azuredisk-7356-disk.csi.azure.com-dynamic-sc-dznt7
I1106 23:28:42.748] STEP: Collecting events from namespace "azuredisk-7356".
I1106 23:28:42.827] STEP: Found 7 events.
I1106 23:28:42.827] Nov  6 23:28:42.827: INFO: At 2019-11-06 23:13:41 +0000 UTC - event for pvc-xxtmw: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
I1106 23:28:42.828] Nov  6 23:28:42.827: INFO: At 2019-11-06 23:13:41 +0000 UTC - event for pvc-xxtmw: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "disk.csi.azure.com" or manually created by system administrator
I1106 23:28:42.828] Nov  6 23:28:42.827: INFO: At 2019-11-06 23:13:41 +0000 UTC - event for pvc-xxtmw: {disk.csi.azure.com_csi-azuredisk-controller-c7cd577cc-5q479_0f8f91a4-00eb-11ea-84b4-aee61c96403d } Provisioning: External provisioner is provisioning volume for claim "azuredisk-7356/pvc-xxtmw"
I1106 23:28:42.828] Nov  6 23:28:42.827: INFO: At 2019-11-06 23:13:51 +0000 UTC - event for pvc-xxtmw: {disk.csi.azure.com_csi-azuredisk-controller-c7cd577cc-5q479_0f8f91a4-00eb-11ea-84b4-aee61c96403d } ProvisioningFailed: failed to provision volume with StorageClass "azuredisk-7356-disk.csi.azure.com-dynamic-sc-dznt7": rpc error: code = DeadlineExceeded desc = context deadline exceeded
I1106 23:28:42.828] Nov  6 23:28:42.827: INFO: At 2019-11-06 23:23:41 +0000 UTC - event for azuredisk-volume-tester-t2ctt: {default-scheduler } FailedScheduling: timed out waiting for the condition
I1106 23:28:42.829] Nov  6 23:28:42.827: INFO: At 2019-11-06 23:28:42 +0000 UTC - event for azuredisk-volume-tester-t2ctt: {default-scheduler } FailedScheduling: skip schedule deleting pod: azuredisk-7356/azuredisk-volume-tester-t2ctt
I1106 23:28:42.829] Nov  6 23:28:42.827: INFO: At 2019-11-06 23:28:42 +0000 UTC - event for azuredisk-volume-tester-t2ctt: {default-scheduler } FailedScheduling: pod "azuredisk-7356/azuredisk-volume-tester-t2ctt" does not exist any more
I1106 23:28:42.906] Nov  6 23:28:42.906: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I1106 23:28:42.906] Nov  6 23:28:42.906: INFO: 
I1106 23:28:43.140] Nov  6 23:28:43.140: INFO: 
... skipping 178 lines ...
I1106 23:29:04.200] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:33
I1106 23:29:04.200]   [multi-az]
I1106 23:29:04.200]   /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:40
I1106 23:29:04.200]     should create a volume on demand [It]
I1106 23:29:04.200]     /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:78
I1106 23:29:04.200] 
I1106 23:29:04.200]     Unexpected error:
I1106 23:29:04.200]         <*errors.errorString | 0xc0007d2b10>: {
I1106 23:29:04.201]             s: "Gave up after waiting 15m0s for pod \"azuredisk-volume-tester-t2ctt\" to be \"success or failure\"",
I1106 23:29:04.201]         }
I1106 23:29:04.201]         Gave up after waiting 15m0s for pod "azuredisk-volume-tester-t2ctt" to be "success or failure"
I1106 23:29:04.201]     occurred
I1106 23:29:04.201] 
... skipping 9 lines ...
I1106 23:29:04.701] STEP: Waiting for a default service account to be provisioned in namespace
I1106 23:29:04.777] STEP: setting up the StorageClass
I1106 23:29:04.777] STEP: creating a StorageClass 
I1106 23:29:04.865] STEP: setting up the PVC and PV
I1106 23:29:04.865] STEP: creating a PVC
I1106 23:29:04.952] STEP: deploying the pod
I1106 23:29:05.046] STEP: checking that the pods command exits with no error
I1106 23:29:05.046] Nov  6 23:29:05.046: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-t4j75" in namespace "azuredisk-1529" to be "success or failure"
I1106 23:29:05.125] Nov  6 23:29:05.125: INFO: Pod "azuredisk-volume-tester-t4j75": Phase="Pending", Reason="", readiness=false. Elapsed: 78.92311ms
I1106 23:29:07.204] Nov  6 23:29:07.204: INFO: Pod "azuredisk-volume-tester-t4j75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158438066s
I1106 23:29:09.284] Nov  6 23:29:09.284: INFO: Pod "azuredisk-volume-tester-t4j75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.238114824s
I1106 23:29:11.363] Nov  6 23:29:11.363: INFO: Pod "azuredisk-volume-tester-t4j75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.317514479s
I1106 23:29:13.443] Nov  6 23:29:13.443: INFO: Pod "azuredisk-volume-tester-t4j75": Phase="Pending", Reason="", readiness=false. Elapsed: 8.396961634s
... skipping 422 lines ...
I1106 23:43:53.787] Nov  6 23:43:53.787: INFO: Pod "azuredisk-volume-tester-t4j75": Phase="Pending", Reason="", readiness=false. Elapsed: 14m48.74131416s
I1106 23:43:55.867] Nov  6 23:43:55.867: INFO: Pod "azuredisk-volume-tester-t4j75": Phase="Pending", Reason="", readiness=false. Elapsed: 14m50.821078021s
I1106 23:43:57.947] Nov  6 23:43:57.947: INFO: Pod "azuredisk-volume-tester-t4j75": Phase="Pending", Reason="", readiness=false. Elapsed: 14m52.901145887s
I1106 23:44:00.028] Nov  6 23:44:00.027: INFO: Pod "azuredisk-volume-tester-t4j75": Phase="Pending", Reason="", readiness=false. Elapsed: 14m54.98186396s
I1106 23:44:02.108] Nov  6 23:44:02.108: INFO: Pod "azuredisk-volume-tester-t4j75": Phase="Pending", Reason="", readiness=false. Elapsed: 14m57.062122632s
I1106 23:44:04.188] Nov  6 23:44:04.187: INFO: Pod "azuredisk-volume-tester-t4j75": Phase="Pending", Reason="", readiness=false. Elapsed: 14m59.141879602s
I1106 23:44:06.188] Nov  6 23:44:06.188: INFO: Unexpected error occurred: Gave up after waiting 15m0s for pod "azuredisk-volume-tester-t4j75" to be "success or failure"
I1106 23:44:06.188] Nov  6 23:44:06.188: INFO: deleting Pod "azuredisk-1529"/"azuredisk-volume-tester-t4j75"
I1106 23:44:06.268] Nov  6 23:44:06.268: INFO: Pod azuredisk-volume-tester-t4j75 has the following logs: 
I1106 23:44:06.268] STEP: Deleting pod azuredisk-volume-tester-t4j75 in namespace azuredisk-1529
I1106 23:44:06.446] STEP: validating provisioned PV
I1106 23:44:06.446] Nov  6 23:44:06.446: INFO: Unexpected error occurred: resource name may not be empty
I1106 23:44:06.446] Nov  6 23:44:06.446: INFO: deleting StorageClass azuredisk-1529-disk.csi.azure.com-dynamic-sc-dwn6f
I1106 23:44:06.540] STEP: Collecting events from namespace "azuredisk-1529".
I1106 23:44:06.620] STEP: Found 6 events.
I1106 23:44:06.620] Nov  6 23:44:06.620: INFO: At 2019-11-06 23:29:04 +0000 UTC - event for pvc-9mz88: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
I1106 23:44:06.620] Nov  6 23:44:06.620: INFO: At 2019-11-06 23:29:05 +0000 UTC - event for pvc-9mz88: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "disk.csi.azure.com" or manually created by system administrator
I1106 23:44:06.621] Nov  6 23:44:06.620: INFO: At 2019-11-06 23:29:05 +0000 UTC - event for pvc-9mz88: {disk.csi.azure.com_csi-azuredisk-controller-c7cd577cc-5q479_0f8f91a4-00eb-11ea-84b4-aee61c96403d } Provisioning: External provisioner is provisioning volume for claim "azuredisk-1529/pvc-9mz88"
I1106 23:44:06.621] Nov  6 23:44:06.620: INFO: At 2019-11-06 23:29:15 +0000 UTC - event for pvc-9mz88: {disk.csi.azure.com_csi-azuredisk-controller-c7cd577cc-5q479_0f8f91a4-00eb-11ea-84b4-aee61c96403d } ProvisioningFailed: failed to provision volume with StorageClass "azuredisk-1529-disk.csi.azure.com-dynamic-sc-dwn6f": rpc error: code = DeadlineExceeded desc = context deadline exceeded
I1106 23:44:06.621] Nov  6 23:44:06.620: INFO: At 2019-11-06 23:39:05 +0000 UTC - event for azuredisk-volume-tester-t4j75: {default-scheduler } FailedScheduling: timed out waiting for the condition
I1106 23:44:06.621] Nov  6 23:44:06.620: INFO: At 2019-11-06 23:44:06 +0000 UTC - event for azuredisk-volume-tester-t4j75: {default-scheduler } FailedScheduling: skip schedule deleting pod: azuredisk-1529/azuredisk-volume-tester-t4j75
I1106 23:44:06.699] Nov  6 23:44:06.698: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I1106 23:44:06.699] Nov  6 23:44:06.698: INFO: 
I1106 23:44:06.938] Nov  6 23:44:06.938: INFO: 
I1106 23:44:06.938] Logging node info for node k8s-agentpool-36331299-vmss000000
... skipping 177 lines ...
I1106 23:44:27.994] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:33
I1106 23:44:27.994]   [multi-az]
I1106 23:44:27.994]   /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:40
I1106 23:44:27.994]     should create a raw block volume on demand [It]
I1106 23:44:27.994]     /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:101
I1106 23:44:27.994] 
I1106 23:44:27.994]     Unexpected error:
I1106 23:44:27.995]         <*errors.errorString | 0xc00080f300>: {
I1106 23:44:27.995]             s: "Gave up after waiting 15m0s for pod \"azuredisk-volume-tester-t4j75\" to be \"success or failure\"",
I1106 23:44:27.995]         }
I1106 23:44:27.995]         Gave up after waiting 15m0s for pod "azuredisk-volume-tester-t4j75" to be "success or failure"
I1106 23:44:27.995]     occurred
I1106 23:44:27.995] 
... skipping 9 lines ...
I1106 23:44:28.498] STEP: Waiting for a default service account to be provisioned in namespace
I1106 23:44:28.576] STEP: setting up the StorageClass
I1106 23:44:28.577] STEP: creating a StorageClass 
I1106 23:44:28.659] STEP: setting up the PVC and PV
I1106 23:44:28.660] STEP: creating a PVC
I1106 23:44:28.746] STEP: deploying the pod
I1106 23:44:28.834] STEP: checking that the pods command exits with an error
I1106 23:44:28.834] Nov  6 23:44:28.834: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-4j8ft" in namespace "azuredisk-7941" to be "Error status code"
I1106 23:44:28.913] Nov  6 23:44:28.913: INFO: Pod "azuredisk-volume-tester-4j8ft": Phase="Pending", Reason="", readiness=false. Elapsed: 78.654582ms
I1106 23:44:30.993] Nov  6 23:44:30.993: INFO: Pod "azuredisk-volume-tester-4j8ft": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158771281s
I1106 23:44:33.073] Nov  6 23:44:33.073: INFO: Pod "azuredisk-volume-tester-4j8ft": Phase="Pending", Reason="", readiness=false. Elapsed: 4.238680981s
I1106 23:44:35.153] Nov  6 23:44:35.153: INFO: Pod "azuredisk-volume-tester-4j8ft": Phase="Pending", Reason="", readiness=false. Elapsed: 6.318432682s
I1106 23:44:37.233] Nov  6 23:44:37.232: INFO: Pod "azuredisk-volume-tester-4j8ft": Phase="Pending", Reason="", readiness=false. Elapsed: 8.398334786s
I1106 23:44:39.313] Nov  6 23:44:39.313: INFO: Pod "azuredisk-volume-tester-4j8ft": Phase="Pending", Reason="", readiness=false. Elapsed: 10.478997697s
... skipping 421 lines ...
I1106 23:59:17.400] Nov  6 23:59:17.400: INFO: Pod "azuredisk-volume-tester-4j8ft": Phase="Pending", Reason="", readiness=false. Elapsed: 14m48.56611322s
I1106 23:59:19.480] Nov  6 23:59:19.480: INFO: Pod "azuredisk-volume-tester-4j8ft": Phase="Pending", Reason="", readiness=false. Elapsed: 14m50.645582822s
I1106 23:59:21.559] Nov  6 23:59:21.559: INFO: Pod "azuredisk-volume-tester-4j8ft": Phase="Pending", Reason="", readiness=false. Elapsed: 14m52.724643822s
I1106 23:59:23.639] Nov  6 23:59:23.639: INFO: Pod "azuredisk-volume-tester-4j8ft": Phase="Pending", Reason="", readiness=false. Elapsed: 14m54.804866831s
I1106 23:59:25.719] Nov  6 23:59:25.719: INFO: Pod "azuredisk-volume-tester-4j8ft": Phase="Pending", Reason="", readiness=false. Elapsed: 14m56.884480835s
I1106 23:59:27.798] Nov  6 23:59:27.798: INFO: Pod "azuredisk-volume-tester-4j8ft": Phase="Pending", Reason="", readiness=false. Elapsed: 14m58.963833938s
I1106 23:59:29.798] Nov  6 23:59:29.798: INFO: Unexpected error occurred: Gave up after waiting 15m0s for pod "azuredisk-volume-tester-4j8ft" to be "Error status code"
I1106 23:59:29.799] Nov  6 23:59:29.798: INFO: deleting Pod "azuredisk-7941"/"azuredisk-volume-tester-4j8ft"
I1106 23:59:29.879] Nov  6 23:59:29.879: INFO: Pod azuredisk-volume-tester-4j8ft has the following logs: 
I1106 23:59:29.879] STEP: Deleting pod azuredisk-volume-tester-4j8ft in namespace azuredisk-7941
I1106 23:59:30.124] STEP: validating provisioned PV
I1106 23:59:30.124] Nov  6 23:59:30.124: INFO: Unexpected error occurred: resource name may not be empty
I1106 23:59:30.124] Nov  6 23:59:30.124: INFO: deleting StorageClass azuredisk-7941-disk.csi.azure.com-dynamic-sc-6zsdp
I1106 23:59:30.226] STEP: Collecting events from namespace "azuredisk-7941".
I1106 23:59:30.305] STEP: Found 7 events.
I1106 23:59:30.305] Nov  6 23:59:30.305: INFO: At 2019-11-06 23:44:28 +0000 UTC - event for pvc-g5hwm: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
I1106 23:59:30.305] Nov  6 23:59:30.305: INFO: At 2019-11-06 23:44:28 +0000 UTC - event for pvc-g5hwm: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "disk.csi.azure.com" or manually created by system administrator
I1106 23:59:30.305] Nov  6 23:59:30.305: INFO: At 2019-11-06 23:44:28 +0000 UTC - event for pvc-g5hwm: {disk.csi.azure.com_csi-azuredisk-controller-c7cd577cc-5q479_0f8f91a4-00eb-11ea-84b4-aee61c96403d } Provisioning: External provisioner is provisioning volume for claim "azuredisk-7941/pvc-g5hwm"
I1106 23:59:30.306] Nov  6 23:59:30.305: INFO: At 2019-11-06 23:44:38 +0000 UTC - event for pvc-g5hwm: {disk.csi.azure.com_csi-azuredisk-controller-c7cd577cc-5q479_0f8f91a4-00eb-11ea-84b4-aee61c96403d } ProvisioningFailed: failed to provision volume with StorageClass "azuredisk-7941-disk.csi.azure.com-dynamic-sc-6zsdp": rpc error: code = DeadlineExceeded desc = context deadline exceeded
I1106 23:59:30.306] Nov  6 23:59:30.305: INFO: At 2019-11-06 23:54:28 +0000 UTC - event for azuredisk-volume-tester-4j8ft: {default-scheduler } FailedScheduling: timed out waiting for the condition
I1106 23:59:30.306] Nov  6 23:59:30.305: INFO: At 2019-11-06 23:54:29 +0000 UTC - event for azuredisk-volume-tester-4j8ft: {default-scheduler } FailedScheduling: AssumePod failed: pod fcfb7147-ba6e-4011-9569-30ea01abc5b5 is in the cache, so can't be assumed
I1106 23:59:30.306] Nov  6 23:59:30.305: INFO: At 2019-11-06 23:59:29 +0000 UTC - event for azuredisk-volume-tester-4j8ft: {default-scheduler } FailedScheduling: skip schedule deleting pod: azuredisk-7941/azuredisk-volume-tester-4j8ft
I1106 23:59:30.400] Nov  6 23:59:30.400: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I1106 23:59:30.400] Nov  6 23:59:30.400: INFO: 
I1106 23:59:30.634] Nov  6 23:59:30.633: INFO: 
I1106 23:59:30.634] Logging node info for node k8s-agentpool-36331299-vmss000000
I1106 23:59:30.716] Nov  6 23:59:30.713: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-agentpool-36331299-vmss000000,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-agentpool-36331299-vmss000000,UID:125ee230-4db3-4b2c-9e8d-2973602a309d,ResourceVersion:9432,Generation:0,CreationTimestamp:2019-11-06 23:10:32 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{agentpool: agentpool,beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: Standard_D4s_v3,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: eastus2,failure-domain.beta.kubernetes.io/zone: eastus2-1,kubernetes.azure.com/cluster: kubetest-2af767f3-00e7-11ea-a849-0242a2398615,kubernetes.azure.com/role: agent,kubernetes.io/arch: amd64,kubernetes.io/hostname: k8s-agentpool-36331299-vmss000000,kubernetes.io/os: linux,kubernetes.io/role: agent,node-role.kubernetes.io/agent: ,storageprofile: managed,storagetier: Premium_LRS,topology.disk.csi.azure.com/zone: eastus2-1,},Annotations:map[string]string{csi.volume.kubernetes.io/nodeid: {"disk.csi.azure.com":"k8s-agentpool-36331299-vmss000000"},node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-2af767f3-00e7-11ea-a849-0242a2398615/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool-36331299-vmss/virtualMachines/0,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-csi-disk.csi.azure.com: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797044736 0} 
{<nil>} 16403364Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-csi-disk.csi.azure.com: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16010612736 0} {<nil>} 15635364Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-11-06 23:11:18 +0000 UTC 2019-11-06 23:11:18 +0000 UTC RouteCreated RouteController created a route} {MemoryPressure False 2019-11-06 23:58:31 +0000 UTC 2019-11-06 23:10:14 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-11-06 23:58:31 +0000 UTC 2019-11-06 23:10:14 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-11-06 23:58:31 +0000 UTC 2019-11-06 23:10:14 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-11-06 23:58:31 +0000 UTC 2019-11-06 23:10:49 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.248.0.4} {Hostname k8s-agentpool-36331299-vmss000000}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e625f1ac480246c9a6ca48a84e7dfee2,SystemUUID:A8065A62-8955-8741-8F0F-24D2165CD482,BootID:f142ba43-b8c0-4473-b4c2-855beb1679a6,KernelVersion:4.15.0-1061-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.15.0,KubeProxyVersion:v1.15.0,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcrio.azureedge.net/google_containers/hyperkube-amd64@sha256:7f66ed6bd11edd309dad15db1dc906f89035cc9d9ae705c65c98264c775b9a8f gcrio.azureedge.net/google_containers/hyperkube-amd64:v1.15.0] 593352254} {[k8sprowinternal.azurecr.io/azuredisk-csi@sha256:175eda8b83135efa8b7c7642080b8be977d5c2ae475317e31a6779568ff2047f k8sprowinternal.azurecr.io/azuredisk-csi:e2e-9feed04266edf86f2ee8447f59e07ac7ff44bd67] 283172400} {[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1] 121711221} {[gcr.io/kubernetes-helm/tiller@sha256:f6d8f4ab9ba993b5f5b60a6edafe86352eabe474ffeb84cb6c79b8866dce45d1 gcr.io/kubernetes-helm/tiller:v2.11.0] 71821984} {[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0] 50144412} {[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7] 22909487} {[quay.io/k8scsi/livenessprobe@sha256:8c9c8a0f330acc2e558ae6624309447d02522e76aa35af23fa19b88abf4381cb quay.io/k8scsi/livenessprobe:v1.0.2] 17415381} {[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0] 15815995} 
{[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8] 1159025} {[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
... skipping 176 lines ...
I1106 23:59:51.445] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:33
I1106 23:59:51.445]   [multi-az]
I1106 23:59:51.445]   /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:40
I1106 23:59:51.445]     should create a volume on demand and mount it as readOnly in a pod [It]
I1106 23:59:51.445]     /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:126
I1106 23:59:51.445] 
I1106 23:59:51.445]     Unexpected error:
I1106 23:59:51.445]         <*errors.errorString | 0xc0007d2fd0>: {
I1106 23:59:51.446]             s: "Gave up after waiting 15m0s for pod \"azuredisk-volume-tester-4j8ft\" to be \"Error status code\"",
I1106 23:59:51.446]         }
I1106 23:59:51.446]         Gave up after waiting 15m0s for pod "azuredisk-volume-tester-4j8ft" to be "Error status code"
I1106 23:59:51.446]     occurred
I1106 23:59:51.446] 
I1106 23:59:51.446]     /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:553
I1106 23:59:51.446] ------------------------------
I1106 23:59:51.446] [azuredisk-csi-e2e] Dynamic Provisioning [multi-az] 
I1106 23:59:51.447]   should create multiple PV objects, bind to PVCs and attach all to different pods on the same node
... skipping 6 lines ...
I1106 23:59:52.026] STEP: setting up the StorageClass
I1106 23:59:52.027] STEP: creating a StorageClass 
I1106 23:59:52.145] STEP: setting up the PVC and PV
I1106 23:59:52.145] STEP: creating a PVC
I1106 23:59:52.235] STEP: deploying the pod
I1106 23:59:52.321] STEP: checking that the pod is running
I1107 00:04:52.561] Nov  7 00:04:52.561: INFO: Unexpected error occurred: timed out waiting for the condition
I1107 00:04:52.562] Nov  7 00:04:52.562: INFO: deleting Pod "azuredisk-3654"/"azuredisk-volume-tester-hbg5w"
I1107 00:04:52.641] Nov  7 00:04:52.641: INFO: Pod azuredisk-volume-tester-hbg5w has the following logs: 
I1107 00:04:52.641] STEP: Deleting pod azuredisk-volume-tester-hbg5w in namespace azuredisk-3654
I1107 00:04:52.826] STEP: validating provisioned PV
I1107 00:04:52.827] Nov  7 00:04:52.825: INFO: Unexpected error occurred: resource name may not be empty
I1107 00:04:52.827] Nov  7 00:04:52.826: INFO: deleting StorageClass azuredisk-3654-disk.csi.azure.com-dynamic-sc-5cllf
I1107 00:04:52.916] STEP: Collecting events from namespace "azuredisk-3654".
I1107 00:04:52.997] STEP: Found 5 events.
I1107 00:04:52.997] Nov  7 00:04:52.996: INFO: At 2019-11-06 23:59:52 +0000 UTC - event for pvc-v6jkf: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
I1107 00:04:52.998] Nov  7 00:04:52.996: INFO: At 2019-11-06 23:59:52 +0000 UTC - event for pvc-v6jkf: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "disk.csi.azure.com" or manually created by system administrator
I1107 00:04:52.998] Nov  7 00:04:52.996: INFO: At 2019-11-06 23:59:52 +0000 UTC - event for pvc-v6jkf: {disk.csi.azure.com_csi-azuredisk-controller-c7cd577cc-5q479_0f8f91a4-00eb-11ea-84b4-aee61c96403d } Provisioning: External provisioner is provisioning volume for claim "azuredisk-3654/pvc-v6jkf"
I1107 00:04:52.998] Nov  7 00:04:52.996: INFO: At 2019-11-07 00:00:02 +0000 UTC - event for pvc-v6jkf: {disk.csi.azure.com_csi-azuredisk-controller-c7cd577cc-5q479_0f8f91a4-00eb-11ea-84b4-aee61c96403d } ProvisioningFailed: failed to provision volume with StorageClass "azuredisk-3654-disk.csi.azure.com-dynamic-sc-5cllf": rpc error: code = DeadlineExceeded desc = context deadline exceeded
I1107 00:04:52.999] Nov  7 00:04:52.996: INFO: At 2019-11-07 00:04:52 +0000 UTC - event for azuredisk-volume-tester-hbg5w: {default-scheduler } FailedScheduling: skip schedule deleting pod: azuredisk-3654/azuredisk-volume-tester-hbg5w
I1107 00:04:53.077] Nov  7 00:04:53.077: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I1107 00:04:53.077] Nov  7 00:04:53.077: INFO: 
I1107 00:04:53.314] Nov  7 00:04:53.314: INFO: 
I1107 00:04:53.314] Logging node info for node k8s-agentpool-36331299-vmss000000
I1107 00:04:53.398] Nov  7 00:04:53.393: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-agentpool-36331299-vmss000000,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-agentpool-36331299-vmss000000,UID:125ee230-4db3-4b2c-9e8d-2973602a309d,ResourceVersion:10516,Generation:0,CreationTimestamp:2019-11-06 23:10:32 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{agentpool: agentpool,beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: Standard_D4s_v3,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: eastus2,failure-domain.beta.kubernetes.io/zone: eastus2-1,kubernetes.azure.com/cluster: kubetest-2af767f3-00e7-11ea-a849-0242a2398615,kubernetes.azure.com/role: agent,kubernetes.io/arch: amd64,kubernetes.io/hostname: k8s-agentpool-36331299-vmss000000,kubernetes.io/os: linux,kubernetes.io/role: agent,node-role.kubernetes.io/agent: ,storageprofile: managed,storagetier: Premium_LRS,topology.disk.csi.azure.com/zone: eastus2-1,},Annotations:map[string]string{csi.volume.kubernetes.io/nodeid: {"disk.csi.azure.com":"k8s-agentpool-36331299-vmss000000"},node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-2af767f3-00e7-11ea-a849-0242a2398615/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool-36331299-vmss/virtualMachines/0,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-csi-disk.csi.azure.com: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797044736 0} 
{<nil>} 16403364Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-csi-disk.csi.azure.com: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16010612736 0} {<nil>} 15635364Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-11-06 23:11:18 +0000 UTC 2019-11-06 23:11:18 +0000 UTC RouteCreated RouteController created a route} {MemoryPressure False 2019-11-07 00:04:32 +0000 UTC 2019-11-06 23:10:14 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-11-07 00:04:32 +0000 UTC 2019-11-06 23:10:14 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-11-07 00:04:32 +0000 UTC 2019-11-06 23:10:14 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-11-07 00:04:32 +0000 UTC 2019-11-06 23:10:49 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.248.0.4} {Hostname k8s-agentpool-36331299-vmss000000}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e625f1ac480246c9a6ca48a84e7dfee2,SystemUUID:A8065A62-8955-8741-8F0F-24D2165CD482,BootID:f142ba43-b8c0-4473-b4c2-855beb1679a6,KernelVersion:4.15.0-1061-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.15.0,KubeProxyVersion:v1.15.0,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcrio.azureedge.net/google_containers/hyperkube-amd64@sha256:7f66ed6bd11edd309dad15db1dc906f89035cc9d9ae705c65c98264c775b9a8f gcrio.azureedge.net/google_containers/hyperkube-amd64:v1.15.0] 593352254} {[k8sprowinternal.azurecr.io/azuredisk-csi@sha256:175eda8b83135efa8b7c7642080b8be977d5c2ae475317e31a6779568ff2047f k8sprowinternal.azurecr.io/azuredisk-csi:e2e-9feed04266edf86f2ee8447f59e07ac7ff44bd67] 283172400} {[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1] 121711221} {[gcr.io/kubernetes-helm/tiller@sha256:f6d8f4ab9ba993b5f5b60a6edafe86352eabe474ffeb84cb6c79b8866dce45d1 gcr.io/kubernetes-helm/tiller:v2.11.0] 71821984} {[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0] 50144412} {[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7] 22909487} {[quay.io/k8scsi/livenessprobe@sha256:8c9c8a0f330acc2e558ae6624309447d02522e76aa35af23fa19b88abf4381cb quay.io/k8scsi/livenessprobe:v1.0.2] 17415381} {[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0] 15815995} 
{[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8] 1159025} {[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
... skipping 176 lines ...
I1107 00:05:14.324] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:33
I1107 00:05:14.324]   [multi-az]
I1107 00:05:14.324]   /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:40
I1107 00:05:14.324]     should create multiple PV objects, bind to PVCs and attach all to different pods on the same node [It]
I1107 00:05:14.325]     /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:151
I1107 00:05:14.325] 
I1107 00:05:14.325]     Unexpected error:
I1107 00:05:14.325]         <*errors.errorString | 0xc000250df0>: {
I1107 00:05:14.326]             s: "timed out waiting for the condition",
I1107 00:05:14.326]         }
I1107 00:05:14.326]         timed out waiting for the condition
I1107 00:05:14.326]     occurred
I1107 00:05:14.326] 
... skipping 268 lines ...
I1107 00:13:55.363] Nov  7 00:13:55.363: INFO: PersistentVolumeClaim pvc-9w66s found but phase is Pending instead of Bound.
I1107 00:13:57.443] Nov  7 00:13:57.443: INFO: PersistentVolumeClaim pvc-9w66s found but phase is Pending instead of Bound.
I1107 00:13:59.523] Nov  7 00:13:59.523: INFO: PersistentVolumeClaim pvc-9w66s found but phase is Pending instead of Bound.
I1107 00:14:01.603] Nov  7 00:14:01.603: INFO: PersistentVolumeClaim pvc-9w66s found but phase is Pending instead of Bound.
I1107 00:14:03.683] Nov  7 00:14:03.683: INFO: PersistentVolumeClaim pvc-9w66s found but phase is Pending instead of Bound.
I1107 00:14:05.764] Nov  7 00:14:05.763: INFO: PersistentVolumeClaim pvc-9w66s found but phase is Pending instead of Bound.
I1107 00:14:07.764] Nov  7 00:14:07.764: INFO: Unexpected error occurred: PersistentVolumeClaims [pvc-9w66s] not all in phase Bound within 5m0s
I1107 00:14:07.764] STEP: Collecting events from namespace "azuredisk-6554".
I1107 00:14:07.844] STEP: Found 3 events.
I1107 00:14:07.845] Nov  7 00:14:07.844: INFO: At 2019-11-07 00:09:06 +0000 UTC - event for pvc-9w66s: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "disk.csi.azure.com" or manually created by system administrator
I1107 00:14:07.845] Nov  7 00:14:07.844: INFO: At 2019-11-07 00:09:06 +0000 UTC - event for pvc-9w66s: {disk.csi.azure.com_csi-azuredisk-controller-c7cd577cc-5q479_0f8f91a4-00eb-11ea-84b4-aee61c96403d } Provisioning: External provisioner is provisioning volume for claim "azuredisk-6554/pvc-9w66s"
I1107 00:14:07.845] Nov  7 00:14:07.844: INFO: At 2019-11-07 00:09:16 +0000 UTC - event for pvc-9w66s: {disk.csi.azure.com_csi-azuredisk-controller-c7cd577cc-5q479_0f8f91a4-00eb-11ea-84b4-aee61c96403d } ProvisioningFailed: failed to provision volume with StorageClass "azuredisk-6554-disk.csi.azure.com-dynamic-sc-vdcq2": rpc error: code = DeadlineExceeded desc = context deadline exceeded
I1107 00:14:07.924] Nov  7 00:14:07.924: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I1107 00:14:07.924] Nov  7 00:14:07.924: INFO: 
I1107 00:14:08.159] Nov  7 00:14:08.159: INFO: 
I1107 00:14:08.159] Logging node info for node k8s-agentpool-36331299-vmss000000
I1107 00:14:08.241] Nov  7 00:14:08.238: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-agentpool-36331299-vmss000000,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-agentpool-36331299-vmss000000,UID:125ee230-4db3-4b2c-9e8d-2973602a309d,ResourceVersion:12168,Generation:0,CreationTimestamp:2019-11-06 23:10:32 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{agentpool: agentpool,beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: Standard_D4s_v3,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: eastus2,failure-domain.beta.kubernetes.io/zone: eastus2-1,kubernetes.azure.com/cluster: kubetest-2af767f3-00e7-11ea-a849-0242a2398615,kubernetes.azure.com/role: agent,kubernetes.io/arch: amd64,kubernetes.io/hostname: k8s-agentpool-36331299-vmss000000,kubernetes.io/os: linux,kubernetes.io/role: agent,node-role.kubernetes.io/agent: ,storageprofile: managed,storagetier: Premium_LRS,topology.disk.csi.azure.com/zone: eastus2-1,},Annotations:map[string]string{csi.volume.kubernetes.io/nodeid: {"disk.csi.azure.com":"k8s-agentpool-36331299-vmss000000"},node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-2af767f3-00e7-11ea-a849-0242a2398615/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool-36331299-vmss/virtualMachines/0,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-csi-disk.csi.azure.com: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797044736 0} 
{<nil>} 16403364Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-csi-disk.csi.azure.com: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16010612736 0} {<nil>} 15635364Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-11-06 23:11:18 +0000 UTC 2019-11-06 23:11:18 +0000 UTC RouteCreated RouteController created a route} {MemoryPressure False 2019-11-07 00:13:13 +0000 UTC 2019-11-06 23:10:14 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-11-07 00:13:13 +0000 UTC 2019-11-06 23:10:14 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-11-07 00:13:13 +0000 UTC 2019-11-06 23:10:14 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-11-07 00:13:13 +0000 UTC 2019-11-06 23:10:49 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.248.0.4} {Hostname k8s-agentpool-36331299-vmss000000}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e625f1ac480246c9a6ca48a84e7dfee2,SystemUUID:A8065A62-8955-8741-8F0F-24D2165CD482,BootID:f142ba43-b8c0-4473-b4c2-855beb1679a6,KernelVersion:4.15.0-1061-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.15.0,KubeProxyVersion:v1.15.0,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcrio.azureedge.net/google_containers/hyperkube-amd64@sha256:7f66ed6bd11edd309dad15db1dc906f89035cc9d9ae705c65c98264c775b9a8f gcrio.azureedge.net/google_containers/hyperkube-amd64:v1.15.0] 593352254} {[k8sprowinternal.azurecr.io/azuredisk-csi@sha256:175eda8b83135efa8b7c7642080b8be977d5c2ae475317e31a6779568ff2047f k8sprowinternal.azurecr.io/azuredisk-csi:e2e-9feed04266edf86f2ee8447f59e07ac7ff44bd67] 283172400} {[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1] 121711221} {[gcr.io/kubernetes-helm/tiller@sha256:f6d8f4ab9ba993b5f5b60a6edafe86352eabe474ffeb84cb6c79b8866dce45d1 gcr.io/kubernetes-helm/tiller:v2.11.0] 71821984} {[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0] 50144412} {[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7] 22909487} {[quay.io/k8scsi/livenessprobe@sha256:8c9c8a0f330acc2e558ae6624309447d02522e76aa35af23fa19b88abf4381cb quay.io/k8scsi/livenessprobe:v1.0.2] 17415381} {[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0] 15815995} 
{[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8] 1159025} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
I1107 00:14:08.241] Nov  7 00:14:08.238: INFO: 
... skipping 175 lines ...
I1107 00:14:29.055] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:33
I1107 00:14:29.055]   [multi-az]
I1107 00:14:29.055]   /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:40
I1107 00:14:29.055]     should delete PV with reclaimPolicy "Delete" [It]
I1107 00:14:29.055]     /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning.go:221
I1107 00:14:29.055] 
I1107 00:14:29.055]     Unexpected error:
I1107 00:14:29.056]         <*errors.errorString | 0xc000b1f550>: {
I1107 00:14:29.056]             s: "PersistentVolumeClaims [pvc-9w66s] not all in phase Bound within 5m0s",
I1107 00:14:29.056]         }
I1107 00:14:29.056]         PersistentVolumeClaims [pvc-9w66s] not all in phase Bound within 5m0s
I1107 00:14:29.056]     occurred
I1107 00:14:29.056] 
... skipping 52 lines ...
I1107 00:14:58.946] make[1]: Leaving directory '/go/src/sigs.k8s.io/azuredisk-csi-driver'
I1107 00:14:58.946] 2019/11/07 00:14:58 Azure Disk CSI Driver uninstalled
I1107 00:14:58.946] 
I1107 00:14:58.946] 
I1107 00:14:58.946] Summarizing 5 Failures:
I1107 00:14:58.946] 
I1107 00:14:58.947] [Fail] [azuredisk-csi-e2e] Dynamic Provisioning [multi-az] [It] should create a volume on demand 
I1107 00:14:58.947] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:529
I1107 00:14:58.947] 
I1107 00:14:58.947] [Fail] [azuredisk-csi-e2e] Dynamic Provisioning [multi-az] [It] should create a raw block volume on demand 
I1107 00:14:58.947] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:529
I1107 00:14:58.947] 
I1107 00:14:58.947] [Fail] [azuredisk-csi-e2e] Dynamic Provisioning [multi-az] [It] should create a volume on demand and mount it as readOnly in a pod 
I1107 00:14:58.948] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:553
I1107 00:14:58.948] 
I1107 00:14:58.948] [Fail] [azuredisk-csi-e2e] Dynamic Provisioning [multi-az] [It] should create multiple PV objects, bind to PVCs and attach all to different pods on the same node 
I1107 00:14:58.948] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:534
I1107 00:14:58.948] 
I1107 00:14:58.948] [Fail] [azuredisk-csi-e2e] Dynamic Provisioning [multi-az] [It] should delete PV with reclaimPolicy "Delete" 
I1107 00:14:58.948] /go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:278
I1107 00:14:58.949] 
I1107 00:14:58.949] Ran 7 of 16 Specs in 3765.088 seconds
I1107 00:14:58.949] FAIL! -- 2 Passed | 5 Failed | 0 Pending | 9 Skipped
I1107 00:14:58.949] --- FAIL: TestE2E (3765.09s)
I1107 00:14:58.949] FAIL
I1107 00:14:58.951] FAIL	sigs.k8s.io/azuredisk-csi-driver/test/e2e	3765.145s
I1107 00:14:58.958] FAIL
W1107 00:14:59.214] make: *** [Makefile:65: e2e-test] Error 1
W1107 00:14:59.214] 2019/11/07 00:14:59 process.go:155: Step 'make e2e-test' finished in 1h3m33.115014216s
W1107 00:14:59.215] 2019/11/07 00:14:59 azure.go:912: Skippng DumpClusterLogs due to CCM not being enabled.
W1107 00:14:59.215] 2019/11/07 00:14:59 azure.go:906: Deleting resource group: kubetest-2af767f3-00e7-11ea-a849-0242a2398615.
W1107 00:25:08.579] 2019/11/07 00:25:08 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W1107 00:25:08.579] 2019/11/07 00:25:08 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
W1107 00:25:09.348] 2019/11/07 00:25:09 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 769.20067ms
W1107 00:25:09.349] 2019/11/07 00:25:09 main.go:319: Something went wrong: encountered 1 errors: [error during make e2e-test: exit status 2]
W1107 00:25:09.350] Traceback (most recent call last):
W1107 00:25:09.350]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W1107 00:25:09.351]     main(parse_args())
W1107 00:25:09.351]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W1107 00:25:09.351]     mode.start(runner_args)
W1107 00:25:09.351]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W1107 00:25:09.352]     check_env(env, self.command, *args)
W1107 00:25:09.352]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W1107 00:25:09.352]     subprocess.check_call(cmd, env=env)
W1107 00:25:09.352]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W1107 00:25:09.352]     raise CalledProcessError(retcode, cmd)
W1107 00:25:09.353] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--build=quick', '--up', '--down', '--test', '--deployment=acsengine', '--provider=skeleton', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--ginkgo-parallel=1', '--acsengine-agentpoolcount=2', '--acsengine-admin-username=azureuser', '--acsengine-creds=/etc/azure-cred/credentials', '--acsengine-orchestratorRelease=1.16', '--acsengine-mastervmsize=Standard_DS2_v2', '--acsengine-agentvmsize=Standard_D4s_v3', '--acsengine-location=eastus2', '--acsengine-public-key=/etc/azure-ssh/azure-ssh-pub', '--acsengine-template-url=https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/tests/k8s-azure/manifest/linux-vmss-multi-zones.json', '--acsengine-download-url=https://github.com/Azure/aks-engine/releases/download/v0.39.1/aks-engine-v0.39.1-linux-amd64.tar.gz', '--test-azure-disk-csi-driver=True', '--timeout=420m')' returned non-zero exit status 1
E1107 00:25:09.357] Command failed
I1107 00:25:09.357] process 684 exited with code 1 after 99.4m
E1107 00:25:09.358] FAIL: chewong-pull-azuredisk-csi-driver-e2e-multi-az
I1107 00:25:09.358] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W1107 00:25:09.924] Activated service account credentials for: [kubernetes-upstream@kubernetes-upstream.iam.gserviceaccount.com]
I1107 00:25:09.955] process 2581 exited with code 0 after 0.0m
I1107 00:25:09.955] Call:  gcloud config get-value account
I1107 00:25:10.213] process 2593 exited with code 0 after 0.0m
I1107 00:25:10.213] Will upload results to gs://kubernetes-upstream/pr-logs using kubernetes-upstream@kubernetes-upstream.iam.gserviceaccount.com
I1107 00:25:10.213] Upload result and artifacts...
I1107 00:25:10.213] Gubernator results at https://gubernator.k8s.io/build/kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/180/chewong-pull-azuredisk-csi-driver-e2e-multi-az/1192211106714619904
I1107 00:25:10.214] Call:  gsutil ls gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/180/chewong-pull-azuredisk-csi-driver-e2e-multi-az/1192211106714619904/artifacts
W1107 00:25:11.106] CommandException: One or more URLs matched no objects.
E1107 00:25:11.184] Command failed
I1107 00:25:11.184] process 2605 exited with code 1 after 0.0m
W1107 00:25:11.185] Remote dir gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/180/chewong-pull-azuredisk-csi-driver-e2e-multi-az/1192211106714619904/artifacts not exist yet
I1107 00:25:11.185] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-upstream/pr-logs/pull/sigs.k8s.io_azuredisk-csi-driver/180/chewong-pull-azuredisk-csi-driver-e2e-multi-az/1192211106714619904/artifacts
I1107 00:25:12.639] process 2749 exited with code 0 after 0.0m
I1107 00:25:12.639] Call:  git rev-parse HEAD
I1107 00:25:12.642] process 3278 exited with code 0 after 0.0m
... skipping 21 lines ...