Recent runs | View in Spyglass

Result       | FAILURE
Tests        | 14 failed / 26 succeeded
Started      |
Elapsed      | 1h4m
Revision     |
Builder      | b3d03ae6-10b6-11ea-b3d3-b20db476995a
Refs         | master:1c5b6cb6 85621:657a3a32
infra-commit | 282e49f14
job-version  | v1.18.0-alpha.0.1191+5975b80d569031-dirty
repo         | k8s.io/kubernetes
repo-commit  | 5975b80d569031dc570afedb2a4c6574e512446c
repos        | {u'k8s.io/kubernetes': u'master:1c5b6cb66e6ae85177e76d4fddf7d99473ab2aed,85621:657a3a3294e70685229282713361ecef6a366989'}
revision     | v1.18.0-alpha.0.1191+5975b80d569031-dirty
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sazure\-disk\]\s\[Testpattern\:\sDynamic\sPV\s\(block\svolmode\)\]\smultiVolume\s\[Slow\]\sshould\saccess\sto\stwo\svolumes\swith\sdifferent\svolume\smode\sand\sretain\sdata\sacross\spod\srecreation\son\sdifferent\snode$'
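The `--ginkgo.focus` value above is simply the full spec name with regex metacharacters escaped and each space turned into `\s`. A minimal sketch of that transformation — the `sed` recipe is an assumption for illustration, not the generator the CI job actually uses:

```shell
# Derive a ginkgo focus regex from a spec name: escape ] [ ( ) : - and
# replace spaces with \s, matching the shape of the --ginkgo.focus value above.
# (Hypothetical helper; the job builds its focus string its own way.)
focus_regex() {
  printf '%s' "$1" | sed -e 's/[][():-]/\\&/g' -e 's/ /\\s/g'
}

focus_regex '[sig-storage] multiVolume [Slow] (block volmode)'
# → \[sig\-storage\]\smultiVolume\s\[Slow\]\s\(block\svolmode\)
```

Anchoring the result with a trailing `$`, as the command above does, keeps the focus from also matching longer spec names that share the prefix.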
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:241
Nov 27 02:21:51.287: Unexpected error:
    <*errors.errorString | 0xc00286ce10>: {
        s: "pod \"security-context-d6c4054e-cb3a-4228-9e3b-db7254d4c5b4\" is not Running: timed out waiting for the condition",
    }
    pod "security-context-d6c4054e-cb3a-4228-9e3b-db7254d4c5b4" is not Running: timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:360
from junit_07.xml
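This spec verifies that data survives pod recreation by decoding 64 random base64-encoded bytes into the volume with `dd`, reading them back, and comparing sha256 digests — the exec commands appear later in the log. A self-contained sketch of that round-trip, using a temp file as a stand-in for the raw block device `/mnt/volume1`:

```shell
# Write 64 decoded bytes, read them back, and compare sha256 digests,
# mirroring the test's data-integrity check. A temp file stands in for
# the raw block device the real test writes to.
payload="XfWWzDVogbw5t/41o38Qg1pQUzTGKBDeXcvWkDgEgwf2y3CuxnelOXiNEsWANTQSW+Ofu2SubVj675IQWH8aZw=="
target=$(mktemp)

# Digest of the decoded payload; per the log this is
# 7f5f8e4c0d7c1bcd4a32f8b3d259f79346c62a0fd3158cf5bfc0c2ce39bc52d2.
want=$(echo "$payload" | base64 -d | sha256sum | cut -d' ' -f1)

# Write the payload, then read it back and re-hash.
echo "$payload" | base64 -d | dd of="$target" bs=64 count=1 2>/dev/null
got=$(dd if="$target" bs=64 count=1 2>/dev/null | sha256sum | cut -d' ' -f1)

[ "$got" = "$want" ] && echo "checksum OK"
rm -f "$target"
```

In the real test the read-back digest is compared with `sha256sum | grep -Fq <digest>` inside the pod, which is why the exec commands in the log carry the expected hex string.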
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:88
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 27 02:12:00.953: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
STEP: Building a namespace api object, basename multivolume
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in multivolume-7967
STEP: Waiting for a default service account to be provisioned in namespace
[It] should access to two volumes with different volume mode and retain data across pod recreation on different node
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:241
Nov 27 02:12:01.107: INFO: Creating resource for dynamic PV
Nov 27 02:12:01.107: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(azure-disk) supported size:{ 1Mi}
STEP: creating a StorageClass multivolume-7967-azure-disk-scrq25t
STEP: creating a claim
Nov 27 02:12:01.121: INFO: Waiting up to 5m0s for PersistentVolumeClaims [azure-disk88rsc] to have phase Bound
Nov 27 02:12:01.124: INFO: PersistentVolumeClaim azure-disk88rsc found but phase is Pending instead of Bound.
Nov 27 02:12:03.128: INFO: PersistentVolumeClaim azure-disk88rsc found but phase is Pending instead of Bound.
Nov 27 02:12:05.133: INFO: PersistentVolumeClaim azure-disk88rsc found but phase is Pending instead of Bound.
Nov 27 02:12:07.136: INFO: PersistentVolumeClaim azure-disk88rsc found and phase=Bound (6.01509341s)
Nov 27 02:12:07.142: INFO: Creating resource for dynamic PV
Nov 27 02:12:07.142: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(azure-disk) supported size:{ 1Mi}
STEP: creating a StorageClass multivolume-7967-azure-disk-schljt7
STEP: creating a claim
Nov 27 02:12:07.150: INFO: Waiting up to 5m0s for PersistentVolumeClaims [azure-diskgkhj8] to have phase Bound
Nov 27 02:12:07.153: INFO: PersistentVolumeClaim azure-diskgkhj8 found but phase is Pending instead of Bound.
Nov 27 02:12:09.157: INFO: PersistentVolumeClaim azure-diskgkhj8 found but phase is Pending instead of Bound.
Nov 27 02:12:11.161: INFO: PersistentVolumeClaim azure-diskgkhj8 found but phase is Pending instead of Bound.
Nov 27 02:12:13.164: INFO: PersistentVolumeClaim azure-diskgkhj8 found but phase is Pending instead of Bound.
Nov 27 02:12:15.174: INFO: PersistentVolumeClaim azure-diskgkhj8 found but phase is Pending instead of Bound.
Nov 27 02:12:17.178: INFO: PersistentVolumeClaim azure-diskgkhj8 found but phase is Pending instead of Bound.
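The Bound-wait above is a plain poll: check the claim's phase every 2 seconds, give up after 5 minutes. The same shape in shell, where `get_phase` is a stand-in for the kubectl lookup (a sketch, not the framework's actual code):

```shell
# Poll until the claim reports Bound, as the framework does above.
# get_phase stands in for something like:
#   kubectl -n multivolume-7967 get pvc azure-disk88rsc -o jsonpath='{.status.phase}'
wait_for_bound() {
  deadline=$1   # total seconds to wait (300 matches the 5m0s in the log)
  interval=$2   # seconds between checks (2 matches the log's cadence)
  elapsed=0
  while [ "$elapsed" -lt "$deadline" ]; do
    [ "$(get_phase)" = "Bound" ] && return 0
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  echo "timed out waiting for the condition" >&2
  return 1
}
```

With the real kubectl lookup in place, `wait_for_bound 300 2` reproduces the 5m0s/2s behavior seen in the log, including the "timed out waiting for the condition" message on failure.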
Nov 27 02:12:19.181: INFO: PersistentVolumeClaim azure-diskgkhj8 found and phase=Bound (12.031612198s)
STEP: Creating pod on {Name: Selector:map[] Affinity:nil} with multiple volumes
STEP: Checking if the volume1 exists as expected volume mode (Block)
Nov 27 02:16:41.209: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /mnt/volume1] Namespace:multivolume-7967 PodName:security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:16:41.209: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
Nov 27 02:16:41.414: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/volume1] Namespace:multivolume-7967 PodName:security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:16:41.414: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
STEP: Checking if write to the volume1 works properly
Nov 27 02:16:41.584: INFO: ExecWithOptions {Command:[/bin/sh -c echo XfWWzDVogbw5t/41o38Qg1pQUzTGKBDeXcvWkDgEgwf2y3CuxnelOXiNEsWANTQSW+Ofu2SubVj675IQWH8aZw== | base64 -d | sha256sum] Namespace:multivolume-7967 PodName:security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:16:41.584: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
Nov 27 02:16:41.758: INFO: ExecWithOptions {Command:[/bin/sh -c echo XfWWzDVogbw5t/41o38Qg1pQUzTGKBDeXcvWkDgEgwf2y3CuxnelOXiNEsWANTQSW+Ofu2SubVj675IQWH8aZw== | base64 -d | dd of=/mnt/volume1 bs=64 count=1] Namespace:multivolume-7967 PodName:security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:16:41.758: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
STEP: Checking if read from the volume1 works properly
Nov 27 02:16:41.934: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume1 bs=64 count=1 | sha256sum] Namespace:multivolume-7967 PodName:security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:16:41.934: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
Nov 27 02:16:42.091: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume1 bs=64 count=1 | sha256sum | grep -Fq 7f5f8e4c0d7c1bcd4a32f8b3d259f79346c62a0fd3158cf5bfc0c2ce39bc52d2] Namespace:multivolume-7967 PodName:security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:16:42.091: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
STEP: Checking if the volume2 exists as expected volume mode (Filesystem)
Nov 27 02:16:42.246: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/volume2] Namespace:multivolume-7967 PodName:security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:16:42.246: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
Nov 27 02:16:42.405: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /mnt/volume2] Namespace:multivolume-7967 PodName:security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:16:42.405: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
STEP: Checking if write to the volume2 works properly
Nov 27 02:16:42.556: INFO: ExecWithOptions {Command:[/bin/sh -c echo ujpsrPdyYNwUeGL1Cw41jP5a5y7JJf7D4gwzZ7M4+2mvTTiyPlbXpgZjbavUdA4eeD94kSEPtdBYL0cKJ8mkjw== | base64 -d | sha256sum] Namespace:multivolume-7967 PodName:security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:16:42.556: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
Nov 27 02:16:42.728: INFO: ExecWithOptions {Command:[/bin/sh -c echo ujpsrPdyYNwUeGL1Cw41jP5a5y7JJf7D4gwzZ7M4+2mvTTiyPlbXpgZjbavUdA4eeD94kSEPtdBYL0cKJ8mkjw== | base64 -d | dd of=/mnt/volume2/file1.txt bs=64 count=1] Namespace:multivolume-7967 PodName:security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:16:42.728: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
STEP: Checking if read from the volume2 works properly
Nov 27 02:16:42.895: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume2/file1.txt bs=64 count=1 | sha256sum] Namespace:multivolume-7967 PodName:security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:16:42.895: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
Nov 27 02:16:43.065: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume2/file1.txt bs=64 count=1 | sha256sum | grep -Fq d981c9ba94b587a9ef1a31820f7337aebfdb61be7b06064baf5bf8b5b745bc24] Namespace:multivolume-7967 PodName:security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:16:43.065: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
Nov 27 02:16:43.243: INFO: Deleting pod "security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9" in namespace
"multivolume-7967"
Nov 27 02:16:43.252: INFO: Wait up to 5m0s for pod "security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9" to be fully deleted
STEP: Creating pod on {Name: Selector:map[] Affinity:&Affinity{NodeAffinity:&NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{},MatchFields:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:metadata.name,Operator:NotIn,Values:[k8s-agentpool1-27910301-vmss000000],},},},},},PreferredDuringSchedulingIgnoredDuringExecution:[]PreferredSchedulingTerm{},},PodAffinity:nil,PodAntiAffinity:nil,}} with multiple volumes
Nov 27 02:21:51.287: FAIL: Unexpected error:
    <*errors.errorString | 0xc00286ce10>: {
        s: "pod \"security-context-d6c4054e-cb3a-4228-9e3b-db7254d4c5b4\" is not Running: timed out waiting for the condition",
    }
    pod "security-context-d6c4054e-cb3a-4228-9e3b-db7254d4c5b4" is not Running: timed out waiting for the condition
occurred
Nov 27 02:21:51.287: INFO: Deleting pod "security-context-d6c4054e-cb3a-4228-9e3b-db7254d4c5b4" in namespace "multivolume-7967"
Nov 27 02:21:51.292: INFO: Wait up to 5m0s for pod "security-context-d6c4054e-cb3a-4228-9e3b-db7254d4c5b4" to be fully deleted
STEP: Deleting pvc
Nov 27 02:22:01.301: INFO: Deleting PersistentVolumeClaim "azure-disk88rsc"
Nov 27 02:22:01.305: INFO: Waiting up to 5m0s for PersistentVolume pvc-ff9df561-0196-4319-bfaf-223abedd298f to get deleted
Nov 27 02:22:01.309: INFO: PersistentVolume pvc-ff9df561-0196-4319-bfaf-223abedd298f found and phase=Bound (3.366027ms)
Nov 27 02:22:06.312: INFO: PersistentVolume pvc-ff9df561-0196-4319-bfaf-223abedd298f found and phase=Failed (5.006839563s)
Nov 27 02:22:11.317: INFO: PersistentVolume pvc-ff9df561-0196-4319-bfaf-223abedd298f found and phase=Failed (10.0115188s)
Nov 27 02:22:16.321: INFO: PersistentVolume pvc-ff9df561-0196-4319-bfaf-223abedd298f found and phase=Failed (15.015477521s)
Nov 27 02:22:21.324: INFO: PersistentVolume pvc-ff9df561-0196-4319-bfaf-223abedd298f found and phase=Failed (20.018442425s)
Nov 27 02:22:26.327: INFO: PersistentVolume pvc-ff9df561-0196-4319-bfaf-223abedd298f found and phase=Failed (25.021693222s)
Nov 27 02:22:31.330: INFO: PersistentVolume pvc-ff9df561-0196-4319-bfaf-223abedd298f found and phase=Failed (30.024719708s)
Nov 27 02:22:36.333: INFO: PersistentVolume pvc-ff9df561-0196-4319-bfaf-223abedd298f was removed
STEP: Deleting sc
STEP: Deleting pvc
Nov 27 02:22:36.338: INFO: Deleting PersistentVolumeClaim "azure-diskgkhj8"
Nov 27 02:22:36.347: INFO: Waiting up to 5m0s for PersistentVolume pvc-63caa552-e092-43fe-b924-ef6a44e177a3 to get deleted
Nov 27 02:22:36.350: INFO: PersistentVolume pvc-63caa552-e092-43fe-b924-ef6a44e177a3 found and phase=Bound (2.51862ms)
Nov 27 02:22:41.353: INFO: PersistentVolume pvc-63caa552-e092-43fe-b924-ef6a44e177a3 found and phase=Failed (5.00583529s)
Nov 27 02:22:46.357: INFO: PersistentVolume pvc-63caa552-e092-43fe-b924-ef6a44e177a3 found and phase=Failed (10.009584154s)
Nov 27 02:22:51.361: INFO: PersistentVolume pvc-63caa552-e092-43fe-b924-ef6a44e177a3 found and phase=Failed (15.013490711s)
Nov 27 02:22:56.364: INFO: PersistentVolume pvc-63caa552-e092-43fe-b924-ef6a44e177a3 found and phase=Failed (20.016951756s)
Nov 27 02:23:01.368: INFO: PersistentVolume pvc-63caa552-e092-43fe-b924-ef6a44e177a3 found and phase=Failed (25.020391292s)
Nov 27 02:23:06.371: INFO: PersistentVolume pvc-63caa552-e092-43fe-b924-ef6a44e177a3 found and phase=Failed (30.02403222s)
Nov 27 02:23:11.379: INFO: PersistentVolume pvc-63caa552-e092-43fe-b924-ef6a44e177a3 found and phase=Failed (35.031656472s)
Nov 27 02:23:16.382: INFO: PersistentVolume pvc-63caa552-e092-43fe-b924-ef6a44e177a3 found and phase=Failed (40.034639878s)
Nov 27 02:23:21.386: INFO: PersistentVolume pvc-63caa552-e092-43fe-b924-ef6a44e177a3 found and phase=Failed (45.038339382s)
Nov 27 02:23:26.389: INFO: PersistentVolume pvc-63caa552-e092-43fe-b924-ef6a44e177a3 found and phase=Failed (50.041776876s)
Nov 27 02:23:31.393: INFO: PersistentVolume pvc-63caa552-e092-43fe-b924-ef6a44e177a3 found and phase=Failed (55.045604864s)
Nov 27 02:23:36.397: INFO: PersistentVolume pvc-63caa552-e092-43fe-b924-ef6a44e177a3 was removed
STEP: Deleting sc
Nov 27 02:23:36.402: INFO: In-tree plugin kubernetes.io/azure-disk is not migrated, not validating any metrics
[AfterEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow]
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "multivolume-7967".
STEP: Found 19 events.
Nov 27 02:23:36.408: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9: {default-scheduler } Scheduled: Successfully assigned multivolume-7967/security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9 to k8s-agentpool1-27910301-vmss000000
Nov 27 02:23:36.408: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for security-context-d6c4054e-cb3a-4228-9e3b-db7254d4c5b4: {default-scheduler } Scheduled: Successfully assigned multivolume-7967/security-context-d6c4054e-cb3a-4228-9e3b-db7254d4c5b4 to k8s-agentpool1-27910301-vmss000001
Nov 27 02:23:36.408: INFO: At 2019-11-27 02:12:06 +0000 UTC - event for azure-disk88rsc: {persistentvolume-controller } ProvisioningSucceeded: Successfully provisioned volume pvc-ff9df561-0196-4319-bfaf-223abedd298f using kubernetes.io/azure-disk
Nov 27 02:23:36.408: INFO: At 2019-11-27 02:12:17 +0000 UTC - event for azure-diskgkhj8: {persistentvolume-controller } ProvisioningSucceeded: Successfully provisioned volume pvc-63caa552-e092-43fe-b924-ef6a44e177a3 using kubernetes.io/azure-disk
Nov 27 02:23:36.408: INFO: At 2019-11-27 02:14:22 +0000 UTC - event for security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount
volumes: unmounted volumes=[volume2 volume1], unattached volumes=[volume2 default-token-mb5zz volume1]: timed out waiting for the condition
Nov 27 02:23:36.408: INFO: At 2019-11-27 02:15:44 +0000 UTC - event for security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-63caa552-e092-43fe-b924-ef6a44e177a3"
Nov 27 02:23:36.408: INFO: At 2019-11-27 02:16:00 +0000 UTC - event for security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-ff9df561-0196-4319-bfaf-223abedd298f"
Nov 27 02:23:36.408: INFO: At 2019-11-27 02:16:32 +0000 UTC - event for security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9: {kubelet k8s-agentpool1-27910301-vmss000000} SuccessfulMountVolume: MapVolume.MapPodDevice succeeded for volume "pvc-ff9df561-0196-4319-bfaf-223abedd298f" globalMapPath "/var/lib/kubelet/plugins/kubernetes.io/azure-disk/volumeDevices/kubetest-f1185e95-10b6-11ea-8290-0-pvc-ff9df561-0196-4319-bfaf-223abedd298f"
Nov 27 02:23:36.408: INFO: At 2019-11-27 02:16:32 +0000 UTC - event for security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9: {kubelet k8s-agentpool1-27910301-vmss000000} SuccessfulMountVolume: MapVolume.MapPodDevice succeeded for volume "pvc-ff9df561-0196-4319-bfaf-223abedd298f" volumeMapPath "/var/lib/kubelet/pods/784359e0-9db8-4c6e-a9cf-43cb5eb3dc79/volumeDevices/kubernetes.io~azure-disk"
Nov 27 02:23:36.408: INFO: At 2019-11-27 02:16:37 +0000 UTC - event for security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9: {kubelet k8s-agentpool1-27910301-vmss000000} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine
Nov 27 02:23:36.408: INFO: At 2019-11-27 02:16:38 +0000 UTC - event for security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9: {kubelet k8s-agentpool1-27910301-vmss000000} Started: Started container write-pod
Nov 27 02:23:36.408: INFO: At 2019-11-27 02:16:38 +0000 UTC - event for security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9: {kubelet k8s-agentpool1-27910301-vmss000000} Created: Created container write-pod
Nov 27 02:23:36.408: INFO: At 2019-11-27 02:16:43 +0000 UTC - event for security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9: {kubelet k8s-agentpool1-27910301-vmss000000} Killing: Stopping container write-pod
Nov 27 02:23:36.408: INFO: At 2019-11-27 02:16:51 +0000 UTC - event for security-context-d6c4054e-cb3a-4228-9e3b-db7254d4c5b4: {attachdetach-controller } FailedAttachVolume: Multi-Attach error for volume "pvc-ff9df561-0196-4319-bfaf-223abedd298f" Volume is already exclusively attached to one node and can't be attached to another
Nov 27 02:23:36.408: INFO: At 2019-11-27 02:16:51 +0000 UTC - event for security-context-d6c4054e-cb3a-4228-9e3b-db7254d4c5b4: {attachdetach-controller } FailedAttachVolume: Multi-Attach error for volume "pvc-63caa552-e092-43fe-b924-ef6a44e177a3" Volume is already exclusively attached to one node and can't be attached to another
Nov 27 02:23:36.408: INFO: At 2019-11-27 02:18:54 +0000 UTC - event for security-context-d6c4054e-cb3a-4228-9e3b-db7254d4c5b4: {kubelet k8s-agentpool1-27910301-vmss000001} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1 volume2], unattached volumes=[volume1 volume2 default-token-mb5zz]: timed out waiting for the condition
Nov 27 02:23:36.408: INFO: At 2019-11-27 02:21:11 +0000 UTC - event for security-context-d6c4054e-cb3a-4228-9e3b-db7254d4c5b4: {kubelet k8s-agentpool1-27910301-vmss000001} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1 volume2], unattached volumes=[volume1 default-token-mb5zz volume2]: timed out waiting for the condition
Nov 27 02:23:36.408: INFO: At 2019-11-27 02:22:25 +0000 UTC - event for security-context-d6c4054e-cb3a-4228-9e3b-db7254d4c5b4: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume
"pvc-63caa552-e092-43fe-b924-ef6a44e177a3" Nov 27 02:23:36.408: INFO: At 2019-11-27 02:23:29 +0000 UTC - event for security-context-d6c4054e-cb3a-4228-9e3b-db7254d4c5b4: {kubelet k8s-agentpool1-27910301-vmss000001} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume2 default-token-mb5zz volume1], unattached volumes=[volume2 default-token-mb5zz volume1]: timed out waiting for the condition Nov 27 02:23:36.410: INFO: POD NODE PHASE GRACE CONDITIONS Nov 27 02:23:36.410: INFO: Nov 27 02:23:36.414: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:23:36.416: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000000 /api/v1/nodes/k8s-agentpool1-27910301-vmss000000 433df295-d1c5-449e-b3e0-cf5dcbb47db5 5091 0 2019-11-27 02:02:30 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000000 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} 
{<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:20:53 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:20:53 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:20:53 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:20:53 +0000 UTC,LastTransitionTime:2019-11-27 02:02:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000000,},NodeAddress{Type:InternalIP,Address:10.240.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5cbe44f20b024d2194231594a97855b1,SystemUUID:91700562-C649-C848-A3DD-1E514501D602,BootID:4dc3a524-9111-40e4-8765-ea1a0ccebb06,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 
gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-239fed30-3cd8-4faf-a0cc-913154702e71 kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-73246121-d31d-4680-9b21-4b7330d473b8 kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:23:36.416: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:23:36.421: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000000 Nov 27 02:23:36.469: INFO: azure-cni-networkmonitor-f9bdz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses 
recorded) Nov 27 02:23:36.469: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:23:36.469: INFO: azure-io-client started at 2019-11-27 02:18:34 +0000 UTC (1+1 container statuses recorded) Nov 27 02:23:36.469: INFO: Init container azure-io-init ready: false, restart count 0 Nov 27 02:23:36.469: INFO: Container azure-io-client ready: false, restart count 0 Nov 27 02:23:36.469: INFO: security-context-54d39a28-d787-43ae-9155-1d1da53f4a30 started at 2019-11-27 02:19:10 +0000 UTC (0+1 container statuses recorded) Nov 27 02:23:36.469: INFO: Container write-pod ready: false, restart count 0 Nov 27 02:23:36.469: INFO: kube-proxy-956dz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:23:36.469: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:23:36.469: INFO: azure-ip-masq-agent-q77hb started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:23:36.469: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:23:36.469: INFO: keyvault-flexvolume-gt5vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:23:36.469: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 27 02:23:36.469: INFO: blobfuse-flexvol-installer-2c2vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:23:36.469: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 27 02:23:36.469: INFO: volume-prep-provisioning-1676 started at 2019-11-27 02:20:44 +0000 UTC (0+1 container statuses recorded) Nov 27 02:23:36.469: INFO: Container init-volume-provisioning-1676 ready: false, restart count 0 Nov 27 02:23:36.469: INFO: azure-injector started at 2019-11-27 02:20:44 +0000 UTC (0+1 container statuses recorded) Nov 27 02:23:36.469: INFO: Container azure-injector ready: false, restart count 0 W1127 02:23:36.471880 14156 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 27 02:23:36.493: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:23:36.493: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:23:36.495: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000001 /api/v1/nodes/k8s-agentpool1-27910301-vmss000001 d9661b2f-8dba-435f-9146-c53e15590b86 5430 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:1 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000001 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:1] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000001,},NodeAddress{Type:InternalIP,Address:10.240.0.35,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c1701de1799d4bda8bb81a1a74346a2a,SystemUUID:E7C6657A-6794-5043-BF92-958F99FF1F10,BootID:f3be9e11-e33b-462a-b7b8-5177e4209627,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:44100963,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:e51071a0670e003e3936190cfda74cfd6edaa39e0d16fc0b5a5f09dbf09dd1ff 
k8s.gcr.io/metrics-server-amd64:v0.3.4],SizeBytes:39944451,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:23:36.495: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:23:36.507: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000001 Nov 27 02:23:36.541: INFO: azure-ip-masq-agent-sxnbk started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:23:36.541: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:23:36.541: INFO: metrics-server-855b565c8f-nbxsd started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:23:36.541: INFO: Container metrics-server ready: true, restart count 0 Nov 27 02:23:36.541: INFO: blobfuse-flexvol-installer-x5wvc started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:23:36.541: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 27 02:23:36.541: INFO: 
kubernetes-dashboard-65966766b9-hfq6z started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:23:36.541: INFO: Container kubernetes-dashboard ready: true, restart count 0 Nov 27 02:23:36.541: INFO: azure-cni-networkmonitor-wcplr started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:23:36.541: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:23:36.541: INFO: keyvault-flexvolume-q956v started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:23:36.541: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 27 02:23:36.541: INFO: coredns-56bc7dfcc6-pxrl6 started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:23:36.541: INFO: Container coredns ready: true, restart count 0 Nov 27 02:23:36.541: INFO: kube-proxy-wcptt started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:23:36.541: INFO: Container kube-proxy ready: true, restart count 0 W1127 02:23:36.544376 14156 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Nov 27 02:23:36.565: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:23:36.565: INFO: Logging node info for node k8s-master-27910301-0 Nov 27 02:23:36.567: INFO: Node Info: &Node{ObjectMeta:{k8s-master-27910301-0 /api/v1/nodes/k8s-master-27910301-0 c69dd001-f5f0-495c-861a-fcdc5c4685fa 5287 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-27910301-0 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master: node.kubernetes.io/instance-type:Standard_DS2_v2 topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachines/k8s-master-27910301-0,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 
DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:21:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:21:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:21:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:21:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-master-27910301-0,},NodeAddress{Type:InternalIP,Address:10.255.255.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4fbcdc9a6aec4bf5bbacc8e6c7d0effc,SystemUUID:6F669D5F-47AF-F34B-A88F-B20FF17F57DA,BootID:5ce46fbc-c556-476d-85de-4196f4d7878c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager-amd64:v9.0.2],SizeBytes:83076028,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:23:36.568: INFO: Logging kubelet events for node k8s-master-27910301-0 Nov 27 02:23:36.573: INFO: Logging pods the kubelet thinks is on node k8s-master-27910301-0 Nov 27 02:23:36.596: INFO: 
kube-apiserver-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:23:36.596: INFO: Container kube-apiserver ready: true, restart count 0 Nov 27 02:23:36.596: INFO: kube-controller-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:23:36.596: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 27 02:23:36.596: INFO: kube-scheduler-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:23:36.596: INFO: Container kube-scheduler ready: true, restart count 0 Nov 27 02:23:36.596: INFO: azure-ip-masq-agent-x5bzr started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded) Nov 27 02:23:36.596: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:23:36.596: INFO: azure-cni-networkmonitor-5jpkd started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded) Nov 27 02:23:36.596: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:23:36.596: INFO: kube-proxy-krs8d started at 2019-11-27 02:01:05 +0000 UTC (0+1 container statuses recorded) Nov 27 02:23:36.596: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:23:36.596: INFO: kube-addon-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:23:36.596: INFO: Container kube-addon-manager ready: true, restart count 0 W1127 02:23:36.600341 14156 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 27 02:23:36.619: INFO: Latency metrics for node k8s-master-27910301-0 Nov 27 02:23:36.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "multivolume-7967" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sazure\-disk\]\s\[Testpattern\:\sDynamic\sPV\s\(block\svolmode\)\]\smultiVolume\s\[Slow\]\sshould\saccess\sto\stwo\svolumes\swith\sthe\ssame\svolume\smode\sand\sretain\sdata\sacross\spod\srecreation\son\sdifferent\snode$'
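The `--ginkgo.focus` value in the command above is a whitespace-escaped regular expression matching the full spec name. As a minimal sketch (assuming a kubernetes/kubernetes checkout and a reachable test cluster, which are not shown in this log), the same invocation can be rebuilt locally from its parts; the `FOCUS` and `CMD` variable names here are illustrative, not part of the harness:

```shell
# Hypothetical local repro helper for the failing spec above.
# FOCUS mirrors the ginkgo focus regex from the log; spaces are kept
# literal here because the string is quoted rather than shell-escaped.
FOCUS='Kubernetes e2e suite \[sig-storage\] In-tree Volumes \[Driver: azure-disk\] \[Testpattern: Dynamic PV \(block volmode\)\] multiVolume \[Slow\]'

# Assemble the command rather than executing it, since running it
# requires a built k8s repo and a live cluster.
CMD="go run hack/e2e.go -v --test --test_args=--ginkgo.focus=${FOCUS}"
echo "${CMD}"
```

Running the echoed command from the repo root would execute only the specs whose names match the regex.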
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:159 Nov 27 02:11:40.719: Unexpected error: <*errors.errorString | 0xc000d3a260>: { s: "pod \"security-context-b5431a76-7b18-436d-92dd-bfb459822506\" is not Running: timed out waiting for the condition", } pod "security-context-b5431a76-7b18-436d-92dd-bfb459822506" is not Running: timed out waiting for the condition occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:360 from junit_03.xml
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:101 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:88 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 27 02:03:42.441: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json STEP: Building a namespace api object, basename multivolume Nov 27 02:03:42.480: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Nov 27 02:03:42.490: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in multivolume-3641 STEP: Waiting for a default service account to be provisioned in namespace [It] should access to two volumes with the same volume mode and retain data across pod recreation on different node /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:159 Nov 27 02:03:42.606: INFO: Creating resource for dynamic PV Nov 27 02:03:42.606: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(azure-disk) supported size:{ 1Mi} STEP: creating a StorageClass multivolume-3641-azure-disk-scrb9xz STEP: creating a claim Nov 27 02:03:42.623: INFO: Waiting up to 5m0s for PersistentVolumeClaims [azure-diskl4nkr] to have phase Bound Nov 27 02:03:42.633: INFO: PersistentVolumeClaim azure-diskl4nkr found but phase is Pending instead of Bound. Nov 27 02:03:44.636: INFO: PersistentVolumeClaim azure-diskl4nkr found but phase is Pending instead of Bound. 
Nov 27 02:03:46.639: INFO: PersistentVolumeClaim azure-diskl4nkr found but phase is Pending instead of Bound. Nov 27 02:03:48.642: INFO: PersistentVolumeClaim azure-diskl4nkr found but phase is Pending instead of Bound. Nov 27 02:03:50.646: INFO: PersistentVolumeClaim azure-diskl4nkr found but phase is Pending instead of Bound. Nov 27 02:03:52.650: INFO: PersistentVolumeClaim azure-diskl4nkr found but phase is Pending instead of Bound. Nov 27 02:03:54.653: INFO: PersistentVolumeClaim azure-diskl4nkr found and phase=Bound (12.029995568s) Nov 27 02:03:54.668: INFO: Creating resource for dynamic PV Nov 27 02:03:54.668: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(azure-disk) supported size:{ 1Mi} STEP: creating a StorageClass multivolume-3641-azure-disk-scxvmtk STEP: creating a claim Nov 27 02:03:54.675: INFO: Waiting up to 5m0s for PersistentVolumeClaims [azure-disknbgzh] to have phase Bound Nov 27 02:03:54.678: INFO: PersistentVolumeClaim azure-disknbgzh found but phase is Pending instead of Bound. Nov 27 02:03:56.681: INFO: PersistentVolumeClaim azure-disknbgzh found but phase is Pending instead of Bound. Nov 27 02:03:58.686: INFO: PersistentVolumeClaim azure-disknbgzh found but phase is Pending instead of Bound. Nov 27 02:04:00.689: INFO: PersistentVolumeClaim azure-disknbgzh found but phase is Pending instead of Bound. Nov 27 02:04:02.693: INFO: PersistentVolumeClaim azure-disknbgzh found but phase is Pending instead of Bound. Nov 27 02:04:04.696: INFO: PersistentVolumeClaim azure-disknbgzh found but phase is Pending instead of Bound. 
Nov 27 02:04:06.699: INFO: PersistentVolumeClaim azure-disknbgzh found and phase=Bound (12.024356925s) STEP: Creating pod on {Name: Selector:map[] Affinity:nil} with multiple volumes STEP: Checking if the volume1 exists as expected volume mode (Block) Nov 27 02:06:24.726: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /mnt/volume1] Namespace:multivolume-3641 PodName:security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:06:24.726: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:06:24.900: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/volume1] Namespace:multivolume-3641 PodName:security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:06:24.900: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json STEP: Checking if write to the volume1 works properly Nov 27 02:06:25.053: INFO: ExecWithOptions {Command:[/bin/sh -c echo xNxE+XoccsZJIBvjLoE6dlh0Zey6eQPPRM1gyZQpVJ5sI4/6PSfxCO4NVm4UbcAnASYufSxLDPUpdbLAZzMbjg== | base64 -d | sha256sum] Namespace:multivolume-3641 PodName:security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:06:25.053: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:06:25.211: INFO: ExecWithOptions {Command:[/bin/sh -c echo xNxE+XoccsZJIBvjLoE6dlh0Zey6eQPPRM1gyZQpVJ5sI4/6PSfxCO4NVm4UbcAnASYufSxLDPUpdbLAZzMbjg== | base64 -d | dd of=/mnt/volume1 bs=64 count=1] Namespace:multivolume-3641 PodName:security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:06:25.211: INFO: >>> kubeConfig: 
/workspace/aks490311382/kubeconfig/kubeconfig.westus2.json STEP: Checking if read from the volume1 works properly Nov 27 02:06:25.359: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume1 bs=64 count=1 | sha256sum] Namespace:multivolume-3641 PodName:security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:06:25.359: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:06:25.522: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume1 bs=64 count=1 | sha256sum | grep -Fq d35f7c647ba35eef9ca3b137515203b1cf07536af80f0eafc6cff439fba7ef6a] Namespace:multivolume-3641 PodName:security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:06:25.522: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json STEP: Checking if the volume2 exists as expected volume mode (Block) Nov 27 02:06:25.669: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /mnt/volume2] Namespace:multivolume-3641 PodName:security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:06:25.669: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:06:25.828: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/volume2] Namespace:multivolume-3641 PodName:security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:06:25.828: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json STEP: Checking if write to the volume2 works properly Nov 27 02:06:25.997: INFO: ExecWithOptions {Command:[/bin/sh -c echo 
oVrKHghJ8u38JPNCpwXyDyEIesf+FHE9J+7UGQ1ej/ylnFD+BTYtdjNW8I+YM8rQmZ5ygxdDTkXtsjKZt6yNzg== | base64 -d | sha256sum] Namespace:multivolume-3641 PodName:security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:06:25.997: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:06:26.156: INFO: ExecWithOptions {Command:[/bin/sh -c echo oVrKHghJ8u38JPNCpwXyDyEIesf+FHE9J+7UGQ1ej/ylnFD+BTYtdjNW8I+YM8rQmZ5ygxdDTkXtsjKZt6yNzg== | base64 -d | dd of=/mnt/volume2 bs=64 count=1] Namespace:multivolume-3641 PodName:security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:06:26.156: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json STEP: Checking if read from the volume2 works properly Nov 27 02:06:26.326: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume2 bs=64 count=1 | sha256sum] Namespace:multivolume-3641 PodName:security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:06:26.326: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:06:26.481: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume2 bs=64 count=1 | sha256sum | grep -Fq e23ebf12c82f1eaba6715012124b8a9b655e1400998b6d39d06c0ba59100dd10] Namespace:multivolume-3641 PodName:security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:06:26.481: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:06:26.676: INFO: Deleting pod "security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac" in namespace "multivolume-3641" Nov 27 02:06:26.679: INFO: 
Wait up to 5m0s for pod "security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac" to be fully deleted STEP: Creating pod on {Name: Selector:map[] Affinity:&Affinity{NodeAffinity:&NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{},MatchFields:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:metadata.name,Operator:NotIn,Values:[k8s-agentpool1-27910301-vmss000001],},},},},},PreferredDuringSchedulingIgnoredDuringExecution:[]PreferredSchedulingTerm{},},PodAffinity:nil,PodAntiAffinity:nil,}} with multiple volumes Nov 27 02:11:40.719: FAIL: Unexpected error: <*errors.errorString | 0xc000d3a260>: { s: "pod \"security-context-b5431a76-7b18-436d-92dd-bfb459822506\" is not Running: timed out waiting for the condition", } pod "security-context-b5431a76-7b18-436d-92dd-bfb459822506" is not Running: timed out waiting for the condition occurred Nov 27 02:11:40.720: INFO: Deleting pod "security-context-b5431a76-7b18-436d-92dd-bfb459822506" in namespace "multivolume-3641" Nov 27 02:11:40.724: INFO: Wait up to 5m0s for pod "security-context-b5431a76-7b18-436d-92dd-bfb459822506" to be fully deleted STEP: Deleting pvc Nov 27 02:11:50.730: INFO: Deleting PersistentVolumeClaim "azure-diskl4nkr" Nov 27 02:11:50.733: INFO: Waiting up to 5m0s for PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f to get deleted Nov 27 02:11:50.737: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Bound (3.302327ms) Nov 27 02:11:55.740: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (5.006596426s) Nov 27 02:12:00.744: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (10.010323497s) Nov 27 02:12:05.747: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (15.013342231s) Nov 27 02:12:10.750: INFO: PersistentVolume 
pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (20.01709684s) Nov 27 02:12:15.754: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (25.020652816s) Nov 27 02:12:20.757: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (30.023574657s) Nov 27 02:12:25.761: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (35.027565476s) Nov 27 02:12:30.764: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (40.031065262s) Nov 27 02:12:35.768: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (45.034658319s) Nov 27 02:12:40.772: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (50.039201654s) Nov 27 02:12:45.776: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (55.042671552s) Nov 27 02:12:50.779: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (1m0.046203822s) Nov 27 02:12:55.783: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (1m5.049294259s) Nov 27 02:13:00.786: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (1m10.052756071s) Nov 27 02:13:05.789: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (1m15.056053254s) Nov 27 02:13:10.793: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (1m20.060125316s) Nov 27 02:13:15.797: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (1m25.063566646s) Nov 27 02:13:20.801: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (1m30.067322651s) Nov 27 02:13:25.804: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (1m35.070698026s) Nov 27 02:13:30.808: INFO: PersistentVolume 
pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (1m40.074355577s) Nov 27 02:13:35.811: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (1m45.077479097s) Nov 27 02:13:40.814: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (1m50.08044519s) Nov 27 02:13:45.818: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (1m55.084515066s) Nov 27 02:13:50.826: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (2m0.092996254s) Nov 27 02:13:55.830: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (2m5.096531275s) Nov 27 02:14:00.833: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (2m10.09998227s) Nov 27 02:14:05.836: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (2m15.102906837s) Nov 27 02:14:10.840: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (2m20.106511284s) Nov 27 02:14:15.843: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (2m25.109464302s) Nov 27 02:14:20.847: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (2m30.113481904s) Nov 27 02:14:25.850: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (2m35.116776877s) Nov 27 02:14:30.853: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (2m40.119753823s) Nov 27 02:14:35.856: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (2m45.123202151s) Nov 27 02:14:40.860: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (2m50.126722555s) Nov 27 02:14:45.863: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (2m55.129966334s) Nov 27 02:14:50.866: INFO: 
PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (3m0.132744887s) Nov 27 02:14:55.869: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (3m5.136126123s) Nov 27 02:15:00.872: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (3m10.138722829s) Nov 27 02:15:05.875: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (3m15.141734417s) Nov 27 02:15:10.878: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (3m20.144683083s) Nov 27 02:15:15.881: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (3m25.148034531s) Nov 27 02:15:20.884: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (3m30.150948253s) Nov 27 02:15:25.888: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (3m35.154608261s) Nov 27 02:15:30.892: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f found and phase=Failed (3m40.158551949s) Nov 27 02:15:35.896: INFO: PersistentVolume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f was removed STEP: Deleting sc STEP: Deleting pvc Nov 27 02:15:35.900: INFO: Deleting PersistentVolumeClaim "azure-disknbgzh" Nov 27 02:15:35.910: INFO: Waiting up to 5m0s for PersistentVolume pvc-812eea09-8614-4251-9c35-137bf0f82805 to get deleted Nov 27 02:15:35.913: INFO: PersistentVolume pvc-812eea09-8614-4251-9c35-137bf0f82805 found and phase=Bound (2.388119ms) Nov 27 02:15:40.917: INFO: PersistentVolume pvc-812eea09-8614-4251-9c35-137bf0f82805 found and phase=Released (5.006688669s) Nov 27 02:15:45.922: INFO: PersistentVolume pvc-812eea09-8614-4251-9c35-137bf0f82805 was removed STEP: Deleting sc Nov 27 02:15:45.927: INFO: In-tree plugin kubernetes.io/azure-disk is not migrated, not validating any metrics [AfterEach] [Testpattern: Dynamic PV (block volmode)] multiVolume 
[Slow] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "multivolume-3641". STEP: Found 20 events. Nov 27 02:15:45.930: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac: {default-scheduler } Scheduled: Successfully assigned multivolume-3641/security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac to k8s-agentpool1-27910301-vmss000001 Nov 27 02:15:45.930: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for security-context-b5431a76-7b18-436d-92dd-bfb459822506: {default-scheduler } Scheduled: Successfully assigned multivolume-3641/security-context-b5431a76-7b18-436d-92dd-bfb459822506 to k8s-agentpool1-27910301-vmss000000 Nov 27 02:15:45.930: INFO: At 2019-11-27 02:03:52 +0000 UTC - event for azure-diskl4nkr: {persistentvolume-controller } ProvisioningSucceeded: Successfully provisioned volume pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f using kubernetes.io/azure-disk Nov 27 02:15:45.930: INFO: At 2019-11-27 02:04:04 +0000 UTC - event for azure-disknbgzh: {persistentvolume-controller } ProvisioningSucceeded: Successfully provisioned volume pvc-812eea09-8614-4251-9c35-137bf0f82805 using kubernetes.io/azure-disk Nov 27 02:15:45.930: INFO: At 2019-11-27 02:05:20 +0000 UTC - event for security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f" Nov 27 02:15:45.930: INFO: At 2019-11-27 02:05:31 +0000 UTC - event for security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-812eea09-8614-4251-9c35-137bf0f82805" Nov 27 02:15:45.930: INFO: At 2019-11-27 02:06:09 +0000 UTC - event for security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac: {kubelet k8s-agentpool1-27910301-vmss000001} FailedMount: Unable to 
attach or mount volumes: unmounted volumes=[volume2 volume1], unattached volumes=[volume2 default-token-45gm6 volume1]: timed out waiting for the condition
Nov 27 02:15:45.930: INFO: At 2019-11-27 02:06:17 +0000 UTC - event for security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac: {kubelet k8s-agentpool1-27910301-vmss000001} SuccessfulMountVolume: MapVolume.MapPodDevice succeeded for volume "pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f" globalMapPath "/var/lib/kubelet/plugins/kubernetes.io/azure-disk/volumeDevices/kubetest-f1185e95-10b6-11ea-8290-0-pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f"
Nov 27 02:15:45.930: INFO: At 2019-11-27 02:06:17 +0000 UTC - event for security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac: {kubelet k8s-agentpool1-27910301-vmss000001} SuccessfulMountVolume: MapVolume.MapPodDevice succeeded for volume "pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f" volumeMapPath "/var/lib/kubelet/pods/00b7507f-4cf3-4b78-b8cb-37e89b056cc2/volumeDevices/kubernetes.io~azure-disk"
Nov 27 02:15:45.930: INFO: At 2019-11-27 02:06:18 +0000 UTC - event for security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac: {kubelet k8s-agentpool1-27910301-vmss000001} SuccessfulMountVolume: MapVolume.MapPodDevice succeeded for volume "pvc-812eea09-8614-4251-9c35-137bf0f82805" volumeMapPath "/var/lib/kubelet/pods/00b7507f-4cf3-4b78-b8cb-37e89b056cc2/volumeDevices/kubernetes.io~azure-disk"
Nov 27 02:15:45.930: INFO: At 2019-11-27 02:06:18 +0000 UTC - event for security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac: {kubelet k8s-agentpool1-27910301-vmss000001} SuccessfulMountVolume: MapVolume.MapPodDevice succeeded for volume "pvc-812eea09-8614-4251-9c35-137bf0f82805" globalMapPath "/var/lib/kubelet/plugins/kubernetes.io/azure-disk/volumeDevices/kubetest-f1185e95-10b6-11ea-8290-0-pvc-812eea09-8614-4251-9c35-137bf0f82805"
Nov 27 02:15:45.930: INFO: At 2019-11-27 02:06:22 +0000 UTC - event for security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac: {kubelet k8s-agentpool1-27910301-vmss000001}
Created: Created container write-pod
Nov 27 02:15:45.930: INFO: At 2019-11-27 02:06:22 +0000 UTC - event for security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac: {kubelet k8s-agentpool1-27910301-vmss000001} Started: Started container write-pod
Nov 27 02:15:45.930: INFO: At 2019-11-27 02:06:22 +0000 UTC - event for security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac: {kubelet k8s-agentpool1-27910301-vmss000001} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine
Nov 27 02:15:45.930: INFO: At 2019-11-27 02:06:26 +0000 UTC - event for security-context-b41c4dd3-0595-4b4c-9f71-f5a15d7b7fac: {kubelet k8s-agentpool1-27910301-vmss000001} Killing: Stopping container write-pod
Nov 27 02:15:45.930: INFO: At 2019-11-27 02:08:43 +0000 UTC - event for security-context-b5431a76-7b18-436d-92dd-bfb459822506: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1 volume2], unattached volumes=[default-token-45gm6 volume1 volume2]: timed out waiting for the condition
Nov 27 02:15:45.930: INFO: At 2019-11-27 02:10:55 +0000 UTC - event for security-context-b5431a76-7b18-436d-92dd-bfb459822506: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-3f3e01db-9cd3-4c95-ad1e-3e716aaa7d4f"
Nov 27 02:15:45.930: INFO: At 2019-11-27 02:11:00 +0000 UTC - event for security-context-b5431a76-7b18-436d-92dd-bfb459822506: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1 volume2], unattached volumes=[volume1 volume2 default-token-45gm6]: timed out waiting for the condition
Nov 27 02:15:45.930: INFO: At 2019-11-27 02:11:51 +0000 UTC - event for security-context-b5431a76-7b18-436d-92dd-bfb459822506: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-812eea09-8614-4251-9c35-137bf0f82805"
Nov 27 02:15:45.930: INFO: At 2019-11-27 02:13:17 +0000 UTC -
event for security-context-b5431a76-7b18-436d-92dd-bfb459822506: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[default-token-45gm6 volume1 volume2], unattached volumes=[default-token-45gm6 volume1 volume2]: timed out waiting for the condition Nov 27 02:15:45.933: INFO: POD NODE PHASE GRACE CONDITIONS Nov 27 02:15:45.933: INFO: Nov 27 02:15:45.935: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:15:45.938: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000000 /api/v1/nodes/k8s-agentpool1-27910301-vmss000000 433df295-d1c5-449e-b3e0-cf5dcbb47db5 3882 0 2019-11-27 02:02:30 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000000 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} 
BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:15:32 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:15:32 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:15:32 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:15:32 +0000 UTC,LastTransitionTime:2019-11-27 02:02:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000000,},NodeAddress{Type:InternalIP,Address:10.240.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5cbe44f20b024d2194231594a97855b1,SystemUUID:91700562-C649-C848-A3DD-1E514501D602,BootID:4dc3a524-9111-40e4-8765-ea1a0ccebb06,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 
gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-558fb052-dca8-45f0-9e26-0338846653c5 kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-63caa552-e092-43fe-b924-ef6a44e177a3 kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-72de6846-924c-4829-8625-a42e55bed18a 
kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-ba05354b-40b7-4340-9216-afb07cc40b79 kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-ff9df561-0196-4319-bfaf-223abedd298f],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104,DevicePath:0,},AttachedVolume{Name:kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-63caa552-e092-43fe-b924-ef6a44e177a3,DevicePath:1,},AttachedVolume{Name:kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8,DevicePath:4,},},Config:nil,},} Nov 27 02:15:45.938: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:15:45.943: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000000 Nov 27 02:15:45.956: INFO: azure-cni-networkmonitor-f9bdz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:45.956: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:15:45.956: INFO: security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5 started at 2019-11-27 02:15:21 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:45.956: INFO: Container write-pod ready: false, restart count 0 Nov 27 
02:15:45.956: INFO: pod-subpath-test-azure-disk-dynamicpv-x4mr started at 2019-11-27 02:15:43 +0000 UTC (1+1 container statuses recorded) Nov 27 02:15:45.956: INFO: Init container init-volume-azure-disk-dynamicpv-x4mr ready: false, restart count 0 Nov 27 02:15:45.956: INFO: Container test-container-subpath-azure-disk-dynamicpv-x4mr ready: false, restart count 0 Nov 27 02:15:45.956: INFO: azure-client started at 2019-11-27 02:06:27 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:45.956: INFO: Container azure-client ready: true, restart count 0 Nov 27 02:15:45.956: INFO: kube-proxy-956dz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:45.956: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:15:45.956: INFO: azure-ip-masq-agent-q77hb started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:45.956: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:15:45.956: INFO: keyvault-flexvolume-gt5vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:45.956: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 27 02:15:45.956: INFO: blobfuse-flexvol-installer-2c2vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:45.956: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 27 02:15:45.956: INFO: azure-client started at 2019-11-27 02:06:31 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:45.956: INFO: Container azure-client ready: true, restart count 0 Nov 27 02:15:45.956: INFO: security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9 started at 2019-11-27 02:12:19 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:45.956: INFO: Container write-pod ready: false, restart count 0 Nov 27 02:15:45.956: INFO: exec-volume-test-azure-disk-b2kl started at 2019-11-27 02:12:52 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:45.956: INFO: 
Container exec-container-azure-disk-b2kl ready: false, restart count 0 W1127 02:15:45.959497 14161 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 27 02:15:45.980: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:15:45.980: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:15:45.983: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000001 /api/v1/nodes/k8s-agentpool1-27910301-vmss000001 d9661b2f-8dba-435f-9146-c53e15590b86 3489 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:1 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000001 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:1] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} 
{<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:14:20 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:14:20 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:14:20 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:14:20 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000001,},NodeAddress{Type:InternalIP,Address:10.240.0.35,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c1701de1799d4bda8bb81a1a74346a2a,SystemUUID:E7C6657A-6794-5043-BF92-958F99FF1F10,BootID:f3be9e11-e33b-462a-b7b8-5177e4209627,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:44100963,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:e51071a0670e003e3936190cfda74cfd6edaa39e0d16fc0b5a5f09dbf09dd1ff 
k8s.gcr.io/metrics-server-amd64:v0.3.4],SizeBytes:39944451,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:15:45.983: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:15:45.988: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000001 Nov 27 02:15:45.994: INFO: kube-proxy-wcptt started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:45.994: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:15:45.994: INFO: kubernetes-dashboard-65966766b9-hfq6z started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:45.994: INFO: Container kubernetes-dashboard ready: true, restart count 0 Nov 27 02:15:45.994: INFO: azure-cni-networkmonitor-wcplr started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:45.994: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:15:45.994: INFO: keyvault-flexvolume-q956v 
started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:45.994: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 27 02:15:45.994: INFO: coredns-56bc7dfcc6-pxrl6 started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:45.995: INFO: Container coredns ready: true, restart count 0 Nov 27 02:15:45.995: INFO: azure-ip-masq-agent-sxnbk started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:45.995: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:15:45.995: INFO: metrics-server-855b565c8f-nbxsd started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:45.995: INFO: Container metrics-server ready: true, restart count 0 Nov 27 02:15:45.995: INFO: blobfuse-flexvol-installer-x5wvc started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:45.995: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 W1127 02:15:45.998250 14161 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Nov 27 02:15:46.031: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:15:46.031: INFO: Logging node info for node k8s-master-27910301-0 Nov 27 02:15:46.033: INFO: Node Info: &Node{ObjectMeta:{k8s-master-27910301-0 /api/v1/nodes/k8s-master-27910301-0 c69dd001-f5f0-495c-861a-fcdc5c4685fa 2874 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-27910301-0 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master: node.kubernetes.io/instance-type:Standard_DS2_v2 topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachines/k8s-master-27910301-0,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 
DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:11:39 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:11:39 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:11:39 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:11:39 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-master-27910301-0,},NodeAddress{Type:InternalIP,Address:10.255.255.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4fbcdc9a6aec4bf5bbacc8e6c7d0effc,SystemUUID:6F669D5F-47AF-F34B-A88F-B20FF17F57DA,BootID:5ce46fbc-c556-476d-85de-4196f4d7878c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager-amd64:v9.0.2],SizeBytes:83076028,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:15:46.033: INFO: Logging kubelet events for node k8s-master-27910301-0 Nov 27 02:15:46.038: INFO: Logging pods the kubelet thinks is on node k8s-master-27910301-0 Nov 27 02:15:46.042: INFO: 
kube-controller-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:15:46.042: INFO: Container kube-controller-manager ready: true, restart count 0
Nov 27 02:15:46.042: INFO: kube-scheduler-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:15:46.042: INFO: Container kube-scheduler ready: true, restart count 0
Nov 27 02:15:46.042: INFO: azure-ip-masq-agent-x5bzr started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:15:46.042: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 27 02:15:46.042: INFO: azure-cni-networkmonitor-5jpkd started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:15:46.042: INFO: Container azure-cnms ready: true, restart count 0
Nov 27 02:15:46.042: INFO: kube-proxy-krs8d started at 2019-11-27 02:01:05 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:15:46.042: INFO: Container kube-proxy ready: true, restart count 0
Nov 27 02:15:46.042: INFO: kube-addon-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:15:46.042: INFO: Container kube-addon-manager ready: true, restart count 0
Nov 27 02:15:46.042: INFO: kube-apiserver-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:15:46.042: INFO: Container kube-apiserver ready: true, restart count 0
W1127 02:15:46.045439 14161 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 27 02:15:46.062: INFO: Latency metrics for node k8s-master-27910301-0
Nov 27 02:15:46.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "multivolume-3641" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sazure\-disk\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\ssubPath\sshould\sfail\sif\ssubpath\sfile\sis\soutside\sthe\svolume\s\[Slow\]\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:251
Nov 27 02:20:43.338: while waiting for subpath failure
Unexpected error:
    <*errors.errorString | 0xc0000a1960>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:740
from junit_06.xml
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 27 02:15:31.128: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
STEP: Building a namespace api object, basename provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-2329
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath file is outside the volume [Slow][LinuxOnly]
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:251
Nov 27 02:15:31.280: INFO: Creating resource for dynamic PV
Nov 27 02:15:31.280: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(azure-disk) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-2329-azure-disk-scmt9gc
STEP: creating a claim
Nov 27 02:15:31.283: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Nov 27 02:15:31.287: INFO: Waiting up to 5m0s for PersistentVolumeClaims [azure-diskqc5pw] to have phase Bound
Nov 27 02:15:31.292: INFO: PersistentVolumeClaim azure-diskqc5pw found but phase is Pending instead of Bound.
Nov 27 02:15:33.296: INFO: PersistentVolumeClaim azure-diskqc5pw found but phase is Pending instead of Bound.
Nov 27 02:15:35.303: INFO: PersistentVolumeClaim azure-diskqc5pw found but phase is Pending instead of Bound.
Nov 27 02:15:37.306: INFO: PersistentVolumeClaim azure-diskqc5pw found but phase is Pending instead of Bound.
Nov 27 02:15:39.309: INFO: PersistentVolumeClaim azure-diskqc5pw found but phase is Pending instead of Bound.
Nov 27 02:15:41.312: INFO: PersistentVolumeClaim azure-diskqc5pw found but phase is Pending instead of Bound. Nov 27 02:15:43.316: INFO: PersistentVolumeClaim azure-diskqc5pw found and phase=Bound (12.02856s) STEP: Creating pod pod-subpath-test-azure-disk-dynamicpv-x4mr STEP: Checking for subpath error in container status Nov 27 02:20:43.338: FAIL: while waiting for subpath failure Unexpected error: <*errors.errorString | 0xc0000a1960>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Nov 27 02:20:43.339: INFO: Deleting pod "pod-subpath-test-azure-disk-dynamicpv-x4mr" in namespace "provisioning-2329" Nov 27 02:20:43.347: INFO: Wait up to 5m0s for pod "pod-subpath-test-azure-disk-dynamicpv-x4mr" to be fully deleted STEP: Deleting pod Nov 27 02:20:51.354: INFO: Deleting pod "pod-subpath-test-azure-disk-dynamicpv-x4mr" in namespace "provisioning-2329" STEP: Deleting pvc Nov 27 02:20:51.357: INFO: Deleting PersistentVolumeClaim "azure-diskqc5pw" Nov 27 02:20:51.360: INFO: Waiting up to 5m0s for PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f to get deleted Nov 27 02:20:51.368: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Bound (7.839662ms) Nov 27 02:20:56.372: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (5.011302943s) Nov 27 02:21:01.375: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (10.014275409s) Nov 27 02:21:06.378: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (15.017242963s) Nov 27 02:21:11.380: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (20.019765104s) Nov 27 02:21:16.384: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (25.023253841s) Nov 27 02:21:21.387: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed 
(30.02689047s) Nov 27 02:21:26.391: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (35.03092839s) Nov 27 02:21:31.395: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (40.034791699s) Nov 27 02:21:36.398: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (45.037887392s) Nov 27 02:21:41.401: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (50.040860874s) Nov 27 02:21:46.405: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (55.044597051s) Nov 27 02:21:51.408: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (1m0.047582913s) Nov 27 02:21:56.411: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (1m5.051027069s) Nov 27 02:22:01.414: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (1m10.05393491s) Nov 27 02:22:06.418: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (1m15.057746849s) Nov 27 02:22:11.422: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (1m20.061529378s) Nov 27 02:22:16.425: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (1m25.064503391s) Nov 27 02:22:21.428: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (1m30.067205593s) Nov 27 02:22:26.431: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (1m35.07050109s) Nov 27 02:22:31.433: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (1m40.073068972s) Nov 27 02:22:36.436: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (1m45.076011648s) Nov 27 02:22:41.439: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed 
(1m50.079035315s) Nov 27 02:22:46.443: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (1m55.082575878s) Nov 27 02:22:51.446: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (2m0.085346226s) Nov 27 02:22:56.449: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (2m5.088414167s) Nov 27 02:23:01.452: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (2m10.091844903s) Nov 27 02:23:06.455: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (2m15.094860026s) Nov 27 02:23:11.458: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (2m20.098090743s) Nov 27 02:23:16.462: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (2m25.10112865s) Nov 27 02:23:21.465: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (2m30.10445615s) Nov 27 02:23:26.468: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (2m35.107640242s) Nov 27 02:23:31.471: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (2m40.110837425s) Nov 27 02:23:36.474: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (2m45.113536396s) Nov 27 02:23:41.477: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (2m50.116507561s) Nov 27 02:23:46.481: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (2m55.120604528s) Nov 27 02:23:51.485: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (3m0.124455184s) Nov 27 02:23:56.488: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (3m5.127848329s) Nov 27 02:24:01.492: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and 
phase=Failed (3m10.131423368s) Nov 27 02:24:06.497: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (3m15.136576811s) Nov 27 02:24:11.501: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (3m20.140279736s) Nov 27 02:24:16.504: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (3m25.14367985s) Nov 27 02:24:21.507: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (3m30.146954157s) Nov 27 02:24:26.511: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (3m35.15084456s) Nov 27 02:24:31.514: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (3m40.15391105s) Nov 27 02:24:36.518: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (3m45.157627038s) Nov 27 02:24:41.522: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (3m50.161669721s) Nov 27 02:24:46.525: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (3m55.164931491s) Nov 27 02:24:51.529: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (4m0.168245855s) Nov 27 02:24:56.532: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (4m5.171967314s) Nov 27 02:25:01.536: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (4m10.175851868s) Nov 27 02:25:06.540: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (4m15.179406613s) Nov 27 02:25:11.544: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (4m20.183649856s) Nov 27 02:25:16.548: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (4m25.187361589s) Nov 27 02:25:21.551: INFO: PersistentVolume 
pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (4m30.190722412s) Nov 27 02:25:26.555: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (4m35.194560032s) Nov 27 02:25:31.559: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (4m40.198325246s) Nov 27 02:25:36.562: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (4m45.201656049s) Nov 27 02:25:41.565: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (4m50.204846945s) Nov 27 02:25:46.568: INFO: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f found and phase=Failed (4m55.207988034s) STEP: Deleting sc Nov 27 02:25:51.574: FAIL: while cleaning up resource Unexpected error: <errors.aggregate | len:1, cap:1>: [ [ { error: { cause: { s: "PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f still exists within 5m0s", }, msg: "Persistent Volume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f not deleted by dynamic provisioner", }, stack: [0x3995499, 0x39c48f3, 0x4329a2, 0x1593131, 0x4329a2, 0x7e3328, 0x159293c, 0x1593af5, 0x7e4590, 0x7e4387, 0x187f5d5, 0x39ab716, 0x39ab6b2, 0x39c57a3, 0x39c576b, 0x7d30a8, 0x7d2cff, 0x7d21a4, 0x7d9105, 0x7d8961, 0x7de7df, 0x7de300, 0x7ddb47, 0x7e012b, 0x7e2c87, 0x7e29cd, 0x3abb0ba, 0x3abfcab, 0x516669, 0x462e51], }, ], ] Persistent Volume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f not deleted by dynamic provisioner: PersistentVolume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f still exists within 5m0s occurred [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "provisioning-2329". STEP: Found 6 events. 
Nov 27 02:25:51.578: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-x4mr: {default-scheduler } Scheduled: Successfully assigned provisioning-2329/pod-subpath-test-azure-disk-dynamicpv-x4mr to k8s-agentpool1-27910301-vmss000000 Nov 27 02:25:51.578: INFO: At 2019-11-27 02:15:41 +0000 UTC - event for azure-diskqc5pw: {persistentvolume-controller } ProvisioningSucceeded: Successfully provisioned volume pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f using kubernetes.io/azure-disk Nov 27 02:25:51.578: INFO: At 2019-11-27 02:17:46 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-x4mr: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[test-volume], unattached volumes=[test-volume liveness-probe-volume default-token-4h6dd]: timed out waiting for the condition Nov 27 02:25:51.578: INFO: At 2019-11-27 02:20:01 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-x4mr: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[test-volume], unattached volumes=[default-token-4h6dd test-volume liveness-probe-volume]: timed out waiting for the condition Nov 27 02:25:51.578: INFO: At 2019-11-27 02:20:08 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-x4mr: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f" Nov 27 02:25:51.578: INFO: At 2019-11-27 02:22:19 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-x4mr: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[liveness-probe-volume default-token-4h6dd test-volume], unattached volumes=[liveness-probe-volume default-token-4h6dd test-volume]: timed out waiting for the condition Nov 27 02:25:51.580: INFO: POD NODE PHASE GRACE CONDITIONS Nov 27 02:25:51.580: INFO: Nov 27 02:25:51.583: INFO: Logging node 
info for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:25:51.591: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000000 /api/v1/nodes/k8s-agentpool1-27910301-vmss000000 433df295-d1c5-449e-b3e0-cf5dcbb47db5 5932 0 2019-11-27 02:02:30 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000000 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 
0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:25:14 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:25:14 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:25:14 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:25:14 +0000 UTC,LastTransitionTime:2019-11-27 02:02:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000000,},NodeAddress{Type:InternalIP,Address:10.240.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5cbe44f20b024d2194231594a97855b1,SystemUUID:91700562-C649-C848-A3DD-1E514501D602,BootID:4dc3a524-9111-40e4-8765-ea1a0ccebb06,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 
k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 
mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-239fed30-3cd8-4faf-a0cc-913154702e71],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-239fed30-3cd8-4faf-a0cc-913154702e71,DevicePath:3,},},Config:nil,},} Nov 27 02:25:51.591: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:25:51.595: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000000 Nov 27 02:25:51.601: INFO: azure-cni-networkmonitor-f9bdz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:51.601: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:25:51.601: INFO: azure-ip-masq-agent-q77hb started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:51.601: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:25:51.601: INFO: keyvault-flexvolume-gt5vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:51.601: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 27 02:25:51.601: INFO: blobfuse-flexvol-installer-2c2vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:51.601: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 27 02:25:51.601: INFO: azure-injector started at 2019-11-27 02:20:44 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:51.601: INFO: Container azure-injector ready: false, restart count 0 Nov 27 02:25:51.601: INFO: kube-proxy-956dz started at 2019-11-27 02:02:32 
+0000 UTC (0+1 container statuses recorded) Nov 27 02:25:51.601: INFO: Container kube-proxy ready: true, restart count 0 W1127 02:25:51.604452 14162 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 27 02:25:51.629: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:25:51.629: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:25:51.632: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000001 /api/v1/nodes/k8s-agentpool1-27910301-vmss000001 d9661b2f-8dba-435f-9146-c53e15590b86 5430 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:1 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000001 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:1] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 
0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000001,},NodeAddress{Type:InternalIP,Address:10.240.0.35,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c1701de1799d4bda8bb81a1a74346a2a,SystemUUID:E7C6657A-6794-5043-BF92-958F99FF1F10,BootID:f3be9e11-e33b-462a-b7b8-5177e4209627,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:44100963,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:e51071a0670e003e3936190cfda74cfd6edaa39e0d16fc0b5a5f09dbf09dd1ff 
k8s.gcr.io/metrics-server-amd64:v0.3.4],SizeBytes:39944451,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:25:51.632: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:25:51.637: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000001 Nov 27 02:25:51.644: INFO: azure-ip-masq-agent-sxnbk started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:51.644: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:25:51.644: INFO: metrics-server-855b565c8f-nbxsd started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:51.644: INFO: Container metrics-server ready: true, restart count 0 Nov 27 02:25:51.644: INFO: blobfuse-flexvol-installer-x5wvc started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:51.644: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 27 02:25:51.644: INFO: 
azure-cni-networkmonitor-wcplr started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:51.644: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:25:51.644: INFO: keyvault-flexvolume-q956v started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:51.644: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 27 02:25:51.644: INFO: coredns-56bc7dfcc6-pxrl6 started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:51.644: INFO: Container coredns ready: true, restart count 0 Nov 27 02:25:51.644: INFO: kube-proxy-wcptt started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:51.644: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:25:51.644: INFO: kubernetes-dashboard-65966766b9-hfq6z started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:51.644: INFO: Container kubernetes-dashboard ready: true, restart count 0 W1127 02:25:51.646801 14162 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Nov 27 02:25:51.672: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:25:51.672: INFO: Logging node info for node k8s-master-27910301-0 Nov 27 02:25:51.675: INFO: Node Info: &Node{ObjectMeta:{k8s-master-27910301-0 /api/v1/nodes/k8s-master-27910301-0 c69dd001-f5f0-495c-861a-fcdc5c4685fa 5287 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-27910301-0 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master: node.kubernetes.io/instance-type:Standard_DS2_v2 topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachines/k8s-master-27910301-0,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 
DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:21:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:21:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:21:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:21:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-master-27910301-0,},NodeAddress{Type:InternalIP,Address:10.255.255.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4fbcdc9a6aec4bf5bbacc8e6c7d0effc,SystemUUID:6F669D5F-47AF-F34B-A88F-B20FF17F57DA,BootID:5ce46fbc-c556-476d-85de-4196f4d7878c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager-amd64:v9.0.2],SizeBytes:83076028,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:25:51.675: INFO: Logging kubelet events for node k8s-master-27910301-0 Nov 27 02:25:51.679: INFO: Logging pods the kubelet thinks is on node k8s-master-27910301-0 Nov 27 02:25:51.684: INFO: azure-ip-masq-agent-x5bzr 
started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:25:51.684: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 27 02:25:51.684: INFO: azure-cni-networkmonitor-5jpkd started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:25:51.684: INFO: Container azure-cnms ready: true, restart count 0
Nov 27 02:25:51.684: INFO: kube-proxy-krs8d started at 2019-11-27 02:01:05 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:25:51.684: INFO: Container kube-proxy ready: true, restart count 0
Nov 27 02:25:51.684: INFO: kube-addon-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:25:51.684: INFO: Container kube-addon-manager ready: true, restart count 0
Nov 27 02:25:51.684: INFO: kube-apiserver-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:25:51.684: INFO: Container kube-apiserver ready: true, restart count 0
Nov 27 02:25:51.684: INFO: kube-controller-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:25:51.684: INFO: Container kube-controller-manager ready: true, restart count 0
Nov 27 02:25:51.684: INFO: kube-scheduler-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:25:51.684: INFO: Container kube-scheduler ready: true, restart count 0
W1127 02:25:51.686350 14162 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 27 02:25:51.704: INFO: Latency metrics for node k8s-master-27910301-0
Nov 27 02:25:51.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-2329" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sazure\-disk\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\ssubPath\sshould\sfail\sif\ssubpath\swith\sbackstepping\sis\soutside\sthe\svolume\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:273
Nov 27 02:15:26.921: while waiting for subpath failure
Unexpected error:
    <*errors.errorString | 0xc0000d5950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:740
from junit_08.xml
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 27 02:10:20.737: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
STEP: Building a namespace api object, basename provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-2819
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath with backstepping is outside the volume [Slow]
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:273
Nov 27 02:10:20.879: INFO: Creating resource for dynamic PV
Nov 27 02:10:20.879: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(azure-disk) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-2819-azure-disk-scxvmtk
STEP: creating a claim
Nov 27 02:10:20.883: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Nov 27 02:10:20.888: INFO: Waiting up to 5m0s for PersistentVolumeClaims [azure-diskqvz7l] to have phase Bound
Nov 27 02:10:20.891: INFO: PersistentVolumeClaim azure-diskqvz7l found but phase is Pending instead of Bound.
Nov 27 02:10:22.895: INFO: PersistentVolumeClaim azure-diskqvz7l found but phase is Pending instead of Bound.
Nov 27 02:10:24.899: INFO: PersistentVolumeClaim azure-diskqvz7l found but phase is Pending instead of Bound.
Nov 27 02:10:26.903: INFO: PersistentVolumeClaim azure-diskqvz7l found and phase=Bound (6.014729345s)
STEP: Creating pod pod-subpath-test-azure-disk-dynamicpv-hkcr
STEP: Checking for subpath error in container status
Nov 27 02:15:26.921: FAIL: while waiting for subpath failure
Unexpected error:
    <*errors.errorString | 0xc0000d5950>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition occurred
Nov 27 02:15:26.921: INFO: Deleting pod "pod-subpath-test-azure-disk-dynamicpv-hkcr" in namespace "provisioning-2819"
Nov 27 02:15:26.925: INFO: Wait up to 5m0s for pod "pod-subpath-test-azure-disk-dynamicpv-hkcr" to be fully deleted
STEP: Deleting pod
Nov 27 02:15:30.939: INFO: Deleting pod "pod-subpath-test-azure-disk-dynamicpv-hkcr" in namespace "provisioning-2819"
STEP: Deleting pvc
Nov 27 02:15:30.941: INFO: Deleting PersistentVolumeClaim "azure-diskqvz7l"
Nov 27 02:15:30.945: INFO: Waiting up to 5m0s for PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d to get deleted
Nov 27 02:15:30.949: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Bound (3.61813ms)
Nov 27 02:15:35.951: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (5.006219086s)
Nov 27 02:15:40.954: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (10.008858723s)
Nov 27 02:15:45.956: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (15.011285437s)
Nov 27 02:15:50.959: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (20.014002334s)
Nov 27 02:15:55.962: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (25.017036513s)
Nov 27 02:16:00.965: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (30.01973907s)
Nov 27 02:16:05.968: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (35.023465716s)
Nov 27 02:16:10.971: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (40.026181034s)
Nov 27 02:16:15.975: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (45.029637439s)
Nov 27 02:16:20.978: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (50.032783822s)
Nov 27 02:16:25.981: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (55.036031887s)
Nov 27 02:16:30.984: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (1m0.038609929s)
Nov 27 02:16:35.987: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (1m5.042003658s)
Nov 27 02:16:40.990: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (1m10.044734664s)
Nov 27 02:16:45.993: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (1m15.047657053s)
Nov 27 02:16:50.995: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (1m20.050366122s)
Nov 27 02:16:55.999: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (1m25.053911881s)
Nov 27 02:17:01.002: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (1m30.056638515s)
Nov 27 02:17:06.005: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (1m35.060513741s)
Nov 27 02:17:11.009: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (1m40.063656144s)
Nov 27 02:17:16.012: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (1m45.067174733s)
Nov 27 02:17:21.015: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (1m50.070310102s)
Nov 27 02:17:26.019: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (1m55.074216961s)
Nov 27 02:17:31.022: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (2m0.077133395s)
Nov 27 02:17:36.026: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (2m5.080699218s)
Nov 27 02:17:41.029: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (2m10.08377922s)
Nov 27 02:17:46.032: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (2m15.087067808s)
Nov 27 02:17:51.035: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (2m20.089945577s)
Nov 27 02:17:56.039: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (2m25.093570636s)
Nov 27 02:18:01.042: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (2m30.097496482s)
Nov 27 02:18:06.046: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (2m35.10112911s)
Nov 27 02:18:11.050: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (2m40.104730123s)
Nov 27 02:18:16.053: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (2m45.108262019s)
Nov 27 02:18:21.058: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (2m50.112774709s)
Nov 27 02:18:26.062: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (2m55.116744679s)
Nov 27 02:18:31.064: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (3m0.119492225s)
Nov 27 02:18:36.068: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (3m5.122840361s)
Nov 27 02:18:41.071: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (3m10.125742879s)
Nov 27 02:18:46.074: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (3m15.128526382s)
Nov 27 02:18:51.077: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (3m20.131803674s)
Nov 27 02:18:56.080: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (3m25.134941652s)
Nov 27 02:19:01.083: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (3m30.138202316s)
Nov 27 02:19:06.087: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (3m35.141795369s)
Nov 27 02:19:11.090: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (3m40.145391809s)
Nov 27 02:19:16.093: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (3m45.148431731s)
Nov 27 02:19:21.097: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (3m50.151789942s)
Nov 27 02:19:26.101: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (3m55.155853845s)
Nov 27 02:19:31.105: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (4m0.159568233s)
Nov 27 02:19:36.108: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (4m5.162738903s)
Nov 27 02:19:41.111: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (4m10.166450364s)
Nov 27 02:19:46.117: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (4m15.171705325s)
Nov 27 02:19:51.119: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (4m20.174400054s)
Nov 27 02:19:56.122: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (4m25.177344571s)
Nov 27 02:20:01.126: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (4m30.181307785s)
Nov 27 02:20:06.129: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (4m35.184135477s)
Nov 27 02:20:11.132: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (4m40.18737386s)
Nov 27 02:20:16.135: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (4m45.189719124s)
Nov 27 02:20:21.138: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (4m50.193283086s)
Nov 27 02:20:26.141: INFO: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d found and phase=Failed (4m55.196225531s)
STEP: Deleting sc
Nov 27 02:20:31.147: FAIL: while cleaning up resource
Unexpected error:
    <errors.aggregate | len:1, cap:1>: [
        [
            {
                error: {
                    cause: {
                        s: "PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d still exists within 5m0s",
                    },
                    msg: "Persistent Volume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d not deleted by dynamic provisioner",
                },
                stack: [0x3995499, 0x39c48f3, 0x4329a2, 0x1593131, 0x4329a2, 0x7e3328, 0x159293c, 0x1593af5, 0x7e4590, 0x7e4387, 0x187f5d5, 0x39ab716, 0x39ab6b2, 0x39c5bef, 0x39c5bb7, 0x7d30a8, 0x7d2cff, 0x7d21a4, 0x7d9105, 0x7d8961, 0x7de7df, 0x7de300, 0x7ddb47, 0x7e012b, 0x7e2c87, 0x7e29cd, 0x3abb0ba, 0x3abfcab, 0x516669, 0x462e51],
            },
        ],
    ]
    Persistent Volume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d not deleted by dynamic provisioner: PersistentVolume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d still exists within 5m0s occurred
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "provisioning-2819".
STEP: Found 5 events.
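The repeated "PersistentVolume ... found and phase=Failed" lines above are the visible trace of a fixed-interval poll with an overall deadline: the framework re-checks the volume roughly every 5s and gives up once 5m0s have elapsed, which is what produces the final "still exists within 5m0s" error. A minimal sketch of that poll-until-gone pattern, in Python for illustration only (the real code is Go in the e2e framework's wait helpers); `wait_for` and its injectable `now`/`sleep` hooks are hypothetical names used here to keep the sketch testable, not framework API:

```python
import time

def wait_for(condition, timeout_s=300.0, interval_s=5.0,
             now=time.monotonic, sleep=time.sleep):
    """Poll `condition` every `interval_s` seconds until it returns True,
    giving up once `timeout_s` has elapsed (cf. 'Waiting up to 5m0s ...')."""
    start = now()
    while now() - start < timeout_s:
        if condition():
            return True
        # each failed check corresponds to one "found and phase=..." log line
        sleep(interval_s)
    return False
```

The injectable clock is only there so the loop can be exercised without real sleeping; the behavior matches the log: ~60 checks in 5 minutes, then a timeout error.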
Nov 27 02:20:31.150: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-hkcr: {default-scheduler } Scheduled: Successfully assigned provisioning-2819/pod-subpath-test-azure-disk-dynamicpv-hkcr to k8s-agentpool1-27910301-vmss000000 Nov 27 02:20:31.150: INFO: At 2019-11-27 02:10:26 +0000 UTC - event for azure-diskqvz7l: {persistentvolume-controller } ProvisioningSucceeded: Successfully provisioned volume pvc-a4e827c1-eef0-4abf-987f-7de787fa567d using kubernetes.io/azure-disk Nov 27 02:20:31.150: INFO: At 2019-11-27 02:12:29 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-hkcr: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[test-volume], unattached volumes=[test-volume liveness-probe-volume default-token-8ngzp]: timed out waiting for the condition Nov 27 02:20:31.150: INFO: At 2019-11-27 02:14:38 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-hkcr: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-a4e827c1-eef0-4abf-987f-7de787fa567d" Nov 27 02:20:31.150: INFO: At 2019-11-27 02:17:03 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-hkcr: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[test-volume liveness-probe-volume default-token-8ngzp], unattached volumes=[test-volume liveness-probe-volume default-token-8ngzp]: timed out waiting for the condition Nov 27 02:20:31.152: INFO: POD NODE PHASE GRACE CONDITIONS Nov 27 02:20:31.152: INFO: Nov 27 02:20:31.155: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:20:31.157: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000000 /api/v1/nodes/k8s-agentpool1-27910301-vmss000000 433df295-d1c5-449e-b3e0-cf5dcbb47db5 4857 0 2019-11-27 02:02:30 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000000 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:20:13 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:20:13 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:20:13 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:20:13 +0000 UTC,LastTransitionTime:2019-11-27 02:02:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000000,},NodeAddress{Type:InternalIP,Address:10.240.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5cbe44f20b024d2194231594a97855b1,SystemUUID:91700562-C649-C848-A3DD-1E514501D602,BootID:4dc3a524-9111-40e4-8765-ea1a0ccebb06,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b 
k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd 
kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f,DevicePath:6,},},Config:nil,},} Nov 27 02:20:31.158: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:20:31.161: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000000 Nov 27 02:20:31.200: INFO: keyvault-flexvolume-gt5vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:20:31.200: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 27 02:20:31.200: INFO: blobfuse-flexvol-installer-2c2vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:20:31.200: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 27 02:20:31.200: INFO: kube-proxy-956dz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:20:31.200: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:20:31.200: INFO: azure-ip-masq-agent-q77hb started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:20:31.200: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:20:31.200: INFO: pod-subpath-test-azure-disk-dynamicpv-x4mr started at 2019-11-27 02:15:43 +0000 UTC (1+1 container statuses recorded) Nov 27 02:20:31.200: INFO: Init container init-volume-azure-disk-dynamicpv-x4mr ready: false, restart count 0 Nov 27 02:20:31.200: INFO: Container test-container-subpath-azure-disk-dynamicpv-x4mr ready: false, restart 
count 0
Nov 27 02:20:31.200: INFO: azure-io-client started at 2019-11-27 02:18:34 +0000 UTC (1+1 container statuses recorded)
Nov 27 02:20:31.200: INFO: Init container azure-io-init ready: false, restart count 0
Nov 27 02:20:31.200: INFO: Container azure-io-client ready: false, restart count 0
Nov 27 02:20:31.200: INFO: security-context-54d39a28-d787-43ae-9155-1d1da53f4a30 started at 2019-11-27 02:19:10 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:31.200: INFO: Container write-pod ready: false, restart count 0
Nov 27 02:20:31.200: INFO: azure-cni-networkmonitor-f9bdz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:31.200: INFO: Container azure-cnms ready: true, restart count 0
W1127 02:20:31.202627 14151 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 27 02:20:31.227: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000000
Nov 27 02:20:31.227: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000001
Nov 27 02:20:31.229: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000001 /api/v1/nodes/k8s-agentpool1-27910301-vmss000001 d9661b2f-8dba-435f-9146-c53e15590b86 4627 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:1 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000001 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:1] map[node.alpha.kubernetes.io/ttl:0
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:19:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:19:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:19:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:19:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000001,},NodeAddress{Type:InternalIP,Address:10.240.0.35,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c1701de1799d4bda8bb81a1a74346a2a,SystemUUID:E7C6657A-6794-5043-BF92-958F99FF1F10,BootID:f3be9e11-e33b-462a-b7b8-5177e4209627,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:44100963,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:e51071a0670e003e3936190cfda74cfd6edaa39e0d16fc0b5a5f09dbf09dd1ff 
k8s.gcr.io/metrics-server-amd64:v0.3.4],SizeBytes:39944451,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-63caa552-e092-43fe-b924-ef6a44e177a3 kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-ff9df561-0196-4319-bfaf-223abedd298f],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:20:31.229: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:20:31.233: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000001 Nov 27 02:20:31.263: INFO: azure-ip-masq-agent-sxnbk started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:20:31.263: INFO: Container azure-ip-masq-agent 
ready: true, restart count 0
Nov 27 02:20:31.263: INFO: metrics-server-855b565c8f-nbxsd started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:31.263: INFO: Container metrics-server ready: true, restart count 0
Nov 27 02:20:31.263: INFO: blobfuse-flexvol-installer-x5wvc started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:31.263: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0
Nov 27 02:20:31.263: INFO: kube-proxy-wcptt started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:31.263: INFO: Container kube-proxy ready: true, restart count 0
Nov 27 02:20:31.263: INFO: kubernetes-dashboard-65966766b9-hfq6z started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:31.263: INFO: Container kubernetes-dashboard ready: true, restart count 0
Nov 27 02:20:31.263: INFO: security-context-d6c4054e-cb3a-4228-9e3b-db7254d4c5b4 started at 2019-11-27 02:16:51 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:31.263: INFO: Container write-pod ready: false, restart count 0
Nov 27 02:20:31.263: INFO: azure-cni-networkmonitor-wcplr started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:31.263: INFO: Container azure-cnms ready: true, restart count 0
Nov 27 02:20:31.263: INFO: keyvault-flexvolume-q956v started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:31.263: INFO: Container keyvault-flexvolume ready: true, restart count 0
Nov 27 02:20:31.263: INFO: coredns-56bc7dfcc6-pxrl6 started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:31.263: INFO: Container coredns ready: true, restart count 0
W1127 02:20:31.266063 14151 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 27 02:20:31.293: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:20:31.293: INFO: Logging node info for node k8s-master-27910301-0 Nov 27 02:20:31.295: INFO: Node Info: &Node{ObjectMeta:{k8s-master-27910301-0 /api/v1/nodes/k8s-master-27910301-0 c69dd001-f5f0-495c-861a-fcdc5c4685fa 4091 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-27910301-0 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master: node.kubernetes.io/instance-type:Standard_DS2_v2 topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachines/k8s-master-27910301-0,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 
DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:16:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:16:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:16:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:16:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-master-27910301-0,},NodeAddress{Type:InternalIP,Address:10.255.255.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4fbcdc9a6aec4bf5bbacc8e6c7d0effc,SystemUUID:6F669D5F-47AF-F34B-A88F-B20FF17F57DA,BootID:5ce46fbc-c556-476d-85de-4196f4d7878c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager-amd64:v9.0.2],SizeBytes:83076028,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:20:31.295: INFO: Logging kubelet events for node k8s-master-27910301-0 Nov 27 02:20:31.319: INFO: Logging pods the kubelet thinks is on node k8s-master-27910301-0 Nov 27 02:20:31.351: INFO: 
kube-apiserver-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:31.351: INFO: Container kube-apiserver ready: true, restart count 0
Nov 27 02:20:31.351: INFO: kube-controller-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:31.351: INFO: Container kube-controller-manager ready: true, restart count 0
Nov 27 02:20:31.351: INFO: kube-scheduler-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:31.351: INFO: Container kube-scheduler ready: true, restart count 0
Nov 27 02:20:31.351: INFO: azure-ip-masq-agent-x5bzr started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:31.351: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 27 02:20:31.351: INFO: azure-cni-networkmonitor-5jpkd started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:31.351: INFO: Container azure-cnms ready: true, restart count 0
Nov 27 02:20:31.351: INFO: kube-proxy-krs8d started at 2019-11-27 02:01:05 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:31.351: INFO: Container kube-proxy ready: true, restart count 0
Nov 27 02:20:31.351: INFO: kube-addon-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:31.351: INFO: Container kube-addon-manager ready: true, restart count 0
W1127 02:20:31.354494 14151 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 27 02:20:31.374: INFO: Latency metrics for node k8s-master-27910301-0
Nov 27 02:20:31.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-2819" for this suite.
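The pod in the failure above never reached Running before the wait timed out, and the node dumps show both each node's `attachable-volumes-azure-disk` capacity (8) and a populated `VolumesInUse` list. One hypothetical way to sanity-check whether a node is running close to its azure-disk attach limit is sketched below; the helper and its input shape are illustrative assumptions, not part of the e2e framework.

```python
# Hypothetical diagnostic helper (NOT part of the e2e framework): given
# per-node attach limits and in-use volume lists like those dumped above,
# flag nodes at or near their azure-disk attach limit, which can leave
# pods stuck Pending on volume attach.

def attach_pressure(nodes):
    """nodes: {name: {"limit": int, "in_use": [volume_id, ...]}}.
    Returns {name: (used, limit)} for nodes using >= 75% of their limit."""
    report = {}
    for name, info in nodes.items():
        used = len(info["in_use"])
        if used >= 0.75 * info["limit"]:
            report[name] = (used, info["limit"])
    return report

if __name__ == "__main__":
    # Values modelled on the node dumps above: the agent nodes report
    # attachable-volumes-azure-disk: 8. Volume IDs are placeholders.
    nodes = {
        "k8s-agentpool1-27910301-vmss000000": {"limit": 8, "in_use": ["pvc"] * 7},
        "k8s-agentpool1-27910301-vmss000001": {"limit": 8, "in_use": ["pvc"] * 2},
    }
    print(attach_pressure(nodes))
```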
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sazure\-disk\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\ssubPath\sshould\ssupport\srestarting\scontainers\susing\sdirectory\sas\ssubpath\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:315
Nov 27 02:12:41.598: while cleaning up resource
Unexpected error:
    <errors.aggregate | len:1, cap:1>: [
        [
            {
                error: {
                    cause: {
                        s: "PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a still exists within 5m0s",
                    },
                    msg: "Persistent Volume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a not deleted by dynamic provisioner",
                },
                stack: [0x3995499, 0x39c48f3, 0x39c67df, 0x7d30a8, 0x7d2cff, 0x7d21a4, 0x7d9105, 0x7d8961, 0x7de7df, 0x7de300, 0x7ddb47, 0x7e012b, 0x7e2c87, 0x7e29cd, 0x3abb0ba, 0x3abfcab, 0x516669, 0x462e51],
            },
        ],
    ]
Persistent Volume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a not deleted by dynamic provisioner: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a still exists within 5m0s occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:180
from junit_05.xml
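The failure above is in test cleanup: the framework polls the PersistentVolume roughly every 5 seconds for up to 5m0s and gives up because the PV sits in phase=Failed instead of disappearing. A minimal sketch of that wait-for-deletion semantics, under the assumption that the loop simply re-reads the phase until the PV is gone or the deadline passes (the function name and parameters here are illustrative, not the framework's actual API):

```python
# Minimal sketch (assumed semantics) of a WaitForPersistentVolumeDeleted-style
# loop: poll every `interval` seconds until the getter reports the PV is gone,
# or raise after `timeout`. A PV stuck in phase=Failed, as in the log above,
# polls until the timeout fires.
import time

def wait_for_pv_deleted(get_phase, timeout=300.0, interval=5.0,
                        clock=time.monotonic, sleep=time.sleep):
    """get_phase() returns the PV phase string, or None once the PV is gone.
    Returns normally when the PV disappears; raises TimeoutError otherwise.
    `clock` and `sleep` are injectable so the loop can be tested."""
    deadline = clock() + timeout
    while clock() < deadline:
        if get_phase() is None:  # PV no longer exists: deletion succeeded
            return
        sleep(interval)
    raise TimeoutError("PersistentVolume still exists within %gs" % timeout)
```

Injecting a fake clock and sleep makes both outcomes (deleted in time, stuck in Failed) easy to exercise without a cluster.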
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 27 02:03:42.379: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
STEP: Building a namespace api object, basename provisioning
Nov 27 02:03:42.428: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 27 02:03:42.438: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-4613
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support restarting containers using directory as subpath [Slow]
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:315
Nov 27 02:03:42.549: INFO: Creating resource for dynamic PV
Nov 27 02:03:42.549: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(azure-disk) supported size:{ 1Mi}
STEP: creating a StorageClass provisioning-4613-azure-disk-scdrwtd
STEP: creating a claim
Nov 27 02:03:42.560: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Nov 27 02:03:42.580: INFO: Waiting up to 5m0s for PersistentVolumeClaims [azure-diskmdrwx] to have phase Bound
Nov 27 02:03:42.585: INFO: PersistentVolumeClaim azure-diskmdrwx found but phase is Pending instead of Bound.
Nov 27 02:03:44.588: INFO: PersistentVolumeClaim azure-diskmdrwx found but phase is Pending instead of Bound.
Nov 27 02:03:46.593: INFO: PersistentVolumeClaim azure-diskmdrwx found but phase is Pending instead of Bound.
Nov 27 02:03:48.596: INFO: PersistentVolumeClaim azure-diskmdrwx found but phase is Pending instead of Bound.
Nov 27 02:03:50.599: INFO: PersistentVolumeClaim azure-diskmdrwx found but phase is Pending instead of Bound.
Nov 27 02:03:52.603: INFO: PersistentVolumeClaim azure-diskmdrwx found but phase is Pending instead of Bound.
Nov 27 02:03:54.606: INFO: PersistentVolumeClaim azure-diskmdrwx found and phase=Bound (12.02650894s)
STEP: Creating pod pod-subpath-test-azure-disk-dynamicpv-btrb
STEP: Failing liveness probe
Nov 27 02:06:16.632: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://kubetest-f1185e95-10b6-11ea-8290-02424e92b20f.westus2.cloudapp.azure.com --kubeconfig=/workspace/aks490311382/kubeconfig/kubeconfig.westus2.json exec --namespace=provisioning-4613 pod-subpath-test-azure-disk-dynamicpv-btrb --container test-container-volume-azure-disk-dynamicpv-btrb -- /bin/sh -c rm /probe-volume/probe-file'
Nov 27 02:06:16.995: INFO: stderr: ""
Nov 27 02:06:16.995: INFO: stdout: ""
Nov 27 02:06:16.995: INFO: Pod exec output:
STEP: Waiting for container to restart
Nov 27 02:06:16.999: INFO: Container test-container-subpath-azure-disk-dynamicpv-btrb, restarts: 0
Nov 27 02:06:27.003: INFO: Container test-container-subpath-azure-disk-dynamicpv-btrb, restarts: 1
Nov 27 02:06:27.003: INFO: Container has restart count: 1
STEP: Rewriting the file
Nov 27 02:06:27.003: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://kubetest-f1185e95-10b6-11ea-8290-02424e92b20f.westus2.cloudapp.azure.com --kubeconfig=/workspace/aks490311382/kubeconfig/kubeconfig.westus2.json exec --namespace=provisioning-4613 pod-subpath-test-azure-disk-dynamicpv-btrb --container test-container-volume-azure-disk-dynamicpv-btrb -- /bin/sh -c echo test-after > /probe-volume/probe-file'
Nov 27 02:06:27.314: INFO: stderr: ""
Nov 27 02:06:27.314: INFO: stdout: ""
Nov 27 02:06:27.314: INFO: Pod exec output:
STEP: Waiting for container to stop restarting
Nov 27 02:06:29.321: INFO: Container has restart count: 2
Nov 27 02:07:31.322: INFO: Container restart has stabilized
Nov 27 02:07:31.322: INFO: Deleting pod "pod-subpath-test-azure-disk-dynamicpv-btrb" in namespace "provisioning-4613"
Nov 27 02:07:31.331: INFO: Wait up to 5m0s for pod "pod-subpath-test-azure-disk-dynamicpv-btrb" to be fully deleted
STEP: Deleting pod
Nov 27 02:07:41.340: INFO: Deleting pod "pod-subpath-test-azure-disk-dynamicpv-btrb" in namespace "provisioning-4613"
STEP: Deleting pvc
Nov 27 02:07:41.345: INFO: Deleting PersistentVolumeClaim "azure-diskmdrwx"
Nov 27 02:07:41.349: INFO: Waiting up to 5m0s for PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a to get deleted
Nov 27 02:07:41.369: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Bound (20.015253ms)
Nov 27 02:07:46.373: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (5.023776255s)
Nov 27 02:07:51.376: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (10.026712652s)
Nov 27 02:07:56.380: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (15.030575855s)
Nov 27 02:08:01.383: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (20.034005055s)
Nov 27 02:08:06.387: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (25.037447656s)
Nov 27 02:08:11.389: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (30.04005765s)
Nov 27 02:08:16.393: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (35.043614551s)
Nov 27 02:08:21.397: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (40.047387555s)
Nov 27 02:08:26.400: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (45.051018557s)
Nov 27 02:08:31.421: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (50.072007093s) Nov 27 02:08:36.425: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (55.075467694s) Nov 27 02:08:41.427: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (1m0.07824969s) Nov 27 02:08:46.431: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (1m5.082293396s) Nov 27 02:08:51.434: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (1m10.085299794s) Nov 27 02:08:56.438: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (1m15.089113799s) Nov 27 02:09:01.441: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (1m20.092129797s) Nov 27 02:09:06.445: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (1m25.0956843s) Nov 27 02:09:11.448: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (1m30.098976601s) Nov 27 02:09:16.451: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (1m35.102141101s) Nov 27 02:09:21.455: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (1m40.106016507s) Nov 27 02:09:26.460: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (1m45.110536417s) Nov 27 02:09:31.463: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (1m50.114257422s) Nov 27 02:09:36.467: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (1m55.117429222s) Nov 27 02:09:41.471: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (2m0.12152423s) Nov 27 02:09:46.475: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed 
(2m5.125370736s) Nov 27 02:09:51.478: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (2m10.12897364s) Nov 27 02:09:56.481: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (2m15.132301242s) Nov 27 02:10:01.486: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (2m20.136549951s) Nov 27 02:10:06.489: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (2m25.140128855s) Nov 27 02:10:11.493: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (2m30.144015662s) Nov 27 02:10:16.496: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (2m35.147167163s) Nov 27 02:10:21.500: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (2m40.15103587s) Nov 27 02:10:26.504: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (2m45.154328572s) Nov 27 02:10:31.507: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (2m50.157354573s) Nov 27 02:10:36.510: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (2m55.160790377s) Nov 27 02:10:41.513: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (3m0.163784877s) Nov 27 02:10:46.517: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (3m5.167941786s) Nov 27 02:10:51.520: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (3m10.171054488s) Nov 27 02:10:56.523: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (3m15.173978988s) Nov 27 02:11:01.527: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (3m20.177713895s) Nov 27 02:11:06.530: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and 
phase=Failed (3m25.180897697s) Nov 27 02:11:11.534: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (3m30.184390402s) Nov 27 02:11:16.536: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (3m35.187296636s) Nov 27 02:11:21.540: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (3m40.19087806s) Nov 27 02:11:26.543: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (3m45.19428225s) Nov 27 02:11:31.547: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (3m50.197568104s) Nov 27 02:11:36.550: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (3m55.20122123s) Nov 27 02:11:41.554: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (4m0.20441712s) Nov 27 02:11:46.557: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (4m5.207926079s) Nov 27 02:11:51.561: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (4m10.21188291s) Nov 27 02:11:56.564: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (4m15.215282705s) Nov 27 02:12:01.568: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (4m20.21896187s) Nov 27 02:12:06.572: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (4m25.222401602s) Nov 27 02:12:11.575: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (4m30.225827403s) Nov 27 02:12:16.578: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (4m35.229026671s) Nov 27 02:12:21.582: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (4m40.232524712s) Nov 27 02:12:26.585: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a 
found and phase=Failed (4m45.235790521s)
Nov 27 02:12:31.588: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (4m50.238772697s)
Nov 27 02:12:36.592: INFO: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a found and phase=Failed (4m55.243116355s)
STEP: Deleting sc
Nov 27 02:12:41.598: FAIL: while cleaning up resource
Unexpected error:
    <errors.aggregate | len:1, cap:1>: [
        [
            {
                error: {
                    cause: {
                        s: "PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a still exists within 5m0s",
                    },
                    msg: "Persistent Volume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a not deleted by dynamic provisioner",
                },
                stack: [0x3995499, 0x39c48f3, 0x39c67df, 0x7d30a8, 0x7d2cff, 0x7d21a4, 0x7d9105, 0x7d8961, 0x7de7df, 0x7de300, 0x7ddb47, 0x7e012b, 0x7e2c87, 0x7e29cd, 0x3abb0ba, 0x3abfcab, 0x516669, 0x462e51],
            },
        ],
    ]
Persistent Volume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a not deleted by dynamic provisioner: PersistentVolume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a still exists within 5m0s occurred
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "provisioning-4613".
STEP: Found 18 events.
Nov 27 02:12:41.603: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-btrb: {default-scheduler } Scheduled: Successfully assigned provisioning-4613/pod-subpath-test-azure-disk-dynamicpv-btrb to k8s-agentpool1-27910301-vmss000000 Nov 27 02:12:41.603: INFO: At 2019-11-27 02:03:52 +0000 UTC - event for azure-diskmdrwx: {persistentvolume-controller } ProvisioningSucceeded: Successfully provisioned volume pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a using kubernetes.io/azure-disk Nov 27 02:12:41.603: INFO: At 2019-11-27 02:05:20 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-btrb: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-92776655-3807-44e5-8e4e-c5fc02eb5d8a" Nov 27 02:12:41.603: INFO: At 2019-11-27 02:05:57 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-btrb: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[test-volume], unattached volumes=[test-volume liveness-probe-volume default-token-rth2j]: timed out waiting for the condition Nov 27 02:12:41.603: INFO: At 2019-11-27 02:06:11 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-btrb: {kubelet k8s-agentpool1-27910301-vmss000000} Pulling: Pulling image "docker.io/library/busybox:1.29" Nov 27 02:12:41.603: INFO: At 2019-11-27 02:06:13 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-btrb: {kubelet k8s-agentpool1-27910301-vmss000000} Pulled: Successfully pulled image "docker.io/library/busybox:1.29" Nov 27 02:12:41.603: INFO: At 2019-11-27 02:06:14 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-btrb: {kubelet k8s-agentpool1-27910301-vmss000000} Created: Created container init-volume-azure-disk-dynamicpv-btrb Nov 27 02:12:41.603: INFO: At 2019-11-27 02:06:14 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-btrb: {kubelet k8s-agentpool1-27910301-vmss000000} Started: Started container 
init-volume-azure-disk-dynamicpv-btrb Nov 27 02:12:41.603: INFO: At 2019-11-27 02:06:15 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-btrb: {kubelet k8s-agentpool1-27910301-vmss000000} Created: Created container test-container-volume-azure-disk-dynamicpv-btrb Nov 27 02:12:41.603: INFO: At 2019-11-27 02:06:15 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-btrb: {kubelet k8s-agentpool1-27910301-vmss000000} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine Nov 27 02:12:41.603: INFO: At 2019-11-27 02:06:15 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-btrb: {kubelet k8s-agentpool1-27910301-vmss000000} Started: Started container test-container-subpath-azure-disk-dynamicpv-btrb Nov 27 02:12:41.603: INFO: At 2019-11-27 02:06:15 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-btrb: {kubelet k8s-agentpool1-27910301-vmss000000} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine Nov 27 02:12:41.603: INFO: At 2019-11-27 02:06:15 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-btrb: {kubelet k8s-agentpool1-27910301-vmss000000} Created: Created container test-container-subpath-azure-disk-dynamicpv-btrb Nov 27 02:12:41.603: INFO: At 2019-11-27 02:06:16 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-btrb: {kubelet k8s-agentpool1-27910301-vmss000000} Started: Started container test-container-volume-azure-disk-dynamicpv-btrb Nov 27 02:12:41.603: INFO: At 2019-11-27 02:06:18 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-btrb: {kubelet k8s-agentpool1-27910301-vmss000000} Unhealthy: Liveness probe failed: cat: can't open '/probe-volume/probe-file': No such file or directory Nov 27 02:12:41.603: INFO: At 2019-11-27 02:06:18 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-btrb: {kubelet k8s-agentpool1-27910301-vmss000000} Killing: Container test-container-subpath-azure-disk-dynamicpv-btrb failed liveness probe, will be 
restarted Nov 27 02:12:41.603: INFO: At 2019-11-27 02:07:31 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-btrb: {kubelet k8s-agentpool1-27910301-vmss000000} Killing: Stopping container test-container-volume-azure-disk-dynamicpv-btrb Nov 27 02:12:41.603: INFO: At 2019-11-27 02:07:31 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-btrb: {kubelet k8s-agentpool1-27910301-vmss000000} Killing: Stopping container test-container-subpath-azure-disk-dynamicpv-btrb Nov 27 02:12:41.606: INFO: POD NODE PHASE GRACE CONDITIONS Nov 27 02:12:41.606: INFO: Nov 27 02:12:41.609: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:12:41.611: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000000 /api/v1/nodes/k8s-agentpool1-27910301-vmss000000 433df295-d1c5-449e-b3e0-cf5dcbb47db5 3056 0 2019-11-27 02:02:30 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000000 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] 
[]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:12:22 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:12:22 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:12:22 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:12:22 +0000 UTC,LastTransitionTime:2019-11-27 02:02:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000000,},NodeAddress{Type:InternalIP,Address:10.240.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5cbe44f20b024d2194231594a97855b1,SystemUUID:91700562-C649-C848-A3DD-1E514501D602,BootID:4dc3a524-9111-40e4-8765-ea1a0ccebb06,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 
gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-5353ebaf-bd58-4002-b67e-c3ea79ef5b51 kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-63caa552-e092-43fe-b924-ef6a44e177a3 kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-a4e827c1-eef0-4abf-987f-7de787fa567d 
kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-fb3ebcc3-f59e-438c-bed3-cc2f054f2ca7 kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-ff9df561-0196-4319-bfaf-223abedd298f],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104,DevicePath:0,},AttachedVolume{Name:kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-5353ebaf-bd58-4002-b67e-c3ea79ef5b51,DevicePath:3,},},Config:nil,},} Nov 27 02:12:41.611: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:12:41.615: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000000 Nov 27 02:12:41.652: INFO: azure-cni-networkmonitor-f9bdz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:12:41.652: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:12:41.652: INFO: exec-volume-test-azure-disk-dynamicpv-znvn started at 2019-11-27 02:10:16 +0000 UTC (0+1 container statuses recorded) Nov 27 02:12:41.652: INFO: Container exec-container-azure-disk-dynamicpv-znvn ready: false, restart count 0 Nov 27 02:12:41.652: INFO: pod-subpath-test-azure-disk-dynamicpv-hkcr started at 2019-11-27 02:10:26 +0000 UTC (1+1 container statuses recorded) Nov 27 02:12:41.652: INFO: Init container init-volume-azure-disk-dynamicpv-hkcr ready: false, restart 
count 0 Nov 27 02:12:41.652: INFO: Container test-container-subpath-azure-disk-dynamicpv-hkcr ready: false, restart count 0 Nov 27 02:12:41.652: INFO: kube-proxy-956dz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:12:41.652: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:12:41.652: INFO: azure-ip-masq-agent-q77hb started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:12:41.652: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:12:41.652: INFO: keyvault-flexvolume-gt5vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:12:41.652: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 27 02:12:41.652: INFO: blobfuse-flexvol-installer-2c2vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:12:41.652: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 27 02:12:41.652: INFO: azure-client started at 2019-11-27 02:06:31 +0000 UTC (0+1 container statuses recorded) Nov 27 02:12:41.652: INFO: Container azure-client ready: false, restart count 0 Nov 27 02:12:41.652: INFO: pod-subpath-test-azure-disk-dynamicpv-ppct started at 2019-11-27 02:08:31 +0000 UTC (1+2 container statuses recorded) Nov 27 02:12:41.652: INFO: Init container init-volume-azure-disk-dynamicpv-ppct ready: false, restart count 0 Nov 27 02:12:41.652: INFO: Container test-container-subpath-azure-disk-dynamicpv-ppct ready: false, restart count 0 Nov 27 02:12:41.652: INFO: Container test-container-volume-azure-disk-dynamicpv-ppct ready: false, restart count 0 Nov 27 02:12:41.652: INFO: azure-client started at 2019-11-27 02:06:27 +0000 UTC (0+1 container statuses recorded) Nov 27 02:12:41.652: INFO: Container azure-client ready: false, restart count 0 Nov 27 02:12:41.652: INFO: security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9 started at 2019-11-27 02:12:19 +0000 UTC (0+1 container statuses 
recorded) Nov 27 02:12:41.652: INFO: Container write-pod ready: false, restart count 0 W1127 02:12:41.655879 14154 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 27 02:12:41.682: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:12:41.682: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:12:41.685: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000001 /api/v1/nodes/k8s-agentpool1-27910301-vmss000001 d9661b2f-8dba-435f-9146-c53e15590b86 3119 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:1 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000001 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:1] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:12:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:12:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:12:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:12:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000001,},NodeAddress{Type:InternalIP,Address:10.240.0.35,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c1701de1799d4bda8bb81a1a74346a2a,SystemUUID:E7C6657A-6794-5043-BF92-958F99FF1F10,BootID:f3be9e11-e33b-462a-b7b8-5177e4209627,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:44100963,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:e51071a0670e003e3936190cfda74cfd6edaa39e0d16fc0b5a5f09dbf09dd1ff 
k8s.gcr.io/metrics-server-amd64:v0.3.4],SizeBytes:39944451,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-0110776b-bbfc-4c10-9501-bd823eec8397 kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-18b68e2d-8bb3-492c-9edf-612ccc5b5c83 kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-bd9b0944-89fc-4d5d-b1c2-10fd55c5d344],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:12:41.685: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:12:41.694: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000001 Nov 27 02:12:41.725: INFO: blobfuse-flexvol-installer-x5wvc started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 
02:12:41.725: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 27 02:12:41.725: INFO: azure-ip-masq-agent-sxnbk started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:12:41.725: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:12:41.725: INFO: metrics-server-855b565c8f-nbxsd started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:12:41.725: INFO: Container metrics-server ready: true, restart count 0 Nov 27 02:12:41.725: INFO: coredns-56bc7dfcc6-pxrl6 started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:12:41.725: INFO: Container coredns ready: true, restart count 0 Nov 27 02:12:41.725: INFO: kube-proxy-wcptt started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:12:41.725: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:12:41.725: INFO: kubernetes-dashboard-65966766b9-hfq6z started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:12:41.726: INFO: Container kubernetes-dashboard ready: true, restart count 0 Nov 27 02:12:41.726: INFO: pod-subpath-test-azure-disk-dynamicpv-szpk started at 2019-11-27 02:12:37 +0000 UTC (1+1 container statuses recorded) Nov 27 02:12:41.726: INFO: Init container init-volume-azure-disk-dynamicpv-szpk ready: false, restart count 0 Nov 27 02:12:41.726: INFO: Container test-container-subpath-azure-disk-dynamicpv-szpk ready: false, restart count 0 Nov 27 02:12:41.726: INFO: security-context-1e5313de-16c5-4161-8190-d9a40433987d started at 2019-11-27 02:08:50 +0000 UTC (0+1 container statuses recorded) Nov 27 02:12:41.726: INFO: Container write-pod ready: false, restart count 0 Nov 27 02:12:41.726: INFO: azure-cni-networkmonitor-wcplr started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:12:41.726: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:12:41.726: INFO: 
keyvault-flexvolume-q956v started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:12:41.726: INFO: Container keyvault-flexvolume ready: true, restart count 0 W1127 02:12:41.728727 14154 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 27 02:12:41.751: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:12:41.752: INFO: Logging node info for node k8s-master-27910301-0 Nov 27 02:12:41.754: INFO: Node Info: &Node{ObjectMeta:{k8s-master-27910301-0 /api/v1/nodes/k8s-master-27910301-0 c69dd001-f5f0-495c-861a-fcdc5c4685fa 2874 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-27910301-0 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master: node.kubernetes.io/instance-type:Standard_DS2_v2 topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachines/k8s-master-27910301-0,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: 
{{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:11:39 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:11:39 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:11:39 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:11:39 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-master-27910301-0,},NodeAddress{Type:InternalIP,Address:10.255.255.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4fbcdc9a6aec4bf5bbacc8e6c7d0effc,SystemUUID:6F669D5F-47AF-F34B-A88F-B20FF17F57DA,BootID:5ce46fbc-c556-476d-85de-4196f4d7878c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager-amd64:v9.0.2],SizeBytes:83076028,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:12:41.754: INFO: Logging kubelet events for node k8s-master-27910301-0 Nov 27 02:12:41.759: INFO: Logging pods the kubelet thinks is on node k8s-master-27910301-0 Nov 27 02:12:41.782: INFO: 
kube-addon-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:12:41.782: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 27 02:12:41.782: INFO: kube-apiserver-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:12:41.782: INFO: Container kube-apiserver ready: true, restart count 0 Nov 27 02:12:41.782: INFO: kube-controller-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:12:41.782: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 27 02:12:41.782: INFO: kube-scheduler-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:12:41.782: INFO: Container kube-scheduler ready: true, restart count 0 Nov 27 02:12:41.782: INFO: azure-ip-masq-agent-x5bzr started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded) Nov 27 02:12:41.782: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:12:41.782: INFO: azure-cni-networkmonitor-5jpkd started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded) Nov 27 02:12:41.782: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:12:41.782: INFO: kube-proxy-krs8d started at 2019-11-27 02:01:05 +0000 UTC (0+1 container statuses recorded) Nov 27 02:12:41.782: INFO: Container kube-proxy ready: true, restart count 0 W1127 02:12:41.785434 14154 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 27 02:12:41.803: INFO: Latency metrics for node k8s-master-27910301-0 Nov 27 02:12:41.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-4613" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sazure\-disk\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\ssubPath\sshould\sverify\scontainer\scannot\swrite\sto\ssubpath\sreadonly\svolumes\s\[Slow\]$'
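The `--ginkgo.focus` value above is an escaped regular expression anchored to a single full spec name. A quick sanity check that the regex selects exactly this test (the `name` string below is reassembled from the suite/driver/testpattern hierarchy shown in the failure output; the regex is copied from the command with shell quoting removed):

```python
import re

# --ginkgo.focus value from the repro command above, shell quoting removed.
focus = (r"Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s"
         r"\[Driver\:\sazure\-disk\]\s\[Testpattern\:\sDynamic\sPV\s"
         r"\(default\sfs\)\]\ssubPath\sshould\sverify\scontainer\scannot\s"
         r"write\sto\ssubpath\sreadonly\svolumes\s\[Slow\]$")

# Full spec name as ginkgo assembles it from the nested Describe/It blocks.
name = ("Kubernetes e2e suite [sig-storage] In-tree Volumes "
        "[Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] "
        "subPath should verify container cannot write to subpath readonly "
        "volumes [Slow]")

print(bool(re.search(focus, name)))  # → True
```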
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:417 Nov 27 02:25:10.227: while waiting for subpath failure Unexpected error: <*errors.errorString | 0xc002724d20>: { s: "subpath container unexpectedly terminated", } subpath container unexpectedly terminated occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:740from junit_08.xml
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:101 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 27 02:20:31.510: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-1676 STEP: Waiting for a default service account to be provisioned in namespace [It] should verify container cannot write to subpath readonly volumes [Slow] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:417 Nov 27 02:20:31.657: INFO: Creating resource for dynamic PV Nov 27 02:20:31.657: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(azure-disk) supported size:{ 1Mi} STEP: creating a StorageClass provisioning-1676-azure-disk-sc45k2h STEP: creating a claim Nov 27 02:20:31.660: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 27 02:20:31.665: INFO: Waiting up to 5m0s for PersistentVolumeClaims [azure-disk25vdl] to have phase Bound Nov 27 02:20:31.673: INFO: PersistentVolumeClaim azure-disk25vdl found but phase is Pending instead of Bound. Nov 27 02:20:33.677: INFO: PersistentVolumeClaim azure-disk25vdl found but phase is Pending instead of Bound. Nov 27 02:20:35.683: INFO: PersistentVolumeClaim azure-disk25vdl found but phase is Pending instead of Bound. Nov 27 02:20:37.687: INFO: PersistentVolumeClaim azure-disk25vdl found but phase is Pending instead of Bound. Nov 27 02:20:39.690: INFO: PersistentVolumeClaim azure-disk25vdl found but phase is Pending instead of Bound.
Nov 27 02:20:41.694: INFO: PersistentVolumeClaim azure-disk25vdl found but phase is Pending instead of Bound. Nov 27 02:20:43.697: INFO: PersistentVolumeClaim azure-disk25vdl found and phase=Bound (12.031742387s) STEP: Creating pod to format volume volume-prep-provisioning-1676 Nov 27 02:20:43.706: INFO: Waiting up to 5m0s for pod "volume-prep-provisioning-1676" in namespace "provisioning-1676" to be "success or failure" Nov 27 02:20:43.713: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 6.810554ms Nov 27 02:20:45.715: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009441824s [identical Phase="Pending" polls at ~2s intervals from 02:20:47.718 through 02:23:56.064 elided; Elapsed 4.012477796s through 3m12.358335711s with no phase change] Nov 27 02:23:58.068: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false.
Elapsed: 3m14.362409248s Nov 27 02:24:00.078: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.371663925s Nov 27 02:24:02.080: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.374498651s Nov 27 02:24:04.085: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.378602485s Nov 27 02:24:06.088: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.381795411s Nov 27 02:24:08.091: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.385073736s Nov 27 02:24:10.096: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.38962537s Nov 27 02:24:12.099: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.392758291s Nov 27 02:24:14.102: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.395555709s Nov 27 02:24:16.104: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.398481827s Nov 27 02:24:18.110: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.403796062s Nov 27 02:24:20.113: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.407395182s Nov 27 02:24:22.117: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.410617299s Nov 27 02:24:24.120: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.414189016s Nov 27 02:24:26.125: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m42.418556439s Nov 27 02:24:28.128: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.422071155s Nov 27 02:24:30.132: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.425632169s Nov 27 02:24:32.135: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.429183682s Nov 27 02:24:34.139: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.433094997s Nov 27 02:24:36.143: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.437270512s Nov 27 02:24:38.147: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.440747921s Nov 27 02:24:40.150: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.444213129s Nov 27 02:24:42.154: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.447859437s Nov 27 02:24:44.158: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.451627845s Nov 27 02:24:46.161: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.454595646s Nov 27 02:24:48.164: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.457855047s Nov 27 02:24:50.168: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.462473459s Nov 27 02:24:52.171: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.465475856s Nov 27 02:24:54.175: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. 
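The 2-second polling cadence in the "Elapsed" lines above comes from the e2e framework's wait helpers. A minimal sketch of that poll-until-phase pattern is below; the function name and the injectable `clock`/`sleep` parameters are illustrative, not the framework's actual Go API (`WaitForPodSuccessInNamespace` et al.).

```python
import time

def wait_for_phase(get_phase, want, timeout=300.0, interval=2.0,
                   clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() every `interval` seconds until it returns `want`,
    or raise once `timeout` seconds have elapsed -- the same shape as the
    2s polling loop producing the log lines above."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase == want:
            return elapsed
        if elapsed >= timeout:
            # Mirrors the framework's "timed out waiting for the condition"
            raise TimeoutError(f"pod still {phase} after {elapsed:.0f}s")
        sleep(interval)
```

When the deadline passes while the pod is still Pending, the raised error corresponds to the "is not Running: timed out waiting for the condition" failure reported for this run.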
Elapsed: 4m10.469260558s
Nov 27 02:24:56.179: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.472593356s
Nov 27 02:24:58.182: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.475839052s
Nov 27 02:25:00.187: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.48074826s
Nov 27 02:25:02.190: INFO: Pod "volume-prep-provisioning-1676": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.484015254s
Nov 27 02:25:04.194: INFO: Pod "volume-prep-provisioning-1676": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4m20.48768435s
STEP: Saw pod success
Nov 27 02:25:04.194: INFO: Pod "volume-prep-provisioning-1676" satisfied condition "success or failure"
Nov 27 02:25:04.194: INFO: Deleting pod "volume-prep-provisioning-1676" in namespace "provisioning-1676"
Nov 27 02:25:04.209: INFO: Wait up to 5m0s for pod "volume-prep-provisioning-1676" to be fully deleted
STEP: Creating pod pod-subpath-test-azure-disk-dynamicpv-p7ss
STEP: Checking for subpath error in container status
Nov 27 02:25:10.227: FAIL: while waiting for subpath failure
Unexpected error:
    <*errors.errorString | 0xc002724d20>: {
        s: "subpath container unexpectedly terminated",
    }
    subpath container unexpectedly terminated
occurred
Nov 27 02:25:10.228: INFO: Deleting pod "pod-subpath-test-azure-disk-dynamicpv-p7ss" in namespace "provisioning-1676"
Nov 27 02:25:10.235: INFO: Wait up to 5m0s for pod "pod-subpath-test-azure-disk-dynamicpv-p7ss" to be fully deleted
STEP: Deleting pod
Nov 27 02:25:10.238: INFO: Deleting pod "pod-subpath-test-azure-disk-dynamicpv-p7ss" in namespace "provisioning-1676"
STEP: Deleting pvc
Nov 27 02:25:10.241: INFO: Deleting PersistentVolumeClaim "azure-disk25vdl"
Nov 27 02:25:10.244: INFO: Waiting up to 5m0s for PersistentVolume pvc-73246121-d31d-4680-9b21-4b7330d473b8 to get deleted
Nov 27
02:25:10.253: INFO: PersistentVolume pvc-73246121-d31d-4680-9b21-4b7330d473b8 found and phase=Bound (8.716669ms)
Nov 27 02:25:15.256: INFO: PersistentVolume pvc-73246121-d31d-4680-9b21-4b7330d473b8 found and phase=Failed (5.012313802s)
...
Nov 27 02:26:30.310:
INFO: PersistentVolume pvc-73246121-d31d-4680-9b21-4b7330d473b8 found and phase=Failed (1m20.066420843s)
Nov 27 02:26:35.314: INFO: PersistentVolume pvc-73246121-d31d-4680-9b21-4b7330d473b8 found and phase=Failed (1m25.070484282s)
Nov 27 02:26:40.319: INFO: PersistentVolume pvc-73246121-d31d-4680-9b21-4b7330d473b8 found and phase=Failed (1m30.074653715s)
Nov 27 02:26:45.321: INFO: PersistentVolume pvc-73246121-d31d-4680-9b21-4b7330d473b8 found and phase=Failed (1m35.077512333s)
Nov 27 02:26:50.324: INFO: PersistentVolume pvc-73246121-d31d-4680-9b21-4b7330d473b8 was removed
STEP: Deleting sc
Nov 27 02:26:50.328: INFO: In-tree plugin kubernetes.io/azure-disk is not migrated, not validating any metrics
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "provisioning-1676".
STEP: Found 11 events.
Nov 27 02:26:50.331: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-p7ss: {default-scheduler } Scheduled: Successfully assigned provisioning-1676/pod-subpath-test-azure-disk-dynamicpv-p7ss to k8s-agentpool1-27910301-vmss000000
Nov 27 02:26:50.331: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for volume-prep-provisioning-1676: {default-scheduler } Scheduled: Successfully assigned provisioning-1676/volume-prep-provisioning-1676 to k8s-agentpool1-27910301-vmss000000
Nov 27 02:26:50.331: INFO: At 2019-11-27 02:20:42 +0000 UTC - event for azure-disk25vdl: {persistentvolume-controller } ProvisioningSucceeded: Successfully provisioned volume pvc-73246121-d31d-4680-9b21-4b7330d473b8 using kubernetes.io/azure-disk
Nov 27 02:26:50.331: INFO: At 2019-11-27 02:22:47 +0000 UTC - event for volume-prep-provisioning-1676: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[test-volume], unattached volumes=[test-volume
default-token-f9fwt]: timed out waiting for the condition
Nov 27 02:26:50.331: INFO: At 2019-11-27 02:24:52 +0000 UTC - event for volume-prep-provisioning-1676: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-73246121-d31d-4680-9b21-4b7330d473b8"
Nov 27 02:26:50.331: INFO: At 2019-11-27 02:25:02 +0000 UTC - event for volume-prep-provisioning-1676: {kubelet k8s-agentpool1-27910301-vmss000000} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine
Nov 27 02:26:50.331: INFO: At 2019-11-27 02:25:03 +0000 UTC - event for volume-prep-provisioning-1676: {kubelet k8s-agentpool1-27910301-vmss000000} Created: Created container init-volume-provisioning-1676
Nov 27 02:26:50.331: INFO: At 2019-11-27 02:25:03 +0000 UTC - event for volume-prep-provisioning-1676: {kubelet k8s-agentpool1-27910301-vmss000000} Started: Started container init-volume-provisioning-1676
Nov 27 02:26:50.331: INFO: At 2019-11-27 02:25:06 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-p7ss: {kubelet k8s-agentpool1-27910301-vmss000000} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
Nov 27 02:26:50.331: INFO: At 2019-11-27 02:25:07 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-p7ss: {kubelet k8s-agentpool1-27910301-vmss000000} Created: Created container test-container-subpath-azure-disk-dynamicpv-p7ss
Nov 27 02:26:50.331: INFO: At 2019-11-27 02:25:08 +0000 UTC - event for pod-subpath-test-azure-disk-dynamicpv-p7ss: {kubelet k8s-agentpool1-27910301-vmss000000} Started: Started container test-container-subpath-azure-disk-dynamicpv-p7ss
Nov 27 02:26:50.334: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Nov 27 02:26:50.334: INFO:
Nov 27 02:26:50.336: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000000
Nov 27 02:26:50.340: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000000 /api/v1/nodes/k8s-agentpool1-27910301-vmss000000
433df295-d1c5-449e-b3e0-cf5dcbb47db5 5932 0 2019-11-27 02:02:30 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000000 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:25:14 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:25:14 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:25:14 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:25:14 +0000 UTC,LastTransitionTime:2019-11-27 02:02:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000000,},NodeAddress{Type:InternalIP,Address:10.240.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5cbe44f20b024d2194231594a97855b1,SystemUUID:91700562-C649-C848-A3DD-1E514501D602,BootID:4dc3a524-9111-40e4-8765-ea1a0ccebb06,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e 
mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-239fed30-3cd8-4faf-a0cc-913154702e71],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-239fed30-3cd8-4faf-a0cc-913154702e71,DevicePath:3,},},Config:nil,},} Nov 27 
02:26:50.340: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000000
Nov 27 02:26:50.385: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000000
Nov 27 02:26:50.392: INFO: azure-cni-networkmonitor-f9bdz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:26:50.392: INFO: Container azure-cnms ready: true, restart count 0
Nov 27 02:26:50.392: INFO: azure-injector started at 2019-11-27 02:20:44 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:26:50.392: INFO: Container azure-injector ready: false, restart count 0
Nov 27 02:26:50.392: INFO: kube-proxy-956dz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:26:50.392: INFO: Container kube-proxy ready: true, restart count 0
Nov 27 02:26:50.392: INFO: azure-ip-masq-agent-q77hb started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:26:50.392: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 27 02:26:50.392: INFO: keyvault-flexvolume-gt5vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:26:50.392: INFO: Container keyvault-flexvolume ready: true, restart count 0
Nov 27 02:26:50.392: INFO: blobfuse-flexvol-installer-2c2vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:26:50.392: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0
W1127 02:26:50.394858   14151 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 27 02:26:50.422: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:26:50.422: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:26:50.424: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000001 /api/v1/nodes/k8s-agentpool1-27910301-vmss000001 d9661b2f-8dba-435f-9146-c53e15590b86 5430 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:1 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000001 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:1] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 
DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000001,},NodeAddress{Type:InternalIP,Address:10.240.0.35,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c1701de1799d4bda8bb81a1a74346a2a,SystemUUID:E7C6657A-6794-5043-BF92-958F99FF1F10,BootID:f3be9e11-e33b-462a-b7b8-5177e4209627,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:44100963,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:e51071a0670e003e3936190cfda74cfd6edaa39e0d16fc0b5a5f09dbf09dd1ff 
k8s.gcr.io/metrics-server-amd64:v0.3.4],SizeBytes:39944451,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:26:50.425: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:26:50.429: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000001 Nov 27 02:26:50.436: INFO: azure-ip-masq-agent-sxnbk started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:50.436: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:26:50.436: INFO: metrics-server-855b565c8f-nbxsd started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:50.436: INFO: Container metrics-server ready: true, restart count 0 Nov 27 02:26:50.436: INFO: blobfuse-flexvol-installer-x5wvc started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:50.436: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 27 02:26:50.436: INFO: 
kubernetes-dashboard-65966766b9-hfq6z started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:26:50.436: INFO: Container kubernetes-dashboard ready: true, restart count 0
Nov 27 02:26:50.436: INFO: azure-cni-networkmonitor-wcplr started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:26:50.436: INFO: Container azure-cnms ready: true, restart count 0
Nov 27 02:26:50.436: INFO: keyvault-flexvolume-q956v started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:26:50.436: INFO: Container keyvault-flexvolume ready: true, restart count 0
Nov 27 02:26:50.436: INFO: coredns-56bc7dfcc6-pxrl6 started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:26:50.436: INFO: Container coredns ready: true, restart count 0
Nov 27 02:26:50.436: INFO: kube-proxy-wcptt started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:26:50.436: INFO: Container kube-proxy ready: true, restart count 0
W1127 02:26:50.438857   14151 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
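Timings in this run (e.g. how long volume-prep-provisioning-1676 sat Pending before the azure-disk attach let it start) can be recovered mechanically from the poll lines. A small illustrative parser follows; the regex and helper names are hypothetical, assuming only the `Nov 27 HH:MM:SS.mmm` timestamp format visible in the log above.

```python
import re
from datetime import datetime

# Matches pod phase-poll lines such as:
#   Nov 27 02:22:07.864: INFO: Pod "x": Phase="Pending", Reason="", readiness=false.
POLL = re.compile(r'Nov 27 (\d{2}:\d{2}:\d{2}\.\d{3}): INFO: Pod "[^"]+": Phase="(\w+)"')

def phase_timeline(text):
    """Return [(datetime, phase), ...] for every poll line found in `text`."""
    return [(datetime.strptime(ts, "%H:%M:%S.%f"), phase)
            for ts, phase in POLL.findall(text)]

def seconds_until(text, phase):
    """Seconds from the first poll line to the first line reporting `phase`,
    or None if that phase never appears."""
    timeline = phase_timeline(text)
    first = timeline[0][0]
    for when, p in timeline:
        if p == phase:
            return (when - first).total_seconds()
    return None
```

Applied to the first and last poll lines of this run (02:22:07.864 Pending through 02:25:04.194 Succeeded), `seconds_until(..., "Succeeded")` yields roughly 176 seconds of waiting, consistent with the SuccessfulAttachVolume event landing only at 02:24:52.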
Nov 27 02:26:50.466: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:26:50.466: INFO: Logging node info for node k8s-master-27910301-0 Nov 27 02:26:50.468: INFO: Node Info: &Node{ObjectMeta:{k8s-master-27910301-0 /api/v1/nodes/k8s-master-27910301-0 c69dd001-f5f0-495c-861a-fcdc5c4685fa 6191 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-27910301-0 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master: node.kubernetes.io/instance-type:Standard_DS2_v2 topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachines/k8s-master-27910301-0,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 
DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:26:41 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:26:41 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:26:41 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:26:41 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-master-27910301-0,},NodeAddress{Type:InternalIP,Address:10.255.255.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4fbcdc9a6aec4bf5bbacc8e6c7d0effc,SystemUUID:6F669D5F-47AF-F34B-A88F-B20FF17F57DA,BootID:5ce46fbc-c556-476d-85de-4196f4d7878c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager-amd64:v9.0.2],SizeBytes:83076028,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:26:50.468: INFO: Logging kubelet events for node k8s-master-27910301-0 Nov 27 02:26:50.472: INFO: Logging pods the kubelet thinks is on node k8s-master-27910301-0 Nov 27 02:26:50.478: INFO: 
kube-controller-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:50.478: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 27 02:26:50.478: INFO: kube-scheduler-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:50.478: INFO: Container kube-scheduler ready: true, restart count 0 Nov 27 02:26:50.478: INFO: azure-ip-masq-agent-x5bzr started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:50.478: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:26:50.478: INFO: azure-cni-networkmonitor-5jpkd started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:50.478: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:26:50.478: INFO: kube-proxy-krs8d started at 2019-11-27 02:01:05 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:50.478: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:26:50.478: INFO: kube-addon-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:50.478: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 27 02:26:50.478: INFO: kube-apiserver-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:50.478: INFO: Container kube-apiserver ready: true, restart count 0 W1127 02:26:50.480600 14151 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 27 02:26:50.503: INFO: Latency metrics for node k8s-master-27910301-0 Nov 27 02:26:50.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-1676" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sazure\-disk\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\svolumeIO\sshould\swrite\sfiles\sof\svarious\ssizes\,\sverify\ssize\,\svalidate\scontent\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_io.go:143 Nov 27 02:24:01.102: Unexpected error: <*errors.errorString | 0xc000a47950>: { s: "client pod \"azure-io-client\" not running: timed out waiting for the condition", } client pod "azure-io-client" not running: timed out waiting for the condition occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_io.go:159 from junit_01.xml
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumeIO /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:101 [BeforeEach] [Testpattern: Dynamic PV (default fs)] volumeIO /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 27 02:18:22.756: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json STEP: Building a namespace api object, basename volumeio STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volumeio-9331 STEP: Waiting for a default service account to be provisioned in namespace [It] should write files of various sizes, verify size, validate content [Slow] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_io.go:143 Nov 27 02:18:22.914: INFO: Creating resource for dynamic PV Nov 27 02:18:22.915: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(azure-disk) supported size:{ 1Mi} STEP: creating a StorageClass volumeio-9331-azure-disk-scqspg7 STEP: creating a claim Nov 27 02:18:22.918: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 27 02:18:22.921: INFO: Waiting up to 5m0s for PersistentVolumeClaims [azure-diskl7vpz] to have phase Bound Nov 27 02:18:22.924: INFO: PersistentVolumeClaim azure-diskl7vpz found but phase is Pending instead of Bound. Nov 27 02:18:24.927: INFO: PersistentVolumeClaim azure-diskl7vpz found but phase is Pending instead of Bound. Nov 27 02:18:26.930: INFO: PersistentVolumeClaim azure-diskl7vpz found but phase is Pending instead of Bound. Nov 27 02:18:28.934: INFO: PersistentVolumeClaim azure-diskl7vpz found but phase is Pending instead of Bound. Nov 27 02:18:30.937: INFO: PersistentVolumeClaim azure-diskl7vpz found but phase is Pending instead of Bound. 
Nov 27 02:18:32.941: INFO: PersistentVolumeClaim azure-diskl7vpz found but phase is Pending instead of Bound. Nov 27 02:18:34.944: INFO: PersistentVolumeClaim azure-diskl7vpz found and phase=Bound (12.022214913s) STEP: starting azure-io-client STEP: deleting test file /opt/azure-volumeio-9331-dd_if... Nov 27 02:23:34.968: INFO: ExecWithOptions {Command:[/bin/sh -c rm -f /opt/azure-volumeio-9331-dd_if] Namespace:volumeio-9331 PodName:azure-io-client ContainerName:azure-io-client Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:23:34.968: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:23:35.083: INFO: unable to delete test file /opt/azure-volumeio-9331-dd_if: unable to upgrade connection: container not found ("azure-io-client") error ignored, continuing test STEP: deleting client pod "azure-io-client"... Nov 27 02:23:35.083: INFO: Deleting pod "azure-io-client" in namespace "volumeio-9331" Nov 27 02:23:35.088: INFO: Wait up to 5m0s for pod "azure-io-client" to be fully deleted Nov 27 02:23:41.101: INFO: sleeping a bit so kubelet can unmount and detach the volume Nov 27 02:24:01.102: FAIL: Unexpected error: <*errors.errorString | 0xc000a47950>: { s: "client pod \"azure-io-client\" not running: timed out waiting for the condition", } client pod "azure-io-client" not running: timed out waiting for the condition occurred STEP: Deleting pvc Nov 27 02:24:01.102: INFO: Deleting PersistentVolumeClaim "azure-diskl7vpz" Nov 27 02:24:01.107: INFO: Waiting up to 5m0s for PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd to get deleted Nov 27 02:24:01.113: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Bound (5.708646ms) Nov 27 02:24:06.116: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (5.009326577s) Nov 27 02:24:11.120: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd 
found and phase=Failed (10.013481106s) Nov 27 02:24:16.125: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (15.018157131s) Nov 27 02:24:21.128: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (20.021440538s) Nov 27 02:24:26.131: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (25.024000932s) Nov 27 02:24:31.134: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (30.027220724s) Nov 27 02:24:36.138: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (35.031327815s) Nov 27 02:24:41.141: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (40.034592793s) Nov 27 02:24:46.145: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (45.038088265s) Nov 27 02:24:51.149: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (50.041638631s) Nov 27 02:24:56.152: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (55.045143689s) Nov 27 02:25:01.155: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (1m0.04855424s) Nov 27 02:25:06.159: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (1m5.052214586s) Nov 27 02:25:11.163: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (1m10.056012626s) Nov 27 02:25:16.166: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (1m15.059279056s) Nov 27 02:25:21.170: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (1m20.062877781s) Nov 27 02:25:26.173: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (1m25.065811995s) Nov 27 02:25:31.176: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found 
and phase=Failed (1m30.069389707s) Nov 27 02:25:36.179: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (1m35.072353708s) Nov 27 02:25:41.183: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (1m40.076088809s) Nov 27 02:25:46.187: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (1m45.079834204s) Nov 27 02:25:51.190: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (1m50.08336119s) Nov 27 02:25:56.194: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (1m55.087001872s) Nov 27 02:26:01.198: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (2m0.090623747s) Nov 27 02:26:06.201: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (2m5.094100814s) Nov 27 02:26:11.204: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (2m10.096979572s) Nov 27 02:26:16.207: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (2m15.100575428s) Nov 27 02:26:21.211: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (2m20.103992178s) Nov 27 02:26:26.214: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (2m25.10714062s) Nov 27 02:26:31.217: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd found and phase=Failed (2m30.110289355s) Nov 27 02:26:36.220: INFO: PersistentVolume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd was removed STEP: Deleting sc Nov 27 02:26:36.224: INFO: In-tree plugin kubernetes.io/azure-disk is not migrated, not validating any metrics [AfterEach] [Testpattern: Dynamic PV (default fs)] volumeIO /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace 
"volumeio-9331". STEP: Found 6 events. Nov 27 02:26:36.228: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for azure-io-client: {default-scheduler } Scheduled: Successfully assigned volumeio-9331/azure-io-client to k8s-agentpool1-27910301-vmss000000 Nov 27 02:26:36.228: INFO: At 2019-11-27 02:18:33 +0000 UTC - event for azure-diskl7vpz: {persistentvolume-controller } ProvisioningSucceeded: Successfully provisioned volume pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd using kubernetes.io/azure-disk Nov 27 02:26:36.228: INFO: At 2019-11-27 02:20:38 +0000 UTC - event for azure-io-client: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[io-volume-volumeio-9331], unattached volumes=[io-volume-volumeio-9331 default-token-xhf4g]: timed out waiting for the condition Nov 27 02:26:36.228: INFO: At 2019-11-27 02:22:52 +0000 UTC - event for azure-io-client: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[io-volume-volumeio-9331], unattached volumes=[default-token-xhf4g io-volume-volumeio-9331]: timed out waiting for the condition Nov 27 02:26:36.228: INFO: At 2019-11-27 02:23:41 +0000 UTC - event for azure-io-client: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd" Nov 27 02:26:36.228: INFO: At 2019-11-27 02:25:09 +0000 UTC - event for azure-io-client: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[io-volume-volumeio-9331 default-token-xhf4g], unattached volumes=[io-volume-volumeio-9331 default-token-xhf4g]: timed out waiting for the condition Nov 27 02:26:36.230: INFO: POD NODE PHASE GRACE CONDITIONS Nov 27 02:26:36.230: INFO: Nov 27 02:26:36.232: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:26:36.234: INFO: Node Info: 
&Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000000 /api/v1/nodes/k8s-agentpool1-27910301-vmss000000 433df295-d1c5-449e-b3e0-cf5dcbb47db5 5932 0 2019-11-27 02:02:30 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000000 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} 
{<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:25:14 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:25:14 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:25:14 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:25:14 +0000 UTC,LastTransitionTime:2019-11-27 02:02:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000000,},NodeAddress{Type:InternalIP,Address:10.240.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5cbe44f20b024d2194231594a97855b1,SystemUUID:91700562-C649-C848-A3DD-1E514501D602,BootID:4dc3a524-9111-40e4-8765-ea1a0ccebb06,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e 
mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-239fed30-3cd8-4faf-a0cc-913154702e71],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-239fed30-3cd8-4faf-a0cc-913154702e71,DevicePath:3,},},Config:nil,},} Nov 27 
02:26:36.235: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:26:36.239: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000000 Nov 27 02:26:36.245: INFO: kube-proxy-956dz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:36.245: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:26:36.245: INFO: azure-ip-masq-agent-q77hb started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:36.245: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:26:36.245: INFO: keyvault-flexvolume-gt5vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:36.245: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 27 02:26:36.245: INFO: blobfuse-flexvol-installer-2c2vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:36.245: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 27 02:26:36.245: INFO: azure-injector started at 2019-11-27 02:20:44 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:36.245: INFO: Container azure-injector ready: false, restart count 0 Nov 27 02:26:36.245: INFO: azure-cni-networkmonitor-f9bdz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:36.245: INFO: Container azure-cnms ready: true, restart count 0 W1127 02:26:36.248576 14152 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Nov 27 02:26:36.274: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:26:36.274: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:26:36.276: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000001 /api/v1/nodes/k8s-agentpool1-27910301-vmss000001 d9661b2f-8dba-435f-9146-c53e15590b86 5430 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:1 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000001 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:1] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 
DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000001,},NodeAddress{Type:InternalIP,Address:10.240.0.35,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c1701de1799d4bda8bb81a1a74346a2a,SystemUUID:E7C6657A-6794-5043-BF92-958F99FF1F10,BootID:f3be9e11-e33b-462a-b7b8-5177e4209627,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:44100963,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:e51071a0670e003e3936190cfda74cfd6edaa39e0d16fc0b5a5f09dbf09dd1ff 
k8s.gcr.io/metrics-server-amd64:v0.3.4],SizeBytes:39944451,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:26:36.277: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:26:36.281: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000001 Nov 27 02:26:36.288: INFO: azure-ip-masq-agent-sxnbk started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:36.288: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:26:36.288: INFO: metrics-server-855b565c8f-nbxsd started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:36.288: INFO: Container metrics-server ready: true, restart count 0 Nov 27 02:26:36.288: INFO: blobfuse-flexvol-installer-x5wvc started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:36.288: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 27 02:26:36.288: INFO: 
azure-cni-networkmonitor-wcplr started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:36.288: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:26:36.288: INFO: keyvault-flexvolume-q956v started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:36.288: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 27 02:26:36.288: INFO: coredns-56bc7dfcc6-pxrl6 started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:36.288: INFO: Container coredns ready: true, restart count 0 Nov 27 02:26:36.288: INFO: kube-proxy-wcptt started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:36.288: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:26:36.288: INFO: kubernetes-dashboard-65966766b9-hfq6z started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:36.288: INFO: Container kubernetes-dashboard ready: true, restart count 0 W1127 02:26:36.290984 14152 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Nov 27 02:26:36.313: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:26:36.313: INFO: Logging node info for node k8s-master-27910301-0 Nov 27 02:26:36.315: INFO: Node Info: &Node{ObjectMeta:{k8s-master-27910301-0 /api/v1/nodes/k8s-master-27910301-0 c69dd001-f5f0-495c-861a-fcdc5c4685fa 5287 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-27910301-0 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master: node.kubernetes.io/instance-type:Standard_DS2_v2 topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachines/k8s-master-27910301-0,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 
DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:21:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:21:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:21:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:21:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-master-27910301-0,},NodeAddress{Type:InternalIP,Address:10.255.255.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4fbcdc9a6aec4bf5bbacc8e6c7d0effc,SystemUUID:6F669D5F-47AF-F34B-A88F-B20FF17F57DA,BootID:5ce46fbc-c556-476d-85de-4196f4d7878c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager-amd64:v9.0.2],SizeBytes:83076028,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:26:36.316: INFO: Logging kubelet events for node k8s-master-27910301-0 Nov 27 02:26:36.319: INFO: Logging pods the kubelet thinks is on node k8s-master-27910301-0 Nov 27 02:26:36.323: INFO: 
kube-addon-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:36.323: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 27 02:26:36.323: INFO: kube-apiserver-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:36.323: INFO: Container kube-apiserver ready: true, restart count 0 Nov 27 02:26:36.323: INFO: kube-controller-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:36.323: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 27 02:26:36.323: INFO: kube-scheduler-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:36.323: INFO: Container kube-scheduler ready: true, restart count 0 Nov 27 02:26:36.323: INFO: azure-ip-masq-agent-x5bzr started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:36.323: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:26:36.323: INFO: azure-cni-networkmonitor-5jpkd started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:36.323: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:26:36.323: INFO: kube-proxy-krs8d started at 2019-11-27 02:01:05 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:36.323: INFO: Container kube-proxy ready: true, restart count 0 W1127 02:26:36.326272 14152 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 27 02:26:36.342: INFO: Latency metrics for node k8s-master-27910301-0 Nov 27 02:26:36.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volumeio-9331" for this suite.
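A hedged aside: the volume-mode checks these multiVolume tests run (visible later in this log as `ExecWithOptions {Command:[/bin/sh -c test -d /mnt/volume1] ...}` and `test -b ...`) reduce to plain POSIX `test` probes inside the write-pod — `-d` for a Filesystem-mode mount point, `-b` for a Block-mode device node. A minimal local sketch (the temp directory is illustrative, not a path from the cluster):

```shell
#!/bin/sh
# Illustrative only: mimic the e2e volume-mode probe on a local path.
# In the real test these commands run via ExecWithOptions inside the pod.
mnt=$(mktemp -d)            # stands in for /mnt/volume1 (Filesystem mode)

if test -d "$mnt"; then
  echo "volume mode: Filesystem"
fi
# A Block-mode volume would instead appear as a block device node,
# so test -b succeeds for it and fails for a directory mount:
if ! test -b "$mnt"; then
  echo "not a block device"
fi

rmdir "$mnt"
```

The suite runs both probes and cross-checks them against the PV's declared `volumeMode`, which is why each check appears twice (`test -d` then `test -b`) per volume in the exec log.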
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sazure\-disk\]\s\[Testpattern\:\sDynamic\sPV\s\(filesystem\svolmode\)\]\smultiVolume\s\[Slow\]\sshould\saccess\sto\stwo\svolumes\swith\sthe\ssame\svolume\smode\sand\sretain\sdata\sacross\spod\srecreation\son\sdifferent\snode$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:159 Nov 27 02:13:50.813: Unexpected error: <*errors.errorString | 0xc00094c730>: { s: "pod \"security-context-1e5313de-16c5-4161-8190-d9a40433987d\" is not Running: timed out waiting for the condition", } pod "security-context-1e5313de-16c5-4161-8190-d9a40433987d" is not Running: timed out waiting for the condition occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:360 from junit_06.xml
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:101 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:88 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 27 02:03:42.478: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json STEP: Building a namespace api object, basename multivolume Nov 27 02:03:42.548: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled. Nov 27 02:03:42.573: INFO: Found ClusterRoles; assuming RBAC is enabled. STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in multivolume-2138 STEP: Waiting for a default service account to be provisioned in namespace [It] should access to two volumes with the same volume mode and retain data across pod recreation on different node /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:159 Nov 27 02:03:42.683: INFO: Creating resource for dynamic PV Nov 27 02:03:42.683: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(azure-disk) supported size:{ 1Mi} STEP: creating a StorageClass multivolume-2138-azure-disk-scpvfhf STEP: creating a claim Nov 27 02:03:42.691: INFO: Waiting up to 5m0s for PersistentVolumeClaims [azure-diskqs9kf] to have phase Bound Nov 27 02:03:42.703: INFO: PersistentVolumeClaim azure-diskqs9kf found but phase is Pending instead of Bound. Nov 27 02:03:44.706: INFO: PersistentVolumeClaim azure-diskqs9kf found but phase is Pending instead of Bound.
Nov 27 02:03:46.709: INFO: PersistentVolumeClaim azure-diskqs9kf found but phase is Pending instead of Bound. Nov 27 02:03:48.712: INFO: PersistentVolumeClaim azure-diskqs9kf found but phase is Pending instead of Bound. Nov 27 02:03:50.716: INFO: PersistentVolumeClaim azure-diskqs9kf found but phase is Pending instead of Bound. Nov 27 02:03:52.719: INFO: PersistentVolumeClaim azure-diskqs9kf found but phase is Pending instead of Bound. Nov 27 02:03:54.722: INFO: PersistentVolumeClaim azure-diskqs9kf found and phase=Bound (12.030673172s) Nov 27 02:03:54.726: INFO: Creating resource for dynamic PV Nov 27 02:03:54.726: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(azure-disk) supported size:{ 1Mi} STEP: creating a StorageClass multivolume-2138-azure-disk-scqtxqn STEP: creating a claim Nov 27 02:03:54.732: INFO: Waiting up to 5m0s for PersistentVolumeClaims [azure-disk486xr] to have phase Bound Nov 27 02:03:54.738: INFO: PersistentVolumeClaim azure-disk486xr found but phase is Pending instead of Bound. Nov 27 02:03:56.740: INFO: PersistentVolumeClaim azure-disk486xr found but phase is Pending instead of Bound. Nov 27 02:03:58.745: INFO: PersistentVolumeClaim azure-disk486xr found but phase is Pending instead of Bound. Nov 27 02:04:00.748: INFO: PersistentVolumeClaim azure-disk486xr found but phase is Pending instead of Bound. Nov 27 02:04:02.754: INFO: PersistentVolumeClaim azure-disk486xr found but phase is Pending instead of Bound. Nov 27 02:04:04.758: INFO: PersistentVolumeClaim azure-disk486xr found but phase is Pending instead of Bound.
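The repeated "Waiting up to 5m0s ... to have phase Bound" / "found but phase is Pending" lines above come from the framework's poll-until-timeout loop (roughly a 2-second interval against a 5-minute budget, failing with "timed out waiting for the condition"). A rough shell equivalent, demoed with a marker file instead of a live cluster (the `kubectl` predicate in the comment is a hypothetical way to do the same check by hand):

```shell
#!/bin/sh
# Sketch of the framework's wait loop: retry a predicate at a fixed
# interval until it succeeds or the timeout budget is spent.
wait_for() {
  cmd=$1; timeout=$2; interval=${3:-2}
  elapsed=0
  while ! eval "$cmd"; do
    elapsed=$((elapsed + interval))
    [ "$elapsed" -ge "$timeout" ] && return 1   # timed out waiting for the condition
    sleep "$interval"
  done
  return 0
}

# Demo predicate: a marker file standing in for "PVC phase is Bound".
# Against a real cluster the predicate might be (hypothetical usage):
#   kubectl -n multivolume-2138 get pvc azure-diskqs9kf \
#     -o jsonpath='{.status.phase}' | grep -qx Bound
marker=$(mktemp)
wait_for "test -f $marker" 10 1 && echo "condition met"
rm -f "$marker"
```

The same loop shape also explains the later "Waiting up to 5m0s for PersistentVolume ... to get deleted" sequences, which poll with a 5-second interval instead.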
Nov 27 02:04:06.766: INFO: PersistentVolumeClaim azure-disk486xr found and phase=Bound (12.0341872s) STEP: Creating pod on {Name: Selector:map[] Affinity:nil} with multiple volumes STEP: Checking if the volume1 exists as expected volume mode (Filesystem) Nov 27 02:08:38.787: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/volume1] Namespace:multivolume-2138 PodName:security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:08:38.787: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:08:39.021: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /mnt/volume1] Namespace:multivolume-2138 PodName:security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:08:39.021: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json STEP: Checking if write to the volume1 works properly Nov 27 02:08:39.173: INFO: ExecWithOptions {Command:[/bin/sh -c echo ZO0PMaUal5OExMVwHGVpImrR9QcsWvl+0Mq4HqKbjG8vzaOEQFqQzs3V+0LeR26j4gEq8dmfbY2HULDu6LZCJQ== | base64 -d | sha256sum] Namespace:multivolume-2138 PodName:security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:08:39.173: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:08:39.324: INFO: ExecWithOptions {Command:[/bin/sh -c echo ZO0PMaUal5OExMVwHGVpImrR9QcsWvl+0Mq4HqKbjG8vzaOEQFqQzs3V+0LeR26j4gEq8dmfbY2HULDu6LZCJQ== | base64 -d | dd of=/mnt/volume1/file1.txt bs=64 count=1] Namespace:multivolume-2138 PodName:security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:08:39.324: INFO: >>>
kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json STEP: Checking if read from the volume1 works properly Nov 27 02:08:39.481: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume1/file1.txt bs=64 count=1 | sha256sum] Namespace:multivolume-2138 PodName:security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:08:39.481: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:08:39.641: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume1/file1.txt bs=64 count=1 | sha256sum | grep -Fq 8e9d1285e14fdc2f9a7352a1c0e3481a19981f5a60a411f0860cf40375dc2d38] Namespace:multivolume-2138 PodName:security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:08:39.641: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json STEP: Checking if the volume2 exists as expected volume mode (Filesystem) Nov 27 02:08:39.800: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/volume2] Namespace:multivolume-2138 PodName:security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:08:39.800: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:08:39.960: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /mnt/volume2] Namespace:multivolume-2138 PodName:security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:08:39.960: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json STEP: Checking if write to the volume2 works properly Nov 27 02:08:40.130: INFO: ExecWithOptions {Command:[/bin/sh -c echo
uXpFe1ZOmsOAv0jL5PChbKg4FG80V/gtgAwHVJxUQdymimGIGIm0W56gjWyFjIeTt1lC2j54YfyFlLRhQHa+PA== | base64 -d | sha256sum] Namespace:multivolume-2138 PodName:security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:08:40.130: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:08:40.281: INFO: ExecWithOptions {Command:[/bin/sh -c echo uXpFe1ZOmsOAv0jL5PChbKg4FG80V/gtgAwHVJxUQdymimGIGIm0W56gjWyFjIeTt1lC2j54YfyFlLRhQHa+PA== | base64 -d | dd of=/mnt/volume2/file1.txt bs=64 count=1] Namespace:multivolume-2138 PodName:security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:08:40.281: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json STEP: Checking if read from the volume2 works properly Nov 27 02:08:40.447: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume2/file1.txt bs=64 count=1 | sha256sum] Namespace:multivolume-2138 PodName:security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:08:40.447: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:08:40.611: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume2/file1.txt bs=64 count=1 | sha256sum | grep -Fq bb4d84e64b1613a0af255f7de880ffd330d560b3f7cf39b84de27547620138b9] Namespace:multivolume-2138 PodName:security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:08:40.611: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:08:40.779: INFO: Deleting pod "security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0" in namespace
"multivolume-2138" Nov 27 02:08:40.783: INFO: Wait up to 5m0s for pod "security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0" to be fully deleted STEP: Creating pod on {Name: Selector:map[] Affinity:&Affinity{NodeAffinity:&NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{},MatchFields:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:metadata.name,Operator:NotIn,Values:[k8s-agentpool1-27910301-vmss000000],},},},},},PreferredDuringSchedulingIgnoredDuringExecution:[]PreferredSchedulingTerm{},},PodAffinity:nil,PodAntiAffinity:nil,}} with multiple volumes Nov 27 02:13:50.813: FAIL: Unexpected error: <*errors.errorString | 0xc00094c730>: { s: "pod \"security-context-1e5313de-16c5-4161-8190-d9a40433987d\" is not Running: timed out waiting for the condition", } pod "security-context-1e5313de-16c5-4161-8190-d9a40433987d" is not Running: timed out waiting for the condition occurred Nov 27 02:13:50.814: INFO: Deleting pod "security-context-1e5313de-16c5-4161-8190-d9a40433987d" in namespace "multivolume-2138" Nov 27 02:13:50.817: INFO: Wait up to 5m0s for pod "security-context-1e5313de-16c5-4161-8190-d9a40433987d" to be fully deleted STEP: Deleting pvc Nov 27 02:14:00.830: INFO: Deleting PersistentVolumeClaim "azure-diskqs9kf" Nov 27 02:14:00.836: INFO: Waiting up to 5m0s for PersistentVolume pvc-bd9b0944-89fc-4d5d-b1c2-10fd55c5d344 to get deleted Nov 27 02:14:00.842: INFO: PersistentVolume pvc-bd9b0944-89fc-4d5d-b1c2-10fd55c5d344 found and phase=Bound (6.07345ms) Nov 27 02:14:05.844: INFO: PersistentVolume pvc-bd9b0944-89fc-4d5d-b1c2-10fd55c5d344 found and phase=Failed (5.008580213s) Nov 27 02:14:10.848: INFO: PersistentVolume pvc-bd9b0944-89fc-4d5d-b1c2-10fd55c5d344 found and phase=Failed (10.012677365s) Nov 27 02:14:15.851: INFO: PersistentVolume pvc-bd9b0944-89fc-4d5d-b1c2-10fd55c5d344 found and phase=Failed (15.015291279s) Nov 27
02:14:20.854: INFO: PersistentVolume pvc-bd9b0944-89fc-4d5d-b1c2-10fd55c5d344 found and phase=Failed (20.018048572s) Nov 27 02:14:25.856: INFO: PersistentVolume pvc-bd9b0944-89fc-4d5d-b1c2-10fd55c5d344 found and phase=Failed (25.020487037s) Nov 27 02:14:30.859: INFO: PersistentVolume pvc-bd9b0944-89fc-4d5d-b1c2-10fd55c5d344 found and phase=Failed (30.02306248s) Nov 27 02:14:35.866: INFO: PersistentVolume pvc-bd9b0944-89fc-4d5d-b1c2-10fd55c5d344 found and phase=Failed (35.030410439s) Nov 27 02:14:40.869: INFO: PersistentVolume pvc-bd9b0944-89fc-4d5d-b1c2-10fd55c5d344 found and phase=Failed (40.033567241s) Nov 27 02:14:45.872: INFO: PersistentVolume pvc-bd9b0944-89fc-4d5d-b1c2-10fd55c5d344 found and phase=Failed (45.036387717s) Nov 27 02:14:50.875: INFO: PersistentVolume pvc-bd9b0944-89fc-4d5d-b1c2-10fd55c5d344 found and phase=Failed (50.039300171s) Nov 27 02:14:55.879: INFO: PersistentVolume pvc-bd9b0944-89fc-4d5d-b1c2-10fd55c5d344 found and phase=Failed (55.043481312s) Nov 27 02:15:00.882: INFO: PersistentVolume pvc-bd9b0944-89fc-4d5d-b1c2-10fd55c5d344 found and phase=Failed (1m0.046592923s) Nov 27 02:15:05.885: INFO: PersistentVolume pvc-bd9b0944-89fc-4d5d-b1c2-10fd55c5d344 found and phase=Failed (1m5.049040307s) Nov 27 02:15:10.888: INFO: PersistentVolume pvc-bd9b0944-89fc-4d5d-b1c2-10fd55c5d344 found and phase=Failed (1m10.051835871s) Nov 27 02:15:15.890: INFO: PersistentVolume pvc-bd9b0944-89fc-4d5d-b1c2-10fd55c5d344 found and phase=Failed (1m15.054385412s) Nov 27 02:15:20.893: INFO: PersistentVolume pvc-bd9b0944-89fc-4d5d-b1c2-10fd55c5d344 was removed STEP: Deleting sc STEP: Deleting pvc Nov 27 02:15:20.897: INFO: Deleting PersistentVolumeClaim "azure-disk486xr" Nov 27 02:15:20.910: INFO: Waiting up to 5m0s for PersistentVolume pvc-0110776b-bbfc-4c10-9501-bd823eec8397 to get deleted Nov 27 02:15:20.916: INFO: PersistentVolume pvc-0110776b-bbfc-4c10-9501-bd823eec8397 found and phase=Bound (6.292451ms) Nov 27 02:15:25.919: INFO: PersistentVolume
pvc-0110776b-bbfc-4c10-9501-bd823eec8397 found and phase=Released (5.009291253s) Nov 27 02:15:30.922: INFO: PersistentVolume pvc-0110776b-bbfc-4c10-9501-bd823eec8397 was removed STEP: Deleting sc Nov 27 02:15:30.926: INFO: In-tree plugin kubernetes.io/azure-disk is not migrated, not validating any metrics [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "multivolume-2138". STEP: Found 18 events. Nov 27 02:15:30.929: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for security-context-1e5313de-16c5-4161-8190-d9a40433987d: {default-scheduler } Scheduled: Successfully assigned multivolume-2138/security-context-1e5313de-16c5-4161-8190-d9a40433987d to k8s-agentpool1-27910301-vmss000001 Nov 27 02:15:30.929: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0: {default-scheduler } Scheduled: Successfully assigned multivolume-2138/security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0 to k8s-agentpool1-27910301-vmss000000 Nov 27 02:15:30.929: INFO: At 2019-11-27 02:03:53 +0000 UTC - event for azure-diskqs9kf: {persistentvolume-controller } ProvisioningSucceeded: Successfully provisioned volume pvc-bd9b0944-89fc-4d5d-b1c2-10fd55c5d344 using kubernetes.io/azure-disk Nov 27 02:15:30.929: INFO: At 2019-11-27 02:04:05 +0000 UTC - event for azure-disk486xr: {persistentvolume-controller } ProvisioningSucceeded: Successfully provisioned volume pvc-0110776b-bbfc-4c10-9501-bd823eec8397 using kubernetes.io/azure-disk Nov 27 02:15:30.929: INFO: At 2019-11-27 02:06:09 +0000 UTC - event for security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1 volume2], unattached volumes=[volume1 volume2 default-token-9624d]: timed out waiting
for the condition Nov 27 02:15:30.929: INFO: At 2019-11-27 02:06:52 +0000 UTC - event for security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-0110776b-bbfc-4c10-9501-bd823eec8397" Nov 27 02:15:30.929: INFO: At 2019-11-27 02:07:28 +0000 UTC - event for security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-bd9b0944-89fc-4d5d-b1c2-10fd55c5d344" Nov 27 02:15:30.929: INFO: At 2019-11-27 02:08:24 +0000 UTC - event for security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume2], unattached volumes=[volume1 volume2 default-token-9624d]: timed out waiting for the condition Nov 27 02:15:30.929: INFO: At 2019-11-27 02:08:36 +0000 UTC - event for security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0: {kubelet k8s-agentpool1-27910301-vmss000000} Created: Created container write-pod Nov 27 02:15:30.929: INFO: At 2019-11-27 02:08:36 +0000 UTC - event for security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0: {kubelet k8s-agentpool1-27910301-vmss000000} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine Nov 27 02:15:30.929: INFO: At 2019-11-27 02:08:37 +0000 UTC - event for security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0: {kubelet k8s-agentpool1-27910301-vmss000000} Started: Started container write-pod Nov 27 02:15:30.929: INFO: At 2019-11-27 02:08:40 +0000 UTC - event for security-context-747c1ff8-c695-4f86-a42b-6d4e96e55ea0: {kubelet k8s-agentpool1-27910301-vmss000000} Killing: Stopping container write-pod Nov 27 02:15:30.929: INFO: At 2019-11-27 02:08:50 +0000 UTC - event for security-context-1e5313de-16c5-4161-8190-d9a40433987d: {attachdetach-controller } FailedAttachVolume: Multi-Attach error for volume 
"pvc-bd9b0944-89fc-4d5d-b1c2-10fd55c5d344" Volume is already exclusively attached to one node and can't be attached to another Nov 27 02:15:30.929: INFO: At 2019-11-27 02:08:50 +0000 UTC - event for security-context-1e5313de-16c5-4161-8190-d9a40433987d: {attachdetach-controller } FailedAttachVolume: Multi-Attach error for volume "pvc-0110776b-bbfc-4c10-9501-bd823eec8397" Volume is already exclusively attached to one node and can't be attached to another Nov 27 02:15:30.929: INFO: At 2019-11-27 02:10:53 +0000 UTC - event for security-context-1e5313de-16c5-4161-8190-d9a40433987d: {kubelet k8s-agentpool1-27910301-vmss000001} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1 volume2], unattached volumes=[volume1 volume2 default-token-9624d]: timed out waiting for the condition Nov 27 02:15:30.929: INFO: At 2019-11-27 02:13:48 +0000 UTC - event for security-context-1e5313de-16c5-4161-8190-d9a40433987d: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-bd9b0944-89fc-4d5d-b1c2-10fd55c5d344" Nov 27 02:15:30.929: INFO: At 2019-11-27 02:14:03 +0000 UTC - event for security-context-1e5313de-16c5-4161-8190-d9a40433987d: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-0110776b-bbfc-4c10-9501-bd823eec8397" Nov 27 02:15:30.929: INFO: At 2019-11-27 02:15:27 +0000 UTC - event for security-context-1e5313de-16c5-4161-8190-d9a40433987d: {kubelet k8s-agentpool1-27910301-vmss000001} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume2 default-token-9624d volume1], unattached volumes=[volume2 default-token-9624d volume1]: timed out waiting for the condition Nov 27 02:15:30.933: INFO: POD NODE PHASE GRACE CONDITIONS Nov 27 02:15:30.933: INFO: Nov 27 02:15:30.940: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:15:30.942: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000000 
/api/v1/nodes/k8s-agentpool1-27910301-vmss000000 433df295-d1c5-449e-b3e0-cf5dcbb47db5 3762 0 2019-11-27 02:02:30 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000000 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:15:22 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:15:22 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:15:22 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:15:22 +0000 UTC,LastTransitionTime:2019-11-27 02:02:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000000,},NodeAddress{Type:InternalIP,Address:10.240.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5cbe44f20b024d2194231594a97855b1,SystemUUID:91700562-C649-C848-A3DD-1E514501D602,BootID:4dc3a524-9111-40e4-8765-ea1a0ccebb06,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e 
mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-558fb052-dca8-45f0-9e26-0338846653c5 
kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-63caa552-e092-43fe-b924-ef6a44e177a3 kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-72de6846-924c-4829-8625-a42e55bed18a kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-a4e827c1-eef0-4abf-987f-7de787fa567d kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-ba05354b-40b7-4340-9216-afb07cc40b79 
kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-ff9df561-0196-4319-bfaf-223abedd298f],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104,DevicePath:0,},AttachedVolume{Name:kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-a4e827c1-eef0-4abf-987f-7de787fa567d,DevicePath:6,},AttachedVolume{Name:kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8,DevicePath:4,},},Config:nil,},} Nov 27 02:15:30.943: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:15:30.951: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000000 Nov 27 02:15:30.965: INFO: security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5 started at 2019-11-27 02:15:21 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:30.965: INFO: Container write-pod ready: false, restart count 0 Nov 27 02:15:30.965: INFO: azure-cni-networkmonitor-f9bdz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:30.965: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:15:30.965: INFO: azure-ip-masq-agent-q77hb started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:30.965: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:15:30.965: INFO: 
keyvault-flexvolume-gt5vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:30.965: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 27 02:15:30.965: INFO: blobfuse-flexvol-installer-2c2vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:30.965: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 27 02:15:30.965: INFO: azure-client started at 2019-11-27 02:06:31 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:30.965: INFO: Container azure-client ready: true, restart count 0 Nov 27 02:15:30.965: INFO: azure-client started at 2019-11-27 02:06:27 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:30.965: INFO: Container azure-client ready: true, restart count 0 Nov 27 02:15:30.965: INFO: kube-proxy-956dz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:30.965: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:15:30.965: INFO: exec-volume-test-azure-disk-b2kl started at 2019-11-27 02:12:52 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:30.965: INFO: Container exec-container-azure-disk-b2kl ready: false, restart count 0 Nov 27 02:15:30.965: INFO: security-context-2939d0e3-7f56-4de8-a2d6-fc3765dba3e9 started at 2019-11-27 02:12:19 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:30.965: INFO: Container write-pod ready: false, restart count 0 W1127 02:15:30.969129 14162 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Nov 27 02:15:30.996: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:15:30.996: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:15:30.999: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000001 /api/v1/nodes/k8s-agentpool1-27910301-vmss000001 d9661b2f-8dba-435f-9146-c53e15590b86 3489 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:1 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000001 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:1] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 
DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:14:20 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:14:20 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:14:20 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:14:20 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000001,},NodeAddress{Type:InternalIP,Address:10.240.0.35,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c1701de1799d4bda8bb81a1a74346a2a,SystemUUID:E7C6657A-6794-5043-BF92-958F99FF1F10,BootID:f3be9e11-e33b-462a-b7b8-5177e4209627,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:44100963,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:e51071a0670e003e3936190cfda74cfd6edaa39e0d16fc0b5a5f09dbf09dd1ff 
k8s.gcr.io/metrics-server-amd64:v0.3.4],SizeBytes:39944451,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:15:30.999: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:15:31.003: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000001 Nov 27 02:15:31.027: INFO: kubernetes-dashboard-65966766b9-hfq6z started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:31.027: INFO: Container kubernetes-dashboard ready: true, restart count 0 Nov 27 02:15:31.027: INFO: azure-cni-networkmonitor-wcplr started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:31.027: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:15:31.027: INFO: keyvault-flexvolume-q956v started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:31.027: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 27 02:15:31.027: INFO: 
coredns-56bc7dfcc6-pxrl6 started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:31.027: INFO: Container coredns ready: true, restart count 0 Nov 27 02:15:31.027: INFO: kube-proxy-wcptt started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:31.027: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:15:31.027: INFO: azure-ip-masq-agent-sxnbk started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:31.027: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:15:31.027: INFO: metrics-server-855b565c8f-nbxsd started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:31.027: INFO: Container metrics-server ready: true, restart count 0 Nov 27 02:15:31.027: INFO: blobfuse-flexvol-installer-x5wvc started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:31.027: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 W1127 02:15:31.030260 14162 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Nov 27 02:15:31.053: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:15:31.053: INFO: Logging node info for node k8s-master-27910301-0 Nov 27 02:15:31.055: INFO: Node Info: &Node{ObjectMeta:{k8s-master-27910301-0 /api/v1/nodes/k8s-master-27910301-0 c69dd001-f5f0-495c-861a-fcdc5c4685fa 2874 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-27910301-0 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master: node.kubernetes.io/instance-type:Standard_DS2_v2 topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachines/k8s-master-27910301-0,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 
DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:11:39 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:11:39 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:11:39 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:11:39 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-master-27910301-0,},NodeAddress{Type:InternalIP,Address:10.255.255.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4fbcdc9a6aec4bf5bbacc8e6c7d0effc,SystemUUID:6F669D5F-47AF-F34B-A88F-B20FF17F57DA,BootID:5ce46fbc-c556-476d-85de-4196f4d7878c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager-amd64:v9.0.2],SizeBytes:83076028,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:15:31.055: INFO: Logging kubelet events for node k8s-master-27910301-0 Nov 27 02:15:31.060: INFO: Logging pods the kubelet thinks is on node k8s-master-27910301-0 Nov 27 02:15:31.085: INFO: 
azure-cni-networkmonitor-5jpkd started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:31.085: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:15:31.085: INFO: kube-proxy-krs8d started at 2019-11-27 02:01:05 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:31.085: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:15:31.085: INFO: kube-addon-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:31.085: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 27 02:15:31.085: INFO: kube-apiserver-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:31.085: INFO: Container kube-apiserver ready: true, restart count 0 Nov 27 02:15:31.085: INFO: kube-controller-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:31.085: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 27 02:15:31.085: INFO: kube-scheduler-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:31.085: INFO: Container kube-scheduler ready: true, restart count 0 Nov 27 02:15:31.085: INFO: azure-ip-masq-agent-x5bzr started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded) Nov 27 02:15:31.085: INFO: Container azure-ip-masq-agent ready: true, restart count 0 W1127 02:15:31.088354 14162 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 27 02:15:31.107: INFO: Latency metrics for node k8s-master-27910301-0 Nov 27 02:15:31.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "multivolume-2138" for this suite.
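One detail worth pulling out of the node dump for k8s-agentpool1-27910301-vmss000000 above: it reports `attachable-volumes-azure-disk: 8` while its VolumesInUse list names eight Azure disks. If all eight are attached or attaching, the node has no attach headroom left, which would be consistent with the write pods stuck in a non-Running state. A trivial sketch of that check (the `attach_headroom` helper is hypothetical, just counting entries from such a dump):

```python
def attach_headroom(capacity: int, volumes_in_use: list) -> int:
    """Free attach slots on a node: its attachable-volumes capacity
    minus the disks currently recorded in VolumesInUse."""
    return capacity - len(volumes_in_use)
```

With the eight VolumesInUse entries from the dump, `attach_headroom(8, disks)` is 0.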
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sazure\-disk\]\s\[Testpattern\:\sDynamic\sPV\s\(filesystem\svolmode\)\]\smultiVolume\s\[Slow\]\sshould\saccess\sto\stwo\svolumes\swith\sthe\ssame\svolume\smode\sand\sretain\sdata\sacross\spod\srecreation\son\sthe\ssame\snode$'
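The `--ginkgo.focus` value above is just the full test name with regex metacharacters and spaces escaped, anchored at the end. A minimal sketch of building such a focus pattern from a plain test name (the `ginkgo_focus` helper is hypothetical; kubetest's actual escaping may differ slightly):

```python
import re

def ginkgo_focus(test_name: str) -> str:
    """Turn a plain Ginkgo test name into an anchored --ginkgo.focus
    regex: escape metacharacters, then use \\s for spaces so the
    pattern survives shell quoting."""
    return re.escape(test_name).replace(r"\ ", r"\s") + "$"
```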
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:129 Nov 27 02:25:31.974: while cleanup resource Unexpected error: <errors.aggregate | len:1, cap:1>: [ [ { error: { cause: { s: "PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a still exists within 5m0s", }, msg: "Persistent Volume pvc-72de6846-924c-4829-8625-a42e55bed18a not deleted by dynamic provisioner", }, stack: [0x3995499, 0x39bb3f5, 0x39bbb06, 0x7d30a8, 0x7d2cff, 0x7d21a4, 0x7d9105, 0x7d8961, 0x7de7df, 0x7de300, 0x7ddb47, 0x7e012b, 0x7e2c87, 0x7e29cd, 0x3abb0ba, 0x3abfcab, 0x516669, 0x462e51], }, ], ] Persistent Volume pvc-72de6846-924c-4829-8625-a42e55bed18a not deleted by dynamic provisioner: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a still exists within 5m0s occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:120from junit_02.xml
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:101 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:88 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 �[1mSTEP�[0m: Creating a kubernetes client Nov 27 02:14:56.703: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json �[1mSTEP�[0m: Building a namespace api object, basename multivolume �[1mSTEP�[0m: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in multivolume-6798 �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should access to two volumes with the same volume mode and retain data across pod recreation on the same node /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:129 Nov 27 02:14:56.845: INFO: Creating resource for dynamic PV Nov 27 02:14:56.845: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(azure-disk) supported size:{ 1Mi} �[1mSTEP�[0m: creating a StorageClass multivolume-6798-azure-disk-scphhk4 �[1mSTEP�[0m: creating a claim Nov 27 02:14:56.853: INFO: Waiting up to 5m0s for PersistentVolumeClaims [azure-diskxmvts] to have phase Bound Nov 27 02:14:56.858: INFO: PersistentVolumeClaim azure-diskxmvts found but phase is Pending instead of Bound. Nov 27 02:14:58.861: INFO: PersistentVolumeClaim azure-diskxmvts found but phase is Pending instead of Bound. Nov 27 02:15:00.863: INFO: PersistentVolumeClaim azure-diskxmvts found but phase is Pending instead of Bound. 
Nov 27 02:15:02.866: INFO: PersistentVolumeClaim azure-diskxmvts found but phase is Pending instead of Bound. Nov 27 02:15:04.869: INFO: PersistentVolumeClaim azure-diskxmvts found but phase is Pending instead of Bound. Nov 27 02:15:06.873: INFO: PersistentVolumeClaim azure-diskxmvts found but phase is Pending instead of Bound. Nov 27 02:15:08.875: INFO: PersistentVolumeClaim azure-diskxmvts found and phase=Bound (12.022623003s) Nov 27 02:15:08.882: INFO: Creating resource for dynamic PV Nov 27 02:15:08.882: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(azure-disk) supported size:{ 1Mi} �[1mSTEP�[0m: creating a StorageClass multivolume-6798-azure-disk-sc2fw4k �[1mSTEP�[0m: creating a claim Nov 27 02:15:08.889: INFO: Waiting up to 5m0s for PersistentVolumeClaims [azure-diskqplxd] to have phase Bound Nov 27 02:15:08.892: INFO: PersistentVolumeClaim azure-diskqplxd found but phase is Pending instead of Bound. Nov 27 02:15:10.894: INFO: PersistentVolumeClaim azure-diskqplxd found but phase is Pending instead of Bound. Nov 27 02:15:12.897: INFO: PersistentVolumeClaim azure-diskqplxd found but phase is Pending instead of Bound. Nov 27 02:15:14.900: INFO: PersistentVolumeClaim azure-diskqplxd found but phase is Pending instead of Bound. Nov 27 02:15:16.904: INFO: PersistentVolumeClaim azure-diskqplxd found but phase is Pending instead of Bound. Nov 27 02:15:18.906: INFO: PersistentVolumeClaim azure-diskqplxd found but phase is Pending instead of Bound. 
Nov 27 02:15:20.916: INFO: PersistentVolumeClaim azure-diskqplxd found and phase=Bound (12.026289908s) �[1mSTEP�[0m: Creating pod on {Name: Selector:map[] Affinity:nil} with multiple volumes �[1mSTEP�[0m: Checking if the volume1 exists as expected volume mode (Filesystem) Nov 27 02:19:42.943: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/volume1] Namespace:multivolume-6798 PodName:security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:19:42.943: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:19:43.142: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /mnt/volume1] Namespace:multivolume-6798 PodName:security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:19:43.142: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json �[1mSTEP�[0m: Checking if write to the volume1 works properly Nov 27 02:19:43.297: INFO: ExecWithOptions {Command:[/bin/sh -c echo 5I9n5fz19QPa6DM2JRy52iLx06adrFOoJT39MxZ4eKVHHhLsjw7qdZ9mw5kvVDGwJ51oQeD9k7vQYqn94bvsqA== | base64 -d | sha256sum] Namespace:multivolume-6798 PodName:security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:19:43.297: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:19:43.461: INFO: ExecWithOptions {Command:[/bin/sh -c echo 5I9n5fz19QPa6DM2JRy52iLx06adrFOoJT39MxZ4eKVHHhLsjw7qdZ9mw5kvVDGwJ51oQeD9k7vQYqn94bvsqA== | base64 -d | dd of=/mnt/volume1/file1.txt bs=64 count=1] Namespace:multivolume-6798 PodName:security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:19:43.461: INFO: >>> 
kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json �[1mSTEP�[0m: Checking if read from the volume1 works properly Nov 27 02:19:43.633: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume1/file1.txt bs=64 count=1 | sha256sum] Namespace:multivolume-6798 PodName:security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:19:43.634: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:19:43.794: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume1/file1.txt bs=64 count=1 | sha256sum | grep -Fq 3430b6efd80ce15b0ea24daeb4be8ef570c256565a8dc19af597a1dca83e420c] Namespace:multivolume-6798 PodName:security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:19:43.794: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json �[1mSTEP�[0m: Checking if the volume2 exists as expected volume mode (Filesystem) Nov 27 02:19:43.959: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/volume2] Namespace:multivolume-6798 PodName:security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:19:43.959: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:19:44.135: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /mnt/volume2] Namespace:multivolume-6798 PodName:security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:19:44.135: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json �[1mSTEP�[0m: Checking if write to the volume2 works properly Nov 27 02:19:44.288: INFO: ExecWithOptions {Command:[/bin/sh -c echo 
uQxdg/7subOsS01aw8/uZPe1aOpqqdKlQZ6SSBCs/v5w5dLzR80rk9wtJgCorotQwRl9aEL2MTTuZKdbkbGs5w== | base64 -d | sha256sum] Namespace:multivolume-6798 PodName:security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:19:44.288: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:19:44.456: INFO: ExecWithOptions {Command:[/bin/sh -c echo uQxdg/7subOsS01aw8/uZPe1aOpqqdKlQZ6SSBCs/v5w5dLzR80rk9wtJgCorotQwRl9aEL2MTTuZKdbkbGs5w== | base64 -d | dd of=/mnt/volume2/file1.txt bs=64 count=1] Namespace:multivolume-6798 PodName:security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:19:44.457: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json �[1mSTEP�[0m: Checking if read from the volume2 works properly Nov 27 02:19:44.615: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume2/file1.txt bs=64 count=1 | sha256sum] Namespace:multivolume-6798 PodName:security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:19:44.615: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:19:44.763: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume2/file1.txt bs=64 count=1 | sha256sum | grep -Fq b4f1d1b8c4efeb007a67a5e3def01c1b5ac26a1425c52a88fa94e15f5132dcfe] Namespace:multivolume-6798 PodName:security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:19:44.763: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:19:44.927: INFO: Deleting pod "security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5" in namespace 
"multivolume-6798" Nov 27 02:19:44.931: INFO: Wait up to 5m0s for pod "security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5" to be fully deleted �[1mSTEP�[0m: Creating pod on {Name: Selector:map[] Affinity:&Affinity{NodeAffinity:&NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{},MatchFields:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:metadata.name,Operator:In,Values:[k8s-agentpool1-27910301-vmss000000],},},},},},PreferredDuringSchedulingIgnoredDuringExecution:[]PreferredSchedulingTerm{},},PodAffinity:nil,PodAntiAffinity:nil,}} with multiple volumes �[1mSTEP�[0m: Checking if the volume1 exists as expected volume mode (Filesystem) Nov 27 02:20:02.961: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/volume1] Namespace:multivolume-6798 PodName:security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:20:02.961: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:20:03.159: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /mnt/volume1] Namespace:multivolume-6798 PodName:security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:20:03.159: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json �[1mSTEP�[0m: Checking if read from the volume1 works properly Nov 27 02:20:03.313: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume1/file1.txt bs=64 count=1 | sha256sum] Namespace:multivolume-6798 PodName:security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:20:03.313: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json 
Nov 27 02:20:03.477: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume1/file1.txt bs=64 count=1 | sha256sum | grep -Fq 3430b6efd80ce15b0ea24daeb4be8ef570c256565a8dc19af597a1dca83e420c] Namespace:multivolume-6798 PodName:security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:20:03.478: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
STEP: Checking if write to the volume1 works properly
Nov 27 02:20:03.632: INFO: ExecWithOptions {Command:[/bin/sh -c echo s5As0PQqMrlNX5V68u4li+7Delwyi0+WBprUj52Sq7rL4b7igoGmlUv9xSrHcchew20aUdIJG7mfKrWxOYx93w== | base64 -d | sha256sum] Namespace:multivolume-6798 PodName:security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:20:03.632: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
Nov 27 02:20:03.811: INFO: ExecWithOptions {Command:[/bin/sh -c echo s5As0PQqMrlNX5V68u4li+7Delwyi0+WBprUj52Sq7rL4b7igoGmlUv9xSrHcchew20aUdIJG7mfKrWxOYx93w== | base64 -d | dd of=/mnt/volume1/file1.txt bs=64 count=1] Namespace:multivolume-6798 PodName:security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:20:03.811: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
STEP: Checking if read from the volume1 works properly
Nov 27 02:20:03.969: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume1/file1.txt bs=64 count=1 | sha256sum] Namespace:multivolume-6798 PodName:security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:20:03.969: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
Nov 27 02:20:04.225: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume1/file1.txt bs=64 count=1 | sha256sum | grep -Fq 4ca0988e248916dfe21d46a5ed56de9ba5fa96e2da33c51fcf5467e17765d5e7] Namespace:multivolume-6798 PodName:security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:20:04.225: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
STEP: Checking if the volume2 exists as expected volume mode (Filesystem)
Nov 27 02:20:04.391: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/volume2] Namespace:multivolume-6798 PodName:security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:20:04.391: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
Nov 27 02:20:04.557: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /mnt/volume2] Namespace:multivolume-6798 PodName:security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:20:04.557: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
STEP: Checking if read from the volume2 works properly
Nov 27 02:20:04.724: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume2/file1.txt bs=64 count=1 | sha256sum] Namespace:multivolume-6798 PodName:security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:20:04.724: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
Nov 27 02:20:04.905: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume2/file1.txt bs=64 count=1 | sha256sum | grep -Fq b4f1d1b8c4efeb007a67a5e3def01c1b5ac26a1425c52a88fa94e15f5132dcfe] Namespace:multivolume-6798 PodName:security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:20:04.905: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
STEP: Checking if write to the volume2 works properly
Nov 27 02:20:05.060: INFO: ExecWithOptions {Command:[/bin/sh -c echo CRYSpWaGbbGIcIjaen5ZJDe3hz+xCD0utZ5GBNbnpRyDPwDm4i/q3kFWF6Eu6qEmd+UeqY/hIif9bFnmePnzvg== | base64 -d | sha256sum] Namespace:multivolume-6798 PodName:security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:20:05.060: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
Nov 27 02:20:05.211: INFO: ExecWithOptions {Command:[/bin/sh -c echo CRYSpWaGbbGIcIjaen5ZJDe3hz+xCD0utZ5GBNbnpRyDPwDm4i/q3kFWF6Eu6qEmd+UeqY/hIif9bFnmePnzvg== | base64 -d | dd of=/mnt/volume2/file1.txt bs=64 count=1] Namespace:multivolume-6798 PodName:security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:20:05.211: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
STEP: Checking if read from the volume2 works properly
Nov 27 02:20:05.372: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume2/file1.txt bs=64 count=1 | sha256sum] Namespace:multivolume-6798 PodName:security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:20:05.372: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
Nov 27 02:20:05.527: INFO: ExecWithOptions {Command:[/bin/sh -c dd if=/mnt/volume2/file1.txt bs=64 count=1 | sha256sum | grep -Fq c1300aefc78204c23f1f0b51d6384593f8f38ccf8b0cda343ded465cf77815e4] Namespace:multivolume-6798 PodName:security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:20:05.527: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
Nov 27 02:20:05.711: INFO: Deleting pod "security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1" in namespace "multivolume-6798"
Nov 27 02:20:05.715: INFO: Wait up to 5m0s for pod "security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1" to be fully deleted
STEP: Deleting pvc
Nov 27 02:20:21.730: INFO: Deleting PersistentVolumeClaim "azure-diskxmvts"
Nov 27 02:20:21.734: INFO: Waiting up to 5m0s for PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a to get deleted
Nov 27 02:20:21.737: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Bound (2.755822ms)
Nov 27 02:20:26.741: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (5.006587773s)
Nov 27 02:20:31.744: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (10.010420412s)
Nov 27 02:20:36.747: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (15.012697127s)
Nov 27 02:20:41.750: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (20.015902539s)
Nov 27 02:20:46.753: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (25.018673435s)
Nov 27 02:20:51.756: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (30.021818824s)
Nov 27 02:20:56.759: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (35.025039701s)
Nov 27 02:21:01.763: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (40.029230776s)
Nov 27 02:21:06.771: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (45.036521464s)
Nov 27 02:21:11.773: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (50.039192605s)
Nov 27 02:21:16.777: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (55.043178146s)
Nov 27 02:21:21.783: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (1m0.048710688s)
Nov 27 02:21:26.786: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (1m5.052266405s)
Nov 27 02:21:31.790: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (1m10.05573591s)
Nov 27 02:21:36.793: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (1m15.059310205s)
Nov 27 02:21:41.797: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (1m20.06272479s)
Nov 27 02:21:46.800: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (1m25.066268665s)
Nov 27 02:21:51.803: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (1m30.068944823s)
Nov 27 02:21:56.806: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (1m35.072002075s)
Nov 27 02:22:01.808: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (1m40.074336611s)
Nov 27 02:22:06.813: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (1m45.078601353s)
Nov 27 02:22:11.815: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (1m50.081362573s)
Nov 27 02:22:16.819: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (1m55.084771389s)
Nov 27 02:22:21.822: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (2m0.08751949s)
Nov 27 02:22:26.825: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (2m5.090969588s)
Nov 27 02:22:31.828: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (2m10.094186874s)
Nov 27 02:22:36.832: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (2m15.097653754s)
Nov 27 02:22:41.835: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (2m20.101335926s)
Nov 27 02:22:46.839: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (2m25.105268791s)
Nov 27 02:22:51.842: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (2m30.108057538s)
Nov 27 02:22:56.846: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (2m35.111727483s)
Nov 27 02:23:01.850: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (2m40.115544421s)
Nov 27 02:23:06.853: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (2m45.118859447s)
Nov 27 02:23:11.856: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (2m50.122010062s)
Nov 27 02:23:16.859: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (2m55.125181369s)
Nov 27 02:23:21.862: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (3m0.128512069s)
Nov 27 02:23:26.866: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (3m5.132219164s)
Nov 27 02:23:31.869: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (3m10.135047244s)
Nov 27 02:23:36.872: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (3m15.13846552s)
Nov 27 02:23:41.875: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (3m20.140704579s)
Nov 27 02:23:46.886: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (3m25.151985301s)
Nov 27 02:23:51.889: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (3m30.155085351s)
Nov 27 02:23:56.892: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (3m35.158149493s)
Nov 27 02:24:01.895: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (3m40.161209527s)
Nov 27 02:24:06.900: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (3m45.165679264s)
Nov 27 02:24:11.903: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (3m50.168989285s)
Nov 27 02:24:16.906: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (3m55.171861195s)
Nov 27 02:24:21.910: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (4m0.175557204s)
Nov 27 02:24:26.913: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (4m5.178883803s)
Nov 27 02:24:31.916: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (4m10.182118493s)
Nov 27 02:24:36.920: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (4m15.186310984s)
Nov 27 02:24:41.924: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (4m20.189733362s)
Nov 27 02:24:46.927: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (4m25.192692629s)
Nov 27 02:24:51.930: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (4m30.195935991s)
Nov 27 02:24:56.933: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (4m35.199151946s)
Nov 27 02:25:01.937: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (4m40.202831898s)
Nov 27 02:25:06.940: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (4m45.206154541s)
Nov 27 02:25:11.944: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (4m50.210455084s)
Nov 27 02:25:16.948: INFO: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a found and phase=Failed (4m55.214008115s)
STEP: Deleting sc
STEP: Deleting pvc
Nov 27 02:25:21.954: INFO: Deleting PersistentVolumeClaim "azure-diskqplxd"
Nov 27 02:25:21.958: INFO: Waiting up to 5m0s for PersistentVolume pvc-ba05354b-40b7-4340-9216-afb07cc40b79 to get deleted
Nov 27 02:25:21.961: INFO: PersistentVolume pvc-ba05354b-40b7-4340-9216-afb07cc40b79 found and phase=Bound (2.744222ms)
Nov 27 02:25:26.965: INFO: PersistentVolume pvc-ba05354b-40b7-4340-9216-afb07cc40b79 found and phase=Released (5.006934745s)
Nov 27 02:25:31.968: INFO: PersistentVolume pvc-ba05354b-40b7-4340-9216-afb07cc40b79 was removed
STEP: Deleting sc
Nov 27 02:25:31.974: FAIL: while cleanup resource Unexpected error: <errors.aggregate | len:1, cap:1>: [ [ { error: { cause: { s: "PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a still exists within 5m0s", }, msg: "Persistent Volume pvc-72de6846-924c-4829-8625-a42e55bed18a not deleted by dynamic provisioner", }, stack: [0x3995499, 0x39bb3f5, 0x39bbb06, 0x7d30a8, 0x7d2cff, 0x7d21a4, 0x7d9105, 0x7d8961, 0x7de7df, 0x7de300, 0x7ddb47, 0x7e012b, 0x7e2c87, 0x7e29cd, 0x3abb0ba, 0x3abfcab, 0x516669, 0x462e51], }, ], ]
Persistent Volume pvc-72de6846-924c-4829-8625-a42e55bed18a not deleted by dynamic provisioner: PersistentVolume pvc-72de6846-924c-4829-8625-a42e55bed18a still exists within 5m0s occurred
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "multivolume-6798".
STEP: Found 15 events.
Nov 27 02:25:31.978: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5: {default-scheduler } Scheduled: Successfully assigned multivolume-6798/security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5 to k8s-agentpool1-27910301-vmss000000
Nov 27 02:25:31.978: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1: {default-scheduler } Scheduled: Successfully assigned multivolume-6798/security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1 to k8s-agentpool1-27910301-vmss000000
Nov 27 02:25:31.978: INFO: At 2019-11-27 02:15:07 +0000 UTC - event for azure-diskxmvts: {persistentvolume-controller } ProvisioningSucceeded: Successfully provisioned volume pvc-72de6846-924c-4829-8625-a42e55bed18a using kubernetes.io/azure-disk
Nov 27 02:25:31.978: INFO: At 2019-11-27 02:15:19 +0000 UTC - event for azure-diskqplxd: {persistentvolume-controller } ProvisioningSucceeded: Successfully provisioned volume pvc-ba05354b-40b7-4340-9216-afb07cc40b79 using kubernetes.io/azure-disk
Nov 27 02:25:31.978: INFO: At 2019-11-27 02:17:24 +0000 UTC - event for security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1 volume2], unattached volumes=[default-token-rmj5c volume1 volume2]: timed out waiting for the condition
Nov 27 02:25:31.978: INFO: At 2019-11-27 02:18:01 +0000 UTC - event for security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-ba05354b-40b7-4340-9216-afb07cc40b79"
Nov 27 02:25:31.978: INFO: At 2019-11-27 02:18:22 +0000 UTC - event for security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-72de6846-924c-4829-8625-a42e55bed18a"
Nov 27 02:25:31.978: INFO: At 2019-11-27 02:19:40 +0000 UTC - event for security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5: {kubelet k8s-agentpool1-27910301-vmss000000} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine
Nov 27 02:25:31.978: INFO: At 2019-11-27 02:19:41 +0000 UTC - event for security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5: {kubelet k8s-agentpool1-27910301-vmss000000} Created: Created container write-pod
Nov 27 02:25:31.978: INFO: At 2019-11-27 02:19:42 +0000 UTC - event for security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5: {kubelet k8s-agentpool1-27910301-vmss000000} Started: Started container write-pod
Nov 27 02:25:31.978: INFO: At 2019-11-27 02:19:44 +0000 UTC - event for security-context-4740750c-4490-4dc5-81e9-bb807a3f8ee5: {kubelet k8s-agentpool1-27910301-vmss000000} Killing: Stopping container write-pod
Nov 27 02:25:31.978: INFO: At 2019-11-27 02:20:00 +0000 UTC - event for security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1: {kubelet k8s-agentpool1-27910301-vmss000000} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine
Nov 27 02:25:31.978: INFO: At 2019-11-27 02:20:02 +0000 UTC - event for security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1: {kubelet k8s-agentpool1-27910301-vmss000000} Created: Created container write-pod
Nov 27 02:25:31.978: INFO: At 2019-11-27 02:20:02 +0000 UTC - event for security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1: {kubelet k8s-agentpool1-27910301-vmss000000} Started: Started container write-pod
Nov 27 02:25:31.978: INFO: At 2019-11-27 02:20:05 +0000 UTC - event for security-context-a3d2939e-befd-496b-b04d-d54a7d7bb6e1: {kubelet k8s-agentpool1-27910301-vmss000000} Killing: Stopping container write-pod
Nov 27 02:25:31.981: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 27 02:25:31.981: INFO:
Nov 27 02:25:31.984: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000000
Nov 27 02:25:31.987: INFO: Node Info:
&Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000000 /api/v1/nodes/k8s-agentpool1-27910301-vmss000000 433df295-d1c5-449e-b3e0-cf5dcbb47db5 5932 0 2019-11-27 02:02:30 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000000 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} 
{<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:25:14 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:25:14 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:25:14 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:25:14 +0000 UTC,LastTransitionTime:2019-11-27 02:02:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000000,},NodeAddress{Type:InternalIP,Address:10.240.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5cbe44f20b024d2194231594a97855b1,SystemUUID:91700562-C649-C848-A3DD-1E514501D602,BootID:4dc3a524-9111-40e4-8765-ea1a0ccebb06,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e 
mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-239fed30-3cd8-4faf-a0cc-913154702e71],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-239fed30-3cd8-4faf-a0cc-913154702e71,DevicePath:3,},},Config:nil,},} Nov 27 
02:25:31.987: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:25:31.991: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000000 Nov 27 02:25:32.026: INFO: kube-proxy-956dz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:32.026: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:25:32.026: INFO: azure-ip-masq-agent-q77hb started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:32.026: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:25:32.026: INFO: keyvault-flexvolume-gt5vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:32.026: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 27 02:25:32.026: INFO: blobfuse-flexvol-installer-2c2vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:32.026: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 27 02:25:32.026: INFO: azure-injector started at 2019-11-27 02:20:44 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:32.026: INFO: Container azure-injector ready: false, restart count 0 Nov 27 02:25:32.026: INFO: azure-cni-networkmonitor-f9bdz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:32.026: INFO: Container azure-cnms ready: true, restart count 0 W1127 02:25:32.030461 14155 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
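The cleanup failure above is the framework's PV-deletion wait exhausting its budget: it polled the PV every 5s for 5m0s while the volume sat in phase Failed, then surfaced "still exists within 5m0s" as an aggregate error. The wait has the usual poll-until-gone shape; the following is a minimal, illustrative sketch in which a sentinel file stands in for the PersistentVolume, the timeout is shortened from 300s to 10s, and a background `rm` plays the dynamic provisioner (all stand-ins, not the framework's actual code):

```shell
#!/bin/sh
# Poll-until-deleted loop in the shape of the e2e framework's 5m PV wait.
# /tmp/pv-sentinel stands in for the PersistentVolume (assumption); the real
# loop queries the API server for the PV object instead of checking a file.
resource=/tmp/pv-sentinel
touch "$resource"

# Simulate the provisioner deleting the volume after 2 seconds.
( sleep 2; rm -f "$resource" ) &

timeout=10   # shortened from the framework's 5m0s
start=$(date +%s)
while [ -e "$resource" ]; do
  if [ $(( $(date +%s) - start )) -ge "$timeout" ]; then
    # This branch is the analogue of "still exists within 5m0s" in the log.
    echo "resource still exists within ${timeout}s"
    exit 1
  fi
  sleep 1
done
echo "resource deleted"
```

In this run the PV never left phase Failed during the whole 5m window, so the timeout branch fired and the test was marked FAIL even though the data checks themselves had passed.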
Nov 27 02:25:32.059: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:25:32.059: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:25:32.061: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000001 /api/v1/nodes/k8s-agentpool1-27910301-vmss000001 d9661b2f-8dba-435f-9146-c53e15590b86 5430 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:1 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000001 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:1] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 
DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000001,},NodeAddress{Type:InternalIP,Address:10.240.0.35,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c1701de1799d4bda8bb81a1a74346a2a,SystemUUID:E7C6657A-6794-5043-BF92-958F99FF1F10,BootID:f3be9e11-e33b-462a-b7b8-5177e4209627,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:44100963,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:e51071a0670e003e3936190cfda74cfd6edaa39e0d16fc0b5a5f09dbf09dd1ff 
k8s.gcr.io/metrics-server-amd64:v0.3.4],SizeBytes:39944451,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:25:32.061: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:25:32.066: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000001 Nov 27 02:25:32.094: INFO: coredns-56bc7dfcc6-pxrl6 started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:32.094: INFO: Container coredns ready: true, restart count 0 Nov 27 02:25:32.094: INFO: kube-proxy-wcptt started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:32.094: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:25:32.094: INFO: kubernetes-dashboard-65966766b9-hfq6z started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:32.094: INFO: Container kubernetes-dashboard ready: true, restart count 0 Nov 27 02:25:32.094: INFO: azure-cni-networkmonitor-wcplr 
started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:32.094: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:25:32.094: INFO: keyvault-flexvolume-q956v started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:32.094: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 27 02:25:32.094: INFO: blobfuse-flexvol-installer-x5wvc started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:32.094: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 27 02:25:32.094: INFO: azure-ip-masq-agent-sxnbk started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:32.094: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:25:32.094: INFO: metrics-server-855b565c8f-nbxsd started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:32.094: INFO: Container metrics-server ready: true, restart count 0 W1127 02:25:32.097458 14155 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Nov 27 02:25:32.125: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:25:32.125: INFO: Logging node info for node k8s-master-27910301-0 Nov 27 02:25:32.127: INFO: Node Info: &Node{ObjectMeta:{k8s-master-27910301-0 /api/v1/nodes/k8s-master-27910301-0 c69dd001-f5f0-495c-861a-fcdc5c4685fa 5287 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-27910301-0 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master: node.kubernetes.io/instance-type:Standard_DS2_v2 topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachines/k8s-master-27910301-0,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 
DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:21:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:21:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:21:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:21:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-master-27910301-0,},NodeAddress{Type:InternalIP,Address:10.255.255.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4fbcdc9a6aec4bf5bbacc8e6c7d0effc,SystemUUID:6F669D5F-47AF-F34B-A88F-B20FF17F57DA,BootID:5ce46fbc-c556-476d-85de-4196f4d7878c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager-amd64:v9.0.2],SizeBytes:83076028,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:25:32.128: INFO: Logging kubelet events for node k8s-master-27910301-0 Nov 27 02:25:32.132: INFO: Logging pods the kubelet thinks is on node k8s-master-27910301-0 Nov 27 02:25:32.158: INFO: 
kube-scheduler-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:32.158: INFO: Container kube-scheduler ready: true, restart count 0 Nov 27 02:25:32.158: INFO: azure-ip-masq-agent-x5bzr started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:32.158: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:25:32.158: INFO: azure-cni-networkmonitor-5jpkd started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:32.158: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:25:32.158: INFO: kube-proxy-krs8d started at 2019-11-27 02:01:05 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:32.158: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:25:32.158: INFO: kube-addon-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:32.158: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 27 02:25:32.158: INFO: kube-apiserver-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:32.158: INFO: Container kube-apiserver ready: true, restart count 0 Nov 27 02:25:32.158: INFO: kube-controller-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:25:32.158: INFO: Container kube-controller-manager ready: true, restart count 0 W1127 02:25:32.160745 14155 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 27 02:25:32.176: INFO: Latency metrics for node k8s-master-27910301-0 Nov 27 02:25:32.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "multivolume-6798" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sazure\-disk\]\s\[Testpattern\:\sDynamic\sPV\s\(filesystem\svolmode\)\]\smultiVolume\s\[Slow\]\sshould\sconcurrently\saccess\sthe\ssingle\svolume\sfrom\spods\son\sthe\ssame\snode$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:293 Nov 27 02:24:10.130: Unexpected error: <*errors.errorString | 0xc002b29060>: { s: "pod \"security-context-54d39a28-d787-43ae-9155-1d1da53f4a30\" is not Running: timed out waiting for the condition", } pod "security-context-54d39a28-d787-43ae-9155-1d1da53f4a30" is not Running: timed out waiting for the condition occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:434 from junit_04.xml
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:101 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:88 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 27 02:18:57.933: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json STEP: Building a namespace api object, basename multivolume STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in multivolume-2560 STEP: Waiting for a default service account to be provisioned in namespace [It] should concurrently access the single volume from pods on the same node /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/multivolume.go:293 Nov 27 02:18:58.078: INFO: Creating resource for dynamic PV Nov 27 02:18:58.078: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(azure-disk) supported size:{ 1Mi} STEP: creating a StorageClass multivolume-2560-azure-disk-scmhf94 STEP: creating a claim Nov 27 02:18:58.085: INFO: Waiting up to 5m0s for PersistentVolumeClaims [azure-disk2vw9v] to have phase Bound Nov 27 02:18:58.088: INFO: PersistentVolumeClaim azure-disk2vw9v found but phase is Pending instead of Bound. Nov 27 02:19:00.091: INFO: PersistentVolumeClaim azure-disk2vw9v found but phase is Pending instead of Bound. Nov 27 02:19:02.095: INFO: PersistentVolumeClaim azure-disk2vw9v found but phase is Pending instead of Bound. Nov 27 02:19:04.099: INFO: PersistentVolumeClaim azure-disk2vw9v found but phase is Pending instead of Bound. 
Nov 27 02:19:06.102: INFO: PersistentVolumeClaim azure-disk2vw9v found but phase is Pending instead of Bound. Nov 27 02:19:08.106: INFO: PersistentVolumeClaim azure-disk2vw9v found but phase is Pending instead of Bound. Nov 27 02:19:10.109: INFO: PersistentVolumeClaim azure-disk2vw9v found and phase=Bound (12.02430189s) STEP: Creating pod1 with a volume on {Name: Selector:map[] Affinity:nil} Nov 27 02:24:10.130: FAIL: Unexpected error: <*errors.errorString | 0xc002b29060>: { s: "pod \"security-context-54d39a28-d787-43ae-9155-1d1da53f4a30\" is not Running: timed out waiting for the condition", } pod "security-context-54d39a28-d787-43ae-9155-1d1da53f4a30" is not Running: timed out waiting for the condition occurred Nov 27 02:24:10.130: INFO: Deleting pod "security-context-54d39a28-d787-43ae-9155-1d1da53f4a30" in namespace "multivolume-2560" Nov 27 02:24:10.134: INFO: Wait up to 5m0s for pod "security-context-54d39a28-d787-43ae-9155-1d1da53f4a30" to be fully deleted STEP: Deleting pvc Nov 27 02:24:22.144: INFO: Deleting PersistentVolumeClaim "azure-disk2vw9v" Nov 27 02:24:22.147: INFO: Waiting up to 5m0s for PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e to get deleted Nov 27 02:24:22.149: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Bound (2.378219ms) Nov 27 02:24:27.153: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (5.00608432s) Nov 27 02:24:32.156: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (10.009006708s) Nov 27 02:24:37.159: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (15.011938588s) Nov 27 02:24:42.162: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (20.015308865s) Nov 27 02:24:47.166: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (25.018952138s) Nov 27 02:24:52.170: INFO: PersistentVolume 
pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (30.022893905s) Nov 27 02:24:57.173: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (35.026520463s) Nov 27 02:25:02.177: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (40.029729011s) Nov 27 02:25:07.180: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (45.033271755s) Nov 27 02:25:12.183: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (50.036481989s) Nov 27 02:25:17.188: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (55.040693025s) Nov 27 02:25:22.191: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (1m0.04447775s) Nov 27 02:25:27.196: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (1m5.048630372s) Nov 27 02:25:32.198: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (1m10.050914573s) Nov 27 02:25:37.200: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (1m15.05359937s) Nov 27 02:25:42.206: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (1m20.059290385s) Nov 27 02:25:47.210: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (1m25.062716676s) Nov 27 02:25:52.213: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (1m30.066148661s) Nov 27 02:25:57.216: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (1m35.069541639s) Nov 27 02:26:02.220: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (1m40.073058212s) Nov 27 02:26:07.224: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (1m45.07678138s) Nov 27 02:26:12.227: INFO: PersistentVolume 
pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (1m50.079826937s) Nov 27 02:26:17.233: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (1m55.086127214s) Nov 27 02:26:22.236: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (2m0.089576763s) Nov 27 02:26:27.240: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (2m5.092973105s) Nov 27 02:26:32.243: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (2m10.09609534s) Nov 27 02:26:37.246: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (2m15.099199368s) Nov 27 02:26:42.249: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (2m20.102482093s) Nov 27 02:26:47.252: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e found and phase=Failed (2m25.105490409s) Nov 27 02:26:52.256: INFO: PersistentVolume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e was removed STEP: Deleting sc Nov 27 02:26:52.260: INFO: In-tree plugin kubernetes.io/azure-disk is not migrated, not validating any metrics [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "multivolume-2560". STEP: Found 6 events. 
Nov 27 02:26:52.268: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for security-context-54d39a28-d787-43ae-9155-1d1da53f4a30: {default-scheduler } Scheduled: Successfully assigned multivolume-2560/security-context-54d39a28-d787-43ae-9155-1d1da53f4a30 to k8s-agentpool1-27910301-vmss000000 Nov 27 02:26:52.269: INFO: At 2019-11-27 02:19:08 +0000 UTC - event for azure-disk2vw9v: {persistentvolume-controller } ProvisioningSucceeded: Successfully provisioned volume pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e using kubernetes.io/azure-disk Nov 27 02:26:52.269: INFO: At 2019-11-27 02:21:13 +0000 UTC - event for security-context-54d39a28-d787-43ae-9155-1d1da53f4a30: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[default-token-xml7b volume1]: timed out waiting for the condition Nov 27 02:26:52.269: INFO: At 2019-11-27 02:23:26 +0000 UTC - event for security-context-54d39a28-d787-43ae-9155-1d1da53f4a30: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[volume1 default-token-xml7b]: timed out waiting for the condition Nov 27 02:26:52.269: INFO: At 2019-11-27 02:23:56 +0000 UTC - event for security-context-54d39a28-d787-43ae-9155-1d1da53f4a30: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e" Nov 27 02:26:52.269: INFO: At 2019-11-27 02:25:42 +0000 UTC - event for security-context-54d39a28-d787-43ae-9155-1d1da53f4a30: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1 default-token-xml7b], unattached volumes=[volume1 default-token-xml7b]: timed out waiting for the condition Nov 27 02:26:52.271: INFO: POD NODE PHASE GRACE CONDITIONS Nov 27 02:26:52.271: INFO: Nov 27 02:26:52.273: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000000 
Nov 27 02:26:52.276: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000000 /api/v1/nodes/k8s-agentpool1-27910301-vmss000000 433df295-d1c5-449e-b3e0-cf5dcbb47db5 5932 0 2019-11-27 02:02:30 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000000 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 
15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:25:14 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:25:14 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:25:14 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:25:14 +0000 UTC,LastTransitionTime:2019-11-27 02:02:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000000,},NodeAddress{Type:InternalIP,Address:10.240.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5cbe44f20b024d2194231594a97855b1,SystemUUID:91700562-C649-C848-A3DD-1E514501D602,BootID:4dc3a524-9111-40e4-8765-ea1a0ccebb06,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 
k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 
mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-239fed30-3cd8-4faf-a0cc-913154702e71],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-239fed30-3cd8-4faf-a0cc-913154702e71,DevicePath:3,},},Config:nil,},} Nov 27 02:26:52.276: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:26:52.282: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000000 Nov 27 02:26:52.294: INFO: kube-proxy-956dz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:52.294: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:26:52.294: INFO: azure-ip-masq-agent-q77hb started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:52.294: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:26:52.294: INFO: keyvault-flexvolume-gt5vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:52.294: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 27 02:26:52.294: INFO: blobfuse-flexvol-installer-2c2vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:52.294: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 27 02:26:52.294: INFO: azure-injector started at 2019-11-27 02:20:44 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:52.294: INFO: Container azure-injector ready: false, restart count 0 Nov 27 02:26:52.294: INFO: azure-cni-networkmonitor-f9bdz started at 2019-11-27 02:02:32 
+0000 UTC (0+1 container statuses recorded) Nov 27 02:26:52.294: INFO: Container azure-cnms ready: true, restart count 0 W1127 02:26:52.297036 14146 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 27 02:26:52.315: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:26:52.315: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:26:52.318: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000001 /api/v1/nodes/k8s-agentpool1-27910301-vmss000001 d9661b2f-8dba-435f-9146-c53e15590b86 5430 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:1 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000001 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:1] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 
0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:22:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000001,},NodeAddress{Type:InternalIP,Address:10.240.0.35,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c1701de1799d4bda8bb81a1a74346a2a,SystemUUID:E7C6657A-6794-5043-BF92-958F99FF1F10,BootID:f3be9e11-e33b-462a-b7b8-5177e4209627,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:44100963,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:e51071a0670e003e3936190cfda74cfd6edaa39e0d16fc0b5a5f09dbf09dd1ff 
k8s.gcr.io/metrics-server-amd64:v0.3.4],SizeBytes:39944451,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:26:52.318: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:26:52.322: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000001 Nov 27 02:26:52.331: INFO: azure-cni-networkmonitor-wcplr started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:52.331: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:26:52.331: INFO: keyvault-flexvolume-q956v started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:52.331: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 27 02:26:52.331: INFO: coredns-56bc7dfcc6-pxrl6 started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:52.331: INFO: Container coredns ready: true, restart count 0 Nov 27 02:26:52.331: INFO: kube-proxy-wcptt started at 
2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:52.331: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:26:52.331: INFO: kubernetes-dashboard-65966766b9-hfq6z started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:52.331: INFO: Container kubernetes-dashboard ready: true, restart count 0 Nov 27 02:26:52.331: INFO: azure-ip-masq-agent-sxnbk started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:52.331: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:26:52.331: INFO: metrics-server-855b565c8f-nbxsd started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:52.331: INFO: Container metrics-server ready: true, restart count 0 Nov 27 02:26:52.331: INFO: blobfuse-flexvol-installer-x5wvc started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:52.331: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 W1127 02:26:52.335019 14146 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Nov 27 02:26:52.358: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:26:52.358: INFO: Logging node info for node k8s-master-27910301-0 Nov 27 02:26:52.360: INFO: Node Info: &Node{ObjectMeta:{k8s-master-27910301-0 /api/v1/nodes/k8s-master-27910301-0 c69dd001-f5f0-495c-861a-fcdc5c4685fa 6191 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-27910301-0 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master: node.kubernetes.io/instance-type:Standard_DS2_v2 topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachines/k8s-master-27910301-0,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 
DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:26:41 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:26:41 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:26:41 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:26:41 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-master-27910301-0,},NodeAddress{Type:InternalIP,Address:10.255.255.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4fbcdc9a6aec4bf5bbacc8e6c7d0effc,SystemUUID:6F669D5F-47AF-F34B-A88F-B20FF17F57DA,BootID:5ce46fbc-c556-476d-85de-4196f4d7878c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager-amd64:v9.0.2],SizeBytes:83076028,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:26:52.360: INFO: Logging kubelet events for node k8s-master-27910301-0 Nov 27 02:26:52.364: INFO: Logging pods the kubelet thinks is on node k8s-master-27910301-0 Nov 27 02:26:52.368: INFO: kube-proxy-krs8d started at 
2019-11-27 02:01:05 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:52.368: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:26:52.368: INFO: kube-addon-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:52.368: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 27 02:26:52.368: INFO: kube-apiserver-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:52.368: INFO: Container kube-apiserver ready: true, restart count 0 Nov 27 02:26:52.368: INFO: kube-controller-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:52.368: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 27 02:26:52.368: INFO: kube-scheduler-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:52.368: INFO: Container kube-scheduler ready: true, restart count 0 Nov 27 02:26:52.368: INFO: azure-ip-masq-agent-x5bzr started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:52.368: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:26:52.368: INFO: azure-cni-networkmonitor-5jpkd started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded) Nov 27 02:26:52.368: INFO: Container azure-cnms ready: true, restart count 0 W1127 02:26:52.371094 14146 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 27 02:26:52.383: INFO: Latency metrics for node k8s-master-27910301-0 Nov 27 02:26:52.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "multivolume-2560" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sazure\-disk\]\s\[Testpattern\:\sDynamic\sPV\s\(xfs\)\]\[Slow\]\svolumes\sshould\sstore\sdata$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:150 Nov 27 02:30:43.817: Failed to create injector pod: timed out waiting for the condition /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:597from junit_03.xml
[BeforeEach] [Testpattern: Dynamic PV (xfs)][Slow] volumes /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:101 [BeforeEach] [Testpattern: Dynamic PV (xfs)][Slow] volumes /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 27 02:20:31.591: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json STEP: Building a namespace api object, basename volume STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-1408 STEP: Waiting for a default service account to be provisioned in namespace [It] should store data /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:150 Nov 27 02:20:31.743: INFO: Creating resource for dynamic PV Nov 27 02:20:31.743: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(azure-disk) supported size:{ 1Mi} STEP: creating a StorageClass volume-1408-azure-disk-scp7ssx STEP: creating a claim Nov 27 02:20:31.747: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 27 02:20:31.752: INFO: Waiting up to 5m0s for PersistentVolumeClaims [azure-diskvs9rl] to have phase Bound Nov 27 02:20:31.758: INFO: PersistentVolumeClaim azure-diskvs9rl found but phase is Pending instead of Bound. Nov 27 02:20:33.762: INFO: PersistentVolumeClaim azure-diskvs9rl found but phase is Pending instead of Bound. Nov 27 02:20:35.765: INFO: PersistentVolumeClaim azure-diskvs9rl found but phase is Pending instead of Bound. Nov 27 02:20:37.768: INFO: PersistentVolumeClaim azure-diskvs9rl found but phase is Pending instead of Bound. Nov 27 02:20:39.773: INFO: PersistentVolumeClaim azure-diskvs9rl found but phase is Pending instead of Bound.
Nov 27 02:20:41.776: INFO: PersistentVolumeClaim azure-diskvs9rl found but phase is Pending instead of Bound. Nov 27 02:20:43.779: INFO: PersistentVolumeClaim azure-diskvs9rl found and phase=Bound (12.027394053s) STEP: starting azure-injector Nov 27 02:25:43.806: INFO: Waiting for pod azure-injector to disappear Nov 27 02:25:43.808: INFO: Pod azure-injector still exists [... the identical "Waiting for pod azure-injector to disappear" / "Pod azure-injector still exists" pair repeats every 2s from 02:25:45 through 02:30:43; elided ...] Nov 27 02:30:43.813: INFO: Waiting for pod azure-injector to disappear Nov 27 02:30:43.816: INFO: Pod azure-injector still exists Nov 27 02:30:43.817: FAIL: Failed to create injector pod: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework/volume.InjectContent(0xc000c10000, 0xc002f6cd90, 0xb, 0x4a4dbcb, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:597 +0x944 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).defineTests.func3() /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:181 +0x3c9 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0028ec800) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:111 +0x30a k8s.io/kubernetes/test/e2e.TestE2E(0xc0028ec800) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:112 +0x2b testing.tRunner(0xc0028ec800, 0x4c34198) /usr/local/go/src/testing/testing.go:909 +0xc9 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:960 +0x350 STEP: cleaning the environment after azure Nov 27 02:30:43.817: INFO: Deleting pod "azure-client" in namespace "volume-1408" STEP: Deleting pvc Nov 27 02:30:43.820: INFO: Deleting PersistentVolumeClaim "azure-diskvs9rl" Nov 27 02:30:43.823: INFO: Waiting up to 5m0s for PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 to get deleted Nov 27 02:30:43.826: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (2.826522ms) Nov 27 02:30:48.829: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (5.006144427s) Nov 27 02:30:53.834: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (10.010799939s) Nov 27 02:30:58.840: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (15.016311854s) Nov 27 02:31:03.844: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (20.020480056s) Nov 27 02:31:08.847: INFO:
PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (25.023725147s) Nov 27 02:31:13.851: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (30.02774584s) Nov 27 02:31:18.855: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (35.031443428s) Nov 27 02:31:23.859: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (40.035693017s) Nov 27 02:31:28.864: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (45.040523607s) Nov 27 02:31:33.867: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (50.044055684s) Nov 27 02:31:38.871: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (55.047611358s) Nov 27 02:31:43.875: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (1m0.05128723s) Nov 27 02:31:48.878: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (1m5.054670496s) Nov 27 02:31:53.881: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (1m10.057466155s) Nov 27 02:31:58.884: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (1m15.060953716s) Nov 27 02:32:03.887: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (1m20.064162971s) Nov 27 02:32:08.891: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (1m25.067434825s) Nov 27 02:32:13.894: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (1m30.070345072s) Nov 27 02:32:18.898: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (1m35.074406726s) Nov 27 02:32:23.901: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (1m40.077462169s) Nov 27 02:32:28.904: INFO: PersistentVolume 
pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (1m45.080922912s) Nov 27 02:32:33.908: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (1m50.084280852s) Nov 27 02:32:38.911: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (1m55.087588688s) Nov 27 02:32:43.914: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (2m0.09070522s) Nov 27 02:32:48.917: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (2m5.094038851s) Nov 27 02:32:53.920: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (2m10.096873876s) Nov 27 02:32:58.924: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (2m15.100370202s) Nov 27 02:33:03.928: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (2m20.104793834s) Nov 27 02:33:08.933: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (2m25.110020769s) Nov 27 02:33:13.945: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (2m30.122112155s) Nov 27 02:33:18.948: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (2m35.125001266s) Nov 27 02:33:23.952: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (2m40.128788182s) Nov 27 02:33:28.956: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (2m45.132300394s) Nov 27 02:33:33.960: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (2m50.136562408s) Nov 27 02:33:38.963: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (2m55.139640711s) Nov 27 02:33:43.967: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (3m0.143480918s) Nov 27 02:33:48.970: INFO: PersistentVolume 
pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (3m5.146716017s) Nov 27 02:33:53.973: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (3m10.150042714s) Nov 27 02:33:58.976: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (3m15.152987706s) Nov 27 02:34:03.980: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (3m20.156291699s) Nov 27 02:34:08.983: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (3m25.159586389s) Nov 27 02:34:13.986: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (3m30.163171179s) Nov 27 02:34:18.990: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (3m35.16715277s) Nov 27 02:34:24.000: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (3m40.1764907s) Nov 27 02:34:29.004: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (3m45.18089159s) Nov 27 02:34:34.009: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (3m50.18565868s) Nov 27 02:34:39.013: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (3m55.189912664s) Nov 27 02:34:44.017: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (4m0.19342874s) Nov 27 02:34:49.021: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (4m5.197424817s) Nov 27 02:34:54.024: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (4m10.201167091s) Nov 27 02:34:59.028: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (4m15.204482158s) Nov 27 02:35:04.032: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (4m20.208737232s) Nov 27 02:35:09.036: INFO: PersistentVolume 
pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (4m25.212385898s) Nov 27 02:35:14.041: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (4m30.217345972s) Nov 27 02:35:19.045: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (4m35.221449638s) Nov 27 02:35:24.049: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (4m40.2252973s) Nov 27 02:35:29.052: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (4m45.228287353s) Nov 27 02:35:34.055: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (4m50.231820508s) Nov 27 02:35:39.058: INFO: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 found and phase=Bound (4m55.235113559s) STEP: Deleting sc Nov 27 02:35:44.064: FAIL: while cleaning up resource Unexpected error: <errors.aggregate | len:1, cap:1>: [ [ { error: { cause: { s: "PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 still exists within 5m0s", }, msg: "Persistent Volume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 not deleted by dynamic provisioner", }, stack: [0x3995499, 0x39d7a55, 0x39d7bed, 0x4329a2, 0x1593131, 0x4329a2, 0x7e3328, 0x159293c, 0x18888b5, 0x1c6bef4, 0x1c6be93, 0x39d8079, 0x7d30a8, 0x7d2cff, 0x7d21a4, 0x7d9105, 0x7d8961, 0x7de7df, 0x7de300, 0x7ddb47, 0x7e012b, 0x7e2c87, 0x7e29cd, 0x3abb0ba, 0x3abfcab, 0x516669, 0x462e51], }, ], ] Persistent Volume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 not deleted by dynamic provisioner: PersistentVolume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 still exists within 5m0s occurred [AfterEach] [Testpattern: Dynamic PV (xfs)][Slow] volumes /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "volume-1408". STEP: Found 8 events. 
Nov 27 02:35:44.067: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for azure-injector: {default-scheduler } Scheduled: Successfully assigned volume-1408/azure-injector to k8s-agentpool1-27910301-vmss000000 Nov 27 02:35:44.068: INFO: At 2019-11-27 02:20:42 +0000 UTC - event for azure-diskvs9rl: {persistentvolume-controller } ProvisioningSucceeded: Successfully provisioned volume pvc-239fed30-3cd8-4faf-a0cc-913154702e71 using kubernetes.io/azure-disk Nov 27 02:35:44.068: INFO: At 2019-11-27 02:22:47 +0000 UTC - event for azure-injector: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[azure-volume-0], unattached volumes=[default-token-ggbwd azure-volume-0]: timed out waiting for the condition Nov 27 02:35:44.068: INFO: At 2019-11-27 02:25:01 +0000 UTC - event for azure-injector: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[azure-volume-0], unattached volumes=[azure-volume-0 default-token-ggbwd]: timed out waiting for the condition Nov 27 02:35:44.068: INFO: At 2019-11-27 02:25:07 +0000 UTC - event for azure-injector: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-239fed30-3cd8-4faf-a0cc-913154702e71" Nov 27 02:35:44.068: INFO: At 2019-11-27 02:27:03 +0000 UTC - event for azure-injector: {kubelet k8s-agentpool1-27910301-vmss000000} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine Nov 27 02:35:44.068: INFO: At 2019-11-27 02:27:04 +0000 UTC - event for azure-injector: {kubelet k8s-agentpool1-27910301-vmss000000} Created: Created container azure-injector Nov 27 02:35:44.068: INFO: At 2019-11-27 02:27:05 +0000 UTC - event for azure-injector: {kubelet k8s-agentpool1-27910301-vmss000000} Started: Started container azure-injector Nov 27 02:35:44.070: INFO: POD NODE PHASE GRACE CONDITIONS Nov 27 02:35:44.070: INFO: azure-injector k8s-agentpool1-27910301-vmss000000 
Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-27 02:20:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-11-27 02:27:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-11-27 02:27:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-27 02:20:43 +0000 UTC }] Nov 27 02:35:44.070: INFO: Nov 27 02:35:44.072: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:35:44.075: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000000 /api/v1/nodes/k8s-agentpool1-27910301-vmss000000 433df295-d1c5-449e-b3e0-cf5dcbb47db5 7476 0 2019-11-27 02:02:30 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000000 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:35:16 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:35:16 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:35:16 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:35:16 +0000 UTC,LastTransitionTime:2019-11-27 02:02:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000000,},NodeAddress{Type:InternalIP,Address:10.240.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5cbe44f20b024d2194231594a97855b1,SystemUUID:91700562-C649-C848-A3DD-1E514501D602,BootID:4dc3a524-9111-40e4-8765-ea1a0ccebb06,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 
gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-239fed30-3cd8-4faf-a0cc-913154702e71],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-239fed30-3cd8-4faf-a0cc-913154702e71,DevicePath:3,},},Config:nil,},} Nov 27 02:35:44.075: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:35:44.080: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000000 Nov 27 02:35:44.280: INFO: keyvault-flexvolume-gt5vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:35:44.280: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 27 02:35:44.280: INFO: blobfuse-flexvol-installer-2c2vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:35:44.280: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 27 02:35:44.280: INFO: azure-injector started at 2019-11-27 02:20:44 +0000 UTC (0+1 container statuses recorded) Nov 27 02:35:44.280: 
INFO: Container azure-injector ready: true, restart count 0 Nov 27 02:35:44.280: INFO: kube-proxy-956dz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:35:44.280: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:35:44.280: INFO: azure-ip-masq-agent-q77hb started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:35:44.280: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:35:44.280: INFO: azure-cni-networkmonitor-f9bdz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:35:44.280: INFO: Container azure-cnms ready: true, restart count 0 W1127 02:35:44.283345 14161 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 27 02:35:44.694: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:35:44.694: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:35:44.698: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000001 /api/v1/nodes/k8s-agentpool1-27910301-vmss000001 d9661b2f-8dba-435f-9146-c53e15590b86 7019 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:1 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000001 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:1] map[node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:32:03 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:32:03 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:32:03 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:32:03 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000001,},NodeAddress{Type:InternalIP,Address:10.240.0.35,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c1701de1799d4bda8bb81a1a74346a2a,SystemUUID:E7C6657A-6794-5043-BF92-958F99FF1F10,BootID:f3be9e11-e33b-462a-b7b8-5177e4209627,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:44100963,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:e51071a0670e003e3936190cfda74cfd6edaa39e0d16fc0b5a5f09dbf09dd1ff 
k8s.gcr.io/metrics-server-amd64:v0.3.4],SizeBytes:39944451,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:35:44.698: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:35:44.703: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000001 Nov 27 02:35:44.734: INFO: coredns-56bc7dfcc6-pxrl6 started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:35:44.734: INFO: Container coredns ready: true, restart count 0 Nov 27 02:35:44.734: INFO: kube-proxy-wcptt started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:35:44.734: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:35:44.734: INFO: kubernetes-dashboard-65966766b9-hfq6z started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:35:44.734: INFO: Container kubernetes-dashboard ready: true, restart count 0 Nov 27 02:35:44.734: INFO: azure-cni-networkmonitor-wcplr 
started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:35:44.734: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:35:44.734: INFO: keyvault-flexvolume-q956v started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:35:44.734: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 27 02:35:44.734: INFO: blobfuse-flexvol-installer-x5wvc started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:35:44.734: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 27 02:35:44.734: INFO: azure-ip-masq-agent-sxnbk started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:35:44.734: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:35:44.734: INFO: metrics-server-855b565c8f-nbxsd started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:35:44.734: INFO: Container metrics-server ready: true, restart count 0 W1127 02:35:44.736898 14161 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Nov 27 02:35:44.763: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:35:44.763: INFO: Logging node info for node k8s-master-27910301-0 Nov 27 02:35:44.766: INFO: Node Info: &Node{ObjectMeta:{k8s-master-27910301-0 /api/v1/nodes/k8s-master-27910301-0 c69dd001-f5f0-495c-861a-fcdc5c4685fa 6966 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-27910301-0 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master: node.kubernetes.io/instance-type:Standard_DS2_v2 topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachines/k8s-master-27910301-0,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 
DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:31:41 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:31:41 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:31:41 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:31:41 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-master-27910301-0,},NodeAddress{Type:InternalIP,Address:10.255.255.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4fbcdc9a6aec4bf5bbacc8e6c7d0effc,SystemUUID:6F669D5F-47AF-F34B-A88F-B20FF17F57DA,BootID:5ce46fbc-c556-476d-85de-4196f4d7878c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager-amd64:v9.0.2],SizeBytes:83076028,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:35:44.766: INFO: Logging kubelet events for node k8s-master-27910301-0 Nov 27 02:35:44.770: INFO: Logging pods the kubelet thinks is on node k8s-master-27910301-0 Nov 27 02:35:44.792: INFO: kube-proxy-krs8d started at 
2019-11-27 02:01:05 +0000 UTC (0+1 container statuses recorded) Nov 27 02:35:44.792: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:35:44.793: INFO: kube-addon-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:35:44.793: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 27 02:35:44.793: INFO: kube-apiserver-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:35:44.793: INFO: Container kube-apiserver ready: true, restart count 0 Nov 27 02:35:44.793: INFO: kube-controller-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:35:44.793: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 27 02:35:44.793: INFO: kube-scheduler-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:35:44.793: INFO: Container kube-scheduler ready: true, restart count 0 Nov 27 02:35:44.793: INFO: azure-ip-masq-agent-x5bzr started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded) Nov 27 02:35:44.793: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:35:44.793: INFO: azure-cni-networkmonitor-5jpkd started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded) Nov 27 02:35:44.793: INFO: Container azure-cnms ready: true, restart count 0 W1127 02:35:44.795720 14161 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 27 02:35:44.811: INFO: Latency metrics for node k8s-master-27910301-0 Nov 27 02:35:44.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-1408" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sazure\-disk\]\s\[Testpattern\:\sInline\-volume\s\(xfs\)\]\[Slow\]\svolumes\sshould\sstore\sdata$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:150
Nov 27 02:16:31.378: Failed to create client pod: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:580
from junit_10.xml
[BeforeEach] [Testpattern: Inline-volume (xfs)][Slow] volumes
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Inline-volume (xfs)][Slow] volumes
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 27 02:03:43.227: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
STEP: Building a namespace api object, basename volume
Nov 27 02:03:43.290: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Nov 27 02:03:43.303: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-2369
STEP: Waiting for a default service account to be provisioned in namespace
[It] should store data
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:150
STEP: creating a test azure disk volume
Nov 27 02:03:53.940: INFO: Successfully created a new PD: "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8".
Nov 27 02:03:53.940: INFO: Creating resource for inline volume
STEP: starting azure-injector
STEP: Writing text file contents in the container.
Nov 27 02:06:17.963: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://kubetest-f1185e95-10b6-11ea-8290-02424e92b20f.westus2.cloudapp.azure.com --kubeconfig=/workspace/aks490311382/kubeconfig/kubeconfig.westus2.json exec azure-injector --namespace=volume-2369 -- /bin/sh -c echo 'Hello from azure-disk from namespace volume-2369' > /opt/0/index.html'
Nov 27 02:06:18.255: INFO: stderr: ""
Nov 27 02:06:18.255: INFO: stdout: ""
STEP: Checking that text file contents are perfect.
Nov 27 02:06:18.255: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://kubetest-f1185e95-10b6-11ea-8290-02424e92b20f.westus2.cloudapp.azure.com --kubeconfig=/workspace/aks490311382/kubeconfig/kubeconfig.westus2.json exec azure-injector --namespace=volume-2369 -- cat /opt/0/index.html'
Nov 27 02:06:18.577: INFO: stderr: ""
Nov 27 02:06:18.577: INFO: stdout: "Hello from azure-disk from namespace volume-2369\n"
Nov 27 02:06:18.577: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/0] Namespace:volume-2369 PodName:azure-injector ContainerName:azure-injector Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:06:18.577: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
Nov 27 02:06:18.789: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/0] Namespace:volume-2369 PodName:azure-injector ContainerName:azure-injector Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 02:06:18.789: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json
STEP: Checking fsType is correct.
Nov 27 02:06:18.987: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://kubetest-f1185e95-10b6-11ea-8290-02424e92b20f.westus2.cloudapp.azure.com --kubeconfig=/workspace/aks490311382/kubeconfig/kubeconfig.westus2.json exec azure-injector --namespace=volume-2369 -- grep /opt/0 /proc/mounts'
Nov 27 02:06:19.303: INFO: stderr: ""
Nov 27 02:06:19.303: INFO: stdout: "/dev/sdc /opt/0 xfs rw,relatime,attr2,inode64,noquota 0 0\n"
STEP: Deleting pod azure-injector in namespace volume-2369
Nov 27 02:06:19.307: INFO: Waiting for pod azure-injector to disappear
Nov 27 02:06:19.315: INFO: Pod azure-injector still exists
[... the same "Waiting for pod azure-injector to disappear" / "Pod azure-injector still exists" pair repeated every 2s through 02:06:29 ...]
Nov 27 02:06:31.315: INFO: Waiting for pod azure-injector to disappear
Nov 27 02:06:31.331: INFO: Pod azure-injector no longer exists
STEP: starting azure-client
Nov 27 02:11:31.366: INFO: Waiting for pod azure-client to disappear
Nov 27 02:11:31.372: INFO: Pod azure-client still exists
[... the same "Waiting for pod azure-client to disappear" / "Pod azure-client still exists" pair repeated every 2s from 02:11:33 through 02:16:31 ...]
Nov 27 02:16:31.375: INFO: Waiting for pod azure-client to disappear
Nov 27 02:16:31.377: INFO: Pod azure-client still exists
Nov 27 02:16:31.378: FAIL: Failed to create client pod: timed out waiting for the condition
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/volume.TestVolumeClient(0xc0018af540, 0xc001f0f000, 0xb, 0x4a4dbcb, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:580 +0x15a
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).defineTests.func3()
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:183 +0x4ad
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002009000)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:111 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc002009000)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:112 +0x2b
testing.tRunner(0xc002009000, 0x4c34198)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
STEP: cleaning the environment after azure
Nov 27 02:16:31.378: INFO: Deleting pod "azure-client" in namespace "volume-2369"
Nov 27 02:16:31.381: INFO: Wait up to 5m0s for pod "azure-client" to be fully deleted
Nov 27 02:16:41.618: INFO: failed to delete Azure volume
"/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0."
Nov 27 02:16:41.618: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0."
[... the identical "failed to delete Azure volume" / "Couldn't delete PD ..., sleeping 5s" error pair repeated at 02:16:46, 02:16:52, 02:16:57, 02:17:02, 02:17:07, 02:17:12, and 02:17:18 ...]
Nov 27 02:17:18.070: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:23.271: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:23.271: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. 
Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:28.475: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:28.475: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." 
Nov 27 02:17:33.658: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:33.658: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:38.864: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. 
Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:38.865: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:44.067: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." 
Nov 27 02:17:44.067: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:49.267: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:49.267: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. 
Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:54.448: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:54.448: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." 
Nov 27 02:17:59.669: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:59.669: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:04.873: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. 
Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:04.873: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:10.070: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." 
Nov 27 02:18:10.070: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:15.275: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:15.275: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. 
Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:20.493: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:20.493: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." 
Nov 27 02:18:25.688: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:25.688: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:30.870: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. 
Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:30.870: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:36.130: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." 
Nov 27 02:18:36.130: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:41.309: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:41.309: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. 
Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:46.534: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:46.534: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." 
(The identical `failed to delete Azure volume` / `Couldn't delete PD …, sleeping 5s` error pair, with Code="OperationNotAllowed" because disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 was still attached to k8s-agentpool1-27910301-vmss_0, repeated every ~5s from 02:18:51.731 through 02:20:56.909; verbatim repeats omitted.)
Nov 27 02:21:02.110: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:21:02.110: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:21:12.246: INFO: Successfully deleted PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-116c204c-14d5-417a-b957-6fa6a5f747b8". 
Nov 27 02:21:12.246: INFO: In-tree plugin kubernetes.io/azure-disk is not migrated, not validating any metrics [AfterEach] [Testpattern: Inline-volume (xfs)][Slow] volumes /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "volume-2369". STEP: Found 15 events. Nov 27 02:21:12.328: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for azure-client: {default-scheduler } Scheduled: Successfully assigned volume-2369/azure-client to k8s-agentpool1-27910301-vmss000000 Nov 27 02:21:12.328: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for azure-injector: {default-scheduler } Scheduled: Successfully assigned volume-2369/azure-injector to k8s-agentpool1-27910301-vmss000000 Nov 27 02:21:12.328: INFO: At 2019-11-27 02:05:09 +0000 UTC - event for azure-injector: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "azure-volume-0" Nov 27 02:21:12.328: INFO: At 2019-11-27 02:05:56 +0000 UTC - event for azure-injector: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[azure-volume-0], unattached volumes=[azure-volume-0 default-token-h29nq]: timed out waiting for the condition Nov 27 02:21:12.328: INFO: At 2019-11-27 02:06:12 +0000 UTC - event for azure-injector: {kubelet k8s-agentpool1-27910301-vmss000000} Pulling: Pulling image "docker.io/library/busybox:1.29" Nov 27 02:21:12.328: INFO: At 2019-11-27 02:06:15 +0000 UTC - event for azure-injector: {kubelet k8s-agentpool1-27910301-vmss000000} Started: Started container azure-injector Nov 27 02:21:12.328: INFO: At 2019-11-27 02:06:15 +0000 UTC - event for azure-injector: {kubelet k8s-agentpool1-27910301-vmss000000} Created: Created container azure-injector Nov 27 02:21:12.328: INFO: At 2019-11-27 02:06:15 +0000 UTC - event for azure-injector: {kubelet k8s-agentpool1-27910301-vmss000000} Pulled: Successfully pulled image 
"docker.io/library/busybox:1.29" Nov 27 02:21:12.328: INFO: At 2019-11-27 02:06:19 +0000 UTC - event for azure-injector: {kubelet k8s-agentpool1-27910301-vmss000000} Killing: Stopping container azure-injector Nov 27 02:21:12.328: INFO: At 2019-11-27 02:08:34 +0000 UTC - event for azure-client: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[azure-volume-0], unattached volumes=[azure-volume-0 default-token-h29nq]: timed out waiting for the condition Nov 27 02:21:12.328: INFO: At 2019-11-27 02:14:03 +0000 UTC - event for azure-client: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "azure-volume-0" Nov 27 02:21:12.328: INFO: At 2019-11-27 02:14:49 +0000 UTC - event for azure-client: {kubelet k8s-agentpool1-27910301-vmss000000} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine Nov 27 02:21:12.328: INFO: At 2019-11-27 02:14:50 +0000 UTC - event for azure-client: {kubelet k8s-agentpool1-27910301-vmss000000} Started: Started container azure-client Nov 27 02:21:12.328: INFO: At 2019-11-27 02:14:50 +0000 UTC - event for azure-client: {kubelet k8s-agentpool1-27910301-vmss000000} Created: Created container azure-client Nov 27 02:21:12.328: INFO: At 2019-11-27 02:16:31 +0000 UTC - event for azure-client: {kubelet k8s-agentpool1-27910301-vmss000000} Killing: Stopping container azure-client Nov 27 02:21:12.330: INFO: POD NODE PHASE GRACE CONDITIONS Nov 27 02:21:12.330: INFO: Nov 27 02:21:12.333: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:21:12.335: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000000 /api/v1/nodes/k8s-agentpool1-27910301-vmss000000 433df295-d1c5-449e-b3e0-cf5dcbb47db5 5091 0 2019-11-27 02:02:30 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux 
failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000000 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:20:53 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:20:53 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:20:53 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:20:53 +0000 UTC,LastTransitionTime:2019-11-27 02:02:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000000,},NodeAddress{Type:InternalIP,Address:10.240.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5cbe44f20b024d2194231594a97855b1,SystemUUID:91700562-C649-C848-A3DD-1E514501D602,BootID:4dc3a524-9111-40e4-8765-ea1a0ccebb06,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b 
k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-239fed30-3cd8-4faf-a0cc-913154702e71 kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-73246121-d31d-4680-9b21-4b7330d473b8 
kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:21:12.335: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:21:12.339: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000000 Nov 27 02:21:12.351: INFO: kube-proxy-956dz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.351: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:21:12.351: INFO: azure-ip-masq-agent-q77hb started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.351: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:21:12.351: INFO: keyvault-flexvolume-gt5vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.351: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 27 02:21:12.351: INFO: blobfuse-flexvol-installer-2c2vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.351: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 27 02:21:12.351: INFO: volume-prep-provisioning-1676 started at 2019-11-27 02:20:44 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.351: INFO: Container init-volume-provisioning-1676 ready: false, restart count 0 Nov 27 02:21:12.351: INFO: azure-injector started at 2019-11-27 02:20:44 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.351: INFO: Container azure-injector ready: false, restart count 0 Nov 27 02:21:12.351: INFO: azure-cni-networkmonitor-f9bdz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.351: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 
02:21:12.351: INFO: azure-io-client started at 2019-11-27 02:18:34 +0000 UTC (1+1 container statuses recorded) Nov 27 02:21:12.351: INFO: Init container azure-io-init ready: false, restart count 0 Nov 27 02:21:12.351: INFO: Container azure-io-client ready: false, restart count 0 Nov 27 02:21:12.351: INFO: security-context-54d39a28-d787-43ae-9155-1d1da53f4a30 started at 2019-11-27 02:19:10 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.351: INFO: Container write-pod ready: false, restart count 0 W1127 02:21:12.354795 14148 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 27 02:21:12.378: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:21:12.378: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:21:12.381: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000001 /api/v1/nodes/k8s-agentpool1-27910301-vmss000001 d9661b2f-8dba-435f-9146-c53e15590b86 4627 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:1 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000001 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:1] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] 
[]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:19:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:19:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:19:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:19:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000001,},NodeAddress{Type:InternalIP,Address:10.240.0.35,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c1701de1799d4bda8bb81a1a74346a2a,SystemUUID:E7C6657A-6794-5043-BF92-958F99FF1F10,BootID:f3be9e11-e33b-462a-b7b8-5177e4209627,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:44100963,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:e51071a0670e003e3936190cfda74cfd6edaa39e0d16fc0b5a5f09dbf09dd1ff 
k8s.gcr.io/metrics-server-amd64:v0.3.4],SizeBytes:39944451,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-63caa552-e092-43fe-b924-ef6a44e177a3 kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-ff9df561-0196-4319-bfaf-223abedd298f],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:21:12.381: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:21:12.385: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000001 Nov 27 02:21:12.391: INFO: azure-ip-masq-agent-sxnbk started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.392: INFO: Container azure-ip-masq-agent 
ready: true, restart count 0 Nov 27 02:21:12.392: INFO: metrics-server-855b565c8f-nbxsd started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.392: INFO: Container metrics-server ready: true, restart count 0 Nov 27 02:21:12.392: INFO: blobfuse-flexvol-installer-x5wvc started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.392: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 27 02:21:12.392: INFO: azure-cni-networkmonitor-wcplr started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.392: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:21:12.392: INFO: keyvault-flexvolume-q956v started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.392: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 27 02:21:12.392: INFO: coredns-56bc7dfcc6-pxrl6 started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.392: INFO: Container coredns ready: true, restart count 0 Nov 27 02:21:12.392: INFO: kube-proxy-wcptt started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.392: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:21:12.392: INFO: kubernetes-dashboard-65966766b9-hfq6z started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.392: INFO: Container kubernetes-dashboard ready: true, restart count 0 Nov 27 02:21:12.392: INFO: security-context-d6c4054e-cb3a-4228-9e3b-db7254d4c5b4 started at 2019-11-27 02:16:51 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.392: INFO: Container write-pod ready: false, restart count 0 W1127 02:21:12.395227 14148 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Nov 27 02:21:12.414: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:21:12.414: INFO: Logging node info for node k8s-master-27910301-0 Nov 27 02:21:12.416: INFO: Node Info: &Node{ObjectMeta:{k8s-master-27910301-0 /api/v1/nodes/k8s-master-27910301-0 c69dd001-f5f0-495c-861a-fcdc5c4685fa 4091 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-27910301-0 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master: node.kubernetes.io/instance-type:Standard_DS2_v2 topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachines/k8s-master-27910301-0,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 
DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:16:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:16:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:16:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:16:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-master-27910301-0,},NodeAddress{Type:InternalIP,Address:10.255.255.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4fbcdc9a6aec4bf5bbacc8e6c7d0effc,SystemUUID:6F669D5F-47AF-F34B-A88F-B20FF17F57DA,BootID:5ce46fbc-c556-476d-85de-4196f4d7878c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager-amd64:v9.0.2],SizeBytes:83076028,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:21:12.417: INFO: Logging kubelet events for node k8s-master-27910301-0 Nov 27 02:21:12.420: INFO: Logging pods the kubelet thinks is on node k8s-master-27910301-0 Nov 27 02:21:12.425: INFO: 
kube-addon-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.425: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 27 02:21:12.425: INFO: kube-apiserver-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.425: INFO: Container kube-apiserver ready: true, restart count 0 Nov 27 02:21:12.425: INFO: kube-controller-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.425: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 27 02:21:12.425: INFO: kube-scheduler-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.425: INFO: Container kube-scheduler ready: true, restart count 0 Nov 27 02:21:12.425: INFO: azure-ip-masq-agent-x5bzr started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.425: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:21:12.425: INFO: azure-cni-networkmonitor-5jpkd started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.425: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:21:12.425: INFO: kube-proxy-krs8d started at 2019-11-27 02:01:05 +0000 UTC (0+1 container statuses recorded) Nov 27 02:21:12.425: INFO: Container kube-proxy ready: true, restart count 0 W1127 02:21:12.428130 14148 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 27 02:21:12.445: INFO: Latency metrics for node k8s-master-27910301-0 Nov 27 02:21:12.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-2369" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sazure\-disk\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(xfs\)\]\[Slow\]\svolumes\sshould\sstore\sdata$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:150 Nov 27 02:16:27.251: Failed to create client pod: timed out waiting for the condition /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:580 from junit_09.xml
[BeforeEach] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:101 [BeforeEach] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 27 02:03:42.961: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json STEP: Building a namespace api object, basename volume STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in volume-4051 STEP: Waiting for a default service account to be provisioned in namespace [It] should store data /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:150 STEP: creating a test azure disk volume Nov 27 02:03:53.694: INFO: Successfully created a new PD: "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104". Nov 27 02:03:53.694: INFO: Creating resource for pre-provisioned PV Nov 27 02:03:53.694: INFO: Creating PVC and PV STEP: Creating a PVC followed by a PV Nov 27 02:03:53.706: INFO: Waiting for PV azure-disk-b44l6 to bind to PVC pvc-z22mj Nov 27 02:03:53.706: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-z22mj] to have phase Bound Nov 27 02:03:53.724: INFO: PersistentVolumeClaim pvc-z22mj found but phase is Pending instead of Bound. Nov 27 02:03:55.727: INFO: PersistentVolumeClaim pvc-z22mj found but phase is Pending instead of Bound. Nov 27 02:03:57.731: INFO: PersistentVolumeClaim pvc-z22mj found but phase is Pending instead of Bound.
Nov 27 02:03:59.735: INFO: PersistentVolumeClaim pvc-z22mj found and phase=Bound (6.029438644s) Nov 27 02:03:59.735: INFO: Waiting up to 3m0s for PersistentVolume azure-disk-b44l6 to have phase Bound Nov 27 02:03:59.738: INFO: PersistentVolume azure-disk-b44l6 found and phase=Bound (2.676921ms) STEP: starting azure-injector STEP: Writing text file contents in the container. Nov 27 02:06:19.762: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://kubetest-f1185e95-10b6-11ea-8290-02424e92b20f.westus2.cloudapp.azure.com --kubeconfig=/workspace/aks490311382/kubeconfig/kubeconfig.westus2.json exec azure-injector --namespace=volume-4051 -- /bin/sh -c echo 'Hello from azure-disk from namespace volume-4051' > /opt/0/index.html' Nov 27 02:06:20.073: INFO: stderr: "" Nov 27 02:06:20.073: INFO: stdout: "" STEP: Checking that text file contents are perfect. Nov 27 02:06:20.074: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://kubetest-f1185e95-10b6-11ea-8290-02424e92b20f.westus2.cloudapp.azure.com --kubeconfig=/workspace/aks490311382/kubeconfig/kubeconfig.westus2.json exec azure-injector --namespace=volume-4051 -- cat /opt/0/index.html' Nov 27 02:06:20.355: INFO: stderr: "" Nov 27 02:06:20.355: INFO: stdout: "Hello from azure-disk from namespace volume-4051\n" Nov 27 02:06:20.355: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/0] Namespace:volume-4051 PodName:azure-injector ContainerName:azure-injector Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:06:20.355: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json Nov 27 02:06:20.638: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/0] Namespace:volume-4051 PodName:azure-injector ContainerName:azure-injector Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 02:06:20.638: INFO: >>> kubeConfig: /workspace/aks490311382/kubeconfig/kubeconfig.westus2.json STEP: Checking fsType is correct. Nov 27 02:06:20.823: INFO: Running '/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://kubetest-f1185e95-10b6-11ea-8290-02424e92b20f.westus2.cloudapp.azure.com --kubeconfig=/workspace/aks490311382/kubeconfig/kubeconfig.westus2.json exec azure-injector --namespace=volume-4051 -- grep /opt/0 /proc/mounts' Nov 27 02:06:21.201: INFO: stderr: "" Nov 27 02:06:21.201: INFO: stdout: "/dev/sdc /opt/0 xfs rw,relatime,attr2,inode64,noquota 0 0\n" STEP: Deleting pod azure-injector in namespace volume-4051 Nov 27 02:06:21.206: INFO: Waiting for pod azure-injector to disappear Nov 27 02:06:21.213: INFO: Pod azure-injector still exists Nov 27 02:06:23.213: INFO: Waiting for pod azure-injector to disappear Nov 27 02:06:23.217: INFO: Pod azure-injector still exists Nov 27 02:06:25.213: INFO: Waiting for pod azure-injector to disappear Nov 27 02:06:25.217: INFO: Pod azure-injector still exists Nov 27 02:06:27.213: INFO: Waiting for pod azure-injector to disappear Nov 27 02:06:27.216: INFO: Pod azure-injector no longer exists STEP: starting azure-client Nov 27 02:11:27.242: INFO: Waiting for pod azure-client to disappear Nov 27 02:11:27.245: INFO: Pod azure-client still exists
[the "Waiting for pod azure-client to disappear" / "Pod azure-client still exists" pair repeats every 2s from 02:11:27 through 02:16:27; identical repetitions elided]
Nov 27 02:16:27.251: FAIL: Failed to create client pod: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework/volume.TestVolumeClient(0xc00198fa40, 0xc0021d3790, 0xb, 0x4a4dbcb, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:580 +0x15a k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).defineTests.func3() /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:183 +0x4ad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0022a2c00) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:111 +0x30a k8s.io/kubernetes/test/e2e.TestE2E(0xc0022a2c00) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:112 +0x2b testing.tRunner(0xc0022a2c00, 0x4c34198) /usr/local/go/src/testing/testing.go:909 +0xc9 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:960 +0x350 STEP: cleaning the environment after azure Nov 27 02:16:27.251: INFO: Deleting pod "azure-client" in namespace "volume-4051" Nov 27 02:16:27.256: INFO: Wait up to 5m0s for pod "azure-client" to be fully deleted STEP: Deleting pv and pvc Nov 27 02:16:35.266: INFO: Deleting PersistentVolumeClaim "pvc-z22mj" Nov 27 02:16:35.270: INFO: Deleting PersistentVolume "azure-disk-b44l6" Nov 27 02:16:35.486: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104": compute.DisksClient#Delete:
Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:16:35.486: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0."
[the identical "failed to delete Azure volume" / "Couldn't delete PD ..., sleeping 5s" pair, with the same OperationNotAllowed error, repeats at 02:16:40.688, 02:16:45.882, 02:16:51.073, 02:16:56.273, 02:17:01.461, and 02:17:06.653; identical repetitions elided]
Nov 27 02:17:06.653: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:11.846: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:11.846: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. 
Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:17.055: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:17.055: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." 
Nov 27 02:17:22.266: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:22.266: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:27.453: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. 
Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:27.453: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:32.643: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." 
Nov 27 02:17:32.643: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:37.847: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:37.847: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. 
Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:43.047: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:43.047: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." 
Nov 27 02:17:48.237: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:48.237: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:53.437: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. 
Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:53.437: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:17:58.647: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." 
Nov 27 02:17:58.647: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:03.883: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:03.883: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. 
Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:09.095: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:09.095: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." 
Nov 27 02:18:14.348: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:14.348: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:19.557: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. 
Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:19.557: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:24.768: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." 
Nov 27 02:18:24.768: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:29.972: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0." Nov 27 02:18:29.972: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. 
Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0."
Nov 27 02:18:35.160: INFO: failed to delete Azure volume "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104": compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0."
Nov 27 02:18:35.160: INFO: Couldn't delete PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104", sleeping 5s: compute.DisksClient#Delete: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Disk bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104 is attached to VM /subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/k8s-agentpool1-27910301-vmss_0."
[Log trimmed: the identical "failed to delete Azure volume" / "Couldn't delete PD …, sleeping 5s" pair, each with the same OperationNotAllowed error ("Disk … is attached to VM …"), repeated every ~5s at 02:18:40, 02:18:45, 02:18:50, 02:18:55, 02:19:01, 02:19:06, 02:19:11, 02:19:16, 02:19:21, 02:19:27, 02:19:32, 02:19:37, 02:19:42, 02:19:47, 02:19:53, 02:19:58, 02:20:03, 02:20:08, 02:20:14, 02:20:19, 02:20:24, 02:20:29 and 02:20:34 — 24 attempts in all before the delete succeeded.]
Nov 27 02:20:44.984: INFO: Successfully deleted PD "/subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/bootstrap-e2e-b4ed53c9-c4c3-4f4a-9308-21ed593c1104".
Nov 27 02:20:44.984: INFO: In-tree plugin kubernetes.io/azure-disk is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "volume-4051".
STEP: Found 17 events.
Nov 27 02:20:45.070: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for azure-client: {default-scheduler } Scheduled: Successfully assigned volume-4051/azure-client to k8s-agentpool1-27910301-vmss000000
Nov 27 02:20:45.070: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for azure-injector: {default-scheduler } Scheduled: Successfully assigned volume-4051/azure-injector to k8s-agentpool1-27910301-vmss000001
Nov 27 02:20:45.070: INFO: At 2019-11-27 02:03:53 +0000 UTC - event for pvc-z22mj: {persistentvolume-controller } ProvisioningFailed: storageclass.storage.k8s.io "volume-4051" not found
Nov 27 02:20:45.070: INFO: At 2019-11-27 02:05:10 +0000 UTC - event for azure-injector: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "azure-disk-b44l6"
Nov 27 02:20:45.070: INFO: At 2019-11-27 02:06:02 +0000 UTC - event for azure-injector: {kubelet k8s-agentpool1-27910301-vmss000001} FailedMount: Unable to attach or mount volumes: unmounted volumes=[azure-volume-0], unattached volumes=[azure-volume-0 default-token-52vxs]: timed out waiting for the condition
Nov 27 02:20:45.070: INFO: At 2019-11-27 02:06:15 +0000 UTC - event for azure-injector: {kubelet k8s-agentpool1-27910301-vmss000001} Pulling: Pulling image "docker.io/library/busybox:1.29"
Nov 27 02:20:45.070: INFO: At 2019-11-27 02:06:16 +0000 UTC - event for azure-injector: {kubelet k8s-agentpool1-27910301-vmss000001} Pulled: Successfully pulled image "docker.io/library/busybox:1.29"
Nov 27 02:20:45.070: INFO: At 2019-11-27 02:06:17 +0000 UTC - event for azure-injector: {kubelet k8s-agentpool1-27910301-vmss000001} Started: Started container azure-injector
Nov 27 02:20:45.070: INFO: At 2019-11-27 02:06:17 +0000 UTC - event for azure-injector: {kubelet k8s-agentpool1-27910301-vmss000001} Created: Created container azure-injector
Nov 27 02:20:45.070: INFO: At 2019-11-27 02:06:21 +0000 UTC - event for azure-injector: {kubelet k8s-agentpool1-27910301-vmss000001} Killing: Stopping container azure-injector
Nov 27 02:20:45.070: INFO: At 2019-11-27 02:06:27 +0000 UTC - event for azure-client: {attachdetach-controller } FailedAttachVolume: Multi-Attach error for volume "azure-disk-b44l6" Volume is already exclusively attached to one node and can't be attached to another
Nov 27 02:20:45.070: INFO: At 2019-11-27 02:08:30 +0000 UTC - event for azure-client: {kubelet k8s-agentpool1-27910301-vmss000000} FailedMount: Unable to attach or mount volumes: unmounted volumes=[azure-volume-0], unattached volumes=[azure-volume-0 default-token-52vxs]: timed out waiting for the condition
Nov 27 02:20:45.070: INFO: At 2019-11-27 02:10:40 +0000 UTC - event for azure-client: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "azure-disk-b44l6"
Nov 27 02:20:45.070: INFO: At 2019-11-27 02:12:45 +0000 UTC - event for azure-client: {kubelet k8s-agentpool1-27910301-vmss000000} Started: Started container azure-client
Nov 27 02:20:45.070: INFO: At 2019-11-27 02:12:45 +0000 UTC - event for azure-client: {kubelet k8s-agentpool1-27910301-vmss000000} Created: Created container azure-client
Nov 27 02:20:45.070: INFO: At 2019-11-27 02:12:45 +0000 UTC - event for azure-client: {kubelet k8s-agentpool1-27910301-vmss000000} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine
Nov 27 02:20:45.070: INFO: At 2019-11-27 02:16:27 +0000 UTC - event for azure-client: {kubelet k8s-agentpool1-27910301-vmss000000} Killing: Stopping container azure-client
Nov 27 02:20:45.073: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 27 02:20:45.073: INFO:
Nov 27 02:20:45.076: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000000
Nov 27 02:20:45.078: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000000 /api/v1/nodes/k8s-agentpool1-27910301-vmss000000 433df295-d1c5-449e-b3e0-cf5dcbb47db5 5024 0 2019-11-27 02:02:30 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64
beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000000 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:20:43 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:20:43 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:20:43 +0000 UTC,LastTransitionTime:2019-11-27 02:02:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:20:43 +0000 UTC,LastTransitionTime:2019-11-27 02:02:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000000,},NodeAddress{Type:InternalIP,Address:10.240.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5cbe44f20b024d2194231594a97855b1,SystemUUID:91700562-C649-C848-A3DD-1E514501D602,BootID:4dc3a524-9111-40e4-8765-ea1a0ccebb06,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b 
k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-6549e7cd-aab5-46e2-b213-5504e8fcacdd kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-73246121-d31d-4680-9b21-4b7330d473b8 
kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-dc0fffc5-5c18-4cfb-91a6-549b41b3639e],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-1b6ba83a-4b5f-4fbd-91bd-6f7f2121fe4f,DevicePath:6,},},Config:nil,},} Nov 27 02:20:45.079: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:20:45.083: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000000 Nov 27 02:20:45.096: INFO: kube-proxy-956dz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:20:45.096: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 02:20:45.096: INFO: azure-ip-masq-agent-q77hb started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:20:45.096: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 27 02:20:45.096: INFO: keyvault-flexvolume-gt5vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:20:45.096: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 27 02:20:45.096: INFO: blobfuse-flexvol-installer-2c2vq started at 2019-11-27 02:02:40 +0000 UTC (0+1 container statuses recorded) Nov 27 02:20:45.096: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 27 02:20:45.096: INFO: volume-prep-provisioning-1676 started at 2019-11-27 02:20:44 +0000 UTC (0+1 container statuses recorded) Nov 27 02:20:45.096: INFO: Container init-volume-provisioning-1676 ready: false, restart count 0 Nov 27 02:20:45.096: INFO: azure-injector started at 2019-11-27 02:20:44 +0000 UTC (0+1 container statuses recorded) Nov 27 02:20:45.096: 
INFO: Container azure-injector ready: false, restart count 0 Nov 27 02:20:45.096: INFO: azure-cni-networkmonitor-f9bdz started at 2019-11-27 02:02:32 +0000 UTC (0+1 container statuses recorded) Nov 27 02:20:45.096: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:20:45.096: INFO: pod-subpath-test-azure-disk-dynamicpv-x4mr started at 2019-11-27 02:15:43 +0000 UTC (1+1 container statuses recorded) Nov 27 02:20:45.096: INFO: Init container init-volume-azure-disk-dynamicpv-x4mr ready: false, restart count 0 Nov 27 02:20:45.096: INFO: Container test-container-subpath-azure-disk-dynamicpv-x4mr ready: false, restart count 0 Nov 27 02:20:45.096: INFO: azure-io-client started at 2019-11-27 02:18:34 +0000 UTC (1+1 container statuses recorded) Nov 27 02:20:45.096: INFO: Init container azure-io-init ready: false, restart count 0 Nov 27 02:20:45.096: INFO: Container azure-io-client ready: false, restart count 0 Nov 27 02:20:45.096: INFO: security-context-54d39a28-d787-43ae-9155-1d1da53f4a30 started at 2019-11-27 02:19:10 +0000 UTC (0+1 container statuses recorded) Nov 27 02:20:45.096: INFO: Container write-pod ready: false, restart count 0 W1127 02:20:45.099637 14150 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Nov 27 02:20:45.121: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000000 Nov 27 02:20:45.121: INFO: Logging node info for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:20:45.124: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool1-27910301-vmss000001 /api/v1/nodes/k8s-agentpool1-27910301-vmss000001 d9661b2f-8dba-435f-9146-c53e15590b86 4627 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[agentpool:agentpool1 beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:1 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool1-27910301-vmss000001 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: node.kubernetes.io/instance-type:Standard_D4s_v3 storageprofile:managed storagetier:Premium_LRS topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:1] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool1-27910301-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{4 0} {<nil>} 4 
DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:19:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:19:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:19:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:19:01 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-agentpool1-27910301-vmss000001,},NodeAddress{Type:InternalIP,Address:10.240.0.35,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c1701de1799d4bda8bb81a1a74346a2a,SystemUUID:E7C6657A-6794-5043-BF92-958F99FF1F10,BootID:f3be9e11-e33b-462a-b7b8-5177e4209627,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:44100963,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:e51071a0670e003e3936190cfda74cfd6edaa39e0d16fc0b5a5f09dbf09dd1ff 
k8s.gcr.io/metrics-server-amd64:v0.3.4],SizeBytes:39944451,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4f12fead8bab1fc9daed78e42b282e520526341b1f60550da187093bffe237b0 mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.13],SizeBytes:25024769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-63caa552-e092-43fe-b924-ef6a44e177a3 kubernetes.io/azure-disk//subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/disks/kubetest-f1185e95-10b6-11ea-8290-0-pvc-ff9df561-0196-4319-bfaf-223abedd298f],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:20:45.124: INFO: Logging kubelet events for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:20:45.133: INFO: Logging pods the kubelet thinks is on node k8s-agentpool1-27910301-vmss000001 Nov 27 02:20:45.141: INFO: azure-ip-masq-agent-sxnbk started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:20:45.141: INFO: Container azure-ip-masq-agent 
ready: true, restart count 0 Nov 27 02:20:45.141: INFO: metrics-server-855b565c8f-nbxsd started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:20:45.141: INFO: Container metrics-server ready: true, restart count 0 Nov 27 02:20:45.141: INFO: blobfuse-flexvol-installer-x5wvc started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:20:45.141: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 27 02:20:45.141: INFO: kubernetes-dashboard-65966766b9-hfq6z started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:20:45.141: INFO: Container kubernetes-dashboard ready: true, restart count 0 Nov 27 02:20:45.141: INFO: security-context-d6c4054e-cb3a-4228-9e3b-db7254d4c5b4 started at 2019-11-27 02:16:51 +0000 UTC (0+1 container statuses recorded) Nov 27 02:20:45.141: INFO: Container write-pod ready: false, restart count 0 Nov 27 02:20:45.141: INFO: azure-cni-networkmonitor-wcplr started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:20:45.141: INFO: Container azure-cnms ready: true, restart count 0 Nov 27 02:20:45.141: INFO: keyvault-flexvolume-q956v started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:20:45.141: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 27 02:20:45.141: INFO: coredns-56bc7dfcc6-pxrl6 started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:20:45.141: INFO: Container coredns ready: true, restart count 0 Nov 27 02:20:45.141: INFO: kube-proxy-wcptt started at 2019-11-27 02:01:03 +0000 UTC (0+1 container statuses recorded) Nov 27 02:20:45.141: INFO: Container kube-proxy ready: true, restart count 0 W1127 02:20:45.143944 14150 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Nov 27 02:20:45.163: INFO: Latency metrics for node k8s-agentpool1-27910301-vmss000001 Nov 27 02:20:45.163: INFO: Logging node info for node k8s-master-27910301-0 Nov 27 02:20:45.166: INFO: Node Info: &Node{ObjectMeta:{k8s-master-27910301-0 /api/v1/nodes/k8s-master-27910301-0 c69dd001-f5f0-495c-861a-fcdc5c4685fa 4091 0 2019-11-27 02:00:48 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:0 kubernetes.azure.com/cluster:kubetest-f1185e95-10b6-11ea-8290-02424e92b20f kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-27910301-0 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master: node.kubernetes.io/instance-type:Standard_DS2_v2 topology.kubernetes.io/region:westus2 topology.kubernetes.io/zone:0] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/940f88ce-a64b-4e73-a258-9931349b9789/resourceGroups/kubetest-f1185e95-10b6-11ea-8290-02424e92b20f/providers/Microsoft.Compute/virtualMachines/k8s-master-27910301-0,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{8 0} {<nil>} 8 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 
DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{30 0} {<nil>} 30 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-27 02:16:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-27 02:16:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-27 02:16:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-27 02:16:40 +0000 UTC,LastTransitionTime:2019-11-27 02:00:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:k8s-master-27910301-0,},NodeAddress{Type:InternalIP,Address:10.255.255.5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4fbcdc9a6aec4bf5bbacc8e6c7d0effc,SystemUUID:6F669D5F-47AF-F34B-A88F-B20FF17F57DA,BootID:5ce46fbc-c556-476d-85de-4196f4d7878c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.8,KubeletVersion:v1.18.0-alpha.0.1191+5975b80d569031,KubeProxyVersion:v1.18.0-alpha.0.1191+5975b80d569031,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprowinternal.azurecr.io/hyperkube-amd64@sha256:95ba7c51a1f6e306cceef9718496c35304277d9d469e657aed2b672d443a07d6 k8sprowinternal.azurecr.io/hyperkube-amd64:azure-e2e-1199502904264757249-f1184886],SizeBytes:854264900,},ContainerImage{Names:[mcr.microsoft.com/containernetworking/networkmonitor@sha256:d875511410502c3e37804e1f313cc2b0a03d7a03d3d5e6adaf8994b753a76f8e mcr.microsoft.com/containernetworking/networkmonitor:v0.0.6],SizeBytes:123663837,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager-amd64:v9.0.2],SizeBytes:83076028,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:7d02b5aaf76c999529b68d2b5e21d20468c14baa467e1c1b16126e0ccd86753b k8s.gcr.io/ip-masq-agent-amd64:v2.5.0],SizeBytes:50148508,},ContainerImage{Names:[mcr.microsoft.com/k8s/core/pause@sha256:6666771bdc36e6c335f8bfcc1976fc0624c1dd9bc9fa9793ea27ccd6de5e4289 mcr.microsoft.com/k8s/core/pause:1.2.0],SizeBytes:738384,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 27 02:20:45.166: INFO: Logging kubelet events for node k8s-master-27910301-0 Nov 27 02:20:45.170: INFO: Logging pods the kubelet thinks is on node k8s-master-27910301-0 Nov 27 02:20:45.175: INFO: 
kube-addon-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:45.175: INFO: Container kube-addon-manager ready: true, restart count 0
Nov 27 02:20:45.175: INFO: kube-apiserver-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:45.175: INFO: Container kube-apiserver ready: true, restart count 0
Nov 27 02:20:45.175: INFO: kube-controller-manager-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:45.175: INFO: Container kube-controller-manager ready: true, restart count 0
Nov 27 02:20:45.175: INFO: kube-scheduler-k8s-master-27910301-0 started at 2019-11-27 02:00:37 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:45.175: INFO: Container kube-scheduler ready: true, restart count 0
Nov 27 02:20:45.175: INFO: azure-ip-masq-agent-x5bzr started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:45.175: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 27 02:20:45.175: INFO: azure-cni-networkmonitor-5jpkd started at 2019-11-27 02:01:02 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:45.175: INFO: Container azure-cnms ready: true, restart count 0
Nov 27 02:20:45.175: INFO: kube-proxy-krs8d started at 2019-11-27 02:01:05 +0000 UTC (0+1 container statuses recorded)
Nov 27 02:20:45.175: INFO: Container kube-proxy ready: true, restart count 0
W1127 02:20:45.178751 14150 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 27 02:20:45.195: INFO: Latency metrics for node k8s-master-27910301-0
Nov 27 02:20:45.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4051" for this suite.
error during ./hack/ginkgo-e2e.sh --ginkgo.focus=azure-disk.*\[Slow\] --ginkgo.skip=\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
from junit_runner.xml
Build
Check APIReachability
Deferred TearDown
DumpClusterLogs
IsUp
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
TearDown
TearDown Previous
Timeout
Up
kubectl version
list nodes
test setup
Kubernetes e2e suite Recreate [Feature:Recreate] recreate nodes and ensure they function upon restart
Kubernetes e2e suite [k8s.io] Cluster size autoscaler scalability [Slow] CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6]
Kubernetes e2e suite [k8s.io] Cluster size autoscaler scalability [Slow] should scale down empty nodes [Feature:ClusterAutoscalerScalability3]
Kubernetes e2e suite [k8s.io] Cluster size autoscaler scalability [Slow] should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4]
Kubernetes e2e suite [k8s.io] Cluster size autoscaler scalability [Slow] should scale up at all [Feature:ClusterAutoscalerScalability1]
Kubernetes e2e suite [k8s.io] Cluster size autoscaler scalability [Slow] should scale up twice [Feature:ClusterAutoscalerScalability2]
Kubernetes e2e suite [k8s.io] Cluster size autoscaler scalability [Slow] shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5]
Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
Kubernetes e2e suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
Kubernetes e2e suite [k8s.io] GKE local SSD [Feature:GKELocalSSD] should write and read from node local SSD [Feature:GKELocalSSD]
Kubernetes e2e suite [k8s.io] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool]
Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Lease lease API should be available [Conformance]
Kubernetes e2e suite [k8s.io] NodeLease when the NodeLease feature is enabled should have OwnerReferences set
Kubernetes e2e suite [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace
Kubernetes e2e suite [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently
Kubernetes e2e suite [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Pods should be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
Kubernetes e2e suite [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
Kubernetes e2e suite [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]
Kubernetes e2e suite [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a non-local redirect http liveness probe
Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance]
Kubernetes e2e suite [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Probing container should be restarted with a docker exec liveness probe with timeout
Kubernetes e2e suite [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Probing container should be restarted with a local redirect http liveness probe
Kubernetes e2e suite [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
Kubernetes e2e suite [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node
Kubernetes e2e suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls
Kubernetes e2e suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls
Kubernetes e2e suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted
Kubernetes e2e suite [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage]
Kubernetes e2e suite [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow]
Kubernetes e2e suite [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow]
Kubernetes e2e suite [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow]
Kubernetes e2e suite [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow]
Kubernetes e2e suite [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow]
Kubernetes e2e suite [k8s.io] [Feature:Example] [k8s.io] Downward API should create a pod that prints his name and namespace
Kubernetes e2e suite [k8s.io] [Feature:Example] [k8s.io] Liveness liveness pods should be automatically restarted
Kubernetes e2e suite [k8s.io] [Feature:Example] [k8s.io] Secret should create a pod that reads a secret
Kubernetes e2e suite [k8s.io] [Feature:TTLAfterFinished][NodeAlphaFeature:TTLAfterFinished] job should be deleted once it finishes after TTL seconds
Kubernetes e2e suite [k8s.io] [sig-cloud-provider] [Feature:CloudProvider][Disruptive] Nodes should be deleted on API server if it doesn't exist in the cloud provider
Kubernetes e2e suite [k8s.io] [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined
Kubernetes e2e suite [k8s.io] [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile
Kubernetes e2e suite [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
Kubernetes e2e suite [k8s.io] [sig-node] Kubelet [Serial] [Slow] [k8s.io] [sig-node] experimental resource usage tracking [Feature:ExperimentalResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [k8s.io] [sig-node] Kubelet [Serial] [Slow] [k8s.io] [sig-node] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 0 pods per node
Kubernetes e2e suite [k8s.io] [sig-node] Kubelet [Serial] [Slow] [k8s.io] [sig-node] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [k8s.io] [sig-node] Mount propagation should propagate mounts to the host
Kubernetes e2e suite [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters] should run without error
Kubernetes e2e suite [k8s.io] [sig-node] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods
Kubernetes e2e suite [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]
Kubernetes e2e suite [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
Kubernetes e2e suite [k8s.io] [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process [Flaky]
Kubernetes e2e suite [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
Kubernetes e2e suite [k8s.io] [sig-node] SSH should SSH to all nodes and run commands
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support seccomp alpha runtime/default annotation [Feature:Seccomp] [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support seccomp alpha unconfined annotation on the container [Feature:Seccomp] [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support seccomp alpha unconfined annotation on the pod [Feature:Seccomp] [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support seccomp default which is unconfined [Feature:Seccomp] [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support volume SELinux relabeling [Flaky] [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support volume SELinux relabeling when using hostIPC [Flaky] [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support volume SELinux relabeling when using hostPID [Flaky] [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] crictl should be able to run crictl on the node
Kubernetes e2e suite [k8s.io] [sig-node] kubelet [k8s.io] [sig-node] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
Kubernetes e2e suite [k8s.io] [sig-node] kubelet [k8s.io] [sig-node] host cleanup with volume mounts [sig-storage][HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (active) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [k8s.io] [sig-node] kubelet [k8s.io] [sig-node] host cleanup with volume mounts [sig-storage][HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (sleeping) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
Kubernetes e2e suite [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
Kubernetes e2e suite [sig-api-machinery] Discovery Custom resource should have storage version hash
Kubernetes e2e suite [sig-api-machinery] Etcd failure [Disruptive] should recover from SIGKILL
Kubernetes e2e suite [sig-api-machinery] Etcd failure [Disruptive] should recover from network partition with master
Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob
Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
Kubernetes e2e suite [sig-api-machinery] Garbage collector should support cascading deletion of custom resources
Kubernetes e2e suite [sig-api-machinery] Garbage collector should support orphan deletion of custom resources
Kubernetes e2e suite [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
Kubernetes e2e suite [sig-api-machinery] Generated clientset should create v1beta1 cronJobs, delete cronJobs, watch cronJobs
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining]
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's multiple priority class scope (quota set to pod count: 2) against 2 pods with same priority classes.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (cpu, memory quota set) against a pod with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with different priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpExists).
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpNotIn).
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with best effort scope using scope-selectors.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with terminating scopes through scope selectors.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
Kubernetes e2e suite [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
Kubernetes e2e suite [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls
Kubernetes e2e suite [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow]
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return pod details
Kubernetes e2e suite [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json"
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf,application/json"
Kubernetes e2e suite [sig-apps] CronJob should delete successful/failed finished jobs with limit of one job
Kubernetes e2e suite [sig-apps] CronJob should not emit unexpected warnings
Kubernetes e2e suite [sig-apps] CronJob should not schedule jobs when suspended [Slow]
Kubernetes e2e suite [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow]
Kubernetes e2e suite [sig-apps] CronJob should remove from active list jobs that have been deleted
Kubernetes e2e suite [sig-apps] CronJob should replace jobs when ReplaceConcurrent
Kubernetes e2e suite [sig-apps] CronJob should schedule multiple jobs concurrently
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
Kubernetes e2e suite [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods
Kubernetes e2e suite [sig-apps] Deployment deployment should delete old replica sets [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment should support proportional scaling [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment should support rollover [Conformance]
Kubernetes e2e suite [sig-apps] Deployment iterative rollouts should eventually progress
Kubernetes e2e suite [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout
Kubernetes e2e suite [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
Kubernetes e2e suite [sig-apps] DisruptionController evictions: enough pods, absolute => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: no PDB => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: too few pods, absolute => should not allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it
Kubernetes e2e suite [sig-apps] DisruptionController should create a PodDisruptionBudget
Kubernetes e2e suite [sig-apps] DisruptionController should update PodDisruptionBudget status
Kubernetes e2e suite [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
Kubernetes e2e suite [sig-apps] Job should delete a job [Conformance]
Kubernetes e2e suite [sig-apps] Job should fail to exceed backoffLimit
Kubernetes e2e suite [sig-apps] Job should fail when exceeds active deadline
Kubernetes e2e suite [sig-apps] Job should remove pods when job is deleted
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks succeed
Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] Pods should be evicted from unready Node [Feature:TaintEviction] All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be evicted after eviction timeout passes
Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout
Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned
Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero
Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster
Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive]
Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive]
Kubernetes e2e suite [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should serve a basic image on each replica with a private image
Kubernetes e2e suite [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota
Kubernetes e2e suite [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should release no longer matching pods [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should serve a basic image on each replica with a private image
Kubernetes e2e suite [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working mysql cluster
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working redis cluster
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working zookeeper cluster
Kubernetes e2e suite [sig-auth] Advanced Audit [DisabledForLargeClusters][Flaky] should audit API calls to create and delete custom resource definition.
Kubernetes e2e suite [sig-auth] Advanced Audit [DisabledForLargeClusters][Flaky] should audit API calls to create, get, update, patch, delete, list, watch configmaps.
Kubernetes e2e suite [sig-auth] Advanced Audit [DisabledForLargeClusters][Flaky] should audit API calls to create, get, update, patch, delete, list, watch deployments.
Kubernetes e2e suite [sig-auth] Advanced Audit [DisabledForLargeClusters][Flaky] should audit API calls to create, get, update, patch, delete, list, watch pods.
Kubernetes e2e suite [sig-auth] Advanced Audit [DisabledForLargeClusters][Flaky] should audit API calls to create, get, update, patch, delete, list, watch secrets.
Kubernetes e2e suite [sig-auth] Advanced Audit [DisabledForLargeClusters][Flaky] should audit API calls to get a pod with unauthorized user.
Kubernetes e2e suite [sig-auth] Advanced Audit [DisabledForLargeClusters][Flaky] should list pods as impersonated user.
Kubernetes e2e suite [sig-auth] Certificates API should support building a client with a CSR
Kubernetes e2e suite [sig-auth] Metadata Concealment should run a check-metadata-concealment job to completion
Kubernetes e2e suite [sig-auth] PodSecurityPolicy should allow pods under the privileged policy.PodSecurityPolicy
Kubernetes e2e suite [sig-auth] PodSecurityPolicy should enforce the restricted policy.PodSecurityPolicy
Kubernetes e2e suite [sig-auth] PodSecurityPolicy should forbid pod creation when no PSP is available
Kubernetes e2e suite [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should ensure a single API token exists
Kubernetes e2e suite [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] [Feature:TokenRequestProjection]
Kubernetes e2e suite [sig-auth] [Feature:DynamicAudit] should dynamically audit API calls
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthenticator] The kubelet can delegate ServiceAccount tokens to the API server
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to create another node
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to delete another node
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent configmap should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent secret should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a secret for a workload the node has access to should succeed
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting an existing configmap should exit with the Forbidden error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting an existing secret should exit with the Forbidden error
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group up from 0[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should not scale GPU pool up if pod does not require GPUs [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale down GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 0 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining multiple pods one by one as dictated by pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and one node is broken [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting EmptyDir volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to pod anti-affinity [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale up when non expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale down when non expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is preempted [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't trigger additional scale-ups during processing scale-up [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] DNS horizontal autoscaling [DisabledForLargeClusters] kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
Kubernetes e2e suite [sig-autoscaling] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
Kubernetes e2e suite [sig-autoscaling] [Feature:ClusterSizeAutoscalingScaleUp] [Slow] Autoscaling [sig-autoscaling] Autoscaling a service from 1 pod and 3 nodes to 8 pods and >=4 nodes takes less than 15 minutes
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] ReplicationController light Should scale from 1 pod to 2 pods
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] ReplicationController light Should scale from 2 pods to 1 pod [Slow]
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Object from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Pod from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Pod from Stackdriver with Prometheus [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with External Metric with target average value from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with External Metric with target value from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale up with two External metrics from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale up with two metrics of type Pod from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl alpha client Kubectl run CronJob should create a CronJob
Kubernetes e2e suite [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply apply set/view last-applied
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR for CRD with validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run CronJob should create a CronJob
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl taint [Serial] should remove all the taints with the same key off a node
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl taint [Serial] should update the taint on a node
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should contain last line of the log
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should handle in-cluster config
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec using resource/name
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support inline execution and attach
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support port-forward
Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client kubectl get output should contain custom columns for each resource
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the signed bootstrap tokens from clusterInfo ConfigMap when bootstrap token is deleted
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the token secret when the secret expired
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should not delete the token secret when the secret is not expired
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should resign the bootstrap tokens when the clusterInfo ConfigMap updated [Serial][Disruptive]
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should sign the new added bootstrap tokens
Kubernetes e2e suite [sig-instrumentation] Cadvisor should be healthy on every node.
Kubernetes e2e suite [sig-instrumentation] Cluster level logging implemented by Stackdriver [Feature:StackdriverLogging] [Soak] should ingest logs from applications running for a prolonged amount of time
Kubernetes e2e suite [sig-instrumentation] Cluster level logging implemented by Stackdriver should ingest events [Feature:StackdriverLogging]
Kubernetes e2e suite [sig-instrumentation] Cluster level logging implemented by Stackdriver should ingest logs [Feature:StackdriverLogging]
Kubernetes e2e suite [sig-instrumentation] Cluster level logging implemented by Stackdriver should ingest system logs from all nodes [Feature:StackdriverLogging]
Kubernetes e2e suite [sig-instrumentation] Cluster level logging using Elasticsearch [Feature:Elasticsearch] should check that logs from containers are ingested into Elasticsearch
Kubernetes e2e suite [sig-instrumentation] Kibana Logging Instances Is Alive [Feature:Elasticsearch] should check that the Kibana logging instance is alive
Kubernetes e2e suite [sig-instrumentation] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from API server.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should have accelerator metrics [Feature:StackdriverAcceleratorMonitoring]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should have cluster metrics [Feature:StackdriverMonitoring]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for external metrics [Feature:StackdriverExternalMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for new resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for old resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Stackdriver Metadata Agent [Feature:StackdriverMetadataAgent]
Kubernetes e2e suite [sig-network] ClusterDns [Feature:Example] should create pod that uses dns
Kubernetes e2e suite [sig-network] DNS configMap federations [Feature:Federation] should be able to change federation configuration [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver [Feature:Networking-IPv6] [LinuxOnly] Change stubDomain should be able to change stubDomain configuration [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver [Feature:Networking-IPv6] [LinuxOnly] Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver [Feature:Networking-IPv6] [LinuxOnly] Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver [IPv4] Change stubDomain should be able to change stubDomain configuration [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver [IPv4] Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver [IPv4] Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for ExternalName services [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for services [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for the cluster [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for the cluster [Provider:GCE]
Kubernetes e2e suite [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]
Kubernetes e2e suite [sig-network] DNS should support configurable pod DNS nameservers [Conformance]
Kubernetes e2e suite [sig-network] DNS should support configurable pod resolv.conf
Kubernetes e2e suite [sig-network] ESIPP [Slow] [DisabledForLargeClusters] should handle updates to ExternalTrafficPolicy field
Kubernetes e2e suite [sig-network] ESIPP [Slow] [DisabledForLargeClusters] should only target nodes with endpoints
Kubernetes e2e suite [sig-network] ESIPP [Slow] [DisabledForLargeClusters] should work for type=LoadBalancer
Kubernetes e2e suite [sig-network] ESIPP [Slow] [DisabledForLargeClusters] should work for type=NodePort
Kubernetes e2e suite [sig-network] ESIPP [Slow] [DisabledForLargeClusters] should work from pods
Kubernetes e2e suite [sig-network] EndpointSlice [Feature:EndpointSlice] version v1 should create Endpoints and EndpointSlices for Pods matching a Service
Kubernetes e2e suite [sig-network] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service
Kubernetes e2e suite [sig-network] Firewall rule should have correct firewall rules for e2e cluster
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] multicluster ingress should get instance group annotation
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should conform to Ingress spec
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should create ingress with pre-shared certificate
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should support multiple TLS certs
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] rolling update backend pods should not cause service disruption
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should be able to create a ClusterIP service
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should be able to switch between IG and NEG modes
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should conform to Ingress spec
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should create NEGs for all ports with the Ingress annotation, and NEGs for the standalone annotation otherwise
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should sync endpoints for both Ingress-referenced NEG and standalone NEG
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should sync endpoints to NEG
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:kubemci] should conform to Ingress spec
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:kubemci] should create ingress with backend HTTPS
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:kubemci] should create ingress with pre-shared certificate
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:kubemci] should remove clusters as expected
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:kubemci] should support https-only annotation
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:kubemci] single and multi-cluster ingresses should be able to exist together
Kubernetes e2e suite [sig-network] Loadbalancing: L7 Scalability GCE [Slow] [Serial] [Feature:IngressScale] Creating and updating ingresses should happen promptly with small/medium/large amount of ingresses
Kubernetes e2e suite [sig-network] Loadbalancing: L7 [Slow] Nginx should conform to Ingress spec
Kubernetes e2e suite [sig-network] Network should resolve connrection reset issue #74839 [Slow]
Kubernetes e2e suite [sig-network] Network should set TCP CLOSE_WAIT timeout
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should be able to handle large requests: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for node-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for node-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for pod-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for pod-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update endpoints: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update endpoints: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]
Kubernetes e2e suite [sig-network] Networking IPerf IPv4 [Experimental] [Feature:Networking-IPv4] [Slow] [Feature:Networking-Performance] should transfer ~ 1GB onto the service endpoint 1 servers (maximum of 1 clients)
Kubernetes e2e suite [sig-network] Networking IPerf IPv6 [Experimental] [Feature:Networking-IPv6] [Slow] [Feature:Networking-Performance] [LinuxOnly] should transfer ~ 1GB onto the service endpoint 1 servers (maximum of 1 clients)
Kubernetes e2e suite [sig-network] Networking should check kube-proxy urls
Kubernetes e2e suite [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]
Kubernetes e2e suite [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv6][Experimental][LinuxOnly]
Kubernetes e2e suite [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services
Kubernetes e2e suite [sig-network] Networking should recreate its iptables rules if they are deleted [Disruptive]
Kubernetes e2e suite [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
Kubernetes e2e suite [sig-network] Service endpoints latency should not be very high [Conformance]
Kubernetes e2e suite [sig-network] Services [Feature:GCEAlphaFeature][Slow] should be able to create and tear down a standard-tier load balancer [Slow]
Kubernetes e2e suite [sig-network] Services should allow pods to hairpin back to themselves through services
Kubernetes e2e suite [sig-network] Services should be able to change the type and ports of a service [Slow] [DisabledForLargeClusters]
Kubernetes e2e suite [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to create a functioning NodePort service [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to create an internal type load balancer [Slow] [DisabledForLargeClusters]
Kubernetes e2e suite [sig-network] Services should be able to switch session affinity for LoadBalancer service with ESIPP off [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should be able to switch session affinity for LoadBalancer service with ESIPP on [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should be able to up and down services
Kubernetes e2e suite [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols
Kubernetes e2e suite [sig-network] Services should be rejected when no endpoints exist
Kubernetes e2e suite [sig-network] Services should check NodePort out-of-range
Kubernetes e2e suite [sig-network] Services should create endpoints for unready pods
Kubernetes e2e suite [sig-network] Services should handle load balancer cleanup finalizer for service [Slow]
Kubernetes e2e suite [sig-network] Services should have session affinity work for LoadBalancer service with ESIPP off [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should have session affinity work for LoadBalancer service with ESIPP on [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should have session affinity work for NodePort service [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should implement service.kubernetes.io/headless
Kubernetes e2e suite [sig-network] Services should implement service.kubernetes.io/service-proxy-name
Kubernetes e2e suite [sig-network] Services should only allow access from service loadbalancer source ranges [Slow]
Kubernetes e2e suite [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should prevent NodePort collisions
Kubernetes e2e suite [sig-network] Services should provide secure master service [Conformance]
Kubernetes e2e suite [sig-network] Services should reconcile LB health check interval [Slow][Serial]
Kubernetes e2e suite [sig-network] Services should release NodePorts on delete
Kubernetes e2e suite [sig-network] Services should serve a basic endpoint from pods [Conformance]
Kubernetes e2e suite [sig-network] Services should serve multiport endpoints from pods [Conformance]
Kubernetes e2e suite [sig-network] Services should work after restarting apiserver [Disruptive]
Kubernetes e2e suite [sig-network] Services should work after restarting kube-proxy [Disruptive]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStackAlphaFeature] [LinuxOnly] should be able to reach pod on ipv4 and ipv6 ip [Feature:IPv6DualStackAlphaFeature:Phase2]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStackAlphaFeature] [LinuxOnly] should create pod, add ipv6 and ipv4 ip to pod ips
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStackAlphaFeature] [LinuxOnly] should create service with cluster ip from primary service range [Feature:IPv6DualStackAlphaFeature:Phase2]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStackAlphaFeature] [LinuxOnly] should create service with ipv4 cluster ip [Feature:IPv6DualStackAlphaFeature:Phase2]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStackAlphaFeature] [LinuxOnly] should create service with ipv6 cluster ip [Feature:IPv6DualStackAlphaFeature:Phase2]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStackAlphaFeature] [LinuxOnly] should have ipv4 and ipv6 internal node ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStackAlphaFeature] [LinuxOnly] should have ipv4 and ipv6 node podCIDRs
Kubernetes e2e suite [sig-network] [Feature:PerformanceDNS][Serial] Should answer DNS query for maximum number of services per cluster
Kubernetes e2e suite [sig-network] [sig-windows] Networking Granular Checks: Pods should function for intra-pod communication: http
Kubernetes e2e suite [sig-network] [sig-windows] Networking Granular Checks: Pods should function for intra-pod communication: udp
Kubernetes e2e suite [sig-network] [sig-windows] Networking Granular Checks: Pods should function for node-pod communication: http
Kubernetes e2e suite [sig-network] [sig-windows] Networking Granular Checks: Pods should function for node-pod communication: udp
Kubernetes e2e suite [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should update ConfigMap successfully
Kubernetes e2e suite [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
Kubernetes e2e suite [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass
Kubernetes e2e suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling [NodeFeature:RuntimeHandler] [Disruptive]
Kubernetes e2e suite [sig-scheduling] GPUDevicePluginAcrossRecreate [Feature:Recreate] run Nvidia GPU Device Plugin tests with a recreation
Kubernetes e2e suite [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.
Kubernetes e2e suite [sig-scheduling] Multi-AZ Cluster Volumes [sig-storage] should only be allowed to provision PDs in zones where nodes exist
Kubernetes e2e suite [sig-scheduling] Multi-AZ Cluster Volumes [sig-storage] should schedule pods in the same zones as statically provisioned PVs
Kubernetes e2e suite [sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones
Kubernetes e2e suite [sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones
Kubernetes e2e suite [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]
Kubernetes e2e suite [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes
Kubernetes e2e suite [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes
Kubernetes e2e suite [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
Kubernetes e2e suite [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] evicts pods from tainted nodes
Kubernetes e2e suite [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]
Kubernetes e2e suite [sig-scheduling] PreemptionExecutionPath runs ReplicaSets to verify preemption running path
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms
Kubernetes e2e suite [sig-scheduling] [Feature:GPUDevicePlugin] run Nvidia GPU Device Plugin tests
Kubernetes e2e suite [sig-service-catalog] [Feature:PodPreset] PodPreset should create a pod preset
Kubernetes e2e suite [sig-service-catalog] [Feature:PodPreset] PodPreset should not modify the pod on conflict
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot] snapshottable should create snapshot with defaults [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should support two pods which share the same volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot] snapshottable should create snapshot with defaults [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: inline ephemeral CSI volume] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: inline ephemeral CSI volume] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: inline ephemeral CSI volume] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: inline ephemeral CSI volume] ephemeral should support two pods which share the same volume
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment
Kubernetes e2e suite [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present
Kubernetes e2e suite [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment
Kubernetes e2e suite [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit when limit is bigger than 0 [Slow]
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil
Kubernetes e2e suite [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
Kubernetes e2e suite [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
Kubernetes e2e suite [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Detaching volumes should not work when mount is in progress [Slow]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner Default should be disabled by changing the default annotation [Serial] [Disruptive]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner Default should be disabled by removing the default annotation [Serial] [Disruptive]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner Default should create and delete default persistent volumes [Slow]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner External should let an external dynamic provisioner create and delete persistent volumes [Slow]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] deletion should be idempotent
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] should not provision a volume in an unmanaged GCE zone.
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] should provision storage with different parameters
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] should provision storage with non-default reclaim policy Retain
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] should test that deleting a claim before the volume is provisioned deletes the volume.
Kubernetes e2e suite [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV
Kubernetes e2e suite [sig-storage] Dynamic Provisioning [k8s.io] GlusterDynamicProvisioner should create and delete persistent volumes [fast]
Kubernetes e2e suite [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
Kubernetes e2e suite [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow]
Kubernetes e2e suite [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
Kubernetes e2e suite [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap
Kubernetes e2e suite [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected
Kubernetes e2e suite [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret
Kubernetes e2e suite [sig-storage] Flexvolumes should be mountable when attachable
Kubernetes e2e suite [sig-storage] Flexvolumes should be mountable when non-attachable
Kubernetes e2e suite [sig-storage] GCP Volumes GlusterFS should be mountable
Kubernetes e2e suite [sig-storage] GCP Volumes NFSv3 should be mountable for NFSv3
Kubernetes e2e suite [sig-storage] GCP Volumes NFSv4 should be mountable for NFSv4
Kubernetes e2e suite [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a file written to the mount before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns.
Kubernetes e2e suite [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a volume mounted to a pod that is force deleted while the kubelet is down unmounts when the kubelet returns.
Kubernetes e2e suite [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] HostPath should support r/w [NodeConformance]
Kubernetes e2e suite [sig-storage] HostPath should support subPath [NodeConformance]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: iscsi][Feature:Volumes] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: gce-localssd-scsi-fs] [Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: rbd][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: vsphere] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] Mounted flexvolume expand[Slow] Should verify mounted flex volumes can be resized
Kubernetes e2e suite [sig-storage] Mounted flexvolume volume expand [Slow] [Feature:ExpandInUsePersistentVolumes] should be resizable when mounted
Kubernetes e2e suite [sig-storage] Mounted volume expand Should verify mounted devices can be resized
Kubernetes e2e suite [sig-storage] NFSPersistentVolumes[Disruptive][Flaky] when kube-controller-manager restarts should delete a bound PVC from a clientPod, restart the kube-controller-manager, and ensure the kube-controller-manager does not crash
Kubernetes e2e suite [sig-storage] NFSPersistentVolumes[Disruptive][Flaky] when kubelet restarts Should test that a file written to the mount before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] NFSPersistentVolumes[Disruptive][Flaky] when kubelet restarts Should test that a volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns.
Kubernetes e2e suite [sig-storage] NFSPersistentVolumes[Disruptive][Flaky] when kubelet restarts Should test that a volume mounted to a pod that is force deleted while the kubelet is down unmounts when the kubelet returns.
Kubernetes e2e suite [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive] verify volume status after node power off
Kubernetes e2e suite [sig-storage] Node Unregister [Feature:vsphere] [Slow] [Disruptive] node unregister
Kubernetes e2e suite [sig-storage] PV Protection Verify "immediate" deletion of a PV that is not bound to a PVC
Kubernetes e2e suite [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately
Kubernetes e2e suite [sig-storage] PVC Protection Verify "immediate" deletion of a PVC that is not in active use by a pod
Kubernetes e2e suite [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately
Kubernetes e2e suite [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
Kubernetes e2e suite [sig-storage] PersistentVolumes Default StorageClass pods that use multiple volumes should be reschedulable [Slow]
Kubernetes e2e suite [sig-storage] PersistentVolumes GCEPD should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
Kubernetes e2e suite [sig-storage] PersistentVolumes GCEPD should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk
Kubernetes e2e suite [sig-storage] PersistentVolumes GCEPD should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
Kubernetes e2e suite [sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.
Kubernetes e2e suite [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access
Kubernetes e2e suite [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access
Kubernetes e2e suite [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access
Kubernetes e2e suite [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access
Kubernetes e2e suite [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access
Kubernetes e2e suite [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access
Kubernetes e2e suite [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access [Slow]
Kubernetes e2e suite [sig-storage] PersistentVolumes [Feature:LabelSelector] [sig-storage] Selector-Label Volume Binding:vsphere should bind volume with claim for given label
Kubernetes e2e suite [sig-storage] PersistentVolumes [Feature:ReclaimPolicy] [sig-storage] persistentvolumereclaim:vsphere should delete persistent volume when reclaimPolicy set to delete and associated claim is deleted
Kubernetes e2e suite [sig-storage] PersistentVolumes [Feature:ReclaimPolicy] [sig-storage] persistentvolumereclaim:vsphere should not detach and unmount PV when associated pvc with delete as reclaimPolicy is deleted when it is in use by the pod
Kubernetes e2e suite [sig-storage] PersistentVolumes [Feature:ReclaimPolicy] [sig-storage] persistentvolumereclaim:vsphere should retain persistent volume when reclaimPolicy set to retain when associated claim is deleted
Kubernetes e2e suite [sig-storage] PersistentVolumes-local Local volume that cannot be mounted [Slow] should fail due to non-existent path
Kubernetes e2e suite [sig-storage] PersistentVolumes-local Local volume that cannot be mounted [Slow] should fail due to wrong node
Kubernetes e2e suite [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity
Kubernetes e2e suite [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector
Kubernetes e2e suite [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running
Kubernetes e2e suite [sig-storage] PersistentVolumes-local StatefulSet with pod affinity [Slow] should use volumes on one node when pod has affinity
Kubernetes e2e suite [sig-storage] PersistentVolumes-local StatefulSet with pod affinity [Slow] should use volumes on one node when pod management is parallel and pod has affinity
Kubernetes e2e suite [sig-storage] PersistentVolumes-local StatefulSet with pod affinity [Slow] should use volumes spread across nodes when pod has anti-affinity
Kubernetes e2e suite [sig-storage] PersistentVolumes-local StatefulSet with pod affinity [Slow] should use volumes spread across nodes when pod management is parallel and pod has anti-affinity
Kubernetes e2e suite [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set fsGroup for one pod [Slow]
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: block] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set fsGroup for one pod [Slow]
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set fsGroup for one pod [Slow]
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume should set fsGroup for one pod [Slow]
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set fsGroup for one pod [Slow]
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link] Set fsGroup for local volume should set fsGroup for one pod [Slow]
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir] Set fsGroup for local volume should set fsGroup for one pod [Slow]
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set fsGroup for one pod [Slow]
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set fsGroup for one pod [Slow]
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Set fsGroup for local volume should set same fsGroup for two pods simultaneously [Slow]
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes:vsphere should test that a file written to the vsphere volume mount before kubelet restart can be read after restart [Disruptive]
Kubernetes e2e suite [sig-storage] PersistentVolumes:vsphere should test that a vsphere volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns [Disruptive]
Kubernetes e2e suite [sig-storage] PersistentVolumes:vsphere should test that deleting a PVC before the pod does not cause pod deletion to fail on vsphere volume detach
Kubernetes e2e suite [sig-storage] PersistentVolumes:vsphere should test that deleting the Namespace of a PVC and Pod causes the successful detach of vsphere volume
Kubernetes e2e suite [sig-storage] PersistentVolumes:vsphere should test that deleting the PV before the pod does not cause pod deletion to fail on vsphere volume detach
Kubernetes e2e suite [sig-storage] Pod Disks detach in a disrupted environment [Slow] [Disruptive] when node is deleted
Kubernetes e2e suite [sig-storage] Pod Disks detach in a disrupted environment [Slow] [Disruptive] when node's API object is deleted
Kubernetes e2e suite [sig-storage] Pod Disks detach in a disrupted environment [Slow] [Disruptive] when pod is evicted
Kubernetes e2e suite [sig-storage] Pod Disks schedule a pod w/ RW PD(s) mounted to 1 or more containers, write to PD, verify content, delete pod, and repeat in rapid succession [Slow] using 1 containers and 2 PDs
Kubernetes e2e suite [sig-storage] Pod Disks schedule a pod w/ RW PD(s) mounted to 1 or more containers, write to PD, verify content, delete pod, and repeat in rapid succession [Slow] using 4 containers and 1 PDs
Kubernetes e2e suite [sig-storage] Pod Disks schedule pods each with a PD, delete pod and verify detach [Slow] for RW PD with pod delete grace period of "default (30s)"
Kubernetes e2e suite [sig-storage] Pod Disks schedule pods each with a PD, delete pod and verify detach [Slow] for RW PD with pod delete grace period of "immediate (0s)"
Kubernetes e2e suite [sig-storage] Pod Disks schedule pods each with a PD, delete pod and verify detach [Slow] for read-only PD with pod delete grace period of "default (30s)"
Kubernetes e2e suite [sig-storage] Pod Disks schedule pods each with a PD, delete pod and verify detach [Slow] for read-only PD with pod delete grace period of "immediate (0s)"
Kubernetes e2e suite [sig-storage] Pod Disks should be able to delete a non-existent PD without error
Kubernetes e2e suite [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
Kubernetes e2e suite [sig-storage] Projected configMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
Kubernetes e2e suite [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret Should fail non-optional pod creation due to secret object does not exist [Slow]
Kubernetes e2e suite [sig-storage] Projected secret Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
Kubernetes e2e suite [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Regional PD RegionalPD should failover to a different zone when all nodes in one zone become unreachable [Slow] [Disruptive]
Kubernetes e2e suite [sig-storage] Regional PD RegionalPD should provision storage [Slow]
Kubernetes e2e suite [sig-storage] Regional PD RegionalPD should provision storage in the allowedTopologies [Slow]
Kubernetes e2e suite [sig-storage] Regional PD RegionalPD should provision storage in the allowedTopologies with delayed binding [Slow]
Kubernetes e2e suite [sig-storage] Regional PD RegionalPD should provision storage with delayed binding [Slow]
Kubernetes e2e suite [sig-storage] Secrets Should fail non-optional pod creation due to secret object does not exist [Slow]
Kubernetes e2e suite [sig-storage] Secrets Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
Kubernetes e2e suite [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with invalid capability name objectSpaceReserve is not honored for dynamically provisioned pvc using storageclass
Kubernetes e2e suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with invalid diskStripes value is not honored for dynamically provisioned pvc using storageclass
Kubernetes e2e suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with invalid hostFailuresToTolerate value is not honored for dynamically provisioned pvc using storageclass
Kubernetes e2e suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with non-vsan datastore is not honored for dynamically provisioned pvc using storageclass
Kubernetes e2e suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with valid diskStripes and objectSpaceReservation values and a VSAN datastore is honored for dynamically provisioned pvc using storageclass
Kubernetes e2e suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with valid diskStripes and objectSpaceReservation values is honored for dynamically provisioned pvc using storageclass
Kubernetes e2e suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with valid hostFailuresToTolerate and cacheReservation values is honored for dynamically provisioned pvc using storageclass
Kubernetes e2e suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify VSAN storage capability with valid objectSpaceReservation and iopsLimit values is honored for dynamically provisioned pvc using storageclass
Kubernetes e2e suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify an existing and compatible SPBM policy is honored for dynamically provisioned pvc using storageclass
Kubernetes e2e suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify an if a SPBM policy and VSAN capabilities cannot be honored for dynamically provisioned pvc using storageclass
Kubernetes e2e suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify clean up of stale dummy VM for dynamically provisioned pvc using SPBM policy
Kubernetes e2e suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify if a SPBM policy is not honored on a non-compatible datastore for dynamically provisioned pvc using storageclass
Kubernetes e2e suite [sig-storage] Storage Policy Based Volume Provisioning [Feature:vsphere] verify if a non-existing SPBM policy is not honored for dynamically provisioned pvc using storageclass
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-storage] Verify Volume Attach Through vpxd Restart [Feature:vsphere][Serial][Disruptive] verify volume remains attached through vpxd restart
Kubernetes e2e suite [sig-storage] Volume Attach Verify [Feature:vsphere][Serial][Disruptive] verify volume remains attached after master kubelet restart
Kubernetes e2e suite [sig-storage] Volume Disk Format [Feature:vsphere] verify disk format type - eagerzeroedthick is honored for dynamically provisioned pv using storageclass
Kubernetes e2e suite [sig-storage] Volume Disk Format [Feature:vsphere] verify disk format type - thin is honored for dynamically provisioned pv using storageclass
Kubernetes e2e suite [sig-storage] Volume Disk Format [Feature:vsphere] verify disk format type - zeroedthick is honored for dynamically provisioned pv using storageclass
Kubernetes e2e suite [sig-storage] Volume Disk Size [Feature:vsphere] verify dynamically provisioned pv has size rounded up correctly
Kubernetes e2e suite [sig-storage] Volume FStype [Feature:vsphere] verify fstype - default value should be ext4
Kubernetes e2e suite [sig-storage] Volume FStype [Feature:vsphere] verify fstype - ext3 formatted volume
Kubernetes e2e suite [sig-storage] Volume FStype [Feature:vsphere] verify invalid fstype
Kubernetes e2e suite [sig-storage] Volume Operations Storm [Feature:vsphere] should create pod with many volumes and verify no attach call fails
Kubernetes e2e suite [sig-storage] Volume Placement should create and delete pod with multiple volumes from different datastore
Kubernetes e2e suite [sig-storage] Volume Placement should create and delete pod with multiple volumes from same datastore
Kubernetes e2e suite [sig-storage] Volume Placement should create and delete pod with the same volume source attach/detach to different worker nodes
Kubernetes e2e suite [sig-storage] Volume Placement should create and delete pod with the same volume source on the same worker node
Kubernetes e2e suite [sig-storage] Volume Placement test back to back pod creation and deletion with different volume sources on the same worker node
Kubernetes e2e suite [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere] verify dynamic provision with default parameter on clustered datastore
Kubernetes e2e suite [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere] verify dynamic provision with spbm policy on clustered datastore
Kubernetes e2e suite [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere] verify static provisioning on clustered datastore
Kubernetes e2e suite [sig-storage] Volume Provisioning on Datastore [Feature:vsphere] verify dynamically provisioned pv using storageclass fails on an invalid datastore
Kubernetes e2e suite [sig-storage] Volume limits should verify that all nodes have volume limits
Kubernetes e2e suite [sig-storage] Volumes ConfigMap should be mountable
Kubernetes e2e suite [sig-storage] Zone Support Verify PVC creation fails if no zones are specified in the storage class (No shared datastores exist among all the nodes)
Kubernetes e2e suite [sig-storage] Zone Support Verify PVC creation fails if only datastore is specified in the storage class (No shared datastores exist among all the nodes)
Kubernetes e2e suite [sig-storage] Zone Support Verify PVC creation fails if only storage policy is specified in the storage class (No shared datastores exist among all the nodes)
Kubernetes e2e suite [sig-storage] Zone Support Verify PVC creation fails if the availability zone specified in the storage class have no shared datastores under it.
Kubernetes e2e suite [sig-storage] Zone Support Verify PVC creation with an invalid VSAN capability along with a compatible zone combination specified in storage class fails
Kubernetes e2e suite [sig-storage] Zone Support Verify PVC creation with compatible policy and datastore without any zones specified in the storage class fails (No shared datastores exist among all the nodes)
Kubernetes e2e suite [sig-storage] Zone Support Verify PVC creation with incompatible datastore and zone combination specified in storage class fails
Kubernetes e2e suite [sig-storage] Zone Support Verify PVC creation with incompatible storage policy along with compatible zone and datastore combination specified in storage class fails
Kubernetes e2e suite [sig-storage] Zone Support Verify PVC creation with incompatible storagePolicy and zone combination specified in storage class fails
Kubernetes e2e suite [sig-storage] Zone Support Verify PVC creation with incompatible zone along with compatible storagePolicy and datastore combination specified in storage class fails
Kubernetes e2e suite [sig-storage] Zone Support Verify PVC creation with invalid zone specified in storage class fails
Kubernetes e2e suite [sig-storage] Zone Support Verify a PVC creation fails when multiple zones are specified in the storage class without shared datastores among the zones in waitForFirstConsumer binding mode
Kubernetes e2e suite [sig-storage] Zone Support Verify a pod fails to get scheduled when conflicting volume topology (allowedTopologies) and pod scheduling constraints(nodeSelector) are specified
Kubernetes e2e suite [sig-storage] Zone Support Verify a pod is created and attached to a dynamically created PV with storage policy specified in storage class in waitForFirstConsumer binding mode
Kubernetes e2e suite [sig-storage] Zone Support Verify a pod is created and attached to a dynamically created PV with storage policy specified in storage class in waitForFirstConsumer binding mode with allowedTopologies
Kubernetes e2e suite [sig-storage] Zone Support Verify a pod is created and attached to a dynamically created PV with storage policy specified in storage class in waitForFirstConsumer binding mode with multiple allowedTopologies
Kubernetes e2e suite [sig-storage] Zone Support Verify a pod is created and attached to a dynamically created PV, based on a VSAN capability, datastore and compatible zone specified in storage class
Kubernetes e2e suite [sig-storage] Zone Support Verify a pod is created and attached to a dynamically created PV, based on allowed zones specified in storage class
Kubernetes e2e suite [sig-storage] Zone Support Verify a pod is created and attached to a dynamically created PV, based on multiple zones specified in storage class
Kubernetes e2e suite [sig-storage] Zone Support Verify a pod is created and attached to a dynamically created PV, based on multiple zones specified in the storage class. (No shared datastores exist among both zones)
Kubernetes e2e suite [sig-storage] Zone Support Verify a pod is created and attached to a dynamically created PV, based on the allowed zones and datastore specified in storage class
Kubernetes e2e suite [sig-storage] Zone Support Verify a pod is created and attached to a dynamically created PV, based on the allowed zones and storage policy specified in storage class
Kubernetes e2e suite [sig-storage] Zone Support Verify a pod is created and attached to a dynamically created PV, based on the allowed zones, datastore and storage policy specified in storage class
Kubernetes e2e suite [sig-storage] Zone Support Verify a pod is created on a non-Workspace zone and attached to a dynamically created PV, based on the allowed zones and storage policy specified in storage class
Kubernetes e2e suite [sig-storage] Zone Support Verify dynamically created pv with allowed zones specified in storage class, shows the right zone information on its labels
Kubernetes e2e suite [sig-storage] Zone Support Verify dynamically created pv with multiple zones specified in the storage class, shows both the zones on its labels
Kubernetes e2e suite [sig-storage] [Serial] Volume metrics PVController should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc
Kubernetes e2e suite [sig-storage] [Serial] Volume metrics PVController should create none metrics for pvc controller before creating any PV or PVC
Kubernetes e2e suite [sig-storage] [Serial] Volume metrics PVController should create unbound pv count metrics for pvc controller after creating pv only
Kubernetes e2e suite [sig-storage] [Serial] Volume metrics PVController should create unbound pvc count metrics for pvc controller after creating pvc only
Kubernetes e2e suite [sig-storage] [Serial] Volume metrics should create metrics for total number of volumes in A/D Controller
Kubernetes e2e suite [sig-storage] [Serial] Volume metrics should create metrics for total time taken in volume operations in P/V Controller
Kubernetes e2e suite [sig-storage] [Serial] Volume metrics should create prometheus metrics for volume provisioning and attach/detach
Kubernetes e2e suite [sig-storage] [Serial] Volume metrics should create prometheus metrics for volume provisioning errors [Slow]
Kubernetes e2e suite [sig-storage] [Serial] Volume metrics should create volume metrics in Volume Manager
Kubernetes e2e suite [sig-storage] [Serial] Volume metrics should create volume metrics with the correct PVC ref
Kubernetes e2e suite [sig-storage] vcp at scale [Feature:vsphere] vsphere scale tests
Kubernetes e2e suite [sig-storage] vcp-performance [Feature:vsphere] vcp performance tests
Kubernetes e2e suite [sig-storage] vsphere cloud provider stress [Feature:vsphere] vsphere stress tests
Kubernetes e2e suite [sig-storage] vsphere statefulset vsphere statefulset testing
Kubernetes e2e suite [sig-ui] Kubernetes Dashboard [Feature:Dashboard] should check that the kubernetes-dashboard instance is alive
Kubernetes e2e suite [sig-windows] DNS should support configurable pod DNS servers
Kubernetes e2e suite [sig-windows] Hybrid cluster network for all supported CNIs should have stable networking for Linux and Windows pods
Kubernetes e2e suite [sig-windows] Services should be able to create a functioning NodePort service for Windows
Kubernetes e2e suite [sig-windows] Windows volume mounts check volume mount permissions container should have readOnly permissions on emptyDir
Kubernetes e2e suite [sig-windows] Windows volume mounts check volume mount permissions container should have readOnly permissions on hostMapPath
Kubernetes e2e suite [sig-windows] [Feature:Windows] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval
Kubernetes e2e suite [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] Allocatable node memory should be equal to a calculated allocatable memory value
Kubernetes e2e suite [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] attempt to deploy past allocatable memory limits should fail deployments of pods once there isn't enough memory
Kubernetes e2e suite [sig-windows] [Feature:Windows] SecurityContext RunAsUserName should be able create pods and run containers with a given username
Kubernetes e2e suite [sig-windows] [Feature:Windows] SecurityContext RunAsUserName should not be able to create pods with unknown usernames
Kubernetes e2e suite [sig-windows] [Feature:Windows] SecurityContext RunAsUserName should override SecurityContext username if set
Kubernetes e2e suite [sig-windows] [Feature:Windows] [Feature:WindowsGMSA] GMSA Full [Slow] GMSA support works end to end
Kubernetes e2e suite [sig-windows] [Feature:Windows] [Feature:WindowsGMSA] GMSA Kubelet [Slow] kubelet GMSA support when creating a pod with correct GMSA credential specs passes the credential specs down to the Pod's containers