Result   | FAILURE
Tests    | 0 failed / 0 succeeded
Started  |
Elapsed  | 32m12s
Revision | 879465972041635fd7bc8269da49153b67e63822
Refs     | 113
... skipping 145 lines ...
Look at http://kubernetes.io/ for information on how to contact the development team for help.
!!! [1212 00:36:36] Call tree:
!!! [1212 00:36:36]  1: ./hack/ginkgo-e2e.sh:66 detect-master-from-kubeconfig(...)
2019/12/12 00:36:36 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.flakeAttempts=2 --ginkgo.focus=azure-disk --ginkgo.skip=\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|default\sfs.*should\sverify\scontainer\scannot\swrite\sto\ssubpath\sreadonly\svolumes' finished in 89.799338ms
2019/12/12 00:36:36 azure.go:1137: Deleting resource group: kubetest-163a5b0d-1c74-11ea-bbe3-02428835837c.
2019/12/12 00:44:22 main.go:319: Something went wrong: encountered 1 errors: [error during ./hack/ginkgo-e2e.sh --ginkgo.flakeAttempts=2 --ginkgo.focus=azure-disk --ginkgo.skip=\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|default\sfs.*should\sverify\scontainer\scannot\swrite\sto\ssubpath\sreadonly\svolumes: exit status 1]
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
[Barnacle] 2019/12/12 00:44:22 Cleaning up Docker data root...
[Barnacle] 2019/12/12 00:44:22 Removing all containers.
... skipping 23 lines ...
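To see which tests the failed step would have excluded, the `--ginkgo.skip` expression from the log can be exercised directly. This is a minimal sketch using Python's `re` module as a stand-in for Ginkgo's regex matching (Ginkgo applies the skip pattern as an unanchored regex over the full test description); the sample test names below are hypothetical, not taken from this run.

```python
import re

# The --ginkgo.skip expression, copied verbatim from the log above.
SKIP = (r"\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]"
        r"|default\sfs.*should\sverify\scontainer\scannot\swrite\sto\ssubpath\sreadonly\svolumes")

def is_skipped(test_name: str) -> bool:
    # Unanchored search, mirroring how a skip regex is matched
    # against the full test name.
    return re.search(SKIP, test_name) is not None

# Hypothetical test names for illustration only.
print(is_skipped("[sig-storage] azure-disk [Disruptive] detach volume"))  # True
print(is_skipped("[sig-storage] azure-disk attach/detach basic test"))    # False
```

Note that the run never reached test selection: the call tree shows `detect-master-from-kubeconfig` failing at `./hack/ginkgo-e2e.sh:66`, so the step exited in under 90ms before any tests ran, which matches the "0 failed / 0 succeeded" summary.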