Use teardown function hook
Using the `teardown` function hook, which will be called by scripts in
knative/test-infra to:
1. Destroy the existing environment when using an existing cluster (so
   we must wait for it to be deleted before attempting to deploy it
   again)
2. Bring down the environment after the tests complete

Also adding deployment/teardown of the Build CRD, which is required for the
Pipeline CRD (Tasks wrap Builds).
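
The intended flow is roughly the following (a minimal sketch of the calling pattern described above; `USE_EXISTING_CLUSTER` and the surrounding harness logic are illustrative assumptions, not knative/test-infra APIs):

```shell
# Illustrative harness logic: with an existing cluster, tear down any prior
# deployment first and block until it is gone, then deploy fresh; also make
# sure the environment comes down once the tests finish.
if [[ -n "${USE_EXISTING_CLUSTER:-}" ]]; then
  teardown
fi
trap teardown EXIT
# ... deploy the CRDs and run the tests ...
```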
bobcatfish authored and knative-prow-robot committed Oct 4, 2018
1 parent 1180a45 commit 591b367
Showing 2 changed files with 20 additions and 12 deletions.
DEVELOPMENT.md: 10 changes (6 additions, 4 deletions)
@@ -89,10 +89,9 @@ kubectl create clusterrolebinding cluster-admin-binding \

## Deploy Knative Build

build-pipeline has a dependency on [build](https://github.com/knative/build)

```
kubectl deploy -f ./third_party/config/build/release.yaml
kubectl apply -f ./third_party/config/build/release.yaml
```
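
To confirm the install, a quick check like this may help (assuming the release registers the Build CRD as `builds.build.knative.dev` and deploys into the `knative-build` namespace, which is the namespace the e2e teardown below waits on):

```shell
# Both should succeed once the release has been applied.
kubectl get crd builds.build.knative.dev
kubectl get pods -n knative-build
```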

## Iterating
@@ -116,11 +115,12 @@ To make changes to these CRDs, you will probably interact with:

## Running everything

You can stand up a version of this controller on-cluster (to your `kubectl config current-context`) with:
You can stand up a version of this controller on-cluster (to your `kubectl config current-context`),
including `knative/build` (which is wrapped by [`Task`](README.md#task)):

```shell
# This will register the CRD and deploy the controller to start acting on them.
ko apply -f config/
kubectl apply -f ./third_party/config/build/release.yaml
```
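
To sanity-check that the controller came up, something like the following may help (assuming the default `knative-build-pipeline` namespace, which is what the e2e script below waits on):

```shell
# The Pipeline CRD controller pods should reach Running.
kubectl get pods -n knative-build-pipeline
```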

As you make changes to the code, you can redeploy your controller with:
@@ -133,7 +133,9 @@ You can clean up everything with:

```shell
ko delete -f config/
kubectl delete -f ./third_party/config/build/release.yaml
```

## Accessing logs

To look at the controller logs, run:
test/e2e-tests.sh: 22 changes (14 additions, 8 deletions)
@@ -36,13 +36,16 @@ if ! [[ -z ${KO_DOCKER_REPO} ]]; then
export DOCKER_REPO_OVERRIDE=${KO_DOCKER_REPO}
fi

function take_down_pipeline() {
function teardown() {
header "Tearing down Pipeline CRD"
ko delete --ignore-not-found=true -f config/
kubectl delete --ignore-not-found=true -f ./third_party/config/build/release.yaml
# teardown will be called when run against an existing cluster to clean up before
# continuing, so we must wait for the cleanup to complete or the subsequent attempt
# to deploy to the same namespace will fail
wait_until_object_does_not_exist namespace knative-build-pipeline
wait_until_object_does_not_exist namespace knative-build
}
# TODO: add a wait for resources to be up and change take_down_pipeline to
# teardown so that it will be called auto-magically.
trap take_down_pipeline EXIT

# Called by `fail_test` (provided by `e2e-tests.sh`) to dump info on test failure
function dump_extra_cluster_state() {
@@ -69,16 +72,19 @@ if [[ -z ${KO_DOCKER_REPO} ]]; then
export KO_DOCKER_REPO=${DOCKER_REPO_OVERRIDE}
fi

# Deploy the latest version of the Pipeline CRD.
# TODO(#59) do we need to deploy the Build CRD as well?
header "Deploying Build CRD"
kubectl apply -f ./third_party/config/build/release.yaml

header "Deploying Pipeline CRD"
ko apply -f config/

# Wait for pods to be running in the namespaces we are deploying to
# The functions we are calling out to get pretty noisy when tracing is on
set +o xtrace

# Wait for pods to be running in the namespaces we are deploying to
wait_until_pods_running knative-build-pipeline || fail_test "Pipeline CRD did not come up"
set -o xtrace

# Actually run the tests
report_go_test \
-v -tags=e2e -count=1 -timeout=20m ./test \
${options} || fail_test
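
For reference, `wait_until_object_does_not_exist` and `wait_until_pods_running` are helpers provided by the knative/test-infra scripts this file builds on; below is a minimal sketch of the polling behavior `teardown` relies on, with an illustrative signature and timings rather than the library's actual ones:

```shell
# Illustrative only: poll until the named object is gone, give up after ~5 minutes.
function wait_until_object_does_not_exist() {
  local kind="$1" name="$2"
  for _ in {1..150}; do
    kubectl get "${kind}" "${name}" > /dev/null 2>&1 || return 0
    sleep 2
  done
  echo "ERROR: ${kind} ${name} still exists"
  return 1
}
```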