Why does Tekton pipelines have a folder called `tekton`? Cuz we think it would be cool if the `tekton` folder were the place to look for CI/CD logic in most repos!

We dogfood our project by using Tekton Pipelines to build, test and release Tekton Pipelines!
This directory contains the `Tasks` and `Pipelines` that we use.

The Pipelines and Tasks in this folder are used for:

1. Manually creating official releases
2. Automatically creating nightly releases
To start from scratch and use these Pipelines and Tasks:

1. Install Tekton (see the install commands below)
2. Apply the Tasks and Pipelines from the catalog, plumbing, and this repo
3. Create the required service account and secrets
Official releases are performed from the `dogfooding` cluster in the `tekton-releases` GCP project. This cluster already has the correct version of Tekton installed.

To make a new release:
- (Optionally) Apply the latest versions of the Tasks + Pipelines
- (If you haven't already) Install `tkn` (see the sketch after this list)
- Run the Pipeline
- Create the new tag and release in GitHub (see one way of doing that here). TODO(#530): Automate as much of this as possible with Tekton.
- Add an entry to the README at `HEAD` for docs and examples for the new release (README.md#read-the-docs).
- Update the new release in GitHub with the same links to the docs and examples, see v0.1.0 for example.
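If you don't already have `tkn`, here is a minimal install sketch. It assumes a Linux amd64 machine and the naming convention of the `tektoncd/cli` GitHub release artifacts; the version number is an assumption, so pick the latest release for your platform:

```bash
# A sketch: install tkn from a tektoncd/cli GitHub release.
# TKN_VERSION is an assumption - replace it with the latest release.
TKN_VERSION=0.6.0
curl -LO "https://github.com/tektoncd/cli/releases/download/v${TKN_VERSION}/tkn_${TKN_VERSION}_Linux_x86_64.tar.gz"
# Extract the tkn binary into a directory on your PATH (may need sudo)
tar xvzf "tkn_${TKN_VERSION}_Linux_x86_64.tar.gz" -C /usr/local/bin tkn

# On macOS, Homebrew works too:
# brew tap tektoncd/cli && brew install tektoncd-cli
```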
To use `tkn` to run the `publish-tekton-pipelines` `Task` and create a release:
- Pick the revision you want to release and update the `resources.yaml` file to add a `PipelineResource` for it (the verification sketch after this list can confirm it was created), e.g.:

  ```yaml
  apiVersion: tekton.dev/v1alpha1
  kind: PipelineResource
  metadata:
    name: tekton-pipelines-vX-Y-Z
  spec:
    type: git
    params:
      - name: url
        value: https://github.com/tektoncd/pipeline
      - name: revision
        value: revision-for-vX.Y.Z-invalid-tags-boouuhhh # REPLACE with the commit you'd like to build from (not a tag, since that's not created yet)
  ```
- To use release post-processing services, update the `resources.yaml` file to add a valid `targetURI` in the cloud event `PipelineResource` named `post-release-trigger`:

  ```yaml
  apiVersion: tekton.dev/v1alpha1
  kind: PipelineResource
  metadata:
    name: post-release-trigger
  spec:
    type: cloudEvent
    params:
      - name: targetURI
        value: http://el-pipeline-release-post-processing.default.svc.cluster.local:8080 # This has to be changed to a valid URL
  ```

  The `targetURI` should point to the event listener configured in the cluster. The example above is configured with the correct value for the `dogfooding` cluster (the verification sketch after this list checks that the service resolves).
- To run against your own infrastructure (if you are running in the production cluster the default account should already have these creds, this is just a bonus - plus `release-right-meow` might already exist in the cluster!), also setup the required credentials for the `release-right-meow` service account, either:

  - For the GCP service account `release-right-meow@tekton-releases.iam.gserviceaccount.com`, which has the proper authorization to release the images and yamls in our `tekton-releases` GCP project
  - For your own GCP service account if running against your own infrastructure
- Connect to the production cluster:

  ```bash
  gcloud container clusters get-credentials dogfooding --zone us-central1-a --project tekton-releases
  ```
- Run the `release-pipeline` (assuming you are using the production cluster and all the Tasks and Pipelines already exist) - you can follow the run with the monitoring sketch below:

  ```bash
  # Create the resources - i.e. set the revision that you want to build from
  kubectl apply -f tekton/resources.yaml

  # Change the environment variable to the version you would like to use.
  # Be careful: due to #983 it is possible to overwrite previous releases.
  export VERSION_TAG=v0.X.Y
  export IMAGE_REGISTRY=gcr.io/tekton-releases

  # Double-check the git revision that is going to be used for the release:
  kubectl get pipelineresource/tekton-pipelines-git -o=jsonpath="{'Target Revision: '}{.spec.params[?(@.name == 'revision')].value}{'\n'}"

  tkn pipeline start \
    --param=versionTag=${VERSION_TAG} \
    --param=imageRegistry=${IMAGE_REGISTRY} \
    --serviceaccount=release-right-meow \
    --resource=source-repo=tekton-pipelines-git \
    --resource=bucket=tekton-bucket \
    --resource=builtBaseImage=base-image \
    --resource=builtEntrypointImage=entrypoint-image \
    --resource=builtKubeconfigWriterImage=kubeconfigwriter-image \
    --resource=builtCredsInitImage=creds-init-image \
    --resource=builtGitInitImage=git-init-image \
    --resource=builtControllerImage=controller-image \
    --resource=builtWebhookImage=webhook-image \
    --resource=builtDigestExporterImage=digest-exporter-image \
    --resource=builtPullRequestInitImage=pull-request-init-image \
    --resource=builtGcsFetcherImage=gcs-fetcher-image \
    --resource=notification=post-release-trigger \
    pipeline-release
  ```
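Before starting the run, it can help to sanity-check the pieces configured above. A minimal verification sketch; the resource and service names are taken from the examples in this list, so adjust them if yours differ:

```bash
# Confirm kubectl is pointed at the dogfooding cluster
kubectl config current-context

# Confirm the git PipelineResource for the release exists
# (tekton-pipelines-vX-Y-Z is the example name used above)
kubectl get pipelineresource tekton-pipelines-vX-Y-Z

# Confirm the post-release event listener service referenced by
# post-release-trigger resolves in the default namespace
kubectl get svc el-pipeline-release-post-processing -n default
```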
TODO(#569): Normally we'd use the image `PipelineResources` to control which image registry the images are pushed to. However, since we have so many images, all going to the same registry, we are cheating and using a parameter for the image registry instead.
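Once the run has started, you can watch its progress with `tkn`. A quick sketch; the run name below is hypothetical, substitute the one `tkn pipelinerun list` reports:

```bash
# List recent runs of the release pipeline to find the run name
tkn pipelinerun list

# Follow the logs of a specific run (pipeline-release-run-abc12 is a placeholder)
tkn pipelinerun logs pipeline-release-run-abc12 -f
```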
The nightly release pipeline is triggered nightly by Prow. This Pipeline uses the nightly variants of the Tasks and Pipelines in this folder (`release-pipeline-nightly.yaml` and `publish-nightly.yaml`).
The nightly release Pipeline is currently missing Tasks which we want to add once we are able:

- The unit tests aren't run due to the data race reported in #1124
- Linting isn't run because it is flaky (#1205)
- Build isn't run because it uses `workingDir`, which is broken in v0.3.1 (kubernetes/test-infra#13948)
Some of the Pipelines and Tasks in this repo need to work with Tekton v0.3.1, because Prow is still running that version (kubernetes/test-infra#13948) and they need to be usable with Prow. Specifically, nightly releases are triggered by Prow and so must be compatible with v0.3.1, while full releases are triggered manually and require Tekton >= v0.7.0.
```bash
# If this is your first time installing Tekton in the cluster you might need to give yourself permission to do so
kubectl create clusterrolebinding cluster-admin-binding-someusername \
  --clusterrole=cluster-admin \
  --user=$(gcloud config get-value core/account)

# For Tekton v0.3.1 - apply version v0.3.1
kubectl apply --filename https://storage.googleapis.com/tekton-releases/previous/v0.3.1/release.yaml

# For Tekton v0.7.0 - apply version v0.7.0 - Do not apply both versions in the same cluster!
kubectl apply --filename https://storage.googleapis.com/tekton-releases/previous/v0.7.0/release.yaml
```
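To confirm the install succeeded, a quick check; the release manifests install the controller and webhook into the `tekton-pipelines` namespace:

```bash
# The tekton-pipelines-controller and tekton-pipelines-webhook pods should be Running
kubectl get pods --namespace tekton-pipelines
```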
Add all the `Tasks` to the cluster, including the `golang` Tasks from the `tektoncd/catalog`, and the release pre-check Task from `tektoncd/plumbing`.

For nightly releases, use a version of the `tektoncd/catalog` tasks that is compatible with Tekton v0.3.1:
```bash
# Apply the Tasks we are using from the catalog
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/14d38f2041312b0ad17bc079cfa9c0d66895cc7a/golang/lint.yaml
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/14d38f2041312b0ad17bc079cfa9c0d66895cc7a/golang/build.yaml
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/14d38f2041312b0ad17bc079cfa9c0d66895cc7a/golang/tests.yaml
```
For full releases, use a version of the `tektoncd/catalog` tasks that is compatible with Tekton v0.7.0 (`master`) and install the pre-release check Task from plumbing too:
```bash
# Apply the Tasks we are using from the catalog
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/master/golang/lint.yaml
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/master/golang/build.yaml
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/master/golang/tests.yaml
kubectl apply -f https://raw.githubusercontent.com/tektoncd/plumbing/master/tekton/prerelease_checks.yaml
```
Apply the tasks from the `pipeline` repo:
```bash
# Apply the Tasks and Pipelines we use from this repo
kubectl apply -f tekton/ci-images.yaml
kubectl apply -f tekton/publish.yaml
kubectl apply -f tekton/publish-nightly.yaml
kubectl apply -f tekton/release-pipeline.yaml
kubectl apply -f tekton/release-pipeline-nightly.yaml

# Apply the resources - note that when manually releasing you'll re-apply these
kubectl apply -f tekton/resources.yaml
```
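After applying everything, you can verify that the Tasks and Pipelines registered correctly. A quick sketch using `tkn` (assumes the resources were applied to your current namespace):

```bash
# The catalog, plumbing, and repo Tasks should all be listed
tkn task list

# The release Pipelines should be listed too
tkn pipeline list
```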
The `Tasks` and `Pipelines` from this repo are:

- `ci-images.yaml` - This `Task` uses `kaniko` to build and publish images for the CI itself, which can then be used as `steps` in downstream `Tasks`
- `publish.yaml` - This `Task` uses `kaniko` to build and publish base images, and uses `ko` to build all of the container images we release and generate the `release.yaml`
- `release-pipeline.yaml` - This `Pipeline` uses the `golang` `Task`s from the `tektoncd/catalog` and `publish.yaml`'s `Task`.
In order to release, these Pipelines use the `release-right-meow` service account, which uses `release-secret` and has `Storage Admin` access to `tekton-releases` and `tekton-releases-nightly`.
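If you are setting this up against your own infrastructure, a hedged sketch of granting equivalent access; the project and service account names below are placeholders, and `roles/storage.admin` is the standard GCP Storage Admin role:

```bash
# Grant Storage Admin on your own GCP project to your release service account.
# my-project and my-release-sa are hypothetical names - replace them with yours.
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:my-release-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.admin"
```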
After creating these service accounts in GCP, the kubernetes service account and secret were created with:
```bash
KEY_FILE=release.json
GENERIC_SECRET=release-secret
ACCOUNT=release-right-meow
# Connected to the `prow` cluster in the `tekton-releases` GCP project
GCP_ACCOUNT="$ACCOUNT@tekton-releases.iam.gserviceaccount.com"

# 1. Create a private key for the service account
gcloud iam service-accounts keys create $KEY_FILE --iam-account $GCP_ACCOUNT

# 2. Create kubernetes secret, which we will use via a service account and directly mounting
kubectl create secret generic $GENERIC_SECRET --from-file=./$KEY_FILE

# 3. Add the docker secret to the service account
kubectl apply -f tekton/account.yaml
kubectl patch serviceaccount $ACCOUNT \
  -p "{\"secrets\": [{\"name\": \"$GENERIC_SECRET\"}]}"
```
Some supporting scripts have been written using Python 2.7:

- `koparse` - Contains logic for parsing `release.yaml` files created by `ko`
In order to run `ko`, and to be able to use a cluster's default credentials, we need an image which contains:

- `ko`
- `golang` - Required by `ko` to build
- `gcloud` - Required to auth with default namespace credentials

The image which we use for this is built from `tekton/ko/Dockerfile`.

go-containerregistry#383 is about publishing a `ko` image, which hopefully we'll be able to move to.
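Until then, a hedged sketch of building and publishing that image yourself from the Dockerfile in this repo; the image name is a placeholder, so point it at a registry you control:

```bash
# Build the ko + golang + gcloud image and push it to your own registry
# (gcr.io/my-project/ko-gcloud is hypothetical - substitute your own image name).
docker build -t gcr.io/my-project/ko-gcloud -f tekton/ko/Dockerfile .
docker push gcr.io/my-project/ko-gcloud
```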