# Pipeline Templates
This design is heavily inspired by (and in parts copied from) Azure Pipeline Templates.
- Share common step definitions across many jobs
- Allow local as well as remote (i.e. pulled from a repo) re-use
- Allow both globally defined and team-specific shared repositories
- Enable minimal pipeline definitions where no customization is required
  - e.g. microservices that share the same build process but live in different repos
- Templates can be customized by passing parameters from the including pipeline
This design focuses on extending our existing syntax to introduce an `include` property that will allow re-using pipeline definitions and steps from both local and remote (git) sources.
This approach was chosen over general-purpose templating and a more complex extension/hierarchy mechanism. We believe it's simpler and more straightforward than those approaches (see the section on Alternative Approaches Considered at the end of the document for more detail).
The `include` property will be available at the `step` level for adding steps to your pipeline, and at the `pipeline` level for including environment variables or other global configuration.
Making this split and having explicit includes for environment vs steps gives us a lot of flexibility and still keeps things simple. Having a single include that performs deep merges across sets of files can be confusing, and gives more limited options for certain types of overriding or re-use in the parent config file.
One or more steps can be reused across several pipelines. We can include steps from multiple files, and include additional steps explicitly in our pipeline before or after the inclusion.
As mentioned, both local and remote inclusion will be possible. The following examples focus on local inclusion; remote inclusion is discussed in its own section.
Let's define two local templates:
```yaml
# File: templates/docker-compose-publish.yml
# Note, split into two steps as an example of a template
# that expands to multiple steps
steps:
  - name: build-containers
    image: ${plugins}/docker
    commands:
      - docker-compose build
  - name: publish-containers
    image: ${plugins}/docker
    secrets:
      - source: jenkinsdeployer
        target:
          - DOCKER_USER
          - DOCKER_PASS
    commands:
      - echo ${DOCKER_PASS} | docker login -u ${DOCKER_USER} --password-stdin ${ARTIFACTORY}
      - docker-compose push
      - docker-compose down -v --rmi=all
      - docker logout ${ARTIFACTORY}
```
```yaml
# File: templates/slack.yml
steps:
  - name: slack-notification
    image: ${plugins}/slack
    secrets:
      - source: pipeline_notifier_slack_hook
        target:
          - SLACK_WEBHOOK
    commands:
      - python /app/slack.py
    when:
      status: [ success, failure ]
      branch: master
```
Now we'll incorporate them into our pipeline:
```yaml
# File: pipeline.yml
pipeline:
  steps:
    - name: build-jar
      image: mvn:3-jdk-11
      commands: [ mvn package ]
    # the following include will expand to 2 steps
    - include: templates/docker-compose-publish.yml
    - include: templates/slack.yml # template include (1 step)
      environment: # note we're adding to the environment
        ROOM: "#sample_channel"
    - name: say hello
      image: ubuntu:14.04
      commands: [ echo "hello" ]
```
Note that the resulting pipeline would contain 5 steps: one step before and one after the included steps, one include that expands to 2 steps, and one that expands to a single step.
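For illustration, here is a sketch of the expanded step list (step bodies elided, comments noting where each step comes from):

```yaml
# Conceptual result after expansion (sketch only):
pipeline:
  steps:
    - name: build-jar          # defined directly in pipeline.yml
    - name: build-containers   # from templates/docker-compose-publish.yml
    - name: publish-containers # from templates/docker-compose-publish.yml
    - name: slack-notification # from templates/slack.yml, with ROOM added
    - name: say hello          # defined directly in pipeline.yml
```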
Like explicit steps, templates will have access to the Standard Environment Variables and the pipeline global Environment.
Also like explicit steps, these variables will be available for expansion in any `step.name` or `step.image` properties in a template. Likewise, they will be available as environment variables in the container for the `commands` at runtime.
At the use site of the template, the containing pipeline can add additional variables for use in the template by adding an `environment` section when referencing the template.
```yaml
pipeline:
  steps:
    - include: templates/slack.yml
      environment:
        ROOM: "#my_room"
```
This will add the environment variable `ROOM` to the `environment` for all steps contained in the template. If any of the steps previously had a value for `ROOM` in its environment, it will be overwritten.
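Conceptually, the effective slack step after this include would look like the following sketch (other properties as defined in the template above):

```yaml
# Effective step after inclusion (sketch):
- name: slack-notification
  image: ${plugins}/slack
  environment:
    ROOM: "#my_room" # added by the including pipeline, overwriting any prior value
  commands:
    - python /app/slack.py
```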
Currently, the `step` environment is only provided at runtime for commands, and is not used as input for any substitution in the `step.name` or `step.image` properties. We will change this so that it is also provided during variable expansion, enabling calling pipelines to customize those properties. E.g.
```yaml
# File: templates/java.yml
steps:
  - name: build-jar
    image: mvn:${maven_version}
    commands: [ mvn package ]
```
```yaml
# File: pipeline.yml
steps:
  - include: templates/java.yml
    environment:
      maven_version: 3-jdk-11
```
Currently we only support the syntax `${variable}` for basic string interpolation.
We will extend this to also support `${variable:-default}`.
This syntax was chosen because it matches bash syntax, and so we can use the same syntax both in our pipeline properties as well as in a `commands` block.
E.g.
```yaml
# File: templates/java.yml
steps:
  - name: build-jar
    image: mvn:${maven_version:-3-jdk-8}
    commands: [ mvn package ]
```
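For example, a sketch of how the default interacts with the calling pipeline:

```yaml
# File: pipeline.yml
steps:
  - include: templates/java.yml
    # no maven_version provided, so the image expands to mvn:3-jdk-8;
    # adding environment: { maven_version: 3-jdk-11 } would expand it to mvn:3-jdk-11 instead
```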
Like `environment` above, we can also extend the template to add secret information for the steps.
```yaml
pipeline:
  steps:
    - include: templates/slack.yml
      secrets:
        - source: pipeline_notifier_slack_hook
          target:
            - SLACK_WEBHOOK
```
This will add the secret to the list of available secrets for all steps in the template.
If any of the steps had previously defined secrets referencing any of the same `target` elements, those earlier definitions will be removed.
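For example, suppose the slack template had originally sourced its webhook from a hypothetical `team_slack_hook` secret; after the include above, that definition is dropped because it targets the same `SLACK_WEBHOOK` element:

```yaml
# In the template (hypothetical original definition):
#   secrets:
#     - source: team_slack_hook
#       target:
#         - SLACK_WEBHOOK
# Effective secrets after inclusion (sketch):
secrets:
  - source: pipeline_notifier_slack_hook
    target:
      - SLACK_WEBHOOK
```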
Like our existing string interpolation, if any template variables cannot be expanded, this will result in a validation error before the pipeline is run.
For environment variable references in the `commands` list, we do not perform any validation, because we can't be certain what environment variables will be available when an arbitrary command is run in an arbitrary container.
To allow better validation upfront, we will allow an optional element in templates listing what parameters are expected. This will cover both environment variables and secrets.
```yaml
# File: templates/docker-compose-publish.yml
parameters:
  required: [ DOCKER_USER, DOCKER_PASS, ARTIFACTORY ]
steps:
  - name: publish containers
    image: docker
    secrets:
      - source: poetjenkinsdeployer
        target:
          - DOCKER_USER
          - DOCKER_PASS
    commands:
      - echo ${DOCKER_PASS} | docker login -u ${DOCKER_USER} --password-stdin ${ARTIFACTORY}
      - docker-compose build
      - docker-compose push
      - docker-compose down -v --rmi=all
      - docker logout ${ARTIFACTORY}
```
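A sketch of a calling pipeline that satisfies these parameters: `DOCKER_USER` and `DOCKER_PASS` are already provided by the template's own `secrets`, so the caller only needs to supply `ARTIFACTORY` (the hostname below is a placeholder):

```yaml
# File: pipeline.yml
pipeline:
  environment:
    ARTIFACTORY: artifactory.example.com # placeholder registry host
  steps:
    - include: templates/docker-compose-publish.yml
```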
There may be other global pipeline information we wish to share. For example, in a microservice type architecture, we may want to share a base version or other standard configuration information.
We will support an `include` property at the `pipeline` level that can import `global` and `environment` information from a list of templates. In the future, this may include other information such as secrets, services/sidecar containers, etc.
```yaml
# File: templates/microservice-env.yml
pipeline:
  appVersion:
    master: 1.0.0
  environment:
    LOG_LEVEL: "info"
```
```yaml
# File: pipeline.yml
pipeline:
  include:
    - templates/microservice-env.yml
```
`include` at the `pipeline` level is only valid for configuration information. Templates included at this level must not include pipeline steps.
- Included files will be processed in order, with later files overriding previous ones.
- Definitions from the host `pipeline.yml` file will be evaluated last and override any previous definitions, as the sketch below shows.
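A sketch of the override order, using two hypothetical config templates and a host-level override:

```yaml
# File: templates/base-env.yml (hypothetical)
pipeline:
  environment:
    LOG_LEVEL: "info"
    REGION: "us-west"

# File: templates/team-env.yml (hypothetical)
pipeline:
  environment:
    LOG_LEVEL: "debug" # processed later, overrides base-env.yml

# File: pipeline.yml
pipeline:
  include:
    - templates/base-env.yml
    - templates/team-env.yml
  environment:
    REGION: "us-east" # host file evaluated last, overrides base-env.yml

# Effective environment: LOG_LEVEL=debug, REGION=us-east
```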
## Remote Templates

Templates can be defined in arbitrary repositories.
For example, we could define a "standard java" template in a `poet-pipeline-templates` repository:
```yaml
# File: standard-java.yml
pipeline:
  steps:
    - name: build-jar
      image: maven:${maven_version:-3-jdk-9}
      commands:
        - mvn clean package
    - name: Run SonarQube
      image: ...
      ...
    - name: publish to artifactory
      image: ...
      ...
```
Now we can reuse this template in multiple repositories.
Under a new `resources` element at the top level of our `pipeline.yml`, we can provide repository details.
To specify that a file is coming from a repo, we use `@` followed by the name we gave the repository.
We can include files from multiple remote repositories, as well as mix with local templates.
```yaml
# File: pipeline.yml
resources:
  repositories:
    - name: templates
      uri: https://github.com/tmobile/POET-pipeline-templates-library.git # Work-in-progress
      label: master # optional branch/tag/commit, defaults to master
      # Note our larger design for secrets is still pending, credential details may change
      credentials:
        id: buildmaster
pipeline:
  steps:
    - include: standard-java.yml@templates
    - include: slack.yml@templates
    - include: templates/my_local_template.yml
```
- Repositories are resolved only once, at pipeline start-up
  - the same resource is used for the duration of the pipeline
- Once expanded, the final pipeline runs as if it were defined entirely in the source repo
  - you can't use scripts or other files from the template repo
Our initial design specifically disallowed templates including other templates. With use, it's become obvious this would be a powerful feature. For example, a common use would be to template out a common notification step and use that inside a higher-level template of building a java application.
When including a template, the user should not need to know how the template is implemented or if it includes other templates. It can be thought of as a block on its own.
Likewise, a given template does not need to make assumptions about how it is included. If a template needs to reference additional outside resources, they can be defined and scoped within the template.
Currently we impose a soft limit of 5 levels of include depth. Since 5 is a relatively small number, we don't perform any other loop detection; any cycle will quickly be caught by the maximum include depth.
Like the host `pipeline.yml` file, both step and config templates can include their own `resources` section where they may specify other repositories to use for template includes.
The parent's repositories are not consulted; each template is treated independently.
A template may refer to other templates within its repository in a local/relative fashion.
```yaml
# File: pipeline.yml
resources:
  repositories:
    - name: my-templates
      uri: https://github.com/tmobile/POET-pipeline-templates-library.git # Work-in-progress
      credentials:
        id: buildmaster
pipeline:
  steps:
    - include: java-build.yml@my-templates
```
```yaml
# Repository: https://github.com/tmobile/POET-pipeline-templates-library.git # Work-in-progress
# File: java-build.yml
resources:
  repositories:
    - name: templates
      uri: ...
      credentials:
        id: ...
steps:
  - include: mvn-package.yml # note this will be in the same repo as this template
  - include: slack.yml@templates # remote repo
  - name: a directly defined step
    image: ...
    ...
```
```yaml
# Repository: https://github.com/tmobile/POET-pipeline-templates-library.git # Work-in-progress
# File: mvn-package.yml
steps:
  - name: mvn-package
    image: mvn:3
    commands:
      - mvn clean package
```
As mentioned, we have explicitly separated importing configuration information from importing steps.
This makes our minimal possible `pipeline.yml` that only re-uses existing definitions something like:
```yaml
# File: pipeline.yml
pipeline:
  include:
    - templates/shared-env.yml
  steps:
    - include: templates/shared-steps.yml
```
## Alternative Approaches Considered

For reference, here are some other approaches that were considered. Ultimately, I think the final approach detailed above (based on Azure Pipeline Templates) struck the right balance of flexibility and simplicity for end-users.
I briefly considered implementing a pre-processing step that treats the input `pipeline.yml` as text and runs an existing templating engine on it. My preference was for jinja2, as it's well-known, has a nice syntax, and has a promising java implementation that looks flexible in terms of extension.
The idea was that we'd run a container to perform templating, which would produce a valid `pipeline.yml` file when complete.
The jinjava implementation has support for custom Resource Locators for finding templates to include, so I envisioned something like:
```
# File: pipeline.yml
{% include "https://github..../[email protected]" %}
steps:
{% for app in apps %}
  - name: {{ app.name }}
{% endfor %}
```
Where we'd write a custom Resource Locator to support pulling files from git repos.
Having all of jinja2's templating features seemed like overkill, though they could be useful in the future.
I abandoned this idea for a few reasons:
- it seemed overly complex to include a general purpose templating language as part of our pipeline
- if we can't expect the `pipeline.yml` to be a valid yaml file, but only unstructured text that we must process, then it becomes hard to work with. E.g. in the example above:
  - how do we provide repository info/credentials to the template engine?
    - introduce a secondary config file?
  - how do we provide other user input parameters to the templator (such as `apps` above)?
    - another config file/format?
Yaml has syntax for referencing and re-using parts of a document via Anchors and Aliases, and there are good overviews of its use.
While Bitbucket Pipelines don't allow external inclusion, they do document using anchors to re-use parts of a single pipeline file.
I briefly considered a mechanism where you could define anchored definitions in a separate file, somehow include them, and then reference them in your `pipeline.yml` file.
Something like:
```yaml
# File: templates/java.yml
definitions:
  steps:
    - step: &build-test
        name: build and test
        image: mvn:3-jdk-8
        commands: [ mvn compile test ]
```
```yaml
# File: pipeline.yml
pipeline:
  definitions:
    include: "templates/java.yml"
  steps:
    - *build-test
```
This approach was abandoned:
- the `pipeline.yml` would not be valid yaml!
  - it includes references to anchors that do not exist
  - we still need a preprocessing step
- the yaml anchor/alias mechanism isn't that flexible
  - it only supports referencing objects
  - we'd have to re-reference each individual step, or (more likely) come up with alternate syntax to allow the steps list to contain special objects with sublists
- the yaml anchor/alias mechanism and its limitations aren't super user-friendly
  - the fact that we'd have a non-standard implementation that allowed external references wouldn't make it any easier
## References

- Azure Templates
- Configuration of your jobs with .gitlab-ci.yml | GitLab
  - the gitlab include is very basic and allows config files to be deep merged