Component Pipeline
This pipeline allows you to have a minimal Jenkinsfile
in each repository by providing all language-agnostic build aspects. The goal is to duplicate as little as possible between repositories and have an easy way to ship updates to all projects.
Usage
Load the shared library in your Jenkinsfile
like this:
@Library('ods-jenkins-shared-library@3.x') _
odsComponentPipeline(
imageStreamTag: 'ods/jenkins-agent-golang:3.x',
branchToEnvironmentMapping: [
'master': 'dev',
// 'release/': 'test'
]
) { context ->
odsComponentFindOpenShiftImageOrElse(context) {
stage('Build') {
// custom stage
}
odsComponentStageScanWithSonar(context)
odsComponentStageBuildOpenShiftImage(context)
}
odsComponentStageRolloutOpenShiftDeployment(context)
}
The version in @Library
can be any Git revision, such as a branch (e.g. master
or 2.x
), a tag (e.g. v2.0
) or even a specific commit.
There are many built-in stages like odsComponentStageScanWithSonar
that you can use; please see Stages for more details.
Pipeline Options
odsComponentPipeline
can be customized by passing configuration options like this:
odsComponentPipeline(
imageStreamTag: 'ods/jenkins-agent-golang:3.x',
dockerDir: 'foo'
)
Available options are:
Property | Description |
---|---|
image | Container image to use for the Jenkins agent container. This value is not used when |
imageStreamTag | Container image tag of an |
alwaysPullImage | Determine whether to always pull the container image before each build run. Defaults to |
resourceRequestMemory | How much memory the container requests - defaults to 1Gi. This value is not used when |
resourceLimitMemory | Maximum memory the container can use - defaults to 2Gi. This value is not used when |
resourceRequestCpu | How much CPU the container requests - defaults to 10m. This value is not used when |
resourceLimitCpu | Maximum CPU the container can use - defaults to 300m. This value is not used when |
podLabel | Pod label, set by default to a random label to avoid caching issues. Set to a stable label if you want to reuse pods across builds. |
podContainers | Custom pod containers to use if the default, automatically configured container is not suitable for your use case (e.g. if you need multiple containers such as app and database). See Agent customization. |
podVolumes | Volumes to make available to the pod. |
podServiceAccount | Service account to use when running the pod. |
notifyNotGreen | Whether to send notifications if the build is not successful. Enabled by default. |
emailextRecipients | Notify this list of emails when |
branchToEnvironmentMapping | Define which branches are deployed to which environments, see Git Workflow / Branch to Environment Mapping. |
projectId | Project ID, e.g. |
componentId | Component ID, e.g. |
environmentLimit | Number of environments to allow when auto-cloning environments. |
dockerDir | The docker directory to use when building the image in OpenShift. Defaults to |
sonarQubeBranch | Please use option |
failOnSnykScanVulnerabilities | Deprecated in 3.x! Please use option |
openshiftBuildTimeout | Deprecated in 3.x! Please use option |
openshiftRolloutTimeout | Deprecated in 3.x! Please use option |
testResults | Configurable location for xunit test results, in case the build does not put them into |
commitGitWorkingTree | Defaults to false. If set to true, any changes in the working directory added with |
Pipeline Context
When you write custom stages inside the closure passed to odsComponentPipeline
, you have access to the context
, which is assembled for you on the master node. The context
can be influenced by changing the config map passed to odsComponentPipeline
, see Pipeline Options.
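For example, a custom stage might read properties from the context like this (a minimal sketch, using properties from the table below):
odsComponentPipeline(
  imageStreamTag: 'ods/jenkins-agent-golang:3.x',
  branchToEnvironmentMapping: ['master': 'dev']
) { context ->
  stage('Build') {
    // context properties are plain values assembled before the stages run
    echo "Building ${context.componentId} at ${context.shortGitCommit} for environment ${context.environment}"
  }
}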
The context
object contains the following properties:
Property | Description |
---|---|
jobName | Value of JOB_NAME. It is the name of the project of the build. |
buildNumber | Value of BUILD_NUMBER. The current build number, such as |
buildUrl | Value of BUILD_URL. The URL where the results of the build can be found (e.g. |
buildTime | Time of the build, collected when the odsComponentPipeline starts. |
credentialsId | Credentials identifier (credentials are created and named automatically by the OpenShift Jenkins plugin). |
tagversion | The tagversion is made up of the build number and the first 8 chars of the commit SHA. |
nexusUrl | Nexus URL - value taken from |
nexusUsername | Nexus username. |
nexusPassword | Nexus password. |
nexusUrlWithBasicAuth | Nexus URL, including username and password as BasicAuth. |
sonarQubeEdition | Edition of SonarQube in use, determined by |
environment | The environment which was chosen as the deployment target, e.g. |
targetProject | Target project, based on the environment, e.g. |
cdProject | CD project, e.g. |
groupId | Group ID, defaults to org.opendevstack. |
projectId | Project ID, e.g. |
componentId | Component ID, e.g. |
selector | Selector common to all resources of the component, defaults to |
gitUrl | Git URL of the repository. |
gitBranch | Git branch for which the build runs. |
gitCommit | Git commit SHA to build. |
shortGitCommit | Short Git commit SHA (first 8 chars) to build. |
gitCommitAuthor | Git commit author. |
gitCommitMessage | Git commit message (sanitized). |
gitCommitRawMessage | Git commit message (raw). |
gitCommitTime | Git commit time in RFC 3339 format. |
issueId | Jira issue ID if present in either commit message or branch name (e.g. |
openshiftHost | OpenShift host - value taken from |
odsSharedLibVersion | ODS Jenkins shared library version, taken from the reference in |
bitbucketUrl | Bitbucket URL - value taken from |
dockerDir | The docker directory to use when building the image in OpenShift. Defaults to |
Git Workflow / Branch to Environment Mapping
The shared library does not impose which Git workflow you use. Whether you use git-flow, GitHub flow or a custom workflow, it is possible to configure the pipeline according to your needs by configuring the pipeline option branchToEnvironmentMapping
. The setting could look like this:
branchToEnvironmentMapping: [ 'master': 'prod', 'develop': 'dev', 'hotfix/': 'hotfix', '*': 'review' ]
There are three ways to reference branches:
- Fixed name (e.g. master)
- Prefix (ending with a slash, e.g. hotfix/)
- Any branch (*)
Matches are made top-to-bottom. For prefixes / any branch, a more specific environment might be selected if:
- the branch contains a ticket ID and a corresponding env exists in OpenShift. E.g. for mapping "feature/": "dev" and branch feature/foo-123-bar, the env dev-123 is selected instead of dev if it exists.
- the branch name corresponds to an existing env in OpenShift. E.g. for mapping "release/": "rel" and branch release/1.0.0, the env rel-1.0.0 is selected instead of rel if it exists.
Examples
If you use git-flow, the following config fits well:
branchToEnvironmentMapping: [ 'master': 'prod', 'develop': 'dev', 'release/': 'rel', 'hotfix/': 'hotfix', '*': 'preview' ]
If you use GitHub Flow, the following config fits well:
branchToEnvironmentMapping: [ 'master': 'prod', '*': 'preview' ]
If you use a custom workflow, the config could look like this:
branchToEnvironmentMapping: [ 'production': 'prod', 'master': 'dev', 'staging': 'uat' ]
Advanced
Agent customization
The agent used in the pipeline can be customized by adjusting the image
(or imageStreamTag) to use. Further, alwaysPullImage (defaulting to true)
can be used to determine whether this image should be refreshed on each build.
Resource constraints of the container can be changed via resourceRequestCpu,
resourceLimitCpu, resourceRequestMemory and resourceLimitMemory.
The setting podVolumes allows you to mount persistent volume claims into the pod
(the value is passed to the podTemplate call as volumes).
To completely control the container(s) within the pod, set podContainers
(which is passed to the podTemplate call as containers).
Configuring a customized agent container in a Jenkinsfile:
odsComponentPipeline(
  branchToEnvironmentMapping: [:],
  podContainers: [
    containerTemplate(
      name: 'jnlp', // do not change, see https://github.com/jenkinsci/kubernetes-plugin#constraints
      image: "${env.DOCKER_REGISTRY}/foo-cd/jenkins-agent-custom",
      workingDir: '/tmp',
      resourceRequestCpu: '100m',
      resourceLimitCpu: '500m',
      resourceRequestMemory: '2Gi',
      resourceLimitMemory: '4Gi',
      alwaysPullImage: true,
      args: '${computer.jnlpmac} ${computer.name}'
    )
  ],
  ...
) { context ->
  stageBuild(context)
  ...
}
See the kubernetes-plugin documentation for possible configuration.
Git LFS (Git Large File Storage extension)
If you are working with large files (e.g. binary files, media files, files bigger than 5MB), follow these steps:
- Check this HOWTO about Git LFS
- Track your large files in your local clone, as explained in the previous step
- Enable Git LFS in your repository (in Bitbucket, you can enable it on the repository's settings main page)
NOTE: if you already have a repository with large files and want to migrate it to Git LFS, use:
git lfs migrate
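A minimal sketch of the workflow (the *.zip pattern is only an example; adjust it to your file types):
# one-time setup in your clone
git lfs install
# track large files by pattern; this records the pattern in .gitattributes
git lfs track "*.zip"
git add .gitattributes
git commit -m "Track zip files with Git LFS"

# for a repository that already contains large files, rewrite them into LFS objects
git lfs migrate import --include="*.zip"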
Deploying OpenShift resources from source code
By default, the component pipeline uses existing OpenShift resources, and just creates new images / deployments related to them. However, it is possible to control all OpenShift resources in code, following the infrastructure-as-code approach. This can be done by defining the resources as OpenShift templates in the directory openshift
of the repository, which will then get applied by Tailor when running the pipeline. The advantages of this approach are:
- All changes to OpenShift resources are traceable: who made the change and when?
- Moving your application between OpenShift projects or even clusters is trivial.
- Changes to your application code that require a change in configuration (e.g. a new environment variable) can be done together in one commit.
If you have an existing component for which you want to enable this feature, you simply need to run:
mkdir -p openshift
tailor -n foo-dev export -l app=foo-bar > openshift/template.yml
Commit the result and the component pipeline should show in the output whether there has been drift and how it was reconciled.
When using this approach, you need to keep a few things in mind:
- Any changes done in the OpenShift web console will effectively be reverted with each deploy. When you store templates in code, all changes must be applied to them.
- You can always preview the changes that will happen by running tailor diff from your local machine.
- DeploymentConfig resources allow you to specify config and image triggers (and ODS configures them by default like this). When deploying via Tailor, it is recommended to remove the image trigger, otherwise you might trigger two deployments: one when config (such as an environment variable) changes, and one when the image changes. When you remove the image trigger, it is crucial to add the internal registry to the image field, and to configure imagePullPolicy: Always for the container (otherwise you might roll out old images).
If you want to use encrypted secrets with Tailor, you have to create a keypair for Jenkins so that the pipeline can use it to decrypt the parameters. The easiest way to do this is to create an OpenShift secret named tailor-private-key
and sync it with Jenkins as a credential. Example:
tailor secrets generate-key jenkins@example.com
oc -n foo-cd create secret generic tailor-private-key --from-file=ssh-privatekey=private.key
oc -n foo-cd label secret tailor-private-key credential.sync.jenkins.openshift.io=true
Controlling your OpenShift resources in source code enables a lot of other use cases as well. For example, you might want to preview changes to a component before merging the source code. By using Tailor to deploy your templates, you can create multiple running components from one repository, e.g. one per feature branch. The following steps show how to achieve this:
First, add 'feature/': 'dev'
to the branchToEnvironmentMapping
. Then, create new variables in the pipeline block:
def componentSuffix = context.issueId ? "-${context.issueId}" : ''
def suffixedComponent = context.componentId + componentSuffix
With this in place, you can adapt the rollout stage:
odsComponentStageRolloutOpenShiftDeployment(
context,
[
tailorSelector: "app=${context.projectId}-${suffixedComponent}",
tailorParams: ["COMPONENT_SUFFIX=${componentSuffix}"]
]
)
And finally, in your openshift/template.yml
, you need to add the COMPONENT_SUFFIX
parameter and append ${COMPONENT_SUFFIX}
everywhere the component ID is used in deployment relevant resources (such as Service
, DeploymentConfig
, Route
). That’s all you need to have automatic previews!
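A minimal sketch of how openshift/template.yml might use the parameter (resource names and the contents of spec are illustrative, following the foo-bar example above):
apiVersion: template.openshift.io/v1
kind: Template
parameters:
- name: COMPONENT_SUFFIX
  value: ''
objects:
- kind: Service
  apiVersion: v1
  metadata:
    name: foo-bar${COMPONENT_SUFFIX}
  spec:
    ...
- kind: DeploymentConfig
  apiVersion: apps.openshift.io/v1
  metadata:
    name: foo-bar${COMPONENT_SUFFIX}
  spec:
    ...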
You might want to clean up when the code is merged, which can be achieved with something like this:
stage('Cleanup preview resources') {
if (context.environment != 'dev') {
echo "Not performing cleanup outside dev environment"; return
}
def mergedIssueId = org.ods.services.GitService.mergedIssueId(context.projectId, context.repoName, context.gitCommitRawMessage)
if (mergedIssueId) {
echo "Perform cleanup of suffix '-${mergedIssueId}'"
sh("oc -n ${context.targetProject} delete all -l app=${context.projectId}-${context.componentId}-${mergedIssueId}")
} else {
echo "Nothing to cleanup"
}
}
Interacting with Bitbucket
The shared library already sets the build status of the built commit. It also
provides convenience methods on BitbucketService
to interact with pull
requests:
- String createPullRequest(String repo, String fromRef, String toRef, String title, String description, List<String> reviewers) creates a pull request in repo from branch fromRef to toRef. reviewers is a list of Bitbucket user names.
- List<String> getDefaultReviewers(String repo) returns a list of Bitbucket user names (not display names) that are listed as the default reviewers of the given repo.
- String getDefaultReviewerConditions(String repo) returns all default reviewer conditions of the given repo, which can be parsed using readJSON.
- String getPullRequests(String repo, String state = 'OPEN') returns all open pull requests, which can be parsed using readJSON.
- Map findPullRequest(String repo, String branch, String state = 'OPEN') tries to find a pull request for the given branch, and returns a map with its ID and target branch.
- void postComment(String repo, int pullRequestId, String comment) allows adding comment to the PR identified by pullRequestId.
To make use of these methods, you need to get an instance of the BitbucketService
in your Jenkinsfile
like this:
import org.ods.services.ServiceRegistry
import org.ods.services.BitbucketService
def sayHello(def context) {
stage('Say Hello') {
def bitbucketService = ServiceRegistry.instance.get(BitbucketService)
bitbucketService.postComment(context.repoName, 1, "Hello world")
}
}
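Another sketch, built only from the method signatures listed above (branch names, title and description are illustrative):
def openPullRequest(def context) {
  stage('Open Pull Request') {
    def bitbucketService = ServiceRegistry.instance.get(BitbucketService)
    // reuse the repository's default reviewers for the new pull request
    def reviewers = bitbucketService.getDefaultReviewers(context.repoName)
    bitbucketService.createPullRequest(
      context.repoName,
      context.gitBranch,                  // fromRef
      'master',                           // toRef
      "Merge ${context.gitBranch}",
      'Opened automatically by the component pipeline.',
      reviewers
    )
  }
}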
Skipping pipeline runs
If the subject of the built commit message contains [ci skip]
, [skip ci]
or ***NO_CI***
, the pipeline is skipped.
# skip pipeline (one-line commit)
$ git commit -m "docs: update README [ci skip]"
# run pipeline (multi-line commit) as it is not part of the subject
$ git commit -m "docs: update README
- add section installation
- [ci skip]"
The Jenkins build status will be set to NOT_BUILT
, the Bitbucket build status to SUCCESSFUL
(as there is no "skipped" state). The pipeline will start to execute initially, but abort before launching any agent nodes or starting any of the stages defined in the Jenkinsfile
.
Stages
Each built-in stage (like odsComponentStageScanWithSonar
) takes two arguments:
- context (required, this is the pipeline context)
- config (optional, a map of configuration options)
Example:
odsComponentStageScanWithSonar(context, [branch: 'production'])
odsComponentFindOpenShiftImageOrElse
Checks if an image for the current commit exists already, otherwise executes the given closure.
Example:
odsComponentFindOpenShiftImageOrElse(context) {
stage('Build') {
// custom stage to build your application package
}
odsComponentStageBuildOpenShiftImage(context)
}
The step can be customized using the options resourceName
and imageTag
.
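For example (a sketch; the values shown are illustrative and likely correspond to the defaults):
odsComponentFindOpenShiftImageOrElse(context, [resourceName: context.componentId, imageTag: context.shortGitCommit]) {
  stage('Build') {
    // custom stage to build your application package
  }
  odsComponentStageBuildOpenShiftImage(context)
}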
Using this step in your Jenkinsfile
allows you to avoid building a container image for the same Git commit multiple times, reducing build times and increasing reliability as you can promote the exact same image from one environment to another.
Keep in mind that image lookup works by finding an image tagged with the current Git commit. If you merge a branch into another using a merge commit, the current Git commit SHA will differ from the previously built image tag, even if the actual contents of the repository are the same. To ensure image importing kicks in, use the --ff-only option on git merge
(this can also be enabled for pull requests in Bitbucket under "Merge strategies"). There are a few consequences when doing so, which should be kept in mind:
- No merge commit is created. This has the downside that you do not see when a PR was merged, and that you lose the merge commit as a convenient way to find the associated PR. Further, if the latest commit on a branch which you want to merge contains [ci skip], beware that the build on the target branch will also be skipped. That said, having no merge commit has the upside that your Git history is not polluted by merge commits.
- Enforcing a fast-forward merge prevents you from merging a branch which is not up-to-date with the target branch. This has the downside that before merging, you may need to rebase your branch or merge the target branch into your branch if someone else updated the target branch in the meantime. While this may cause extra work, it has the upside that you cannot accidentally break the target branch (e.g. tests on your branch may work based on the outdated target branch, but fail after the merge).
In summary, using git merge --ff-only provides safety, a clean history and allows you to promote the exact same image between environments.
odsComponentStageScanWithSonar
The "SonarQube Analysis" stage scans your source code and reports findings to
SonarQube. The configuration of the scan happens via the
sonar-project.properties
file in the repository being built.
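A minimal sketch of such a file (the keys are standard SonarQube scanner properties; the values are illustrative):
sonar.projectKey=foo-bar
sonar.projectName=foo-bar
sonar.sources=src
sonar.exclusions=**/*_test.go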
If your SonarQube server edition allows scanning multiple branches (any commercial edition does), then this stage will automatically decorate pull requests in Bitbucket with feedback from SonarQube (if the PR already exists at the time of the Jenkins pipeline run).
In debug mode, the sonar-scanner
binary is started with the -X
flag.
If no sonar.projectVersion
is specified in sonar-project.properties
, it is
set to the shortened Git SHA.
Options
Option | Description |
---|---|
analyzePullRequests | Whether to analyze pull requests and decorate them in Bitbucket. Turned on by default; however, a scan is only performed if the |
branch | Branch to scan. Example: |
branches | Branches to scan. Example: |
longLivedBranches | Branch(es) for which no PR analysis should be performed. If not set, it will be extracted from |
requireQualityGatePass | Whether to fail the build if the quality gate defined in the SonarQube project is not reached. Defaults to |
resourceName | Name of |
odsComponentStageScanWithAqua
The "Aqua Security Scan" stage scans an image that was previously built in that same pipeline run.
As a result, a Bitbucket Code Insight entry is added to the git commit (in Bitbucket) that basically contains a link to the scan result on the Aqua platform. The Bitbucket Code Insight entry can be seen in a pull request. The pull request in Bitbucket shows the Code Insight of the latest commit of the PR. If the Aqua scan detects remotely exploitable critical vulnerabilities for which a solution exists, the build fails until the solution is implemented.
To get started, make sure you have a ConfigMap
in the OpenDevStack project namespace (usually ods) in OpenShift that has these fields:
...
metadata:
  name: aqua
...
data:
  registry: <registry-name-in-aqua-platform>
  secretName: <secret-name-of-aqua-user-credentials>
  url: <aqua-platform-url>
  enabled: <true/false>
  nexusRepository: <name-repository-in-nexus-to-store-the-results>
  alertEmails: <emails-to-send-notifications>
- registry: Refers to a name for the image registry given in the Aqua platform by an Aqua platform admin.
- secretName: Name of a Secret that contains the credentials of the Aqua platform user that is used for executing the scan. That user needs to have scanner rights. This field is optional; if the property does not exist, the system will use the credential 'cd-user-with-password'.
- url: Base URL of the Aqua platform (including scheme).
- enabled: If true, the scan always occurs in all projects. Set to false to disable the scan.
- nexusRepository: Name of the repository in the Nexus instance to store the results of the analysis in HTML format.
- alertEmails: Optional field. It contains the emails, separated by ',', to which error notifications regarding the Aqua analysis (misconfigurations, etc.) are sent. The mail server must be configured in Jenkins to send the emails.
It is possible to disable the analysis at project level. For that, it is necessary to add new properties to the ConfigMap, e.g. like this:
...
metadata:
  name: aqua
...
data:
  registry: <registry-name-in-aqua-platform>
  secretName: <secret-name-of-aqua-user-credentials>
  url: <aqua-platform-url>
  enabled: <true/false>
  nexusRepository: <name-repository-in-nexus-to-store-the-results>
  alertEmails: <emails-to-send-notifications>
  project.key1.enabled: <false>
  project.key2.enabled: <false>
- project.key1.enabled: Property to indicate that key1 (key1 being the key of the project) has the Aqua analysis disabled.
- project.key2.enabled: The same, but for the key2 project.
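Assuming the stage is invoked like the other built-in stages (see Stages), adding it to a Jenkinsfile could look like this (a sketch):
) { context ->
  ...
  odsComponentStageScanWithAqua(context)
  ...
}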
odsComponentStageScanWithTrivy
The "Trivy Security Scan" stage scans the filesystem of the cloned repository using Trivy and generates a SBOM report, with CycloneDX format by default. Check Trivy supported formats here.
As a result, a Bitbucket Code Insight entry is added to the git commit (in Bitbucket) that basically contains a link to the scan report stored in Nexus. The Bitbucket Code Insight entry can be seen in a pull request. The pull request in Bitbucket shows the Code Insight of the latest commit of the PR.
To get started, edit your Jenkinsfile
and add the Trivy stage:
) { context ->
  ...
  odsComponentStageScanWithTrivy(context)
  ...
}
Options
Option | Description |
---|---|
additionalFlags | Additional flags for the Trivy CLI. Please refer to the official Trivy CLI reference for possible options and don’t forget to take the CLI version of your ODS installation into account. The value of |
branch | Branch to run stage for. Example: |
branches | Branches to run stage for. Example: |
format | Set the format for the generated report. Defaults to |
nexusDataBaseRepository | Name of the Nexus repository used to proxy the vulnerability database hosted on GitHub. Defaults to |
nexusReportRepository | Name of the Nexus repository where the scan report will be stored. Defaults to |
pkgType | Comma-separated list of vulnerability types to scan. Defaults to |
reportFile | Name of the file that will be archived in Jenkins and uploaded to Nexus. Defaults to |
resourceName | Name of the component to scan. Defaults to |
scanners | Comma-separated list of security issues to detect. Defaults to |
odsComponentStageScanWithSnyk
The "Snyk Security Scan" stage performs two tasks:
- It uploads your 3rd party dependencies including their licenses for monitoring. Snyk will then notify developers about new vulnerabilities per email once they are reported to the Snyk Vulnerability Database.
- It analyses your 3rd party dependencies including their licenses and breaks the build if vulnerable versions are found.
To get started, set up an organisation in snyk.io with exactly the same name as your ODS project name. Under "Settings", create a service account for this organisation and make a note of the displayed token. Edit your Jenkinsfile
and add the Snyk stage:
) { context ->
  ...
  odsComponentStageScanWithSnyk(context, [snykAuthenticationCode: <your token>])
  ...
}
It is recommended to read your authentication token dynamically, e.g. from an environment variable or a credential in your Jenkins master.
Options
Option | Description |
---|---|
additionalFlags | Additional flags for the Snyk CLI. Please refer to the official Snyk CLI reference for possible options and don’t forget to take the CLI version of your ODS installation into account. The value of |
branch | Branch to run stage for. Example: |
branches | Branches to run stage for. Example: |
buildFile | Build file from which to gather dependency information. Defaults to |
failOnVulnerabilities | Whether to fail the build when vulnerabilities are found. Defaults to |
organisation | Name of the Snyk organisation. Defaults to |
projectName | Name of the Snyk project. Defaults to |
severityThreshold | Severity threshold for failing. If any found vulnerability has a severity equal to or higher than the threshold, the snyk test will return with a failure status. Possible values are |
snykAuthenticationCode | Required! Authentication token of a service account within your organisation. |
odsComponentStageBuildOpenShiftImage
Triggers (and follows) a build in the BuildConfig
related to the repository
being built.
The resulting image is tagged with context.shortGitCommit
.
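For example, build arguments and image labels can be passed like this (a sketch; the keys and values are illustrative):
odsComponentStageBuildOpenShiftImage(
  context,
  [
    buildArgs: [nexusUrlWithBasicAuth: context.nexusUrlWithBasicAuth],
    imageLabels: [maintainer: 'team-foo@example.com']
  ]
)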
Options
Option | Description |
---|---|
branch | Branch to run stage for. Example: |
branches | Branches to run stage for. Example: |
buildArgs | Pass build arguments to the image build process. |
buildTimeoutMinutes | Timeout of build (defaults to 15 minutes). |
buildTimeoutRetries | Adjust retries to wait for the build pod status (defaults to 5). |
dockerDir | Docker context directory (defaults to |
extensionImageLabels | Extra image labels added into |
imageLabels | Pass labels which should be added on the image. Each label will be prefixed with |
imageTag | Image tag to apply (defaults to |
resourceName | Name of |
odsComponentStageImportOpenShiftImage
Imports an image from another namespace.
By default, the source image is identified using the commit which triggered the pipeline run.
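For example, promoting an image from the dev namespace into the current target namespace could look like this (a sketch; the source project name is illustrative):
odsComponentStageImportOpenShiftImage(
  context,
  [sourceProject: "${context.projectId}-dev"]
)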
Options
Option | Description |
---|---|
branch | Branch to run stage for. Example: |
branches | Branches to run stage for. Example: |
imagePullerSecret | Name of image-puller secret (optional, used when pulling images from an external source cluster). |
resourceName | Name of |
sourceProject | OpenShift project from which to import the image identified by |
sourceTag | Image tag to look for in the |
targetTag | Image tag to apply to the imported image in the target project (defaults to |
odsComponentStageRolloutOpenShiftDeployment
Rolls out the current resources as defined in the component.
Without any configuration, the stage tries to guess what a user expects.
If the component contains a directory named chart, a Helm deployment is assumed.
If the component contains a directory named openshift, a Tailor deployment is assumed.
If neither exists, a Tailor deployment is assumed.
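For example, timeouts can be adjusted like this (a sketch; the values are illustrative):
odsComponentStageRolloutOpenShiftDeployment(
  context,
  [deployTimeoutMinutes: 10, deployTimeoutRetries: 10]
)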
Helm
Triggers a release or an update of a release with Helm.
The stage will use the helm
command to trigger the release.
The command will be executed in the directory referenced by chartDir
.
If the directory does not exist, the stage will fail.
The images used in the deployment will not be tagged or otherwise modified.
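# preview the changes of the upgrade (diff only; errors are tolerated)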
HELM_DIFF_IGNORE_UNKNOWN_FLAGS=true helm -n play-dev secrets diff upgrade \
--install --atomic --force \
-f values.yaml \
--set registry=registry.example.com \
--set componentId=example-helm-chart \
--set imageNamespace=example-dev \
--set imageTag=deadbeef69cafebabe \
--no-color --three-way-merge --normalize-manifests \
example-release . || true
# run the upgrade
helm -n play-dev secrets upgrade \
--install --atomic --force \
-f values.yaml \
--set registry=registry.example.com \
--set componentId=example-helm-chart \
--set imageNamespace=play-dev \
--set imageTag=deadbeef69cafebabe \
example-release .
Tailor
Triggers (and follows) a rollout of the DeploymentConfig
related to the repository
being built.
It achieves this by tagging the image built in odsComponentStageBuildOpenShiftImage
with latest
.
This might already trigger a rollout based on an existing ImageTrigger
.
If none is set, the stage will start a manual rollout.
If the directory referenced by openshiftDir
exists, the templates in there will be applied using Tailor.
In this case, it is recommended to remove any image triggers to avoid duplicate rollouts
(one when configuration changes due to a config trigger and one when the image is tagged to latest
).
In addition to the configuration options below, one can use e.g. a Tailorfile
to adjust the behaviour of Tailor as needed.
Options
Option | Description |
---|---|
branch | Branch to run stage for. Example: |
branches | Branches to run stage for. Example: |
chartDir | Directory of Helm chart (defaults to |
deployTimeoutMinutes | Adjust timeout of rollout (defaults to 5 minutes). Caution: this needs to be aligned with the deployment strategy timeout (timeoutSeconds) and the readiness probe timeouts (initialDelaySeconds + failureThreshold * periodSeconds). |
deployTimeoutRetries | Adjust retries to wait for the pod during a rollout (defaults to 5). |
helmAdditionalFlags | List of additional flags to be passed verbatim to |
helmDefaultFlags | List of default flags to be passed verbatim to |
helmDiff | Whether to show a diff explaining changes to the release before running |
helmEnvBasedValuesFiles | List of paths to values files (empty by default). Only relevant if the chart directory exists. Passing a string literal of 'values.env.yaml' will be expanded to the respective environments; e.g. 'values.env.yaml' becomes 'values.dev.yaml', 'values.test.yaml' or 'values.prod.yaml'. This means the usual files named after their respective environment are parsed as usual. |
helmPrivateKeyCredentialsId | Credentials name of the private key used by helm-secrets (defaults to |
helmReleaseName | Name of the Helm release (defaults to |
helmValues | Key/value pairs to pass as values (by default, the key |
helmValuesFiles | List of paths to values files (empty by default). Only relevant if the directory referenced by |
imageTag | Image tag on which to apply the |
openshiftDir | Directory with OpenShift templates (defaults to |
selector | Selector scope used to determine which resources are part of a component (defaults to |
tailorExclude | Resource kind exclusion used by Tailor (defaults to |
tailorParamFile | Path to Tailor parameter file (defaults to none). Only relevant if the directory referenced by |
tailorParams | Additional parameters to pass to Tailor (defaults to |
tailorPreserve | Paths to preserve in the live configuration (defaults to |
tailorPrivateKeyCredentialsId | Credentials name of the private key used by Tailor (defaults to |
tailorSelector | Selector scope used by Tailor (defaults to config option |
tailorVerify | Whether Tailor verifies the live configuration against the desired state after application (defaults to |
Notable differences between Tailor and Helm deployments
When Tailor does the rollout, all the created or updated OpenShift resources are automatically labeled to ease their management. This is in contrast to Helm rollouts, which rely on the chart providing the desired labels. Add labels either via the chart directly or by supplying them in the values or values files.
Detailed information about the labelling can be found here.
odsComponentStageUploadToNexus
Triggers the upload of an artifact to Nexus. Implementation is based on https://help.sonatype.com/repomanager3/rest-and-integration-api/components-api
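For example (a sketch; the repository name, coordinates and file path are illustrative):
odsComponentStageUploadToNexus(
  context,
  [
    repository: 'candidates',
    groupId: 'org.opendevstack.foo',
    version: '1.0.0',
    distributionFile: 'build/distributions/foo-bar-1.0.0.tar.gz'
  ]
)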
Options
Option | Description |
---|---|
artifactId | For |
branch | Branch to run stage for. Example: |
branches | Branches to run stage for. Example: |
distributionFile | Filename. Defaults to |
groupId | For |
repository | Name of the Nexus repository. Defaults to |
repositoryType | Type of the Nexus repository. Defaults to |
targetDirectory | For |
version | For |
odsComponentStageCopyImage
Copies a source image into the project.
This is useful to get images into the OpenShift registry so that the release manager will accept all images.
The primary intention is for Helm charts, so that external images can be imported.
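For example (a sketch; the source image is illustrative):
odsComponentStageCopyImage(
  context,
  [
    sourceImageUrlIncludingRegistry: 'docker.io/library/postgres:14',
    tagIntoTargetEnv: true
  ]
)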
Options
Option | Description |
---|---|
branch | Branch to run stage for. Example: |
branches | Branches to run stage for. Example: |
insecurePolicy | Turns on the insecure policy with skopeo. |
preserveDigests | Allows syncing the source and destination image digests. The default is false; set to true to preserve digests. |
sourceCredential | The credential to use, if any, to access the source registry. |
sourceImageUrlIncludingRegistry | Source image to import. This needs to be in the following format: [REGISTRY/]REPO/IMAGE[:TAG] |
tagIntoTargetEnv | If true, the image is tagged from the -cd namespace into the target environment that the pipeline is running for. |
targetRegistry | Target registry URL; if not set, the current internal route will be used. |
targetToken | The bearer token to use, if any, to access the target registry. |
verifyTLS | Whether to verify TLS certificates; when disabled, certificate validation errors are ignored. The default is to verify certificate paths. |