Component Pipeline
This pipeline allows you to keep a minimal Jenkinsfile in each repository by providing all language-agnostic build aspects. The goal is to duplicate as little as possible between repositories and to have an easy way to ship updates to all projects.
Usage
Load the shared library in your Jenkinsfile like this:
```groovy
@Library('ods-jenkins-shared-library@3.x') _

odsComponentPipeline(
  imageStreamTag: 'ods/jenkins-agent-golang:3.x',
  branchToEnvironmentMapping: [
    'master': 'dev',
    // 'release/*': 'test'
  ]
) { context ->
  odsComponentStageImportOpenShiftImageOrElse(context) {
    stage('Build') {
      // custom stage
    }
    odsComponentStageScanWithSonar(context)
    odsComponentStageBuildOpenShiftImage(context)
  }
  odsComponentStageRolloutOpenShiftDeployment(context)
}
```
The version in `@Library` can be any Git revision, such as a branch (e.g. `master` or `2.x`), a tag (e.g. `v2.0`) or even a specific commit.
There are many built-in stages like `odsComponentStageScanWithSonar` that you can use; please see Stages for more details.
Pipeline Options
`odsComponentPipeline` can be customized by passing configuration options like this:

```groovy
odsComponentPipeline(
  imageStreamTag: 'ods/jenkins-agent-golang:3.x',
  dockerDir: 'foo'
)
```
Available options are:
Property | Description
---|---
`image` | Container image to use for the Jenkins agent container. This value is not used when `podContainers` is set.
`imageStreamTag` | Container image tag of an `ImageStream` to use for the Jenkins agent container. This value is not used when `podContainers` or `image` is set.
`alwaysPullImage` | Determine whether to always pull the container image before each build run. Defaults to `true`.
`resourceRequestMemory` | How much memory the container requests. Defaults to 1Gi. This value is not used when `podContainers` is set.
`resourceLimitMemory` | Maximum memory the container can use. Defaults to 2Gi. This value is not used when `podContainers` is set.
`resourceRequestCpu` | How much CPU the container requests. Defaults to 10m. This value is not used when `podContainers` is set.
`resourceLimitCpu` | Maximum CPU the container can use. Defaults to 300m. This value is not used when `podContainers` is set.
`podLabel` | Pod label, set by default to a random label to avoid caching issues. Set to a stable label if you want to reuse pods across builds.
`podContainers` | Custom pod containers to use if the default, automatically configured container is not suitable for your use case (e.g. if you need multiple containers such as app and database). See Agent customization.
`podVolumes` | Volumes to make available to the pod.
`podServiceAccount` | Service account to use when running the pod.
`notifyNotGreen` | Whether to send notifications if the build is not successful. Enabled by default.
`emailextRecipients` | List of email addresses to notify when `notifyNotGreen` is enabled and the build is not successful.
`branchToEnvironmentMapping` | Define which branches are deployed to which environments, see Git Workflow / Branch to Environment Mapping.
`autoCloneEnvironmentsFromSourceMapping` | Define which environments are cloned from which source environments.
`projectId` | Project ID, e.g. `foo`.
`componentId` | Component ID, e.g. `be-auth-service`.
`environmentLimit` | Number of environments to allow when auto-cloning environments.
`dockerDir` | The docker directory to use when building the image in OpenShift. Defaults to `docker`.
`imagePromotionSequences` | Sequence of environments between which images can be promoted in `odsComponentStageImportOpenShiftImageOrElse`. Defaults to `['dev→test', 'test→prod']`.
`sonarQubeBranch` | Please use option `branch` of `odsComponentStageScanWithSonar`.
`failOnSnykScanVulnerabilities` | Deprecated in 3.x! Please use option `failOnVulnerabilities` of `odsComponentStageScanWithSnyk`.
`openshiftBuildTimeout` | Deprecated in 3.x! Please use option `buildTimeoutMinutes` of `odsComponentStageBuildOpenShiftImage`.
`openshiftRolloutTimeout` | Deprecated in 3.x! Please use option `deployTimeoutMinutes` of `odsComponentStageRolloutOpenShiftDeployment`.
`testResults` | Configurable location for xunit test results, in case the build does not put them into the default location.
Pipeline Context
When you write custom stages inside the closure passed to `odsComponentPipeline`, you have access to the `context`, which is assembled for you on the master node. The `context` can be influenced by changing the config map passed to `odsComponentPipeline`, see Pipeline Options.
The `context` object contains the following properties:
Property | Description
---|---
`jobName` | Value of `JOB_NAME`. It is the name of the project of the build.
`buildNumber` | Value of `BUILD_NUMBER`. The current build number.
`buildUrl` | Value of `BUILD_URL`. The URL where the results of the build can be found.
`buildTime` | Time of the build, collected when `odsComponentPipeline` starts.
`credentialsId` | Credentials identifier (credentials are created and named automatically by the OpenShift Jenkins plugin).
`tagversion` | The tagversion is made up of the build number and the first 8 chars of the commit SHA.
`nexusUrl` | Nexus URL (taken from the environment).
`nexusUsername` | Nexus username.
`nexusPassword` | Nexus password.
`nexusUrlWithBasicAuth` | Nexus URL, including username and password as BasicAuth.
`sonarQubeEdition` | Edition of SonarQube in use.
`cloneSourceEnv` | The environment which was chosen as the clone source.
`environment` | The environment which was chosen as the deployment target, e.g. `dev`.
`targetProject` | Target project, based on the environment, e.g. `foo-dev`.
`groupId` | Group ID, defaults to `org.opendevstack.<projectId>`.
`projectId` | Project ID, e.g. `foo`.
`componentId` | Component ID, e.g. `be-auth-service`.
`gitUrl` | Git URL of the repository.
`gitBranch` | Git branch for which the build runs.
`gitCommit` | Git commit SHA to build.
`shortGitCommit` | Short Git commit SHA (first 8 chars) to build.
`gitCommitAuthor` | Git commit author.
`gitCommitMessage` | Git commit message (sanitized).
`gitCommitRawMessage` | Git commit message (raw).
`gitCommitTime` | Git commit time in RFC 3339 format.
`issueId` | Jira issue ID, if present in the branch name (e.g. `123` for branch `feature/foo-123-bar`).
`openshiftHost` | OpenShift host (taken from the environment).
`odsSharedLibVersion` | ODS Jenkins shared library version, taken from the `@Library` reference in the `Jenkinsfile`.
`bitbucketUrl` | Bitbucket URL (taken from the environment).
`dockerDir` | The docker directory to use when building the image in OpenShift. Defaults to `docker`.
`imagePromotionSequences` | Sequence of environments between which images can be promoted. Used e.g. in `odsComponentStageImportOpenShiftImageOrElse`.
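Custom stages can read these properties directly. A minimal sketch (the stage name and the echo format are arbitrary):

```groovy
odsComponentPipeline(
  imageStreamTag: 'ods/jenkins-agent-golang:3.x',
  branchToEnvironmentMapping: ['master': 'dev']
) { context ->
  stage('Show context') {
    echo "Building ${context.componentId}@${context.shortGitCommit} " +
      "for environment '${context.environment}' (target project: ${context.targetProject})"
  }
}
```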
Git Workflow / Branch to Environment Mapping
The shared library does not impose which Git workflow you use. Whether you use git-flow, GitHub flow or a custom workflow, it is possible to configure the pipeline according to your needs via the pipeline option `branchToEnvironmentMapping`. The setting could look like this:

```groovy
branchToEnvironmentMapping: [
  'master': 'prod',
  'develop': 'dev',
  'hotfix/': 'hotfix',
  '*': 'review'
]
```
There are three ways to reference branches:

- Fixed name (e.g. `master`)
- Prefix, ending with a slash (e.g. `hotfix/`)
- Any branch (`*`)
Matches are made top-to-bottom. For prefixes and the any-branch wildcard, a more specific environment might be selected if:

- the branch contains a ticket ID and a corresponding environment exists in OpenShift. E.g. for mapping `"feature/": "dev"` and branch `feature/foo-123-bar`, the environment `dev-123` is selected instead of `dev` if it exists.
- the branch name corresponds to an existing environment in OpenShift. E.g. for mapping `"release/": "rel"` and branch `release/1.0.0`, the environment `rel-1.0.0` is selected instead of `rel` if it exists.
Examples
If you use git-flow, the following config fits well:

```groovy
branchToEnvironmentMapping: [
  'master': 'prod',
  'develop': 'dev',
  'release/': 'rel',
  'hotfix/': 'hotfix',
  '*': 'preview'
]
```

If you use GitHub Flow, the following config fits well:

```groovy
branchToEnvironmentMapping: [
  'master': 'prod',
  '*': 'preview'
]
```

If you use a custom workflow, the config could look like this:

```groovy
branchToEnvironmentMapping: [
  'production': 'prod',
  'master': 'dev',
  'staging': 'uat'
]
```
Advanced
Agent customization
The agent used in the pipeline can be customized by adjusting the `image` (or `imageStreamTag`) to use. Further, `alwaysPullImage` (defaulting to `true`) can be used to determine whether this image should be refreshed on each build.
Resource constraints of the container can be changed via `resourceRequestCpu`, `resourceLimitCpu`, `resourceRequestMemory` and `resourceLimitMemory`.
The setting `podVolumes` allows you to mount persistent volume claims to the pod (the value is passed to the `podTemplate` call as `volumes`).
To completely control the container(s) within the pod, set `podContainers` (which is passed to the `podTemplate` call as `containers`).
Configuration of a customized agent container in a `Jenkinsfile`:
```groovy
node {
  dockerRegistry = env.DOCKER_REGISTRY
}

// ...

odsComponentPipeline(
  branchToEnvironmentMapping: [:],
  podContainers: [
    containerTemplate(
      name: 'jnlp', // do not change, see https://github.com/jenkinsci/kubernetes-plugin#constraints
      image: "${dockerRegistry}/foo-cd/jenkins-agent-custom",
      workingDir: '/tmp',
      resourceRequestCpu: '100m',
      resourceLimitCpu: '500m',
      resourceRequestMemory: '2Gi',
      resourceLimitMemory: '4Gi',
      alwaysPullImage: true,
      args: '${computer.jnlpmac} ${computer.name}'
    )
  ],
  ...
) { context ->
  stageBuild(context)
  ...
}
```
See the kubernetes-plugin documentation for possible configuration.
Git LFS (Git Large File Storage extension)
If you are working with large files (e.g. binary files, media files, files bigger than 5MB…), you can follow these steps:

- Check this HOWTO about Git LFS
- Track your large files in your local clone, as explained in the previous step
- Enable Git LFS in your repository (in Bitbucket, you can enable it on the repository's settings main page)

NOTE: If you already have a repository with large files and want to migrate it to Git LFS:

```shell
git lfs migrate
```
Deploying OpenShift resources from source code
By default, the component pipeline uses existing OpenShift resources, and just creates new images / deployments related to them. However, it is possible to control all OpenShift resources in code, following the infrastructure-as-code approach. This can be done by defining the resources as OpenShift templates in the `openshift` directory of the repository, which will then get applied by Tailor when running the pipeline. The advantages of this approach are:

- All changes to OpenShift resources are traceable: who made the change and when?
- Moving your application between OpenShift projects or even clusters is trivial.
- Changes to your application code that require a change in configuration (e.g. a new environment variable) can be done together in one commit.

If you have an existing component for which you want to enable this feature, you simply need to run:

```shell
mkdir -p openshift
tailor -n foo-dev export -l app=foo-bar > openshift/template.yml
```

Commit the result and the component pipeline should show in the output whether there has been drift and how it was reconciled.
When using this approach, you need to keep a few things in mind:

- Any changes done in the OpenShift web console will effectively be reverted with each deploy. When you store templates in code, all changes must be applied to them.
- You can always preview the changes that will happen by running `tailor diff` from your local machine.
- `DeploymentConfig` resources allow you to specify config and image triggers (and ODS configures them like this by default). When deploying via Tailor, it is recommended to remove the image trigger, otherwise you might trigger two deployments: one when config (such as an environment variable) changes, and one when the image changes. When you remove the image trigger, it is crucial to add the internal registry to the `image` field and to configure `imagePullPolicy: Always` for the container (otherwise you might roll out old images).
If you want to use encrypted secrets with Tailor, you have to create a keypair for Jenkins so that the pipeline can use it to decrypt the parameters. The easiest way to do this is to create an OpenShift secret named `tailor-private-key` and sync it with Jenkins as a credential. Example:

```shell
tailor secrets generate-key jenkins@example.com
oc -n foo-cd create secret generic tailor-private-key --from-file=ssh-privatekey=private.key
oc -n foo-cd label secret tailor-private-key credential.sync.jenkins.openshift.io=true
```
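If the pipeline does not pick up the key automatically, you can point the Tailor-applying stages at the credential explicitly. A sketch, in which the credential name `foo-cd-tailor-private-key` is an assumption based on the secret created above:

```groovy
odsComponentStageRolloutOpenShiftDeployment(
  context,
  [tailorPrivateKeyCredentialsId: 'foo-cd-tailor-private-key']
)
```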
Controlling your OpenShift resources in source code enables a lot of other use cases as well. For example, you might want to preview changes to a component before merging the source code. By using Tailor to deploy your templates, you can create multiple running components from one repository, e.g. one per feature branch. The following steps show how to achieve this:
First, add `'feature/': 'dev'` to the `branchToEnvironmentMapping`. Then, create new variables in the pipeline block:
```groovy
def componentSuffix = context.issueId ? "-${context.issueId}" : ''
def suffixedComponent = context.componentId + componentSuffix
```
With this in place, you can adapt the rollout stage:
```groovy
odsComponentStageRolloutOpenShiftDeployment(
  context,
  [
    resourceName: "${suffixedComponent}",
    tailorSelector: "app=${context.projectId}-${suffixedComponent}",
    tailorParams: ["COMPONENT_SUFFIX=${componentSuffix}"]
  ]
)
```
And finally, in your `openshift/template.yml`, you need to add the `COMPONENT_SUFFIX` parameter and append `${COMPONENT_SUFFIX}` everywhere the component ID is used in deployment-relevant resources (such as `Service`, `DeploymentConfig`, `Route`). That's all you need for automatic previews!
You might want to clean up when the code is merged, which can be achieved with something like this:
```groovy
stage('Cleanup preview resources') {
  if (context.environment != 'dev') {
    echo "Not performing cleanup outside dev environment"; return
  }
  def mergedIssueId = org.ods.services.GitService.mergedIssueId(context.projectId, context.repoName, context.gitCommitRawMessage)
  if (mergedIssueId) {
    echo "Perform cleanup of suffix '-${mergedIssueId}'"
    sh("oc -n ${context.targetProject} delete all -l app=${context.projectId}-${context.componentId}-${mergedIssueId}")
  } else {
    echo "Nothing to cleanup"
  }
}
```
Interacting with Bitbucket
The shared library already sets the build status of the built commit. It also provides three convenience methods on `BitbucketService` to interact with pull requests:
- `String getPullRequests(String repo, String state = 'OPEN')` returns all open pull requests, which can be parsed using `readJSON`.
- `Map findPullRequest(String repo, String branch, String state = 'OPEN')` tries to find a pull request for the given `branch`, and returns a map with its ID and target branch.
- `void postComment(String repo, int pullRequestId, String comment)` adds `comment` to the PR identified by `pullRequestId`.
To make use of these methods, you need to get an instance of the `BitbucketService` in your `Jenkinsfile` like this:

```groovy
import org.ods.services.ServiceRegistry
import org.ods.services.BitbucketService

def sayHello(def context) {
  stage('Say Hello') {
    def bitbucketService = ServiceRegistry.instance.get(BitbucketService)
    bitbucketService.postComment(context.repoName, 1, "Hello world")
  }
}
```
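The methods can also be combined, e.g. to list open pull requests. This sketch assumes the Bitbucket Server REST response shape (a `values` array whose entries carry `id` and `title`) and uses `readJSON` from the Pipeline Utility Steps plugin:

```groovy
import org.ods.services.ServiceRegistry
import org.ods.services.BitbucketService

def listOpenPullRequests(def context) {
  stage('List open PRs') {
    def bitbucketService = ServiceRegistry.instance.get(BitbucketService)
    // getPullRequests returns the raw JSON response as a String
    def response = bitbucketService.getPullRequests(context.repoName)
    readJSON(text: response).values.each { pr ->
      echo "PR #${pr.id}: ${pr.title}"
    }
  }
}
```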
Skipping pipeline runs
If the message of the built commit contains `[ci skip]`, the pipeline is skipped. The Jenkins build status will be set to `NOT_BUILT`, and the Bitbucket build status to `SUCCESSFUL` (as there is no "skipped" state). The pipeline will start to execute initially, but abort before launching any agent nodes or starting any of the stages defined in the `Jenkinsfile`.
Automatically cloning environments on the fly
Caution! Cloning environments on-the-fly is an advanced feature and should only be used if you understand OpenShift well, as there are many moving parts and things can go wrong in multiple places.
Example:
```groovy
autoCloneEnvironmentsFromSourceMapping: [
  "hotfix": "prod",
  "review": "dev"
]
```
Instead of deploying multiple branches to the same environment, individual environments can be created on the fly. For example, the mapping `"*": "review"` deploys all branches to the `review` environment. To have one environment per branch / ticket ID, you can add the `review` environment to `autoCloneEnvironmentsFromSourceMapping`, e.g. like this: `"review": "dev"`. This will create individual environments (named e.g. `review-123` or `review-foobar`), each cloned from the `dev` environment.
Examples
If you use git-flow, the following config fits well:

```groovy
branchToEnvironmentMapping: [
  'master': 'prod',
  'develop': 'dev',
  'release/': 'rel',
  'hotfix/': 'hotfix',
  '*': 'preview'
]
autoCloneEnvironmentsFromSourceMapping: [
  'rel': 'dev',
  'hotfix': 'prod',
  'preview': 'dev'
]
```

If you use GitHub Flow, the following config fits well:

```groovy
branchToEnvironmentMapping: [
  'master': 'prod',
  '*': 'preview'
]
autoCloneEnvironmentsFromSourceMapping: [
  'preview': 'prod'
]
```

If you use a custom workflow, the config could look like this:

```groovy
branchToEnvironmentMapping: [
  'production': 'prod',
  'master': 'dev',
  'staging': 'uat'
]
autoCloneEnvironmentsFromSourceMapping: [
  'uat': 'prod'
]
```
Stages
Each built-in stage (like `odsComponentStageScanWithSonar`) takes two arguments:

- `context` (required, this is the pipeline context)
- `config` (optional, a map of configuration options)

Example:

```groovy
odsComponentStageScanWithSonar(context, [branch: 'production'])
```
odsComponentStageScanWithSonar
The "SonarQube Analysis" stage scans your source code and reports findings to SonarQube. The configuration of the scan happens via the `sonar-project.properties` file in the repository being built.
If your SonarQube server edition allows scanning multiple branches (any commercial edition does), this stage will automatically decorate pull requests in Bitbucket with feedback from SonarQube (if the PR already exists at the time of the Jenkins pipeline run).
In debug mode, the `sonar-scanner` binary is started with the `-X` flag.
If no `sonar.projectVersion` is specified in `sonar-project.properties`, it is set to the shortened Git SHA.
Available options:
Option | Description
---|---
`branch` | Branch(es) to scan. This can be a comma-separated list. Next to exact matches, it also supports prefixes (e.g. `release/`).
`requireQualityGatePass` | Whether to fail the build if the quality gate defined in the SonarQube project is not reached. Defaults to `false`.
`analyzePullRequests` | Whether to analyze pull requests and decorate them in Bitbucket. Turned on by default; however, a scan is only performed if the branch is not one of the `longLivedBranches`.
`longLivedBranches` | Branch(es) for which no PR analysis should be performed. If not set, it is extracted from `branchToEnvironmentMapping`.
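For example, to scan release branches in addition to `master` and enforce the quality gate, the stage could be configured like this (a sketch using the options above):

```groovy
odsComponentStageScanWithSonar(
  context,
  [
    branch: 'master,release/',
    requireQualityGatePass: true
  ]
)
```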
odsComponentStageScanWithSnyk
The "Snyk Security Scan" stage performs two tasks:

- It uploads your 3rd-party dependencies, including their licenses, for monitoring. Snyk will then notify developers by email about new vulnerabilities once they are reported to the Snyk Vulnerability Database.
- It analyses your 3rd-party dependencies, including their licenses, and breaks the build if vulnerable versions are found.

To get started, set up an organisation in snyk.io with exactly the same name as your ODS project name. Under "Settings", create a service account for this organisation and make a note of the displayed token. Edit your `Jenkinsfile` and add the Snyk stage:

```groovy
) { context ->
  ...
  odsComponentStageScanWithSnyk(context, [snykAuthenticationCode: <your token>])
  ...
}
```

It is recommended to read your authentication token dynamically, e.g. from an environment variable or a credential in your Jenkins master.
Available options:
Option | Description
---|---
`snykAuthenticationCode` | Required! Authentication token of a service account within your organisation.
`failOnVulnerabilities` | Whether to fail the build when vulnerabilities are found. Defaults to `true`.
`organisation` | Name of the Snyk organisation. Defaults to the project ID.
`projectName` | Name of the Snyk project. Defaults to the component ID.
`buildFile` | Build file from which to gather dependency information. Defaults to `build.gradle`.
`branch` | Branch(es) to scan. This can be a comma-separated list. Next to exact matches, it also supports prefixes (e.g. `release/`).
`severityThreshold` | Severity threshold for failing: if any found vulnerability has a severity equal to or higher than the threshold, the Snyk test returns with a failure status. Possible values are `low`, `medium`, `high`.
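Following the advice above to avoid hard-coding the token, the stage can read it from a Jenkins credential (a sketch; the credential ID `snyk-auth-token` is an assumption):

```groovy
// Assumes a "Secret text" credential named 'snyk-auth-token' exists in Jenkins
withCredentials([string(credentialsId: 'snyk-auth-token', variable: 'SNYK_TOKEN')]) {
  odsComponentStageScanWithSnyk(
    context,
    [snykAuthenticationCode: env.SNYK_TOKEN, severityThreshold: 'high']
  )
}
```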
odsComponentStageBuildOpenShiftImage
Triggers (and follows) a build in the `BuildConfig` related to the repository being built. The resulting image is tagged with `context.shortGitCommit`.
If the directory referenced by `openshiftDir` exists, the templates in there will be applied using Tailor. In addition to the configuration options below, one can use e.g. a `Tailorfile` to adjust the behaviour of Tailor as needed.
Available options:
Option | Description
---|---
`resourceName` | Name of the `BuildConfig`/`ImageStream` to use (defaults to the component ID).
`imageTag` | Image tag to apply (defaults to `context.shortGitCommit`).
`buildArgs` | Pass build arguments to the image build process.
`imageLabels` | Pass labels which should be added on the image. Each label will be prefixed with `ext.`.
`extensionImageLabels` | Extra image labels added into the image.
`buildTimeoutMinutes` | Timeout of the build (defaults to 15 minutes).
`dockerDir` | Docker context directory (defaults to `docker`).
`openshiftDir` | Directory with OpenShift templates (defaults to `openshift`).
`tailorPrivateKeyCredentialsId` | Credentials name of the secret key used by Tailor (defaults to `<projectId>-cd-tailor-private-key`).
`tailorSelector` | Selector scope used by Tailor (defaults to `app=<projectId>-<componentId>`).
`tailorVerify` | Whether Tailor verifies the live configuration against the desired state after application (defaults to `false`).
`tailorInclude` | Resource kind restriction used by Tailor (defaults to `bc,is`).
`tailorParamFile` | Path to a Tailor parameter file (defaults to none). Only relevant if the directory referenced by `openshiftDir` exists.
`tailorPreserve` | Paths to preserve in the live configuration (defaults to none).
`tailorParams` | Additional parameters to pass to Tailor (defaults to none).
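For example, to pass build arguments and allow a longer build (a sketch; the argument name is an assumption specific to your Dockerfile):

```groovy
odsComponentStageBuildOpenShiftImage(
  context,
  [
    buildArgs: [nodeEnv: 'production'],
    buildTimeoutMinutes: 30
  ]
)
```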
odsComponentStageImportOpenShiftImage
Imports an image from another namespace.
By default, the source image is identified using the commit which triggered the pipeline run.
Available options:
Option | Description
---|---
`resourceName` | Name of the `ImageStream` to use (defaults to the component ID).
`sourceProject` | OpenShift project from which to import the image identified by `resourceName` and `sourceTag`.
`sourceTag` | Image tag to look for in the source project (defaults to `context.shortGitCommit`).
`targetTag` | Image tag to apply to the imported image in the target project (defaults to `sourceTag`).
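For example, to import the image built for the current commit from a fixed project (a sketch; the project name is an assumption):

```groovy
odsComponentStageImportOpenShiftImage(
  context,
  [sourceProject: 'foo-dev']
)
```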
odsComponentStageImportOpenShiftImageOrElse
Imports an image from another namespace if possible; otherwise, the given closure is executed.
Example:

```groovy
odsComponentStageImportOpenShiftImageOrElse(context) {
  stage('Build') {
    // custom stage to build your application package
  }
  odsComponentStageBuildOpenShiftImage(context)
}
```

The stage takes exactly the same options as `odsComponentStageImportOpenShiftImage`.
Before running the image import, it checks whether the image (identified by the `sourceTag`) is present in a suitable project. This is the current target project, and potentially one or more projects specified by the pipeline option `imagePromotionSequences`. For example, if `imagePromotionSequences` is `['dev→test', 'test→prod']` (which is the default setting), then, given the current target environment is `test`, suitable environments are `dev` (based on `dev→test`) and `test` itself.
If the image is not present in a suitable project, the given closure is executed.
Using this "stage" allows you to avoid building a container image for the same Git commit multiple times, reducing build times and increasing reliability, as you can promote the exact same image from one environment to another. Keep in mind that image lookup works by finding an image tagged with the current Git commit. If you merge a branch into another using a merge commit, the current Git commit will differ from the previously built image tag, even if the actual contents of the repository are the same. To ensure image importing kicks in, use the `--ff-only` option on `git merge` (this can also be enabled for pull requests in Bitbucket under "Merge strategies"). There are a few consequences when doing so, which should be kept in mind:
- No merge commit is created. The downside is that you do not see when a PR was merged, and that a merge commit is a convenient way to find the associated PR. The upside is that your Git history is not polluted by merge commits.
- Enforcing a fast-forward merge prevents you from merging a branch which is not up to date with the target branch. The downside is that before merging, you may need to rebase your branch or merge the target branch into your branch if someone else updated the target branch in the meantime. While this may cause extra work, the upside is that you cannot accidentally break the target branch (e.g. tests on your branch may work based on the outdated target branch, but fail after the merge).
In summary, using `git merge --ff-only` provides safety and a clean history, and allows you to promote the exact same image between environments.
odsComponentStageRolloutOpenShiftDeployment
Triggers (and follows) a rollout of the `DeploymentConfig` related to the repository being built.
It achieves this by tagging the image built in `odsComponentStageBuildOpenShiftImage` with `latest`. This might already trigger a rollout based on an existing `ImageTrigger`. If none is set, the stage will start a manual rollout.
If the directory referenced by `openshiftDir` exists, the templates in there will be applied using Tailor. In this case, it is recommended to remove any image triggers to avoid duplicate rollouts (one when configuration changes due to a config trigger and one when the image is tagged to `latest`). In addition to the configuration options below, one can use e.g. a `Tailorfile` to adjust the behaviour of Tailor as needed.
Available options:
Option | Description
---|---
`resourceName` | Name of the `DeploymentConfig` to use (defaults to the component ID).
`imageTag` | Image tag on which to apply the `latest` tag (defaults to `context.shortGitCommit`).
`deployTimeoutMinutes` | Adjust timeout of rollout (defaults to 5 minutes). Caution: This needs to be aligned with the deployment strategy timeout (`timeoutSeconds`) and the readiness probe timeouts (`initialDelaySeconds + failureThreshold * periodSeconds`).
`deployTimeoutRetries` | Adjust retries to wait for the pod during a rollout (defaults to 5).
`openshiftDir` | Directory with OpenShift templates (defaults to `openshift`).
`tailorPrivateKeyCredentialsId` | Credentials name of the secret key used by Tailor (defaults to `<projectId>-cd-tailor-private-key`).
`tailorSelector` | Selector scope used by Tailor (defaults to `app=<projectId>-<componentId>`).
`tailorVerify` | Whether Tailor verifies the live configuration against the desired state after application (defaults to `false`).
`tailorExclude` | Resource kind exclusion used by Tailor (defaults to `bc,is`).
`tailorParamFile` | Path to a Tailor parameter file (defaults to none). Only relevant if the directory referenced by `openshiftDir` exists.
`tailorPreserve` | Paths to preserve in the live configuration (defaults to none).
`tailorParams` | Additional parameters to pass to Tailor (defaults to none).
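For example, to give a slow-starting component more time to roll out (a sketch; align these values with your deployment strategy and readiness probe timeouts):

```groovy
odsComponentStageRolloutOpenShiftDeployment(
  context,
  [
    deployTimeoutMinutes: 10,
    deployTimeoutRetries: 10
  ]
)
```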
odsComponentStageUploadToNexus
Triggers the upload of an artifact to Nexus. The implementation is based on https://help.sonatype.com/repomanager3/rest-and-integration-api/components-api.
Available options:
Option | Description
---|---
`repositoryType` | The Nexus repository type. Defaults to `maven2`.
`distributionFile` | The file to upload.
`repository` | The Nexus repository name. Defaults to `candidates`.
`groupId` (for `maven2`) | Defaults to the `groupId` from the pipeline context.
`version` (for `maven2`) | Defaults to the `tagversion` from the pipeline context.
`artifactId` (for `maven2`) | Defaults to the `componentId` from the pipeline context.
`targetDirectory` (for `raw`) | Defaults to the `projectId` from the pipeline context.
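For example, uploading a raw artifact could look like this (a sketch; the repository name and file path are assumptions):

```groovy
odsComponentStageUploadToNexus(
  context,
  [
    repositoryType: 'raw',
    repository: 'releases-raw', // hypothetical raw repository in Nexus
    distributionFile: "build/distributions/${context.componentId}-${context.tagversion}.tar.gz",
    targetDirectory: context.projectId
  ]
)
```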