Orchestration Pipeline
Usage
Load the shared library in your Jenkinsfile like this:
@Library('ods-jenkins-shared-library@4.x') _
odsOrchestrationPipeline(
  debug: true,
  odsImageTag: '4.x'
)
The release manager quickstarter comes with a Jenkinsfile that is already set up like this.
Configuration
Automated Resolution of Dependencies
The library automatically resolves dependencies between repositories to be orchestrated so that they can be delivered in the correct order. Currently, repositories that want to be orchestrated need to be added to the repositories list inside a release manager component’s metadata.yml:
id: PHOENIX
name: Project Phoenix
repositories:
- id: A
- id: B
  name: my-repo-B
- id: C
If a named repository wants to announce a dependency on another repo, the dependency needs to be listed in that repository’s release-manager.yml, simply by referring to its repo.id as defined in metadata.yml:
dependencies:
- A
The library supports the following repository types: ods, ods-infra, ods-service, ods-saas-service, ods-test and ods-library. Setting a repository type is required so the orchestrator can make correct assumptions based on the nature of the component at hand:
id: PHOENIX
name: Project Phoenix
repositories:
- id: A
  type: ods
- id: B
  name: my-repo-B
  type: ods
- id: C
  type: ods
Repository Type: ods
This type designates ODS components designed for code development. Such repositories are based on quickstarters whose names start with be-, ds-, or fe-, for backend, data science, and frontend, respectively. This is the default type.
If you use this type, ODS expects to find JUnit XML test results; if there are none, the pipeline will fail. If you are deploying something for which JUnit XML test results are not available, consider using the repository type ods-service instead.
Repository Type: ods-infra
This type designates ODS components designed for consuming on-prem or cloud services of arbitrary type using infrastructure as code. Such components are based on quickstarters whose names start with inf-.
Repository Type: ods-saas-service
This type designates ODS components designed for documenting vendor-provided SaaS services.
Repository Type: ods-service
This type designates ODS components designed for running services of arbitrary type. Examples include repositories based on the airflow-cluster quickstarter.
Automated Resolution of Repository Git URL
The library will attempt to resolve the repository URL based on the component’s origin remote URL and one of the following:
1) If the name parameter is provided, and not empty, the last path part of the URL is resolved to ${repo-name}.git.

2) If no name parameter is provided, the last path part of the URL is resolved to ${project-id}-${repo-id}.git (which is the repository name pattern used with OpenDevStack). Here ${project-id} refers to the lowercase value of the top-level id attribute in metadata.yml.
Example: Resolve Git URL for Repository 'B'
id: PHOENIX
name: Project Phoenix
repositories:
- id: B
  name: my-repo-B
Assuming your release manager component’s origin at https://github.com/my-org/my-pipeline.git in this example, the Git URL for repository B will resolve to https://github.com/my-org/my-repo-B.git, based on the value in repositories[0].name.
Example: Resolve Git URL for Repository 'C'
id: PHOENIX
name: Project Phoenix
repositories:
- id: C
Assuming your release manager component’s origin at https://github.com/my-org/my-pipeline.git in this example, the Git URL for repository C will resolve to https://github.com/my-org/phoenix-C.git, based on the values of the top-level id and repositories[0].id.
Use of custom branch ONLY for Developer Preview
If the preview-branch parameter is provided for a repository, it will be used as that repository’s default branch.
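For illustration, such a configuration in metadata.yml might look as follows (a sketch: the branch name feature/preview is a placeholder, and the per-repository placement mirrors the other repository parameters shown above):

id: PHOENIX
name: Project Phoenix
repositories:
- id: B
  name: my-repo-B
  preview-branch: feature/preview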
Automated Parallelization of Repositories
Instead of merely resolving repositories into a strictly sequential execution model, our library automatically understands which repositories form independent groups and can run in parallel for best time-to-feedback and time-to-delivery.
Partial rebuilding of components
By default, the shared library will rebuild all components of type ods, no matter which ones changed since the last release. In order to build only the components whose source code changed (partial rebuilding, as we will call it from now on), the following needs to be configured in metadata.yml:

allowPartialRebuild: true
If one repository should always be rebuilt, even if partial rebuild is configured at root level, forceRebuild: true can be set at repository level, e.g.:
id: PHOENIX
name: Project Phoenix
repositories:
- id: B
  name: my-repo-B
  forceRebuild: true
It is important to highlight that, despite partial rebuild being configured, the orchestration pipeline will still deploy all components (both those which changed and those which did not) to the target environment.
Optimization of runtime performance
By default, the shared library will always pull the agent image from the internal Docker registry. Depending on the cluster node setup, this may decrease execution performance. In order to re-use already loaded images, this behavior can be turned off in the odsOrchestrationPipeline configuration of the Jenkinsfile:

alwaysPullImage: false
By default, the orchestration pipeline will create a pod based on the jenkins-base-agent image to do much of its work. In rare cases, usually with a lot of repositories, one may hit an out-of-memory error on the pod named 'mro-XX'. In this case, the memory limit below should be adjusted (it defaults to '1Gi'):
mroAgentMemoryLimit = "1Gi"
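Both settings belong to the configuration passed to odsOrchestrationPipeline. A minimal sketch, assuming both are regular entries of the config map like the other options shown in this document (the values are placeholders to adapt to your setup):

@Library('ods-jenkins-shared-library@4.x') _
odsOrchestrationPipeline(
  debug: true,
  odsImageTag: '4.x',
  alwaysPullImage: false,    // re-use agent images already present on the node
  mroAgentMemoryLimit: '2Gi' // raise if the 'mro-XX' pod runs out of memory (default '1Gi')
)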
Automated Generation of Compliance Documents
The library automatically generates Lean Validation (LeVA) compliance reports based on data in your Jira project, as well as data generated along the automated build, deploy, test, and release process by the release manager component.
Note: when you configure a Jira service in the release manager component’s metadata.yml, our library expects your Jira project (identified by id) to follow a specific structure. If your Jira project has not been set up by a recent version of OpenDevStack, its structure will most likely differ. While we plan to support custom Jira setups in the future, you may disable the dependency on the Jira service entirely, as shown in the following example:
services:
  bitbucket:
    credentials:
      id: my-bitbucket-credentials
  # jira:
  #   credentials:
  #     id: my-jira-credentials
  nexus:
    repository:
      name: leva-documentation
In this case, the library will fall back to the document chapter templates located in your release manager component’s docs folder. Therein, you can provide chapter data to be loaded into the supported compliance documents.
Environment Promotion
This section will guide you through the "environment promotion" feature of the orchestration pipeline. It is assumed that you have the release manager quickstarter already provisioned and configured in your project.
What is the "environment promotion" feature?
Typically, software runs in different environments: one environment for development (DEV), one for quality assurance (QA), and one for production (PROD - this is what end-users of the software consume). Developers work on the software in the development environment. Once they finish one version (a state) of the software, they bring that version to the QA environment, and once this version is deemed production-ready, it is brought to the production environment so that users can consume the new version.
The environment promotion feature of the orchestration pipeline automates moving a certain version of the software from one environment to the next. Developers only have to tell the orchestration pipeline if a new version should be built (in DEV) and packaged as an installable "release bundle", or if an existing "release bundle" should be promoted to either the QA or the production environment.
The environment promotion feature is part of the regular orchestration pipeline. Therefore, the promotion is executed from various Jenkins stages. It is not possible to change the process itself, but you can customize how exactly the promotion happens for each of your software components.
Source Code Organisation
The components of your software are defined in the repositories section of the metadata.yml file in the release manager repository. In order for the orchestration pipeline to know which state of each component should be promoted, it needs to have some knowledge about how version control in your repositories is organised. Everything depends on a build parameter named version which the user supplies to the Jenkins pipeline. Other input parameters do not have any impact on source code lookup.
- When no version is given, the orchestration pipeline will default to WIP (work in progress). In this scenario, source code for each repository is taken from the default branch configured in the repository.
- When a version is given, source code will be taken from a branch release/$VERSION in each repository. When this branch does not exist yet, it will be created by the pipeline. Subsequent runs with the same version input will take the source code from the created release branch - changes to the default branch will have no effect on this version! This is by design: it allows some developers to work on new features on the default branch (typically master) while others polish the release branch. To this end, the orchestration pipeline allows you to enable separate development environments per version to isolate changes in OpenShift resources (see section "Environments" further down).
- The orchestration pipeline applies the same branching rules to the release manager repository - it will create a release branch per version. There is one small caveat here: Jenkins only considers the Jenkinsfile from the branch which is configured for a pipeline. That means that for a pipeline set up against master, Jenkins will always execute the latest Jenkinsfile from master, even when you pass an explicit version to the pipeline. The orchestration pipeline will read e.g. the metadata.yml file from the matching release branch, but the Jenkinsfile itself will be from master. Usually, this should not be an issue, as you should not make changes to the Jenkinsfile of the release manager repository anyway.
Release bundles
A specific "release bundle" is identified by four data points: a version
(as outlined above), a changeId
, a build number and an environment. The version
, changeId
and environment
are user-supplied input parameters to the release manager pipeline, the build number is calculated automatically. The changeId
can be any string meaningful to the user, its value does not have any effect on the operation of the orchestration pipeline. The environment input variable (such as DEV
) will be shortened to a single-letter token (e.g. D
).
Technically speaking, a release bundle is a certain state of the release manager repository and the state of each linked repository at that time. This state is identified by a Git tag. For example, a release bundle with version=1, changeId=1234, buildNumber=0 and environment=DEV is identified by the Git tag v1-1234-0-D. This tag is set on the release manager repository, and on all repositories the metadata.yml refers to at this time.
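Generalizing from this example, a release bundle tag follows the pattern:

v${version}-${changeId}-${buildNumber}-${envToken}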
Environments
The orchestration pipeline assumes three "conceptual" environments: DEV, QA and PROD (with short token forms D, Q and P). Those environments are strictly ordered - a state should go from DEV to QA, and then from QA to PROD.
To ensure that software progresses along the DEV → QA → PROD path, release bundles from environment DEV can only be installed into QA, and only a release bundle from QA can be installed into PROD. Installing a release bundle from DEV into PROD is not allowed.
Each "conceptual" environment is mapped to an OpenShift namespace:
- DEV to $PROJECT-dev (e.g. foo-dev)
- QA to $PROJECT-test (e.g. foo-test. Note that it is NOT -qa!)
- PROD to $PROJECT-prod (e.g. foo-prod)
Keep in mind that when you create a new project with OpenDevStack, you get three OpenShift namespaces:
- foo-dev (your DEV environment)
- foo-test (your QA environment - unfortunately not named -qa for historical reasons)
- foo-cd (where Jenkins runs and pipelines such as the orchestration pipeline are executed)
So while there is a corresponding namespace for DEV and QA, there is no namespace corresponding to the PROD environment out-of-the-box. This is because it is assumed that your PROD environment is likely on another cluster altogether. To create foo-prod on another cluster, you (or someone with appropriate rights) can run the script located at https://github.com/opendevstack/ods-core/blob/master/ocp-scripts/create-target-project.sh. Then you need to tell the orchestration pipeline two things: where the API of the external cluster is, and the credentials with which to access it. A typical configuration is:
id: foo
...
repositories: [ ... ]
environments:
  prod:
    apiUrl: https://api.example.com
    credentialsId: foo-cd-foo-prod
This assumes you have the API token credentials stored in a secret of type kubernetes.io/basic-auth named foo-prod in the foo-cd namespace. This secret needs to be synced with Jenkins (which is achieved by labeling it with credential.sync.jenkins.openshift.io=true). The stored credentials need to belong to a serviceaccount with rights to admin the foo-prod namespace. The easiest way to set all of this up is by running the script located at https://github.com/opendevstack/ods-core/blob/master/ocp-scripts/create-target-sa-secret.sh, which makes use of the output of the create-target-project.sh script run earlier.
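For illustration, the secret created by that script could look roughly like the following sketch (the username and password values are placeholders for the serviceaccount’s name and API token):

apiVersion: v1
kind: Secret
metadata:
  name: foo-prod
  namespace: foo-cd
  labels:
    credential.sync.jenkins.openshift.io: "true"  # triggers the sync to Jenkins
type: kubernetes.io/basic-auth
stringData:
  username: my-deploy-serviceaccount  # placeholder
  password: <serviceaccount API token>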
It is also possible to have the PROD environment on the same cluster. In that case, you simply create a foo-prod namespace next to foo-dev and foo-test, and allow the foo-cd:jenkins serviceaccount to admin that namespace; you then do not need to configure anything in metadata.yml, as the default configuration assumes the same cluster. The opposite is also possible: you can configure the QA environment to be on a different cluster than the DEV environment - simply follow the instructions above to create a foo-test namespace there.
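A qa entry in metadata.yml is configured analogously to the prod example above; for illustration (API URL and credentials ID are placeholders):

environments:
  qa:
    apiUrl: https://api.other-cluster.example.com
    credentialsId: foo-cd-foo-test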
As mentioned in the "Source Code Organisation" section, the orchestration pipeline allows you to enable separate development environments to isolate different versions. When this mode is enabled, pipeline runs with version=WIP will deploy into the $PROJECT-dev namespace as usual, but pipeline runs with version=X will deploy into $PROJECT-dev-X. The $PROJECT-dev-X environment has to be created beforehand (e.g. by cloning $PROJECT-dev with its serviceaccounts and rolebindings). To enable this feature, set versionedDevEnvs to true in the config of your Jenkinsfile, like this:
def config = [debug: true, odsImageTag: 'x.x', versionedDevEnvs: true]
Customizing the Release Manager configuration
Timeouts and retries
If one of your components takes longer than 10 minutes (the default value) to be promoted from one environment to another, the Release Manager pipeline will exit due to this timeout.
You can increase this timeout by setting openshiftRolloutTimeoutMinutes per environment in the metadata.yml file of the Release Manager repository.
Similarly, the number of retries is configurable with the openshiftRolloutTimeoutRetries property.
The following example establishes a timeout of 120 minutes for both the qa and prod environments, with a total number of 3 retries.
...
environments:
  prod:
    apiUrl: https://...
    credentialsId: ...
    openshiftRolloutTimeoutMinutes: 120
    openshiftRolloutTimeoutRetries: 3
  qa:
    openshiftRolloutTimeoutMinutes: 120
    openshiftRolloutTimeoutRetries: 3
...
Walkthrough
Let’s start by assuming you have a project FOO with two components, X and Y. These components are defined under the repositories section in the metadata.yml file of the release manager repository. When you want to create a new release, you start the orchestration pipeline with input parameters - we will use version 1 and change ID 1234 in this example. The environment should be DEV. At the end of the pipeline run, you’ll have a release bundle identified by the tag v1-1234-0-D. This release can later be promoted as-is to QA. Once it is installed there, the same release bundle will be tagged with v1-1234-0-Q, which can then be promoted to PROD (where it will be tagged with v1-1234-0-P).
To create a release bundle, the orchestration pipeline will first trigger the build of each component. Then, it will export all resources in your OpenShift namespace ($PROJECT-$ENVIRONMENT, here foo-dev) belonging to the component. By convention, this means all resources labeled with app=$PROJECT-$COMPONENT (e.g. app=foo-x). Any resources without such a label will NOT be part of the release bundle. The exported resources are stored in a template.yml file (an OpenShift template) located in the openshift-exported folder within each component repository. Further, the container image SHA of the running pod is retrieved and stored in the file image-sha in the same folder. Once done, the orchestration pipeline will commit the two files, tag the commit with v1-1234-0-D and push to the remote. After this process has been done for all repositories, the same tag is also applied to the release manager repository. At this stage, the "dev release bundle" is complete and can be installed into QA.
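After a successful run, each component repository therefore contains roughly the following on the release branch (a sketch; the two files are the ones described above):

openshift-exported/
  template.yml   # exported OpenShift template
  image-sha      # container image SHA of the running pod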
To trigger the installation of an existing release bundle, the user needs to supply a version and changeId which have previously been used to create a release bundle. In our example, supplying version=1, changeId=1234 and environment=QA will promote the release bundle identified by v1-1234-0-D to the QA environment and tag it with v1-1234-0-Q. Now that we have a "QA release bundle", we can promote it to PROD by supplying version=1, changeId=1234 and environment=PROD.
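To summarize the example, the three runs look like this:

- version=1, changeId=1234, environment=DEV: builds the components and creates the release bundle tagged v1-1234-0-D
- version=1, changeId=1234, environment=QA: installs the bundle identified by v1-1234-0-D and tags it with v1-1234-0-Q
- version=1, changeId=1234, environment=PROD: installs the bundle identified by v1-1234-0-Q and tags it with v1-1234-0-P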
Customizing release bundle creation
As outlined above, a release bundle is essentially a state of all involved Git repositories. Each component repository contains two artifacts:
- a container image SHA
- OpenShift resource configuration (expressed in an OpenShift template)
You cannot modify the image SHA (it is the result of what the component pipeline builds), but you can influence the OpenShift template. One reason to do so is that e.g. routes or ConfigMap values will need to differ between environments, and you need to tell the orchestration pipeline to parametrize the templates, and to supply the right values when the templates are applied in the target environment.
When the orchestration pipeline exports configuration, it has no way to tell which values should actually be parameters. For example, you might have a route x.foo-dev.dev-cluster.com in DEV, and want this to be x.foo-test.dev-cluster.com in QA and x.foo-prod.prod-cluster.com in PROD. In the exported template, the value x.foo-dev.dev-cluster.com will be hardcoded. To fix this, you can create three files in the release manager repository: dev.env, qa.env and prod.env. These files may contain PARAM=value lines, like this:
dev.env:
X_ROUTE=x.foo-dev.dev-cluster.com

qa.env:
X_ROUTE=x.foo-test.dev-cluster.com

prod.env:
X_ROUTE=x.foo-prod.prod-cluster.com
All three files need to list the exact same parameters - otherwise applying the templates will fail. Once those param files are present, the orchestration pipeline will pick them up automatically. When you create a release bundle (in DEV), the param file is applied "in reverse", meaning that any concrete param value (on the right) will be substituted with the param key (on the left) in the template. Later, when the template is applied in e.g. QA, the param keys are replaced with the concrete values from qa.env.
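For example, with the param files above, the hardcoded value x.foo-dev.dev-cluster.com in the exported template is substituted with the parameter reference ${X_ROUTE} when the release bundle is created; installing the bundle into QA then renders it as x.foo-test.dev-cluster.com using qa.env.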
It is necessary to have all the param files complete before you create a release bundle - if you want to change e.g. the value of a parameter in the prod.env file afterwards, you will need to create a new release bundle (as bundles are identified by Git tags, which do not move when you make new commits on the release branch).
Next to parametrizing templates, you can also adjust how the export is done. As the export is using Tailor, the best way to customize it is to supply a Tailorfile in the openshift-exported folder, in which you can define the options you want to set, such as excluding certain labels or resource types, or preserving specific fields in the live configuration. Please see Tailor’s documentation for more information. It is also possible to have different configuration files per environment if you suffix them with the $PROJECT, e.g. Tailorfile.foo-dev.
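A minimal sketch of such a Tailorfile - flag names follow Tailor’s CLI flags, so check Tailor’s documentation for the exact options available in your version:

# restrict export/apply to resources carrying the component label
selector app=foo-x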
If you have component-specific parameters that differ between environments, a lightweight way to add these is via parameter files located in the openshift-exported folder matching the target project, such as foo-dev.env, foo-test.env and foo-prod.env. These files are picked up automatically without special setup in a Tailorfile.
Authoring OpenShift configuration
In the process described above, the OpenShift configuration is exported and stored in the repositories in openshift-exported. This approach is easy to get started with, but it does have limitations:
- There is no defined state: whatever gets exported is what will be promoted, even if a certain configuration was meant to be only temporary or is specific to e.g. only the DEV environment.
- There is little traceability: as configuration is done through the OpenShift web interface, it is not known who made a change and when, and there is no chance for other team members to review it.
- The parametrization of the exported template might produce incorrect results, as it is just a string search-and-replace operation without further knowledge of the meaning of your configuration values.
To overcome these issues, it is possible to author the OpenShift templates yourself instead of exporting them. The fastest way to start with this is by renaming the folder openshift-exported (containing the exported template) to openshift. From this point on, the orchestration pipeline will skip the export, and apply whatever is defined in the openshift folder.
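For illustration, a minimal hand-authored template in the openshift folder might look like this sketch (component name x, label app: foo-x and parameter X_ROUTE follow the examples above; the service reference is an assumption):

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: x-template
parameters:
- name: X_ROUTE
  required: true
objects:
- apiVersion: route.openshift.io/v1
  kind: Route
  metadata:
    name: x
    labels:
      app: foo-x  # label convention used by the export, see the Walkthrough section
  spec:
    host: ${X_ROUTE}
    to:
      kind: Service
      name: x  # assumes a service of the same name exists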
If you are new to writing OpenShift templates, please read https://github.com/opendevstack/tailor#template-authoring.
When you author templates, you can also store the secrets in the param files GPG-encrypted (.env.enc files). To achieve this, you need to create a private/public keypair for Jenkins, store the private key in a secret called tailor-private-key in your foo-cd namespace, and sync it as a Jenkins credentials item. Once the .env.enc files are encrypted against the public key, the orchestration pipeline will automatically use the private key to decrypt the params on the fly. Please see Working with Secrets for more information.
Known Limitations
- For versioned, separate DEV environments, pulling images from the foo-cd namespace is not possible (because the foo-cd:jenkins serviceaccount does not have admin rights in foo-cd and therefore cannot grant access to it).
- Tagging means we are pointing to a concrete SHA of a Git repository. This enforces that no manual editing of exported config can happen between promotion to QA and promotion to PROD, which in effect forces everything to be parameterized properly.
- JIRA always triggers the master branch of the release manager, which means the Jenkinsfile is always taken from master (and NOT from the correct release branch - only metadata.yml etc. are read from the release branch).
- There is only one QA namespace, which prevents testing multiple releases at the same time.
- The secret of the serviceaccount in the target cluster is known to the orchestration pipeline (as a Jenkins credential synced from OpenShift); therefore, developers with edit/admin rights in the CD namespace have access to that secret.
- Tags could manually be set / moved (this can be prevented in Bitbucket by administrators).
- Passwords etc. in the OpenShift configuration are stored in clear text in the export (this can be prevented by authoring templates and using a private key for encryption of param files).
- During export, the templates are parameterized automatically, but this is done using string search-and-replace, and unwanted replacements might occur (this can be prevented by authoring the templates manually).
- By default, SonarQube scans (and reports) are only generated for the master branch of each component. As the orchestration pipeline automatically creates release branches for each version, no scans and reports are created on those. This can be changed by configuring sonarQubeBranch: '*' in each component’s Jenkinsfile; however, keep in mind that quality trends etc. will be mixed up if you use the free version of SonarQube, as that version does not have support for multiple branches.
- An existing QA tag cannot be deployed again in PROD. This has been intentionally designed that way, as any change to PROD needs its unique change ID, which results in a new tag.