Provisioning App: Internal architecture / Development

The project is based on Spring Boot and uses several technologies, which can be seen in the build.gradle.

The provisioning app is merely an orchestrator that makes HTTP REST calls to Atlassian Crowd, Jira, Confluence, Bitbucket and Jenkins (for OpenShift interaction).

The APIs exposed for direct usage, as well as for the UI, are in the controller package. The connectors to the various tools that create resources are in the services package.
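As an illustration of this split, the following simplified sketch shows a controller delegating to a service connector; the class and method names are invented for illustration and do not match the actual classes in the repository.

// Simplified sketch of the controller/service split described above.
// Class, method and endpoint names are illustrative only and do not match the real code base.

// "Controller" layer: exposes the REST API used directly and by the UI.
class ProjectController {
  private final BitbucketService bitbucketService;

  ProjectController(BitbucketService bitbucketService) {
    this.bitbucketService = bitbucketService;
  }

  // In the real application this would be a Spring request-mapping handler.
  String createProject(String projectKey) {
    // The controller only orchestrates; resource creation is delegated to the service layer.
    return bitbucketService.createProjectRepository(projectKey);
  }
}

// "Service" layer: connector that talks to an external tool (here: Bitbucket) over HTTP REST.
class BitbucketService {
  String createProjectRepository(String projectKey) {
    // A real connector would issue an authenticated REST call to Bitbucket
    // and map the JSON response onto a POJO; this stub just returns a repository name.
    return projectKey.toLowerCase() + "-repository";
  }
}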

How to develop and run it locally

  1. Make sure that you have Git and Java (>= 11) installed.

  2. Clone the project from GitHub:

$ git clone https://github.com/opendevstack/ods-provisioning-app.git

To run it locally, two Spring profiles are provided: odsbox and odsbox_quickstarters. The profile odsbox configures the application to connect to the ODS development environment (ODSBOX).

Use this command to start it from the command-line:

./gradlew bootRun --args='--spring.profiles.active=odsbox,odsbox_quickstarters'

  3. Change directory into ods-provisioning-app

$ cd ods-provisioning-app

  4. If you want to build / run locally, create gradle.properties in the project’s root to configure connectivity to the OpenDevStack NEXUS:

nexus_url=<NEXUS HOST>
nexus_user=<NEXUS USER>
nexus_pw=<NEXUS_PW>

If you want to build / run locally without NEXUS, you can disable NEXUS by adding the following property to gradle.properties:

no_nexus=true

Alternatively, you can also configure the build using environment variables:

Gradle property    Environment variable
nexus_url          NEXUS_HOST
nexus_user         NEXUS_USERNAME
nexus_pw           NEXUS_PASSWORD
no_nexus           NO_NEXUS

  5. You can start the application with the following command:

# to run the server execute
./gradlew bootRun

To overwrite the provided application.properties, a ConfigMap is created from them and injected into /config/application.properties within the container. The base ConfigMap as well as the deployment YAMLs can be found in ocp-config; they overwrite parameters from application.properties.

  6. After starting the server, it can be reached in the browser at

http://localhost:8080

How to deploy to OpenShift

In order to test your changes in a real environment, you should deploy the provisioning app in OpenShift. To do so, you need to have an existing OpenDevStack project (consisting of -dev, -test and -cd namespaces). If you don’t have one yet, you can create one via the provisioning app in the central namespace.

Now you can make use of the ods-provisioning-app quickstarter to set up the Bitbucket repository in your Bitbucket space. You can either register the quickstarter in the provisioning app in the central namespace, and then provision it from there; or use the script in https://github.com/BIX-Digital/ods-contrib/tree/master/quickstart-with-jenkins. Once you have provisioned the quickstarter, the first build will create a container image and place it in the ImageStream, using the commit SHA as image tag.

To deploy this image in the central namespace, you have to tag it into that namespace. From your local machine, run:

oc tag <PROJECT>-dev/<COMPONENT>:<GIT SHA> ods/ods-provisioning-app:<GIT SHA>

Then, in ods-configuration/ods-core.env, set PROV_APP_FROM_IMAGE to ods/ods-provisioning-app:<GIT SHA> and run the deployment using:

make install-provisioning-app

Frontend Code

The frontend is based on jQuery and Thymeleaf. All posting to the API happens from JavaScript (client.js).

ODS 3.x contains a new single-page app UI (based on Angular) as an experimental feature, located in the client folder. In order to use this UI, the feature flag frontend.spa.enabled must be set to true in application.properties. Please refer to the client README on how to set up local development for the frontend code.

Backend Code

The backend is based on Spring Boot, authenticates against Atlassian Crowd (using property provision.auth.provider=crowd) or an OAuth2/OpenID Connect provider (using property provision.auth.provider=oauth2) and exposes consumable APIs (api/v2/project). Storage of created projects happens on the filesystem through the StorageAdapter. Both frontend (HTML) and backend are tested through JUnit & Mockito.
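As a rough illustration of that testing approach, the following JUnit 5 / Mockito sketch uses invented class names (ProjectService, JiraAdapter) that do not refer to actual classes in the repository:

// Illustrative JUnit 5 / Mockito sketch; ProjectService and JiraAdapter are
// hypothetical names, not classes from the actual code base.
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class ProjectServiceTest {

  interface JiraAdapter {
    String createProject(String key);
  }

  static class ProjectService {
    private final JiraAdapter jira;

    ProjectService(JiraAdapter jira) {
      this.jira = jira;
    }

    String provision(String key) {
      return jira.createProject(key);
    }
  }

  @Test
  void provisionDelegatesToJiraAdapter() {
    // The external tool is mocked, so the test runs without any real HTTP calls.
    JiraAdapter jira = mock(JiraAdapter.class);
    when(jira.createProject("TEST")).thenReturn("TEST");

    ProjectService service = new ProjectService(jira);

    assertEquals("TEST", service.provision("TEST"));
  }
}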

Authentication Implementation

By setting the property provision.auth.provider=crowd or provision.auth.provider=oauth2, the application uses either CROWD or OAUTH2 authentication. Depending on the property value, different Spring beans are used for configuration. The switch between the two options is implemented via Spring’s ConditionalOnProperty annotation.

CROWD - specific configuration classes are located in the java package org.opendevstack.provision.authentication.crowd.

Example:

org.opendevstack.provision.authentication.crowd.CrowdSecurityConfiguration.java
@Configuration
@EnableWebSecurity
@EnableCaching
@EnableEncryptableProperties
@ConditionalOnProperty(name = "provision.auth.provider", havingValue = "crowd")
public class CrowdSecurityConfiguration extends WebSecurityConfigurerAdapter {
//...
}

OAUTH2 - specific configuration classes are located in the java package org.opendevstack.provision.authentication.oauth2.

Example:

org.opendevstack.provision.authentication.oauth2.Oauth2SecurityConfiguration.java
@Configuration
@Order(Ordered.HIGHEST_PRECEDENCE)
@ConditionalOnProperty(name = "provision.auth.provider", havingValue = "oauth2")
@EnableWebSecurity
@EnableOAuth2Client
public class Oauth2SecurityConfiguration extends WebSecurityConfigurerAdapter {
//...
}

Consuming REST APIs in Java

Generally this is a pain. To ease development, a few tools are in use:

  • Jackson (see link below)

  • OkHttp3 client (see link below)

  • jsonschema2pojo generator (see link below)

The process for calling new operations is:

  1. Look up the API call that you intend to make

  2. See if there is a JSON Schema available

  3. Generate POJO(s) for the endpoint

  4. Use the POJO to build your request, convert it to JSON with Jackson and send it via OkHttp3 and the provisioning app’s RestClient, as sketched below
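A minimal sketch of steps 3 and 4, assuming a hand-written stand-in for a generated POJO and a placeholder endpoint URL (the real code routes requests through the provisioning app’s RestClient rather than calling OkHttp directly), could look like this:

// Sketch only: ProjectData stands in for a POJO generated with jsonschema2pojo,
// and the URL is a dummy. The real code uses the provisioning app's RestClient.
import com.fasterxml.jackson.databind.ObjectMapper;
import okhttp3.MediaType;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.RequestBody;
import okhttp3.Response;

public class RestCallSketch {

  // Stand-in for a generated POJO.
  public static class ProjectData {
    public String projectKey;
    public String projectName;
  }

  public static void main(String[] args) throws Exception {
    ProjectData data = new ProjectData();
    data.projectKey = "DEMO";
    data.projectName = "Demo project";

    // Convert the POJO to JSON with Jackson.
    String json = new ObjectMapper().writeValueAsString(data);

    // Send it via OkHttp3 to a placeholder endpoint.
    RequestBody body =
        RequestBody.create(MediaType.parse("application/json; charset=utf-8"), json);
    Request request = new Request.Builder()
        .url("https://example.invalid/api/v2/project") // placeholder URL
        .post(body)
        .build();

    try (Response response = new OkHttpClient().newCall(request).execute()) {
      System.out.println(response.code());
    }
  }
}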

Consuming REST APIs via curl

Basic Auth is the recommended way to consume the REST API. How to enable Basic Auth authentication is explained in Authentication Crowd Configuration.

The following sample script could be used to provision a new project, add a quickstarter to a project or remove a project. It uses Basic Auth to authenticate the request.

#!/usr/bin/env bash

set -eu

# Setup these variables
# PROVISION_API_HOST=<protocol>://<hostname>:<port>
# BASIC_AUTH_CREDENTIAL=<USERNAME>:<PASSWORD>
# PROVISION_FILE=provision-new-project-payload.json

PROV_APP_CONFIG_FILE="${PROV_APP_CONFIG_FILE:-prov-app-config.txt}"

if [ -f $PROV_APP_CONFIG_FILE ]; then
	cat $PROV_APP_CONFIG_FILE
	source $PROV_APP_CONFIG_FILE
else
	echo "No config file found, assuming defaults, current dir: $(pwd)"
fi

# not set - use post as operation, create new project
COMMAND="${1:-POST}"

echo
echo "Started provision project script with command (${COMMAND})!"
echo
echo "... encoding basic auth credentials in base64 format"
BASE64_CREDENTIALS=$(echo -n $BASIC_AUTH_CREDENTIAL | base64)
echo
echo "... sending request to '"$PROVISION_API_HOST"' (output will be saved in file './response.txt' and headers in file './headers.txt')"
echo
RESPONSE_FILE=response.txt

if [ -f $RESPONSE_FILE ]; then
	rm -f $RESPONSE_FILE
fi

if [ ${COMMAND^^} == "POST" ] || [ ${COMMAND^^} == "PUT" ]; then
echo
	echo "create or update project - ${COMMAND^^}"
	if [ ! -f $PROVISION_FILE ]; then
		echo "Input for provision api (${PROVISION_FILE}) does not EXIST, aborting\ncurrent: $(pwd)"
		exit 1
	fi
	echo "... ${COMMAND} project request payload loaded from '"$PROVISION_FILE"'"ยด
	echo
	echo "... displaying payload file content:"
	cat $PROVISION_FILE
	echo

	http_resp_code=$(curl --insecure --request ${COMMAND} "${PROVISION_API_HOST}/api/v2/project" \
	--header "Authorization: Basic ${BASE64_CREDENTIALS}" \
	--header 'Accept: application/json' \
	--header 'Content-Type: application/json' \
	--data @"$PROVISION_FILE" \
	--dump-header headers.txt -o ${RESPONSE_FILE} -w "%{http_code}" )
elif [ ${COMMAND^^} == "DELETE" ] || [ ${COMMAND^^} == "GET" ]; then
	echo "delete / get project - ${COMMAND^^}"
	if [ -z "${2:-}" ]; then
		echo "Project Key must be passed as second param in case of command == delete or get!!"
		exit 1
	fi

	http_resp_code=$(curl -vvv --insecure --request ${COMMAND} "${PROVISION_API_HOST}/api/v2/project/$2" \
	--header "Authorization: Basic ${BASE64_CREDENTIALS}" \
	--header 'Accept: application/json' \
	--header 'Content-Type: application/json' \
	--dump-header headers.txt -o ${RESPONSE_FILE} -w "%{http_code}" )
else
	echo "ERROR: Command ${COMMAND} not supported, only GET, POST, PUT or DELETE"
	exit 1
fi

echo "curl request successful..."
echo
echo "... displaying HTTP response body (content from './response.txt'):"
if [ -f ${RESPONSE_FILE} ]; then
	cat ${RESPONSE_FILE}
else
	echo "No request (body) response recorded"
fi

echo
echo "... displaying HTTP response code"
echo "http_resp_code=${http_resp_code}"
echo
if [ $http_resp_code != 200 ]
  then
    echo "something went wrong... endpoint responded with error code [HTTP CODE="$http_resp_code"] (expected was 200)"
    exit 1
fi
echo "provision project request (${COMMAND}) completed successfully!!!"

The PROVISION_FILE should point to a JSON file that defines the payload for provisioning a new project. This is an example:

{
    "projectName": "<PROJECT_NAME>",
    "projectKey": "<PROJECT_NAME>",
    "description": "project description",
    "projectType": "default",
    "cdUser": "project_cd_user",
    "projectAdminUser": "<ADMIN_USER>",
    "projectAdminGroup": "<ADMIN_GROUP>",
    "projectUserGroup": "<USER_GROUP>",
    "projectReadonlyGroup": "<READ_ONLY_GROUP>",
    "bugtrackerSpace": true,
    "platformRuntime": true,
    "specialPermissionSet": true,
    "quickstarters": []
}

For the provisioning of a quickstarter, use PUT as the command instead of POST. The following is an example of the PROVISION_FILE for quickstarter provisioning:

{
    "projectKey":"<PROJECT-NAME>",
    "quickstarters":[{
        "component_type":"docker-plain",
        "component_id":"be-docker-example"
    }]
}

Pre Flight Checks

The provisioning of a new project requires the creation of the project in different servers (Jira, Bitbucket, Confluence, OpenShift, etc.). If an exception happens, this process is interrupted, leaving the provisioning of the new project incomplete. To avoid this situation, a series of checks called "Pre Flight Checks" was implemented. These checks verify that all required conditions are met in the target systems (Jira, Bitbucket, Confluence) before provisioning a new project.
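The following heavily simplified sketch illustrates the idea of collecting precondition failures before any resource is created; the method names and error texts are invented for illustration and do not reflect the actual check implementation:

// Illustrative sketch of the "check first, create later" idea behind Pre Flight Checks.
// Method names and error texts are hypothetical, not taken from the actual code base.
import java.util.ArrayList;
import java.util.List;

public class PreFlightCheckSketch {

  // Each target system (Jira, Bitbucket, Confluence, ...) would contribute its own checks.
  static List<String> checkBitbucketPreconditions(String cdUser) {
    List<String> failures = new ArrayList<>();
    boolean userExists = false; // a real check would query the Bitbucket REST API here
    if (!userExists) {
      failures.add("UNEXISTANT_USER: user '" + cdUser + "' does not exists in bitbucket!");
    }
    return failures;
  }

  public static void main(String[] args) {
    List<String> failures = checkBitbucketPreconditions("cd_user_wrong_cd_user");
    if (!failures.isEmpty()) {
      // The endpoint would answer with HTTP 503 and the collected errors,
      // without having created anything in the target systems.
      failures.forEach(System.out::println);
      return;
    }
    // Only when all checks pass does the actual provisioning start.
    System.out.println("All preconditions met - provisioning can start.");
  }
}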

Response examples:

The following are some sample responses of the provision new project endpoint POST api/v2/project:

Pre Flight Check failed:

HTTP CODE: 503 Service Unavailable
{"endpoint":"ADD_PROJECT","stage":"CHECK_PRECONDITIONS","status":"FAILED","errors":[{"error-code":"UNEXISTANT_USER","error-message":"user 'cd_user_wrong_cd_user' does not exists in bitbucket!"}]}

Pre Flight Check failed due to an exception:

HTTP CODE: 503 Service Unavailable
{"endpoint":"ADD_PROJECT","stage":"CHECK_PRECONDITIONS","status":"FAILED","errors":[{"error-code":"EXCEPTION","error-message":"Unexpected error when checking precondition for creation of project 'PROJECTNAME'"}]}

Pre Flight Check successfully passed and project was created:

HTTP CODE: 200 OK
{
    "projectName": "MYPROJECT",
    "description": "My new project",
    "projectKey": "MYPROJECT",
    ...
}

Failed response due to an exception after Pre Flight Checks successfully passed:

HTTP CODE: 500 Internal Server Error

An error occured while creating project [PROJECTNAME], reason [component_id 'ods-myproject-component106' is not valid name (only alpha chars are allowed with dashes (-) allowed in between.] - but all cleaned up!

Option "onlyCheckPreconditions=TRUE":

The provision new project endpoint POST api/v2/project accepts a URL parameter called onlyCheckPreconditions. By setting this parameter to true (POST api/v2/project?onlyCheckPreconditions=TRUE), only the Pre Flight Checks will be executed. This can be useful for the development of new Pre Flight Checks or for integration scenarios. In the latter case, one could set this parameter to TRUE to verify all preconditions before creating a project.
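As a minimal illustration, assuming the application runs locally on port 8080 and using placeholder credentials and a shortened payload, a precondition-only request could be sent with the JDK 11 HTTP client like this:

// Sketch using the JDK 11 HTTP client; host, credentials and payload are placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PreconditionOnlyRequest {

  public static void main(String[] args) throws Exception {
    String credentials =
        Base64.getEncoder().encodeToString("user:password".getBytes(StandardCharsets.UTF_8));
    String payload = "{\"projectName\":\"MYPROJECT\",\"projectKey\":\"MYPROJECT\"}"; // shortened

    HttpRequest request = HttpRequest.newBuilder()
        // onlyCheckPreconditions=TRUE: run the Pre Flight Checks only, create nothing.
        .uri(URI.create("http://localhost:8080/api/v2/project?onlyCheckPreconditions=TRUE"))
        .header("Authorization", "Basic " + credentials)
        .header("Content-Type", "application/json")
        .header("Accept", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(payload))
        .build();

    HttpResponse<String> response =
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode() + " " + response.body());
  }
}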