
Build and deploy applications with Kabanero Pipelines

Duration: 30 minutes

Kabanero uses Tekton Pipelines to implement a continuous integration and continuous delivery (CI/CD) workflow. Each of the featured Collections contains a default pipeline that builds the collection, publishes the image to a container registry, and then deploys the application to the Kubernetes cluster. You can also create your own tasks and pipelines and customize the pre-built pipelines and tasks. All tasks and pipelines are activated by Kabanero's Kubernetes operator.

To learn more about Tekton Pipelines and creating new tasks, see the Tekton Pipeline tutorial.

Tasks and Pipelines

Each collection contains a set of default tasks and pipelines that are created when the collection is built. The collections build process copies the task and pipeline files from the collections repo to your /pipelines directory.

If you are building a new collection, the default tasks and pipelines are automatically pulled into your collections repo. If you want to customize the default tasks or pipelines, you can apply your changes either to all collections or to a specific collection. To apply your changes to all collections, update the files in the incubator/common/pipelines/default directory in your collections repo. To update tasks or pipelines for a single collection, make your changes in the pipelines directory under that collection in your collections repo.

The build, push, and deploy pipeline

This file is the primary pipeline that is associated with the collections. It validates that the collection is active, builds the application from the Git source, publishes the container image, and deploys the application. It looks for the git-source and docker-image resources that are used by the build-task and deploy-task files.
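The exact pipeline definition ships with each collection in the kabanero-pipelines repo, but as a rough sketch, assuming the v1alpha1 Tekton API and with the pipeline and task names invented here for a java-microprofile collection, it has this shape:

apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: java-microprofile-build-deploy-pipeline   # placeholder name
  namespace: kabanero
spec:
  resources:
    - name: git-source        # the Git repo to build from
      type: git
    - name: docker-image      # the image to publish
      type: image
  tasks:
    - name: build-task
      taskRef:
        name: java-microprofile-build-task        # placeholder task name
      resources:
        inputs:
          - name: git-source
            resource: git-source
        outputs:
          - name: docker-image
            resource: docker-image
    - name: deploy-task
      taskRef:
        name: java-microprofile-deploy-task       # placeholder task name
      runAfter:
        - build-task
      resources:
        inputs:
          - name: git-source
            resource: git-source
          - name: docker-image
            resource: docker-image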

Tasks

build-task: This file builds a container image from the artifacts in the git-source repository by using Buildah. After the image is built, the build-task file publishes it to the docker-image URL, also by using Buildah.

deploy-task: This file modifies the app-deploy.yaml file, which describes the deployment options for the application. The deploy-task file updates app-deploy.yaml to point to the image that was published and deploys the application by using the Appsody operator. Generate your app-deploy.yaml file by running the appsody deploy --generate-only command.

By default, the pipelines run and deploy the application in the kabanero namespace. If you want to deploy the application in a different namespace, update the app-deploy.yaml file to point to that namespace.
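For reference, a generated app-deploy.yaml looks roughly like the following sketch; the application name, image, and port are placeholders, and the exact fields depend on your stack:

apiVersion: appsody.dev/v1beta1
kind: AppsodyApplication
metadata:
  name: my-app                 # placeholder application name
  namespace: kabanero          # change this value to deploy to a different namespace
spec:
  applicationImage: index.docker.io/mydockerid/my-java-microprofile-image   # updated by deploy-task
  stack: java-microprofile
  service:
    port: 9080                 # placeholder; depends on the stack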

For more information, see the kabanero-pipelines repo.

Running Pipelines

Explore how to use Pipelines to build and manage Collections.

Prerequisites

  1. Kabanero foundation must be installed on a Red Hat Origin Community Distribution of Kubernetes (OKD) or OpenShift Container Platform (OCP) cluster. At present, Kabanero 0.3.0 and later supports only OCP 4.2+. When OKD provides a comparable 4.x stream, Kabanero will again support OKD, providing a fully open source stack.

  2. The Tekton Dashboard is installed by default with Kabanero's Kubernetes operator. To find the Tekton Dashboard URL, log in to your cluster and run the oc get routes command, or check the Kabanero getting started page in the OKD or OCP console.

  3. Dynamic volume provisioning or a persistent volume must be configured. See the following section for details.

Getting started

Set up a persistent volume to run pipelines

Pipelines require a configured volume that the framework uses to share data across tasks. The build task, which uses Buildah, also requires a volume mount. The pipeline run creates a Persistent Volume Claim (PVC) that requests 5 Gi of persistent storage.

  1. Log in to your cluster. For example,

    oc login <master node IP>:8443
  2. Clone the kabanero-pipelines repo.

    git clone https://github.com/kabanero-io/kabanero-pipelines
Static Persistent Volumes

If you are not running your OpenShift cluster on a public cloud, you can set up a static persistent volume. For an example of how to use static persistent volume provisioning, see Static Persistent Volumes.
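As a minimal sketch, assuming a single-node test cluster where a hostPath volume is acceptable, a static persistent volume that can satisfy the pipeline's claim might look like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-pipeline-run-pv    # placeholder name
spec:
  capacity:
    storage: 5Gi                  # matches the 5 Gi requested by the pipeline run
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /mnt/pipelines-data     # placeholder path; hostPath is suitable for test clusters only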

Dynamic Volume Provisioning

If you run your OpenShift cluster on a public cloud, you can set up a dynamic persistent volume by using your cloud provider’s default storage class. For an example of how to use dynamic persistent volume provisioning, see Dynamic Volume Provisioning.
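To check whether your cluster already has a default storage class that dynamic provisioning can use, run:

oc get storageclass

The default class is marked (default) in the output.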

Create secrets

Git secrets must be created in the kabanero namespace and associated with the service account that runs the pipelines. To configure secrets by using the Tekton Dashboard, see Create secrets.
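As a sketch, a basic-auth Git secret annotated for Tekton looks like the following; the secret name is a placeholder, and the tekton.dev/git-0 annotation names the Git host that the credentials apply to:

apiVersion: v1
kind: Secret
metadata:
  name: my-git-credentials                 # placeholder name
  namespace: kabanero
  annotations:
    tekton.dev/git-0: https://github.com   # Git host these credentials apply to
type: kubernetes.io/basic-auth
stringData:
  username: <your-git-username>
  password: <your-git-access-token>

After you apply the secret, add it to the service account that runs the pipelines. The service account name varies by installation, so check which one your pipeline runs use:

oc -n kabanero patch serviceaccount <pipeline-service-account> -p '{"secrets": [{"name": "my-git-credentials"}]}'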

Run pipelines by using the Tekton Dashboard Webhook Extension

You can use the Tekton Dashboard Webhook Extension to drive pipelines that automatically build and deploy an application whenever you update the code in your Git repo. Events such as commits or pull requests can be set up to automatically trigger pipeline runs.

Run pipelines by using a script

If you are developing a new pipeline and want to test it in a tight loop, you might want to use a script or manually drive the pipeline.

  1. Log in to your cluster. For example,

    oc login <master node IP>:8443
  2. Clone the Kabanero pipelines repo

    git clone https://github.com/kabanero-io/kabanero-pipelines
  3. Run the following script with the appropriate parameters

    cd ./pipelines/incubator/manual-pipeline-runs/
    ./manual-pipeline-run-script.sh -r [Git repository of the Appsody project] -i [container registry path of the image to be created] -c [name of the collection whose pipeline you want to run]
    • The following example is configured to use the Docker Hub container registry:

       ./manual-pipeline-run-script.sh -r https://github.com/mygitid/appsody-test-project -i index.docker.io/mydockerid/my-java-microprofile-image -c java-microprofile
    • The following example is configured to use the local OpenShift container registry:

       ./manual-pipeline-run-script.sh -r https://github.com/mygitid/appsody-test-project -i docker-registry.default.svc:5000/kabanero/my-java-microprofile-image -c java-microprofile

Run pipelines manually from the command line

  1. Log in to your cluster. For example,

    oc login <master node IP>:8443
  2. Clone the Kabanero pipelines repo.

    git clone https://github.com/kabanero-io/kabanero-pipelines
    cd kabanero-pipelines
  3. Create Pipeline resources.

    Use the pipeline-resource-template.yaml file to create the PipelineResources. The template is provided in the kabanero-pipelines manual-pipeline-runs directory. Update the docker-image URL, and either keep the sample GitHub repo or point it to your own GitHub repo. A sketch of the finished resources follows step 4.

  4. After you update the file, rename it (for example, to <collection-name>-pipeline-resources.yaml) and apply it as shown in the following example:

    oc apply -f <collection-name>-pipeline-resources.yaml
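Assuming the v1alpha1 Tekton API that these pipelines target, the finished resource file looks roughly like this sketch, with the repository and image URLs as placeholders:

apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: git-source
  namespace: kabanero
spec:
  type: git
  params:
    - name: revision
      value: master
    - name: url
      value: https://github.com/mygitid/appsody-test-project   # your Git repo
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: docker-image
  namespace: kabanero
spec:
  type: image
  params:
    - name: url
      value: index.docker.io/mydockerid/my-java-microprofile-image   # target image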

Activate tasks and pipelines

The installations that activate the featured collections also activate the tasks and pipelines. If you are creating a new task or pipeline, activate it manually, as shown in the following example.

oc apply -f <task.yaml>
oc apply -f <pipeline.yaml>

Run the pipeline

A sample manual-pipeline-run-template.yaml file is provided in the /pipelines/manual-pipeline-runs directory. Rename the template file, for example to <collection-name>-pipeline-run.yaml, and update the file to replace collection-name with the name of your collection. After you update the file, apply it as shown in the following example; a sketch of a completed run file follows the command.

oc apply -f <collection-name>-pipeline-run.yaml
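A completed run file, again assuming the v1alpha1 API and with the pipeline, service account, and resource names as placeholders that must match your cluster, looks roughly like:

apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: java-microprofile-manual-pipeline-run       # placeholder name
  namespace: kabanero
spec:
  serviceAccount: kabanero-pipeline                 # placeholder; newer Tekton versions use serviceAccountName
  pipelineRef:
    name: java-microprofile-build-deploy-pipeline   # placeholder pipeline name
  resources:
    - name: git-source
      resourceRef:
        name: git-source        # the PipelineResource created earlier
    - name: docker-image
      resourceRef:
        name: docker-image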

Run pipelines from the command line for your custom built collections

The following steps explain how to run pipelines against custom built collection images instead of the provided Kabanero collections.

Set up a container registry URL for the custom collection image

By default, pipelines pull the collection images from Docker Hub. If you publish your collection images to a different container registry, use the following process to configure the registry from which your pipelines pull the container images.

  1. After you clone the kabanero-pipelines repository, find the collection-image-registry-map.yaml configmap template file. Add your container registry URL to this file in place of the default-collection-image-registry-url placeholder (a sketch of the finished configmap follows these steps).

    cd kabanero-pipelines/pipelines/common/
    vi collection-image-registry-map.yaml
  2. Apply the configmap file, which sets your container registry.

    oc apply -f collection-image-registry-map.yaml
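The finished configmap might look like the following sketch; the configmap name and data key are assumed from the template file, and the registry URL is a placeholder:

apiVersion: v1
kind: ConfigMap
metadata:
  name: collection-image-registry-map       # assumed to match the template file
  namespace: kabanero
data:
  default-collection-image-registry-url: my-registry.example.com/kabanero   # your registry URL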

Set up a container registry URL for a custom collection image that is stored in a container registry with an internal route URL on the OCP cluster

For an internal OpenShift registry, set up the collection-image-registry-map.yaml file with the internal registry URL.

NOTE: In this case, the service account that is associated with the pipelines must be configured to allow the pipelines to pull from the internal registry without configuring a secret.

Set up a container registry URL for a custom collection image that is stored in a container registry with an external route URL

For a container image with an external container registry route URL, you must set up a Kubernetes secret. To set up this secret, update the default-collection-image-registry-secret.yaml template file with a Base64 formatted username and password and apply it to the cluster, as described in the following steps.

  1. First, update the collection-image-registry-map.yaml file with your container registry URL, as described in step 1 of Set up a container registry URL for the custom collection image.

  2. Find the default-collection-image-registry-secret.yaml template file in the cloned kabanero-pipelines repo (kabanero-pipelines/pipelines/common) and update it with the username and token password for the container registry URL that you specified previously (a sketch of the completed secret follows these steps).

  3. Create a Base64 format version of the username and password for the external route container registry URL.

    echo -n <your-registry-username> | base64
    echo -n <your-registry-password> | base64
  4. Update the default-collection-image-registry-secret.yaml file with the Base64 formatted username and password.

    vi default-collection-image-registry-secret.yaml
  5. Apply the default-collection-image-registry-secret.yaml file to the cluster.

    oc apply -f default-collection-image-registry-secret.yaml
  6. You can now run the pipeline by following the steps in the preceding Run pipelines manually from the command line section.
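For reference, the completed secret might look like this sketch; the name and type are assumed from the template, and the data values are the Base64 strings that you generated in step 3:

apiVersion: v1
kind: Secret
metadata:
  name: default-collection-image-registry-secret   # assumed to match the template file
  namespace: kabanero
type: kubernetes.io/basic-auth                     # assumed; keep whatever type the template declares
data:
  username: <base64-encoded-username>
  password: <base64-encoded-password>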

Check the status of the pipeline run

You can check the status of the pipeline run from the OKD console, command line, or Tekton dashboard.

Check pipeline run status from the Tekton dashboard

  1. Log in to the Tekton Dashboard and click 'Pipeline runs' in the sidebar menu.

  2. Find your pipeline run in the list and click it to check the status and find logs. You can see logs and status for each step and task.

Check pipeline run status from the command line

Enter the following commands in the terminal:

oc get pipelineruns
oc -n kabanero describe pipelinerun.tekton.dev/<pipeline-run-name>

You can also see the pods for the pipeline runs, against which you can run oc describe and oc logs to get more details.
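For example, to drill into a single run, assuming the pipeline run name from the previous command:

oc -n kabanero get pods | grep <pipeline-run-name>
oc -n kabanero describe pod <pod-name>
oc -n kabanero logs <pod-name> --all-containers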

If the pipeline run was successful, you can see a Docker image in your Docker registry and a pod that is running your application.
