
Working with pipelines

Duration: 30 minutes

Introduction

Pipelines enable a continuous integration and continuous delivery (CI/CD) workflow. A set of default tasks and pipelines are provided that can be associated with application stacks. These pipelines leverage steps and tasks that provide the following capabilities:

  • build the application stack
  • enforce the governance policy
  • publish the image to a container registry
  • scan the published image
  • sign the image
  • retag an image
  • deploy the application to the Kubernetes cluster
  • promote a service to a GitOps repository (This feature is Tech Preview in this release)
  • deploy a complete microservice scenario using Kustomize (This feature is Tech Preview in this release)

You can also create your own tasks and pipelines and customize the pre-built pipelines and tasks. All tasks and pipelines are activated by the product operator.

To learn more about pipelines and creating new tasks, see the pipeline tutorial.

Kabanero tasks and pipelines

A set of Kabanero tasks and pipelines are provided in the Kabanero pipelines repository. Details of some of the primary pipelines and tasks are described in the following sections.

Events pipelines

The tasks and pipelines provided in the Kabanero pipelines repository events directory are geared to work with the Kabanero events operator. Follow the instructions in the Integrating the events operator guide to set up the organization webhook and EventMediator to drive these pipelines.

There are four primary pipelines that help illustrate the following workflow:

  1. A developer makes an update to the application and creates a new pull request

    This action triggers the build-pl pipeline, which builds the application code and builds the application image using the build-task. The pull request (PR) is updated with the results of the build pipeline.

  2. The pull request is then merged into the master branch

    This action triggers the build-push-promote-pl.yaml pipeline, which completes the following tasks:

    • enforces the governance policy
    • builds the code
    • signs the image (optional)
    • pushes the image to the image registry
    • scans the image
    • deploys the image on the cluster (optional)
    • promotes the service to the configured GitOps repository (optional)

    The pipeline invokes the following tasks to accomplish these steps:

    • build-push-promote-task.yaml: This task runs a pre-build governance policy check to validate that the stack version in the application repository is allowed to build, based on the governance policy that is configured. If successful, a container image is built from the artifacts in the GitHub source repository by using the appsody build command. This command leverages Buildah to build the image and generates the app-deploy.yaml that is used for deployment. If there is already a copy of the app-deploy.yaml file in the source repository, it is merged with the new one generated by this step. After the image is built, the image is signed if signing has been configured. For more information on configuring image signing, which is an optional step, see the image signing operator. The image is then pushed to the configured image registry.

      (Tech preview feature) A ConfigMap called gitops-map in the Kabanero namespace can optionally be configured to promote the service to a GitOps repository after the build. This step invokes the services promote command to create a PR with the updated app-deploy.yaml file in the configured GitOps repository. The following key-value pairs should be set up in the ConfigMap:

      kind: ConfigMap
      apiVersion: v1
      metadata:
        name: gitops-map
        namespace: kabanero
      data:
        gitops-repository-url: <can be specified here if common for all the pipelines in the cluster or in the event mediator for specific versions of the pipeline>
        gitops-repository-type: <github,gitlab,ghe>
        gitops-commit-user-name: <user_name_to_commit_using>
        gitops-commit-user-email: <user_email_to_commit_using>
      

      A secret called gitops-token must also be created in the Kabanero namespace. The following example yaml creates the secret.

      apiVersion: v1
      kind: Secret
      metadata:
        name: gitops-token
        namespace: kabanero
        annotations:
          tekton.dev/git-0: https://github.com
      type: kubernetes.io/basic-auth
      stringData:
        username: <gitops_repo_username>
        password: <gitops_repo_access_token>
      
    • deploy-task.yaml: If the webhooks-tekton-local-deploy property is set to true in the mediator, the image is deployed to the namespace configured in the app-deploy.yaml. By default, the application is deployed in the kabanero namespace.

    • image-scan-task.yaml: The image-scan-task task initiates a container scan of the image published by the build-push-task using OpenSCAP. The results of the scan are published in the logs of the task.

  3. A release of the application is created

    This event triggers the image-retag-pl.yaml pipeline, which leverages the image-retag-task.yaml to create a new tag of the image to match with the git release.

  4. (Tech preview feature) The pull request in the GitOps repository is merged

    When the PR that was created by the promote step of the build-push-promote-pl is merged in the GitOps repository, it triggers the deploy-kustomize-pl.yaml pipeline, which leverages the deploy-kustomize-task.yaml to trigger a deployment to the environment configured in the GitOps repository.

Incubator pipelines

The Kabanero tasks and pipelines that are provided in the incubator directory of the Kabanero pipelines repository illustrate workflows that work best with the Tekton webhooks extension.

Details of some of the primary pipelines and tasks:

  • build-deploy-pl.yaml

    This is the primary pipeline, and it showcases the majority of the tasks supplied in the Kabanero pipelines incubator directory. It enforces the governance policy, builds the code, optionally signs the image, pushes it to the image registry, scans the image, and can conditionally deploy the image on the cluster. When the pipeline runs via a webhook, it leverages the triggers functionality to deploy the application only when a pull request is merged in the GitHub repository. Other triggering actions run the pipeline to enforce the governance policy and to build, push, and scan the image.

  • build-push-task.yaml

    This task builds a container image from the artifacts in the git-source repository by using appsody build. The appsody build command leverages Buildah to build the image. After the image is built, it is published to the configured container registry. The build-push-task also generates the app-deploy.yaml file that is used by the deploy-task. If a copy of the app-deploy.yaml file already exists in the source repository, it is merged with the new one generated by this step.

    To enable image signing, see the image signing operator documentation.

  • deploy-task.yaml

    Deploy-task uses the app-deploy.yaml file to deploy the application to the cluster by using the application deployment operator. By default, the pipelines run and deploy the application in the kabanero namespace. If you want to deploy the application in a different namespace, update the app-deploy.yaml file to point to that namespace.

  • image-scan-task.yaml

    The image-scan-task task initiates a container scan of the image published by the build-push-task using OpenSCAP. The results of the scan are published in the logs of the task.
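The deploy-task described above reads the target namespace from the app-deploy.yaml file. As a minimal, hypothetical sketch of a namespace override (the application name, namespace, and image below are placeholders, not values from this guide):

```yaml
apiVersion: appsody.dev/v1beta1
kind: AppsodyApplication
metadata:
  name: my-app              # placeholder application name
  namespace: my-team-dev    # target namespace; the pipelines deploy to kabanero by default
spec:
  applicationImage: image-registry.example.com/my-team/my-app:latest  # placeholder image
```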

For more tasks and pipelines, see the kabanero-pipelines repository.

Experimental GitOps pipelines (Tech Preview)

There are two pipelines in the experimental GitOps directory that demonstrate GitOps workflows when using the Tekton webhooks extension.

The build-push-promote-pl.yaml pipeline runs the following tasks:

  • enforces the governance policy
  • builds the code
  • signs the image (optional)
  • pushes the image to the image registry
  • scans the image
  • promotes the service to the configured GitOps repository (optional)

The pipeline invokes the following tasks to accomplish the steps listed:

  • build-push-promote-task.yaml: This task first runs a pre-build governance policy check to validate that the stack version in the application repository is allowed to build, based on the governance policy that is configured. It then builds a container image from the artifacts in the git-source repository by using appsody build. The appsody build command leverages Buildah to build the image. The image is then optionally signed and pushed to the configured image registry.

    A ConfigMap called gitops-map in the Kabanero namespace can optionally be configured to promote the service to a GitOps repository after the build. This step invokes the services promote command to create a PR with the updated app-deploy.yaml file in the configured GitOps repository. The following key-value pairs should be set up in the ConfigMap:

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: gitops-map
      namespace: kabanero
    data:
      gitops-repository-url: <can be specified here if common for all the pipelines in the cluster or in the event mediator for specific versions of the pipeline>
      gitops-repository-type: <github,gitlab,ghe>
      gitops-commit-user-name: <user_name_to_commit_using>
      gitops-commit-user-email: <user_email_to_commit_using>
    

    A secret called gitops-token must also be created in the Kabanero namespace. This example yaml creates the secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: gitops-token
      namespace: kabanero
      annotations:
        tekton.dev/git-0: https://github.com
    type: kubernetes.io/basic-auth
    stringData:
      username: <gitops_repo_username>
      password: <gitops_repo_access_token>
    
  • image-scan-task.yaml: The image-scan-task task initiates a container scan of the image published by the build-push-task using OpenSCAP. The results of the scan are published in the logs of the task.

The deploy-kustomize-pl.yaml pipeline leverages the deploy-kustomize-task.yaml task to trigger a deployment to the environment configured in the GitOps repository.

Before running pipelines

Before you run a pipeline, a number of tasks might be required. These tasks are covered in the following sections.

Associating pipelines with application stacks in the Kabanero custom resource

The pipelines can be associated with an application stack in the Kabanero custom resource definition (CRD). Use the examples shown to update your configuration.

  • Events pipelines

    When the product operator activates the Kabanero CRD, it associates the pipelines in the pipelines archive with each of the stacks in the stack hub. The default pipelines are intended to work with all the stacks in the stack hub. When the operator activates the pipeline resources (such as the tasks, trigger bindings, and pipelines) in the archive, it suffixes the name of each resource with the shortened digest of the pipelines archive. This configuration provides an easy way to keep multiple versions of the same pipeline active on the cluster.

    Example:

      apiVersion: kabanero.io/v1alpha2
      kind: Kabanero
      metadata:
        name: kabanero
      spec:
        version: "0.9.1"
        stacks:
          repositories:
          - name: central
            https:
              url: https://github.com/kabanero-io/collections/releases/download/0.9.0/kabanero-index.yaml
          pipelines:
          - id: default
            sha256: caf603b69095ec3d128f1c2fa964a2964509854e306fb3c5add8addc8f7f7b71
            https:
              url: https://github.com/kabanero-io/kabanero-pipelines/releases/download/0.9.1/kabanero-events-pipelines.tar.gz
    
  • Incubator pipelines

    When the product operator activates the CRD, it associates the pipelines in the pipelines archive with each of the stacks in the stack hub. The default pipelines are intended to work with all the stacks in the stack hub. The names of all pipeline-related resources (such as the tasks, trigger bindings, and pipelines) are prefixed with the keyword StackId. When the operator activates these resources, it replaces the keyword with the name of the stack that it is activating. These pipelines are the default set that is activated when you install the default.yaml Kabanero CR.

    Example:

      apiVersion: kabanero.io/v1alpha2
      kind: Kabanero
      metadata:
        name: kabanero
        namespace: kabanero
      spec:
        version: "0.9.1"
        stacks:
          repositories:
          - name: central
            https:
              url: https://github.com/kabanero-io/stacks/releases/download/0.9.0/kabanero-index.yaml
          pipelines:
          - id: default
            sha256: deb5162495e1fe60ab52632f0879f9c9b95e943066590574865138791cbe948f
            https:
              url: https://github.com/kabanero-io/kabanero-pipelines/releases/download/0.9.1/default-kabanero-pipelines.tar.gz
    
  • Experimental GitOps pipelines (Tech Preview)

You can use the experimental GitOps pipelines when you want to use Tekton webhooks to drive GitOps actions in pipelines. Unlike the other pipelines, the GitOps pipelines are not associated with individual stacks; instead, they are scoped to the instance so that they can build, promote, and deploy all the stacks. Because they are scoped to the instance, they must be specified in the gitops.pipelines section of the Kabanero CR.

Example:

```yaml
apiVersion: kabanero.io/v1alpha2
kind: Kabanero
metadata:
  name: kabanero
  namespace: kabanero
spec:
  version: "0.9.0"
  gitops:
    pipelines:
      - id: gitops-pipelines
        sha256: 683e8a05482a166ad4d76b6358227d3807a66e7edd8bc80483d6a88bca6c4095
        https:
          url: https://github.com/kabanero-io/kabanero-pipelines/releases/download/0.9.0/kabanero-gitops-pipelines.tar.gz
  stacks:
    repositories:
    - name: central
      https:
        url: https://github.com/kabanero-io/stacks/releases/download/0.9.0/kabanero-index.yaml
```

Creating and updating your own tasks and pipelines

The default tasks and pipelines can be updated by forking the Kabanero pipelines repository and editing the files under pipelines/. An easy way to generate the archive for use by the Kabanero CRD is to run the package.sh script from the root directory of the pipelines project. The script generates archive files with the necessary pipeline artifacts and a manifest.yaml file that describes the contents of each archive. The archives are generated under ci/assets, with separate archives for the legacy incubator pipelines, the events pipelines, and the experimental GitOps pipelines.

Alternatively, you can run the Travis build against a release of your pipelines repository, which also generates the archive file with a manifest.yaml file and attaches it to your release.

For more detailed instructions, see Curating Pipelines.

Using stacks published to internal and private registries in pipelines

If you publish your application stack images to any registry other than Docker Hub, you can specify your custom registry when you initialize a stack by using the --stack-registry option on the appsody init command. Specifying a custom registry updates the stack name in the .appsody-config.yaml file to include the registry information, which is consumed by the pipeline.
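For example, assuming a private registry at registry.example.com:5000 and the java-openliberty stack (both placeholders), the resulting .appsody-config.yaml might record the registry as part of the stack name:

```yaml
# .appsody-config.yaml, after a hypothetical:
#   appsody init java-openliberty --stack-registry registry.example.com:5000
stack: registry.example.com:5000/kabanero/java-openliberty:0.2
```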

Transport layer security (TLS) verification for image registry access in pipelines

When pipelines access image registries to pull and push images, they use the configuration that you established in the cluster resource, image.config.openshift.io/cluster. The following scenarios describe image registries that can be used and the setup steps they require.

Enable TLS verification for private image registries

If you use a private image registry and your registry uses certificates that are signed by trusted CA authorities, no further configuration is needed to enable TLS verification. Review the default truststore on the nodes of your cluster to ensure that you have the CA of your certificate in the list. With self-signed certificates, you must ensure that the CA certificate is added to the appropriate config map. Use the steps that follow to add the CA to the appropriate configmap.

  • Ensure that you have access to the ca.crt files for your private registries.

  • Use the instructions for adding certificate authorities to the cluster to create a configmap with the key as your private registry hostname and value as the content of your private registry ca.crt file. This configmap must be present in the openshift-config namespace.
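As a sketch, the resulting configmap might look like the following, where the configmap name, registry hostname, and certificate content are placeholders. Each key in data is a registry hostname; for registries on a non-standard port, OpenShift expects the port separated by two dots in the key (for example, myregistry.example.com..5000).

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: registry-cas           # placeholder name
  namespace: openshift-config  # must be created in this namespace
data:
  myregistry.example.com: |    # key: your private registry hostname
    -----BEGIN CERTIFICATE-----
    <contents of your registry's ca.crt>
    -----END CERTIFICATE-----
```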

Enable TLS verification for Internal OpenShift image registry external routes

When you use the internal OpenShift image registry that is provided in your OCP cluster and want to access it by using the external route, complete the following steps:

  • If you do not have an external route set up for your internal image registry on your cluster, run an oc patch command to enable the default external route:

    oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec":{"defaultRoute":true}}'

  • Run the oc get command to verify that you have the external route, and check externalRegistryHostnames in the output:

    oc get image.config.openshift.io/cluster -o yaml

Sample output

# oc get image.config.openshift.io/cluster -o yaml
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  annotations:
    release.openshift.io/create-only: "true"
  creationTimestamp: "2020-03-24T16:34:58Z"
  generation: 11
  name: cluster
  resourceVersion: "8570660"
  selfLink: /apis/config.openshift.io/v1/images/cluster
  uid: 7ba0dae1-c579-4adf-828d-44b0c3652bae
spec: {}
status:
  externalRegistryHostnames:
  - default-route-openshift-image-registry.apps.incult.os.fyre.ibm.com
  internalRegistryHostname: image-registry.openshift-image-registry.svc:5000
  • After the external route is set up, you can find the CA certificate value for the external route hostname by running the following command on the cluster:
oc get configmap -n openshift-image-registry

NAME                          DATA   AGE
image-registry-certificates   2      7d
serviceca                     1      7d
trusted-ca                    1      7d

From the results of this command, search the content of the image-registry-certificates configmap to find the relevant external route certificate.

  • After you get the certificate value for the internal image registry external route from the previous step, use the steps in the Private image registries section to create a configmap in the openshift-config namespace, with the key as your internal registry external route hostname and the value as the content of your external route certificate.

Disable TLS verification for image registry access

If your private or internal registry does not have a valid TLS certificate, or supports only HTTP connections, you must disable TLS verification for that registry before your cluster can pull images from it. Use the oc patch command with your registry to accomplish this. If you use the oc patch command for multiple image registries, separate the registry URL entries with commas.

oc patch --type=merge --patch='{
 "spec": {
   "registrySources": {
     "insecureRegistries": [
       "<registry>", "abc.com" , "pqr.com"
     ]
   }
 }
}' image.config.openshift.io/cluster

NOTE: When accessing the internal registry using the internal route, the pipelines disable TLS verification by default.

Running pipelines

Explore how to use pipelines to build and manage application stacks.

Prerequisites

  1. Kabanero foundation must be installed on a supported Kubernetes deployment.

  2. A persistent volume must be configured. See the Getting started section for details.

  3. Secrets must be configured for the GitHub repository (if it is private) and for the image repository.

Getting started

Follow these steps:

  1. Set up a persistent volume to run pipelines

    Pipelines require a configured volume that is used by the framework to share data across tasks. The pipeline run creates a Persistent Volume Claim (PVC) with a requirement for five GB of persistent volume.

    • Static persistent volumes

      If you are not running your cluster on a public cloud, you can set up a static persistent volume using NFS. For an example of how to use static persistent volume provisioning, see Static persistent volumes.

    • Dynamic volume provisioning

      If you run your cluster on a public cloud, you can set up a dynamic persistent volume by using your cloud provider’s default storage class. For an example of how to use dynamic persistent volume provisioning, see Dynamic volume provisioning.
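For reference, a minimal static NFS PersistentVolume sketch; the volume name, NFS server, and export path are placeholders, and the capacity must cover the 5 GB that the pipeline run's PVC requests:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kabanero-pipelines-pv   # placeholder name
spec:
  capacity:
    storage: 5Gi                # at least the 5 GB the pipeline PVC requests
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: nfs.example.com     # placeholder NFS server address
    path: /exports/pipelines    # placeholder export path
```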

  2. Create secrets

    Git secrets must be created in the kabanero namespace and associated with the service account that runs the pipelines. To configure secrets using the pipelines dashboard, see Create secrets.

    Alternatively, you can configure secrets in the Kubernetes console or set them up by using the Kubernetes CLI.
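For example, a Git secret created from the CLI might look like the following sketch; the secret name and credentials are placeholders, and the tekton.dev/git-0 annotation tells Tekton which Git host the credentials apply to:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-git-secret          # placeholder name
  namespace: kabanero
  annotations:
    tekton.dev/git-0: https://github.com  # Git host these credentials apply to
type: kubernetes.io/basic-auth
stringData:
  username: <git_username>
  password: <git_access_token>
```

After you create the secret, associate it with the service account that runs the pipelines, for example with oc secrets link <pipeline-service-account> my-git-secret -n kabanero; the service account name depends on your installation.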

Running pipelines by using the Kabanero events operator webhooks

You can create an organization webhook to automatically drive the pipelines based on events in the GitHub repo. Events such as commits, tagging, or pull requests can be set to automatically trigger pipeline runs. Follow the instructions in the Integrating events operator guide to set up your webhook.

Running pipelines by using the Tekton pipelines dashboard webhook extension

You can use the pipelines dashboard webhook extension to drive pipelines that automatically build and deploy an application whenever you update the code in your GitHub repository. Events such as commits or pull requests can be set up to automatically trigger pipeline runs.

Running pipelines by using a script

If you are developing a new pipeline and want to test it in a tight loop, you can use a script or manually drive the pipeline.

  1. Log in to your cluster. For example,

    oc login <master node IP>:8443
    
  2. Clone the pipelines repository:

    git clone https://github.com/kabanero-io/kabanero-pipelines
    
  3. Run this script with the appropriate parameters:

    cd ./pipelines/sample-helper-files
    ./manual-pipeline-run-script.sh -r [git repo of the Appsody project] -i [docker registry path of the image to be created] -c [application stack name of the pipeline to be run]
    
    • When using the dockerhub container registry:

        ./manual-pipeline-run-script.sh -r https://github.com/mygitid/appsody-test-project -i index.docker.io/mydockeid/my-java-openliberty-image -c java-openliberty
      
    • When using the local OpenShift container registry:

        ./manual-pipeline-run-script.sh -r https://github.com/mygitid/appsody-test-project -i docker-registry.default.svc:5000/kabanero/my-java-openliberty-image -c java-openliberty
      

Running pipelines manually from the command line

Follow these steps to run a pipeline directly from the command line:

  1. Log in to your cluster. For example,

    oc login <master node IP>:8443
    
  2. Clone the pipelines repo.

    git clone https://github.com/kabanero-io/kabanero-pipelines
    cd kabanero-pipelines
    
  3. Create pipeline resources.

    Use the pipeline-resource-template.yaml file to create the PipelineResources. The template is provided in the /pipelines/sample-helper-files directory of the pipelines repository. Update the docker-image URL. You can use the sample GitHub repository or update it to point to your own GitHub repository.
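    The resulting resources follow the standard Tekton v1alpha1 PipelineResource shape. A hypothetical pair, where the resource names are placeholders and the URLs reuse the sample values from this guide:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: git-source-java-openliberty      # placeholder name
  namespace: kabanero
spec:
  type: git
  params:
  - name: url
    value: https://github.com/mygitid/appsody-test-project
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: docker-image-java-openliberty    # placeholder name
  namespace: kabanero
spec:
  type: image
  params:
  - name: url
    value: image-registry.openshift-image-registry.svc:5000/kabanero/my-java-openliberty-image
```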

  4. After you update the file, apply it as shown in this example:

    oc apply -f <stack-name>-pipeline-resources.yaml
    

Activating tasks and pipelines

The installations that activate the featured application stacks also activate the tasks and pipelines. If you are creating a new task or pipeline, activate it manually, as shown in this example:

oc apply -f <task.yaml>
oc apply -f <pipeline.yaml>

Running the pipeline

A sample manual-pipeline-run-template.yaml file is provided in the /pipelines/sample-helper-files directory. Rename the template file to a name of your choice (for example, pipeline-run.yaml), and update the file to replace application-stack-name with the name of your application stack. After you update the file, run it as shown in this example:

oc apply -f <application-stack-name>-pipeline-run.yaml

Checking the status of the pipeline run

You can check the status of the pipeline run from the Kubernetes console, command line, or pipelines dashboard.

Checking pipeline run status from the pipelines dashboard

  1. Log in to the pipelines dashboard and click Pipeline runs in the sidebar menu.

  2. Find your pipeline run in the list and click it to check the status and find logs. You can see logs and status for each step and task.

Checking pipeline run status from the command line

Enter the following command in the terminal:

oc get pipelineruns
oc -n kabanero describe pipelinerun.tekton.dev/<pipeline-run-name>

You can also see the pods for the pipeline runs, and you can run oc describe and oc logs against them to get more details.
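For example, Tekton labels the pods it creates with the name of the pipeline run, so you can locate and inspect them as follows (the run and pod names are placeholders):

```shell
# List the pods that belong to a pipeline run
oc get pods -n kabanero -l tekton.dev/pipelineRun=<pipeline-run-name>

# Describe a pod, then stream the logs from all of its step containers
oc describe pod <pod-name> -n kabanero
oc logs <pod-name> -n kabanero --all-containers
```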

If the pipeline run was successful, you can see a Docker image in your Docker registry and a pod that is running your application.

Troubleshooting

To find solutions for common issues and troubleshoot problems with pipelines, see the Pipelines Troubleshooting Guide.
