Hands-On
by Joan Flotats

CI/CD Hands-On: A Simple But Functional Continuous Integration Workflow [Part 1]

CI/CD is a well-established software development practice. The Internet is full of articles and pages about CI/CD, and they almost always feature the same CI/CD diagram. I bet you know the one I'm talking about.

I have read dozens of articles on the topic and lived through the implementation of an end-to-end CI/CD pipeline. The reality is that implementing a CI/CD pipeline is far more complex than reading articles, understanding the overall picture, and applying the theory. Developing a CI/CD pipeline requires an interdisciplinary, experienced team.

This article explains how to build a minimum viable CI pipeline for a Python application. You can adapt its content to other languages and requirements. The example uses FastAPI and GitHub Actions.

GitHub Example: https://github.com/joan-mido-qa/continuous-integration-example

CI: Continuous Integration

Let me add my two cents to the existing continuous integration definitions: continuous integration means regularly merging automatically tested, approved, and deliverable code changes into the project repository.

This example uses GitHub Actions to automatically execute the required checks on each ‘Pull Request’ or ‘Push to Main’ event, guaranteeing that the code sticks to the repository quality standards. The market offers a diverse collection of CI/CD tools: Jenkins, Travis CI, CircleCI, GitLab, etc. Choose the one that best fits your pipeline requirements.

The example workflow checks that the new code follows the formatting rules by running Pre-commit. It then executes the small tests using Pytest and, finally, the medium ones by installing the application Helm Chart on a KinD cluster.
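As a sketch, a workflow with those triggers and jobs could look like the following (the job layout, Python version, and test paths are assumptions, not the repository's actual workflow):

name: ci

on:
  push:
    branches: [main]
  pull_request:

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pre-commit
      - run: pre-commit run --all-files

  small-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install ".[test]"  # assumed test extras
      - run: pytest tests/small

  medium-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - uses: helm/kind-action@v1  # creates a KinD cluster for the job
      - run: pytest tests/medium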

Your continuous integration workflow will depend on your team size, maturity, application requirements, and branching strategy.

Static Code Analysis

Static analysis tools examine code changes without executing them. They check that your code sticks to the formatting rules, does not use deprecated or compromised dependencies, and stays readable and simple. Depending on the programming language, they can also flag coding anti-patterns and likely bugs.

We will explain how to install, configure, and run Pre-commit. You can combine Pre-commit with other analysis tools like Sonar or Snyk.

Pre-commit

Pre-commit is a tool written in Python. Configuring it for your repository is as simple as creating a YAML file and listing the versioned hooks you want to run before every commit. Pre-commit automatically manages the dependencies required by the hooks and auto-fixes the errors it finds. It supports multiple file types: JSON, YAML, tf, py, ts, etc.

Save infrastructure costs by running your code checks locally before pushing. You can also run Pre-commit on your CI to check the format of the pushed code.

Install, configure, and run the Pre-commit tool:

# .pre-commit-config.yaml
repos:
-   repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v2.3.0
    hooks:
    -   id: check-yaml
    -   id: end-of-file-fixer
    -   id: trailing-whitespace

$ pip install pre-commit
$ pre-commit install
$ pre-commit run --all-files

Python Hook Suggestions:

  • Mypy: Static type checker for Python
  • Ruff: Fast linter and code formatter for Python
  • Refurb: Suggests coding best practices for Python
  • Commitizen: Enforces standard commit messages and manages versioning
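As a sketch, these hooks could be appended to the same configuration file (the rev values below are illustrative; pin the versions you actually use):

repos:
-   repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.10.0
    hooks:
    -   id: mypy
-   repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.4.4
    hooks:
    -   id: ruff
-   repo: https://github.com/commitizen-tools/commitizen
    rev: v3.27.0
    hooks:
    -   id: commitizen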

Test

Unit, Integration, and End-to-End testing definitions and scopes are sometimes diffuse. As I did with the continuous integration description, I will add my two cents to the test sizes described in Software Engineering at Google:

  • Small: Fast tests that cover small pieces of code. They use test doubles or mocked environments (e.g. SQLite) and do not require building any artifact. Time: ~60 seconds.
  • Medium: Tests of the interaction between multiple pieces of code. They may include building the artifacts, using third-party artifacts (e.g. a database), and connecting to the localhost network. They use faked environments (e.g. docker-compose, KinD, Minikube) or external services (e.g. Azure Blob Storage, AWS S3). Time: ~300 seconds.
  • Large: Tests that use production-like environments (e.g. performance testing). Time: 900+ seconds.

Whether to include medium/large tests in your continuous integration pipeline depends on your requirements.

Small

The example uses Pytest to run the tests and FastAPI's TestClient to mock the environment. No secrets here: your language's testing tooling should provide all the dependencies required to test your application.
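A minimal sketch of a small test, assuming the FastAPI application is exposed as app in a main module with a /health endpoint (the module, endpoint, and file names are hypothetical):

# test_health.py: a small test against the FastAPI app, no server required.
from fastapi.testclient import TestClient

from main import app  # hypothetical module exposing the FastAPI app

client = TestClient(app)


def test_health() -> None:
    # TestClient dispatches the request directly to the app, no network involved
    response = client.get("/health")
    assert response.status_code == 200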

Additionally, you can enforce a minimum test coverage and upload the coverage report as part of your results. Test coverage is a tricky metric: high coverage does not necessarily mean well-tested code, but 50% coverage is better than 0%.
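For instance, with the pytest-cov plugin you could fail the run when coverage drops below a threshold (the package name app and the 50% threshold are placeholders):

$ pip install pytest-cov
$ pytest --cov=app --cov-fail-under=50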

Medium

KinD (Kubernetes in Docker) is a lightweight Kubernetes cluster that runs its nodes as Docker containers, intended for local development or CI. We use KinD to set up a testing environment and run the tests against it (a sketch of the first steps follows the list):

  1. Create the KinD cluster
  2. Build the Docker image
  3. Load the Docker image into KinD
  4. Install MetalLB and apply the required custom resources
  5. Install Ingress-Nginx
  6. Install your Helm Chart
  7. Set up your OS host
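A sketch of the first two steps, assuming a cluster named ci and an image named ci-example:local (both names are placeholders; the remaining steps are covered below):

$ kind create cluster --name ci
$ docker build -t ci-example:local .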

Load Docker Images

KinD will fail to pull your image because it is not available from any registry. The image must be loaded into the cluster before it can be used.
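Using the placeholder names from above, the load step could look like this:

$ kind load docker-image ci-example:local --name ci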

MetalLB

MetalLB is a load-balancer implementation for bare-metal Kubernetes clusters. Read more about why a load-balancer is required on the MetalLB web page.

Once MetalLB is installed using its Helm Chart, we can create the required custom resources:

---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: kind-advertisement
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kind-address-pool
spec:
  addresses:
  - "172.26.255.0/24"

Docker creates a subnet for the KinD cluster (e.g. 172.26.0.0/16). Inspect the KinD Docker network to find the assigned IP address range and use a range within it as the value for the IPAddressPool resource. More info about MetalLB configuration can be found on the KinD web page.
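For example, you can print the subnets of the kind Docker network like this:

$ docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}} {{end}}' kind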

Expose Application

Install the Ingress-Nginx Helm Chart. Then, install your application Helm Chart, defining an Ingress object: set the ingressClassName property to nginx and define a host (e.g. api.local). Finally, modify /etc/hosts to append the following line, replacing the IP with the external address MetalLB assigned to the Ingress-Nginx service:

 192.168.1.10 api.local

You can define as many hosts as you want pointing to the same address; Nginx will do the rest.
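A sketch of the Ingress object the application chart could render (the resource name, service name, and port are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ci-example
spec:
  ingressClassName: nginx
  rules:
  - host: api.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ci-example
            port:
              number: 80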

Develop a tool to start, update, and delete a local environment using KinD. Developers can use it to debug the application, reproduce reported bugs locally, or run the tests on CI.

This example works for Linux-based distributions. It may not work as-is on Windows or macOS; changes may be required.

Delivery

Before delivering the required artifacts, the workflow executes the linting and testing steps.

We use Commitizen to manage the releases of the artifacts. Commitizen automatically bumps the artifact version and pushes the changes, creating a new Git tag with the configured tag format. You can also configure Commitizen to update your changelog with the latest changes.

# pyproject.toml
[tool.commitizen]
tag_format = "v$major.$minor.$patch"
version_scheme = "semver"
version_provider = "pep621"
major_version_zero = true
update_changelog_on_bump = true
version_files = [
    "charts/ci-example/Chart.yaml:version",
    "charts/ci-example/Chart.yaml:appVersion"
]

The workflow uses the version output by Commitizen to set the Docker Image and Helm Chart tags.
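A sketch of how that could look in a release step (cz is Commitizen's CLI; the registry and image name are placeholders):

$ cz bump --yes
$ VERSION=$(cz version --project)
$ docker build -t registry.example.com/ci-example:v$VERSION .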

You can use different versions for each artifact (Image and Chart), but then your Chart and Image changes must be backward compatible, which adds complexity to the development and release process. To avoid that, we use the same version for both artifacts.

Conclusions

This article sketches out a simple but functional continuous integration workflow. It may need changes to work for other programming languages or to fit your requirements, but some steps should be easily exportable and work as they are.

CI/CD Hands-on: Continuous Deployment [Part 2] Coming Soon …
