With service accounts, you can deploy your DAGs with the continuous integration/continuous deployment (CI/CD) tool of your choice. This guide will walk you through configuring your CI/CD tool to use an Astronomer EE service account to build and push your Airflow project Docker images to the private Docker registry that is installed with Astronomer EE.

For background information and best practices on CI/CD, we recommend reading the article An Introduction to CI/CD Best Practices from DigitalOcean.

Steps for Setting up CI/CD with Your Astronomer Airflow Project

Before we get started, this guide assumes you have installed Astronomer Enterprise Edition or are using Astronomer Cloud Edition, have the astro-cli v0.6.0 or newer installed locally, and are familiar with your CI/CD tool of choice. You can check your astro-cli version with the astro version command.
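If you want to verify the minimum version from a script, you can compare the reported version string against 0.6.0 with a small shell check. A minimal sketch, assuming the installed version string below (substitute the actual output of astro version):

```shell
# Hypothetical version string; replace with the version reported by `astro version`.
INSTALLED="0.7.2"
REQUIRED="0.6.0"

# `sort -V` orders version strings numerically; if REQUIRED sorts first
# (or ties), the installed CLI meets the minimum.
if [ "$(printf '%s\n%s\n' "$REQUIRED" "$INSTALLED" | sort -V | head -n 1)" = "$REQUIRED" ]; then
  echo "astro-cli version OK"
else
  echo "astro-cli is older than $REQUIRED; please upgrade"
fi
```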

Create a Service Account

In order to authenticate your CI/CD pipeline to the private Docker registry, you'll need to create a service account. This service account access can be revoked at any time by deleting the service account through the astro-cli or orbit-ui.

Note that you're able to create service accounts at both the Workspace and Deployment level. Creating one at the Workspace level lets you customize how your deployment pipeline works and deploy to multiple Airflow instances with one push, while creating one at the Deployment level ensures that your CI/CD pipeline will only deploy to that specific cluster.

Here are a few examples of creating service accounts with various permission levels via the Astronomer CLI.

Deployment Level Service Account

astro service-account create -d [DEPLOYMENTUUID] --label [SERVICEACCOUNTLABEL]

Workspace Level Service Account

astro service-account create -w [WORKSPACEUUID] -l [SERVICEACCOUNTLABEL]

System Level Service Account

astro service-account create -s --label [SERVICEACCOUNTLABEL]

If you prefer to provision a service account through the orbit-ui, you can create one on the project configuration page (replacing [BaseDomain] with your configured base domain in the URL).

In either case, this will output an API key that will be used for the CI/CD process.
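In a shell-based pipeline, one way to handle the key is to expose it as an environment variable injected from your CI tool's secret store. A minimal sketch, where the variable name API_KEY_SECRET and the key value are hypothetical:

```shell
# Hypothetical key value; in practice, inject this from your CI tool's
# secret store rather than hard-coding it in the pipeline definition.
API_KEY_SECRET="example-api-key-from-astronomer"
export API_KEY_SECRET

# Fail fast if the secret was not injected before the deploy steps run.
if [ -z "$API_KEY_SECRET" ]; then
  echo "API_KEY_SECRET is not set" >&2
  exit 1
fi
echo "API key is available to the pipeline"
```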


Configuring Your CI/CD Pipeline

Depending on your CI/CD tool, configuration will differ slightly. This section focuses on what needs to be accomplished, not the specifics of how. At its core, your CI/CD pipeline will authenticate to the private registry installed with the platform, then build, tag, and push an image to that registry.

An example pipeline (using DroneCI) could look like:

    pipeline:
      build:
        image: astronomerio/ap-build:0.0.7
        commands:
          - docker build -t registry.${BASE_DOMAIN}/${RELEASE_NAME}/airflow:ci-${DRONE_BUILD_NUMBER} .
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        when:
          event: push
          branch: [ master, release-* ]

      push:
        image: astronomerio/ap-build:0.0.7
        commands:
          - echo $${DOCKER_PASSWORD_TEST}
          - docker login registry.${BASE_DOMAIN} -u _ -p $${DOCKER_PASSWORD_TEST}
          - docker push registry.${BASE_DOMAIN}/${RELEASE_NAME}/airflow:ci-${DRONE_BUILD_NUMBER}
        secrets: [ docker_password_test ]
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        when:
          event: push
          branch: [ master, release-* ]

If you are using CircleCI, it might look like:

    # Python CircleCI 2.0 configuration file
    # Check the CircleCI docs for more details
    version: 2
    jobs:
      build:
        machine: true
        steps:
          - checkout
          - restore_cache:
              keys:
              - v1-dependencies-{{ checksum "requirements.txt" }}
              # fallback to using the latest cache if no exact match is found
              - v1-dependencies-
          - run:
              name: run linter
              command: |
                pycodestyle .

          - save_cache:
              paths:
                - ./venv
              key: v1-dependencies-{{ checksum "requirements.txt" }}
      deploy:
        docker:
          - image: astronomerio/ap-build:0.0.7
        steps:
          - checkout
          - setup_remote_docker:
              docker_layer_caching: true
          - run:
              name: Push to Docker Hub
              command: |
                docker build -t $TAG .
                docker login registry.$BASE_DOMAIN -u _ -p $DOCKER_KEY
                docker push $TAG

    workflows:
      version: 2
      build-deploy:
        jobs:
          - build
          - deploy:
              requires:
                - build
              filters:
                branches:
                  only:
                    - master

Breaking this down:

Authenticating to Docker

After you have created a service account, you will want to store the generated API key in an environment variable, or your secret management tool of choice.

The first step of this pipeline is to authenticate against the registry:

docker login registry.${BASE_DOMAIN} -u _ -p ${API_KEY_SECRET}

In this example, BASE_DOMAIN is your platform's configured base domain (for Astronomer Cloud). API_KEY_SECRET is the API key that you got from the CLI or the UI, stored in your secret manager.

Building and Pushing an Image

Once you are authenticated you can build, tag and push your Airflow image to the private registry, where a webhook will trigger an update of your Airflow deployment on the platform.

Registry Address: The registry address tells Docker where to push images. In this case, it will be the private registry installed with Astronomer EE, located at registry.${BASE_DOMAIN}.

For example, if you are using Astronomer's cloud platform, you will use registry.[BaseDomain] with the cloud platform's base domain.

Release Name: This refers to the release name of your Airflow deployment. It follows the pattern [SPACE-THEMED ADJ.]-[SPACE-THEMED NOUN]-[4 DIGITS] (in this example, infrared-photon-7780).

Tag Name: The tag name allows you to track all Airflow deployments made for that cluster over time. While the tag name can be whatever you want, we recommend denoting the source and the build number in the name.

In the example below we use the prefix ci- and the environment variable ${DRONE_BUILD_NUMBER}. This guarantees that we always know which CI/CD build triggered the build and push.


docker build -t registry.${BASE_DOMAIN}/${RELEASE_NAME}/airflow:ci-${DRONE_BUILD_NUMBER} .
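Putting the pieces together, the full image reference combines the registry address, the release name, and the tag. A quick sketch with hypothetical values (your base domain, release name, and build number will differ):

```shell
# Hypothetical values for illustration only.
BASE_DOMAIN="example.com"
RELEASE_NAME="infrared-photon-7780"
DRONE_BUILD_NUMBER="57"

# Registry address + release name + tag form the full image reference.
IMAGE="registry.${BASE_DOMAIN}/${RELEASE_NAME}/airflow:ci-${DRONE_BUILD_NUMBER}"
echo "${IMAGE}"   # registry.example.com/infrared-photon-7780/airflow:ci-57

# The pipeline would then run:
#   docker build -t "${IMAGE}" .
#   docker push "${IMAGE}"
```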

If you would like to see a more complete working example, please visit our full example using Drone CI.

Check out this video for a full walkthrough of this process: