Welcome to Astronomer.

This guide will help you kick off your trial on Astronomer by walking you through a sample DAG deployment from start to finish.

Whether you're exploring our Enterprise or Cloud offering, we've designed this to be a great way to get to know our platform.

Start Trial

If you haven't already, start an Astronomer Trial.


You can authenticate via Google, GitHub, or standard username/password authentication.

This is how you'll log into both the Astronomer App and the CLI.

Note: Once your account is created, you won't be able to change your authentication method.

Create a Workspace

If you're the first person at your organization on Astronomer, you'll want to create a Workspace. You can think of Workspaces the same way you'd think of teams: a space that specific user groups have access to, with varying levels of permissions.

Airflow deployments sit one level lower in that hierarchy: from a Workspace, you can create one or more Airflow deployments.

To read more about navigating our app, go here.

Join another Workspace

If you're new to Astronomer but someone else on your team has an existing Workspace you want to join, your team member will be able to add you as a user to that shared Workspace directly from their account.

Role-based Access Control (RBAC) is built into Astronomer v0.9 and beyond. For more info on that functionality, check out this doc.

Note: If you have any trouble with the confirmation email, check your spam filter. If that doesn't do the trick, reach out to us.

Start with the Astronomer CLI

Astronomer's open-source CLI is the easiest way to run Apache Airflow on your machine.

From the CLI, you can establish a local testing environment and deploy to Astronomer Cloud whenever you're ready.



To start using the CLI, make sure you've already installed:

  • Docker, version 18.09 or higher

Install Command

To install the Astronomer CLI (v0.7.5-2), run:

$ curl -sSL | sudo bash -s -- v0.7.5-2

Note: If you're running on Windows, check out our Windows Install Guide.

Initialize your Airflow Project

Create a new project directory on your machine and cd into it. This is where you'll store all the files necessary to build and deploy your Airflow image.

$ mkdir <directory-name> && cd <directory-name>

Once you're in that project directory, run:

$ astro airflow init

This will generate some skeleton files:

├── dags # Where your DAGs go
│   ├── # An example dag that comes with the initialized project
├── Dockerfile # For Astronomer's Docker image and runtime overrides
├── include # For any other files you'd like to include
├── packages.txt # For OS-level packages
├── plugins # For any custom or community Airflow plugins
└── requirements.txt # For any Python packages

Running this command generates an example DAG for you to deploy while getting started.

The DAG itself doesn't have much functionality (it just prints the date a bunch of times), but it'll give you a chance to get accustomed to our deployment flow.

If you'd like to deploy some more functional example DAGs, check out the ones we've open sourced.
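The example DAG's task logic amounts to printing a timestamp on a schedule. As a stand-alone sketch of that logic (the `print_date` name and the operator wiring shown in comments are illustrative, not the exact generated code):

```python
from datetime import datetime


def print_date(ts=None):
    """Print (and return) an ISO-formatted timestamp.

    The generated example DAG does essentially this a bunch of times.
    """
    ts = ts or datetime.utcnow()
    stamp = ts.isoformat()
    print(stamp)
    return stamp


# In a real DAG file, a callable like this would be attached to a task, e.g.
# (assuming the Airflow 1.10-era API shipped in the base image):
#
#   from airflow import DAG
#   from airflow.operators.python_operator import PythonOperator
#
#   dag = DAG("example", start_date=datetime(2019, 1, 1), schedule_interval="@daily")
#   PythonOperator(task_id="print_date", python_callable=print_date, dag=dag)
```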

Develop Locally

With those files in place, you're ready to push to your local Airflow environment.

Start Airflow

$ astro airflow start

This command will spin up 3 Docker containers on your machine, one for each of the following Airflow components:

  • Postgres: Airflow's Metadata Database
  • Webserver: The Airflow component responsible for rendering the Airflow UI
  • Scheduler: The Airflow component responsible for monitoring and triggering tasks

You should see the following output:

$ astro airflow start
Sending build context to Docker daemon  11.78kB
Step 1/1 : FROM astronomerinc/ap-airflow:0.7.5-1.10.1-onbuild
# Executing 5 build triggers
 ---> Using cache
 ---> Using cache
 ---> Using cache
 ---> Using cache
 ---> Using cache
 ---> fac83ad0b2d8
Successfully built fac83ad0b2d8
Successfully tagged astro-trial/airflow:latest
INFO[0000] [0/3] [postgres]: Starting                   
INFO[0000] Recreating postgres                          
INFO[0001] [1/3] [postgres]: Started                    
INFO[0001] [1/3] [scheduler]: Starting                  
INFO[0001] Recreating scheduler                         
INFO[0003] [2/3] [scheduler]: Started                   
INFO[0003] [2/3] [webserver]: Starting                  
INFO[0003] Recreating webserver                         
INFO[0005] [3/3] [webserver]: Started                   
Airflow Webserver: http://localhost:8080/admin/
Postgres Database: localhost:5432/postgres

Note: If you're running version v0.7.5 of the CLI and get a manifest not found error, make sure to have the following image in your Dockerfile:

FROM astronomerinc/ap-airflow:0.7.5-1.10.1-onbuild

More info here.

Verify Docker Containers

To verify that all 3 Docker containers were created, you can also run:

$ docker ps

Note: By default, running astro airflow start will start your project with the Airflow Webserver exposed at port 8080 and Postgres exposed at port 5432.

If either of those ports is already allocated, you can either stop the existing Docker containers or change the port.
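To change the ports, the project's .astro/config.yaml file supports overrides. A hedged sketch (the exact key names, assumed here to be webserver.port and postgres.port, may vary by CLI version):

```yaml
# .astro/config.yaml — per-project CLI settings (key names assumed)
webserver:
  port: 8081
postgres:
  port: 5433
```

Restart your project with astro airflow stop && astro airflow start for the change to take effect.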

Access the Airflow UI

To check out the Airflow UI for your local Airflow project, navigate to http://localhost:8080/admin/.

See your Sample DAG

The sample DAG that was automatically generated in your directory should appear in your local Airflow UI.

Sample DAG

Try a Code Change

A few tips for when you're developing locally:

  • Any changes to DAG code will render in the Airflow UI as soon as you save them in your source-code editor

  • If you make changes to your Dockerfile, packages.txt or requirements.txt, you'll have to rebuild your image by running:

    $ astro airflow stop && astro airflow start

Check out your Logs

As you're developing locally, you'll want to pull logs for easy troubleshooting. Check out our Logs and Source Control doc for guidelines.

Customize Your Image

To stay slim, our base image runs Alpine Linux. If you already have code written and ready to go, this is the place to add it.

A few things you can do:

  • Add DAGs to the dags folder
  • Add custom or community Airflow plugins to the plugins directory
  • Add Python packages to requirements.txt
  • Add OS-level packages to packages.txt
  • Set Astronomer's Docker image and any environment variables in your Dockerfile (guidelines)

If you're unfamiliar with Alpine Linux, check out some examples of what you'll need based on your use case.

Note: If you're interested in trying out our Debian image (alpha), reach out to us or read more about customizing your image in our Customizing Your Image doc.
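As an illustration, a project that needs a Python package with C extensions on Alpine might include files like the following (the specific packages here are hypothetical examples, not requirements):

```
# packages.txt — OS-level (Alpine apk) packages, e.g. a compiler toolchain
build-base

# requirements.txt — Python packages pinned to versions
pandas==0.24.2

# Dockerfile — Astronomer's base image plus optional environment variables
FROM astronomerinc/ap-airflow:0.7.5-1.10.1-onbuild
ENV AIRFLOW__CORE__LOAD_EXAMPLES=False
```

After editing any of these files, rebuild your image with astro airflow stop && astro airflow start.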

Deploy to Astronomer Cloud

Create an Airflow Deployment on Cloud

Now that your DAGs run successfully in local development, you're ready to create a deployment on Astronomer.

  1. Log into Astronomer
  2. Navigate to the Workspace you want to create a deployment from
  3. Hit New Deployment on the top right of the page
  4. Give your Deployment a Name + Description
  5. Choose your Executor (we'd recommend starting with Local)
  6. Wait a few minutes for your Webserver and Scheduler to spin up

Deployment Config

For a full walk-through, check out our doc on Configuring your Deployment and Deploying your Code.

Deploy your First DAG

You're ready to deploy your first DAG to Astronomer Cloud.

Authenticate to the Astronomer CLI

To log into your existing account and complete our authentication flow, run:

$ astro auth login

Make sure you're in the right place

To get ready for deployment, make sure:

  • You're in the right Workspace
  • Your target deployment lives under that Workspace

Follow our CLI Getting Started Guide for more specific guidelines and commands.


When you're ready to deploy your DAGs, run:

$ astro airflow deploy

This command will return a list of deployments available in your Workspace and prompt you to pick one.

View your Example DAG on your Astronomer Cloud Deployment

After you deploy your example DAG, you'll be able to see it running in your Cloud deployment.

What's Next

Now that you're set up on Astronomer and familiar with our deployment flow, consider a few next steps:

Additional Resources