Pricing

Compare Astro plans

Select the right plan for your team with our complete feature comparison and detailed pricing breakdown.

Plan Features

Developer

Team

Business

Enterprise

Astro AI

$10 in usage tokens included each month per organization

Prompt tokens (Input)

Public preview pricing for AI-assisted DAG authoring and testing

$3.75 per million tokens

Response tokens (Output)

Public preview pricing for AI-assisted DAG authoring and testing

$18.75 per million tokens

Security & Governance

Google/GitHub IdP Auth

SAML-based SSO

Private Networking

PrivateLink, VPC Peering, Transit Gateway, Site-to-Site VPN, etc.

Non-Owner Roles

Developer: Up to 2 · Team: Unlimited · Business: Unlimited · Enterprise: Unlimited

Astro Teams/Groups

Up to 2

Audit Logging (Astro + Airflow)

7 days retention / 90 days retention / 90 days retention

Dedicated Cluster

Deployment Roles

CI/CD Enforcement

SSO Enforcement

HIPAA / PII BAA Agreement

Workspace Authorization for Clusters

Custom RBAC Roles

IP Access List

Scale & Operational Efficiency

Hibernating Deployments

Astro API Access

Airflow API Access

Connection Management

Alerting

Deploy Rollbacks

High Availability

Metrics Forwarding

Log Forwarding

SCIM Provisioning

Org-level Dashboards

Support & Success

Support Availability

Developer: Not Included · Team: 24x5 Availability · Business: 24x7 Availability · Enterprise: 24x7 Availability

SLA Response for P1 Tickets

6 Hour Initial Response SLA (P1) / 1 Hour Initial Response SLA (P1) / 1 Hour Initial Response SLA (P1)

Slack

First 30 Days / First 30 Days / Unlimited

Office Hours

First Come, First Serve / First Come, First Serve / Priority Scheduling

Component Pricing

All product tiers use the same dimensions of our usage-based pricing model: your Airflow cluster, deployment sizing, and worker compute. Networking costs are passed through from the cloud provider.

Prices listed below reflect AWS us-east-1, Azure eastus, and GCP us-east1.

Learn more about Astro’s architecture ➔

Cluster Pricing

FAQs

What if I need to run individual tasks on bigger workers?

You might have a large number of tasks that require low amounts of CPU and memory, but a small number of tasks that are resource intensive — e.g., machine learning tasks.

To address this use case, we recommend using worker queues. Worker queues allow you to configure different groups of workers for different groups of tasks. That way, you’ll only be charged for the larger worker type if and when a task that requires that worker type actually runs.

Specifically, you can:

  • Create a default queue with a small worker type. For example, A5.
  • Create a second queue called large-task with a larger worker type. For example, A10.
  • Set the Minimum Worker Count for the large-task queue to 0 if your resource-intensive tasks run infrequently.
  • In your DAG, assign the larger task to the “large-task” queue.

To learn more about worker queues, see Worker queues in Astronomer documentation.

What if I need additional ephemeral storage for workers?

All Astro workers include an amount of ephemeral storage by default: 10 GiB for Celery workers, and 0.25 GiB for Kubernetes Executor and KubernetesPodOperator workers. You can configure additional ephemeral storage at a rate of $0.0002 per GiB per hour.
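For a rough sense of scale, the quoted rate can be turned into a monthly estimate. The helper below is ours, not an Astronomer API, and the 730 hours/month figure is an assumed average:

```python
# Back-of-envelope estimate of extra ephemeral storage cost at the
# quoted rate of $0.0002 per GiB per hour.
RATE_PER_GIB_HOUR = 0.0002

def extra_storage_cost(extra_gib, hours=730):
    """Cost in USD of `extra_gib` of additional ephemeral storage
    kept allocated for `hours` (default ~1 month)."""
    return extra_gib * hours * RATE_PER_GIB_HOUR

# 10 extra GiB held for a full month -> $1.46
print(round(extra_storage_cost(10), 2))
```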

How will I be charged for the Kubernetes Executor and Kubernetes Pod Operator?

In Airflow, the Kubernetes Executor (KE) and the KubernetesPodOperator (KPO) allow you to run a single task in an isolated Kubernetes Pod. Astro measures the total CPU and memory allocated across your KE/KPO infrastructure at any given time, and bills for the number of A5 workers needed to cover that total, rounded up to the nearest whole A5. One A5 worker corresponds to 1 CPU and 2 GiB of memory.

For example:

  • If you are running 4 tasks concurrently, with each allocated 0.25 CPU and 0.5 GiB memory, you will be charged for 1 A5 for the duration of the infrastructure running those tasks.

  • Similarly, if you are running 3 tasks concurrently, with each allocated 0.25 CPU and 0.5 GiB memory, you would still be charged for 1 A5. In this case the total allocation is 0.75 CPU and 1.5 GiB memory, which rounds up to a single A5.

  • If you have 5 concurrent tasks that are each allocated 2 CPU and 4 GiB memory, that is a total of 10 CPU cores and 20 GiB memory, which maps to 10 A5s.

To ensure reliability, Astro allocates the resource limit requested by each task. If a task does not specify limits, the Deployment defaults are used.

In addition, ephemeral storage limits greater than 0.25 GiB per pod are charged at a rate of $0.0002 per GiB per hour.
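The rounding rule described above can be sketched in a few lines. This is our own illustration of the stated billing logic (one A5 = 1 CPU and 2 GiB memory, total allocation rounded up to the nearest whole A5), not an Astronomer API:

```python
# Sketch of the A5 rounding rule for KE/KPO billing described above.
import math

def billed_a5_workers(tasks):
    """tasks: list of (cpu, memory_gib) limits for concurrently running pods.
    Returns the number of A5 workers (1 CPU / 2 GiB each) billed to cover
    the total allocation, rounded up to the nearest whole A5."""
    total_cpu = sum(cpu for cpu, _ in tasks)
    total_mem = sum(mem for _, mem in tasks)
    # Enough A5s to cover both the CPU and the memory dimension.
    return math.ceil(max(total_cpu / 1.0, total_mem / 2.0))

print(billed_a5_workers([(0.25, 0.5)] * 4))  # 1.0 CPU / 2.0 GiB -> 1 A5
print(billed_a5_workers([(0.25, 0.5)] * 3))  # 0.75 CPU / 1.5 GiB -> still 1 A5
print(billed_a5_workers([(2, 4)] * 5))       # 10 CPU / 20 GiB -> 10 A5s
```

This reproduces all three worked examples from the bullets above.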

What networking costs are passed through from the Cloud Provider?

This varies slightly by cloud:
