Astro Pricing
The fully-managed platform to take Apache Airflow® to the next level.
Available on AWS, Azure, and Google Cloud.
Developer
For developers and data teams that are getting started. Pay-as-you-go starting at $0.35/hr.
- Flexible, scale-to-zero compute
- API Access
- 14-day free trial including $300 credit
Team
For teams with pipelines in production that require Airflow support.
- Everything in Developer
- Network Isolation
- Audit Logging
- 24 x 5 Support Availability
Plan Features
Straightforward Pricing
Astro offers transparent pricing tailored to your team’s needs. All product tiers use the same dimensions of our usage-based pricing model: your Airflow cluster, deployment sizing, and worker compute. Networking costs are passed through from the cloud provider.
Cluster Pricing
Configure your cluster type based on networking and security needs.
Type | Price |
---|---|
Standard | Included on all Plans |
Dedicated | Starts at $2.40 per hour on Team |
Deployment Pricing
Easy to create, easy to delete, easy to pay for.
Deployment Size | Resources | Developer Plan Price |
---|---|---|
Small | 1 vCPU, 2 GiB memory | $0.35 per hour |
Small High Availability | 2 vCPU, 4 GiB memory | $0.70 per hour |
Medium | 2 vCPU, 4 GiB memory | $0.57 per hour |
Medium High Availability | 4 vCPU, 8 GiB memory | $1.14 per hour |
Large | 4 vCPU, 8 GiB memory | $0.77 per hour |
Large High Availability | 8 vCPU, 16 GiB memory | $1.54 per hour |
Worker Pricing
Astro offers worker compute options up to 8x larger than any other managed Airflow service.
You only pay for workers when you need them.
Worker Size | Resources | Developer Plan Price |
---|---|---|
A5 | 1 vCPU, 2 GiB memory | $0.13 per hour |
A10 | 2 vCPU, 4 GiB memory | $0.26 per hour |
A20 | 4 vCPU, 8 GiB memory | $0.52 per hour |
A40 | 8 vCPU, 16 GiB memory | $1.04 per hour |
A60 | 12 vCPU, 24 GiB memory | $1.56 per hour |
A120 | 24 vCPU, 48 GiB memory | $3.12 per hour |
A160 | 32 vCPU, 64 GiB memory | $4.16 per hour |
FAQs
What if I need to run individual tasks on bigger workers?
You might have a large number of tasks that require low amounts of CPU and memory, but a small number of tasks that are resource intensive — e.g., machine learning tasks.
To address this use case, we recommend using worker queues. Worker queues allow you to configure different groups of workers for different groups of tasks. That way, you’ll only be charged for the larger worker type if and when a task that requires that worker type actually runs.
Specifically, you can:
- Create a default queue with a small worker type. For example, A5.
- Create a second queue called "large-task" with a larger worker type. For example, A10.
- Set the Minimum Worker Count for the "large-task" queue to 0 if your resource-intensive tasks run infrequently.
- In your DAG, assign the larger tasks to the "large-task" queue.
To learn more about worker queues, see Worker queues in Astronomer documentation.
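The cost benefit of this setup can be sketched with some quick arithmetic. The rates below are the Developer plan prices from the worker table above; the task mix and hours are hypothetical, and the sketch assumes the "large-task" queue scales to zero when idle so its workers are only billed while running.

```python
# Sketch: comparing the hourly cost of splitting work across two worker
# queues versus sizing every worker for the heaviest task.
# Rates are Developer plan prices from the table above (assumed unchanged);
# the daily task mix is hypothetical.

A5_RATE = 0.13   # $/hour, 1 vCPU / 2 GiB
A10_RATE = 0.26  # $/hour, 2 vCPU / 4 GiB

def queue_cost(hours_small: float, hours_large: float) -> float:
    """Cost when light tasks run on a default A5 queue and heavy tasks
    on a scale-to-zero 'large-task' A10 queue."""
    return hours_small * A5_RATE + hours_large * A10_RATE

def single_queue_cost(hours_total: float) -> float:
    """Cost when every task runs on A10 workers sized for the worst case."""
    return hours_total * A10_RATE

# 20 worker-hours of light work plus 2 worker-hours of heavy work per day:
split = queue_cost(hours_small=20, hours_large=2)
flat = single_queue_cost(hours_total=22)
print(f"split queues: ${split:.2f}/day, one large queue: ${flat:.2f}/day")
# → split queues: $3.12/day, one large queue: $5.72/day
```

In this hypothetical mix, routing only the heavy tasks to the larger worker type roughly halves the daily worker bill.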
What if I need additional ephemeral storage for workers?
All Astro workers include an amount of ephemeral storage by default: 10 GiB for Celery workers, and 0.25 GiB for Kubernetes Executor and Kubernetes Pod Operator workers. You can configure additional ephemeral storage at a rate of $0.0002 per GiB per hour.
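As a quick arithmetic sketch, here is how that rate adds up for a Celery worker. The $0.0002 per GiB per hour rate and the 10 GiB default come from the text above; the storage size and runtime are hypothetical.

```python
# Sketch: estimating the charge for extra ephemeral storage on a Celery
# worker. Rate and 10 GiB default are from the pricing text above; the
# 50 GiB configuration and 100-hour runtime are hypothetical.

STORAGE_RATE = 0.0002          # $ per GiB per hour
DEFAULT_CELERY_STORAGE = 10.0  # GiB included per Celery worker

def extra_storage_cost(total_gib: float, hours: float) -> float:
    """Metered charge for storage configured beyond the 10 GiB default."""
    extra = max(0.0, total_gib - DEFAULT_CELERY_STORAGE)
    return extra * STORAGE_RATE * hours

# A worker configured with 50 GiB total, running for 100 hours:
print(f"${extra_storage_cost(50, 100):.2f}")  # → $0.80
```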
How will I be charged for the Kubernetes Executor and Kubernetes Pod Operator?
In Airflow, the Kubernetes Executor (KE) and the KubernetesPodOperator (KPO) allow you to run a single task in an isolated Kubernetes Pod. Astro measures the total amount of CPU and Memory allocated across your KE/KPO infrastructure at any given time. Astro bills for the number of A5 workers necessary to accommodate the total amount of CPU and Memory rounded up to the nearest A5. One A5 worker corresponds to 1 CPU and 2 GiB Memory.
For example:
If you are running 4 tasks concurrently, with each allocated 0.25 CPU and 0.5 GiB memory, you will be charged for 1 A5 for the duration of the infrastructure running those tasks.
Similarly, if you are running 3 tasks concurrently, with each allocated 0.25 CPU and 0.5 GiB memory, you would still be charged for 1 A5. In this case the total allocation is 0.75 CPU and 1.5 GiB memory, which rounds up to a single A5.
If you have 5 concurrent tasks that are each allocated 2 CPU and 4 GiB memory, that is a total of 10 CPU cores and 20 GiB memory and maps to 10 A5s.
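The three examples above can be reproduced with a short calculation. One assumption to flag: the text says the total is "rounded up to the nearest A5" without saying how the two dimensions are combined, so taking the maximum of the CPU-based and memory-based counts is our reading (the two agree whenever allocations match the A5's 1 CPU : 2 GiB ratio, as in all three examples).

```python
import math

# Sketch: mapping concurrent KE/KPO allocations to billed A5 workers.
# One A5 = 1 CPU / 2 GiB (from the text). Combining the two dimensions
# with max() is an assumption; the task shapes are the text's examples.

A5_CPU, A5_MEM = 1.0, 2.0  # CPU cores and GiB memory per A5 worker

def billed_a5s(tasks: list[tuple[float, float]]) -> int:
    """tasks: (cpu, memory_gib) allocations running concurrently."""
    total_cpu = sum(cpu for cpu, _ in tasks)
    total_mem = sum(mem for _, mem in tasks)
    # Bill enough A5s to cover both totals, rounded up.
    return max(math.ceil(total_cpu / A5_CPU), math.ceil(total_mem / A5_MEM))

print(billed_a5s([(0.25, 0.5)] * 4))  # 1.0 CPU, 2.0 GiB  → 1 A5
print(billed_a5s([(0.25, 0.5)] * 3))  # 0.75 CPU, 1.5 GiB → 1 A5
print(billed_a5s([(2.0, 4.0)] * 5))   # 10 CPU, 20 GiB    → 10 A5s
```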
To ensure reliability, Astro allocates the resource limit requested by each task. If a task does not specify limits, the Deployment defaults are used.
In addition, ephemeral storage limits of greater than 0.25 GiB per pod are charged at a rate of $0.0002 per GiB per hour.
What networking costs are passed through from the Cloud Provider?
This varies slightly by cloud:
AWS: Data Transfer within and between AWS regions, and out to the Internet (inclusive of Data Processing by NAT Gateway and PrivateLink VPC Endpoints). Includes Site-to-Site VPN uptime charges if configured for private connectivity to data sources.
GCP: Data Transfer within and between GCP regions, and out to the Internet (inclusive of Data Processing). Includes Cloud VPN and Private Service Connect endpoint uptime charges if configured for private connectivity to data sources.
Azure: Peered and Non-Peered Data Transfer within and between Azure regions, and out to the Internet (inclusive of Data Processing by Load Balancer and Private Link). Includes VPN Gateway uptime charges if configured for private connectivity to data sources.