Astro Features
Everything you need to build, run, and observe production data and AI pipelines, organized by the capabilities that matter most to your workflow.
Build
Ship production-ready pipelines faster.
Write Dags with an AI that actually understands Airflow. Test against production-parity environments in your browser. Deploy with a single command.
Astro IDE
Astro IDE is a browser-based development environment built specifically for Airflow. Unlike general-purpose AI coding assistants, it understands your data stack including your Airflow version, operators, and Dag patterns. The built-in AI generates code using modern syntax and modular structures, while ephemeral test deployments let you validate Dags against production-parity environments without installing Docker locally. Go from natural language prompt to running Dag without leaving your browser.
Astronomer's AI Agent Tooling
Open-source AI agent tooling maintained by Astronomer brings Airflow expertise to local development environments. You can install it in Claude Code, Cursor, VS Code, or 25+ other AI coding tools to get specialized skills for Dag authoring, debugging, Airflow 2→3 migration, lineage tracing, and warehouse operations. The agent tooling understands Airflow versions, operator patterns, and best practices, and accesses real execution data like task logs, run history, and failure patterns rather than guessing at solutions. Your development stays in your preferred editor without switching tools or leaving your local workflow.
Astro CLI
The Astro CLI provides a complete local development experience for teams that prefer working in their own editors. Running astro dev start spins up a local Airflow environment with hot-reloading, where the Airflow UI automatically opens when ready and your Dag changes reflect immediately without manual restarts. When you're ready to deploy, astro deploy builds your project image, authenticates to Astro, and pushes to your deployment in a single command.
CI/CD & Preview Environments
Preview Deployments let you test Dag changes in isolated environments before merging to production. When you create a feature branch, Astro can automatically create a corresponding preview deployment that mirrors your production configuration including the same Airflow version, connections, and environment variables. Astro provides pre-built CI/CD templates for GitHub Actions, GitLab CI, Azure DevOps, AWS CodeBuild, Bitbucket, CircleCI, and Jenkins that let you run your test suite before deploying and block merges if tests fail.
Connection & Environment Management
Centrally manage Airflow connections and environment variables across all deployments. You can create connections once and link them to multiple deployments, store sensitive values securely with workspace-level permissions, and update connection details in one place to have changes propagate automatically to all linked deployments.
Programmatic Infrastructure Management
Manage Astro infrastructure programmatically with Infrastructure as Code or direct API access. Use the Terraform provider to define deployments, workspaces, API tokens, and environment variables in version-controlled configurations, or use the Astro REST API for direct programmatic control from any language or framework. Both approaches let you automate provisioning and changes across environments while maintaining consistency and auditability.
dbt Integration with Cosmos
Cosmos is an open-source framework maintained by Astronomer that turns dbt projects into Airflow Dags with model-level visibility. Instead of running your entire dbt project as a single task, Cosmos creates individual Airflow tasks for each dbt model, test, seed, and snapshot, giving you granular control over retries, dependencies, and resource allocation at the model level. When a model fails, you can rerun just that model and its downstream dependencies while lineage flows through from dbt models to Airflow tasks to downstream consumers for end-to-end visibility in Astro.
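A minimal Cosmos Dag might look like the sketch below. The project path, profile name, and connection ID are illustrative assumptions, not values from this page; see the Cosmos docs for the full configuration surface.

```python
from cosmos import DbtDag, ProfileConfig, ProjectConfig
from cosmos.profiles import SnowflakeUserPasswordProfileMapping

profile_config = ProfileConfig(
    profile_name="jaffle_shop",      # hypothetical dbt profile name
    target_name="prod",
    profile_mapping=SnowflakeUserPasswordProfileMapping(
        conn_id="snowflake_default",  # assumed Airflow connection ID
    ),
)

# Cosmos expands the dbt project into one Airflow task per model,
# test, seed, and snapshot, so retries and reruns happen per model.
jaffle_shop = DbtDag(
    dag_id="jaffle_shop",
    project_config=ProjectConfig("/usr/local/airflow/dbt/jaffle_shop"),
    profile_config=profile_config,
    schedule="@daily",
)
```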
Run
Scale to any workload without managing infrastructure.
The Astro engine runs up to 2.5x more concurrent tasks than other managed Airflow services, with elastic auto-scaling, multi-cloud deployment, and enterprise reliability built in.
Astro Executor
An execution model built exclusively for Airflow 3 Deployments on Astro. Instead of routing tasks through an external message broker, agents pull work directly from the Airflow API server, removing the intermediary layer where tasks can be lost during restarts or scaling events. When tasks are actively running, Astro coordinates scale-down so agents finish their work before any infrastructure changes take effect.
Elastic Auto-Scaling
Workers scale automatically based on how many tasks are queued and running at any moment. Every 10 seconds, Astro recalculates the number of workers needed and adjusts capacity accordingly, so deployments absorb burst workloads without manual intervention. Set the minimum worker count to zero for non-default worker queues so they only consume resources when tasks are active. Configure the minimum and maximum bounds per queue, and Astro handles everything in between.
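The recalculation described above reduces to simple arithmetic: outstanding tasks divided by per-worker concurrency, clamped to the queue's bounds. The sketch below is an illustrative model of that logic, not Astro's exact internal algorithm.

```python
import math

def workers_needed(queued: int, running: int, concurrency: int,
                   min_workers: int, max_workers: int) -> int:
    """Illustrative auto-scaling sketch: size the worker pool to the
    current task load, clamped to the queue's configured bounds.
    (Astro's internal algorithm may differ.)"""
    demand = math.ceil((queued + running) / concurrency) if concurrency else 0
    return max(min_workers, min(max_workers, demand))

# A burst of 45 tasks on workers that each run 16 tasks concurrently:
print(workers_needed(queued=40, running=5, concurrency=16,
                     min_workers=0, max_workers=10))  # -> 3

# An idle scale-to-zero queue consumes no workers at all:
print(workers_needed(queued=0, running=0, concurrency=16,
                     min_workers=0, max_workers=10))  # -> 0
```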
Task-Optimized Worker Queues
Create separate worker queues within the same Deployment, each with its own compute size and scaling behavior. You might route resource-intensive ML jobs to a large worker type while keeping lightweight SQL queries on smaller, cheaper nodes, or configure some queues to scale to zero while others stay always available. Worker types range from 1 vCPU up to 32 vCPU, so you can right-size compute for what each workload actually needs. Assign tasks to a queue by adding a queue='queue-name' parameter in your Dag code.
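In Dag code, that routing is a one-line change per task. The queue names below are hypothetical and assume a large-worker queue has been configured in the Deployment alongside the default queue.

```python
from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule=None, start_date=datetime(2025, 1, 1), catchup=False)
def routed_workloads():
    # Hypothetical queue backed by large (e.g. 32 vCPU) workers.
    @task(queue="heavy-ml")
    def train_model():
        ...

    # No queue parameter: runs on the "default" worker queue.
    @task
    def light_sql_check():
        ...

    train_model() >> light_sql_check()

routed_workloads()
```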
Event-driven Scheduling
Trigger Dags the moment data arrives rather than waiting for a fixed schedule. Airflow 3 introduces event-driven scheduling through AssetWatcher and MessageQueueTrigger, with native support for Amazon SQS and Apache Kafka, and custom message queue integrations for teams using other event sources. Removing the uniqueness constraint on logical dates means you can run multiple instances of the same Dag at the same time, which is essential for inference workloads and parallel experimentation.
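Wiring a message queue to a Dag schedule looks roughly like the sketch below, based on the Airflow 3 event-driven scheduling docs. The SQS queue URL is a placeholder, and exact import paths can vary with provider package versions.

```python
from airflow.providers.common.messaging.triggers.msg_queue import MessageQueueTrigger
from airflow.sdk import DAG, Asset, AssetWatcher

# Placeholder SQS queue URL -- substitute your own.
trigger = MessageQueueTrigger(
    queue="https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue",
)

# The watcher marks the asset as updated whenever a message arrives.
orders_asset = Asset(
    "orders_queue_asset",
    watchers=[AssetWatcher(name="orders_watcher", trigger=trigger)],
)

# Scheduling on the asset runs the Dag per message, not on a cron.
with DAG(dag_id="process_orders", schedule=[orders_asset]):
    ...
```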
Remote Execution
Run tasks in your own infrastructure while Astro handles orchestration. Agents use only outbound connections, so you do not need to open inbound firewall ports or change your network perimeter. Your data, code, secrets, and logs stay within your environment, with only scheduling and health metadata traveling to Astro's orchestration plane. HIPAA compliance is available with a signed Business Associate Agreement and a dedicated single-tenant cluster on Business and Enterprise plans.
High Availability
Run production workloads across multiple availability zones to prevent single points of failure from taking down your pipelines. Dag execution continues without interruption if a node or availability zone goes down. Astro provides a 99.5% uptime SLA, with 24/7 support and 1-hour response times for critical issues on Business and Enterprise plans.
Multi-cloud Deployment
Deploy on AWS, Azure, or Google Cloud across 55+ regions and choose the location that minimizes latency and meets your data residency requirements. All three cloud providers are available through their respective marketplaces, so you can apply committed cloud spend toward your Astro subscription. For teams with stricter network requirements, Astro Private Cloud provides dedicated single-tenant infrastructure within your own cloud environment.
Inference Execution
Run AI and ML inference jobs on demand, with as many parallel instances as you need. Airflow 3 removes the execution date constraint that previously prevented multiple Dag runs from starting at the same time. Trigger concurrent inference requests via the API as they arrive, without waiting for a predefined schedule. This supports GenAI, predictive, agentic, and analytical workloads where near real-time execution matters.
Scheduler-managed Backfills
Reprocess historical or newly available data without relying on CLI sessions that can be interrupted mid-run. In Airflow 3, backfills are managed by the scheduler, so they continue running reliably for jobs that take hours or days. Trigger backfills from the UI, API, or CLI and choose whether to reprocess missing runs, failed runs, or all runs within a date range. You can pause or cancel any backfill in progress at any time.
Observe
Detect issues before they impact downstream systems.
Real-time lineage, SLA monitoring, data quality, and automated RCA, all built directly into the orchestration layer rather than bolted on as a separate tool.
Real-Time Pipeline Lineage
Astro automatically extracts lineage metadata from Airflow tasks as pipelines execute using OpenLineage, letting you view upstream and downstream dependencies in an interactive graph that updates in real time. When you connect your Snowflake or Databricks warehouse, lineage visibility extends to table-level dependencies including tables not directly touched by Airflow tasks. This makes it easy to quickly assess blast radius when issues occur by tracing failures to upstream root causes or identifying all downstream assets affected by a change.
Data Product SLA Tracking
Define service level agreements for your most important data products and track whether they meet business requirements. You can set timeliness SLAs for delivery by specific times on selected days (like every day by 9am EST), freshness SLAs for required update intervals (such as never more than 2 hours stale), or custom SLAs with full control using cron expressions. Astro evaluates SLAs by checking successful runs of final assets in your data product against your defined parameters, and the Data Products dashboard provides a unified view for business stakeholders to see which products are late or stale, who owns them, and historical SLA compliance at a glance.
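A freshness SLA evaluation like the one described above reduces to timestamp arithmetic against the last successful run of the final asset. The sketch below is illustrative, not Astro's evaluation code.

```python
from datetime import datetime, timedelta, timezone

def is_stale(last_success: datetime, max_staleness: timedelta,
             now: datetime) -> bool:
    """Freshness SLA sketch: the data product is stale when the time
    since the last successful run of its final asset exceeds the
    allowed interval (e.g. 'never more than 2 hours stale')."""
    return now - last_success > max_staleness

now = datetime(2025, 6, 1, 12, 0, tzinfo=timezone.utc)
last = datetime(2025, 6, 1, 9, 30, tzinfo=timezone.utc)

# 2.5 hours since the last success against a 2-hour freshness SLA:
print(is_stale(last, timedelta(hours=2), now))  # -> True
```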
Predictive Alerting
Go beyond reactive alerting with proactive alerts that notify you before failures disrupt delivery. Proactive SLA Alerts monitor upstream dependencies of your data products and notify you when delays in upstream tasks risk causing SLA misses, while Proactive Failure Monitors alert you when any upstream or final Dag in your data product fails. Alerts include direct links to lineage views showing the failing Dag and task plus identification of all downstream assets impacted by the failure, and integrate with Slack, PagerDuty, email, and webhooks for routing to your existing incident response workflows.
AI-Assisted RCA
Accelerate root cause analysis with AI assistance that helps you diagnose and resolve failures faster. When tasks fail, Astro generates AI-powered summaries that highlight what broke, why, and how to fix it without requiring you to sift through Airflow logs. Combined with real-time lineage visualization, you can quickly understand the blast radius of failures and trace issues back to their source.
Data Quality Monitoring
Monitor data quality in your warehouse with built-in checks or custom SQL monitors. You can track column null percentages, row volume changes, and schema drift with pre-built monitors, or define custom SQL-based checks tailored to your specific requirements. Run monitors on scheduled intervals or trigger them based on pipeline events for real-time validation, and when a quality issue is detected, trace it back to the upstream Airflow task that caused it with table-level visibility that extends beyond what Airflow orchestrates.
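The pre-built monitors mentioned above boil down to straightforward column and row statistics. The functions below are an illustrative sketch of two of those checks, with thresholds left to the caller; they are not Astro's monitor implementation.

```python
def null_percentage(values: list) -> float:
    """Share of NULL (None) entries in a column, as a 0-100 percentage."""
    if not values:
        return 0.0
    return 100.0 * sum(v is None for v in values) / len(values)

def volume_drift(current_rows: int, baseline_rows: int) -> float:
    """Relative change in row count versus a baseline, as a fraction."""
    if baseline_rows == 0:
        return float("inf") if current_rows else 0.0
    return abs(current_rows - baseline_rows) / baseline_rows

col = ["a", None, "c", None]
print(null_percentage(col))   # -> 50.0 (2 of 4 values are NULL)
print(volume_drift(90, 100))  # -> 0.1 (a 10% drop from baseline)
```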
Asset Catalog
The Asset Catalog automatically captures all pipeline assets, datasets, and tables for complete visibility across your data platform. You can browse assets by namespace, filter by source system, and see ownership information for each asset, providing a single source of truth for what exists in your data ecosystem, who owns it, and when it was last updated.
Health Dashboard
Monitor data product health across all deployments from a unified dashboard. You can see which products are late or stale, who owns them, and when they were last updated at a glance, while tracking operational metrics like Dag runtimes, task failures, and SLA compliance across your entire Airflow ecosystem.
Enterprise
Security and governance without compromise.
Astro is built for organizations with the most stringent data security requirements: comprehensive compliance certifications, fine-grained access control, and enterprise-grade encryption.
Compliance Certifications
Astro is certified compliant with SOC 2 Type II and PCI DSS. HIPAA compliance is available through a Business Associate Agreement on Business and Enterprise plans, alongside a dedicated single-tenant cluster. Astronomer is GDPR-compliant and offers Data Processing Agreements for organizations with EU data protection requirements. You can request SOC 2 Type II reports, penetration test reports, and other compliance documentation from the Astronomer Trust Center.
Single Sign-On & SCIM
Connect Astro to your identity provider so your team authenticates through the same credentials they use everywhere else. Astro supports Okta, Microsoft Entra ID, OneLogin, and Ping Identity, and you can enforce multi-factor authentication through your IdP. User provisioning and deprovisioning happen automatically through SCIM for Okta and Microsoft Entra ID, so access updates the moment your identity provider changes. SCIM is available on Enterprise plans.
Role-Based Access Control
Assign precise permissions at three levels: Organization, Workspace, and Deployment. Five predefined roles are available at each Organization and Workspace level, and on Enterprise plans you can create custom Deployment roles tailored to your team. For example, create a read-only stakeholder role that can view Dag runs without triggering them, or a restricted operator role that can trigger Dags but cannot modify configurations. Deployment API tokens can be scoped to individual Deployments, making them safer for CI/CD automation than organization-level credentials.
Encryption
All data in transit is encrypted with TLS 1.2 and strong ciphers, and internal control plane services communicate using mutual TLS for an additional layer of protection. Data at rest is encrypted with AES-256 using native cloud provider encryption technologies across both the control plane and data plane. Connections and environment variables that you mark as secrets receive an extra encryption layer, keeping their values hidden in the Astro UI.
Network Isolation
Keep traffic between Astro and your data sources off the public internet using VPC peering, AWS PrivateLink, Transit Gateways, or VPN on dedicated clusters. An IP access list restricts access to the Astro UI and API to known IP ranges, available on Enterprise plans. Remote Execution agents communicate using only outbound connections, so no inbound firewall rules are required.
Secret Management
Connect Astro to your existing secrets backend so credentials are never stored in Airflow's metadata database. Supported backends include AWS Secrets Manager, AWS Systems Manager Parameter Store, Azure Key Vault, Google Cloud Secret Manager, and HashiCorp Vault (Airflow 3 Deployments only). If you prefer not to store credentials at all, Customer Managed Workload Identity authenticates deployments to cloud resources using existing IAM roles, with no secrets to store or rotate in Astro.
Audit Logging
Track every user action, API call, and control plane event across your Organization with a detailed audit trail. Logs capture activity across the Astro UI, CLI, container registry, and internal services, recording who did what, where, and when. Export logs to your existing security and compliance tools to support internal reviews and regulatory investigations.
Deployment Rollbacks
When a code deploy introduces a problem, roll back to any previous deploy from the last three months without rebuilding manually. Rollbacks restore your project code (including all Dags), Astro Runtime version, and Dag deploy setting, creating a new deploy from the prior code state. Environment variables and resource configurations are not affected by a rollback, so they remain under your direct control.
Get started free.