Get started with Remote Execution
This guide provides everything you need to set up Remote Execution on Astro, including prerequisites, configuration requirements, and a step-by-step checklist.
What you’ll configure
Remote Execution setup includes both required and optional components. Understand what you need before starting setup.
Required components
You must configure all required components before agents can successfully execute tasks. The setup checklist below covers them in the recommended order.
Optional components
These components enhance functionality but aren’t required for basic operation.
Setup
Follow these steps in order to set up Remote Execution:
1. Meet prerequisites
Before you begin, ensure you have:
- An Astro Deployment configured for Remote Execution mode. See Create a deployment.
- Kubernetes 1.30 or later
Recommended Kubernetes configuration
Configure singleProcessOOMKill: true in your kubelet configuration. With this setting, Kubernetes kills only the process that runs out of memory instead of the entire Pod. Without it, one task’s out-of-memory error kills all tasks on the worker and loses all task logs. See the Kubernetes documentation for details, and the sketch after this list for an example.
- Helm 3 or later
- Permissions to create Astro agents
- A Deployment API token with Deployment Admin permissions
- Network access from your Kubernetes cluster to Astro. Add https://<your-cluster-id>.external.astronomer.run/ to your organization’s allowlist. To find your cluster ID, go to Settings > Clusters in the Astro UI.
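The following is a minimal sketch of the recommended kubelet setting, assuming a node whose kubelet reads its configuration from /var/lib/kubelet/config.yaml. Managed Kubernetes services usually expose kubelet settings through node pool or node group configuration instead, so adapt the approach to your environment.

```bash
# Sketch only: the file path and restart command are assumptions based on a
# typical self-managed node. Managed services expose kubelet settings through
# their own node configuration mechanisms.
sudo tee -a /var/lib/kubelet/config.yaml <<'EOF'
singleProcessOOMKill: true
EOF

# Restart the kubelet so the new setting takes effect.
sudo systemctl restart kubelet
```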
2. Register and configure agents
Install the Remote Execution Agent Helm chart in your Kubernetes cluster:
- Create an Agent Token in the Astro UI
- Download and configure the values.yaml file
- Install the Helm chart with your configuration
- Verify agent heartbeat in the Astro UI
See Register and configure agents.
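As a rough sketch, the commands below follow the standard Helm workflow. The repository URL, chart name, release name, and namespace are placeholders; copy the exact values, along with your Agent Token, from the Astro UI and from Register and configure agents.

```bash
# Placeholders throughout: use the repository URL, chart name, and settings
# provided when you register the agent in the Astro UI.
helm repo add <astronomer-repo> <repository-url>
helm repo update

# values.yaml contains your Deployment and agent settings, including the
# Agent Token created in the Astro UI.
helm install remote-execution-agent <astronomer-repo>/<agent-chart> \
  --namespace astro-agents \
  --create-namespace \
  -f values.yaml

# Confirm the agent Pods start before checking the heartbeat in the Astro UI.
kubectl get pods --namespace astro-agents
```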
3. Configure secrets backend
Configure a secrets backend so agents can access Airflow connections and variables:
- AWS Secrets Manager
- Azure Key Vault
- Google Cloud Secret Manager
- HashiCorp Vault
See Configure secrets backend.
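For example, a minimal sketch for AWS Secrets Manager uses Airflow’s standard secrets backend settings, shown here as environment variables. The prefixes and region are placeholders, and where these variables belong in your agent configuration is an assumption to confirm against Configure secrets backend.

```bash
# Standard Airflow secrets backend settings, shown as environment variables on
# the agent workers. Prefixes and region are placeholders.
export AIRFLOW__SECRETS__BACKEND="airflow.providers.amazon.aws.secrets.secrets_manager.SecretsManagerBackend"
export AIRFLOW__SECRETS__BACKEND_KWARGS='{"connections_prefix": "airflow/connections", "variables_prefix": "airflow/variables", "region_name": "us-east-1"}'
```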
4. Configure XCom backend
Configure object storage for passing data between tasks:
- AWS S3
- Azure Blob Storage
- GCP Cloud Storage
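For example, a minimal sketch for AWS S3 uses the object storage XCom backend from Airflow’s common.io provider. The option names follow that provider’s configuration reference, and the bucket path and threshold are placeholders; verify the exact settings against the Airflow documentation for your version.

```bash
# Object-storage XCom backend settings, shown as environment variables on the
# agent workers. The bucket path is a placeholder.
export AIRFLOW__CORE__XCOM_BACKEND="airflow.providers.common.io.xcom.backend.XComObjectStorageBackend"
export AIRFLOW__COMMON_IO__XCOM_OBJECTSTORAGE_PATH="s3://<your-xcom-bucket>/xcom"
# Values smaller than this threshold (in bytes) stay in the metadata database;
# larger values are written to object storage.
export AIRFLOW__COMMON_IO__XCOM_OBJECTSTORAGE_THRESHOLD="1024"
```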
5. Configure DAG sources
Define how agents access your DAG code:
- GitDagBundle (recommended): Dags in a Git repository with automatic versioning
- LocalDagBundle: Dags in container images or persistent volumes
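As a sketch of the recommended option, a GitDagBundle can be registered through Airflow’s DAG bundle configuration. The option name, classpath, and kwargs below follow the Git provider’s bundle and are assumptions to verify against Configure DAG sources; the connection ID, branch, and subdirectory are placeholders.

```bash
# DAG bundle configuration shown as an environment variable on the agent
# workers. The classpath and kwargs come from the apache-airflow-providers-git
# bundle; verify them against your provider version.
export AIRFLOW__DAG_PROCESSOR__DAG_BUNDLE_CONFIG_LIST='[
  {
    "name": "my-dags",
    "classpath": "airflow.providers.git.bundles.git.GitDagBundle",
    "kwargs": {
      "git_conn_id": "my_git_conn",
      "tracking_ref": "main",
      "subdir": "dags"
    }
  }
]'
```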
6. Optionally configure logging
Set up task log export to view logs in external platforms or the Airflow UI:
- Export logs with a logging sidecar
- Link Airflow UI to external logging platform
- Show logs in Airflow UI from object storage
See Configure logging.
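For the object storage option, a minimal sketch uses Airflow’s standard remote logging settings; the bucket path and connection ID are placeholders. The sidecar and external-platform options are configured differently, as described in Configure logging.

```bash
# Standard Airflow remote logging settings for reading task logs from object
# storage in the Airflow UI. Bucket path and connection ID are placeholders.
export AIRFLOW__LOGGING__REMOTE_LOGGING="True"
export AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER="s3://<your-log-bucket>/airflow/logs"
export AIRFLOW__LOGGING__REMOTE_LOG_CONN_ID="aws_default"
```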
7. Deploy your project
Deploy your Airflow project to the Remote Execution deployment:
- Initialize a Remote Execution project with the Astro CLI
- Build and push the server image (orchestration plane)
- Build and push client images (execution plane)
- Update agents to use the new client image
See Deploy a Remote Execution project.
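The commands below are only a sketch of the deploy flow. The deployment ID, registry, image names, and Helm release details are placeholders, and the Remote Execution specifics, such as separate server and client images, follow Deploy a Remote Execution project.

```bash
# Placeholders throughout; follow Deploy a Remote Execution project for the
# exact commands and image names.
astro login

# Deploy the project's server image to the orchestration plane.
astro deploy <deployment-id>

# Build and push a client image for the execution plane, then point the
# agents' values.yaml at the new tag and upgrade the Helm release.
docker build -t <registry>/<client-image>:<tag> .
docker push <registry>/<client-image>:<tag>
helm upgrade remote-execution-agent <astronomer-repo>/<agent-chart> \
  --namespace astro-agents \
  -f values.yaml
```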
8. Verify agents are running
Confirm agents are healthy and processing tasks:
- Check agent heartbeat status in the Astro UI
- View agent Pods in your Kubernetes cluster
- Trigger a test DAG run and verify task execution
- Check task logs to confirm logging configuration
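For the Kubernetes-side checks, a quick sketch with kubectl, assuming the namespace and release name used earlier in this guide:

```bash
# Namespace and label selector are placeholders; use the values from your
# Helm install.
kubectl get pods --namespace astro-agents
kubectl logs --namespace astro-agents \
  -l app.kubernetes.io/instance=remote-execution-agent --tail=100
```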
Next steps
After completing setup:
- Configure OpenLineage to enable data lineage and Astro Observe features