Helm chart configuration reference

This reference describes configuration values for the Remote Execution Agent Helm chart. For complete configuration options, see the values.yaml file downloaded from the Astro UI.

Required configuration values

The following values must be configured before installing the Helm chart:

Agent authentication

agentToken / agentTokenSecretName / agentTokenFile

You must specify exactly one of these to provide the agent token generated in the Astro UI.

  • agentToken: Token value as plain text in values.yaml (not recommended for production)
  • agentTokenSecretName: Name of existing Kubernetes secret containing the token
  • agentTokenFile: Path to file containing the token (agent reads at runtime)

See Agent token configuration for detailed instructions.
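For example, the secret-based option above can be expressed as a minimal values.yaml fragment; the secret name here is illustrative, and the secret must already exist in the agent namespace:

```yaml
# Reference an existing Kubernetes secret that holds the agent token.
# "astro-agent-token" is a placeholder name; create the secret beforehand.
agentTokenSecretName: astro-agent-token
```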

Image registry access

imagePullSecretName / imagePullSecretData

You must specify exactly one of these to allow agents to pull images from the registry.

  • imagePullSecretName: Name of existing Kubernetes secret with Docker credentials
  • imagePullSecretData: Docker config JSON as string (Helm creates secret named image-pull-secret)

See Image pull secret configuration for detailed instructions.
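As a sketch, the pre-created-secret option looks like this in values.yaml; the secret name is a placeholder:

```yaml
# Reference an existing Kubernetes secret with Docker registry credentials.
# "astro-registry-creds" is a placeholder; create the secret beforehand.
imagePullSecretName: astro-registry-creds
```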

Kubernetes namespace

namespace

Kubernetes namespace where the agent will be deployed.

  • If createNamespace: true, Helm creates the namespace
  • If createNamespace: false, namespace must exist before installation

If you use agentTokenSecretName and imagePullSecretName, set createNamespace: false and create the namespace manually with both secrets already present.

See Install in restricted Kubernetes namespace for restricted namespace configuration.
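The manual-namespace pattern described above can be sketched as a values.yaml fragment; the namespace name is a placeholder:

```yaml
# Pre-created namespace that already contains the token and image pull secrets.
namespace: astro-agents
createNamespace: false
```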

Resource name prefix

resourceNamePrefix

Name prefix for all Kubernetes resources (Deployments, ConfigMaps, Secrets) created by the Helm chart.

Secrets backend

secretBackend

Airflow secrets backend class for accessing connections and variables. Required for agent operation.

Supported backends:

  • airflow.providers.amazon.aws.secrets.secrets_manager.SecretsManagerBackend
  • airflow.providers.microsoft.azure.secrets.key_vault.AzureKeyVaultBackend
  • airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend
  • airflow.providers.hashicorp.secrets.vault.VaultBackend
  • airflow.secrets.local_filesystem.LocalFilesystemBackend (not recommended for production)

See Configure secrets backend for detailed configuration instructions.
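As a minimal sketch, selecting the AWS backend from the supported list above is a single values.yaml key; backend-specific parameters (such as prefixes) are passed separately, for example through commonEnv:

```yaml
# Use AWS Secrets Manager as the Airflow secrets backend.
secretBackend: airflow.providers.amazon.aws.secrets.secrets_manager.SecretsManagerBackend
```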

XCom backend

xcomBackend

Airflow XCom backend class for passing data between tasks. Required for agent operation.

Typically set to: airflow.providers.common.io.xcom.backend.XComObjectStorageBackend

See Configure XCom backend for detailed configuration instructions.

DAG bundles

dagBundleConfigList

JSON string defining how agents access DAG code. Required for running DAGs.

See Configure DAG sources for detailed configuration instructions.
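A sketch of a git-backed bundle follows. The JSON schema shown is assumed from Airflow 3's DAG bundle configuration; the bundle name, classpath kwargs, and values are placeholders, so check the linked instructions for the exact fields this chart expects:

```yaml
# Illustrative only: one git-backed DAG bundle, serialized as a JSON string.
dagBundleConfigList: |
  [
    {
      "name": "example-dags",
      "classpath": "airflow.dag_processing.bundles.git.GitDagBundle",
      "kwargs": {
        "tracking_ref": "main",
        "subdir": "dags"
      }
    }
  ]
```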

Common environment variables

commonEnv

Environment variables applied to all agent components (worker, DAG processor, triggerer). Used to configure secrets backend parameters, XCom paths, logging settings, and other Airflow configuration.

Example:

commonEnv:
  - name: AIRFLOW__SECRETS__BACKEND_KWARGS
    value: '{"connections_prefix": "airflow/connections", "variables_prefix": "airflow/variables"}'
  - name: AIRFLOW__COMMON_IO__XCOM_OBJECTSTORAGE_PATH
    value: "s3://bucket/xcom"

Worker resource configuration

workers

Workers are configured as a list in values.yaml. Each entry defines a worker Deployment with its own name, resource allocation, replica count, and optional queue assignment.

| Parameter | Description | Default |
| --- | --- | --- |
| name | Unique identifier for the worker Deployment. Used in Kubernetes resource names. | default-worker |
| replicas | Number of worker Pod replicas. Ignored when hpa.enabled is true. | 1 |
| queues | Comma-separated list of Airflow queues this worker listens on. | default |
| resources.requests.cpu | Minimum CPU allocated to the worker Pod. | |
| resources.requests.memory | Minimum memory allocated to the worker Pod. | |
| resources.limits.cpu | Maximum CPU the worker Pod can use. | |
| resources.limits.memory | Maximum memory the worker Pod can use. | |
| env | List of environment variables specific to this worker. | [] |
| volumes | Additional volumes to mount on the worker Pod. | [] |
| volumeMounts | Mount paths for the additional volumes. | [] |
| nodeSelector | Kubernetes node selector for scheduling worker Pods. | {} |
| tolerations | Kubernetes tolerations for scheduling worker Pods. | [] |
| serviceAccount.name | Custom service account name. Overrides the default {{ resourceNamePrefix }}-worker-{{ worker.name }}. | |
| serviceAccount.create | Whether the Helm chart creates the service account. Set to false when using a pre-existing service account. | true |

Example with two workers:

workers:
  - name: default-worker
    replicas: 2
    queues: "default"
    resources:
      requests:
        cpu: "500m"
        memory: "1Gi"
      limits:
        cpu: "2"
        memory: "4Gi"
  - name: high-memory-worker
    replicas: 1
    queues: "high-memory"
    resources:
      requests:
        cpu: "1"
        memory: "4Gi"
      limits:
        cpu: "4"
        memory: "16Gi"

When you configure multiple workers, each worker creates a separate Kubernetes Deployment. The service account name for each worker defaults to {{ resourceNamePrefix }}-worker-{{ worker.name }}. If you use IRSA (AWS), Workload Identity (GCP), or managed identity (Azure), annotate each worker’s service account.

Horizontal Pod Autoscaler

workers[].hpa

Each worker supports a Horizontal Pod Autoscaler (HPA) configuration to automatically scale the number of worker Pod replicas based on resource utilization or custom metrics.

When hpa.enabled is true, the Helm chart creates a HorizontalPodAutoscaler resource for the worker Deployment. The replicas value is ignored because the HPA controls replica count.

| Parameter | Description | Default |
| --- | --- | --- |
| hpa.enabled | Enable the Horizontal Pod Autoscaler for this worker. | false |
| hpa.minReplicas | Minimum number of worker Pod replicas. | 1 |
| hpa.maxReplicas | Maximum number of worker Pod replicas. | 10 |
| hpa.metrics | List of metric targets that the HPA uses to make scaling decisions. Follows the Kubernetes HPA metrics spec. | |

Example with CPU-based autoscaling:

workers:
  - name: default-worker
    queues: "default"
    resources:
      requests:
        cpu: "500m"
        memory: "1Gi"
      limits:
        cpu: "2"
        memory: "4Gi"
    hpa:
      enabled: true
      minReplicas: 1
      maxReplicas: 5
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 80

You must set resources.requests for the metrics you use in HPA targets. For example, CPU-based autoscaling requires resources.requests.cpu to be set. Without resource requests, the HPA cannot calculate utilization percentages.

You can combine multiple metrics to define more sophisticated scaling behavior. The HPA evaluates all specified metrics and scales to the highest recommended replica count.
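The "highest recommended replica count" rule above follows the standard Kubernetes HPA formula, desiredReplicas = ceil(currentReplicas × currentValue / targetValue), evaluated per metric. This helper and its numbers are purely illustrative, not part of the chart:

```python
import math

def desired_replicas(current_replicas, metrics, min_replicas, max_replicas):
    """Sketch of the Kubernetes HPA scaling decision.

    metrics is a list of (current_utilization, target_utilization) pairs;
    each yields ceil(currentReplicas * current / target), and the highest
    proposal wins, clamped to [min_replicas, max_replicas].
    """
    proposals = [
        math.ceil(current_replicas * current / target)
        for current, target in metrics
    ]
    return max(min_replicas, min(max_replicas, max(proposals)))

# CPU at 90% against a 75% target, memory at 40% against an 80% target:
# CPU proposes ceil(4 * 90/75) = 5, memory proposes 2, so the HPA scales to 5.
print(desired_replicas(4, [(90, 75), (40, 80)], 2, 10))  # 5
```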

Example with CPU and memory metrics:

    hpa:
      enabled: true
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 75
        - type: Resource
          resource:
            name: memory
            target:
              type: Utilization
              averageUtilization: 80

Optional configuration

Logging sidecar

loggingSidecar

Optional sidecar for exporting task logs to external platforms or viewing logs in the Airflow UI before task completion.

See Configure logging sidecar for configuration instructions.

OpenLineage

openLineage

Optional configuration for data lineage collection.

You must configure OpenLineage to use Astro Observe with Remote Execution Deployments.

See Configure OpenLineage for configuration instructions.

Sentinel monitoring

sentinel

Optional monitoring sidecar (agent version 1.2.0+).

See Enable Sentinel monitoring for configuration instructions.

Cloud provider annotations

annotations and labels

Kubernetes annotations and labels that configure Pods to run with a specific IAM role (AWS), workload identity (GCP), or managed identity (Azure).
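An AWS IRSA sketch follows; the role ARN is a placeholder, and whether this chart applies the annotation to Pods or to service accounts is assumed here, so confirm the key and placement for your cloud provider:

```yaml
# Illustrative IRSA annotation; replace the ARN with your own role.
annotations:
  eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/astro-agent
```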

Helm commands

After the Remote Execution Agent is installed, apply any configuration updates with the helm upgrade command.

Install agent

$ helm repo add astronomer https://helm.astronomer.io
$ helm repo update
$ helm install astro-agent astronomer/astro-remote-execution-agent -f values.yaml

Update agent

$ helm upgrade astro-agent astronomer/astro-remote-execution-agent -f values.yaml

View current configuration

$ helm get values astro-agent