Helm chart configuration reference

This reference describes configuration values for the Remote Execution Agent Helm chart. For complete configuration options, see the values.yaml file downloaded from the Astro UI.

Required configuration values

The following values must be configured before installing the Helm chart:

Agent authentication

agentToken / agentTokenSecretName / agentTokenFile

You must specify exactly one of these to provide the agent token generated in the Astro UI.

  • agentToken: Token value as plain text in values.yaml (not recommended for production)
  • agentTokenSecretName: Name of existing Kubernetes secret containing the token
  • agentTokenFile: Path to file containing the token (agent reads at runtime)

See Agent token configuration for detailed instructions.
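For example, to reference a pre-created Kubernetes secret, set agentTokenSecretName in values.yaml. The secret name below is a placeholder for illustration; use the name of the secret you created.

```yaml
# Reference an existing Kubernetes secret that holds the agent token.
# "astro-agent-token" is a placeholder name, not a required value.
agentTokenSecretName: astro-agent-token
```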

Image registry access

imagePullSecretName / imagePullSecretData

You must specify exactly one of these to allow agents to pull images from the registry.

  • imagePullSecretName: Name of existing Kubernetes secret with Docker credentials
  • imagePullSecretData: Docker config JSON as string (Helm creates secret named image-pull-secret)

See Image pull secret configuration for detailed instructions.
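As a sketch, assuming you have already created a Docker registry secret in the target namespace (for example with kubectl create secret docker-registry), reference it by name:

```yaml
# Reference an existing Docker registry secret in the agent's namespace.
# "astro-registry-creds" is a placeholder name for illustration.
imagePullSecretName: astro-registry-creds
```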

Kubernetes namespace

namespace

Kubernetes namespace where the agent will be deployed.

  • If createNamespace: true, Helm creates the namespace
  • If createNamespace: false, namespace must exist before installation

If you use agentTokenSecretName and imagePullSecretName, set createNamespace: false and create the namespace manually with the secrets already present.


See Install in restricted Kubernetes namespace for restricted namespace configuration.
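For example, when the namespace and secrets are created ahead of time (the restricted-namespace pattern described above), the relevant values might look like the following. The namespace and secret names are placeholders for illustration.

```yaml
namespace: astro-agents          # must already exist when createNamespace is false
createNamespace: false
agentTokenSecretName: astro-agent-token      # placeholder: pre-created token secret
imagePullSecretName: astro-registry-creds    # placeholder: pre-created registry secret
```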

Resource name prefix

resourceNamePrefix

Name prefix for all Kubernetes resources (Deployments, ConfigMaps, Secrets) created by the Helm chart.
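For example, with a placeholder prefix, derived names such as the default worker service account follow the {{ resourceNamePrefix }}-worker-{{ worker.name }} pattern described later in this reference:

```yaml
# "prod-agent" is a placeholder prefix. With a worker named "default-worker",
# the default worker service account becomes prod-agent-worker-default-worker.
resourceNamePrefix: prod-agent
```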

Secrets backend

secretBackend

Airflow secrets backend class for accessing connections and variables. Required for agent operation.

Supported backends:

  • airflow.providers.amazon.aws.secrets.secrets_manager.SecretsManagerBackend
  • airflow.providers.microsoft.azure.secrets.key_vault.AzureKeyVaultBackend
  • airflow.providers.google.cloud.secrets.secret_manager.CloudSecretManagerBackend
  • airflow.providers.hashicorp.secrets.vault.VaultBackend
  • airflow.secrets.local_filesystem.LocalFilesystemBackend (not recommended for production)

See Configure secrets backend for detailed configuration instructions.
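For example, to use AWS Secrets Manager, set secretBackend to one of the supported classes listed above and pass its parameters through commonEnv. The prefixes shown are illustrative; adjust them to match how your secrets are named.

```yaml
secretBackend: airflow.providers.amazon.aws.secrets.secrets_manager.SecretsManagerBackend
commonEnv:
  - name: AIRFLOW__SECRETS__BACKEND_KWARGS
    value: '{"connections_prefix": "airflow/connections", "variables_prefix": "airflow/variables"}'
```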

XCom backend

xcomBackend

Airflow XCom backend class for passing data between tasks. Required for agent operation.

Typically set to: airflow.providers.common.io.xcom.backend.XComObjectStorageBackend

See Configure XCom backend for detailed configuration instructions.
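A minimal sketch combining the typical backend class with an object storage path set through commonEnv; the s3:// path is a placeholder bucket:

```yaml
xcomBackend: airflow.providers.common.io.xcom.backend.XComObjectStorageBackend
commonEnv:
  - name: AIRFLOW__COMMON_IO__XCOM_OBJECTSTORAGE_PATH
    value: "s3://bucket/xcom"   # placeholder: your XCom object storage path
```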

DAG bundles

dagBundleConfigList

JSON string defining how agents access DAG code. Required for running DAGs.

See Configure DAG sources for detailed configuration instructions.
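As a sketch only: dagBundleConfigList is a JSON string (written here with YAML block scalar syntax) listing bundle definitions. The bundle name, classpath, and kwargs below are unverified placeholders; see Configure DAG sources for the exact bundle classes and parameters your setup requires.

```yaml
# Illustrative only: the classpath and kwargs are placeholders, not verified values.
dagBundleConfigList: |
  [
    {
      "name": "example-bundle",
      "classpath": "airflow.dag_processing.bundles.git.GitDagBundle",
      "kwargs": {"git_conn_id": "example_git", "tracking_ref": "main"}
    }
  ]
```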

Common environment variables

commonEnv

Environment variables applied to all agent components (worker, DAG processor, triggerer). Used to configure secrets backend parameters, XCom paths, logging settings, and other Airflow configuration.

Example:

commonEnv:
  - name: AIRFLOW__SECRETS__BACKEND_KWARGS
    value: '{"connections_prefix": "airflow/connections", "variables_prefix": "airflow/variables"}'
  - name: AIRFLOW__COMMON_IO__XCOM_OBJECTSTORAGE_PATH
    value: "s3://bucket/xcom"

Worker resource configuration

workers

Workers are configured as a list in values.yaml. Each entry defines a worker Deployment with its own name, resource allocation, replica count, and optional queue assignment.

  • name: Unique identifier for the worker Deployment. Used in Kubernetes resource names. Default: default-worker
  • replicas: Number of worker Pod replicas. Ignored when hpa.enabled is true. Default: 1
  • queues: Comma-separated list of Airflow queues this worker listens on. Default: default
  • resources.requests.cpu: Minimum CPU allocated to the worker Pod.
  • resources.requests.memory: Minimum memory allocated to the worker Pod.
  • resources.limits.cpu: Maximum CPU the worker Pod can use.
  • resources.limits.memory: Maximum memory the worker Pod can use.
  • env: List of environment variables specific to this worker. Default: []
  • volumes: Additional volumes to mount on the worker Pod. Default: []
  • volumeMounts: Mount paths for the additional volumes. Default: []
  • nodeSelector: Kubernetes node selector for scheduling worker Pods. Default: {}
  • tolerations: Kubernetes tolerations for scheduling worker Pods. Default: []
  • serviceAccount.name: Custom service account name. Overrides the default {{ resourceNamePrefix }}-worker-{{ worker.name }}.
  • serviceAccount.create: Whether the Helm chart creates the service account. Set to false when using a pre-existing service account. Default: true
  • terminationGracePeriodSeconds: The grace period for the worker Pod to finish existing tasks before terminating. Default: 600

Example with two workers:

workers:
  - name: default-worker
    replicas: 2
    queues: "default"
    resources:
      requests:
        cpu: "500m"
        memory: "1Gi"
      limits:
        cpu: "2"
        memory: "4Gi"
  - name: high-memory-worker
    replicas: 1
    queues: "high-memory"
    resources:
      requests:
        cpu: "1"
        memory: "4Gi"
      limits:
        cpu: "4"
        memory: "16Gi"

When you configure multiple workers, each worker creates a separate Kubernetes Deployment. The service account name for each worker defaults to {{ resourceNamePrefix }}-worker-{{ worker.name }}. If you use IRSA (AWS), Workload Identity (GCP), or managed identity (Azure), annotate each worker’s service account.

Horizontal Pod Autoscaler

workers[].hpa

Each worker supports a Horizontal Pod Autoscaler (HPA) configuration to automatically scale the number of worker Pod replicas based on resource utilization or custom metrics.

When hpa.enabled is true, the Helm chart creates a HorizontalPodAutoscaler resource for the worker Deployment. The replicas value is ignored because the HPA controls replica count.

  • hpa.enabled: Enable the Horizontal Pod Autoscaler for this worker. Default: false
  • hpa.minReplicas: Minimum number of worker Pod replicas. Default: 1
  • hpa.maxReplicas: Maximum number of worker Pod replicas. Default: 10
  • hpa.metrics: List of metric targets that the HPA uses to make scaling decisions. Follows the Kubernetes HPA metrics spec.

Example with CPU-based autoscaling:

workers:
  - name: default-worker
    queues: "default"
    resources:
      requests:
        cpu: "500m"
        memory: "1Gi"
      limits:
        cpu: "2"
        memory: "4Gi"
    hpa:
      enabled: true
      minReplicas: 1
      maxReplicas: 5
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 80

You must set resources.requests for the metrics you use in HPA targets. For example, CPU-based autoscaling requires resources.requests.cpu to be set. Without resource requests, the HPA cannot calculate utilization percentages.

You can combine multiple metrics to define more sophisticated scaling behavior. The HPA evaluates all specified metrics and scales to the highest recommended replica count.

Example with CPU and memory metrics:

    hpa:
      enabled: true
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 75
        - type: Resource
          resource:
            name: memory
            target:
              type: Utilization
              averageUtilization: 80

Triggerer resource configuration

triggerer

The triggerer runs deferred tasks asynchronously. Configure the triggerer to control replica count, async capacity, resource allocation, and Pod-level settings.

  • replicas: Number of triggerer Pod replicas. Default: 1
  • asyncSlots: Number of concurrent triggers the triggerer Pod can run. Default: 1000
  • image: Docker image for the triggerer. Defaults to the top-level image if not set.
  • imagePullPolicy: Image pull policy for the triggerer. Defaults to the top-level imagePullPolicy if not set.
  • resources.limits.cpu: Maximum CPU the triggerer Pod can use. Default: 1
  • resources.limits.ephemeral-storage: Maximum ephemeral storage the triggerer Pod can use. Default: 1Gi
  • resources.limits.memory: Maximum memory the triggerer Pod can use. Default: 2Gi
  • resources.requests.cpu: Minimum CPU allocated to the triggerer Pod. Default: 1
  • resources.requests.ephemeral-storage: Minimum ephemeral storage allocated to the triggerer Pod. Default: 1Gi
  • resources.requests.memory: Minimum memory allocated to the triggerer Pod. Default: 2Gi
  • env: List of environment variables specific to the triggerer. Default: []
  • livenessProbe: Liveness probe configuration for the triggerer Pod. Default: see the following example
  • readinessProbe: Readiness probe configuration for the triggerer Pod. Default: see the following example
  • podSecurityContext: Pod security context for the triggerer Pod. By default, the agent runs as a non-root user with UID 50000 and group ID 50000. Default: ~
  • containerSecurityContext: Security context for the triggerer container. Default: {}
  • initContainers: Init containers to add to the triggerer Pod. Default: []
  • extraContainers: Sidecar containers to add to the triggerer Pod. Default: []
  • volumes: Additional volumes to mount on the triggerer Pod. Default: []
  • volumeMounts: Mount paths for the additional volumes. Default: []
  • nodeSelector: Kubernetes node selector for scheduling triggerer Pods. Default: ~
  • affinity: Affinity rules for the triggerer Pod. Default: {}
  • tolerations: Kubernetes tolerations for scheduling triggerer Pods. Default: []

Example:

triggerer:
  replicas: 2
  asyncSlots: 500
  resources:
    limits:
      cpu: "1"
      ephemeral-storage: "1Gi"
      memory: "2Gi"
    requests:
      cpu: "1"
      ephemeral-storage: "1Gi"
      memory: "2Gi"
  livenessProbe:
    httpGet:
      path: healthz
      port: 39091
    initialDelaySeconds: 30
    periodSeconds: 5
    failureThreshold: 3
    successThreshold: 1
    timeoutSeconds: 5
  readinessProbe:
    httpGet:
      path: healthz
      port: 39091
    initialDelaySeconds: 30
    periodSeconds: 5
    failureThreshold: 3
    successThreshold: 1
    timeoutSeconds: 5
  podSecurityContext:
    runAsUser: 50000
    fsGroup: 50000
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containerSecurityContext:
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    capabilities:
      drop:
        - ALL
  initContainers:
    - name: vault-agent
      image: vault:1.13.0
      command: ["vault", "agent", "-config=/etc/vault/config.hcl"]
      volumeMounts:
        - name: vault-config
          mountPath: /etc/vault
      env:
        - name: VAULT_ADDR
          value: "https://vault.example.com"
        - name: VAULT_TOKEN
          valueFrom:
            secretKeyRef:
              key: token
              name: vault-token
  extraContainers:
    - name: logging-sidecar
      image: timberio/vector:0.45.0-debian
      env:
        - name: VECTOR_CONFIG
          value: /etc/vector/vector.yaml
      volumeMounts:
        - name: vector-config
          mountPath: /etc/vector
      resources:
        limits:
          cpu: "0.5"
          memory: "256Mi"
        requests:
          cpu: "0.5"
          memory: "256Mi"
  volumes:
    - name: task-logs
      emptyDir: {}
    - name: vector-config
      configMap:
        name: vector-config
  volumeMounts:
    - name: task-logs
      mountPath: /var/log/airflow
      readOnly: true
  nodeSelector:
    node.kubernetes.io/instance-type: c5.large

To run multiple triggerers, increase replicas; all configured replicas run continuously. For namespaces with the Pod Security Standards profile set to restricted, configure podSecurityContext and containerSecurityContext to meet your cluster's requirements.

Optional configuration

Logging sidecar

loggingSidecar

Optional sidecar for exporting task logs to external platforms or viewing logs in the Airflow UI before task completion.

See Configure logging sidecar for configuration instructions.

OpenLineage

openLineage

Optional configuration for data lineage collection.

You must configure OpenLineage to use Astro Observe with Remote Execution Deployments.

See Configure OpenLineage for configuration instructions.

Sentinel monitoring

sentinel

Monitoring service for agent health reporting (agent version 1.2.0+). Astronomer recommends enabling Sentinel for all deployments.

See Sentinel for Remote Execution Agents for configuration instructions.

Cloud provider annotations

annotations and labels

Kubernetes annotations and labels that configure Pods to run with a specific IAM role (AWS), workload identity (GCP), or managed identity (Azure).
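For example, a sketch for AWS IRSA, assuming the chart propagates these annotations to the agent's Pods and service accounts; the role ARN below is a placeholder:

```yaml
annotations:
  # Placeholder role ARN: replace with the IAM role your agent should assume.
  eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/astro-agent-role
```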

Helm commands

After you install the Remote Execution Agent, apply any configuration updates with the helm upgrade command.

Install agent

helm repo add astronomer https://helm.astronomer.io
helm repo update
helm install astro-agent astronomer/astro-remote-execution-agent -f values.yaml

Update agent

helm upgrade astro-agent astronomer/astro-remote-execution-agent -f values.yaml

View current configuration

helm get values astro-agent