Scale Airflow resources

Configure CPU, memory, and replica settings for the components of an Airflow Deployment: the scheduler, webserver, workers, triggerer, Dag processor, and API server.

Resource values are plain integers: CPU in millicpu and memory in MiB.
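To make the units concrete, here is a hypothetical `requests` block annotated with conventional equivalents:

```yaml
resources:
  requests:
    cpu: 500      # 500 millicpu = 0.5 vCPU
    memory: 1920  # 1920 MiB = 1.875 GiB
```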

Component resources

Scheduler

The scheduler orchestrates Dag runs, queuing tasks once their dependencies are met.

```yaml
scheduler:
  replicas: 1
  resources:
    requests:
      cpu: 500
      memory: 1920
    limits:
      cpu: 1000
      memory: 3840
```

Scaling considerations

  • Add replicas for high availability.
  • Increase memory for Deployments with many Dags or complex dependency graphs.
  • Set safeToEvict: false to prevent the cluster autoscaler from evicting scheduler pods.
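As a sketch of the last point, assuming `safeToEvict` sits directly under the component block (verify the exact key path for your platform version):

```yaml
scheduler:
  replicas: 2        # second replica for high availability
  safeToEvict: false # ask the cluster autoscaler not to evict scheduler pods
```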

Webserver

```yaml
webserver:
  resources:
    requests:
      cpu: 500
      memory: 1920
    limits:
      cpu: 1000
      memory: 3840
```

API server (Airflow 3+)

```yaml
apiServer:
  replicas: 1
  resources:
    requests:
      cpu: 1000
      memory: 3840
    limits:
      cpu: 2000
      memory: 7680
```

Dag processor (Airflow 2.3+, required in Airflow 3)

```yaml
dagProcessor:
  enabled: true
  replicas: 1
  resources:
    requests:
      cpu: 1000
      memory: 3840
    limits:
      cpu: 2000
      memory: 7680
```

In Airflow 2, the Dag processor defaults to 0 replicas and must be explicitly enabled. In Airflow 3, Houston automatically sets dagProcessor.enabled: true and enforces a minimum of 1 replica regardless of configuration.

Triggerer

```yaml
triggerer:
  replicas: 1
  resources:
    requests:
      cpu: 500
      memory: 1920
    limits:
      cpu: 1000
      memory: 3840
```

Workers (Celery Executor)

```yaml
workers:
  replicas: 2
  resources:
    requests:
      cpu: 1000
      memory: 3840
    limits:
      cpu: 2000
      memory: 7680
  # Give running tasks up to 10 minutes to finish before the pod is killed.
  terminationGracePeriodSeconds: 600
```

Sizing recommendations

Small workloads (fewer than 50 Dags)

```yaml
scheduler:
  resources:
    requests: { cpu: 500, memory: 1920 }
    limits: { cpu: 1000, memory: 3840 }
workers:
  replicas: 1
  resources:
    requests: { cpu: 1000, memory: 3840 }
```

Medium workloads (50–200 Dags)

```yaml
scheduler:
  resources:
    requests: { cpu: 500, memory: 1920 }
    limits: { cpu: 1000, memory: 3840 }
dagProcessor:
  enabled: true
  resources:
    requests: { cpu: 1000, memory: 3840 }
workers:
  replicas: 3
  resources:
    requests: { cpu: 1000, memory: 3840 }
```

Large workloads (more than 200 Dags)

```yaml
scheduler:
  replicas: 2
  resources:
    requests: { cpu: 1000, memory: 3840 }
    limits: { cpu: 2000, memory: 7680 }
dagProcessor:
  enabled: true
  replicas: 2
  resources:
    requests: { cpu: 1000, memory: 3840 }
workers:
  replicas: 10
```

Autoscale workers with KEDA

Kubernetes Event-driven Autoscaling (KEDA) scales Celery workers based on task queue depth. Enable KEDA for a Deployment using the updateDeploymentKedaConfig mutation:

```graphql
mutation {
  updateDeploymentKedaConfig(
    deploymentUuid: "<deployment-uuid>"
    state: true
  ) {
    id
    label
  }
}
```
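Conceptually, queue-based autoscaling targets enough workers to cover the current task load at a given per-worker concurrency, clamped to configured bounds. A simplified sketch of that arithmetic (the divisor 16 mirrors Celery's default worker concurrency; the `min_workers`/`max_workers` bounds are illustrative, not platform defaults):

```python
import math

def desired_workers(running: int, queued: int,
                    concurrency: int = 16,
                    min_workers: int = 1,
                    max_workers: int = 10) -> int:
    """Approximate a queue-depth-based target: enough workers to cover
    all running and queued tasks at the given per-worker concurrency,
    clamped to the configured replica bounds."""
    target = math.ceil((running + queued) / concurrency)
    return max(min_workers, min(max_workers, target))

print(desired_workers(running=10, queued=30))  # 40 tasks / 16 per worker -> 3
```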

Monitor resources

```bash
# View current resource usage
kubectl top pods -n <deployment-namespace>

# Check resource limits
kubectl describe pod <pod-name> -n <deployment-namespace>
```