Scale Airflow resources
Configure CPU, memory, and replica settings for Airflow Deployment components including scheduler, webserver, workers, triggerer, Dag processor, and API server.
Resource values are plain integers: CPU in millicpu (1000 millicpu = 1 vCPU) and memory in MiB (mebibytes).
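As an illustration of these integer units, a component resource configuration might take the following shape. This is a hypothetical sketch only: the key layout is assumed for illustration and is not the exact Deployment schema.

```yaml
# Hypothetical sketch: per-component resources as plain integers.
# CPU is in millicpu (1000 = 1 vCPU); memory is in MiB.
scheduler:
  replicas: 2
  resources:
    requests:
      cpu: 1000     # 1 vCPU
      memory: 2048  # 2 GiB
workers:
  replicas: 3
  resources:
    requests:
      cpu: 2000     # 2 vCPU
      memory: 4096  # 4 GiB
```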
Component resources
Scheduler
The scheduler orchestrates Dag runs and task scheduling.
Scaling considerations
- Run additional replicas for high availability.
- Increase memory for complex Dag dependencies.
- Set `safeToEvict: false` to prevent eviction by the cluster autoscaler.
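Settings like `safeToEvict` typically map to the standard Kubernetes cluster-autoscaler pod annotation `cluster-autoscaler.kubernetes.io/safe-to-evict`. A sketch of both forms follows; the YAML key placement is an assumption for illustration, while the annotation itself is standard cluster-autoscaler behavior.

```yaml
# Sketch: keep scheduler pods off the cluster autoscaler's eviction list.
scheduler:
  safeToEvict: false
# Corresponding pod annotation (standard cluster-autoscaler convention):
#   cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
```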
Webserver
API server (Airflow 3+)
Dag processor (Airflow 2.3+, required in Airflow 3)
In Airflow 2, the Dag processor defaults to 0 replicas and must be explicitly enabled. In Airflow 3, Houston automatically sets dagProcessor.enabled: true and enforces a minimum of 1 replica regardless of configuration.
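On Airflow 2, explicitly enabling the Dag processor could look like the sketch below. The `dagProcessor.enabled` key follows the text above; the surrounding layout is assumed for illustration.

```yaml
# Sketch: enable the standalone Dag processor on Airflow 2
# (defaults to 0 replicas unless enabled).
# On Airflow 3, Houston forces enabled: true and a minimum of 1 replica.
dagProcessor:
  enabled: true
  replicas: 1
```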
Triggerer
Workers (Celery Executor)
Sizing recommendations
Small workloads (fewer than 50 Dags)
Medium workloads (50–200 Dags)
Large workloads (more than 200 Dags)
Autoscale workers with KEDA
Kubernetes Event-driven Autoscaling (KEDA) scales Celery workers based on task queue depth. Enable KEDA for a Deployment using the updateDeploymentKedaConfig mutation:
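A sketch of what that mutation call might look like follows. Only the mutation name `updateDeploymentKedaConfig` comes from the text above; the argument and return-field names are assumptions added to illustrate the shape of a GraphQL call, not the confirmed Houston schema.

```graphql
# Sketch only: argument and field names are illustrative.
mutation {
  updateDeploymentKedaConfig(
    deploymentUuid: "<deployment-uuid>"  # your Deployment's UUID
    kedaEnabled: true
  ) {
    id
  }
}
```

With KEDA enabled, worker replicas scale up as tasks queue in Celery and scale back down when the queue drains, rather than running a fixed replica count.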