Version: Airflow 3.x

Apache Airflow® Executors

Executors are a configuration property of the scheduler process of every Apache Airflow® environment. The executor you choose for a task determines where and how that task runs. You can choose from several pre-configured executors designed for different use cases, or you can define a custom executor. Airflow 2.10 introduced the experimental multiple executor configuration feature, which allows you to specify executors for individual tasks without needing pre-configured combined executor options. Airflow 3.0 introduced the Edge Executor, which means tasks can now run either where you deploy Airflow or on other infrastructure with remote execution.

In this guide, you'll learn how to choose and configure executors in Airflow.

info

To learn more about how to adjust scaling parameters for task execution in Airflow, see Scaling Airflow to optimize performance.

Assumed knowledge

To get the most out of this guide, you should have an understanding of:

Executor Types

  • LocalExecutor: The LocalExecutor runs each task instance in a separate process on the same machine as the scheduler, enabling parallel execution without requiring external workers or message brokers. This executor is a good choice for development environments and local testing; it can also be used for very lightweight production use cases.

  • EdgeExecutor: A specialized executor introduced with Airflow 3 that runs tasks in remote environments. This executor is a good choice for running task instances with full task isolation for users running open-source Airflow. The EdgeExecutor is part of the edge3 provider.

    • AstroExecutor: Astro users have access to the AstroExecutor instead of the EdgeExecutor. It offers improved reliability over the CeleryExecutor and shorter task instance startup times than the KubernetesExecutor. It also enables Remote Execution, letting Astro customers run task instances in remote environments with full task instance isolation. This is ideal for task instances handling sensitive data that cannot leave a remote location.
  • CeleryExecutor: Uses a Celery backend (such as Redis, RabbitMQ, Redis Sentinel, or another message queue system) to coordinate tasks between pre-configured workers. This executor is ideal for high volumes of short-running tasks or for environments with consistent task loads. The CeleryExecutor is available as part of the Celery provider.

  • KubernetesExecutor: Calls the Kubernetes API to create a separate Kubernetes pod for each task to run, enabling users to pass in custom configurations for each of their tasks and use resources efficiently. The KubernetesExecutor is available as part of the CNCF Kubernetes provider.

There are other available executors that we don't describe here, like the experimental AWS ECS Executor and the experimental AWS Batch Executor, but these are used much less frequently.

Additionally, you can write and use your own custom executor, although this is also uncommon.

Choosing an executor

For small local testing and development, Astronomer recommends using:

  • LocalExecutor:

    • 🔥 Pros: Simple setup, no external components needed, parallelism on a single machine, good for small to medium DAGs.

    • ⚠️ Cons: Limited to one machine’s resources, no task distribution across hosts, not suitable for large-scale workloads.

    • Possible Usage Scenarios:

    • You’re running Airflow on a single machine but want to execute multiple tasks in parallel (unlike the old SequentialExecutor, which ran one task at a time). The LocalExecutor uses Python’s multiprocessing to run tasks in parallel locally. Great for development, testing, or very small-scale production setups.

    • You’re using Airflow in a CI/CD environment or for running lightweight DAGs where setting up a full distributed executor (like Celery or Kubernetes) is overkill. LocalExecutor gives you concurrency without the complexity of managing additional services.
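The parallelism described above comes down to Python's multiprocessing module. The following is a rough, self-contained sketch of that mechanism, not Airflow's actual implementation; the names `run_task` and `run_tasks_in_parallel` are illustrative, not Airflow APIs:

```python
# Rough sketch of the mechanism LocalExecutor builds on: running independent
# "tasks" in separate OS processes with Python's multiprocessing module.
# run_task and run_tasks_in_parallel are illustrative names, not Airflow APIs.
from multiprocessing import Pool

def run_task(task_id: str) -> str:
    # Stand-in for a task instance; a real task runs arbitrary user code.
    return f"{task_id}: done"

def run_tasks_in_parallel(task_ids):
    # Each task is handed to a separate worker process, similar in spirit to
    # how LocalExecutor launches a process per task instance.
    with Pool(processes=3) as pool:
        return pool.map(run_task, task_ids)

if __name__ == "__main__":
    print(run_tasks_in_parallel(["extract", "transform", "load"]))
```

Because the work happens in separate processes rather than threads, tasks run truly in parallel on multiple cores, but all of them still share one machine's CPU and memory, which is the LocalExecutor's main limitation.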

For all other production use-cases, you should use one of the following executors:

  • CeleryExecutor:

    • 🔥 Pros: Scalable, mature, works well for distributed execution, supports queues, fault-tolerant.

    • ⚠️ Cons: Requires extra infra (broker + workers), more operational overhead, not cloud-native, trickier debugging.

    • Possible Usage Scenarios:

    • Ideal when your DAGs are complex or you need to scale horizontally. You can run multiple Celery workers across different machines to distribute task execution and handle high throughput.

    • Great for teams running Airflow in production with consistent workloads. It offers high scalability, task queueing, and monitoring through a message broker like RabbitMQ or Redis.

  • KubernetesExecutor:

    • 🔥 Pros: Cloud-native, autoscaling via pods, strong task isolation, great resource efficiency, flexible with Docker images.

    • ⚠️ Cons: Slower task startup (cold pods), requires in-depth Kubernetes knowledge to configure properly, log handling and debugging can be tricky.

    • Possible Usage Scenarios:

    • Each task runs in its own Kubernetes pod, which is perfect for isolating dependencies, handling resource requests/limits, or running custom Docker images per task.

    • Perfect for cloud-native environments where you want to scale workloads dynamically without pre-provisioning workers. Tasks consume resources only when running.

Or use a remote executor:

Remote Executors let you run a task somewhere else: not on your worker or in your pod, but on any remote system that can receive the task payload, run it, and send back results (or at least report success or failure).

Situations where you might want to use Remote Execution include:

  • Running tasks that need access to/use sensitive data that cannot leave a secured environment, such as an on-prem server. This is a common requirement in highly regulated industries, such as financial services and healthcare.

  • Running tasks that require specialized compute, for example, a GPU or TPU machine to train neural networks.

When using Remote Execution, only the information essential for running the task, such as scheduling details and heartbeat pings, is available to Airflow system components; everything else stays within the remote environment.

Which remote executor you select depends on how you're running Airflow:

  • EdgeExecutor: (For OSS Airflow Users)

  • AstroExecutor: (For Astro Users)

With remote executors, Airflow itself becomes the central coordinator, while the hands-on task execution happens remotely, where the data resides. This brings efficiency and other benefits to how tasks are executed in Airflow.

Configure your executor on Astro

Astro users can choose and configure their executors when creating a deployment. See Manage Airflow executors on Astro for more information. Astro supports the AstroExecutor, CeleryExecutor, and the KubernetesExecutor.

Astronomer's open-source local development environment, the Astro CLI, uses the LocalExecutor.

Configure your executor for self-hosted Airflow

When working with self-hosted Airflow solutions, you can set your executor using the core.executor Airflow config variable.

Note that when using self-hosted Airflow with executors appropriate for production, you will need to configure your own Celery or Kubernetes setup. For more information on available configuration parameters, see the configuration references for the CeleryExecutor and KubernetesExecutor.
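Assuming a standard airflow.cfg, the setting looks like this (the executor shown here is just an example; pick whichever executor fits your deployment):

```ini
# airflow.cfg
[core]
executor = CeleryExecutor
```

Equivalently, you can set the environment variable `AIRFLOW__CORE__EXECUTOR=CeleryExecutor`, which takes precedence over the value in airflow.cfg.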

Run multiple executors concurrently

It is possible to use multiple executors in the same open-source Airflow environment. This feature was first introduced in Airflow 2.10 on an experimental basis. It allows users running open-source Airflow environments to specify executors at the task level using a task parameter. For more information see the Airflow documentation on using multiple executors concurrently. Astro users running tasks that have significantly different resource needs can use the worker queues feature.

When using multiple executors, you need to provide the relevant classes to the core.executor Airflow config variable as a comma-separated string. You can provide short names for your executor classes after a : character:

[core]
executor = CeleryExecutor,KubernetesExecutor,my_custom_package.MyCustomExecutor:MyExecutor

The first executor in the list is the default executor. To assign a specific task to another executor from the list, set its executor parameter to the class name or short name of the executor.

from airflow.decorators import task

@task(executor="MyExecutor")
def my_task_with_custom_execution():
    print("Hi! :)")

my_task_with_custom_execution()

You can also override the default executor for all tasks in one DAG using the default_args DAG parameter. Note that this feature is not yet supported by Astro Runtime.
