Configure task-level Pod resources

Astro automatically allocates resources to Pods created by the KubernetesPodOperator. Unless otherwise specified in your task-level configuration, the amount of resources your task Pod can use is defined by your default Pod resource configuration. To optimize your resource usage, Astronomer recommends specifying compute resource requests and limits for each task.

Setup

1. Define container resources

Define a kubernetes.client.models.V1ResourceRequirements object and pass it to the container_resources argument of the KubernetesPodOperator.

The following code example ensures that when this DAG runs, it launches a Kubernetes Pod with exactly 800m of CPU and 3Gi of memory, as long as that infrastructure is available in your Deployment. After the task finishes, the Pod terminates gracefully.

from airflow.configuration import conf
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator
from kubernetes.client import models as k8s

compute_resources = k8s.V1ResourceRequirements(
    limits={"cpu": "800m", "memory": "3Gi"},
    requests={"cpu": "800m", "memory": "3Gi"},
)

namespace = conf.get("kubernetes", "NAMESPACE")

KubernetesPodOperator(
    namespace=namespace,
    image="<your-docker-image>",
    cmds=["<commands-for-image>"],
    arguments=["<arguments-for-image>"],
    labels={"<pod-label>": "<label-name>"},
    name="<pod-name>",
    container_resources=compute_resources,
    task_id="<task-name>",
    get_logs=True,
    in_cluster=True,
)

For Astro Hosted environments, if you set a resource request lower than its corresponding limit, Astro automatically raises the request to match the limit you set. This means you might consume more resources than you expect if you set the limit much higher than the request you need. Check your Billing and usage to view your resource use and associated charges.
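As a minimal sketch of that Astro Hosted behavior, the hypothetical helper below (effective_hosted_request is not an Astro or Airflow API, just an illustration) shows how a request lower than its limit is effectively raised to the limit, which is what you are billed for:

```python
def effective_hosted_request(requests: dict, limits: dict) -> dict:
    """Model Astro Hosted behavior: a request lower than its limit
    is raised to the limit, so the limit determines usage and billing.
    Hypothetical helper for illustration only."""
    return {
        resource: limits.get(resource, requested)
        for resource, requested in requests.items()
    }

# You declare a small request but a much larger limit...
requested = effective_hosted_request(
    requests={"cpu": "100m", "memory": "512Mi"},
    limits={"cpu": "800m", "memory": "3Gi"},
)
# ...and the Pod is effectively requested (and billed) at the limit values.
print(requested)  # {'cpu': '800m', 'memory': '3Gi'}
```

To avoid surprise charges on Astro Hosted, keep your limits close to the requests your task actually needs.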