Set up Remote Execution Agents on Astro

Remote Execution is a feature in Airflow 3 that allows you to run your Airflow tasks on any machine, in the cloud or on-premises. When using Remote Execution, only the information that’s essential for running the task, such as scheduling details and heartbeat pings, is available to Airflow system components. Everything else stays within the remote environment, making this a key feature in highly regulated industries.

This tutorial covers when to use Remote Execution and how to set it up on Astro with Remote Execution Agents running on AWS EKS or on-premises infrastructure. While this guide focuses on these specific environments, the concepts and steps can be adapted for other Kubernetes clusters, for example running on GCP or Azure.

Remote execution on Astro is only available for Airflow 3.x Deployments on the Enterprise tier or above. See Astro Plans and Pricing.

When to use remote execution

You might want to use Remote Execution in the following situations:

  • Running tasks that access sensitive data that cannot leave a particular environment, such as an on-premises server. This requirement is common in highly regulated industries like financial services and health care.
  • Running tasks that require specialized compute, such as a GPU or TPU machine to train neural networks.

You can accomplish Remote Execution in two ways:

This tutorial covers the steps for setting up Remote Execution Agents on Astro to run on AWS EKS and on-premises.

Time to complete

This tutorial takes approximately one hour to complete.

Assumed knowledge

To get the most out of this tutorial, you should have an understanding of:

Prerequisites

Step 1: Create a Remote Execution Deployment

To start registering Remote Execution Agents, you first need to create a dedicated Remote Execution Deployment on Astro.

  1. Make sure you have a dedicated cluster in your Astro Workspace. If you don’t, you can create a new dedicated cluster. When creating a new cluster, you can leave the VPC Subnet range at its default setting (172.20.0.0/19) or customize it for your needs. Note that it can take up to an hour for a new cluster to be provisioned. If you later want to use a customer-managed workload identity to read logs from Remote Execution Agents running on AWS EKS, you need to create your dedicated cluster on AWS.

  2. Create a Remote Execution Deployment in your Astro Workspace.

    • Select Remote Execution as the execution mode.
    • Select your dedicated cluster.

    Create a Remote Execution Deployment

Step 2: Create an Agent Token

Your Remote Execution Agents will need to authenticate themselves to your Astro Deployment. To do this, you need to create an Agent Token.

  1. In the Astro UI, select the Remote Execution Deployment you created in the previous step and click on the Remote Agents tab.

  2. Select Tokens.

  3. Click on +Agent Token and create a new Agent Token.

    Create an Agent Token

  4. Make sure to save the Agent Token in a secure location as you will need it later.

Step 3: Create a Deployment API Token

Your Remote Execution Agents also need to pull the correct Agent images from the Astronomer image registry for your Deployment. To authenticate to the registry, you need to create a Deployment API Token.

  1. In the Astro UI, select the Remote Execution Deployment you created in Step 1 and click on the Access tab.

  2. Select API Tokens.

  3. Click on + API Token.

  4. Select Add Deployment API Token and create a new Deployment API Token with Admin permissions.

    Create a Deployment API Token

  5. Make sure to save the Deployment API Token in a secure location as you will need it later.

Step 4: Retrieve your values.yaml file

  1. In the Astro UI, select the Remote Execution Deployment you created in Step 1 and click on the Remote Agents tab.

  2. Click on Register a Remote Agent.

    Register a Remote Agent

  3. Download the values.yaml file you are given.

Note that no Remote Execution Agents show up in the list yet; they only appear in the Remote Agents tab once they start heartbeating!

Step 5A: Set up your Kubernetes cluster on EKS

This step covers the setup for deploying the Remote Execution Agent on AWS EKS. For a simple on-premises setup, see Step 5B.

  1. Authenticate your machine to your AWS account. If your organization uses SSO, use aws configure sso and log in via the browser. Make sure to set the AWS_PROFILE environment variable to the CLI profile name you logged in with by running export AWS_PROFILE=<your-profile-name>. You can verify the active profile by running aws sts get-caller-identity.
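
    For reference, the full sequence of commands looks like this:

    $ aws configure sso
    $ export AWS_PROFILE=<your-profile-name>
    $ aws sts get-caller-identity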

  2. To create a new EKS cluster, you need to define its parameters in a my-cluster.yaml file. Make sure the workers node group is large enough to support your intended workload and the Agent specifications in your values.yaml file for all three Agents. You can use the example below as a starting point; make sure to update <your-cluster-name> and <your-region> with your own values.

    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig

    metadata:
      name: <your-cluster-name>
      region: <your-region> # it is recommended to use the same region as your Astro Cluster
      version: "1.33"

    cloudWatch:
      clusterLogging:
        enableTypes: ["api", "audit", "authenticator", "controllerManager", "scheduler"]

    iam:
      withOIDC: true # This setting is important for the IRSA role that will interact with S3 to save logs/xcom

    nodeGroups:
      - name: workers
        instanceType: m5.xlarge # 4 vCPUs, 16 GiB RAM - minimum for 3x1CPU + k8s overhead
        desiredCapacity: 2 # Number of nodes to start with
        minSize: 0 # Minimum number of nodes
        maxSize: 4 # Maximum number of nodes
        volumeSize: 50 # EBS volume size in GB
        amiFamily: AmazonLinux2023
        labels: { role: worker }
        tags:
          k8s.io/cluster-autoscaler/enabled: "true"
          k8s.io/cluster-autoscaler/remote-execution-airflow-cluster: "owned"
  3. Create the EKS cluster by running the following command. Note that cluster creation can take 15-25 minutes.

    $ eksctl create cluster -f my-cluster.yaml
  4. Configure kubectl to use your new EKS cluster by running the following command. Replace <your-cluster-name> with the name of your cluster.

    $ aws eks update-kubeconfig --name <your-cluster-name>
  5. Verify that kubectl is aimed at the right cluster by running:

    $ kubectl get nodes

    The output should look similar to this:

    NAME                           STATUS   ROLES    AGE   VERSION
    ip-123-45-67-89.ec2.internal   Ready    <none>   16m   v1.33.4-eks-99d6cc0
    ip-123-45-67-90.ec2.internal   Ready    <none>   16m   v1.33.4-eks-99d6cc0

Step 5B: Set up your local Kubernetes cluster

Alternatively, you can deploy the Remote Execution Agent on your on-premises cluster. If you want to test Remote Execution locally, a good option is to use the Kubernetes feature of OrbStack or Docker Desktop. This step uses OrbStack as an example.

  1. Enable the Kubernetes feature in OrbStack.

    Enable Kubernetes in OrbStack

  2. Switch to the orbstack context:

    $ kubectl config use-context orbstack
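
    To confirm that kubectl now points at the local cluster, you can list the nodes; you should see a single local node (the exact node name depends on your local Kubernetes distribution):

    $ kubectl get nodes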

Step 6: Deploy the Remote Execution Agent

  1. Create a new namespace for the Remote Execution Agent by running:

    $ kubectl create namespace <your-namespace>
  2. Create a secret containing the Agent Token named my-agent-token by running the following command. Replace <your-agent-token> with the Agent Token you created in Step 2. Replace <your-namespace> with the namespace you created.

    $ kubectl create secret generic my-agent-token \
        --from-literal=token=<your-agent-token> \
        --namespace <your-namespace>
  3. Create a secret containing the Deployment API Token named my-astro-registry-secret by running the following command. Replace <your-deployment-api-token> with the Deployment API Token you created in Step 3 and replace <your-namespace> with your namespace.

    $ kubectl create secret docker-registry my-astro-registry-secret \
        --namespace <your-namespace> \
        --docker-server=images.astronomer.cloud \
        --docker-username=cli \
        --docker-password=<your-deployment-api-token>
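
    Optionally, you can confirm that both secrets exist in the namespace:

    $ kubectl get secrets -n <your-namespace>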
  4. Modify your values.yaml file to add <your-namespace>, as well as the names of the Kubernetes secrets for your Agent Token (agentTokenSecretName) and Deployment API Token (imagePullSecretName).

    resourceNamePrefix: "astro-agent" # you can choose any prefix you want
    namespace: <your-namespace>
    imagePullSecretName: my-astro-registry-secret
    agentTokenSecretName: my-agent-token
  5. Modify your values.yaml file to add your Dag bundle configuration to the dagBundleConfigList section.

    dagBundleConfigList: <your-dag-bundle-config>

    Note that you need to store your Dags in a Dag bundle accessible to your Remote Execution Agents. Below is an example of a GitDagBundle configuration working with a Git connection named git_default (set in the commonEnv section later in this tutorial).

    dagBundleConfigList: '[{"name": "gitbundle-1", "classpath": "airflow.providers.git.bundles.git.GitDagBundle", "kwargs": {"git_conn_id": "git_default", "subdir": "dags", "tracking_ref": "main", "refresh_interval": 10}}]'
  6. Modify your values.yaml file to add your XCom backend configuration to the xcomBackend section. For this tutorial we’ll use the Object Storage XCom Backend. The credentials are set in the commonEnv section later in this tutorial.

    xcomBackend: "airflow.providers.common.io.xcom.backend.XComObjectStorageBackend"
  7. Modify your values.yaml file to set a secrets backend. For now, we’ll use the Local Filesystem Secrets Backend as a placeholder. Note that if you want to use an external secrets backend, you need to provide the relevant provider packages to the worker containers and the credentials in commonEnv. For more information on how to interact with secrets backends, see Configure a secrets backend.

    secretBackend: "airflow.secrets.local_filesystem.LocalFilesystemBackend"
  8. Modify your values.yaml file to add necessary environment variables to the commonEnv section. Make sure to replace all placeholders with your own values.

    commonEnv:
      - name: ASTRONOMER_ENVIRONMENT
        value: "cloud"

      # This is the connection used in the GitDagBundle. If you want to access a private repo you need an access token with read and write permissions.
      - name: AIRFLOW_CONN_GIT_DEFAULT
        value: '{"conn_type": "git", "login": "<your GH login>", "password": "<access_token>", "host": "https://github.com/<account>/<repo>"}'

      # Update with your credentials that have access to your XCom S3 bucket!
      - name: AIRFLOW_CONN_AWS_DEFAULT
        value: '{"conn_type": "aws", "login": "<your-access-key>", "password": "<your-secret-key>", "extra": {"region_name": "<your-region>"}}'

      # These two environment variables are needed for the custom XCom backend
      - name: AIRFLOW__COMMON_IO__XCOM_OBJECTSTORAGE_PATH
        value: "s3://aws_default@<your-bucket>/xcom" # replace the bucket with your XCom bucket. Uses the aws_default connection
      - name: AIRFLOW__COMMON_IO__XCOM_OBJECTSTORAGE_THRESHOLD
        value: "0" # all XCom will be stored in Object storage

      # Add any necessary environment variables for your secrets backend
  9. Install the Helm chart by running the following command. Replace <your-namespace> with your namespace.

    $ helm repo add astronomer https://helm.astronomer.io/
    $ helm repo update
    $ helm install astro-agent astronomer/astro-remote-execution-agent --namespace <your-namespace> --values values.yaml
  10. Verify that the 3 Remote Execution Agent pods are running by running the following command. Replace <your-namespace> with your namespace.

    $ kubectl get pods -n <your-namespace>

    The output should look similar to this:

    NAME                                                  READY   STATUS    RESTARTS   AGE
    astro-agent--dag-processor-7b46c75566-dsdlq           1/1     Running   0          87s
    astro-agent--triggerer-6cb88c8db7-kx9d2               1/1     Running   0          87s
    astro-agent--worker-default-worker-779c98cfb5-7chg2   1/1     Running   0          86s

In the Remote Agents tab of the Astro UI, you can see the 3 Remote Execution Agents heartbeating to your Astro Deployment. When you open the Airflow UI on this Astro Deployment, you’ll be able to see and interact with all Dags contained in the configured Dag bundles.

Remote Execution Agent pods heartbeating

You can now run tasks on the remote cluster! To use XCom, see Step 7 for more information.

If you ever need to update the Helm chart configuration, you can use the following command. Replace <your-namespace> with your namespace.

$ helm upgrade astro-agent astronomer/astro-remote-execution-agent --namespace <your-namespace> --values values.yaml

Step 7: Configure XCom

If you want to use XCom to pass information between tasks running with Remote Execution, you need to configure a custom XCom backend. You already laid the foundation for this in Step 6 when setting the following:

xcomBackend: "airflow.providers.common.io.xcom.backend.XComObjectStorageBackend"
commonEnv:
  # ...
  - name: AIRFLOW_CONN_AWS_DEFAULT
    value: '{"conn_type": "aws", "extra": {"region_name": "us-east-1"}}'

  - name: AIRFLOW__COMMON_IO__XCOM_OBJECTSTORAGE_PATH
    value: "s3://aws_default@<your bucket>/xcom"
  - name: AIRFLOW__COMMON_IO__XCOM_OBJECTSTORAGE_THRESHOLD
    value: "0"

However, for the worker pods to be able to use the XCom backend, you need to install the necessary Airflow provider packages on them. To make installation faster, we recommend using a constraints file.

  1. Create your constraints.txt file (see GitHub for an example). Make sure that it includes the Airflow Common IO provider and the Amazon provider with the s3fs extra.
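
    As a minimal illustration, assuming the provider versions used later in this tutorial, the relevant lines could look like the following. A complete constraints file, such as the example linked above, typically also pins transitive dependencies like boto3 and s3fs.

    # constraints.txt (illustrative excerpt; match the versions to your Agent image)
    apache-airflow-providers-amazon==9.9.0
    apache-airflow-providers-common-io==1.6.1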

  2. Add the constraints file as a configmap to the k8s cluster. Replace <your-namespace> with the namespace you created in Step 6.

    $ kubectl create configmap constraints-configmap --from-file=constraints.txt -n <your-namespace>
  3. Update your values.yaml file to install the necessary provider packages in the workers section, updating the versions as needed. You also need to set the PYTHONPATH environment variable to include the shared packages directory. Note that your image version likely differs from the one in the example below.

    initContainers:
      - name: install-amazon-provider-s3fs
        image: images.astronomer.cloud/baseimages/astro-remote-execution-agent:3.0-4-python-3.12-astro-agent-1.0.2
        command:
          - "pip"
          - "install"
          - "--target"
          - "/shared/packages"
          - "--prefer-binary"
          - "--constraint"
          - "/constraints/constraints.txt"
          - "apache-airflow-providers-amazon[s3fs]==9.9.0"
          - "apache-airflow-providers-common-io==1.6.1"
        volumeMounts:
          - name: shared-packages
            mountPath: /shared/packages
          - name: constraints
            mountPath: /constraints

    env:
      - name: PYTHONPATH
        value: "/shared/packages:$PYTHONPATH"
  4. Update your values.yaml file to mount the constraints file.

    volumes:
      - name: shared-packages
        emptyDir: {}
      - name: constraints
        configMap:
          name: constraints-configmap

    volumeMounts:
      - name: shared-packages
        mountPath: /shared/packages
        readOnly: true
  5. Update the helm chart by running the following command. Replace <your-namespace> with your namespace.

    $ helm upgrade astro-agent astronomer/astro-remote-execution-agent --namespace <your-namespace> --values values.yaml
  6. Run a Dag that uses XCom to verify the setup. Remember that you need to push the Dag to your Dag bundle location for it to be accessible to the Remote Execution Agent.
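
    For example, a minimal Dag along the following lines, in which the Dag and task names are just placeholders, passes a value between two tasks through the configured XCom backend.

    # xcom_smoke_test.py - a minimal sketch of a Dag that exercises the XCom backend
    from airflow.sdk import dag, task


    @dag(schedule=None)
    def xcom_smoke_test():
        @task
        def push_value() -> str:
            # The return value is written to the object storage XCom backend
            return "hello from the remote worker"

        @task
        def pull_value(value: str) -> None:
            # The value is read back from the XCom backend when this task runs
            print(f"Received: {value}")

        pull_value(push_value())


    xcom_smoke_test()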

If you’d like to see your task logs displayed in the Airflow UI, see our docs on Task logging for remote Deployments.

Step 8: (optional, AWS only) Use a secrets backend

If you want to use a secrets backend to store your connections and variables, you need to configure the Remote Execution Agent to use it.

  1. First, you need an IAM role that the Remote Execution Agent service accounts can assume. The role’s trust policy needs to include the EKS OIDC ID, so fetch that first. Replace <YOUR_EKS_CLUSTER_NAME> with the name of your EKS cluster and <YOUR_AWS_REGION> with the region of your EKS cluster.

    $ OIDC_ISSUER_URL=$(aws eks describe-cluster --name <YOUR_EKS_CLUSTER_NAME> --query "cluster.identity.oidc.issuer" --output text)
    $ EKS_OIDC_ID=$(echo "$OIDC_ISSUER_URL" | sed -e 's|https://oidc.eks.<YOUR_AWS_REGION>.amazonaws.com/id/||')
    $ echo $EKS_OIDC_ID
  2. Create a new file called my-airflow-trust-policy.json and add the following trust policy. Replace <your-account-id> with your AWS account ID, <your-region> with the region of your EKS cluster, <your-namespace> with the namespace you created in Step 6, and <your-cluster-oidc-id> with the EKS OIDC ID you fetched in the previous substep. Note that the wildcard in the service account name only matches when used with the StringLike condition operator.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Federated": "arn:aws:iam::<your-account-id>:oidc-provider/oidc.eks.<your-region>.amazonaws.com/id/<your-cluster-oidc-id>"
          },
          "Action": "sts:AssumeRoleWithWebIdentity",
          "Condition": {
            "StringEquals": {
              "oidc.eks.<your-region>.amazonaws.com/id/<your-cluster-oidc-id>:aud": "sts.amazonaws.com"
            },
            "StringLike": {
              "oidc.eks.<your-region>.amazonaws.com/id/<your-cluster-oidc-id>:sub": "system:serviceaccount:<your-namespace>:*"
            }
          }
        }
      ]
    }
  3. Create a new IAM role called RemoteAgentsRole with the trust policy you created in the previous step.

    $ aws iam create-role \
        --role-name RemoteAgentsRole \
        --assume-role-policy-document file://my-airflow-trust-policy.json
  4. Create a new file called my-airflow-secrets-policy.json and add the following policy. Replace <your-region> with the region of your EKS cluster and <your-account-id> with your AWS account ID.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "secretsmanager:GetSecretValue",
            "secretsmanager:DescribeSecret",
            "secretsmanager:ListSecrets"
          ],
          "Resource": "arn:aws:secretsmanager:<your-region>:<your-account-id>:secret:airflow/*"
        }
      ]
    }

    Create the policy using the following command.

    $ aws iam create-policy \
        --policy-name AirflowSecretsManagerAccess \
        --policy-document file://my-airflow-secrets-policy.json
  5. Attach the AirflowSecretsManagerAccess policy to the RemoteAgentsRole role. Replace <your-account-id> with your AWS account ID.

    $ aws iam attach-role-policy \
        --role-name RemoteAgentsRole \
        --policy-arn arn:aws:iam::<your-account-id>:policy/AirflowSecretsManagerAccess
  6. Update the serviceAccount section in your values.yaml file to annotate your service accounts with the role ARN. Replace <your-account-id> with your AWS account ID.

    serviceAccount:
      workers:
        annotations:
          eks.amazonaws.com/role-arn: arn:aws:iam::<your-account-id>:role/RemoteAgentsRole

      dagProcessor:
        annotations:
          eks.amazonaws.com/role-arn: arn:aws:iam::<your-account-id>:role/RemoteAgentsRole

      triggerer:
        annotations:
          eks.amazonaws.com/role-arn: arn:aws:iam::<your-account-id>:role/RemoteAgentsRole
  7. Update your values.yaml file to configure the secrets backend by setting secretBackend and adding the backend configuration to the commonEnv section. Replace <your-region> with the region of your EKS cluster.

    secretBackend: "airflow.providers.amazon.aws.secrets.secrets_manager.SecretsManagerBackend"

    commonEnv:
      - name: AIRFLOW__SECRETS__BACKEND_KWARGS
        value: '{"connections_prefix": "airflow/connections", "variables_prefix": "airflow/variables"}'
      - name: AWS_DEFAULT_REGION
        value: '<your-region>'
  8. Since the secrets backend is part of the Airflow Amazon provider and is also used by the Dag processor and triggerer components, you need to install the necessary provider packages on these components as well, just like you did for the worker pods when configuring the XCom backend in Step 7. Note that your image version likely differs from the one in the example below.

    dagProcessor:
      # ... other dagProcessor config ...

      # Add PYTHONPATH to dagProcessor env
      env:
        - name: PYTHONPATH
          value: "/shared/packages:$PYTHONPATH"

      # Add initContainers (replace initContainers: [])
      initContainers:
        - name: install-amazon-provider-s3fs
          image: images.astronomer.cloud/baseimages/astro-remote-execution-agent:3.0-4-python-3.12-astro-agent-1.0.2
          command:
            - "pip"
            - "install"
            - "--target"
            - "/shared/packages"
            - "--prefer-binary"
            - "--constraint"
            - "/constraints/constraints.txt"
            - "apache-airflow-providers-amazon[s3fs]==9.9.0"
            - "apache-airflow-providers-common-io==1.6.1"
          volumeMounts:
            - name: shared-packages
              mountPath: "/shared/packages"
            - name: constraints
              mountPath: "/constraints"

      # Add volumes (replace volumes: [])
      volumes:
        - name: shared-packages
          emptyDir: {}
        - name: constraints
          configMap:
            name: constraints-configmap

      # Add volumeMounts (replace volumeMounts: [])
      volumeMounts:
        - name: shared-packages
          mountPath: /shared/packages

    # In values.yaml, under the triggerer section
    triggerer:
      # ... other triggerer config ...

      # Add PYTHONPATH to triggerer env
      env:
        - name: PYTHONPATH
          value: "/shared/packages:$PYTHONPATH"

      # Add initContainers (replace initContainers: [])
      initContainers:
        - name: install-amazon-provider-s3fs
          image: images.astronomer.cloud/baseimages/astro-remote-execution-agent:3.0-4-python-3.12-astro-agent-1.0.2
          command:
            - "pip"
            - "install"
            - "--target"
            - "/shared/packages"
            - "--prefer-binary"
            - "--constraint"
            - "/constraints/constraints.txt"
            - "apache-airflow-providers-amazon[s3fs]==9.9.0"
            - "apache-airflow-providers-common-io==1.6.1"
          volumeMounts:
            - name: shared-packages
              mountPath: "/shared/packages"
            - name: constraints
              mountPath: "/constraints"

      # Add volumes (replace volumes: [])
      volumes:
        - name: shared-packages
          emptyDir: {}
        - name: constraints
          configMap:
            name: constraints-configmap

      # Add volumeMounts (replace volumeMounts: [])
      volumeMounts:
        - name: shared-packages
          mountPath: /shared/packages
  9. Update the helm chart with the new values.yaml file.

    $ helm upgrade astro-agent astronomer/astro-remote-execution-agent --namespace <your-namespace> --values values.yaml
  10. Now your tasks have access to the secrets backend! You can store connections under airflow/connections and variables under airflow/variables.
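
    For example, assuming the prefixes configured above, you could create a variable and a connection with the AWS CLI (the secret names shown here are placeholders):

    $ aws secretsmanager create-secret \
        --name airflow/variables/my_variable \
        --secret-string "my_value"
    $ aws secretsmanager create-secret \
        --name airflow/connections/my_postgres \
        --secret-string '{"conn_type": "postgres", "host": "<your-host>", "login": "<your-user>", "password": "<your-password>", "port": 5432}'

    In your Dags, these are then available as the Airflow variable my_variable and the connection ID my_postgres.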

Step 9: (optional, AWS only) Configure logs in the Airflow UI

When using Remote Execution with a Deployment running on AWS and the Remote Execution Agent running on AWS, you can configure your task logs to be read from an S3 bucket using a customer-managed workload identity.

  1. Create a new IAM policy called AirflowS3Access using the policy document below. Replace <your-logging-bucket> with the name of your logging bucket. Make sure to record the policy ARN arn:aws:iam::<your-account-id>:policy/AirflowS3Access from the output of the command.

    $ aws iam create-policy \
        --policy-name AirflowS3Access \
        --policy-document file://my-airflow-s3-policy.json

    This is the policy you need to create in the my-airflow-s3-policy.json file.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": [
            "s3:ListBucket"
          ],
          "Resource": [
            "arn:aws:s3:::<your-logging-bucket>"
          ],
          "Effect": "Allow",
          "Sid": "ListObjectsInBucket"
        },
        {
          "Action": [
            "s3:GetObject",
            "s3:PutObject",
            "s3:DeleteObject"
          ],
          "Resource": [
            "arn:aws:s3:::<your-logging-bucket>/*"
          ],
          "Effect": "Allow",
          "Sid": "AllObjectActions"
        },
        {
          "Sid": "AssumeRole",
          "Effect": "Allow",
          "Action": "sts:AssumeRole",
          "Resource": "*"
        }
      ]
    }
  2. Attach the AirflowS3Access policy to the RemoteAgentsRole role you created and added to the service account annotations in Step 8. Replace <your-account-id> with your AWS account ID.

    $ aws iam attach-role-policy \
        --role-name RemoteAgentsRole \
        --policy-arn arn:aws:iam::<your-account-id>:policy/AirflowS3Access
  3. Update the commonEnv section in your values.yaml file to configure the logs to be written to S3. Replace <your-logging-bucket> with the name of your logging bucket and <your-deployment-id> with the ID of your deployment.

    commonEnv:
      # ...
      - name: AIRFLOW__LOGGING__REMOTE_LOGGING
        value: "True"
      - name: AIRFLOW__LOGGING__REMOTE_LOG_CONN_ID
        value: "astro_aws_logging"
      - name: AIRFLOW_CONN_ASTRO_AWS_LOGGING
        value: "s3://" # means the credentials are fetched from IRSA
      - name: AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER
        value: "s3://<your-logging-bucket>/<your-deployment-id>"
      - name: AIRFLOW__LOGGING__LOGGING_CONFIG_CLASS
        value: "astronomer.runtime.logging.logging_config"
      - name: ASTRONOMER_ENVIRONMENT
        value: "cloud"
  4. Update the helm chart with the new values.yaml file. Upon the next Dag run you should be able to see the logs in your S3 bucket.
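
    For example, you can confirm that new log objects are being written by listing the log prefix (using the same placeholders as above):

    $ aws s3 ls s3://<your-logging-bucket>/<your-deployment-id>/ --recursive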

  5. To see the logs in the Airflow UI, you need to configure the Astro Deployment to use the S3 bucket for task logs. In the Astro UI, navigate to your Deployment and click the Details tab. Click Edit in the Advanced section.

    Astro UI showing where to configure the task logs.

    Select Bucket Storage in the Task Logs field and add the Bucket URL as s3://<your-logging-bucket>/<your-deployment-id>. Select Customer Managed Identity in the Workload Identity for Bucket Storage field and use your RemoteAgentsRole IAM role ARN for the Workload Identity ARN before running the provided bash script.

    Astro UI showing the task logs configuration.

  6. Now you should be able to see the task logs in the Airflow UI.