Forward logs to Amazon S3

APC uses Vector for log collection and forwarding. You can configure Vector to send Airflow task logs to Amazon S3 for long-term storage, compliance, or integration with other analytics tools.

If you previously configured S3 log forwarding using Fluentd in APC 0.37 or earlier, you must replace your fluentd.s3 configuration with the Vector extraSinks configuration described in this document. Fluentd is no longer used for log collection in APC 1.0.

Architecture

Vector continues forwarding logs to Elasticsearch for the Airflow UI while also sending copies to S3.
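
At a high level, the log path looks like this (a simplified sketch):

Airflow task logs ──▶ Vector ──┬──▶ Elasticsearch (Airflow UI)
                               └──▶ Amazon S3 (extraSinks)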

The logs forwarded to S3 are Airflow task logs and deployment logs, not APC platform logs from Houston, Commander, or Registry.

Prerequisites

  • An existing S3 bucket
  • AWS IAM credentials with S3 write access
  • APC 1.0 or later

Step 1: Configure AWS IAM

Create IAM policy

Create an IAM policy with S3 write permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::your-logs-bucket"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::your-logs-bucket/*"
    }
  ]
}
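
If you manage IAM from the command line, you can create this policy with the AWS CLI, assuming the JSON above is saved as policy.json. The policy name vector-s3-logs is illustrative:

$ aws iam create-policy \
>   --policy-name vector-s3-logs \
>   --policy-document file://policy.json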

For more information on S3 permissions, see Amazon S3 actions.

Provide credentials to Vector
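
Vector's AWS components resolve credentials through the standard AWS chain: the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, a shared credentials file, or an attached IAM role (for example, IAM Roles for Service Accounts on EKS). One approach is to store the keys for an IAM user with the policy above in a Kubernetes Secret and expose them to the Vector pods as environment variables. This is a minimal sketch; the secret name is illustrative, and how you mount it into the Vector pods depends on your installation:

$ kubectl create secret generic vector-aws-credentials \
>   --namespace astronomer \
>   --from-literal=AWS_ACCESS_KEY_ID=<your-access-key-id> \
>   --from-literal=AWS_SECRET_ACCESS_KEY=<your-secret-access-key>

Alternatively, the Vector aws_s3 sink accepts credentials inline through its auth.access_key_id and auth.secret_access_key options; see the Vector aws_s3 sink configuration reference for details.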


Step 2: Configure the Vector S3 sink

Add the S3 sink to your values.yaml:

vector:
  extraSinks:
    s3_logs:
      type: aws_s3
      inputs:
        - transform_remove_fields
      bucket: "your-logs-bucket"
      region: "us-east-1"
      key_prefix: "airflow-logs/{{ namespace }}/{{ release }}/%Y/%m/%d/"
      compression: gzip
      encoding:
        codec: json
      batch:
        max_bytes: 10485760
        timeout_secs: 300
      request:
        retry_attempts: 5

Configuration options

For a full list of available options, see the Vector aws_s3 sink configuration reference.

| Option | Description | Example |
| --- | --- | --- |
| bucket | S3 bucket name | my-logs-bucket |
| region | AWS region | us-east-1 |
| key_prefix | S3 object key prefix with templating | logs/%Y/%m/%d/ |
| compression | Compression algorithm | gzip, zstd, none |
| encoding.codec | Output format | json, text, ndjson |
| batch.max_bytes | Maximum batch size in bytes before flush | 10485760 (10 MB) |
| batch.timeout_secs | Maximum time in seconds before flush | 300 (5 minutes) |

Key prefix templating

Use template variables in key_prefix:

| Variable | Description |
| --- | --- |
| {{ namespace }} | Kubernetes namespace |
| {{ release }} | Deployment release name |
| %Y, %m, %d | Date components (year, month, day) |
| %H, %M, %S | Time components (hour, minute, second) |

Example: airflow-logs/{{ namespace }}/%Y/%m/%d/%H/
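
For example, with the prefix above, a batch flushed at 18:00 UTC on April 16, 2026 for a Deployment in a hypothetical astronomer-etl namespace would be written under airflow-logs/astronomer-etl/2026/04/16/18/.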


Step 3: Apply the configuration

Push the configuration to your APC installation. For detailed instructions, see Apply a config change.

$ helm upgrade astronomer astronomer/astronomer \
> -f values.yaml \
> --namespace astronomer

Verify Vector pods restart with the new configuration:

$ kubectl rollout status daemonset/astronomer-vector -n astronomer

Step 4: Verify log delivery

Check Vector logs

$ kubectl logs -n astronomer -l app=vector --tail=100 | grep -i s3

List S3 objects

$ aws s3 ls s3://your-logs-bucket/airflow-logs/ --recursive | head -20

Read a log file

$ aws s3 cp s3://your-logs-bucket/airflow-logs/path/to/file.json.gz - | gunzip | head -5

Advanced configuration

Filter logs by severity

To forward only ERROR and WARNING logs to S3, use a VRL filter condition:

vector:
  extraTransforms:
    filter_errors:
      type: filter
      inputs:
        - transform_remove_fields
      condition:
        type: vrl
        source: '.level == "ERROR" || .level == "WARNING"'

  extraSinks:
    s3_errors:
      type: aws_s3
      inputs:
        - filter_errors
      bucket: "your-logs-bucket"
      # ... rest of config
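
Note that events without a level field fail the condition and are dropped from this sink. They still reach any other sinks that read from transform_remove_fields directly.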

Partition by deployment

Organize logs by deployment namespace:

vector:
  extraSinks:
    s3_logs:
      type: aws_s3
      inputs:
        - transform_remove_fields
      bucket: "your-logs-bucket"
      key_prefix: "deployments/{{ namespace }}/{{ pod }}/%Y/%m/%d/"
      # ... rest of config

Multiple destinations

Forward to both S3 and another system:

vector:
  extraSinks:
    s3_archive:
      type: aws_s3
      inputs:
        - transform_remove_fields
      bucket: "archive-bucket"
      # ... config

    splunk_realtime:
      type: splunk_hec
      inputs:
        - transform_remove_fields
      endpoint: "https://splunk.example.com:8088"
      token: "${SPLUNK_TOKEN}"
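
${SPLUNK_TOKEN} is interpolated from the Vector process environment when the configuration loads, so the token must be exposed to the Vector pods, for example from a Kubernetes Secret, in the same way as the AWS credentials above.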

S3 lifecycle policies

Configure S3 lifecycle rules to manage log retention:

{
  "Rules": [
    {
      "ID": "ArchiveOldLogs",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "airflow-logs/"
      },
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "STANDARD_IA"
        },
        {
          "Days": 90,
          "StorageClass": "GLACIER"
        }
      ],
      "Expiration": {
        "Days": 365
      }
    }
  ]
}

Apply via AWS CLI:

$ aws s3api put-bucket-lifecycle-configuration \
> --bucket your-logs-bucket \
> --lifecycle-configuration file://lifecycle.json
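
To confirm that the rules are active, read the configuration back:

$ aws s3api get-bucket-lifecycle-configuration --bucket your-logs-bucket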

Troubleshooting

Logs not appearing in S3

  1. Check Vector pod logs:

    $ kubectl logs -n astronomer -l app=vector | grep -i error
  2. Verify AWS credentials:

    $ kubectl exec -n astronomer -it ds/astronomer-vector -c vector -- \
    > sh -c 'echo $AWS_ACCESS_KEY_ID'
  3. Inspect the logs for credential errors or permission issues.

    Look for lines containing CredentialsNotLoaded (no credentials found) or Invalid credentials (credentials rejected by AWS). For example:

    2026-04-16T18:27:48.827213Z ERROR vector::topology::builder: msg="Healthcheck failed." error=Invalid credentials component_kind="sink" component_type="aws_s3" component_id=s3_logs

    To see which credentials Vector loaded, look for lines matching aws_config::profile::credentials:

    2026-04-16T18:27:48.247566Z INFO aws_config::profile::credentials: constructed abstract provider from config file chain=ProfileChain { base: AccessKey(Credentials { provider_name: "ProfileFile", access_key_id: "AKIA5WLLPVSPD7JDVSXF", secret_access_key: "** redacted **", expires_after: "never" }), chain: [] }

    These lines show the access key ID in use, which can help confirm whether the correct credentials are being picked up.

Permission denied errors

Verify your IAM policy includes both s3:PutObject and s3:ListBucket permissions. The bucket resource ARN should not include /* for ListBucket.
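
To test write access outside the cluster with the same credentials, you can attempt a small upload with the AWS CLI (the object key here is arbitrary):

$ echo test | aws s3 cp - s3://your-logs-bucket/airflow-logs/permissions-test.txt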

High latency

Adjust batch settings for faster delivery. Smaller batches flush sooner but generate more S3 PUT requests, which can increase request costs:

vector:
  extraSinks:
    s3_logs:
      batch:
        max_bytes: 5242880   # 5 MB
        timeout_secs: 60     # 1 minute