Set up APC audit log shipping

This document explains how to enable the Astro Private Cloud (APC) audit log sidecar and ship events to one supported sink. For background on what the feature does and which configurations are supported, see APC audit logging overview.

Exactly one sink can be enabled per installation for this release. Enabling the sidecar with zero or more than one sink causes a Helm validation error.
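
For example, with the sink keys used in the examples later in this guide (shown here as a fragment, without the enclosing astronomer.houston.logging keys), a valid override enables exactly one sink:

loggingSidecar:
  enabled: true
  cloudwatch:
    enabled: true       # exactly one of cloudwatch, gcpCloudLogging, elasticsearch
  gcpCloudLogging:
    enabled: false
  elasticsearch:
    enabled: false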

Prerequisites

  • An Astro Private Cloud 2.x installation that you can upgrade with Helm.
  • Access to the Helm values file used by the installation.
  • kubectl configured against the target cluster.
  • Permissions to create or modify cloud resources for the sink you choose:
    • CloudWatch on EKS: permissions to create IAM policies and roles, and a CloudWatch log group.
    • GCP Cloud Logging on GKE: permissions to create Google service accounts and grant IAM bindings in the target project.
    • Elasticsearch: a reachable endpoint and, if required, credentials and a CA certificate.

Choose a sink

Use the sink that matches where the Astro Private Cloud control plane runs:

  • AWS CloudWatch Logs: the control plane runs on Amazon EKS.
  • GCP Cloud Logging: the control plane runs on Google Kubernetes Engine (GKE).
  • Elasticsearch: the control plane runs on Amazon EKS, GKE, or Azure Kubernetes Service (AKS).

AWS CloudWatch Logs sink

Use this sink when the Astro Private Cloud control plane runs on Amazon EKS. The recommended authentication method is IAM Roles for Service Accounts (IRSA). Static AWS credentials held in a Kubernetes secret are supported as a fallback when IRSA isn't in use on the EKS cluster.

Prerequisites

  • The Astro Private Cloud control plane runs on Amazon EKS.
  • The AWS CLI is installed and authenticated against the target account.
  • For the IRSA path, the EKS cluster has, or can be associated with, an OIDC identity provider.
  • For the static-credentials path, an IAM principal with permission to write to the target CloudWatch log group.

Environment variables

The following variables are referenced throughout this section. Set them to match your installation before running the commands.

export EKS_CLUSTER_NAME="<cluster-name>"
export AWS_REGION="<region>"
export AWS_ACCOUNT_ID="<account-id>"
export K8S_NAMESPACE="astronomer"
export HELM_RELEASE="astronomer"
export K8S_SA="${HELM_RELEASE}-houston-bootstrapper"
export IRSA_ROLE_NAME="HoustonCloudWatchRole"
export CW_LOG_GROUP="/astronomer/houston/audit"

Configure IRSA authentication (recommended)

Step 1: Create the CloudWatch log group

aws logs create-log-group \
  --log-group-name "$CW_LOG_GROUP" \
  --region "$AWS_REGION"

Optionally set a retention policy on the log group:

aws logs put-retention-policy \
  --log-group-name "$CW_LOG_GROUP" \
  --region "$AWS_REGION" \
  --retention-in-days 30
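
To confirm that the group exists and the retention policy took effect, list the group:

aws logs describe-log-groups \
  --log-group-name-prefix "$CW_LOG_GROUP" \
  --region "$AWS_REGION"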

Step 2: Associate the EKS cluster OIDC provider

OIDC_URL=$(aws eks describe-cluster \
  --name "$EKS_CLUSTER_NAME" \
  --region "$AWS_REGION" \
  --query "cluster.identity.oidc.issuer" \
  --output text)
OIDC_ID=${OIDC_URL#https://}

aws iam list-open-id-connect-providers \
  | grep "$(echo "$OIDC_ID" | awk -F/ '{print $NF}')" \
  || eksctl utils associate-iam-oidc-provider \
       --cluster "$EKS_CLUSTER_NAME" \
       --region "$AWS_REGION" \
       --approve

Step 3: Create the IAM policy

cat > /tmp/houston-cloudwatch-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CloudWatchDescribe",
      "Effect": "Allow",
      "Action": [
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams"
      ],
      "Resource": "arn:aws:logs:${AWS_REGION}:${AWS_ACCOUNT_ID}:log-group:*"
    },
    {
      "Sid": "CloudWatchWriteAuditGroup",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:${AWS_REGION}:${AWS_ACCOUNT_ID}:log-group:${CW_LOG_GROUP}",
        "arn:aws:logs:${AWS_REGION}:${AWS_ACCOUNT_ID}:log-group:${CW_LOG_GROUP}:*"
      ]
    }
  ]
}
EOF

aws iam create-policy \
  --policy-name HoustonCloudWatchLogsPolicy \
  --policy-document file:///tmp/houston-cloudwatch-policy.json
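
If you want the policy ARN in a shell variable rather than constructing it by hand in step 4, you can run the same create command with a query:

POLICY_ARN=$(aws iam create-policy \
  --policy-name HoustonCloudWatchLogsPolicy \
  --policy-document file:///tmp/houston-cloudwatch-policy.json \
  --query Policy.Arn \
  --output text)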

Step 4: Create the IRSA role and attach the policy

cat > /tmp/trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ID}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_ID}:sub": "system:serviceaccount:${K8S_NAMESPACE}:${K8S_SA}",
          "${OIDC_ID}:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
EOF

aws iam create-role \
  --role-name "$IRSA_ROLE_NAME" \
  --assume-role-policy-document file:///tmp/trust-policy.json

aws iam attach-role-policy \
  --role-name "$IRSA_ROLE_NAME" \
  --policy-arn "arn:aws:iam::${AWS_ACCOUNT_ID}:policy/HoustonCloudWatchLogsPolicy"

If the IRSA role already exists and you only need to bind it to a new EKS cluster, use aws iam update-assume-role-policy against the existing role instead of aws iam create-role.
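
A minimal sketch of that update, reusing the trust policy file from step 4:

aws iam update-assume-role-policy \
  --role-name "$IRSA_ROLE_NAME" \
  --policy-document file:///tmp/trust-policy.json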

Step 5: Apply the Helm values override

Add the following to the values file used by the installation and run helm upgrade:

astronomer:
  houston:
    serviceAccount:
      annotations:
        eks.amazonaws.com/role-arn: "arn:aws:iam::<AWS_ACCOUNT_ID>:role/HoustonCloudWatchRole"
    logging:
      loggingSidecar:
        enabled: true
        cloudwatch:
          enabled: true
          region: "<AWS_REGION>"
          logGroupName: "/astronomer/houston/audit"
          useIRSA: true
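
This guide doesn't pin a chart reference or values file name; the sketch below assumes the astronomer/astronomer chart and a values.yaml in the current directory, so substitute your installation's own. The same invocation applies wherever this guide says to run helm upgrade:

helm upgrade "$HELM_RELEASE" astronomer/astronomer \
  --namespace "$K8S_NAMESPACE" \
  --values values.yaml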

Configure static AWS credentials (fallback)

Use this configuration on EKS when IRSA isn’t in use.

Step 1: Create the CloudWatch log group

aws logs create-log-group \
  --log-group-name "$CW_LOG_GROUP" \
  --region "$AWS_REGION"

Step 2: Create a Kubernetes secret with AWS credentials

kubectl create secret generic houston-cloudwatch-creds \
  --from-literal=aws_access_key_id="<AWS_ACCESS_KEY_ID>" \
  --from-literal=aws_secret_access_key="<AWS_SECRET_ACCESS_KEY>" \
  -n "$K8S_NAMESPACE"

The IAM principal whose credentials you use must be allowed to write to the target log group. The policy shown in the IRSA section is a suitable template.
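
For example, assuming the credentials belong to an IAM user (the user name is a placeholder), you can attach the policy created in the IRSA section to that user:

aws iam attach-user-policy \
  --user-name "<iam-user-name>" \
  --policy-arn "arn:aws:iam::${AWS_ACCOUNT_ID}:policy/HoustonCloudWatchLogsPolicy"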

Step 3: Apply the Helm values override

Add the following to the values file used by the installation and run helm upgrade:

astronomer:
  houston:
    logging:
      loggingSidecar:
        enabled: true
        cloudwatch:
          enabled: true
          region: "<AWS_REGION>"
          logGroupName: "/astronomer/houston/audit"
          useIRSA: false
          secretName: "houston-cloudwatch-creds"

Verify

After the upgrade completes, confirm that the Vector sidecar is running and that audit events are reaching CloudWatch:

HOUSTON_POD=$(kubectl get pods -n "$K8S_NAMESPACE" \
  -l component=houston \
  -o jsonpath='{.items[0].metadata.name}')

kubectl logs -n "$K8S_NAMESPACE" "$HOUSTON_POD" -c vector --tail=20

aws logs tail "$CW_LOG_GROUP" --region "$AWS_REGION" --since 5m

Perform any action that the APC API audits, such as creating a Workspace, and confirm a matching event appears in the log group within a few seconds.
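
To search for a specific event, filter the log group. The filter pattern below is an illustrative guess at the event payload, not a documented schema:

aws logs filter-log-events \
  --log-group-name "$CW_LOG_GROUP" \
  --region "$AWS_REGION" \
  --filter-pattern "Workspace"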

Disable audit log shipping

To stop shipping audit events, set the sidecar to disabled and run helm upgrade:

astronomer:
  houston:
    logging:
      loggingSidecar:
        enabled: false

The upgrade removes the Vector sidecar at the next Pod restart. The APC API continues to emit audit events to standard output, so you can still inspect recent events with kubectl logs on the APC API and APC Worker Pods.
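
For example, using the same label selector as the verification step:

kubectl logs -n "$K8S_NAMESPACE" -l component=houston --tail=50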

Common issues

The sidecar is enabled with more than one sink. Edit the values file so that only one of cloudwatch.enabled, gcpCloudLogging.enabled, or elasticsearch.enabled is true, then run helm upgrade again.

The sidecar is enabled but no sink is selected. Set one of cloudwatch.enabled, gcpCloudLogging.enabled, or elasticsearch.enabled to true, or set loggingSidecar.enabled to false.

A required GCP Cloud Logging value is missing. gcpCloudLogging.projectId, gcpCloudLogging.resource.location, and gcpCloudLogging.resource.clusterName are required when GCP Cloud Logging is enabled, and whitespace-only values are rejected. Set all three to concrete values and run helm upgrade again.

The IAM principal that Vector uses can’t write to the target log group.

  • For IRSA, check that the IRSA role trust policy names the correct OIDC provider, namespace, and service account, and that HoustonCloudWatchLogsPolicy is attached to the role. Also verify that the eks.amazonaws.com/role-arn annotation on houston.serviceAccount.annotations points to the same role ARN; the sketch after this list shows these checks.
  • For static credentials, check that houston-cloudwatch-creds contains valid aws_access_key_id and aws_secret_access_key values and that the corresponding IAM user or role is allowed to write to the target log group.
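
A minimal sketch of the IRSA checks, reusing the environment variables from the CloudWatch section:

aws iam get-role \
  --role-name "$IRSA_ROLE_NAME" \
  --query 'Role.AssumeRolePolicyDocument'

aws iam list-attached-role-policies \
  --role-name "$IRSA_ROLE_NAME"

kubectl get serviceaccount "$K8S_SA" -n "$K8S_NAMESPACE" \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'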

The monitored resource is invalid. Confirm that resource.type is k8s_container, and that resource.location and resource.clusterName match the GKE cluster as it appears in Cloud Logging.

Audit events don't reach the Elasticsearch endpoint. Check that endpoint is reachable from within the cluster and that, if TLS is in use, the CA certificate in caSecretName signs the server certificate presented by the endpoint. If basic auth is enabled, verify that the username and password in houston-elasticsearch-creds are correct.
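
To test reachability from inside the cluster, you can run a throwaway curl Pod; the endpoint is a placeholder for your own:

kubectl run es-check --rm -it --restart=Never \
  --image=curlimages/curl -n "$K8S_NAMESPACE" \
  --command -- curl -sk "https://<elasticsearch-endpoint>:9200"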

Next steps