Install Astro Private Cloud in unified mode

Use this guide to deploy an Astro Private Cloud (APC) unified cluster, where control plane and data plane components run together in a single Kubernetes cluster. Unified mode combines management services, such as Astro UI, Houston, and NATS, with runtime services like Commander, Config Syncer, and data plane ingress so that platform operators can evaluate APC without maintaining separate clusters.

If you prefer to keep control plane and data plane Helm releases separate but run them in the same Kubernetes cluster, follow the dedicated control plane and data plane install guides sequentially. That approach consumes slightly more resources than unified mode but keeps responsibilities isolated.

At some points in this installation procedure, your particular environment configuration might require you to take additional steps or allow you to skip certain steps.

When you see a callout like this, read it carefully and follow the instructions if they apply to your installation.

Prerequisites

Many sections in this document reuse the control plane installation workflow. Integrate the data plane-specific configuration from Install data plane where called out to ensure unified clusters contain all runtime functionality.

The following prerequisites apply when running APC on Amazon EKS. See the Other tab if you run a different version of Kubernetes on AWS.
  • An EKS Kubernetes cluster running a version of Kubernetes certified as compatible in the Kubernetes Version Compatibility Reference.
  • A PostgreSQL instance, accessible from your Kubernetes cluster, and running a version of Postgres certified as compatible on the Version Compatibility Reference.
  • PostgreSQL superuser permissions.
  • Permission to create and modify resources on AWS.
  • Permission to generate a certificate that covers a defined set of subdomains.
  • An SMTP service and credentials. For example, Mailgun, or Sendgrid.
  • The AWS CLI.
  • (Optional) eksctl for creating and managing your Astronomer cluster on EKS.
  • A machine meeting the following criteria with access to the Kubernetes API Server:
    • Network access to the Kubernetes API Server - either direct access or VPN.
    • Network access to load-balancer resources that are created when APC is installed later in the procedure - either direct access or VPN.
    • Configured to use the DNS servers where APC DNS records can be created.
    • Helm (minimum v3.6).
    • The Kubernetes CLI (kubectl).
  • (Situational) The OpenSSL CLI might be required to troubleshoot certain certificate-related conditions.
Ensure your cluster meets both the control plane and data plane prerequisites, because unified mode deploys services from each.

Ingress controller considerations

Astro Private Cloud requires a Kubernetes Ingress controller to function and provides an integrated Ingress controller by default. Before installing, decide whether to use a third-party ingress controller or use Astronomer’s integrated ingress controller.

Astronomer generally recommends you use the integrated Ingress controller, but Astro Private Cloud also supports certain third-party ingress-controllers.

Ingress controllers typically need elevated permissions, including a ClusterRole, to function. Specifically, the Astro Private Cloud Ingress controller requires the ability to:

  • List all namespaces in the cluster.
  • View ingresses in the namespaces.
  • Retrieve secrets in the namespaces to locate and use private TLS certificates that service the ingresses.

If you have complex regulatory requirements, you might need to use an Ingress controller that’s approved by your organization and disable Astronomer’s integrated controller. You configure the Ingress controller during the installation.

Step 1: Plan the structure of your Astro Private Cloud environments

Before installing APC, consider how many instances of the platform you want to host because you install each of these instances on separate Kubernetes clusters, following the instructions in this document.

Each instance of APC can host multiple Airflow environments, or Deployments. Some common types of APC instances you might consider hosting are:

  • Sandbox: The lowest environment that contains no sensitive data, used only by system-administrators to experiment, and not subject to change control.
  • Development: User-accessible environment that is subject to most of the same restrictions of higher environments, with relaxed change control rules.
  • Staging: All network, security, and patch versions are maintained at the same level as in the production environment. However, it provides no availability guarantees and includes relaxed change control rules.
  • Production: The production instance hosts your production Airflow environments. You can choose to host development Airflow environments here or in environments with lower levels of support and restrictions.

Plan each environment as a pairing of one control plane with one or more data planes. If your organization runs any clusters for APC 1.0 on OpenShift, keep the full environment on OpenShift—mixing OpenShift and non-OpenShift clusters across the control plane/data plane boundary is not supported. Create a project folder for every environment you plan to host to contain its configuration files. For example, if you want to install a development environment, create a folder named ~/astronomer-dev/control-plane.

Note that in addition to the default cluster in unified mode, additional data plane clusters can be registered by following the Install a Data Plane guide.

Certain files in the project directory might contain secrets when you set up your sandbox or development environments. For your first install, keep these secrets in a secure place on a suitable machine. As you progress to higher environments, such as staging or production, secure these files separately in a vault and use the remaining project files in your directory to serve as the basis for your CI/CD deployment.

Step 2: Create values.yaml from a template

APC uses Helm to apply platform-level configurations. Choose your cloud provider tab below to copy a ready-to-use apc-values.yaml. Then, in the following steps, update image tags, domains, and secrets before deploying.

As you work with the template configuration, use the following guidelines to avoid installation issues:

  • Do not make any changes to the values.yaml file until instructed to do so.
  • Do not run helm upgrade or upgrade.sh until instructed to do so.
  • Ignore any instructions to run helm upgrade from other Astronomer documentation until after you complete this unified mode installation procedure.
###########################################
### Astronomer global configuration for EKS
###########################################
global:
  # Installation mode for the control plane
  plane:
    mode: unified

  # Base domain for all control plane subdomains exposed through ingress
  baseDomain: env.astronomer.your.domain

  # For development or proof-of-concept, you can use an in-cluster database.
  # This is NOT supported in production.
  # postgresqlEnabled: true

  # Name of secret containing TLS certificate, change if not using "astronomer-tls"
  # tlsSecret: astronomer-tls

  # List of secrets containing the cert.pem of trusted private certification authorities
  # Example command: `kubectl -n astronomer create secret generic private-root-ca --from-file=cert.pem=./private-root-ca.pem`
  # privateCaCerts:
  #   - private-root-ca

  # Expose Postgres metrics for Prometheus to scrape
  # prometheusPostgresExporterEnabled: true

  # Use a sidecar for exporting task logs
  loggingSidecar:
    enabled: true
    name: sidecar-log-consumer

  # Enable dag-only deployments
  dagOnlyDeployment:
    enabled: true

  # Database SSL configuration
  ssl:
    # Enable SSL connection to Postgres -- must be false if using in-cluster database
    enabled: true

#########################
### Ingress configuration
#########################
# nginx:
  # Static IP address the nginx ingress should bind to
  # loadBalancerIP: ~

  # Set privateLoadbalancer to 'false' to make nginx request a LoadBalancer on a public vnet
  # privateLoadBalancer: true

  # Dictionary of arbitrary annotations to add to the nginx ingress.
  # For full configuration options, see https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/
  # Change to 'elb' if your node group is private and doesn't utilize a NAT gateway
  # ingressAnnotations: {service.beta.kubernetes.io/aws-load-balancer-type: nlb}

  # If all subnets are private, auto-discovery may fail.
  # You must enter the subnet IDs manually in the annotation below.
  # service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-id-1,subnet-id-2

################################
### Astronomer app configuration
################################
astronomer:
  houston:
    upgradeDeployments:
      enabled: false

    # Application configuration for Houston
    config:
      publicSignups: true ## set to false immediately after initial system admin user created

      # Allowed user email domains for system level roles
      # allowedSystemLevelDomains: []

      # Default configuration for deployments.
      # Can be overridden on a per-data-plane basis.
      deployments:
        # Enable Airflow 3 deployments for clusters
        airflowV3:
          enabled: true

        # Allow deletions to immediately remove the database and namespace
        # hardDeleteDeployment: true

        # Allows you to set your release names
        # manualReleaseNames: true

        # Flag to enable using IAM roles (don't enter a specific role)
        # serviceAccountAnnotationKey: eks.amazonaws.com/role-arn

        # Required if dagOnlyDeployment is enabled
        configureDagDeployment: true

        # Enables the API for updating deployments
        # enableUpdateDeploymentImageEndpoint: true
        # upsertDeploymentEnabled: true

      # email:
      #   enabled: false
      #   reply: noreply@your.domain

    # secret:
    #   - envName: "EMAIL__SMTP_URL" # Reference to the Kubernetes secret for SMTP credentials. Only required if email is used.
    #     secretName: "astronomer-smtp"
    #     secretKey: "connection"

      # User authentication mechanism. One of the following should be enabled.
      auth:
        github:
          # Allow users to authenticate with GitHub, enabled by default
          enabled: false
        # local:
        #   # Allow users and passwords in the Houston database, disabled by default
        #   enabled: false
        openidConnect:
          # okta:
          #   enabled: false
          # microsoft:
          #   enabled: false
          # adfs:
          #   enabled: false
          # custom:
          #   enabled: false
          google:
            # Allow users to authenticate with Google, enabled by default
            enabled: false

#################################
## Default tagged groups enabled
#################################
# tags:
  # Enable platform components by default (nginx, astronomer)
  # platform: true

  # Enable monitoring stack (prometheus, kube-state)
  # monitoring: true

  # Enable logging stack (elasticsearch, vector)
  # logging: true
Email delivery is disabled by default. If you want to enable it, you can configure it in a later step: Configure outbound SMTP email.

Configure public signups

The apc-values.yaml examples leave astronomer.houston.config.publicSignups: true, so you can create the initial administrator account. You can control account creation in Disable anonymous account creation.

Step 3: Choose and configure a base domain

When you install APC it creates a variety of services that your users access to manage, monitor, and run Airflow.

Choose a base domain such as astronomer.example.com, astro-sandbox.example.com, or astro-prod.example.internal for which:

  • You have the ability to create and edit DNS records
  • You have the ability to issue TLS certificates
  • The following addresses are used by Astronomer components:
    • app.<base-domain>
    • deployments.<base-domain>
    • houston.<base-domain>
    • alertmanager.<base-domain>
    • prometheus.<base-domain>
    • registry.<base-domain>

The base domain itself does not need to be available and can point to another service not associated with Astronomer or Airflow. If the base domain is available, you can choose to establish a vanity redirect from <base-domain> to app.<base-domain> later in the installation process.

When choosing a base domain, consider the following:

  • The name you choose must be resolvable by both your users and Kubernetes itself.
  • All hostnames must remain under the base domain (for example, app.<base-domain>), so ensure you can create DNS records and issue TLS certificates for those subdomains.
  • You need to have or obtain a TLS certificate that is recognized as valid by your users. If you use the APC integrated container registry, the TLS certification must also be recognized as valid by Kubernetes itself.
  • Wildcard certificates are only valid one level deep. For example, an ingress controller that uses a certificate called *.example.com can provide service for app.example.com but not app.astronomer-dev.example.com.
  • The bottom-level hostnames, such as app, registry, or prometheus, are fixed and cannot be changed.

The base domain is visible to end users. They can view the base domain in the following scenarios:

  • When users access the APC UI. For example, https://app.sandbox-astro.example.com.
  • When users access an Airflow Deployment. For example, https://deployments.sandbox-astro.example.com/deployment-release-name/airflow.
  • When users authenticate to the Astro CLI. For example, astro login sandbox-astro.example.com.
If you install APC on OpenShift and also want to use OpenShift's integrated ingress controller, you can use the hostname of the default OpenShift ingress controller as your base domain, such as app.apps.<OpenShift-domain>. Doing this requires permission to reconfigure the route admission policy for the standard ingress controller to InterNamespaceAllowed. See Third Party Ingress Controller - Configuration notes for OpenShift for additional information and options.

Configure the base domain

Locate the global.baseDomain in your values.yaml file and change it to your base domain as shown in the following example:

global:
  # Base domain for all subdomains exposed through ingress
  baseDomain: sandbox-astro.example.com

Step 4: Create the APC platform namespace

In your Kubernetes cluster, create a Kubernetes namespace to contain the APC platform. The following example uses astronomer as the namespace, which matches the namespace used in later examples in this guide.

kubectl create namespace astronomer

APC uses the contents of this namespace to provision and manage Airflow instances running in other namespaces. Each Airflow instance has its own isolated namespace.

Step 5: Request and validate an Astronomer TLS certificate

To install APC you need a TLS certificate that is valid for several domains. One of the domains is the primary name on the certificate, also known as the common name (CN). The additional domains are equally valid, supplementary domains known as Subject Alternative Names (SANs).

Astronomer requires a private certificate in the APC platform namespace, even if you use a third-party ingress controller that doesn’t otherwise require it.

Request an ingress controller TLS certificate

Request a TLS certificate from your security team for APC. In your request, include the following:

  • Your chosen base domain as the Common Name (CN). If your certificate authority will not issue certificates for the bare base domain, use app.<base-domain> as the CN instead.
  • Either request a wildcard SAN of *.<base-domain> (plus an explicit SAN for <base-domain>) or list each hostname individually:
    • app.<base-domain> (omit if already used as the Common Name)
    • deployments.<base-domain> (required for Airflow UIs and APIs)
    • houston.<base-domain>
    • prometheus.<base-domain>
    • registry.<base-domain> (required if you keep the integrated container registry enabled)
    • alertmanager.<base-domain> (required if you keep the integrated Alertmanager enabled)
  • If you use the APC integrated container registry, specify that the encryption type of the certificate must be RSA.
  • Request the following return format:
    • A key.pem containing the private key in pem format
    • Either a full-chain.pem (containing the public certificate and any additional certificates required to validate it, in pem format) or a bare cert.pem and explicit affirmation that there are no intermediate certificates and that the public certificate is the full chain.
    • Either the private-root-ca.pem in pem format of the private Certificate Authority used to create your certificate or a statement that the certificate is signed by a public Certificate Authority.
If you use the APC integrated container registry, the encryption type used on your TLS certificate must be RSA. Certbot users must include --key-type rsa when requesting certificates. Most other solutions generate RSA keys by default.
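If you want to confirm the key type yourself before proceeding, the following check is a minimal sketch that assumes your private key was returned as key.pem; openssl reports RSA key ok for an RSA key and errors out for other key types.

openssl rsa -in key.pem -check -noout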

Validate the received certificate and associated items

Ensure that you received each of the following three items:

  • A key.pem containing the private key in pem format.
  • Either a full-chain.pem, in pem format, that contains the public certificate and any additional certificates required to validate it, or a bare cert.pem and explicit affirmation that there are no intermediate certificates and that the public certificate is the full chain.
  • Either the private-root-ca.pem, in pem format, of the private Certificate Authority used to create your certificate, or a statement that the certificate is signed by a public Certificate Authority.

To validate that your security team generated the correct certificate, run the following command using the openssl CLI:

openssl x509 -in <your-certificate-filepath> -text -noout

This command generates a report. If the X509v3 Subject Alternative Name section of this report includes either a single *.<base-domain> wildcard domain or all subdomains, then the certificate creation was successful.

Confirm that your full-chain certificate chain is ordered correctly. To determine your certificate chain order, run the following command using the openssl CLI:

openssl crl2pkcs7 -nocrl -certfile <your-full-chain-certificate-filepath> | openssl pkcs7 -print_certs -noout

The command generates a report of all certificates. Verify that the certificates are in the following order:

  • Domain
  • (Optional) Intermediate
  • Root

(Optional) Additional validation for the Astronomer integrated container registry

If you don’t plan to store images in Astronomer’s integrated container registry and instead plan to store all container images using an external container registry, you can skip this step.

The APC integrated container registry requires that your private key signs traffic originating from the APC platform using the RSA encryption method. Confirm that the key is signing traffic correctly before proceeding.

If your certificate authority did not already provide a bare public certificate, run the following command to extract it from the full-chain certificate file:

openssl crl2pkcs7 -nocrl -certfile full-chain.pem | openssl pkcs7 -print_certs > cert.pem

Examine the public certificate and ensure that every Signature Algorithm and the Public Key Algorithm use RSA, as in the following example output:

openssl x509 -in cert.pem -text | grep Algorithm
        Signature Algorithm: sha1WithRSAEncryption
        Public Key Algorithm: rsaEncryption
    Signature Algorithm: sha1WithRSAEncryption

If your key is not compatible with the APC integrated container registry, ask your Certificate Authority to re-issue the credentials and emphasize the need for an RSA cert, or plan to use an external container registry instead.

Step 6: Store and configure the ingress controller TLS certificate

Determine whether or not your certificate was issued by an intermediate certificate authority. If you do not know, assume you use an intermediate certificate and attempt to obtain a full-chain.pem bundle from your certificate authority.

Certificates issued by operators of root certificate authorities, including but not limited to Let's Encrypt, are frequently issued from intermediate certificate authorities associated with a trusted root CA.

APC backend services have stricter trust requirements than most web browsers. A browser might auto-complete the chain and consider your certificate valid even if you don't provide the intermediate certificate authority's public certificate. APC backend services can reject the same certificate, causing DAG and image deploys to fail.

If, and only if, your certificate was issued directly by the root certificate of a universally trusted Certificate Authority, and not by one of its intermediaries, then the bare server certificate (server.crt) is also the full-chain certificate bundle.

Identify your full-chain public certificate .pem file and use it while storing and configuring the ingress controller TLS certificate.
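If your certificate authority returned the leaf and intermediate certificates as separate files rather than a single bundle, you can assemble and sanity-check the chain yourself. This is a minimal sketch that assumes hypothetical file names cert.pem, intermediate.pem, and private-root-ca.pem; substitute the names your certificate authority actually provided, and omit the intermediate file if there is none.

# Concatenate the leaf certificate first, then any intermediates, to produce the bundle.
cat cert.pem intermediate.pem > full-chain.pem

# Confirm the leaf certificate validates against the intermediate and root.
openssl verify -CAfile private-root-ca.pem -untrusted intermediate.pem cert.pem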

The --cert parameter must reference your full-chain.pem, which includes the server certificate and any intermediate certificates. Using the server certificate alone causes DAG and image deploys to fail.

Run the following command to store the public full-chain certificate in the APC platform namespace in a tls-type Kubernetes secret. You can choose a custom name for this secret; the following example uses the name astronomer-tls.

kubectl -n <astronomer platform namespace> create secret tls astronomer-tls --cert <fullchain-pem-filepath> --key <your-private-key-filepath>

If your security team has confirmed that there are no intermediate certificates, the bare certificate is the full chain, and you can run the following command instead:

kubectl -n astronomer create secret tls astronomer-tls --cert full-chain.pem --key server_private_key.pem

Naming the secret astronomer-tls with no substitutions is recommended when using a third-party ingress controller.
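To confirm the secret was created with the expected type, you can inspect it. This assumes the astronomer namespace and the secret name astronomer-tls used in the examples above.

kubectl -n astronomer get secret astronomer-tls -o jsonpath='{.type}{"\n"}'
# Expected output: kubernetes.io/tls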

Step 7: (Optional) Configure a third-party ingress controller

If you use APC’s integrated ingress controller, you can skip this step.

Complete the full setup as described in Third-party Ingress-Controllers, which includes steps to configure ingress controllers in specific environment types. When you’re done, return to this page and continue to the next step.

Step 8: Configure a private certificate authority

Skip this step if you don't use a private Certificate Authority (private CA) to sign the certificate used by your ingress controller, and you don't use a private CA for any of the other services that the APC platform interacts with, listed below.

APC trusts public Certificate Authorities automatically.

APC must be configured to trust any private Certificate Authorities issuing certificates for systems APC interacts with, including, but not limited to the following:

  • Ingress controller
  • Email server, unless disabled
  • Any container registries that Kubernetes pulls from
  • If using OAUTH, the OAUTH provider
  • If using external Elasticsearch, any external Elasticsearch instances
  • If using external Prometheus, any external Prometheus instances

Perform the procedure described in Configuring private CAs for each certificate authority used to sign TLS certificates. After creating the trust secret (for example astronomer-ca), add it to global.privateCaCerts in values.yaml so platform components trust the issuer.
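As a concrete illustration of that procedure, the following sketch stores a private CA's public certificate in a secret named astronomer-ca and references it in values.yaml. The file name private-root-ca.pem is an assumption; Configuring private CAs remains the authoritative reference.

kubectl -n <astronomer platform namespace> create secret generic astronomer-ca \
  --from-file=cert.pem=./private-root-ca.pem

global:
  privateCaCerts:
    - astronomer-ca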

Astro CLI users must also configure both their operating system and container solution, Docker Desktop or Podman, to trust the private Certificate Authority that was used to create the certificate used by the APC ingress controller and any third-party container registries.

Step 9: Confirm your Kubernetes cluster trusts required CAs

If at least one of the following circumstances applies to your installation, you must complete this step:

  • You configured APC to pull platform container images from an external container registry that uses a certificate signed by a private CA.
  • You plan for your users to deploy Airflow images to APC’s integrated container registry and Astronomer is using a TLS certificate issued by a private CA.
  • Users will deploy images to an external container registry and that registry is using a TLS certificate issued by a private CA.

Kubernetes must be able to pull images from one or more container registries for APC to function. By default, Kubernetes only trusts publicly signed certificates. This means that by default, Kubernetes does not honor the list of certificates trusted by the APC platform.

Many enterprises configure Kubernetes to trust additional certificate authorities as part of their standard cluster creation procedure. Contact your Kubernetes Administrator to find out what, if any, private certificates are currently trusted by your Kubernetes Cluster. Then, consult your Kubernetes administrator and Kubernetes provider’s documentation for instructions on configuring Kubernetes to trust additional CAs.

Follow procedures for your Kubernetes provider to configure Kubernetes to trust each CA associated with your container registries, including the integrated container registry, if applicable.

Certain clusters do not provide a mechanism to configure the list of certificates trusted by Kubernetes.

While configuring the list of certificates trusted by Kubernetes is a customer responsibility, APC includes an optional component that, for certain Kubernetes cluster configurations, can add the certificates defined in global.privateCaCerts to the list of certificates trusted by Kubernetes. To enable it, set global.privateCaCertsAddToHost.enabled and global.privateCaCertsAddToHost.addToContainerd to true in your values.yaml file, and set global.privateCaCertsAddToHost.containerdConfigToml to:

[host."https://registry.<base-domain>"]
  ca = "/etc/containerd/certs.d/<registry hostname>/<secret name>.pem"

For example, if your base domain is astro-sandbox.example.com and the CA public-certificate is stored in the platform namespace in a secret named my-private-ca, the global.privateCaCertsAddToHost section would be:

global:
  privateCaCertsAddToHost:
    enabled: true
    addToContainerd: true
    hostDirectory: /etc/containerd/certs.d
    containerdConfigToml: |-
      [host."https://registry.astro-sandbox.example.com"]
        ca = "/etc/containerd/certs.d/registry.astro-sandbox.example.com/my-private-ca.pem"

Step 10: Configure outbound SMTP email

APC requires the ability to send email to:

  • Notify users of errors with their Airflow Deployments.
  • Send emails to invite new users to Astronomer.
  • Send certain platform alerts, enabled by default but can be configured.

APC sends all outbound email using SMTP.

If SMTP is not available in the environment where you're installing APC, follow the instructions in Configure APC to not send outbound email, and then skip the rest of this section.
  1. Obtain a set of SMTP credentials from your email administrator to use for sending email from APC. When you request an email address and display name, remember that these emails are not designed for users to reply to directly. Request all of the following information:
    • Email address.
    • Email display name requirements. Some email servers require a From line of: Do Not Reply <donotreply@example.com>.
    • SMTP username. This is usually the same as the email address.
    • SMTP password.
    • SMTP hostname.
    • SMTP port.
    • Whether or not the connection supports TLS.
If there is a / or any other escape character in your username or password, you might need to URL encode those characters.
  2. Ensure that your Kubernetes cluster has permissions configured to send outbound email to the SMTP server.

  3. Change the configuration in values.yaml from noreply@my.email.internal to an email address that is valid to use with your SMTP credentials. (A complete values.yaml example appears at the end of this step.)

  4. Construct an email connection string and store it in a secret in the Astronomer platform namespace. The following example shows how to store the connection in a secret called astronomer-smtp. Make sure to url-encode the username and password if they contain special characters.

    kubectl -n astronomer create secret generic astronomer-smtp --from-literal connection="smtp://my%40user:my%40pass@smtp.email.internal/?requireTLS=true"

    In general, an SMTP URI is formatted as smtps://USERNAME:PASSWORD@HOST/?pool=true. The following table contains examples of the URI for some of the most popular SMTP services:

    • AWS SES: smtp://AWS_SMTP_Username:AWS_SMTP_Password@email-smtp.us-east-1.amazonaws.com/?requireTLS=true
    • SendGrid: smtps://apikey:SG.sometoken@smtp.sendgrid.net:465/?pool=true
    • Mailgun: smtps://xyz%40example.com:password@smtp.mailgun.org/?pool=true
    • Office365: smtp://xyz%40example.com:password@smtp.office365.com:587/?requireTLS=true
    • Custom SMTP relay: smtp://smtp-relay.example.com:25/?ignoreTLS=true

    If your SMTP provider is not listed, refer to the provider’s documentation for information on creating an SMTP URI.

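If you do plan to send email, the following values.yaml sketch shows the commented email block from the template uncommented and pointed at the astronomer-smtp secret created above. The reply address is a placeholder; confirm the exact structure against the template comments before applying.

astronomer:
  houston:
    config:
      email:
        enabled: true
        reply: noreply@example.com
    secret:
      - envName: "EMAIL__SMTP_URL"
        secretName: "astronomer-smtp"
        secretKey: "connection"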

Step 11: Configure volume storage classes

Skip this step if your cluster defines a volume storage class, and you want to use it for all volumes associated with APC and its Airflow Deployments.

Astronomer strongly recommends that you do not back any volumes used for APC with mechanical hard drives.
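To see which storage classes your cluster offers before choosing one, list them; the output also marks the cluster default, if one is set.

kubectl get storageclass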

Create storage-class-config.yaml in your project directory and update the configuration to match your environment:

global:
  prometheus:
    persistence:
      storageClassName: "<desired-storage-class>"
  elasticsearch:
    common:
      persistence:
        storageClassName: "<desired-storage-class>"
astronomer:
  registry:
    persistence:
      storageClassName: "<desired-storage-class>"
  houston:
    config:
      deployments:
        helm:
          dagDeploy:
            persistence:
              storageClass: "<desired-storage-class>"
          airflow:
            redis:
              persistence:
                storageClassName: "<desired-storage-class>"
nats:
  nats:
    jetStream:
      fileStorage:
        storageClassName: "<desired-storage-class>"
# this option does not apply when using an external postgres database
# bundled postgresql not a supported option, only for use in proof-of-concepts
postgresql:
  persistence:
    storageClass: "<desired-storage-class>"

Merge these values into values.yaml. You can do this manually or by placing merge_yaml.py and the configuration as a new file in your project directory and running python merge_yaml.py storage-class-config.yaml values.yaml.

Step 12: Configure the database

Astronomer requires a central Postgres database that acts as the backend for the APC Houston API and hosts individual metadata databases for all Deployments created on the platform.

If, while evaluating APC, you need to create a temporary environment where Postgres is not available, uncomment the global.postgresqlEnabled option in your values.yaml, set it to true, and then skip the remainder of this step.

Note that setting global.postgresqlEnabled to true is an unsupported configuration and should never be used in any development, staging, or production environment.

If you use Azure Database for PostgreSQL or another Postgres instance that does not enable the pg_trgm extension by default, you must enable pg_trgm prior to installing APC. If pg_trgm is not enabled, the install fails. pg_trgm is enabled by default on Amazon RDS and Google Cloud SQL for PostgreSQL.

For instructions on enabling the pg_trgm extension for Azure Flexible Server, see PostgreSQL extensions in Azure Database for PostgreSQL - Flexible Server.
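Where your Postgres instance lets you create extensions directly (after any provider-specific allow-listing), enabling pg_trgm is a single statement. This sketch assumes a psql-capable client and a connection string with sufficient privileges.

psql "postgres://<admin user>:<url-encoded password>@<database hostname>:5432/postgres" \
  -c "CREATE EXTENSION IF NOT EXISTS pg_trgm;"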

Additional requirements apply to the following databases:

  • AWS RDS:
    • t2.medium is the minimum RDS instance size you can use.
  • Azure Flexible Server:
    • You must enable the pg_trgm extension as described in the advisory earlier in this section.
    • Set global.ssl.mode to prefer in your values.yaml file.

Create a Kubernetes Secret named astronomer-bootstrap that points to your database. You must URL encode any special characters in your Postgres password.

The in-cluster Postgres option (global.postgresqlEnabled: true) is deprecated and should only be used for short-lived testing. Always rely on an external Postgres instance for any persistent environment.

To create this secret, run the following command replacing the astronomer platform namespace, username, password, database hostname, and database port with their respective values. Remember that username and password must be url-encoded if they contain special-characters:

kubectl -n <astronomer platform namespace> create secret generic astronomer-bootstrap \
  --from-literal connection="postgres://<url-encoded username>:<url-encoded password>@<database hostname>:<database port>"

For example, for a username named bob with password abc@abc at hostname some.host.internal, you would run:

kubectl -n astronomer create secret generic astronomer-bootstrap \
  --from-literal connection="postgres://bob:abc%40abc@some.host.internal:5432"
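If you want to confirm that the cluster can reach the database before installing, one option is a throwaway pod running the Postgres client. This is a sketch; the postgres:15 image tag is an assumption, so use any client image available to your cluster, and reuse the same connection string you stored in the secret.

kubectl -n astronomer run pg-check --rm -it --restart=Never --image=postgres:15 -- \
  psql "postgres://bob:abc%40abc@some.host.internal:5432" -c "SELECT 1;"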

Step 13: Configure the Docker registry used for platform images

Skip this step if you are installing APC onto a Kubernetes cluster that can pull container images from public image repositories and you don’t want to mirror these images locally.

If you can retrieve images from a registry that can be reached without credentials, ensure the endpoint hosting the registry is restricted to trusted networks, for example, private subnets or VPN access. Avoid exposing the platform image registry directly to the public internet. No additional Astronomer configuration is required beyond setting the repository locations later in this step.

For additional examples (including per-Deployment registry settings and air gapped workflows), see Configure a custom registry for Deployment images.

Step 14: Determine which version of APC to install

Astronomer recommends new APC installations use the most recent version available in either the Stable or Long Term Support (LTS) release-channel. Keep this version number available for the following steps.

See APC’s lifecycle policy and version compatibility reference for more information.

Step 15: Fetch Airflow Helm charts

If you have internet access to https://helm.astronomer.io, run the following command on the machine where you want to install APC:

helm repo add astronomer https://helm.astronomer.io/
helm repo update

If you don’t have internet access to https://helm.astronomer.io, download the APC Platform Helm chart file corresponding to the version of APC you are installing or upgrading to from https://helm.astronomer.io/astronomer-<version number>.tgz. For example, for APC v1.0.0 you would download https://helm.astronomer.io/astronomer-1.0.0.tgz. This file does not need to be uploaded to an internal chart repository.
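If you added the Astronomer Helm repository, you can list the published chart versions to confirm that the version you selected in Step 14 is available; the --versions flag prints every released chart version rather than only the latest.

helm search repo astronomer/astronomer --versions | head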

Step 16: Create and customize upgrade.sh

Create a file named upgrade.sh in your platform deployment project directory containing the following script. Specify the following values at the beginning of the script:

  • CHART_VERSION: Your APC version, including patch and a v prefix. For example, v1.0.0.
  • RELEASE_NAME: Your Helm release name. astronomer is strongly recommended.
  • NAMESPACE: The namespace to install platform components into. astronomer is strongly recommended.
  • CHART_NAME: Set to astronomer/astronomer if fetching platform images from the internet. Otherwise, specify the filename if you’re installing from a file (for example astronomer-1.0.0.tgz).
#!/bin/bash
set -xe

# typically astronomer
RELEASE_NAME=<astronomer-platform-release-name>
# typically astronomer
NAMESPACE=<astronomer-platform-namespace>
# typically astronomer/astronomer
CHART_NAME=<chart name>
# format is v<major>.<minor>.<patch> e.g. v1.0.0
CHART_VERSION=<v-prefixed version of the APC platform chart>
# ensure all the above environment variables have been set

helm repo add --force-update astronomer https://helm.astronomer.io
helm repo update

# upgradeDeployments false ensures that Airflow charts are not upgraded when this script is run
# If you deployed a config change that is intended to reconfigure something inside Airflow,
# then you may set this value to "true" instead. When it is "true", then each Airflow chart will
# restart. Note that some stable version upgrades require setting this value to true regardless of your own configuration.
# If you are currently on APC 0.25, 0.26, or 0.27, you must upgrade to version 0.28 before upgrading to 0.29. A direct upgrade to 0.29 from a version lower than 0.28 is not possible.
helm upgrade --install --namespace $NAMESPACE \
    -f ./values.yaml \
    --reset-values \
    --version $CHART_VERSION \
    --debug \
    --set astronomer.houston.upgradeDeployments.enabled=false \
    $RELEASE_NAME \
    $CHART_NAME $@

Step 17: Mirror platform images

This step is optional but strongly recommended for production environments so your cluster can pull platform images from a registry you control.
  1. Gather the list of required platform images using one of the following methods:

Mac and Linux users with jq installed can set CHART_VERSION in the following snippet and run it to produce a list of images.

CHART_VERSION=<v-prefixed version of the APC platform chart>
UNPREFIXED_CHART_VERSION=${CHART_VERSION#v}
curl -s https://updates.astronomer.io/astronomer-software/releases/astronomer-${UNPREFIXED_CHART_VERSION}.json | jq -r '(.astronomer.images, .airflow.images) | to_entries[] | "\(.value.repository):\(.value.tag)"' | sort
  2. Copy these images to the container registry using the naming scheme you configured when you set up a custom image registry. A sketch using skopeo follows this list.
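The exact commands depend on your registry tooling. The following is a minimal sketch that assumes the image list from the previous step was saved to images.txt, that skopeo is installed and already authenticated to both registries, and that registry.example.internal/astronomer is a hypothetical destination path; adjust it to match your registry's naming scheme.

DEST_REGISTRY=registry.example.internal/astronomer

# Copy each source image to the destination registry, keeping the image name and tag.
while read -r image; do
  name=${image%%:*}   # e.g. quay.io/astronomer/ap-houston-api
  tag=${image##*:}    # e.g. <tag>
  skopeo copy "docker://${image}" "docker://${DEST_REGISTRY}/${name##*/}:${tag}"
done < images.txt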

Step 18: Fetch Airflow/Astro Runtime updates

If you are installing APC into an egress-controlled or air-gapped environment, perform the following steps.

By default, APC checks for Airflow updates, which are included in the Astro Runtime, once per day at midnight by querying https://updates.astronomer.io/astronomer-runtime. This returns a JSON file with details about the latest available Astro Runtime versions.

In an egress-controlled or air gapped environment, you need to store the JSON file in the cluster itself, avoiding the external check. To store the JSON file in the cluster, complete the following steps:

  1. Download the JSON files and store them in a Kubernetes configmap by running the following commands:
curl -XGET https://updates.astronomer.io/astronomer-runtime -o astro_runtime_releases.json

kubectl -n <astronomer platform namespace> create configmap astro-runtime-base-images --from-file=astro_runtime_releases.json
  2. Add your configmap name, astro-runtime-base-images, to your Houston configuration using the runtimeReleasesConfigMapName configuration:
astronomer:
  houston:
    runtimeReleasesConfigMapName: astro-runtime-base-images
    config:
      airgapped:
        enabled: true

Step 19: (OpenShift only) Apply OpenShift-specific configuration

If you’re not installing APC into an OpenShift Kubernetes cluster, skip this step.

Add the following values into values.yaml. You can do this manually or by placing the configuration as a new file, along with merge_yaml.py in your project directory and running python merge_yaml.py openshift.yaml values.yaml.

global:
  openshiftEnabled: true
  sccEnabled: false
  extraAnnotations:
    kubernetes.io/ingress.class: openshift-default
    route.openshift.io/termination: "edge"
  authSidecar:
    enabled: true
  dagOnlyDeployment:
    securityContext:
      fsGroup: ""
  nodeExporterEnabled: false
  vectorEnabled: false
  loggingSidecar:
    enabled: true
    name: sidecar-log-consumer
elasticsearch:
  sysctlInitContainer:
    enabled: false

# bundled postgresql not a supported option, only for use in proof-of-concepts
postgresql:
  securityContext:
    enabled: false
  volumePermissions:
    enabled: false

Only Ingress objects with the annotation route.openshift.io/termination: "edge" are supported for generating routes in OpenShift 4.11 and later. Other termination types are no longer supported for automatic route generation.

If you're on an older version of OpenShift, you must create routes manually.

APC on OpenShift is only supported when you use a third-party ingress controller and the APC logging sidecar feature. The preceding configuration enables both of these items.

Step 20: (Optional) Limit Astronomer to a namespace pool

By default, APC automatically creates namespaces for each new Airflow Deployment.

You can restrict the Airflow management components of APC to a list of predefined namespaces and configure it to operate without a ClusterRole by following the instructions in Configure a Kubernetes namespace pool for APC. If you want to disable creation of roles and rolebindings for commander, config-syncer, and kube-state-metrics, set global.features.namespacePools.createRbac to false.

When global.rbacEnabled is set to false, the platform no longer creates any roles, rolebindings, or service accounts. You must bind the required default roles to the default Kubernetes service account yourself to continue with the platform install. See Bring your own Kubernetes service accounts for setup steps.
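For orientation, a namespace pool configuration typically has the following shape. The keys shown and the namespace names are assumptions for illustration only; confirm the exact structure in Configure a Kubernetes namespace pool for APC before applying it.

global:
  features:
    namespacePools:
      enabled: true
      createRbac: true
      namespaces:
        create: false
        names:
          - airflow-team-a
          - airflow-team-b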

Step 21: (Optional) Enable sidecar logging

Running a logging sidecar to export Airflow task logs is essential for running APC in a multi-tenant cluster.

By default, APC creates a privileged DaemonSet to aggregate logs from Airflow components for viewing from within Airflow and the APC UI.

You can replace this privileged Daemonset with unprivileged logging sidecars by following instructions in Export logs using container sidecars.

Step 22: (Optional) Integrate an external identity provider

APC includes integrations for several of the most popular OAUTH2 identity providers (IdPs), such as Okta and Microsoft Entra ID. Configuring an external IdP allows you to automatically provision and manage users in accordance with your organization’s security requirements. See Integrate an auth system to configure the identity provider of your choice in your values.yaml file.
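As an illustration of the shape of that configuration, the following sketch enables Okta as the identity provider. The clientId and discoveryUrl values are placeholders, and the exact fields required for each provider are documented in Integrate an auth system; treat this as an assumption to verify rather than a complete configuration.

astronomer:
  houston:
    config:
      auth:
        openidConnect:
          okta:
            enabled: true
            clientId: "<okta-client-id>"
            discoveryUrl: "https://<your-okta-domain>.okta.com"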

Step 23: Install APC using Helm

Deploy the platform using the upgrade.sh script you created earlier. Confirm RELEASE_NAME, NAMESPACE, and CHART_VERSION reflect your environment, then execute:

./upgrade.sh

To review manifests before applying them, run ./upgrade.sh --dry-run or use helm template with the same flags defined in the script.

Step 24: Configure DNS for the integrated ingress controller

Whether you use Astronomer’s integrated ingress controller or a third-party controller, publish the same set of DNS records so users can reach control plane services.

  • If you use the integrated controller, get the load balancer address directly:

    kubectl -n <astronomer platform namespace> get svc astronomer-nginx
  • If you use a third-party controller, ask your ingress administrator for the hostname or IP address that should front the Astronomer routes (refer back to Configure a third-party ingress controller).

Create either a wildcard record such as *.sandbox-astro.example.com or individual CNAME records for the following hostnames so that traffic routes through the chosen load balancer:

  • app.<base-domain> (required)
  • deployments.<base-domain> (required for Airflow UIs and APIs)
  • houston.<base-domain> (required)
  • prometheus.<base-domain> (required)
  • registry.<base-domain> (required if you keep the integrated container registry enabled)
  • alertmanager.<base-domain> (required if you keep the integrated Alertmanager enabled)
  • <base-domain> (optional but recommended, provides a vanity redirect to app.<base-domain>)

Astronomer generally recommends pointing the zone apex (@) directly to the load balancer address and mapping the remaining hostnames as CNAMEs to that apex. In lower environments, you can safely use a low TTL (for example 60 seconds) to speed up troubleshooting during the initial rollout.

After your DNS provider propagates the records, verify them with tools like dig <hostname> or getent hosts <hostname>. You can complete this DNS work after verifying the platform pods—Astronomer services stay healthy without external DNS, but end users need these records to sign in.
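A quick way to check every record at once is a small loop. This sketch assumes the sandbox-astro.example.com base domain used in earlier examples and that dig is installed on your machine.

for host in app deployments houston prometheus registry alertmanager; do
  echo -n "${host}: "
  dig +short "${host}.sandbox-astro.example.com" | head -1
done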

Step 25: Verify you can access the UI

Visit https://app.<base-domain> in your web-browser to view APC’s web interface. If any components are not ready, consult the debugging guide or contact Astronomer support with the relevant logs and events.
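Before opening the UI, you can confirm that the platform pods are running and ready; this assumes the astronomer platform namespace used in earlier examples.

kubectl -n astronomer get pods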

Congratulations, you have configured and installed a unified APC platform instance, with the control plane and a default data plane running in a single cluster.

From the UI, you’ll be able to both invite and manage users as well as create and monitor Airflow Deployments on the platform.

Step 26: Disable anonymous account creation

Leave astronomer.houston.config.publicSignups: true only until you create your first administrator. Afterwards, secure the platform using the following steps:

  1. If you keep public sign-ups enabled, turn on outbound email (astronomer.houston.config.email.enabled: true), specify a trusted domain list under astronomer.houston.config.allowedSystemLevelDomains, and verify that users can only join through an approved identity provider.
  2. Otherwise, set astronomer.houston.config.publicSignups: false so new accounts require an invitation.
  3. Apply the updated configuration with helm upgrade (or by re-running upgrade.sh) against the platform release, as sketched after this list.
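A minimal values.yaml change for the invitation-only path looks like the following sketch; merge it into your values.yaml and re-run ./upgrade.sh to apply it.

astronomer:
  houston:
    config:
      publicSignups: false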

Additional customization

The following topics include optional information about one or more parts of the installation guide:

Next steps

Register the data plane with the control plane

Start adding users, workspaces and deployments in your newly installed or upgraded APC environment at https://app.<base-domain>.