Install the Astro Private Cloud Control Plane

Use this guide to deploy the Astro Private Cloud (APC) control plane with the Helm-based Astronomer platform charts. The control plane hosts central management services such as the APC UI, Houston API, monitoring coordination, and authentication.

If your organization needs time to issue TLS certificates, configure DNS, or approve firewall changes, review the data plane installation guide in parallel. You can request the control plane and data plane prerequisites at the same time so the clusters are ready when the infrastructure tickets close.

Overview

Astro Private Cloud supports two deployment patterns:

  • Split plane: Deploy a dedicated control plane using this guide, then provision one or more data planes with Install a Data Plane. This separation keeps management services isolated from workload execution and lets you scale each plane independently.
  • Unified mode: Run both control plane and data plane services inside a single cluster using Install in Unified Mode. Unified mode is useful for labs or proofs-of-concept but concentrates failures and resource usage.

Choose the pattern that matches your reliability and compliance requirements. After selecting a pattern, determine how many APC environments you need. An environment refers to the pairing of a control plane and its associated data planes. In unified mode, this maps to a single Kubernetes cluster, whereas in split mode, the environment is the combination of the control plane and all the registered data plane Kubernetes clusters.

Each APC environment can host multiple Airflow Deployments, potentially on multiple data planes. Common types include:

  • Sandbox: The lowest environment, containing no sensitive data, used only by system administrators to experiment, and not subject to change control.
  • Development: User-accessible environment that is subject to most of the same restrictions as higher environments, with relaxed change control rules.
  • Staging: All network, security, and patch versions are maintained at the same level as in the production environment. However, it provides no availability guarantees and includes relaxed change control rules.
  • Production: The production instance hosts your production Airflow environments. You can choose to host development Airflow environments here or in environments with lower levels of support and restrictions.

If your organization runs any clusters for APC 1.0 on OpenShift, all clusters in the environment must be on OpenShift. Mixing OpenShift and non-OpenShift clusters across the control plane/data plane boundary is not supported.

Create a project folder for every environment you plan to host to contain its configuration files. For example, if you want to install a development environment, create a folder named ~/astronomer-dev/control-plane.

Certain files in the project directory might contain secrets when you set up your sandbox or development environments. For your first install, keep these secrets in a secure place on a suitable machine. As you progress to higher environments, such as staging or production, secure these files separately in a vault and use the remaining project files in your directory to serve as the basis for your CI/CD deployment.
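As a sketch, the per-environment folder layout described above could be created as follows (the `astronomer-dev` and `astronomer-prod` names are examples; use one folder per environment you plan to host):

```shell
# One project folder per APC environment, each holding that
# environment's configuration files (values.yaml, upgrade.sh, ...).
mkdir -p ~/astronomer-dev/control-plane
mkdir -p ~/astronomer-prod/control-plane

# Confirm the layout
ls -d ~/astronomer-dev/control-plane ~/astronomer-prod/control-plane
```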

Prerequisites

The following prerequisites apply when running Astro Private Cloud on Amazon EKS. See the Other tab if you run a different Kubernetes distribution on AWS.
  • An EKS Kubernetes cluster, running a version of Kubernetes certified as compatible on the Kubernetes Version Compatibility Reference.
  • A PostgreSQL instance, accessible from your Kubernetes cluster, and running a version of Postgres certified as compatible on the Version Compatibility Reference.
  • PostgreSQL superuser permissions.
  • Permission to create and modify resources on AWS.
  • Permission to generate a certificate that covers a defined set of subdomains.
  • An SMTP service and credentials. For example, Mailgun or SendGrid.
  • A machine meeting the following criteria with access to the Kubernetes API Server:
    • The AWS CLI.
    • (Optional) eksctl for creating and managing your Astronomer cluster on EKS.
    • Network access to the Kubernetes API Server - either direct access or VPN.
    • Network access to load-balancer resources that are created when Astro Private Cloud is installed later in the procedure - either direct access or VPN.
    • Configured to use the DNS servers where Astro Private Cloud DNS records can be created.
    • Helm (minimum v3.6).
    • The Kubernetes CLI (kubectl).
  • (Situational) The OpenSSL CLI might be required to troubleshoot certain certificate-related conditions.
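The tooling prerequisites above can be checked with a quick preflight sketch on the admin machine (eksctl and openssl are optional or situational, so a "missing" line for them may be acceptable):

```shell
# Report which of the prerequisite CLIs are on PATH.
# "missing" for eksctl or openssl may be fine; see the notes above.
for tool in aws eksctl kubectl helm openssl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool"
  fi
done
```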

Ingress controller considerations

Astro Private Cloud requires a Kubernetes Ingress controller to function and provides an integrated Ingress controller by default. Before installing, decide whether to use a third-party ingress controller or use Astronomer’s integrated ingress controller.

Astronomer generally recommends the integrated Ingress controller, but Astro Private Cloud also supports certain third-party ingress controllers.

Ingress controllers typically need elevated permissions, including a ClusterRole, to function. Specifically, the Astro Private Cloud Ingress controller requires the ability to:

  • List all namespaces in the cluster.
  • View ingresses in the namespaces.
  • Retrieve secrets in the namespaces to locate and use private TLS certificates that service the ingresses.

If you have complex regulatory requirements, you might need to use an Ingress controller that’s approved by your organization and disable Astronomer’s integrated controller. You configure the Ingress controller during the installation.

Step 1: Create values.yaml from a template

Astro Private Cloud uses Helm to apply platform-level configurations. Choose your cloud provider tab below to copy a ready-to-use values.yaml. In this guide, you will customize the template to your requirements.

As you work with the template configuration, keep the following in mind.

  • Do not make any changes to this file until instructed to do so in later steps.
  • Do not run helm upgrade or upgrade.sh until instructed to do so in later steps.
  • Fully complete the installation in this guide before following any configuration instructions on other Astronomer documentation pages.
###########################################
### Astronomer global configuration for EKS
###########################################
global:
  # Installation mode for the control plane
  plane:
    mode: control

  # Base domain for all control plane subdomains exposed through ingress
  baseDomain: env.astronomer.your.domain

  # For development or proof-of-concept, you can use an in-cluster database.
  # This is NOT supported in production.
  # postgresqlEnabled: true

  # Name of secret containing TLS certificate, change if not using "astronomer-tls"
  # tlsSecret: astronomer-tls

  # List of secrets containing the cert.pem of trusted private certification authorities
  # Example command: `kubectl -n astronomer create secret generic private-root-ca --from-file=cert.pem=./private-root-ca.pem`
  # privateCaCerts:
  #   - private-root-ca

  # Expose Postgres metrics for Prometheus to scrape
  # prometheusPostgresExporterEnabled: true

  # Enable sidecar logging by default
  loggingSidecar:
    enabled: true

  # Database SSL configuration
  ssl:
    # Enable SSL connection to Postgres -- must be false if using in-cluster database
    enabled: true

#########################
### Ingress configuration
#########################
# nginx:
  # Static IP address the nginx ingress should bind to
  # loadBalancerIP: ~

  # Set privateLoadBalancer to 'false' to make nginx request a LoadBalancer on a public vnet
  # privateLoadBalancer: true

  # Dictionary of arbitrary annotations to add to the nginx ingress.
  # For full configuration options, see https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/
  # Change to 'elb' if your node group is private and doesn't utilize a NAT gateway
  # ingressAnnotations: {service.beta.kubernetes.io/aws-load-balancer-type: nlb}

  # If all subnets are private, auto-discovery may fail.
  # You must enter the subnet IDs manually in the annotation below.
  # service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-id-1,subnet-id-2

################################
### Astronomer app configuration
################################
astronomer:
  houston:
    upgradeDeployments:
      enabled: false

    # Application configuration for Houston
    config:
      publicSignups: true  # set to false immediately after the initial system admin user is created

      # Allowed user email domains for system-level roles
      # allowedSystemLevelDomains: []

      # Default configuration for deployments.
      # Can be overridden on a per-data-plane basis.
      deployments:
        # Enable Airflow 3 deployments for clusters
        airflowV3:
          enabled: true

        # Allow deletions to immediately remove the database and namespace
        # hardDeleteDeployment: true

        # Allows you to set your release names
        # manualReleaseNames: true

        # Flag to enable using IAM roles (don't enter a specific role)
        # serviceAccountAnnotationKey: eks.amazonaws.com/role-arn

        # Required if dagOnlyDeployment is enabled
        # configureDagDeployment: true

        # Enables the API for updating deployments
        # enableUpdateDeploymentImageEndpoint: true
        # upsertDeploymentEnabled: true

      # email:
      #   enabled: false
      #   reply: noreply@your.domain

    # secret:
    #   - envName: "EMAIL__SMTP_URL" # Reference to the Kubernetes secret for SMTP credentials. Only required if email is used.
    #     secretName: "astronomer-smtp"
    #     secretKey: "connection"

      # User authentication mechanism. One of the following should be enabled.
      auth:
        github:
          # Allow users to authenticate with GitHub, enabled by default
          enabled: false
        # local:
        #   # Allow users and passwords in the Houston database, disabled by default
        #   enabled: false
        openidConnect:
          # okta:
          #   enabled: false
          # microsoft:
          #   enabled: false
          # adfs:
          #   enabled: false
          # custom:
          #   enabled: false
          google:
            # Allow users to authenticate with Google, enabled by default
            enabled: false
Email delivery is disabled by default. If you want to enable it, you can configure it in a later step: Configure outbound SMTP email.
The snippets in this section leave astronomer.houston.config.publicSignups: true so you can create the initial administrator account. You will lock down account creation in Disable anonymous account creation.
The snippets in this section do not enable any authentication mechanisms. You need to enable at least one mechanism to log in as the first admin user.
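For example, to let the first administrator sign in with Google, you could flip the corresponding flag in the template shown above (a sketch only; enable whichever mechanism your organization uses):

```yaml
astronomer:
  houston:
    config:
      auth:
        openidConnect:
          google:
            enabled: true
```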

Step 2: Choose and configure a base domain

When you install Astro Private Cloud, it creates a variety of services that your users access to manage, monitor, and run Airflow.

Choose a base domain, such as astronomer.example.com, astro-sandbox.example.com, or astro-prod.example.internal, for which:

  • You have the ability to create and edit DNS records
  • You have the ability to issue TLS certificates
  • You can create DNS records for the following hostnames used by the control plane components:
    • app.<base-domain>
    • houston.<base-domain>
    • alertmanager.<base-domain>
    • prometheus.<base-domain>

The base domain itself does not need to be available and can point to another service not associated with Astronomer or Airflow.

When choosing a base domain, consider the following:

  • The name you choose must be resolvable by both your users and Kubernetes itself.
  • All data planes in the environment must be hosted as a sub-domain under this common base domain, e.g. dp-01.<base-domain>, so ensure you can create DNS records and issue TLS certificates for subdomains of this base domain.
  • You need to have or obtain a TLS certificate that is recognized as valid by your users. If your organization hosts a registry for APC images, ensure the TLS certificate is trusted by Kubernetes as well.
  • Wildcard certificates are only valid one level deep. For example, an ingress controller that uses a certificate called *.example.com can provide service for app.example.com but not app.astronomer-dev.example.com.
  • The bottom-level sub-domains, such as app and prometheus, are fixed and cannot be changed.

The base domain is visible to end users. They can view the base domain in the following scenarios:

  • When users access the Astro Private Cloud UI. For example, https://app.sandbox-astro.example.com.
  • When users authenticate to the Astro CLI. For example, astro login sandbox-astro.example.com.
If you install Astro Private Cloud on OpenShift and also want to use OpenShift’s integrated ingress controller, you can use the hostname of the default OpenShift ingress controller as your base domain, such as app.apps.<OpenShift-domain>. Doing this requires permission to reconfigure the route admission policy for the standard ingress controller to InterNamespaceAllowed. See Third Party Ingress Controller - Configuration notes for OpenShift for additional information and options.

Configure the base domain

Locate the global.baseDomain in your values.yaml file and change it to your base domain as shown in the following example:

global:
  # Base domain for all subdomains exposed through ingress
  baseDomain: sandbox-astro.example.com

Step 3: Create the Astro Private Cloud platform namespace

In your Kubernetes cluster, create a Kubernetes namespace to contain the Astro Private Cloud platform. This guide refers to this namespace as <apc platform namespace> below. For example, if you chose apc-cp you would create the namespace as follows:

kubectl create namespace apc-cp

Step 4: Request and validate an Astronomer TLS certificate

To install Astro Private Cloud, you need a TLS certificate that is valid for several domains. One of the domains is the primary name on the certificate, also known as the common name (CN). The additional domains are equally valid, supplementary domains known as Subject Alternative Names (SANs).

Astro Private Cloud requires a private certificate to be present in the Astro Private Cloud platform namespace, even if you use a third-party ingress controller that doesn’t otherwise require it.

Request an ingress controller TLS certificate

Request a TLS certificate for the Astro Private Cloud control plane from your security team. In your request, include the following:

  • Your chosen base domain as the Common Name (CN). If your certificate authority will not issue certificates for the bare base domain, use app.<base-domain> as the CN instead.
  • Either request a wildcard SAN of *.<base-domain> (plus an explicit SAN for <base-domain>) or list each hostname individually:
    • app.<base-domain> (omit if already used as the Common Name)
    • houston.<base-domain>
    • prometheus.<base-domain>
    • alertmanager.<base-domain> (required if you keep the integrated Alertmanager enabled)
Wildcards only cover a single DNS segment. You cannot reuse a data plane wildcard such as *.<domainPrefix>.<base-domain> for the control-plane hosts (app.<base-domain>, houston.<base-domain>, and so on); request a certificate that explicitly matches the control-plane names listed above.
  • Request the following return format:
    • A key.pem containing the private key in pem format
    • Either a full-chain.pem (containing the public certificate and any additional certificates required to validate it, in pem format) or a bare cert.pem and explicit affirmation that there are no intermediate certificates and that the public certificate is the full chain.
    • Either the private-root-ca.pem in pem format of the private Certificate Authority used to create your certificate or a statement that the certificate is signed by a public Certificate Authority.

Validate the received certificate and associated items

Ensure that you received each of the following three items:

  • A key.pem containing the private key in pem format.
  • Either a full-chain.pem, in pem format, that contains the public certificate and any additional certificates required to validate it, or a bare cert.pem and explicit affirmation that there are no intermediate certificates and that the public certificate is the full chain.
  • Either the private-root-ca.pem in pem format of the private Certificate Authority used to create your certificate or a statement that the certificate is signed by a public Certificate Authority.

To validate that your security team generated the correct certificate, run the following command using the openssl CLI:

openssl x509 -in <your-certificate-filepath> -text -noout

This command generates a report. If the X509v3 Subject Alternative Name section of the report includes either a single *.<base-domain> wildcard domain or all required subdomains, then the certificate creation was successful.
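If you want to rehearse the SAN check before your real certificate arrives, the following demo generates a throwaway self-signed certificate with a wildcard SAN and inspects it the same way (the demo-key.pem and demo-cert.pem filenames and the sandbox-astro.example.com domain are placeholders):

```shell
# Demo only: create a short-lived self-signed cert with a wildcard SAN.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout demo-key.pem -out demo-cert.pem \
  -subj "/CN=sandbox-astro.example.com" \
  -addext "subjectAltName=DNS:sandbox-astro.example.com,DNS:*.sandbox-astro.example.com"

# Inspect the SAN section, as you would for the real certificate.
openssl x509 -in demo-cert.pem -noout -text | grep -A1 "Subject Alternative Name"
```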

Confirm that your full-chain certificate chain is ordered correctly. To determine your certificate chain order, run the following command using the openssl CLI:

openssl crl2pkcs7 -nocrl -certfile <your-full-chain-certificate-filepath> | openssl pkcs7 -print_certs -noout

The command generates a report of all certificates. Verify that the certificates are in the following order:

  • Domain
  • Intermediate (optional)
  • Root

Step 5: Store and configure the ingress controller TLS certificate

Determine whether or not your certificate was issued by an intermediate certificate authority. If you do not know, assume you use an intermediate certificate and attempt to obtain a full-chain.pem bundle from your certificate authority.

Certificates issued by operators of root certificate authorities, including but not limited to LetsEncrypt, are frequently issued from intermediate certificate authorities associated with a trusted root CA.

Astro Private Cloud backend services have stricter trust requirements than most web browsers. Web browsers might auto-complete the chain and consider your certificate valid, even if you don’t provide the intermediate certificate authority’s public certificate. Astro Private Cloud backend services can reject the same certificate, causing DAG and image deploys to fail.

If, and only if, your certificate was issued directly by the root Certificate Authority of a universally trusted certificate authority, and not from one of their intermediaries, then the bare cert.pem is also the full-chain certificate bundle.

Identify your full-chain public certificate .pem file and use it while storing and configuring the ingress controller TLS certificate.

Run the following command to store the public full-chain certificate in the Astro Private Cloud Platform Namespace in a tls-type Kubernetes secret. You can create a custom name for this secret. The following example uses the default name, astronomer-tls.

The --cert parameter must reference your full-chain.pem, which includes the server certificate and any intermediate certificates. Using the server certificate directly causes DAG and image deploys to fail.

kubectl -n <apc platform namespace> create secret tls astronomer-tls --cert <fullchain-pem-filepath> --key <your-private-key-filepath>

Naming the secret astronomer-tls with no substitutions is recommended when using a third-party ingress controller. If you use another name for the secret, you must uncomment and update the tlsSecret in your values.yaml file.
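For example, if you created the secret under a hypothetical custom name such as apc-cp-ingress-tls, the corresponding values.yaml override would be:

```yaml
global:
  # Only needed when the secret is not named "astronomer-tls"
  tlsSecret: apc-cp-ingress-tls
```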

Step 6: (Optional) Configure a third-party ingress controller

Skip this step if the control plane will keep using Astronomer’s built-in ingress controller. Configure a custom ingress only when this cluster must integrate with your organization’s ingress stack. The data plane guide includes its own instructions for data plane ingress changes.

If you need a third-party controller, follow the provider-specific guidance in Third-party Ingress-Controllers for the control plane cluster, then return here before continuing.

Step 7: Configure a private certificate authority

Skip this step if you don’t use a private Certificate Authority (private CA) to sign the certificate used by your ingress controller, and you don’t use a private CA for any of the following services that the Astro Private Cloud platform interacts with.

Astro Private Cloud trusts public Certificate Authorities automatically.

Astro Private Cloud must be configured to trust any private Certificate Authorities issuing certificates for systems Astro Private Cloud interacts with, including but not limited-to:

  • ingress controller
  • email server, unless disabled
  • any container registries that Kubernetes pulls from
  • if using OAuth, the OAuth provider
  • if using external Elasticsearch, any external Elasticsearch instances
  • if using external Prometheus, any external Prometheus instances

Perform the procedure described in Configuring private CAs for each certificate authority used to sign TLS certificates. After creating the trust secret (for example astronomer-ca), add it to global.privateCaCerts in values.yaml so platform components trust the issuer.
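For example, after creating a trust secret named astronomer-ca as described above, the values.yaml entry would look like:

```yaml
global:
  privateCaCerts:
    - astronomer-ca
```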

Astro CLI users must also configure both their operating system and container solution, Docker Desktop or Podman, to trust the private certificate Authority that was used to create the certificate used by the Astro Private Cloud ingress controller and any third-party container registries.

Step 8: Confirm your Kubernetes cluster trusts required CAs

Skip this step unless the Astro Private Cloud control plane will pull platform container images from an external container registry that uses a certificate signed by a private CA.

Kubernetes must be able to pull images from one or more container registries for Astro Private Cloud to function. By default, Kubernetes only trusts publicly signed certificates. This means that by default, Kubernetes does not honor the list of certificates trusted by the Astro Private Cloud platform.

Many enterprises configure Kubernetes to trust additional certificate authorities as part of their standard cluster creation procedure. Contact your Kubernetes Administrator to find out what, if any, private certificates are currently trusted by your Kubernetes Cluster. Then, consult your Kubernetes administrator and Kubernetes provider’s documentation for instructions on configuring Kubernetes to trust additional CAs.

Follow procedures for your Kubernetes provider to configure Kubernetes to trust each CA associated with your container registries.

Certain clusters do not provide a mechanism to configure the list of certificates trusted by Kubernetes.

While configuring the Kubernetes list of cluster certificates is a customer responsibility, Astro Private Cloud includes an optional component that can, for certain Kubernetes cluster configurations, add certificates defined in global.privateCaCerts to the list of certificates trusted by Kubernetes. This can be enabled by setting global.privateCaCertsAddToHost.enabled and global.privateCaCertsAddToHost.addToContainerd to true in your values.yaml file and setting global.privateCaCertsAddToHost.containerdConfigToml to:

[host."https://<image registry hostname>"]
ca = "/etc/containerd/certs.d/<image registry hostname>/<secret name>.pem"

For example, if your registry lives at my-registry.example.com and you store the CA certificate in a secret named my-private-ca, the global.privateCaCertsAddToHost section would be:

global:
  privateCaCertsAddToHost:
    enabled: true
    addToContainerd: true
    hostDirectory: /etc/containerd/certs.d
    containerdConfigToml: |-
      [host."https://my-registry.example.com"]
        ca = "/etc/containerd/certs.d/my-registry.example.com/my-private-ca.pem"

Step 9: Configure outbound SMTP email

Astro Private Cloud requires the ability to send email to:

  • Notify users of errors with their Airflow Deployments.
  • Send emails to invite new users to Astro Private Cloud.
  • Send certain platform alerts, which are enabled by default and can be configured.

Astro Private Cloud sends all outbound email using SMTP.

If SMTP is not available in the environment where you’re installing Astro Private Cloud, follow the instructions in Configure Astro Private Cloud to not send outbound email, and then skip the rest of this section.
  1. Obtain a set of SMTP credentials from your email administrator to use for sending email from Astro Private Cloud. When you request an email address and display name, remember that these emails are not designed for users to reply to directly. Request all of the following information:
    • Email address
    • Email display name requirements. Some email servers require a From line of: Do Not Reply <donotreply@example.com>.
    • SMTP username. This is usually the same as the email address.
    • SMTP password
    • SMTP hostname
    • SMTP port
    • Whether or not the connection supports TLS
If there is a / or any other escape character in your username or password, you may need to URL encode those characters.
  2. Ensure that your Kubernetes cluster has access to send outbound email to the SMTP server.

  3. Change the configuration in values.yaml from noreply@your.domain to an email address that is valid to use with your SMTP credentials.

  4. Construct an email connection string and store it in a secret in the Astro Private Cloud platform namespace. The following example shows how to store the connection in a secret called astronomer-smtp for a user my@user with a password my@pass. Make sure to URL-encode the username and password if they contain special characters.

    kubectl -n <apc platform namespace> create secret generic astronomer-smtp --from-literal connection="smtp://my%40user:my%40pass@smtp.email.internal/?requireTLS=true"

    In general, an SMTP URI is formatted as smtps://USERNAME:PASSWORD@HOST/?pool=true. The following table contains examples of the URI for some of the most popular SMTP services:

    Provider          | Example SMTP URL
    ------------------|-----------------
    AWS SES           | smtp://AWS_SMTP_Username:AWS_SMTP_Password@email-smtp.us-east-1.amazonaws.com/?requireTLS=true
    SendGrid          | smtps://apikey:SG.sometoken@smtp.sendgrid.net:465/?pool=true
    Mailgun           | smtps://xyz%40example.com:password@smtp.mailgun.org/?pool=true
    Office365         | smtp://xyz%40example.com:password@smtp.office365.com:587/?requireTLS=true
    Custom SMTP relay | smtp://smtp-relay.example.com:25/?ignoreTLS=true

    If your SMTP provider is not listed, refer to the provider’s documentation for information on creating an SMTP URI.

  5. Ensure this secret is referenced in the values.yaml file via an entry in the astronomer.houston.secret list. For example:

    astronomer:
      houston:
        secret:
          - envName: "EMAIL__SMTP_URL" # Reference to the Kubernetes secret for SMTP credentials. Only required if email is used.
            secretName: "astronomer-smtp"
            secretKey: "connection"

Step 10: Configure volume storage classes

Skip this step if a single default storage class is sufficient for every control plane component. Otherwise, set the fields below to point at the storage classes you want to use. Astronomer recommends solid-state storage for all volumes.

Key fields to review in values.yaml:

  • global.storageClass: Fallback storage class for control plane components.
  • postgresql.persistence.storageClass: Only required if you enable the bundled Postgres database (not recommended outside of testing environments).
  • prometheus.persistence.storageClassName: Used by the control plane Prometheus when retaining metrics locally.
  • alertmanager.persistence.storageClassName: Required if Alertmanager should keep state on disk.
  • nats.jetstream.fileStorage.storageClassName: Only relevant if you enable JetStream persistence; most control plane deployments leave JetStream stateless.

Example: to point Prometheus at a custom storage class called fast-storage, add:

prometheus:
  persistence:
    storageClassName: fast-storage

When you have the desired values, merge them into values.yaml manually or by using merge_yaml.py.
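As a broader sketch, a combined override touching several of the fields listed above might look like the following (gp3 is a hypothetical storage class name; substitute one that exists in your cluster):

```yaml
global:
  storageClass: gp3
prometheus:
  persistence:
    storageClassName: gp3
alertmanager:
  persistence:
    storageClassName: gp3
```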

Step 11: Configure the database

Astro Private Cloud requires a central Postgres database that acts as the backend for Astro Private Cloud’s Houston API.

If, while evaluating Astro Private Cloud, you need to create a temporary environment where Postgres is not available, locate the global.postgresqlEnabled option already present in your values.yaml and set it to true, then skip the remainder of this step.

Note that setting global.postgresqlEnabled to true is an unsupported configuration, and should never be used in any development, staging, or production environment.

If you use Azure Database for PostgreSQL, or another Postgres instance that does not enable pg_trgm by default, you must enable the pg_trgm extension prior to installing Astro Private Cloud. If pg_trgm is not enabled, the install fails. pg_trgm is enabled by default on Amazon RDS and Google Cloud SQL for PostgreSQL.

For instructions on enabling the pg_trgm extension for Azure Flexible Server, see PostgreSQL extensions in Azure Database for PostgreSQL - Flexible Server.

Additional requirements apply to the following databases:

  • AWS RDS:
    • t2.medium is the minimum RDS instance size you can use.
  • Azure Flexible Server:
    • You must enable the pg_trgm extension as described in the advisory earlier in this section.
    • Set global.ssl.mode to prefer in your values.yaml file.

Create a Kubernetes Secret in the namespace chosen for the install, named astronomer-bootstrap, that points to your database. You must URL encode any special characters in your Postgres password.

The in-cluster Postgres option (global.postgresqlEnabled: true) should only be used for short-lived testing. Always rely on an external Postgres instance for any persistent environment.

To create this secret, run the following command replacing the APC platform namespace, username, password, database hostname, and database port with their respective values:

kubectl -n <apc platform namespace> create secret generic astronomer-bootstrap \
  --from-literal connection="postgres://<url-encoded username>:<url-encoded password>@<database hostname>:<database port>/<database name>"

For example, for a user named bob with password abc@abc and the database dbname at hostname some.host.internal, you would run:

kubectl -n astronomer create secret generic astronomer-bootstrap \
  --from-literal connection="postgres://bob:abc%40abc@some.host.internal:5432/dbname"
This secret must be named astronomer-bootstrap and must be present in the APC platform namespace before you install Astro Private Cloud.
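The URL-encoding required above can be scripted; a sketch (assuming python3 is available) that reproduces the bob/abc@abc example:

```shell
# URL-encode a value, escaping every reserved character (@ becomes %40).
enc() { python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=""))' "$1"; }

# Build the bootstrap connection string from its parts.
echo "postgres://$(enc 'bob'):$(enc 'abc@abc')@some.host.internal:5432/dbname"
# → postgres://bob:abc%40abc@some.host.internal:5432/dbname
```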

Step 12: Configure the Docker registry used for platform images

Skip this step if you are installing Astro Private Cloud onto a Kubernetes cluster that can pull container images from public image repositories and you don’t want to mirror these images locally.

Docker registry secrets will also need to be created in any data planes you register with this environment, which will be covered in the data plane installation guide.

If your registry can be reached without credentials, ensure the endpoint is restricted to trusted networks (for example private subnets or VPN access). Avoid exposing the platform image registry directly to the public internet. No additional APC configuration is required beyond setting the repository locations later in this step.

For additional examples (including per-deployment registry settings and air gapped workflows), see Configure a custom registry for Deployment images.

Step 13: Determine which version of Astro Private Cloud to install

Astronomer recommends new Astro Private Cloud installations use the most recent version available in either the Stable or Long Term Support (LTS) release-channel. Keep this version number available for the following steps. For a separate control plane and data plane topology, at least version 1.0.0 of Astro Private Cloud is required.

See Astro Private Cloud’s lifecycle policy and version compatibility reference for more information.

Step 14: Fetch Airflow Helm charts

If you have internet access to https://helm.astronomer.io, run the following command on the machine where you want to install Astro Private Cloud:

helm repo add astronomer https://helm.astronomer.io/
helm repo update

If you don’t have internet access to https://helm.astronomer.io, download the Astro Private Cloud platform Helm chart file for the version of Astro Private Cloud you are installing or upgrading to from https://helm.astronomer.io/astronomer-<version number>.tgz. For example, for Astro Private Cloud v1.0.0 you would download https://helm.astronomer.io/astronomer-1.0.0.tgz. You do not need to upload this file to an internal chart repository.
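The tarball URL can be derived mechanically from the v-prefixed chart version by stripping the leading v. A small shell sketch, using the hypothetical version v1.0.0:

```shell
CHART_VERSION=v1.0.0
# ${CHART_VERSION#v} strips the leading "v" to match the tarball naming scheme
echo "https://helm.astronomer.io/astronomer-${CHART_VERSION#v}.tgz"
```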

Step 15: Create and customize upgrade.sh

Create a file named upgrade.sh in your platform deployment project directory containing the following script. Specify the following values at the beginning of the script:

  • CHART_VERSION: Your Astro Private Cloud version, including patch and a v prefix. For example, v1.0.0.
  • RELEASE_NAME: Your Helm release name. astronomer is strongly recommended.
  • NAMESPACE: The namespace to install platform components into. astronomer is strongly recommended.
  • CHART_NAME: Set to astronomer/astronomer if fetching platform images from the internet. Otherwise, specify the filename if you’re installing from a file (for example astronomer-1.0.0.tgz).
Do not run this script after you create it. Your installation uses this script later, during the final install and future upgrade processes.

#!/bin/bash
set -xe

# typically astronomer
RELEASE_NAME=<astronomer-platform-release-name>
# typically astronomer
NAMESPACE=<astronomer-platform-namespace>
# typically astronomer/astronomer
CHART_NAME=<chart name>
# format is v<major>.<minor>.<patch> e.g. v1.0.0
CHART_VERSION=<v-prefixed version of the Astro Private Cloud platform chart>
# ensure all the above environment variables have been set

helm repo add --force-update astronomer https://helm.astronomer.io
helm repo update

# upgradeDeployments false ensures that Airflow charts are not upgraded when this script is run
# If you deployed a config change that is intended to reconfigure something inside Airflow,
# then you may set this value to "true" instead. When it is "true", each Airflow chart will
# restart. Note that some stable version upgrades require setting this value to true regardless of your own configuration.
helm upgrade --install --namespace $NAMESPACE \
  -f ./values.yaml \
  --reset-values \
  --version $CHART_VERSION \
  --debug \
  --set astronomer.houston.upgradeDeployments.enabled=false \
  $RELEASE_NAME \
  $CHART_NAME $@

Step 16: Mirror platform images

This step is optional but strongly recommended for production environments so your cluster can pull platform images from a registry you control.
  1. Gather the list of required platform images using one of the following methods:

Mac and Linux users with jq installed can set CHART_VERSION in the following snippet and run it to produce a list of images.

CHART_VERSION=<v-prefixed version of the Astro Private Cloud platform chart>
UNPREFIXED_CHART_VERSION=${CHART_VERSION#v}
curl -s https://updates.astronomer.io/astronomer-software/releases/astronomer-${UNPREFIXED_CHART_VERSION}.json | jq -r '(.astronomer.images, .airflow.images) | to_entries[] | "\(.value.repository):\(.value.tag)"' | sort

  2. Copy the images to your container registry using the naming scheme you configured when you set up a custom image registry.
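To script the mirroring itself, you can transform that image list into copy commands for a tool such as skopeo. The sketch below assumes a dict shaped like the releases JSON; the sample entries and registry name are illustrative only, not the real image inventory:

```python
# Hypothetical excerpt shaped like the astronomer-<version>.json releases file
release = {
    "astronomer": {"images": {"houston": {"repository": "quay.io/astronomer/ap-houston-api", "tag": "1.0.0"}}},
    "airflow": {"images": {"statsd": {"repository": "quay.io/astronomer/ap-statsd-exporter", "tag": "0.26.0"}}},
}

def mirror_commands(release: dict, registry: str) -> list[str]:
    """Emit one `skopeo copy` command per platform image, re-rooted under your registry."""
    cmds = []
    for section in ("astronomer", "airflow"):
        for image in release[section]["images"].values():
            source = f"{image['repository']}:{image['tag']}"
            name = image["repository"].rsplit("/", 1)[-1]  # keep only the final path segment
            cmds.append(f"skopeo copy docker://{source} docker://{registry}/{name}:{image['tag']}")
    return sorted(cmds)

for cmd in mirror_commands(release, "registry.example.internal/astronomer"):
    print(cmd)
```

Adjust the naming scheme inside the loop if your registry layout differs from a flat "one repository per image" structure.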

Step 17: Fetch Astro Runtime updates

If you are installing Astro Private Cloud into an egress-controlled or air gapped environment, perform the following steps.

By default, Astro Private Cloud checks for Airflow updates, which are included in the Astro Runtime, once per day at midnight by querying https://updates.astronomer.io/astronomer-runtime. This returns a JSON file with details about the latest available Astro Runtime versions.

In an egress-controlled or air gapped environment, you need to store the JSON file in the cluster itself, avoiding the external check. To store the JSON file in the cluster, complete the following steps:

  1. Download the JSON file and store it in a Kubernetes configmap by running the following commands:

curl -XGET https://updates.astronomer.io/astronomer-runtime -o astro_runtime_releases.json

kubectl -n <apc platform namespace> create configmap astro-runtime-base-images --from-file=astro_runtime_releases.json

  2. Add your configmap name, astro-runtime-base-images, to your Houston configuration using the runtimeReleasesConfigMapName configuration:

astronomer:
  houston:
    runtimeReleasesConfigMapName: astro-runtime-base-images
    config:
      airgapped:
        enabled: true

Step 18: (OpenShift only) Apply OpenShift-specific configuration

If you’re not installing Astro Private Cloud into an OpenShift Kubernetes cluster, skip this step.

Add the following values into values.yaml. You can do this manually, or save the configuration as openshift.yaml alongside merge_yaml.py in your project directory and run python merge_yaml.py openshift.yaml values.yaml.

global:
  openshiftEnabled: true
  sccEnabled: false
  extraAnnotations:
    kubernetes.io/ingress.class: openshift-default
    route.openshift.io/termination: "edge"
  authSidecar:
    enabled: true
  dagOnlyDeployment:
    securityContext:
      fsGroup: ""
  vectorEnabled: false
  loggingSidecar:
    enabled: true
    name: sidecar-log-consumer
elasticsearch:
  sysctlInitContainer:
    enabled: false

# bundled postgresql not a supported option, only for use in proof-of-concepts
postgresql:
  securityContext:
    enabled: false
  volumePermissions:
    enabled: false

Only Ingress objects with the annotation route.openshift.io/termination: "edge" are supported for generating routes in OpenShift 4.11 and later. Other termination types are no longer supported for automatic route generation.

If you’re on an older version of OpenShift, you must create routes manually.

Astro Private Cloud on OpenShift is supported only when using a third-party ingress controller together with the logging sidecar feature of Astro Private Cloud. The configuration above enables both.

Step 19: (Optional) Integrate an external identity provider

Astro Private Cloud includes integrations for several of the most popular OAuth2 identity providers (IdPs), such as Okta and Microsoft Entra ID. Configuring an external IdP allows you to automatically provision and manage users in accordance with your organization’s security requirements. See Integrate an auth system to configure the identity provider of your choice in your values.yaml file.

Step 20: Install Astro Private Cloud using Helm

Deploy the control plane using the upgrade.sh script you created earlier. Confirm RELEASE_NAME, NAMESPACE, and CHART_VERSION reflect your environment, then execute:

$ ./upgrade.sh

To review manifests before applying them, run ./upgrade.sh --dry-run or use helm template with the same flags defined in the script.

Step 21: Configure DNS to point to the ingress controller

Whether you use Astronomer’s integrated ingress controller or a third-party controller, publish the same set of DNS records so users can reach control plane services.

  • If you use the integrated controller, get the load balancer address directly:

    kubectl -n <apc platform namespace> get svc astronomer-nginx
  • If you use a third-party controller, ask your ingress administrator for the hostname or IP address that should front the Astronomer routes (refer back to Configure a third-party ingress controller).

Create either a wildcard record such as *.sandbox-astro.example.com or individual CNAME records for the following hostnames so that traffic routes through the chosen load balancer:

  • app.<base-domain> (required)
  • houston.<base-domain> (required)
  • prometheus.<base-domain> (required)
  • alertmanager.<base-domain> (required if you keep the integrated Alertmanager enabled)
  • <base-domain> (optional but recommended, provides a vanity redirect to app.<base-domain>)

Astronomer generally recommends pointing the zone apex (@) directly to the load balancer address and mapping the remaining hostnames as CNAMEs to that apex. In lower environments, you can safely use a low TTL (for example 60 seconds) to speed up troubleshooting during the initial rollout.
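The record set above can be enumerated programmatically, which is handy when templating DNS changes for several environments. A sketch under the assumption that the apex points at the load balancer and all other hostnames are CNAMEs to the apex; the function and record layout are illustrative, not an Astronomer tool:

```python
def control_plane_records(base_domain: str, alertmanager: bool = True) -> list[tuple[str, str, str]]:
    """Return (name, record type, target) tuples for the control plane hostnames."""
    hosts = ["app", "houston", "prometheus"] + (["alertmanager"] if alertmanager else [])
    # zone apex: point directly at the load balancer address
    records = [(base_domain, "A/ALIAS", "<load balancer address>")]
    # all other hostnames: CNAME to the apex
    records += [(f"{host}.{base_domain}", "CNAME", base_domain) for host in hosts]
    return records

for name, rtype, target in control_plane_records("sandbox-astro.example.com"):
    print(f"{name}  {rtype}  {target}")
```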

After your DNS provider propagates the records, verify them with tools like dig <hostname> or getent hosts <hostname>. You can complete this DNS work after verifying the platform pods—Astronomer services stay healthy without external DNS, but end users need these records to sign in.

Step 22: Verify you can access the UI

Visit https://app.<base-domain> in your web browser to view Astro Private Cloud’s web interface. If any components are not ready, consult the debugging guide or contact Astronomer support with the relevant logs and events.

Congratulations, you have configured and installed an Astro Private Cloud platform instance - your new Airflow control plane!

From the Astro Private Cloud UI, you’ll be able to both invite and manage users as well as create and monitor Airflow Deployments on the platform.

Step 23: Disable anonymous account creation

Leave astronomer.houston.config.publicSignups: true only long enough to create your first administrator. Afterwards, secure the platform as follows:

  1. If you keep public signups enabled, turn on outbound email (astronomer.houston.config.email.enabled: true), specify a trusted domain list under astronomer.houston.config.allowedSystemLevelDomains, and verify that users can only join through an approved identity provider.
  2. Otherwise, set astronomer.houston.config.publicSignups: false so new accounts require an invitation.
  3. Apply the updated configuration with helm upgrade targeting the control plane release.

Additional customization

The following topics provide optional information about one or more parts of the installation guide:

Next steps

Register the data planes with the control plane

Add the data planes to the control plane to begin creating Airflow Deployments. See Register a data plane with the APC control plane for instructions on exchanging tokens, approving connectivity, and assigning deployments.