Install the Astro Private Cloud data plane

Use this guide to deploy an Astro Private Cloud (APC) data plane with the Helm-based Astronomer platform charts. Data planes host Airflow runtimes and execute dag workloads while relying on the APC control plane for shared services such as the UI, Houston API, monitoring coordination, and authentication.

Prerequisites

Before you begin, deploy the control plane and verify network connectivity between the clusters. See Install the APC Control Plane for setup steps.

While it is possible to register a data plane to multiple control planes, Astronomer does not recommend or support this configuration.

The following prerequisites apply when running Astro Private Cloud on Amazon EKS. See the Other tab if you run a different distribution of Kubernetes on AWS.
  • An EKS Kubernetes cluster running a version of Kubernetes certified as compatible in the Kubernetes Version Compatibility Reference.
  • A PostgreSQL instance, accessible from your Kubernetes cluster, and running a version of Postgres certified as compatible on the Version Compatibility Reference.
  • PostgreSQL superuser permissions.
  • Permission to create and modify resources on AWS.
  • Permission to generate a certificate that covers a defined set of subdomains.
  • An SMTP service and credentials. For example, Mailgun or Sendgrid.
  • The AWS CLI.
  • (Optional) eksctl for creating and managing your Astronomer cluster on EKS.
  • A machine meeting the following criteria:
    • Network access to the Kubernetes API Server - either direct access or VPN.
    • Network access to load-balancer resources that are created when Astro Private Cloud is installed later in the procedure - either direct access or VPN.
    • Configured to use the DNS servers where Astro Private Cloud DNS records can be created.
    • Helm (minimum v3.6).
    • The Kubernetes CLI (kubectl).
  • (Situational) The OpenSSL CLI might be required to troubleshoot certain certificate-related conditions.

Ingress controller considerations

Astro Private Cloud requires a Kubernetes Ingress controller in the data plane to function, and it provides an integrated Ingress controller by default. Before installing, decide whether to use Astronomer's integrated ingress controller or a third-party ingress controller.

Astronomer generally recommends the integrated Ingress controller, but Astro Private Cloud also supports certain third-party ingress controllers.

Ingress controllers typically need elevated permissions, including a ClusterRole, to function. Specifically, the Astro Private Cloud Ingress controller requires the ability to:

  • List all namespaces in the cluster.
  • View ingresses in the namespaces.
  • Retrieve secrets in the namespaces to locate and use private TLS certificates that service the ingresses.

If you have complex regulatory requirements, you might need to use an Ingress controller that’s approved by your organization and disable Astronomer’s integrated controller. You configure the Ingress controller during the installation.

Step 1: Create a directory to hold files used when provisioning the data plane

Reuse the same top-level directory you created during the control plane install (for example, ~/astronomer-dev) and create a subdirectory for each data plane. Using names such as ~/astronomer-dev/dp-dev-01a, ~/astronomer-dev/dp-dev-01b, or ~/astronomer-dev/dp-prod-01a keeps the control plane environment obvious in the path and scales cleanly as you add more data planes.
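The layout above can be sketched as follows (directory names are the examples from this guide):

```shell
# Create a per-data-plane subdirectory under the top-level directory used
# during the control plane install, then work from inside it.
mkdir -p ~/astronomer-dev/dp-dev-01a
cd ~/astronomer-dev/dp-dev-01a
pwd
```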

Certain files in the project directory might contain secrets when you set up your sandbox or development environments. For your first install, keep these secrets in a secure place on a suitable machine. As you progress to higher environments, such as staging or production, secure these files separately in a vault and use the remaining project files in your directory to serve as the basis for your CI/CD deployment.

Step 2: Create values.yaml from a template

Astro Private Cloud uses Helm to apply platform-level configurations. Choose your cloud provider tab below to copy a ready-to-use values.yaml, then update image tags, domains, and secrets before deploying.

As you work with the template configuration, keep the following in mind.

  • Do not make any changes to this file until instructed to do so in later steps.
  • Do not run helm upgrade or upgrade.sh until instructed to do so in later steps.
  • Ignore any instructions to run helm upgrade from other Astronomer documentation until you’ve completed this installation.
###########################################
### Astronomer global configuration for EKS
###########################################
global:
  # Installation mode for this plane (control or data)
  plane:
    mode: data
    # Unique prefix for all data plane subdomains exposed through ingress
    domainPrefix: dp-01

  # Base domain shared with the control plane
  baseDomain: env.astronomer.your.domain

  # For development or proof-of-concept, you can use an in-cluster database.
  # This is NOT supported in production.
  # postgresqlEnabled: true

  # Name of the secret containing the TLS certificate; change if not using "astronomer-tls"
  # tlsSecret: astronomer-tls

  # List of secrets containing the cert.pem of trusted private certificate authorities
  # Example command: `kubectl -n astronomer create secret generic private-root-ca --from-file=cert.pem=./private-root-ca.pem`
  # privateCaCerts:
  # - private-root-ca

  # Expose Postgres metrics for Prometheus to scrape
  # prometheusPostgresExporterEnabled: true

  # Database SSL configuration
  ssl:
    # Enable SSL connection to Postgres -- must be false if using the in-cluster database
    enabled: true

#########################
### Ingress configuration
#########################
# nginx:
  # Static IP address the nginx ingress should bind to
  # loadBalancerIP: ~

  # Set privateLoadBalancer to 'false' to make nginx request a LoadBalancer on a public vnet
  # privateLoadBalancer: true

  # Dictionary of arbitrary annotations to add to the nginx ingress.
  # For full configuration options, see https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/
  # Change to 'elb' if your node group is private and doesn't utilize a NAT gateway
  # ingressAnnotations: {service.beta.kubernetes.io/aws-load-balancer-type: nlb}

  # If all subnets are private, auto-discovery may fail.
  # You must enter the subnet IDs manually in the annotation below.
  # service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-id-1,subnet-id-2

#####################################
### Astronomer platform configuration
#####################################

# tags:
  # platform: true
  # monitoring: true
  # logging: true

Step 3: Choose and configure the data plane domain prefix

Assign a unique value to global.plane.domainPrefix. Astronomer uses this prefix as the leftmost label for every data plane hostname (for example, commander.<domainPrefix>.<baseDomain> and prometheus.<domainPrefix>.<baseDomain>) and includes it in monitoring metadata.

  • Use a DNS-compliant label: lowercase letters, numbers, and hyphens only; 1–63 characters; and no leading or trailing hyphen.
  • Confirm you can create DNS records and issue TLS certificates for the resulting hostnames. A later step lists the exact FQDNs that require coverage.

Update your values file with the chosen prefix. For example:

global:
  plane:
    domainPrefix: apc-dp-01a
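Before committing to a prefix, you can sanity-check it against the DNS-label rules above. This is a hypothetical helper, not part of the platform tooling:

```shell
# is_dns_label: return success only for a valid DNS label -- lowercase letters,
# digits, and hyphens; 1-63 characters; no leading or trailing hyphen.
is_dns_label() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$'
}

is_dns_label "apc-dp-01a" && echo "valid"
is_dns_label "Apc_DP-01" || echo "rejected"
```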

Step 4: Configure a base domain

Set global.baseDomain in this data plane’s values.yaml to the same value used by the control plane. All planes must share the exact base domain so HTTPS certificates and DNS records align.

Update your values file with the base domain. For example:

global:
  baseDomain: sandbox-astro.example.com

Step 5: Create the Astro Private Cloud platform namespace

In your Kubernetes cluster, create a Kubernetes namespace to contain the Astro Private Cloud platform, for example apc-dp-01:

kubectl create namespace apc-dp-01

Astro Private Cloud installs components into this namespace to provision and manage Airflow Deployments running in other namespaces. Each Airflow Deployment has its own isolated namespace.

Step 6: Request and validate an Astronomer TLS certificate

To install Astro Private Cloud, you need a TLS certificate that is valid for several domains. One of the domains is the primary name on the certificate, also known as the common name (CN). The additional domains are equally valid, supplementary domains known as Subject Alternative Names (SANs).

Astronomer requires a private certificate to be present in the Astro Private Cloud platform namespace, even if you use a third-party ingress controller that doesn’t otherwise require it.

Request an ingress controller TLS certificate

Request a TLS certificate from your security team for Astro Private Cloud. In your request, include the following:

  • Use <domainPrefix>.<baseDomain> as the Common Name (CN). If your certificate authority will not issue certificates for the bare base domain, use commander.<domainPrefix>.<baseDomain> as the CN instead.
  • Add Subject Alternative Names (SANs) for either of the following options:
    • Option 1: request a wildcard SAN of *.<domainPrefix>.<baseDomain> plus an explicit SAN for <domainPrefix>.<baseDomain>.
    • Option 2: list each hostname individually:
      • <domainPrefix>.<baseDomain> (Commander metadata service)
      • commander.<domainPrefix>.<baseDomain>
      • prometheus.<domainPrefix>.<baseDomain>
      • prom-proxy.<domainPrefix>.<baseDomain>
      • registry.<domainPrefix>.<baseDomain> (only if you keep the integrated registry enabled)
      • es-proxy.<domainPrefix>.<baseDomain> and elasticsearch.<domainPrefix>.<baseDomain> (only when logging tags are enabled)
Wildcards only cover a single label. A control plane certificate like *.<baseDomain> will not secure data plane hosts such as commander.<domainPrefix>.<baseDomain>. Request a dedicated wildcard *.<domainPrefix>.<baseDomain> or list each hostname explicitly.
  • If you use the Astro Private Cloud integrated container registry, specify that the encryption type of the certificate must be RSA.
  • Request the following return format:
    • A key.pem containing the private key in pem format
    • Either a full-chain.pem (containing the public certificate and any additional certificates required to validate it, in pem format) or a bare cert.pem and an explicit affirmation that there are no intermediate certificates and that the public certificate is the full chain.
    • Either the private-root-ca.pem in pem format of the private Certificate Authority used to create your certificate or a statement that the certificate is signed by a public Certificate Authority.
If you’re using the Astro Private Cloud integrated container registry, the encryption type used on your TLS certificate must be RSA. Certbot users must include --key-type rsa when requesting certificates. Most other solutions generate RSA keys by default.

Validate the received certificate and associated items

Ensure that you received each of the following three items:

  • A key.pem containing the private key in pem format.
  • Either a full-chain.pem, in pem format, that contains the public certificate and any additional certificates required to validate it, or a bare cert.pem and an explicit affirmation that there are no intermediate certificates and that the public certificate is the full chain.
  • Either the private-root-ca.pem, in pem format, of the private Certificate Authority used to create your certificate, or a statement that the certificate is signed by a public Certificate Authority.

To validate that your security team generated the correct certificate, run the following command using the openssl CLI:

openssl x509 -in <your-certificate-filepath> -text -noout

This command generates a report. The certificate was created correctly if the X509v3 Subject Alternative Name section of the report includes either the *.<domainPrefix>.<baseDomain> wildcard together with <domainPrefix>.<baseDomain> itself, or every individual subdomain listed earlier in this step.
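For a concrete picture of a passing report, you can generate a throwaway certificate with the wildcard SAN pattern and inspect it. The prefix and base domain below (apc-dp-01a.example.com) are illustrative assumptions; substitute your own:

```shell
# Illustrative only: self-sign a short-lived certificate carrying a wildcard
# SAN plus the bare prefix hostname, then print its SAN section.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/san-key.pem -out /tmp/san-cert.pem \
  -subj "/CN=apc-dp-01a.example.com" \
  -addext "subjectAltName=DNS:*.apc-dp-01a.example.com,DNS:apc-dp-01a.example.com"

# The report should list both DNS names under Subject Alternative Name.
openssl x509 -in /tmp/san-cert.pem -text -noout | grep -A1 "Subject Alternative Name"
```

The `-addext` flag requires OpenSSL 1.1.1 or later.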

Confirm that your full-chain bundle is ordered correctly. To inspect the order of the certificates, run the following command using the openssl CLI:

openssl crl2pkcs7 -nocrl -certfile <your-full-chain-certificate-filepath> | openssl pkcs7 -print_certs -noout

The command generates a report of all certificates. Verify that the certificates are in the following order:

  • Domain
  • Intermediate (optional)
  • Root

(Optional) Additional validation for the Astronomer integrated container registry

If you don’t plan to store images in Astronomer’s integrated container registry and instead plan to store all container images using an external container registry, you can skip this step.

The Astro Private Cloud integrated container registry requires that your private key signs traffic originating from the Astro Private Cloud platform using the RSA encryption method. Confirm that the key is signing traffic correctly before proceeding.

Run the following command to extract the bare public cert from the full-chain certificate file, if it was not already included in the files provided by your certificate authority:

openssl crl2pkcs7 -nocrl -certfile full-chain.pem | openssl pkcs7 -print_certs -noout > cert.pem

Examine the public certificate and ensure all Signature Algorithms are listed as sha1WithRSAEncryption.

openssl x509 -in cert.pem -text | grep Algorithm
        Signature Algorithm: sha1WithRSAEncryption
        Public Key Algorithm: rsaEncryption
        Signature Algorithm: sha1WithRSAEncryption

If your key is not compatible with the Astro Private Cloud integrated container registry, ask your Certificate Authority to re-issue the credentials and emphasize the need for an RSA cert, or use an external container registry.
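For contrast, this illustrative sketch shows what a non-RSA certificate reports, so you can recognize a key that would be incompatible with the integrated registry:

```shell
# Generate a throwaway ECDSA certificate (NOT suitable for the integrated
# registry) and print its key algorithm. Requires OpenSSL 1.1.0+.
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes -days 1 \
  -keyout /tmp/ec-key.pem -out /tmp/ec-cert.pem -subj "/CN=example"

openssl x509 -in /tmp/ec-cert.pem -text -noout | grep "Public Key Algorithm"
```

The output shows id-ecPublicKey rather than rsaEncryption, which is the signal to request a re-issued RSA certificate.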

Step 7: Store and configure the ingress controller TLS certificate

Determine whether your certificate was issued by an intermediate certificate authority. If you don't know, assume it was, and obtain a full-chain.pem bundle from your certificate authority.

Certificates issued by operators of root certificate authorities, including but not limited to Let's Encrypt, are frequently issued from intermediate certificate authorities associated with a trusted root CA.

Astro Private Cloud backend services have stricter trust requirements than most web browsers. A browser might auto-complete the chain and consider your certificate valid even if you don't provide the intermediate certificate authority's public certificate. Astro Private Cloud backend services can reject the same certificate, causing dag and image deploys to fail.

If, and only if, your certificate was issued directly by a universally trusted root Certificate Authority, and not by one of its intermediaries, then the server certificate alone is also the full-chain certificate bundle.

Identify your full-chain public certificate .pem file and use it while storing and configuring the ingress controller TLS certificate.

The --cert parameter must reference your full-chain.pem, which includes the server certificate and any intermediate certificates. Using the server certificate alone causes dag and image deploys to fail.

Run the following command to store the public full-chain certificate in the Astro Private Cloud platform namespace in a tls-type Kubernetes secret. You can choose a custom name for this secret. The following example uses the name astronomer-tls.

kubectl -n <astronomer platform namespace> create secret tls astronomer-tls --cert <fullchain-pem-filepath> --key <your-private-key-filepath>
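Before creating the secret, you can optionally confirm that the certificate and private key belong together: their RSA moduli must match. The sketch below generates a throwaway pair purely to demonstrate the comparison; with your real files, run the same two openssl commands against full-chain.pem and key.pem:

```shell
# Illustrative: generate a throwaway RSA key and certificate, then verify that
# the certificate's modulus matches the private key's modulus.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/pair-key.pem -out /tmp/pair-cert.pem -subj "/CN=example"

cert_mod=$(openssl x509 -in /tmp/pair-cert.pem -noout -modulus | openssl md5)
key_mod=$(openssl rsa -in /tmp/pair-key.pem -noout -modulus | openssl md5)
[ "$cert_mod" = "$key_mod" ] && echo "certificate and key match"
```

A mismatch here means the secret would contain an unusable pair, and ingress TLS would fail.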

Naming the secret astronomer-tls with no substitutions is recommended when using a third-party ingress controller.

Step 8: (Optional) Configure a third-party ingress controller

If you use Astro Private Cloud’s integrated ingress controller, you can skip this step.

Complete the full setup as described in Third-party Ingress-Controllers, which includes steps to configure ingress controllers in specific environment types. When you’re done, return to this page and continue to the next step.

Step 9: Configure a private certificate authority

Skip this step if you don't use a private Certificate Authority (private CA) to sign the certificate used by your ingress controller or the certificates of any of the following services that the Astro Private Cloud platform interacts with.

Astro Private Cloud trusts public Certificate Authorities automatically.

Astro Private Cloud must be configured to trust any private Certificate Authorities issuing certificates for systems Astro Private Cloud interacts with, including but not limited to:

  • ingress controller
  • any container registries that Kubernetes pulls from
  • if using OAUTH, the OAUTH provider
  • if using external Elasticsearch, any external Elasticsearch instances
  • if using external Prometheus, any external Prometheus instances

Perform the procedure described in Configuring private CAs for each certificate authority used to sign TLS certificates. After creating the trust secret (for example astronomer-ca), add it to global.privateCaCerts in values.yaml so platform components trust the issuer.

Astro CLI users must also configure both their operating system and container solution (Docker Desktop or Podman) to trust the private Certificate Authority that signed the certificate used by the Astro Private Cloud ingress controller and any third-party container registries.

Step 10: Confirm your Kubernetes cluster trusts required CAs

If at least one of the following circumstances applies to your installation, complete this step:

  • Users will deploy images to an external container registry and that registry is using a TLS certificate issued by a private CA.
  • You plan for your users to deploy Airflow images to Astro Private Cloud's integrated container registry, and the registry uses a TLS certificate issued by a private CA.

Kubernetes must be able to pull images from one or more container registries for Astro Private Cloud to function. By default, Kubernetes only trusts publicly signed certificates. This means that by default, Kubernetes does not honor the list of certificates trusted by the Astro Private Cloud platform.

Many enterprises configure Kubernetes to trust additional certificate authorities as part of their standard cluster creation procedure. Contact your Kubernetes Administrator to find out what, if any, private certificates are currently trusted by your Kubernetes Cluster. Then, consult your Kubernetes administrator and Kubernetes provider’s documentation for instructions on configuring Kubernetes to trust additional CAs.

Follow procedures for your Kubernetes provider to configure Kubernetes to trust each CA associated with your container registries, including the integrated container registry, if applicable.

Certain clusters do not provide a mechanism to configure the list of certificates trusted by Kubernetes.

While configuring the list of certificates trusted by Kubernetes is a customer responsibility, Astro Private Cloud includes an optional component that can, for certain Kubernetes cluster configurations, add the certificates defined in global.privateCaCerts to that list. To enable it, set global.privateCaCertsAddToHost.enabled and global.privateCaCertsAddToHost.addToContainerd to true in your values.yaml file, and set global.privateCaCertsAddToHost.containerdConfigToml to:

[host."https://registry.<domainPrefix>.<baseDomain>"]
  ca = "/etc/containerd/certs.d/<registry hostname>/<secret name>.pem"

For example, if your base domain is apc-01.mydomain.internal, the domain prefix is apc-dp-01a, and the CA public certificate is stored in the namespace in a secret named my-private-ca, the global.privateCaCertsAddToHost section would be:

global:
  privateCaCertsAddToHost:
    enabled: true
    addToContainerd: true
    hostDirectory: /etc/containerd/certs.d
    containerdConfigToml: |-
      [host."https://registry.apc-dp-01a.apc-01.mydomain.internal"]
        ca = "/etc/containerd/certs.d/registry.apc-dp-01a.apc-01.mydomain.internal/my-private-ca.pem"

Step 11: Configure volume storage classes

Skip this step if your cluster defines a default volume storage class and you want to use it for all volumes associated with Astro Private Cloud and its Airflow Deployments.

Astronomer strongly recommends that you do not back any volumes used for Astro Private Cloud with mechanical hard drives.

Create storage-class-config.yaml in your project directory and update the configuration to match your environment:

global:
  prometheus:
    persistence:
      storageClassName: "<desired-storage-class>"

astronomer:
  registry:
    persistence:
      storageClassName: "<desired-storage-class>"

elasticsearch:
  common:
    persistence:
      storageClassName: "<desired-storage-class>" # Required only if logging is enabled

Remove the elasticsearch section unless you plan to enable logging (tags.logging: true).

Merge these values into values.yaml. You can merge manually, or place merge_yaml.py and the configuration file in your project directory and run python merge_yaml.py storage-class-config.yaml values.yaml.

Step 12: Configure the database

The data plane needs access to a database so it can create and manage the Airflow Deployment databases. To enable this, configure an admin user with permission to create databases and users, and store its connection string in the astronomer-bootstrap secret in the data plane namespace.

  1. Ensure firewalls, network policies, and routing rules allow pods in this data plane cluster to reach the database host/port.

  2. Create a Kubernetes secret with the admin user connection string in the data plane namespace:

    kubectl -n <dataplane namespace> create secret generic astronomer-bootstrap \
      --from-literal connection="postgres://<url-encoded username>:<url-encoded password>@<database hostname>:<database port>"

    If the secret already exists, use kubectl apply to update it instead of recreating it.

If your organization rotates database credentials automatically, include the data plane namespace in the same rotation workflow so the secret stays in sync.
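The username and password in the connection string must be URL-encoded. The following sketch uses a hypothetical urlencode helper built on python3's standard urllib module; the credentials and hostname are illustrative:

```shell
# urlencode: percent-encode a string so reserved characters like @, :, and /
# are safe inside a connection URL.
urlencode() {
  python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$1"
}

DB_USER=$(urlencode 'astro_admin')
DB_PASS=$(urlencode 'p@ss:word/1')
echo "postgres://${DB_USER}:${DB_PASS}@db.example.internal:5432"
# -> postgres://astro_admin:p%40ss%3Aword%2F1@db.example.internal:5432
```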

Step 13: Configure an external Docker registry for Airflow images

Astro Private Cloud users create customized Airflow container images when they deploy project code to the platform. These images frequently contain sensitive information and must be stored in a secure location accessible to Kubernetes.

Ensure network access from the cluster to your registry endpoint and limit visibility to trusted networks (for example private subnets or VPN access). No additional Astronomer configuration is required.

See Configure a custom registry for Deployment images for full details, including per-deployment registries and air gapped workflows.

Step 14: Determine which version of Astro Private Cloud to install

Astronomer recommends that new Astro Private Cloud installations use the most recent version available in either the Stable or Long Term Support (LTS) release channel. Keep this version number available for the following steps. A separate control plane and data plane topology requires Astro Private Cloud version 1.0.0 or later.

See Astro Private Cloud’s lifecycle policy and version compatibility reference for more information.

Step 15: Fetch Airflow Helm charts

If you have internet access to https://helm.astronomer.io, run the following command on the machine where you want to install Astro Private Cloud:

helm repo add astronomer https://helm.astronomer.io/
helm repo update

If you don’t have internet access to https://helm.astronomer.io, download the Astro Private Cloud Platform Helm chart file corresponding to the version of Astro Private Cloud you are installing or upgrading to from https://helm.astronomer.io/astronomer-<version number>.tgz. For example, for Astro Private Cloud v1.0.0 you would download https://helm.astronomer.io/astronomer-1.0.0.tgz. This file does not need to be uploaded to an internal chart repository.

Step 16: Create and customize upgrade.sh

Create a file named upgrade.sh in your platform deployment project directory containing the following script. Specify the following values at the beginning of the script:

  • CHART_VERSION: Your Astro Private Cloud version, including patch and a v prefix. For example, v1.0.0.
  • RELEASE_NAME: Your Helm release name. astronomer is strongly recommended.
  • NAMESPACE: The namespace to install platform components into. astronomer is strongly recommended.
  • CHART_NAME: Set to astronomer/astronomer if fetching platform images from the internet. Otherwise, specify the chart's filename (for example, astronomer-1.0.0.tgz).
#!/bin/bash
set -xe

# typically astronomer
RELEASE_NAME=<astronomer-platform-release-name>
# typically astronomer
NAMESPACE=<astronomer-platform-namespace>
# typically astronomer/astronomer
CHART_NAME=<chart name>
# format is v<major>.<minor>.<patch>, e.g. v1.0.0
CHART_VERSION=<v-prefixed version of the Astro Private Cloud platform chart>
# ensure all the above environment variables have been set

helm repo add --force-update astronomer https://helm.astronomer.io
helm repo update

# upgradeDeployments false ensures that Airflow charts are not upgraded when this script is run.
# If you deployed a config change that is intended to reconfigure something inside Airflow,
# you may set this value to "true" instead. When it is "true", each Airflow chart will
# restart. Note that some stable version upgrades require setting this value to true regardless of your own configuration.
helm upgrade --install --namespace $NAMESPACE \
    -f ./values.yaml \
    --reset-values \
    --version $CHART_VERSION \
    --debug \
    $RELEASE_NAME \
    $CHART_NAME "$@"

Step 18: (OpenShift only) Apply OpenShift-specific configuration

If you’re not installing Astro Private Cloud into an OpenShift Kubernetes cluster, skip this step.

Add the following values to values.yaml. You can do this manually, or place the configuration in a new file named openshift.yaml alongside merge_yaml.py in your project directory and run python merge_yaml.py openshift.yaml values.yaml.

global:
  nginxEnabled: false
  openshiftEnabled: true
  sccEnabled: false
  extraAnnotations:
    kubernetes.io/ingress.class: openshift-default
    route.openshift.io/termination: "edge"
  authSidecar:
    enabled: true
  dagOnlyDeployment:
    securityContext:
      fsGroup: ""
  vectorEnabled: false
  loggingSidecar:
    enabled: true
    name: sidecar-log-consumer

elasticsearch:
  sysctlInitContainer:
    enabled: false

# bundled postgresql not a supported option, only for use in proof-of-concepts
postgresql:
  securityContext:
    enabled: false
  volumePermissions:
    enabled: false

Only Ingress objects with the annotation route.openshift.io/termination: "edge" are supported for generating routes in OpenShift 4.11 and later. Other termination types are no longer supported for automatic route generation.

If you’re on an older version of OpenShift, route creation should be done manually.

Astro Private Cloud on OpenShift is only supported when using a third-party ingress controller together with the logging sidecar feature of Astro Private Cloud. The configuration above enables both.

Step 19: (Optional) Limit Astronomer to a namespace pool

By default, Astro Private Cloud automatically creates namespaces for each new Airflow Deployment.

You can restrict the Airflow management components of Astro Private Cloud to a list of predefined namespaces, and configure the platform to operate without a ClusterRole, by following the instructions in Configure a Kubernetes namespace pool for Astro Private Cloud. To disable the creation of Roles and RoleBindings for commander, config-syncer, and kube-state-metrics, set global.features.namespacePools.createRbac to false.

When global.rbacEnabled is set to false, the platform no longer creates any Roles, RoleBindings, or service accounts. You must grant the required default roles to the Kubernetes default service account to continue with the platform install. See Bring your own Kubernetes service accounts for setup steps.

Step 20: (Optional) Enable sidecar logging

Running a logging sidecar to export Airflow task logs is essential for running Astro Private Cloud in a multi-tenant cluster.

By default, Astro Private Cloud creates a privileged DaemonSet to aggregate logs from Airflow components for viewing from within Airflow and the Astro Private Cloud UI.

You can replace this privileged Daemonset with unprivileged logging sidecars by following instructions in Export logs using container sidecars.

Step 21: Install the data plane using Helm

Deploy the data plane using the upgrade.sh script you created earlier. Confirm that RELEASE_NAME, NAMESPACE, CHART_NAME, and CHART_VERSION reflect your environment, make the script executable (chmod +x upgrade.sh), then execute:

./upgrade.sh

To review manifests before applying them, run ./upgrade.sh --dry-run or use helm template with the same flags defined in the script.

Step 22: Configure DNS to point to the ingress controller

Whether you use Astronomer’s integrated ingress controller or a third-party controller, publish the same set of DNS records so users can reach data plane services.

  • If you use the integrated controller, get the load balancer address directly:

    kubectl -n <astronomer platform namespace> get svc astronomer-dp-nginx
  • If you use a third-party controller, ask your ingress administrator for the hostname or IP address that should front the Astronomer routes (refer back to the details you gathered in Step 8).

Create either a wildcard record for *.<domainPrefix>.<baseDomain> (such as *.apc-dp-01a.apc-01.example.com) or individual CNAME records for the following data plane hostnames, so that traffic routes through the chosen load balancer:

  • <domainPrefix>.<baseDomain>
  • commander.<domainPrefix>.<baseDomain>
  • prometheus.<domainPrefix>.<baseDomain>
  • prom-proxy.<domainPrefix>.<baseDomain>
  • registry.<domainPrefix>.<baseDomain> (only if you keep the integrated registry enabled)
  • es-proxy.<domainPrefix>.<baseDomain> and elasticsearch.<domainPrefix>.<baseDomain> (only when logging tags are enabled)

Astronomer generally recommends pointing <domainPrefix>.<baseDomain> directly at the load balancer address and mapping the remaining hostnames as CNAMEs to it. In lower environments, you can safely use a low TTL (for example, 60 seconds) to speed up troubleshooting during the initial rollout.

After your DNS provider propagates the records, verify them with tools like dig commander.<domainPrefix>.<baseDomain> or getent hosts commander.<domainPrefix>.<baseDomain>. You can complete this DNS work after verifying the platform pods: Astronomer services stay healthy without external DNS, but end users need these records to sign in.
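To avoid missing a record, you can generate the full list of hostnames to check. The helper below is hypothetical, and the prefix and base domain are illustrative; pipe its output into getent or dig once the records propagate:

```shell
# required_hosts: print the data plane FQDNs that need DNS records for a given
# domainPrefix and baseDomain. Add the es-proxy and elasticsearch names if
# logging is enabled; drop registry if the integrated registry is disabled.
required_hosts() {
  prefix="$1"; base="$2"
  printf '%s\n' \
    "${prefix}.${base}" \
    "commander.${prefix}.${base}" \
    "prometheus.${prefix}.${base}" \
    "prom-proxy.${prefix}.${base}" \
    "registry.${prefix}.${base}"
}

required_hosts apc-dp-01a apc-01.example.com
```

For example, `required_hosts apc-dp-01a apc-01.example.com | while read h; do getent hosts "$h" || echo "missing: $h"; done` flags any record that does not resolve yet.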

Step 23: Verify pod creation

To verify all pods are up and running, run:

kubectl -n <astronomer platform namespace> get pods

All pods should be in Running status. For example:

$ kubectl -n astronomer get pods

NAME                                    READY   STATUS    RESTARTS   AGE
astronomer-commander-6bd95b6f9b-t2dg7   1/1     Running   0          6m6s
astronomer-commander-6bd95b6f9b-vz8kn   1/1     Running   0          5m50s
astronomer-dp-nginx-5657d4869b-lsczb    1/1     Running   0          20m
astronomer-dp-nginx-5657d4869b-n4dhz    1/1     Running   0          20m
<snip>

If any pods are not in Running status, see the guide on debugging your installation or contact Astronomer support for additional configuration assistance.

If you added the podLabels configuration, you can also search for Pods created by Astro Private Cloud by searching for the key-value pair in the label you created. See Add Pod labels.

Congratulations, you have configured and installed an Astro Private Cloud platform instance—your new data plane!


Next steps

Register the data plane with the control plane

Add the data plane to the control plane UI so Houston can schedule workloads onto it. See Register a data plane with the APC control plane for instructions on exchanging tokens, approving connectivity, and assigning deployments.