Astro Private Cloud control plane architecture
Astro Private Cloud (APC) separates responsibilities between a control plane cluster and one or more data plane clusters. The control plane hosts the shared management experience while delegating Airflow execution to attached data planes. This document summarizes the components that run inside a control plane, how they interact with data planes, and what infrastructure operators must provide. For the runtime side of the platform, see Data Plane Architecture, or review Unified Architecture if you run both roles in a single cluster.
Responsibilities
A control plane cluster focuses on:
- Platform management: The APC control service (Houston API), the web interface (Astro UI), the event streaming broker (NATS JetStream), and supporting cronjobs store platform configuration, authenticate users, and orchestrate deployments across data planes.
- Tenant management: Admins create workspaces, manage users and teams, integrate an identity provider (IdP), control registry access tokens, and publish runtime images from this cluster.
- Centralized telemetry: The metrics store (Prometheus) scrapes local platform service metrics and federates metrics from attached data planes. The alert routing service (Alertmanager) raises notifications for the entire platform.
- Data plane coordination: The control plane exposes TLS-secured ingress endpoints (`app.<base-domain>`, `houston.<base-domain>`, and so on) for users and the Astro CLI, and it keeps secure connections open to each data plane's deployment orchestrator (Commander) and metrics gateway so configuration and telemetry stay in sync.
The control plane never hosts user Airflow workloads. Instead, it maintains references to external Kubernetes clusters (data planes) and coordinates their lifecycle.
Core Components in Control Mode
APC enables the following components when `global.plane.mode` is set to `control` (or `unified`):
- Astro Private Cloud web interface (UI) (`charts/astronomer/templates/astro-ui/*`): Provides the web application for administrators and users. Runs only on the control plane.
- Astro Private Cloud control service (Houston API) and cronjobs (`charts/astronomer/templates/houston/**/*`): Manages platform metadata, user authentication, workspace creation, registry tokens, and periodic tasks such as cleanup jobs and metrics aggregation.
- Internal event streaming service (NATS JetStream) (`charts/nats/templates/*`): Event bus used by Houston and the deployment orchestrator (Commander) to coordinate deployments.
- Alert routing service (Alertmanager) (`charts/alertmanager/templates/*`): Consolidates alerts sent by Prometheus.
- Central metrics store (Prometheus): The shared Prometheus StatefulSet (`charts/prometheus/templates/prometheus-statefulset.yaml`) runs in all modes, but its federation jobs, ingress, and auth proxy are tailored to aggregate data plane metrics when the cluster runs as a control plane.
- Control plane NGINX ingress (`charts/nginx/templates/controlplane/*`): Exposes the control plane UI and API endpoints for users and the Astro CLI.
- Optional Postgres/PgBouncer (`charts/postgresql`, `charts/pgbouncer`): Most installations use an external database, but if `global.postgresqlEnabled=true`, the embedded database is deployed regardless of plane mode. In a split deployment this chart is typically disabled. Astronomer recommends using the embedded Postgres only for testing or development environments.
These services rely on a handful of Kubernetes constructs such as ClusterRoles, service accounts, network policies, and ingress controllers that the chart generates automatically when `mode=control` or `mode=unified`.
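As a rough illustration, a control plane cluster can be installed with a values override along these lines. `global.plane.mode` and `global.postgresqlEnabled` are described above; the `global.baseDomain` key and the example domain are assumptions included only to show where the ingress hostnames come from.

```yaml
# Illustrative values override for a control plane install.
# global.plane.mode and global.postgresqlEnabled are described above;
# global.baseDomain and the example domain are assumptions.
global:
  baseDomain: apc.example.com    # backs app.<base-domain>, houston.<base-domain>, and so on
  plane:
    mode: control                # deploys only the control plane components listed above
  postgresqlEnabled: false       # split installs typically point Houston at an external database
```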
Network Surface
Control plane ingress endpoints typically include:
- `app.<base-domain>`: APC web interface (UI) for administrators and workspace users.
- `houston.<base-domain>`: API traffic for the UI, Astro CLI, and deployment orchestrator (Commander) callbacks.
- `alertmanager.<base-domain>` and `prometheus.<base-domain>`: Optional; only needed if you expose those dashboards.
- `registry.<base-domain>`: Used when you host the integrated container registry on the control plane. In a split deployment the registry generally lives on the data planes; expose or disable it depending on your design.
Control plane pods also reach out to each data plane's deployment orchestrator (Commander) and Prometheus federation endpoints, authenticating with tokens that Houston issues.
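If you restrict egress from the control plane cluster, those outbound paths need explicit allowances. The following NetworkPolicy is only a sketch: the namespace, CIDR, and port are placeholder assumptions to replace with your own data plane addresses and the ports your Commander and federation endpoints actually use.

```yaml
# Hypothetical policy allowing control plane pods to reach a data plane's
# Commander and Prometheus federation endpoints. All values are placeholders.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-dataplane   # hypothetical policy name
  namespace: astronomer             # control plane platform namespace (assumed)
spec:
  podSelector: {}                   # applies to all pods in the platform namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.20.0.0/16      # placeholder data plane cluster CIDR
      ports:
        - protocol: TCP
          port: 443                 # TLS port for Commander and federation endpoints
```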
Connection to Data Planes
Each data plane is registered with the control plane through Houston. After registration:
- The deployment orchestrator (Commander) in each data plane receives desired-state payloads from the control plane Houston, applies Helm releases for each Airflow deployment, and posts status updates.
- The secret distribution job (Config Syncer), a data plane cronjob, pushes tokens that Houston maintains into the Airflow namespaces.
- Data plane Prometheus remote-writes into the control plane Prometheus or exposes a federate endpoint that the control plane Prometheus scrapes (illustrated in the sketch after this list).
- Optional logging plane components such as the log forwarder (Vector) and log store (Elasticsearch) ship logs upward or to an external destination.
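As an illustration of the federation path, the control plane Prometheus can scrape a data plane's `/federate` endpoint with a job along these lines. APC renders its own federation configuration, so treat the job name, match expression, token path, and target host as placeholders:

```yaml
# Illustrative federation job; APC generates its own version of this,
# so every value here is a placeholder.
scrape_configs:
  - job_name: dataplane-federate
    metrics_path: /federate
    scheme: https
    params:
      'match[]':
        - '{job=~".+"}'            # pull all series; narrow this in practice
    bearer_token_file: /etc/prometheus/dataplane-token   # token issued by Houston (path assumed)
    static_configs:
      - targets:
          - prometheus.dataplane.example.com   # data plane federation endpoint (placeholder)
```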
Unified Mode Comparison
Setting `global.plane.mode` to `unified` deploys both control plane and data plane components into the same cluster. That configuration is helpful for testing or small environments, but it loses the strict separation and isolation you gain with split deployments. For a detailed comparison, see the accompanying Unified Architecture page.
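For reference, the only change from the earlier values sketch is the plane mode:

```yaml
# Single cluster hosting both control plane and data plane roles.
global:
  plane:
    mode: unified
```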
Next Steps
- Deploy the control plane by following the Install Control Plane guide.
- Provision data plane clusters and register them via the Astro UI.
- Review network policies and firewall rules to secure traffic between planes.
- Configure monitoring and alert policies using Prometheus and the alert routing service (Alertmanager).