Astro Private Cloud unified architecture

Astro Private Cloud (APC) can run in unified mode, where control plane and data plane components coexist in the same Kubernetes cluster. Unified installations are easy to bootstrap, but sacrifice the isolation and scaling benefits of the dedicated planes used in split mode. This document explains which components run when you enable global.plane.mode: unified, how those components interact, and when unified mode makes sense.

Responsibilities in unified mode

Because control and data responsibilities share one cluster in unified mode, the following characteristics apply:

  • Management and execution share infrastructure: The Houston API platform management service, the Astro UI web interface, the Commander Deployment orchestrator, ingress controllers, logging pipelines, and metrics components all run together.
  • Reduced cross-cluster networking: Commander communicates with Houston locally, and Prometheus, the central metrics store, scrapes every target in-cluster without federation.
  • Shared security surface: Ingress endpoints for the Astro UI and Houston API (app.<base-domain>, houston.<base-domain>, deployments.<base-domain>, etc.) and registry/logging endpoints all originate from the same cluster.
  • Simplified operations for sandbox/test: Backups, upgrades, and monitoring target a single cluster.

Unified mode is ideal for proofs of concept, lab environments, or small teams that need APC features without running multiple clusters.

Components enabled in unified mode

When you enable global.plane.mode: unified in the values.yaml file, APC enables both the control plane and data plane Helm charts, which include the following services:

  • Control plane services: The web interface (Astro UI), the platform management service (Houston API), scheduled jobs, the event streaming broker (NATS JetStream), the alert routing service (Alertmanager), control-plane NGINX ingress, registry token jobs, and the central metrics store (Prometheus).
  • Data plane services: The deployment orchestrator (Commander), the secret distribution job (Config Syncer), data plane NGINX ingress, the platform registry (Registry), the log forwarder daemonset (Vector), the log store (Elasticsearch, if enabled), the cluster state exporter (kube-state-metrics), metrics gateway endpoints (Prometheus federation/auth) used locally, and namespace-pool helpers.
  • Shared services: Optional Postgres/pgBouncer (for Houston), base Prometheus StatefulSet, Airflow CRDs.
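In values terms, the switch described above is a single setting. A minimal sketch (global.plane.mode is the documented switch; the base-domain key shown here is an illustrative placeholder, so check your chart's values reference for the exact name):

```yaml
# values.yaml (sketch): run control plane and data plane in one cluster.
global:
  # Documented switch for a unified installation.
  plane:
    mode: unified
  # Illustrative placeholder: the base domain from which hostnames such as
  # app.<base-domain>, houston.<base-domain>, and deployments.<base-domain>
  # are derived.
  baseDomain: example.internal
```

Because this one flag pulls in both Helm charts, the same values file also carries all control plane and data plane configuration for the cluster.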

Because everything runs in the same cluster, you must size and secure it appropriately to handle both management workloads and tenant Airflow namespaces.

Network and DNS footprint

Unified clusters expose the combined set of control plane and data plane ingress hostnames, commonly:

  • app.<base-domain>: Astro UI.
  • houston.<base-domain>: Houston API for UI, CLI, Commander callbacks.
  • deployments.<base-domain>: Ingress route for Airflow UIs.
  • registry.<base-domain>: Platform container registry (Registry) if enabled.
  • alertmanager.<base-domain> and prometheus.<base-domain>: Optional dashboards.
  • <base-domain> (optional): Redirect to app.<base-domain>.

Because both planes run in the same cluster, no cross-cluster TLS trust is required: a single certificate that covers these hostnames is sufficient.
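If you automate certificates with cert-manager (an assumption; any issuer that supports multiple subject alternative names works), a single Certificate covering the unified hostnames might look like the following sketch. The namespace, issuer name, and example.internal domain are all illustrative:

```yaml
# Sketch: one certificate covering all unified-cluster hostnames.
# Assumes cert-manager is installed and a ClusterIssuer named
# "internal-ca" already exists (both are assumptions).
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: apc-unified-tls
  namespace: astronomer        # illustrative namespace
spec:
  secretName: apc-unified-tls  # TLS secret the ingress controllers reference
  issuerRef:
    name: internal-ca
    kind: ClusterIssuer
  dnsNames:
    - app.example.internal
    - houston.example.internal
    - deployments.example.internal
    - registry.example.internal
    - alertmanager.example.internal
    - prometheus.example.internal
```

A wildcard name such as *.example.internal is an alternative when your issuer permits it, at the cost of a broader trust scope.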

Operational implications

Aspect | Unified mode | Split mode
Cluster count | 1 | 2+
Blast radius | Management and execution share a failure domain | Control plane isolated from data plane failures
Scaling | Cluster must scale to meet both UI and Airflow workload demand | Each plane scales independently
Upgrades | Single maintenance window, but downtime affects both responsibilities | Control plane upgrades isolated from data plane workloads
Network security | Fewer external egress rules | Requires firewall rules between control plane and data planes
Recommended for | Test environments, small teams, early POCs | Production workloads needing higher isolation

Transitioning to split mode

Many organizations start in unified mode and later migrate to split mode for isolation or scalability. Plan ahead for that migration by:

  • Using external Postgres and registry storage so migrating the control plane does not require moving data twice.
  • Keeping DNS hostnames consistent with the future split design, for example app.<base-domain> and deployments.<base-domain>, so you can later point records at different clusters.
  • Managing configuration through values files or automation so you can reproduce control plane settings in a dedicated cluster.
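As a sketch of these points, a values layout that externalizes state and keeps hostnames stable might look like this. Apart from global.plane.mode, the key names below are illustrative assumptions, so verify them against your chart's values reference before use:

```yaml
# Sketch: unified values written so the control plane can later move to
# its own cluster. Key names other than global.plane.mode are illustrative.
global:
  plane:
    mode: unified                # flips to dedicated planes after the split
  baseDomain: example.internal   # keep app./deployments. names stable so DNS
                                 # records can later point at different clusters
# Illustrative: disable in-cluster Postgres so Houston's database lives
# outside the cluster and does not have to move with the control plane.
postgresql:
  enabled: false
```

Keeping this file in version control, or applying it through automation, makes reproducing the control plane settings in a dedicated cluster a matter of re-running the same install with a different plane mode.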