Astro Private Cloud release notes
This document contains release notes for each version of Astro Private Cloud (APC).
Version 2.0 is the latest long-term support (LTS) version of Astro Private Cloud. To upgrade to version 2.0, see Upgrade Astro Private Cloud. For more information about APC release channels, see Release and lifecycle policies. To read release notes specifically for the Astro CLI, see Astro CLI release notes.
Because Astronomer has separate maintenance life cycles for each minor version of APC, the same change can be introduced multiple times across minor versions, resulting in multiple identical release notes. When a new minor version is released, such as version 1.1.0, it includes all changes from previously released versions.
If you’re upgrading to receive a specific change, ensure the release note for the change appears either:
- Within your target minor version.
- In a patch version that was released before the first release of your target minor version.
2.0.0
Release date: May 1, 2026
Astro Private Cloud 2.0 introduces three key new capabilities: Data Plane Disaster Recovery for business continuity across clusters, Config Governance for fine-grained control of APC and Airflow configuration, and Control Plane Audit Logging for compliance-grade activity tracking.
New features
- Data Plane Disaster Recovery: Astro Private Cloud now offers cross-cluster and cross-region data plane disaster recovery. You can trigger a failover of all Airflow Deployments in a data plane from a source cluster to a destination cluster with a single action through the Astro Private Cloud API or GUI. Two modes are supported: Controlled, which drains in-flight tasks before promoting the destination (for planned maintenance or migrations), and Forced, which promotes the destination immediately when the source is unreachable (for outages).

  Data Plane DR enables business continuity, helps meet regulatory requirements (DORA, OCC, MAS, APRA) that mandate a recovery path for production workloads, and replaces lengthy manual rebuilds with an automated platform operation. See Data plane failover.
- Config Governance: You can now turn APC and Airflow features on or off at each level of the APC data model: platform, data plane, Workspace, and Airflow Deployment. Platform administrators set defaults at the platform level and can optionally allow Workspace and Deployment owners to override settings without impacting other Workspaces or Deployments.

  Config Governance addresses the problem that different teams, use cases, and workloads require different feature sets, configurations, and guardrails. With Config Governance, a single Astro Private Cloud installation can manage everything from a locked-down regulated workload to a permissive experimental Deployment, and lets Deployment owners fine-tune within the bounds their administrator has set.
- Control Plane Audit Logging: The Control Plane now emits structured audit events for authentication, authorization, and Airflow Deployment lifecycle actions, including login and logout, permission and role changes, service account creation and revocation, and Deployment create, update, and delete operations. Events are written to the platform log stream alongside existing Control Plane logs, so they can be exported to your existing SIEM or log aggregation tool.

  Audit logging is a baseline requirement for SOC 2, ISO 27001, PCI-DSS, and the internal controls used by banking and insurance customers. Control Plane audit logs, together with the already-available Airflow Deployment audit logs, give compliance and security teams a single, structured source of truth for who did what, when.
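To illustrate what a structured audit event can look like when exported to a SIEM, here is a hypothetical sketch of a role-change event. The field names and values are illustrative only and are not taken from the APC event schema; consult the audit logging documentation for the actual format.

```yaml
# Hypothetical Control Plane audit event; field names are illustrative
# and do not reflect the actual APC event schema.
timestamp: "2026-05-04T14:32:10Z"
category: authorization          # e.g. authentication | authorization | deployment-lifecycle
action: role.update
actor: jane.doe@example.com
target:
  workspace: data-engineering
  role: WORKSPACE_ADMIN
outcome: success
```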
Additional improvements
- Cluster-level metric collection: Node exporter and cAdvisor can now be enabled through built-in Helm flags (`global.nodeExporter.enabled` and `global.cadvisor.enabled`), both disabled by default. When enabled, four additional panels appear on the Metrics tab for each Deployment in the APC UI: CPU Usage, Memory Usage, Network Rx, and Network Tx. Cluster-level metric collection requires a ClusterRole; the feature is disabled by default so that the default Astro Private Cloud configuration does not require one. See Deployment metrics.
- Git-sync relay operational metrics: Deployments that use the git-sync relay for DAG deployment can now emit StatsD metrics for sync operations, repository size, and sync performance, giving per-Deployment visibility into git-sync throughput and latency. This feature is disabled by default. See Git-sync relay metrics.
- Data Plane metadata includes custom logging state: The `dataPlane` metadata response now includes the state of `customLogging.enabled`, making it visible whether custom logging is active for a given Data Plane without requiring a separate configuration lookup.
- Strict schema validation for Helm configuration: APC 2.0 introduces a strict schema validator that rejects unknown or misspelled keys in `values.yaml` before they reach the cluster, replacing the previous silent-ignore behavior. This catches configuration mistakes at install or upgrade time instead of surfacing them as runtime failures. The validator can be temporarily disabled per Deployment when a newer feature hasn't yet been added to the schema (for example, git-sync relay metrics in the initial 2.0 release).
- Direct upgrade path from Astronomer Software 0.37 to APC 2.0: Customers still running Astronomer Software 0.37 can now upgrade directly to APC 2.0 without an intermediate hop through APC 1.0. See Upgrade 0.37 to 2.0.
- Configurable Prometheus scrape interval and timeout for federated data planes: The `scrape_interval` for the `federated-dataplanes` job is now configurable, and a new `scrape_timeout` setting has been added. This reduces Prometheus load in Control Plane/Data Plane installations running a large number of Deployments, which were previously limited to a fixed 5s scrape interval.
- All images available in Azure Container Registry, as well as Quay and Docker Hub: The APC image set is now mirrored to Azure Container Registry (ACR) alongside Quay and Docker Hub, simplifying installs in Azure-restricted environments.
- Containerd 2.0 is now supported: The DaemonSet that manages self-signed certificate trust for containerd has been updated for GKE 1.33+ (containerd 2.0), and the private CA install instructions now work on the latest GKE versions.
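The cluster-level metric collection flags described above are set in `values.yaml`. The `global.nodeExporter.enabled` and `global.cadvisor.enabled` keys are named in this release; the federated data plane scrape settings shown alongside them are an illustrative placement only, so check the configuration reference for the exact key paths.

```yaml
# Enable cluster-level metric collection (requires a ClusterRole;
# both flags default to false).
global:
  nodeExporter:
    enabled: true
  cadvisor:
    enabled: true

# Illustrative placement of the new federated-dataplanes scrape settings;
# the exact key paths may differ from this sketch.
prometheus:
  federatedDataplanes:
    scrapeInterval: 30s
    scrapeTimeout: 10s
```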
Bug fixes
- Fixed an issue where the scheduler and triggerer were not scaled down during an Airflow version downgrade, allowing core components to continue running while the database migration was triggered.
- Fixed an issue where navigating directly to a Deployment URL appended the entire URL to the end of the path, resulting in a broken redirect. The Airflow UI is now accessible when using Deployment URLs directly.
- Fixed an issue where the `paginatedDeployRevisions` GraphQL query returned an `INTERNAL_SERVER_ERROR` on the Deploy History page, and the release name appeared as `null`.
- Fixed an issue where git-sync Deployments failed to become healthy because the git-sync container used deprecated environment variable names.
- Fixed an issue where the Grafana plugin failed to initialize because the root filesystem was read-only and dashboard YAML files were missing from the image.
- Fixed an issue where Workspace Users could create new Workspaces through the UI despite the UI displaying a permission denied error.
- Fixed an issue where the UI version tooltip displayed an incorrect version after a Helm upgrade due to a version mismatch between the root and sub-chart `Chart.yaml` files.
- Fixed an issue where each Airflow task log line appeared twice in the UI when using KubernetesExecutor with Elasticsearch logging via Vector, caused by both log pipelines consuming the same source and writing to the same sink.
- Fixed an issue where the Elasticsearch host configuration was not automatically generated from data plane metadata, causing the Airflow chart-level property to incorrectly override the Astronomer-owned logging configuration.
- Fixed an issue where documentation links in the APC UI pointed to a broken URL.
- Fixed an issue where hard-deleting a Deployment did not clean up the associated Airflow database and roles in Postgres, blocking the recreation of a Deployment with the same release name.
- Fixed an issue where errored KubernetesExecutor worker Pods were not automatically cleaned up, causing Pods to accumulate in the namespace. Errored worker Pods are now cleaned up by default, with the option to override this behavior through Deployment-level variables.
- Fixed an issue where `dag_processor_manager` and DAG parsing logs from the scheduler were not exported to Elasticsearch, preventing operators from filtering these logs in Kibana for root cause analysis.
- Fixed an issue where the dag server did not support mounting private CA certificates, causing deploy revision updates to fail with certificate errors in environments with private CAs enabled.
- Fixed an issue where git-sync sidecars in DaemonSet logging Deployments emitted JSON logs with numeric `level` fields, causing Elasticsearch `document_parsing_exception` errors and breaking task log retrieval in the Airflow UI.
- Fixed an issue where the APC API `update-runtime-check` job did not honor HTTP/HTTPS proxy environment variables, preventing the job from reaching the runtime updates endpoint in proxy-configured environments.
- Fixed an issue where expected error messages in the APC API were logged at the `ERROR` level, creating confusion during incident triage.
- Fixed an issue where the Airflow UI redirected to an incorrect OAuth URL after SSO authentication when accessed directly via the Deployment URL, appending the Airflow URL to the OAuth callback path.
- Fixed an issue where Prometheus recording rules for container CPU and memory metrics silently failed due to `kube_pod_info` emitting duplicate `node` labels during Pod scheduling transitions, causing data gaps in Grafana dashboards.
- Fixed an issue where the Deployment configuration UI accepted Scheduler and Worker CPU values that were not multiples of 100 mCPU, which are invalid for Kubernetes-backed Deployments. The UI now displays a banner prompting users to enter CPU resources in multiples of 100 mCPU.
Breaking changes
- Config Governance schema change: The schema used to enable and disable features in `values.yaml` has changed to support the new Platform → Data Plane → Workspace → Deployment hierarchy. In-place upgrades from APC 1.x and 0.37 are supported through migration scripts that translate existing configurations to the new schema. Review the migration guide before upgrading.
- `astroRuntimeEnabled` flag removed: Astro Runtime is now the only supported runtime for Airflow Deployments (Astronomer Certified has been deprecated). The `astroRuntimeEnabled` flag has been removed from houston-api, the APC UI, and the APC Helm charts. Any deployment configs, automation, or GitOps workflows that explicitly set `astroRuntimeEnabled` must remove that field before upgrading.
- Default cluster name simplified: Clusters auto-created during a 1.x install or upgrade are now named `default` instead of `default-populated-by-db-bootstrapper`. Existing clusters aren't renamed, but any scripts, IaC, or external tooling that references the previous name should be updated before upgrading.
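To make the new hierarchy concrete, a feature toggle under the changed schema might be layered roughly as follows. This is a hypothetical sketch of the Platform → Data Plane → Workspace → Deployment layering only, not the actual APC 2.0 schema; run the provided migration scripts and consult the migration guide for the real key names.

```yaml
# Hypothetical sketch of the hierarchical feature-toggle schema;
# all key names here are illustrative, not the actual APC 2.0 schema.
features:
  gitSyncRelay:
    enabled: true          # platform-level default
    allowOverride: true    # Workspace/Deployment owners may change it
overrides:
  dataPlanes:
    dp-eu-west:
      features:
        gitSyncRelay:
          enabled: false   # a locked-down data plane opts out
```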
Known issues
- Data plane failover may stall if Commander is unavailable for an extended period during a failover: If the Commander component in the Data Plane Kubernetes cluster becomes unavailable for approximately 4–5 minutes during an in-progress failover, the failover can enter a stalled state even after Commander recovers. In this state, some Deployments may not progress, either failing to clean up on the source cluster or failing to come up on the destination cluster. This issue is only observed when Commander itself is unavailable; it does not occur when the entire Data Plane Kubernetes cluster is down. Workaround: Restart the Pilot Deployment in the cluster where Commander was unavailable (either the source or destination cluster, depending on which side experienced the Commander outage). The failover will then resume and complete, marking itself as either successful or failed.
Resolved CVE list
- CVE-2024-45337
- CVE-2025-15467
- CVE-2025-62718
- CVE-2025-68121
- CVE-2026-4519
- CVE-2026-24051
- CVE-2026-27140
- CVE-2026-27144
- CVE-2026-28387
- CVE-2026-28388
- CVE-2026-28389
- CVE-2026-28390
- CVE-2026-29181
- CVE-2026-29785
- CVE-2026-32280
- CVE-2026-32281
- CVE-2026-32282
- CVE-2026-32283
- CVE-2026-32829
- CVE-2026-32887
- CVE-2026-33186
- CVE-2026-33540
- CVE-2026-33810
- CVE-2026-34986
- CVE-2026-35172
- CVE-2026-39883
- CVE-2026-40175
- CVE-2026-41676
- CVE-2026-41678
- CVE-2026-41681
Product lifecycle, support, and compatibility matrix
- For Astro Private Cloud compatibility with Kubernetes, Postgres, and Astro Runtime versions, see Version compatibility.
- For the Astro Private Cloud support lifecycle, see Astro Private Cloud release and lifecycle policy.