Breaking changes and removals

As part of the Astro Private Cloud 1.0 release (upgrading from Astronomer Software 0.x), several features have been removed or replaced, resulting in breaking changes. This page outlines each change, the user impact, and recommended migration steps to ensure a smooth upgrade.

Removal of Prometheus Exporters (nodeExporterEnabled, blackboxExporterEnabled)

Removed

  • nodeExporterEnabled
  • blackboxExporterEnabled

Background

Previously, these flags controlled the automatic deployment of Node Exporter and Blackbox Exporter for Prometheus metrics collection.

Why

These exporters generated large volumes of metrics that were often unused, increasing Prometheus data size and impacting performance. To improve efficiency, they are no longer provisioned by default.

User Impact

  • Node Exporter and Blackbox Exporter will not be installed automatically.
  • Users who rely on these exporters must now deploy them manually as optional add-ons.

Migration Guide

  1. Remove the flags nodeExporterEnabled and blackboxExporterEnabled from your values.yaml.
  2. If required, deploy Prometheus Node Exporter or Blackbox Exporter manually via Helm or your observability stack.
  3. Update your Prometheus scrape configuration to include the endpoints of any exporters you deploy yourself, as shown in the example below.
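
If you deploy either exporter yourself, you also need to tell Prometheus where to find it. The following scrape job is a minimal sketch assuming a self-managed Node Exporter running in a monitoring namespace; the job name, namespace, and relabeling rule are assumptions that must match your own deployment.

  # Hypothetical extra scrape job for a self-managed Node Exporter.
  # Adjust the namespace and endpoint name to match your installation.
  scrape_configs:
    - job_name: "node-exporter"
      kubernetes_sd_configs:
        - role: endpoints
          namespaces:
            names: ["monitoring"]
      relabel_configs:
        - source_labels: [__meta_kubernetes_endpoints_name]
          action: keep
          regex: ".*node-exporter.*"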

NATS Streaming (STAN) → JetStream

Starting in v1.0.0, the default messaging backend has been migrated from NATS Streaming (STAN) to NATS JetStream, since STAN is now deprecated and no longer actively maintained. JetStream is the recommended replacement, providing improved scalability, reliability, and native support in the NATS ecosystem.

Impact

  • Any workloads or integrations using STAN-specific clients or APIs will need to migrate to JetStream-compatible clients.
  • STAN persistence and cluster configuration options are no longer available.
  • Message storage and delivery semantics are now managed via JetStream streams and consumers, which differ from STAN channels and subscriptions.

Required Action for Customers

  • If you were relying only on the default eventing pipeline provided by the platform (no direct STAN client usage), no action is needed.
  • If you had applications directly using STAN clients, you must:
    1. Update your applications to use JetStream clients (see the NATS JetStream docs and the sketch after this list).
    2. Replace STAN channels with JetStream streams and consumers.
    3. Verify persistence, delivery guarantees, and retention policies are configured as expected in JetStream.
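
If your applications talked to STAN directly, the core change is that channels become streams and subscriptions become consumers. The snippet below is a minimal sketch using the Python client (nats-py) as one example; the connection URL and the stream, subject, and durable names are assumptions, and other client libraries expose an equivalent JetStream API.

  # Minimal JetStream publish/consume sketch with nats-py (illustrative names).
  import asyncio
  import nats

  async def main():
      nc = await nats.connect("nats://localhost:4222")
      js = nc.jetstream()

      # A JetStream stream replaces a STAN channel as the unit of persistence.
      await js.add_stream(name="events", subjects=["events.>"])
      await js.publish("events.created", b"payload")

      # A durable pull consumer replaces a STAN durable subscription.
      sub = await js.pull_subscribe("events.>", durable="worker")
      for msg in await sub.fetch(1):
          print(msg.data)
          await msg.ack()

      await nc.close()

  asyncio.run(main())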

Rollout Recommendation

  • Deploy v1.0.0 in a staging environment first.
  • Verify that applications previously consuming from STAN channels can successfully consume from the equivalent JetStream streams/consumers.
  • Validate that message delivery guarantees (at-most-once, at-least-once) align with your workloads.
  • After validation, proceed with production upgrade.

Removal of SingleNamespace Mode

  • Removed: All SingleNamespace logic from Helm charts and APIs.
  • Background: SingleNamespace mode was a proposed feature that allowed multiple Airflow deployments to run in the same namespace as platform components.
  • Why: It was never fully supported and has now been removed entirely.
  • User Impact: Deployments relying on SingleNamespace will fail to run. Standard multi-namespace deployments are required.
  • Migration Guide:
    1. Move Airflow deployments to their own namespaces.
    2. Update automation/scripts referencing shared namespaces.
    3. Validate that platform and Airflow components run in isolated namespaces.

Removal of Astronomer Units (AU) and Standardization of CPU/Memory Settings

  • Starting in v1.0.0, the AU (Astronomer Unit) abstraction for resource configuration has been removed.
  • Resource management for Airflow components is now standardized using granular CPU and memory settings.

Removed

  • AU as a measure of resources.

Replacement

  • Standardized CPU and memory resource requests/limits for all Airflow components.

Why

  • Provides more clarity and flexibility in resource management.
  • Aligns with Kubernetes-native resource definitions.

User Impact

  • Any references to AU will no longer be valid.
  • Resource requests/limits must now be set explicitly using CPU and memory values.

Migration Guide

  • No manual conversion is required: existing AU settings will be automatically translated into their equivalent CPU and memory configurations during upgrade.
  • For new or updated deployments, configure resource requests/limits directly with CPU and memory.
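
As a rough illustration, a component that was previously sized in AUs is now expressed with plain Kubernetes requests and limits. The snippet below is a sketch only; the key names and where they live in your deployment configuration are assumptions, so consult the Astro Private Cloud reference for the authoritative schema.

  # Illustrative only: Kubernetes-style CPU/memory settings for the scheduler.
  # Key names and nesting are assumptions; check the platform reference.
  scheduler:
    resources:
      requests:
        cpu: "500m"
        memory: "1Gi"
      limits:
        cpu: "1"
        memory: "2Gi"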

Replacement of Create/Update Deployment Mutations

Starting in v1.0.0, the createDeployment and updateDeployment GraphQL mutations have been removed. These mutations have been replaced with a single upsertDeployment mutation.

Removed

  • createDeployment
  • updateDeployment

Replacement

  • upsertDeployment

Why

  • Consolidates deployment creation and updates into a single, atomic operation.
  • Simplifies the API and improves consistency for deployment management.

User Impact

  • Any client or integration using the old mutations will no longer work.
  • All deployment creation and updates must now use the upsertDeployment mutation.
  • Existing scripts, SDKs, or automation tools calling createDeployment or updateDeployment must be updated.

Migration Guide

  1. Identify any existing usage of the createDeployment or updateDeployment mutations.
  2. Replace calls with the upsertDeployment mutation (see the example after this list).
  3. Validate that deployment creation and update workflows function correctly with the new mutation.
  4. Update any SDKs, scripts, or automation tools accordingly.
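
For orientation, an upsert call looks roughly like the sketch below. The argument and return field names are assumptions made for illustration; introspect the Houston GraphQL schema in your installation for the exact signature.

  # Illustrative only: argument and field names are assumptions.
  mutation {
    upsertDeployment(
      workspaceUuid: "<workspace-uuid>"
      label: "etl-pipeline"
    ) {
      id
      label
      releaseName
    }
  }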

Fluentd → Vector for DaemonSet-based Logging

Starting in v1.0.0, the default logging DaemonSet has been migrated from Fluentd to Vector. This provides significant improvements in performance (lower CPU/memory footprint), reliability, and a simpler configuration model.

Impact

  • Any customizations made to the Fluentd DaemonSet (custom plugins, filters, config changes) will not automatically migrate.
  • The container image has changed from fluent/fluentd to timberio/vector.
  • Log processing and forwarding are now handled using Vector’s configuration model, which differs from Fluentd’s configuration syntax.

Required Action for Customers

  • If you were relying only on the default log collection pipeline (container logs with Kubernetes metadata → default sink), no action is needed. Logs will continue to be collected and shipped.
  • If you had custom Fluentd configurations, you must:
    1. Translate them into equivalent Vector configuration (see the Vector docs and the sketch after this list).
    2. Provide these overrides via the DaemonSet’s ConfigMap.
  • Validate that your logging backend (e.g., Elasticsearch, Loki, S3, Kafka) is still receiving logs as expected after upgrade.
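
As a starting point for translating a custom Fluentd pipeline, a Vector configuration is declared as sources, optional transforms, and sinks. The snippet below is a minimal sketch of a kubernetes_logs source feeding an Elasticsearch sink; the component names, endpoint, and index pattern are assumptions, as is the way the override is wired into the DaemonSet's ConfigMap, so adapt it to your environment.

  # Illustrative Vector configuration: adjust names, endpoint, and index to your setup.
  sources:
    kubernetes_logs:
      type: kubernetes_logs
  sinks:
    elasticsearch_out:
      type: elasticsearch
      inputs: ["kubernetes_logs"]
      endpoints: ["https://elasticsearch.example.com:9200"]
      bulk:
        index: "vector-%Y.%m.%d"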

Rollout Recommendation

  • Deploy v1.0.0 in a staging environment first.
  • Compare log volume, metadata, and formatting between Fluentd (pre-upgrade) and Vector (post-upgrade).
  • After verification, proceed with production upgrade.

Removal of enableHoustonInternalAuthorization Flag

Starting in v1.0.0, the enableHoustonInternalAuthorization flag has been removed. Authentication and authorization are now gated by installation mode.

Removed

  • enableHoustonInternalAuthorization flag

Replacement / New Behavior

  • Unified Mode: Internal authorization is automatically enforced at the cluster level.
  • Control Plane / Data Plane (CP/DP) Mode: NGINX ingress-based authentication is used for access control.

Why

  • Simplifies the configuration by standardizing authorization behavior per installation mode.
  • Reduces configuration errors and improves security posture.

User Impact

  • Any references to the enableHoustonInternalAuthorization flag will no longer have any effect.
  • Customers upgrading must rely on the default authorization behavior for their installation mode.
  • Custom setups that previously relied on toggling this flag may need to adjust access control configurations according to the new mode.

Migration Guide

  1. Remove any usage of the enableHoustonInternalAuthorization flag from configuration files.
  2. For Unified Mode, ensure cluster-level access policies meet your security requirements.
  3. For Control Plane / Data Plane (CP/DP) Mode, configure NGINX ingress authentication appropriately (a generic illustration follows this list).
  4. Validate that all users and services can access the platform as expected after upgrade.
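
For orientation, NGINX ingress-based authentication in Kubernetes is usually expressed through external-auth annotations on an Ingress resource; the platform is expected to manage this for its own ingresses. The sketch below is a generic illustration only, useful if you maintain additional custom ingresses, and every name and URL in it is a placeholder rather than a value shipped by Astro Private Cloud.

  # Generic ingress-nginx external-auth example; all values are placeholders.
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: custom-service
    annotations:
      nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/oauth2/auth"
      nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.com/oauth2/start?rd=$escaped_request_uri"
  spec:
    ingressClassName: nginx
    rules:
      - host: custom-service.example.com
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: custom-service
                  port:
                    number: 8080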

Removal of sysAdminScalabilityImprovementsEnabled Flag

Removed

  • sysAdminScalabilityImprovementsEnabled

Background

This flag enabled a series of internal optimizations to improve scalability and API performance in environments with a large number of users and deployments.

Why

The underlying improvements are now part of the platform’s core logic and no longer require feature gating. This simplifies configuration and ensures consistent performance across all environments.

User Impact

  • The flag is no longer required.
  • No configuration change is needed unless your CI/CD pipelines explicitly set this flag.

Migration Guide

  1. Remove the flag sysAdminScalabilityImprovementsEnabled from your Helm values or environment configuration.
  2. Redeploy the platform — the optimizations are applied automatically.

Removal of Unsupported Runtime Allowance (enableSystemAdminCanUseNonSupportedRuntime)

Removed

  • enableSystemAdminCanUseNonSupportedRuntime

Background

This flag previously allowed system administrators to deploy Airflow runtimes that were not officially supported by Astronomer.

Why

To ensure reliability, maintainability, and supportability, the platform now strictly enforces supported runtime versions.

User Impact

  • System administrators will no longer be able to provision unsupported Airflow runtimes.
  • Deployments using unsupported runtimes will fail validation during creation or upgrade.

Migration Guide

  1. Identify any deployments running an unsupported runtime, for example with:
    $ astro deployment inspect
  2. Upgrade any affected deployments to a supported runtime version before upgrading the platform.

Removal of updateServiceAccount GraphQL Mutation

Starting in v1.0.0, the legacy GraphQL mutation updateServiceAccount has been removed.

Removed

  • updateServiceAccount (GraphQL mutation)

Replacement

  • Use one of the following mutations, depending on the scope of the service account you want to update:
    • updateSystemServiceAccount (for system-level service accounts)
    • updateWorkspaceServiceAccount (for workspace-level service accounts)
    • updateDeploymentServiceAccount (for deployment-level service accounts)

Why

  • Scoped mutations improve security and make it clearer which service account you are targeting for updates.

User Impact

  • Any API calls, scripts, or CI/CD automations that use updateServiceAccount will fail after upgrading to v1.0.0.
  • You must update all API interactions to use the correct scoped mutation.

Migration Guide

  1. Look for any queries, scripts, or API calls that use updateServiceAccount.
  2. Depending on the level of the service account you want to update, use:
    • updateSystemServiceAccount
    • updateWorkspaceServiceAccount
    • updateDeploymentServiceAccount
    3. Replace all uses of the legacy mutation with the appropriate scoped mutation (see the example after this list).
  4. After making changes, verify that your service account update functionality works as expected.
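
For example, a workspace-level update would look roughly like the sketch below; the argument and payload field names are assumptions made for illustration, so introspect the Houston GraphQL schema for the exact signature of each scoped mutation.

  # Illustrative only: argument and field names are assumptions.
  mutation {
    updateWorkspaceServiceAccount(
      serviceAccountUuid: "<service-account-uuid>"
      payload: { label: "ci-deployer" }
    ) {
      id
      label
    }
  }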