Network configuration
This guide covers network configuration for Astro Private Cloud (APC), including ingress, proxy settings, private networking, and network policies.
Architecture
APC uses NGINX as its default ingress controller. For TLS certificate configuration, see TLS certificate management. To use a pre-existing ingress controller, see Use a third-party ingress controller.
DNS requirements
Before configuring ingress, create a wildcard DNS record pointing to the load balancer IP or hostname that NGINX will provision:
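For example, in zone-file notation (the domain, TTL, and load balancer address below are placeholders):

```
*.astro.example.com.   300  IN  A      203.0.113.10
; or, when the load balancer is addressed by hostname (for example an AWS NLB):
*.astro.example.com.   300  IN  CNAME  lb-1234.elb.example-cloud.com.
```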
APC creates the following endpoints from global.baseDomain. All endpoints are served over HTTPS and covered by a single wildcard TLS certificate for *.<baseDomain>.
Control plane endpoints
Airflow Deployment endpoints
Each Airflow Deployment gets a path-based URL under the deployments subdomain:
Data plane endpoints
In a split-plane deployment, both planes share the same global.baseDomain. The data plane uses global.plane.domainPrefix (a unique cluster identifier, for example dp01) to scope its endpoints under <domainPrefix>.<baseDomain>:
Both planes must use the same global.baseDomain. Auth tokens issued by the control plane Houston are scoped to .<baseDomain>, so a mismatched base domain causes authentication failures across planes.
Create a separate wildcard DNS record for the data plane:
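For example, with a domainPrefix of dp01 (hostnames below are placeholders):

```
*.dp01.astro.example.com.  300  IN  CNAME  dp-lb-5678.elb.example-cloud.com.
```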
The data plane creates the following endpoints:
Metadata endpoint
The /metadata path on <domainPrefix>.<baseDomain> is served by Commander and returns a JSON document. The control plane reads this endpoint to fetch data plane details, including:
- Kubernetes version and health status
- Commander and registry URLs
- Data plane chart version, cloud provider, and region
- Namespace pools configuration and namespace labels
- Elasticsearch configuration
- External secret manager configuration
This endpoint is required for the data plane to function correctly with the control plane.
Retrieve your load balancer IP or hostname after installing APC. For the data plane, run kubectl get svc -n <astronomer-namespace> -l component=dp-ingress-controller. For the control plane, run kubectl get svc -n <astronomer-namespace> -l component=cp-ingress-controller. Create the wildcard DNS record pointing to that address before users try to access the platform.
Ingress configuration
Load balancer
To expose the platform through a cloud load balancer, add the following to your values.yaml:
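A minimal sketch, assuming the chart exposes the NGINX service type under nginx.serviceType (verify the key name against your chart version's values reference):

```yaml
nginx:
  # Expose the NGINX ingress controller through a cloud load balancer
  serviceType: LoadBalancer
```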
Private load balancer
To provision an internal (non-public) load balancer, set privateLoadBalancer: true. APC automatically applies the appropriate annotation for AWS, GCP, and Azure:
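For example, assuming the flag lives under the nginx key as with the other NGINX settings in this guide:

```yaml
nginx:
  # APC applies the provider-specific internal load balancer annotation
  privateLoadBalancer: true
```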
To add custom annotations to the ingress service beyond the auto-applied cloud annotations, use ingressAnnotations:
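A hedged example; the annotation shown is an illustrative AWS value, and the certificate ARN is a placeholder:

```yaml
nginx:
  ingressAnnotations:
    # Example: terminate TLS at an AWS load balancer (ARN is a placeholder)
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:<region>:<account>:certificate/<id>"
```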
Custom ingress annotations
global.extraAnnotations applies annotations to all APC ingress resources — the platform UI, Houston API, registry, Grafana, Prometheus, Alertmanager, and Elasticsearch ingresses. Its behavior depends on whether you’re using the auth sidecar or the default NGINX ingress controller.
With global.authSidecar.enabled: true
All annotations under global.extraAnnotations are applied to every platform and Airflow ingress object, giving full control for environments using a bring-your-own ingress controller:
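A sketch of this configuration; the ingress class value is an illustrative example for a bring-your-own controller:

```yaml
global:
  authSidecar:
    enabled: true
  extraAnnotations:
    # Route all platform and Airflow ingresses through a third-party controller
    kubernetes.io/ingress.class: traefik
```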
For OpenShift route passthrough TLS:
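A minimal sketch using the standard OpenShift ingress-to-route termination annotation:

```yaml
global:
  extraAnnotations:
    # Pass TLS through to the backing service instead of terminating at the route
    route.openshift.io/termination: passthrough
```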
With default NGINX ingress
global.extraAnnotations is still respected, but you can’t override annotations that APC manages. Use this to add custom annotations globally without conflicting with platform defaults.
The following annotations are protected and can’t be overridden via global.extraAnnotations:
- kubernetes.io/ingress.class
- nginx.ingress.kubernetes.io/custom-http-errors
global.extraAnnotations applies to all ingress resources cluster-wide. nginx.ingressAnnotations applies only to the NGINX load balancer service. Use global.extraAnnotations for ingress routing behavior and nginx.ingressAnnotations for cloud load balancer configuration.
NodePort
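To expose NGINX on node ports instead of a load balancer, a sketch assuming the same nginx.serviceType key as in the load balancer example (verify against your chart version):

```yaml
nginx:
  # Expose NGINX on a port of each node instead of a cloud load balancer
  serviceType: NodePort
```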
Source IP preservation
By default, nginx.preserveSourceIP: false sets externalTrafficPolicy: Cluster on the NGINX service, which distributes traffic evenly across all nodes but replaces the original client IP with the node IP through SNAT.
Set preserveSourceIP: true to use externalTrafficPolicy: Local, which preserves the original client IP in Airflow logs and Astro UI audit logs:
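For example:

```yaml
nginx:
  # Sets externalTrafficPolicy: Local on the NGINX service,
  # keeping the original client IP visible to upstream components
  preserveSourceIP: true
```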
With externalTrafficPolicy: Local, traffic is only routed to nodes running an NGINX pod. This can cause uneven load distribution if NGINX pods aren’t spread across all nodes.
NGINX request limits
Adjust these values for environments with slow upstreams, large Dag bundles, or long-running API calls:
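A hedged sketch of the relevant values; proxyBodySize is named in this guide, but the timeout key names below are assumptions to verify against your chart's values reference:

```yaml
nginx:
  # Maximum request body size (limits Dag bundle uploads)
  proxyBodySize: "1024m"
  # Timeout key names below are assumed; confirm before applying
  proxyReadTimeout: 600
  proxySendTimeout: 600
  proxyConnectTimeout: 600
```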
The default proxyBodySize of 1024m limits Dag bundle upload sizes. Increase this value if you see 413 Request Entity Too Large errors.
Connect to public endpoints
Egress configuration
By default, Airflow Deployments can reach public endpoints. When network policies are enabled, you can restrict outbound traffic using Kubernetes NetworkPolicy resources. See Network policies.
Configure a proxy
APC’s Houston API reads proxy settings from environment variables on the Houston pod. Configure proxy settings using the houston.env Helm value in the astronomer subchart:
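For example (proxy hostname and port are placeholders):

```yaml
astronomer:
  houston:
    env:
      - name: HTTPS_PROXY
        value: "http://proxy.example.com:3128"
      - name: HTTP_PROXY
        value: "http://proxy.example.com:3128"
      - name: NO_PROXY
        value: ".svc.cluster.local,kubernetes.default.svc"
```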
Houston reads the proxy URL from the following environment variables, checked in order of precedence:
- HTTPS_PROXY, https_proxy, or GLOBAL_AGENT_HTTPS_PROXY for HTTPS requests
- HTTP_PROXY, http_proxy, or GLOBAL_AGENT_HTTP_PROXY for HTTP requests
NO_PROXY is respected automatically by the proxy agent. Include .svc.cluster.local to prevent internal Kubernetes service traffic from routing through the proxy, kubernetes.default.svc to ensure Commander can reach the Kubernetes API server without proxying, and .<baseDomain> if any platform components resolve each other using external DNS rather than cluster-internal DNS.
The GLOBAL_AGENT_* variants apply to HTTP libraries that use global-agent for proxy bootstrapping, in addition to the standard HTTP_PROXY and HTTPS_PROXY variables used by axios.
In a proxy environment, apply the same environment variables to every platform pod that makes outbound calls. Set the same variables on Commander and Astro UI using astronomer.commander.env and astronomer.astroUI.env respectively.
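For example, mirroring the Houston settings on both components (proxy address is a placeholder):

```yaml
astronomer:
  commander:
    env:
      - name: HTTPS_PROXY
        value: "http://proxy.example.com:3128"
      - name: NO_PROXY
        value: ".svc.cluster.local,kubernetes.default.svc"
  astroUI:
    env:
      - name: HTTPS_PROXY
        value: "http://proxy.example.com:3128"
```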
To verify that Houston is using the proxy, check the Houston pod logs for the following line:
Configure proxy for Airflow Deployments
Configure proxy settings for Airflow tasks by setting environment variables on each Airflow Deployment:
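For example, the following variables (values are placeholders) would route task traffic through the proxy while keeping in-cluster traffic direct:

```yaml
# Set on each Airflow Deployment as Deployment-level environment variables
HTTP_PROXY: "http://proxy.example.com:3128"
HTTPS_PROXY: "http://proxy.example.com:3128"
NO_PROXY: ".svc.cluster.local,kubernetes.default.svc"
```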
Private networking
VPC/private subnet access
To resolve private hostnames from pods, add hostAliases to the relevant component. Configuration differs between platform components and Airflow Deployment components.
Platform components
The following platform components support hostAliases: Prometheus, registry, Houston, Houston worker, and Commander:
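A sketch for two of the listed components; the key paths follow the astronomer subchart convention used elsewhere in this guide, and the IP and hostname are placeholders:

```yaml
astronomer:
  houston:
    hostAliases:
      - ip: "10.20.0.15"
        hostnames:
          - "db.internal.example.com"
  commander:
    hostAliases:
      - ip: "10.20.0.15"
        hostnames:
          - "db.internal.example.com"
```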
Airflow Deployment components
The following Airflow components support hostAliases: scheduler, API server, webserver, triggerer, and workers:
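A hedged sketch of per-Deployment Helm values; the component key paths are assumptions to verify against your Airflow chart's values reference, and the IP and hostname are placeholders:

```yaml
scheduler:
  hostAliases:
    - ip: "10.20.0.15"
      hostnames:
        - "db.internal.example.com"
workers:
  hostAliases:
    - ip: "10.20.0.15"
      hostnames:
        - "db.internal.example.com"
```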
VPN/direct connect
For AWS PrivateLink, Azure Private Link, or GCP Private Service Connect:
- Configure the private endpoint in your cloud provider.
- Create Kubernetes Service and Endpoints resources pointing to the private IP.
- Reference the service name in your Airflow connections.
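Step 2 can be sketched with standard Kubernetes resources; the names, namespace, and private IP below are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: private-db
  namespace: my-airflow-namespace
spec:
  ports:
    - protocol: TCP
      port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  # Must match the Service name so Kubernetes associates them
  name: private-db
  namespace: my-airflow-namespace
subsets:
  - addresses:
      - ip: 10.20.0.15   # private endpoint IP from your cloud provider
    ports:
      - port: 5432
```

Airflow connections can then use private-db.my-airflow-namespace.svc.cluster.local as the host.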
Service mesh integration
Istio
APC is compatible with Istio, but APC doesn’t configure or manage Istio. You must install and configure Istio on your cluster yourself before enabling this feature in APC.
To enable Istio compatibility mode:
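A sketch of the values, assuming the feature is toggled under global.istio (rootNamespace is named in this guide; adjust it to match your Istio installation):

```yaml
global:
  istio:
    enabled: true
    rootNamespace: istio-system
```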
The rootNamespace must match the root config namespace of your Istio installation. In most Istio installations this is istio-system or istio-config. If this value is incorrect, the default Sidecar egress policy won’t apply to Airflow Deployment namespaces.
APC uses the networking.istio.io/v1alpha3 API for Sidecar resources. This API is deprecated in Istio 1.22 and later. Clusters running Istio 1.22+ will see deprecation warnings in the Istio control plane logs.
What APC creates when Istio is enabled
APC creates three Istio Sidecar resources to control egress traffic:
The default-sidecar-config resource in the rootNamespace acts as a cluster-wide default for any namespace that doesn’t have its own Sidecar resource, which covers all Airflow Deployment namespaces.
Namespace labeling for sidecar injection
APC doesn’t automatically label namespaces for Istio sidecar injection. You must label the platform namespace and any Airflow Deployment namespaces yourself for injection to take effect on long-running pods (Houston, Commander, Astro UI, Registry):
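For example, using the standard Istio injection label (namespace name is a placeholder):

```
kubectl label namespace <platform-namespace> istio-injection=enabled
```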
If you use namespace pools, apply the label automatically to all Airflow namespaces using global.namespaceLabels:
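For example:

```yaml
global:
  namespaceLabels:
    # Standard Istio label that enables sidecar injection
    istio-injection: enabled
```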
Airflow Deployment egress under Istio
The default-sidecar-config restricts Airflow pod egress to the same namespace, istio-system, Elasticsearch, and SQL proxy only. Dag tasks that connect to external databases or APIs through the service mesh will be blocked.
To allow additional egress for a specific Airflow namespace, create a custom Sidecar resource in that namespace that overrides the default:
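A sketch using the networking.istio.io/v1alpha3 API this guide says APC uses; the namespace names are placeholders:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: custom-egress
  namespace: my-airflow-namespace
spec:
  egress:
    - hosts:
        - "./*"                    # same namespace
        - "istio-system/*"         # Istio control plane
        - "external-db-namespace/*"  # additional allowed egress target
```

A namespace-local Sidecar resource takes precedence over the cluster-wide default in the rootNamespace.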
Istio Sidecar resources only restrict traffic to services registered in the mesh. Egress to external public IPs not registered as Kubernetes services remains reachable by default unless you set outboundTrafficPolicy: REGISTRY_ONLY on the mesh.
Sidecar injection behavior
APC disables Istio sidecar injection on pods that shouldn’t have a proxy:
- NGINX (control plane and data plane) — injection disabled. NGINX pods also have traffic.sidecar.istio.io/includeInboundPorts: "" set unconditionally to prevent accidental inbound port interception even if Istio is installed but not yet enabled in values.
- Houston cronjobs and Helm hooks — injection disabled; short-lived batch jobs don't need a sidecar proxy.
- Config syncer, Commander JWKS hook, NATS JetStream job — injection disabled for the same reason.
Prometheus is the only platform component that receives the Istio sidecar, with explicit resource allocation:
What APC doesn’t configure
APC doesn’t create VirtualServices, DestinationRules, Gateways, AuthorizationPolicy, or PeerAuthentication resources. Configure mTLS mode and traffic management policies directly in Istio. See the Istio mTLS documentation for details.
Network policies
Enable network policies
When both networkPolicy.enabled and defaultDenyNetworkPolicy are true, APC creates a NetworkPolicy named <release-name>-default-deny-ingress in the platform namespace that denies all inbound traffic by default. Per-component network policies then selectively allow the required connections.
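A sketch of the two settings; their placement under global is an assumption to verify against your chart's values reference:

```yaml
global:
  networkPolicy:
    enabled: true
  # Creates <release-name>-default-deny-ingress in the platform namespace
  defaultDenyNetworkPolicy: true
```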
The default-deny policy covers ingress only. Egress from platform and Airflow pods is unrestricted by default. Apply custom NetworkPolicy resources to restrict outbound traffic.
Set defaultDenyNetworkPolicy: false if your cluster already has a deny-all policy in place, or if other services running in the platform namespace would be disrupted.
Default policies
When enabled, APC creates network policies for:
- Platform component communication
- Airflow Deployment isolation
- Ingress traffic
APC’s Houston network policy allows ingress from any pod labeled tier: airflow across all namespaces, using namespaceSelector: {}. Isolation between Airflow Deployments is enforced through Kubernetes RBAC, not network policies.
Custom policies
Apply custom Kubernetes NetworkPolicy resources to extend or restrict traffic beyond the default APC-managed policies.
Allow specific egress
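A standard Kubernetes NetworkPolicy that allows Airflow pods in one namespace to reach an external database subnet; the namespace, CIDR, and port are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db-egress
  namespace: my-airflow-namespace
spec:
  podSelector:
    matchLabels:
      tier: airflow
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.20.0.0/24   # database subnet
      ports:
        - protocol: TCP
          port: 5432
```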
Restrict cross-namespace
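A standard policy that limits inbound traffic to pods within the same namespace (namespace name is a placeholder):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cross-namespace
  namespace: my-airflow-namespace
spec:
  podSelector: {}          # all pods in this namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # only pods in the same namespace
```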
Namespace labeling
Any namespace management requires cluster-scoped permissions. These settings only take effect if APC is granted a cluster role to manage namespace resources.
global.networkNSLabels
Set global.networkNSLabels: true to label the platform namespace with platform: <release-name>. This also adds namespace label selectors to network policies, restricting traffic to pods in namespaces that carry the matching label.
For example, if APC is installed with release name apc-private-cloud in namespace astro-cloud, enabling this feature patches the namespace with platform=apc-private-cloud, allowing you to filter it with:
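```
kubectl get namespaces -l platform=apc-private-cloud
```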
global.namespaceLabels
global.namespaceLabels applies custom labels to Airflow Deployment namespaces. Its behavior depends on who owns namespace provisioning:
With customer-managed namespaces (namespace pools): When customers create and own their namespaces, APC no longer has access to patch them and skips namespace label updates. Customers are responsible for applying the required labels to their own namespaces.
With Astronomer-owned provisioning (cluster role): APC manages namespace creation and applies both its own required labels and any labels specified under global.namespaceLabels:
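For example (label keys and values are illustrative):

```yaml
global:
  namespaceLabels:
    team: data-platform
    environment: production
```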
TLS scope
global.ssl.enabled controls TLS for database connections only — the connection from platform components to PostgreSQL. It doesn’t enable mTLS between platform services or TLS on internal cluster communication.
Don’t rely on global.ssl.enabled to secure inter-service traffic. APC doesn’t configure Istio — if you need service mesh mTLS, install and configure Istio directly in your cluster.
DNS and service discovery
Internal DNS
Airflow Deployments use Kubernetes DNS for internal service resolution:
- Services: <service>.<namespace>.svc.cluster.local
- Pods: <pod-ip>.<namespace>.pod.cluster.local
Firewall requirements
Ingress ports
Egress requirements
Control plane and data plane connectivity
In a split-plane deployment, the control plane reaches the data plane over HTTPS, and the data plane reaches the control plane Houston API:
Ensure firewall rules between the two cluster load balancers permit outbound HTTPS on port 443 in both directions.
Internal cluster ports
Troubleshoot
Connectivity issues
Proxy issues
DNS issues
Best practices
- Enable network policies to enforce least-privilege access.
- Include .svc.cluster.local and kubernetes.default.svc in NO_PROXY to prevent internal Kubernetes traffic from routing through a proxy.
- Use private load balancers for internal-only access.
- Document all egress requirements for firewall teams.
- Test connectivity before deploying Dags.
- Use Kubernetes services for internal connections.