Clean up and delete task metadata from Airflow DB
The Houston API GraphQL query, cleanupAirflowDb, triggers the Airflow metadata cleanup job. You can run a cleanup job to automatically delete task and DAG metadata from your Deployment. This job runs a custom Astronomer cleanup script for all of your Deployments and exports the results as CSV-formatted files to your configured external storage service.
You can enable this feature by setting the astronomer.houston.cleanupAirflowDb.enabled config flag to true in your values.yaml file.
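For reference, the flag sits under the Houston section of your values.yaml; the remaining cleanup settings are covered in Step 3:

```yaml
astronomer:
  houston:
    cleanupAirflowDb:
      enabled: true
```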
There are two ways to use this feature:
- Scheduled Cleanup: You can configure a Kubernetes CronJob to run the cleanup job at regular intervals by defining the schedule and job parameters in the astronomer.houston.cleanupAirflowDb section of your values.yaml file.
- Manual Cleanup: The Houston API GraphQL query, cleanupAirflowDb, manually triggers the Airflow metadata cleanup job for immediate execution.
The cleanup job permanently deletes any task and DAG metadata that is older than the age you set in the olderThan configuration. Ensure that none of your historical data is required to run current DAGs or tasks before enabling this feature.
Prerequisites
- System admin user privileges
- External storage credentials that allow read/write permissions to your storage
- (AWS Cloud Provider) The AWS CLI
Step 1: Configure your external storage credentials
Google Cloud Storage
AWS
- You must provision a GCP Service Account with appropriate read/write permissions to your bucket. Export these credentials as a JSON file.
- Create a Kubernetes secret in your Astronomer platform namespace with a name such as astronomer-gcs-keyfile. Then, run the following commands to update your environment:
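A minimal sketch with kubectl, assuming your platform namespace is named astronomer and your exported keyfile is key.json; adjust both to your environment:

```bash
# Create a Kubernetes secret from the exported GCP service account keyfile.
# The namespace and file path are placeholders for your own values.
kubectl create secret generic astronomer-gcs-keyfile \
  --from-file=key.json=/path/to/key.json \
  --namespace astronomer
```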
You use this Kubernetes secret to configure providerEnvSecretName when you configure the cleanup job, and env.name when you set the storage provider secret.
Step 2: (Optional) Configure a connection ID
If you want to run cleanup jobs for specific Deployments or within a Workspace, or trigger jobs manually with an API query, you can configure an Airflow connection to your external storage service so that it can be stored as an environment variable. Use the service account credentials to authenticate to your service when configuring your connection.
Google Cloud Storage
AWS
- You must provision a GCP Service Account with appropriate read/write permissions to your bucket. Export these credentials as a JSON file.
- Create an Airflow connection using these credentials, for example as sketched below. See the Airflow documentation to learn how to configure your connection.
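One way to create the connection is with the Airflow CLI. In this sketch, the connection ID gcs_cleanup, the project name, and the keyfile path are placeholders, and the exact extra fields depend on your Google provider version:

```bash
# Hypothetical connection ID, project, and keyfile path; adjust to your environment.
airflow connections add 'gcs_cleanup' \
  --conn-type 'google_cloud_platform' \
  --conn-extra '{"key_path": "/path/to/key.json", "project": "my-gcp-project"}'
```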
The connection details are stored in your config.yaml, which can be decoded. You can use this connection as your connectionId when you trigger the cleanup job with an API query, but it is not required.
Step 3: Configure the cleanup job
The CronJob configuration provides the default values that your cleanup job uses, whether you run a scheduled or a manually triggered cleanup job.
The following example shows an automatic cleanup job configuration that runs at 5:23 AM and cleans up Deployment metadata that is more than one year old.
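A sketch of what this configuration could look like. The enabled, olderThan, and providerEnvSecretName keys are referenced elsewhere in this guide; the schedule, provider, bucketName, and outputPath keys are assumptions, so confirm the exact field names against the values reference for your platform version:

```yaml
astronomer:
  houston:
    cleanupAirflowDb:
      enabled: true
      # Assumed cron-format field: run every day at 5:23 AM
      schedule: "23 5 * * *"
      # Delete metadata older than one year (in days)
      olderThan: 365
      # Assumed provider-specific fields; names may differ by version
      provider: gcp
      bucketName: "my-cleanup-bucket"
      outputPath: "exports"
      # Kubernetes secret created in Step 1
      providerEnvSecretName: "astronomer-gcs-keyfile"
```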
Step 4: Set the storage provider secret
In the Houston config section of your values.yaml file, set the storage provider secret that you configured in Step 1, so that the cleanup job can export your cleanup results to your cloud storage.
You cannot have cleanupAirflowDb.enabled: true set at multiple levels. You can only have the job enabled at one of the three scope levels. The env.name value must match the secret name that you configured for providerEnvSecretName in your values.yaml file.
Deprecated Configuration: Using the Scheduler
Setting the provider secret in the Scheduler is the only method for configuring the secret for versions 0.37.0-0.37.3.
Configure the provider secret in Houston
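The exact values keys vary by Astronomer Software version, so treat the following Google Cloud Storage example as a sketch rather than the definitive configuration. It assumes the secret from Step 1 is exposed to Houston as an environment entry whose name matches the providerEnvSecretName value:

```yaml
astronomer:
  houston:
    env:
      # Assumed shape: the entry name must match providerEnvSecretName
      # and the Kubernetes secret created in Step 1.
      - name: "astronomer-gcs-keyfile"
        valueFrom:
          secretKeyRef:
            name: "astronomer-gcs-keyfile"
            key: "key.json"
```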
Configure the storage provider secret in a Deployment
Step 5: (Optional) Set container CPU and memory limits or requests
You can set limits and requests for the CPU and memory of the cleanup container by adding the following to your cleanupAirflowDb configuration. These values become the new defaults for your cleanup job if you do not pass any additional configuration in your GraphQL mutation. Additionally, if you don't use the manual trigger and instead use the cleanup CronJob, these resources also become the new defaults used when scheduling cleanup jobs.
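A sketch of such a configuration, assuming a Kubernetes-style resources block under cleanupAirflowDb; the exact field name and layout may differ in your version:

```yaml
astronomer:
  houston:
    cleanupAirflowDb:
      # Assumed field name and layout; check your version's values reference
      resources:
        requests:
          cpu: "250m"
          memory: "512Mi"
        limits:
          cpu: "500m"
          memory: "1Gi"
```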
You can override these values by passing resourceSpec in an API query. See Scenario 4: Configure custom Pod Resources.
Step 6: Apply your configuration
Apply your platform configuration changes to enable cleanup jobs and to set your cronjob schedule.
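For example, you can push the updated values.yaml with a standard Helm upgrade. The release name, namespace, and chart version below are placeholders for your own installation:

```bash
# Placeholders: release name "astronomer", namespace "astronomer", and your platform version.
helm upgrade astronomer astronomer/astronomer \
  --namespace astronomer \
  -f values.yaml \
  --version <your-platform-version>
```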
The new configuration is applied to your existing Deployments only if you set astronomer.houston.upgradeDeployments.enabled to true.
Step 7: (Optional) Manually trigger the cleanup job
The following configuration enables you to trigger a cleanup job manually using a Houston API query. When you use the cleanup job in this way, the values you include in the query are used instead of the defaults set in your values.yaml configuration. This means you must specify the Deployment or Workspace that you want to clean up in your query.
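The query-variable examples in the scenarios below assume a mutation along the following lines. The argument names come from this guide (workspaceId, deploymentIds, olderThan, dryRun, connectionId); the types and return value are assumptions, so confirm the exact signature against the Houston API schema for your version:

```graphql
# Hypothetical shape of the cleanup mutation; verify argument names, types,
# and the return type against your Houston API schema.
mutation cleanupAirflowDb(
  $workspaceId: Uuid
  $deploymentIds: [Uuid]
  $olderThan: Int
  $dryRun: Boolean
  $connectionId: String
) {
  cleanupAirflowDb(
    workspaceId: $workspaceId
    deploymentIds: $deploymentIds
    olderThan: $olderThan
    dryRun: $dryRun
    connectionId: $connectionId
  )
}
```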
Restrict cleanup to manual-only triggers
In Step 3, you set an automatic schedule for your platform to clean up task metadata by setting the astronomer.houston.cleanupAirflowDb.enabled configuration to true. To allow only manually triggered cleanup jobs, you must instead set astronomer.houston.cleanupAirflowDb.enabled to false. Manually triggered cleanup jobs require you to use a Houston API query and specify the Deployments where you want to archive metadata.
The following examples show different mutations that you can use depending on your needs. See Houston API examples for all examples and scenarios that you can use to work with the Houston API.
Houston API Parameters
Set dryRun: true to test this feature without deleting any data. When dry runs are enabled, the cleanup job only prints the data that it plans to modify in the serial output of the webserver Pod. To view the dryRun events of the cleanup job, check the logs of your webserver Pod for each Deployment.
Scenario 1: Cleanup Deployments per Workspace
You can use the following query to clean up Deployments in a specific Workspace. Configure the workspaceId parameter with the Workspace whose Deployments you want to clean up. You can also find Workspace IDs with the sysWorkspaces Houston API query.
The following example shows some configured query variables to clean up metadata older than 1 day from all Deployments within the Workspace cma40n66l000008l89nye86o1, which uses GCP as a cloud provider.
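A sketch of those variables. The workspaceId and olderThan values come from the scenario description; the provider and dryRun fields are assumptions included for illustration, so check the Houston API schema for the parameters your version supports:

```json
{
  "workspaceId": "cma40n66l000008l89nye86o1",
  "olderThan": 1,
  "provider": "gcp",
  "dryRun": true
}
```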
Scenario 2: Cleanup Deployments across your system
You can use the following query to clean up specific Deployments in a specific Workspace. Configure the workspaceId parameter with the Workspace and the deploymentIds parameter with the specific Deployments. You can also find Workspace IDs with the sysWorkspaces Houston API query.
The following example shows some configured query variables to clean up metadata older than 1 day from the specified Deployments within the Workspace cma42z570000008l8f6rpc72f.
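A sketch of those variables; the Deployment IDs are placeholders for your own Deployments:

```json
{
  "workspaceId": "cma42z570000008l8f6rpc72f",
  "deploymentIds": ["<deployment-id-1>", "<deployment-id-2>"],
  "olderThan": 1
}
```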
Scenario 3: Clean up Deployment using Airflow connection ID
You can find your connection ID, or connectionId, from the Airflow UI or CLI. You can clean up either all Deployments in a Workspace or specific Deployments, and export the cleanup logs to a storage provider configured in an Airflow connection.
The following code example cleans up metadata more than 1 day old from the specified Deployments and exports the logs to the storage provider defined in the connectionId.
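A sketch of those variables. The Deployment IDs are placeholders, and gcs_cleanup is the hypothetical connection ID from the Step 2 sketch:

```json
{
  "deploymentIds": ["<deployment-id-1>", "<deployment-id-2>"],
  "olderThan": 1,
  "connectionId": "gcs_cleanup"
}
```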
Scenario 4: Configure custom Pod resources
If you do not configure a specific default Pod CPU or memory resource amount, or if you want to override a configuration, you can make a GraphQL query to set a user-defined resource configuration.
The following query parameters show an example for configuring the resource requests and limits for the Cleanup run.
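A sketch of what those variables could look like, assuming resourceSpec accepts Kubernetes-style requests and limits; the field layout is an assumption, so confirm it against the Houston API schema for your version:

```json
{
  "deploymentIds": ["<deployment-id>"],
  "olderThan": 1,
  "resourceSpec": {
    "requests": { "cpu": "250m", "memory": "512Mi" },
    "limits": { "cpu": "500m", "memory": "1Gi" }
  }
}
```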
Access your cleanup logs
You can access your cleanup logs through the UI or with your Pod logs.
Pod Logs
You can access your Pod logs with Vector sidecar logging or FluentD by looking for <release-name>-meta-cleanup-job in the Airflow namespace.
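For example, you can locate and read the cleanup job's Pod directly with kubectl; the namespace below is a placeholder for your Deployment's Airflow namespace:

```bash
# Find the cleanup job Pod in the Deployment's Airflow namespace (placeholder name).
kubectl get pods -n <airflow-namespace> | grep meta-cleanup-job

# Read its logs.
kubectl logs -n <airflow-namespace> <pod-name>
```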
UI access
Go to the Logs tab on your Deployment's page and select the AirflowMetaCleanup tab to access the logs. This is only supported for FluentD at this time.