Clean up the Airflow metadata database using Dags

In addition to storing configurations about your Airflow environment, the Airflow metadata database stores data about past and present task runs. Airflow never automatically removes metadata, so the longer you use it, the more task run data is stored in your metadata DB. Over a long enough time, this can result in a bloated metadata DB, which can affect performance across your Airflow environment.

When a table in the metadata DB grows larger than 50 GiB, you might start to experience degraded scheduler performance. This can result in:

  • Slow task scheduling
  • Slow Dag parsing
  • Gunicorn timing out when using the Celery executor
  • Slower Airflow UI load times

The following tables in the database are at risk of becoming too large over time:

  • dag_run
  • job
  • log
  • rendered_task_instance_fields
  • task_instance
  • xcom

To keep your Airflow environment running at optimal performance, you can clean the metadata DB with the Airflow CLI command airflow db clean, which safely removes old records without requiring you to query the database directly.
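For reference, a typical invocation of the CLI command looks like the following sketch. The timestamp is a placeholder; always preview with a dry run first.

```shell
# Preview what would be deleted for records older than the cutoff.
airflow db clean --clean-before-timestamp '2025-01-01T00:00:00+00:00' --dry-run

# Run the deletion for real, dropping the archive tables and skipping the
# confirmation prompt.
airflow db clean --clean-before-timestamp '2025-01-01T00:00:00+00:00' --skip-archive --yes
```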

In Airflow 3, this command cannot be called from a Dag because tasks can no longer directly access the metadata DB. Instead, you can expose the utility functions behind the command over an HTTP API using an Airflow plugin. This tutorial describes how to implement the cleanup Dag and the corresponding plugin so that you can clean your database directly from the Airflow UI.

Even when using Airflow’s DB clean utilities, deleting data from the metadata database can destroy important data. Read the Warnings section carefully before implementing this tutorial Dag in any production Airflow environment.

Warnings

Deleting data from the metadata database can be an extremely destructive action. If you delete data that future task runs depend on, it’s difficult to restore the database to its previous state without interrupting your data pipelines. Before implementing the Dag in this tutorial, consider the following:

  • When specifying the clean_before_timestamp value, use as old a date as possible. The older the deleted data, the less likely it is to affect your currently running Dags.
  • The Dag in this tutorial drops the archive tables it creates during cleanup by passing the skip_archive=True argument, so no deletion history is maintained. If the cleanup task fails (for example, if it runs for longer than five minutes), its archive tables are not cleared. Calling drop_archived_tables in the second task of the Dag ensures all archive tables are dropped even if the first task fails.

Prerequisites

  • An Airflow project

    This Dag has been designed and optimized for Airflow environments running on Astro. Consider adjusting the parameters and code if you’re running the Dag in any other type of Airflow environment.

  • The HTTP Airflow provider installed

Step 1: Create your Dag and plugin

  1. In your dags folder, create a file called db_cleanup.py.

  2. Copy the following code into the Dag file.

    """A DB cleanup dag maintained by Astronomer."""

    from datetime import UTC, datetime, timedelta

    from airflow.cli.commands.db_command import all_tables
    from airflow.providers.http.hooks.http import HttpHook
    from airflow.sdk import Param, dag, task


    def get_tables() -> list[str]:
        tables = []

        for table in all_tables:
            # can't delete dag versions which may be older than corresponding task instances
            # in order to keep dag_version untouched we also need to ignore the dag table
            # https://github.com/apache/airflow/issues/56192
            if table in {
                "dag_version",
                "dag",
            }:
                continue
            tables.append(table)

        return tables


    @task
    def get_chunked_timestamps(**context) -> list[datetime]:
        from plugins.db_cleanup import OldestTimestampResponse

        http_conn_id = context["params"]["http_conn_id"]
        tables = context["params"]["tables"]
        batches = []

        response = HttpHook("GET", http_conn_id).run(
            "/db_cleanup/api/oldest_timestamp",
            data={"table_names": tables},
        )
        start_chunk_time = OldestTimestampResponse.model_validate_json(response.content).oldest_timestamp

        if start_chunk_time is not None:
            start_ts = start_chunk_time
            end_ts = datetime.fromisoformat(context["params"]["clean_before_timestamp"])
            batch_size_days = context["params"]["batch_size_days"]

            while start_ts < end_ts:
                batch_end = min(start_ts + timedelta(days=batch_size_days), end_ts)
                batches.append(batch_end)
                start_ts += timedelta(days=batch_size_days)
        return batches


    @task(map_index_template="ts {{ clean_before_timestamp }}")
    def db_cleanup(clean_before_timestamp: datetime, **context) -> None:
        context["clean_before_timestamp"] = clean_before_timestamp.isoformat()
        tables = context["params"]["tables"]
        http_conn_id = context["params"]["http_conn_id"]
        HttpHook("DELETE", http_conn_id).run(
            "/db_cleanup/api/records",
            params={
                "clean_before_timestamp": clean_before_timestamp.isoformat(),
                "dry_run": context["params"]["dry_run"],
                "skip_archive": True,
                "table_names": tables,
            },
        )


    @task(trigger_rule="all_done")
    def clean_archive_tables(**context) -> None:
        tables = context["params"]["tables"]
        http_conn_id = context["params"]["http_conn_id"]
        HttpHook("DELETE", http_conn_id).run(
            "/db_cleanup/api/archived",
            params={"table_names": tables},
        )


    @dag(
        schedule=None,
        catchup=False,
        description=__doc__,
        doc_md=__doc__,
        render_template_as_native_obj=True,
        max_active_tasks=1,
        max_active_runs=1,
        tags=["astronomer", "cleanup"],
        params={
            "clean_before_timestamp": Param(
                default=(datetime.now(tz=UTC) - timedelta(days=90)).isoformat(),
                type="string",
                format="date-time",
                description="Delete records older than this timestamp. Default is 90 days ago.",
            ),
            "tables": Param(
                default=get_tables(),
                type=["null", "array"],
                examples=get_tables(),
                description="List of tables to clean. Default is all tables.",
            ),
            "dry_run": Param(
                default=False,
                type="boolean",
                description="Show a summary of which tables would be deleted in the api-server logs without actually deleting the records. Default is False.",
            ),
            "batch_size_days": Param(
                default=7,
                type="integer",
                description="Number of days in each batch for the cleanup. Default is 7 days.",
            ),
            "http_conn_id": Param(
                default="http_default",
                type="string",
                description="The HTTP connection ID for calling the cleanup API. Default is 'http_default'.",
            ),
        },
    )
    def astronomer_db_cleanup():

        db_cleanup.expand(clean_before_timestamp=get_chunked_timestamps()) >> clean_archive_tables()


    astronomer_db_cleanup()

    Rather than running on a schedule, this Dag is triggered manually by default and includes params so that you have full control over how you clean the metadata DB.

    It includes three tasks:

    • get_chunked_timestamps: creates a list of timestamps to process in batches.
    • db_cleanup: calls the run_cleanup utility.
    • clean_archive_tables: calls the drop_archived_tables utility.

    These three tasks run with params you specify at runtime. The params let you specify:

    • clean_before_timestamp: What age of data to delete. Any data that was created before the specified time will be deleted. The default is to delete all data older than 90 days.
    • tables: Which tables to delete data from. By default, all tables supported by the DB cleanup utilities are included except for the dag and dag_version tables.
    • dry_run: Whether to run the cleanup as a dry run, meaning that no data is deleted. The Dag instead logs the SQL that would be executed based on the other parameters you specified. The default is to run the deletion without a dry run.
    • batch_size_days: The number of days of data to process in each cleanup batch.
    • http_conn_id: Which HTTP connection to use for calling the API exposing the DB cleanup utilities.
  3. In your plugins folder, create a file called db_cleanup.py.

  4. Copy the following code into the plugin file.

    """A DB cleanup plugin maintained by Astronomer."""

    import logging
    import os
    from collections.abc import Generator
    from datetime import datetime
    from typing import Annotated

    from airflow.api_fastapi.common.router import AirflowRouter
    from airflow.plugins_manager import AirflowPlugin
    from airflow.utils.db import reflect_tables
    from airflow.utils.db_cleanup import _effective_table_names, drop_archived_tables, run_cleanup
    from airflow.utils.session import create_session
    from fastapi import Depends, FastAPI, Query
    from pydantic import BaseModel
    from sqlalchemy import func
    from sqlalchemy.orm.session import Session


    def _get_session() -> Generator[Session, None]:
        with create_session() as session:
            yield session


    logger = logging.getLogger(__name__)


    class TableInfo(BaseModel):
        table_name: str
        row_estimate: int = 0
        table_bytes: int = 0
        index_bytes: int = 0
        toast_bytes: int = 0
        total_bytes: int = 0


    class InfoResponse(BaseModel):
        tables: list[TableInfo] = []


    class OldestTimestampResponse(BaseModel):
        oldest_timestamp: datetime | None = None


    api = AirflowRouter(tags=["DB API"])


    @api.get("/info")
    def info(
        *,
        order_by: str = "total_bytes",
        order_desc: bool = True,
        session: Annotated[Session, Depends(_get_session)],
    ) -> InfoResponse:
        """
        Provides information about the size of tables in the metadata database.
        """
        if order_by not in {
            "table_name",
            "row_estimate",
            "table_bytes",
            "index_bytes",
            "toast_bytes",
            "total_bytes",
        }:
            raise ValueError(f"Invalid order_by value: {order_by}")
        query = f"""
            SELECT
                table_name,
                row_estimate,
                total_bytes - index_bytes - COALESCE(toast_bytes, 0) AS table_bytes,
                index_bytes,
                toast_bytes,
                total_bytes
            FROM (
                SELECT
                    relname AS table_name,
                    c.reltuples::int AS row_estimate,
                    pg_indexes_size(c.oid) AS index_bytes,
                    pg_total_relation_size(reltoastrelid) AS toast_bytes,
                    pg_total_relation_size(c.oid) AS total_bytes
                FROM pg_class c
                LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
                WHERE relkind = 'r'
                    AND nspname = :table_schema
            ) a
            ORDER BY {order_by} {"DESC" if order_desc else "ASC"};
        """
        table_schema = "public" if os.getenv("ASTRONOMER_ENVIRONMENT") == "local" else "airflow"

        result = session.execute(query, {"table_schema": table_schema})
        response = InfoResponse()

        for row in result:
            response.tables.append(TableInfo(**{k: v for k, v in row._mapping.items() if v is not None}))

        return response


    @api.get("/oldest_timestamp")
    def get_oldest_timestamp(
        *,
        table_names: Annotated[list[str] | None, Query()] = None,
        session: Annotated[Session, Depends(_get_session)],
    ) -> OldestTimestampResponse:
        oldest_timestamp_list = []
        existing_tables = reflect_tables(tables=None, session=session).tables
        _, effective_config_dict = _effective_table_names(table_names=table_names)
        for table_name, table_config in effective_config_dict.items():
            if table_name in existing_tables:
                orm_model = table_config.orm_model
                recency_column = table_config.recency_column
                oldest_execution_date = session.query(func.min(recency_column)).select_from(orm_model).scalar()
                if oldest_execution_date:
                    oldest_timestamp_list.append(oldest_execution_date)
                else:
                    logging.info("No data found for %s, skipping...", table_name)
            else:
                logging.warning("Table %s not found. Skipping.", table_name)

        response = OldestTimestampResponse()
        if oldest_timestamp_list:
            response.oldest_timestamp = min(oldest_timestamp_list)
        return response


    @api.delete("/records")
    def delete_records(
        *,
        clean_before_timestamp: datetime,
        table_names: Annotated[list[str] | None, Query()] = None,
        dry_run: bool = False,
        verbose: bool = False,
        skip_archive: bool = False,
        batch_size: int | None = None,
        session: Annotated[Session, Depends(_get_session)],
    ):
        run_cleanup(
            clean_before_timestamp=clean_before_timestamp,
            table_names=table_names,
            dry_run=dry_run,
            verbose=verbose,
            confirm=False,
            skip_archive=skip_archive,
            batch_size=batch_size,
            session=session,
        )


    @api.delete("/archived")
    def delete_archived(
        *,
        table_names: Annotated[list[str] | None, Query()] = None,
        session: Annotated[Session, Depends(_get_session)],
    ):
        drop_archived_tables(
            table_names=table_names,
            needs_confirm=False,
            session=session,
        )


    app = FastAPI()
    app.include_router(api, prefix="/api")


    class AstronomerDBCleanupPlugin(AirflowPlugin):
        name = "AstronomerDBCleanupPlugin"
        fastapi_apps = [
            {
                "app": app,
                "url_prefix": "/db_cleanup",
                "name": "Astronomer DB Cleanup Plugin",
            }
        ]

    The plugin exposes the following API endpoints:

    • GET /db_cleanup/api/info: Provide a list of tables with their corresponding sizes and row count estimates. This endpoint is not used by the Dag, but can be useful to get insights into table sizes.
    • GET /db_cleanup/api/oldest_timestamp: Return the oldest timestamp across the tables to clean, which is used for calculating batches.
    • DELETE /db_cleanup/api/records: Call the run_cleanup utility.
    • DELETE /db_cleanup/api/archived: Call the drop_archived_tables utility.
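For reference, the DELETE request that the db_cleanup task sends resolves to a URL like the one built below. This sketch uses only the standard library; the host and parameter values are placeholders, and repeated table_names entries match how FastAPI parses list query parameters:

```python
from urllib.parse import urlencode

# Placeholder host; on Astro this is your Deployment URL.
base = "http://api-server:8080/db_cleanup/api/records"
params = {
    "clean_before_timestamp": "2025-01-01T00:00:00+00:00",
    "dry_run": True,
    "skip_archive": True,
    "table_names": ["log", "xcom"],
}
# doseq=True repeats table_names once per entry, which FastAPI reads as a list.
url = f"{base}?{urlencode(params, doseq=True)}"
print(url)
```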

Because the DB cleanup utilities run on the api-server, the corresponding logs show up in the api-server logs.

Step 2: Configure an HTTP connection

Add an HTTP connection for calling the API endpoints.

  • host: Set this to the deployment’s URL. For example, on Astro this would look something like https://cmls9yey09fpw01ncvse41m4n.4n.astronomer.run/dse41m4n. When running locally with astro dev, this should be set to http://api-server:8080.
  • extra: If needed, set the authorization header. On Astro with an API token this would look something like {"Authorization": "Bearer mytoken1234...abc1234"}.
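Alternatively, the whole connection can be supplied as an environment variable using Airflow's JSON connection format. The deployment URL and token below are placeholders:

```shell
export AIRFLOW_CONN_HTTP_DEFAULT='{
    "conn_type": "http",
    "host": "https://<your-deployment-url>",
    "extra": {"Authorization": "Bearer <your-api-token>"}
}'
```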

Step 3: Practice running the Dag

In this step, run the Dag in a local Airflow environment to practice the workflow for cleaning metadata DB records. If you completed Step 1 in your production environment, repeat it in your local Airflow project before starting it. A fresh local Airflow environment typically has little to clean up; a production environment that has been running for a while has many more historic records to clean up.

  1. Run astro dev start in your Astro project to start Airflow, then open the Airflow UI at localhost:8080.

  2. Ensure the Airflow connection http_default with host http://api-server:8080 is set.

    Instead of creating an Airflow connection, you can also define it as an environment variable AIRFLOW_CONN_HTTP_DEFAULT=http://api-server:8080 in your local .env file.

  3. In the Airflow UI, run the astronomer_db_cleanup Dag by clicking the play button and configuring the following params:

    • Enable dry_run.
    • Choose an appropriate cutoff date for clean_before_timestamp.
  4. Click Trigger.

  5. In a local terminal run astro dev logs --api-server -f to show the api-server logs.

  6. Check that the run_cleanup utility completed successfully. Note that if you created a new Astro project for this tutorial, the run will not show much data to be deleted.

You can now use this Dag to periodically clean data from the Airflow metadata DB as needed.