Astronomer's The Data Flowcast

Scaling Airflow to 11,000 DAGs Across Three Regions at Intercom with András Gombosi and Paul Vickers

The evolution of Intercom’s data infrastructure shows how a well-built orchestration system can scale to serve global needs. With thousands of DAGs powering analytics, AI, and customer operations, the team’s approach combines technical depth with organizational insight.

In this episode, András Gombosi, Senior Engineering Manager - Data Infra and Analytics Engineering at Intercom, and Paul Vickers, Principal Engineer at Intercom, share how they built one of the largest Airflow deployments in production and enabled self-serve data platforms across teams.

Key Takeaways:

(00:00) Introduction.
(04:24) Community input encourages confident adoption of a common platform.
(08:50) Self-serve workflows require consistent guardrails and review.
(09:25) Internal infrastructure support accelerates scalable deployments.
(13:26) Batch LLM processing benefits from a configuration-driven design.
(15:20) Standardized development environments enable effective AI-assisted work.
(19:58) Applied AI enhances internal analysis and operational enablement.
(27:27) Strong test coverage and staged upgrades protect stability.
(30:36) Proactive observability and on-call ownership improve outcomes.

Thanks for listening to “The Data Flowcast: Mastering Apache Airflow® for Data Engineering and AI.” If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss any of the insightful conversations.

Be Our Guest

Interested in being a guest on The Data Flowcast? Fill out the form and we will be in touch.

Build, run, & observe your data workflows.
All in one place.

Try Astro today and get up to $20 in free credits during your 14-day trial.