Astronomer's The Data Flowcast

Inside Conviva’s Decision To Power Its Data Platform With Airflow, with Han Zhang

Conviva operates at a massive scale, delivering outcome-based intelligence for digital businesses through real-time and batch data processing. As new use cases emerged, the team needed a way to extend a streaming-first architecture without rebuilding core systems.

In this episode, Han Zhang joins us to explain how Conviva uses Apache Airflow as the orchestration backbone for its batch workloads, how its control plane is designed, and which trade-offs shaped its platform decisions.
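For listeners newer to Airflow, the sketch below shows the general shape of the kind of batch orchestration the episode covers: a scheduled DAG whose tasks pass results to one another and whose every run is recorded. This is a minimal, hypothetical example, not Conviva's actual pipeline code; the DAG id, task names, schedule, and data are all invented for illustration, using the Airflow 2.4+ TaskFlow API.

```python
# A minimal, hypothetical Airflow DAG sketch (Airflow 2.4+ TaskFlow API).
# Not Conviva's code: the DAG id, tasks, schedule, and data are invented.
from datetime import datetime

from airflow.decorators import dag, task


@dag(
    schedule="@daily",                 # hypothetical daily batch cadence
    start_date=datetime(2024, 1, 1),
    catchup=False,
    tags=["batch", "example"],
)
def batch_metrics_example():
    @task
    def extract() -> list[int]:
        # Stand-in for reading a day's worth of records.
        return [1, 2, 3]

    @task
    def aggregate(records: list[int]) -> int:
        # Stand-in for a batch aggregation step.
        return sum(records)

    @task
    def publish(total: int) -> None:
        # Stand-in for writing the result downstream.
        print(f"daily total: {total}")

    # TaskFlow wires the dependency graph from these calls:
    # extract -> aggregate -> publish.
    publish(aggregate(extract()))


batch_metrics_example()
```

Each run of a DAG like this gets its own recorded history and per-task logs in the Airflow UI, which is the built-in observability the conversation highlights around the 10:15 mark.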

Key Takeaways:

  • 00:00 Introduction.
  • 01:17 Large-scale data platforms require low-latency processing capabilities.
  • 02:08 Batch workloads can complement streaming pipelines for additional use cases.
  • 03:45 An orchestration framework can act as the core coordination layer.
  • 06:12 Batch processing enables workloads that streaming alone cannot support.
  • 08:50 Ecosystem maturity and observability are key orchestration considerations.
  • 10:15 Built-in run history and logs make failures easier to diagnose.
  • 14:20 Platform users can monitor workflows without managing orchestration logic.
  • 17:08 Identity, secrets, and scheduling present ongoing optimization challenges.
  • 19:59 Configuration history and change visibility improve operational reliability.

Thanks for listening to “The Data Flowcast: Mastering Apache Airflow® for Data Engineering and AI.” If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss any of the insightful conversations.

Be Our Guest

Interested in being a guest on The Data Flowcast? Fill out the form and we will be in touch.

Build, run, & observe your data workflows.
All in one place.


Try Astro today and get up to $20 in free credits during your 14-day trial.