Welcome to the recap of a webinar on notifications in Airflow - a really fun topic that’s relevant to all of us working with data pipelines!
Airflow is an ideal orchestrator: pipelines defined in code make it flexible, a vast network of provider packages and community contributions makes it extensible, and its sophisticated infrastructure makes it highly scalable.
All of it sounds great... But what happens if something goes wrong?
We’d all love to think that our code has no bugs, but that isn't realistic. And even with perfect code, sometimes things happen that are outside of our control: an API or external system might go down, or you might hit a mysterious Kubernetes error that you're never able to replicate. We’ve all been there.
So the question is not really IF something will go wrong, but rather how you handle it when it happens. The first step is knowing that something went wrong at all, and that's what notifications are here for!
Let’s get to know them better.
During the webinar we covered:
- Notification basics
- How notifications work within Airflow and what you can do with them
- Setting up the most common methods: email notifications, Slack notifications, etc.
- More advanced topics: SLAs and data quality checks
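As a taste of the callback-based approach covered in the webinar, here is a minimal sketch of a failure callback. Airflow lets you attach a function to a task via the `on_failure_callback` argument, and passes it a `context` dictionary describing the failed task instance. The helper names below (`format_failure_message`, `notify_on_failure`) are illustrative, and the actual delivery step (Slack webhook, email, etc.) is left as a comment since it depends on which provider packages and connections you have configured.

```python
# Sketch of an Airflow on_failure_callback. The `context` dict is what
# Airflow passes to callbacks; we only rely on the task_instance entry,
# which carries identifiers like dag_id and task_id.

def format_failure_message(context):
    """Build a human-readable alert from Airflow's task context."""
    ti = context.get("task_instance")
    dag_id = getattr(ti, "dag_id", "unknown_dag")
    task_id = getattr(ti, "task_id", "unknown_task")
    return f"Task {task_id} in DAG {dag_id} failed."

def notify_on_failure(context):
    """Called by Airflow when a task fails (if attached to the task)."""
    message = format_failure_message(context)
    # In a real DAG you would deliver `message` here - for example via
    # the Slack provider package or Airflow's email utilities
    # (assumption: the relevant provider and connection are set up).
    print(message)

# Attaching the callback inside a DAG file (illustrative, not run here):
# task = PythonOperator(
#     task_id="transform",
#     python_callable=transform,
#     on_failure_callback=notify_on_failure,
# )
```

Because the callback is plain Python, you can reuse the same function across DAGs, or set it once for every task through `default_args`.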