The AI Spring: How Demand for Production-Ready GenAI Projects is Continuing to Grow

It’s easy to make fun of hype cycles. After all, it’s usually a safe bet that any new technology is probably not quite as magical as it first appears. But sometimes you can be very wrong, and technologies do in fact change things forever. Recent skepticism about Generative AI is healthy, for the short-term impacts are often exaggerated. But the long-term impacts are most certainly profound.

An AI Winter?

As hype cycles go, the one for Generative AI is moving especially quickly. It’s been barely a couple of years since LLMs became a target of intense funding, and already we seem to be in the Trough of Disillusionment. People are talking about the next AI Winter.

Disillusionment? Maybe. The claims made about Generative AI are
undoubtedly inflated, sometimes wildly. It’s time for us to calm down and
recognize that Generative AI has limitations. But “AI Winter?” That
doesn’t seem right. First, the label is ahistorical: it’s not reasonable to use “AI Winter” to refer to two fundamentally different events. The first AI Winter was a freeze, not a slowdown, and it lasted decades. After the famous Dartmouth Workshop in 1956, a period of
decades. After the famous Dartmouth Workshop in 1956, a period of
excitement met real physical limitations — lack of data, lack of compute.
Today’s revolution in AI, specifically deep learning, which began just
over a decade ago, was the realization of those 60-year-old dreams, now
powered by an abundance of compute and unprecedented access to all the
world’s data.

Second, while it’s true that Generative AI projects are meeting challenges
— specifically around cost, quality, and control — there are still plenty of successful projects creating real business value. And GenAI has resulted
in fairly fundamental changes in the way we develop predictive models, and
even the way we code and write. You can’t undo that.

Third, there’s plenty of evidence that production-ready AI projects are growing, not shrinking. Financial automation company (and Astronomer customer) Ramp has reported that AI is the fastest-growing expense in corporate budgets. At Astronomer, we’re also able to measure this growth directly. As the team that manages the world’s largest Airflow deployments, we have some idea of the types of workloads that organizations run. And we can see that about 30% of Airflow teams use it for MLOps (training, serving, or managing models), and just over 20% are already using it for Generative AI (especially fine-tuning, RAG, and batch inference with LLMs). Our customers
consistently find that using a managed Airflow service frees them up to
apply data orchestration to many different use cases — none more so than
in the field of AI, which every business unit is being asked to explore
and evaluate for a multitude of different tasks. In fact, we see
ML-related tasks increase eightfold after initial onboarding, helping
teams qualify the practicality and usefulness of AI far faster than they could without a reliable path to production.
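
To make that concrete, here’s a minimal sketch of the kind of batch LLM inference pipeline teams run on Airflow, using the TaskFlow API. The DAG name, sample records, and the stand-in scoring logic (where a real pipeline would call an LLM provider) are illustrative assumptions, not any customer’s actual workload.

from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def batch_llm_inference():
    # Hypothetical pipeline: extract records, score them, load the results.

    @task
    def extract_records() -> list[str]:
        # In practice this would pull new rows from a warehouse or object store.
        return ["my order never arrived", "love the product, thanks!"]

    @task
    def score_records(records: list[str]) -> list[dict]:
        # Stand-in for a real LLM call; swap in your provider's client here.
        scored = []
        for text in records:
            label = "negative" if "never" in text else "positive"
            scored.append({"text": text, "sentiment": label})
        return scored

    @task
    def load_results(scored: list[dict]) -> None:
        # A real task would write back to the warehouse; here we just log.
        for row in scored:
            print(row)

    load_results(score_records(extract_records()))


batch_llm_inference()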

Production GenAI Powered by Airflow

Finally, there’s plenty of real-world evidence, and compelling stories of
AI in production, much of it using Airflow and Astro as the way to turn
prototypes into reality. And while larger companies are definitely making
major investments in AI, and are seeing meaningful
returns,
it’s instructive to look at smaller companies, which can innovate faster
and are particularly attuned to fast ROI.

  • The Data Flowcast podcast recently talked to Laurel, an automated timekeeping software company that is using fine-tuned large language models to automate the process of generating legal reports and billing. Their entire business model depends on Airflow training and deploying models.

  • Dosu is using Generative AI to automate the more foundational aspects of running software projects, including triaging issues and maintaining documentation.

  • Anastasia uses proprietary Generative AI to help SMBs predict sales trends and streamline inventory management.

For companies like these, the performance of AI models is nothing less
than the difference between commercial success and failure.

My favorite example comes from the team at
ASAPP,
who use Generative AI to help organizations like JetBlue and DISH improve
the productivity of their contact centers, increasing customer
satisfaction while reducing costs. Their ML team told me how ASAPP’s
architecture uses Apache Spark and Airflow to manage over a million jobs
daily across more than 5,000 DAGs. These workflows involve advanced tasks
such as language identification, automatic speech recognition, and
summarization, all enhanced by fine-tuning of large language models based
on customer-specific data to ensure accuracy and relevance. Airflow’s
Python-based extensibility, robust ecosystem, and seamless integration
with Kubernetes made it a natural fit for ASAPP’s AI operations, allowing
them to streamline development, reduce processing times, and deploy
scalable, mission-critical Generative AI solutions.
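
As a rough illustration of that pattern (a sketch, not ASAPP’s actual code), here’s how a single workflow step might dispatch a summarization batch to Kubernetes from Airflow. The DAG id, container image, and command are assumptions made for the example.

from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator

with DAG(
    dag_id="summarize_call_transcripts",  # hypothetical DAG, one of thousands
    schedule="@hourly",
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    # Run the summarization batch in its own container on Kubernetes,
    # keeping the model environment isolated from the scheduler.
    summarize = KubernetesPodOperator(
        task_id="summarize_batch",
        name="summarize-batch",
        image="registry.example.com/transcript-summarizer:latest",  # hypothetical image
        cmds=["python", "summarize.py", "--since", "{{ data_interval_start }}"],
        get_logs=True,
    )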

Perhaps we’ll look back on 2024 not so much as the start of the AI Winter,
but as a heatwave during the AI Spring. From what we’ve seen at
Astronomer, the work that the ASAPP team is doing is one of many examples
of how Generative AI is flourishing — and how Airflow is playing a role in
that.

For further information on how organizations are creating value from AI
and machine learning, download our Guide to Data Orchestration for
Generative AI.
