
Improving Ask Astro: The Journey to Enhanced Retrieval Augmented Generation (RAG) with Cohere Rerank, Part 4


Ask Astro: A RAG-based Chat Assistant using Large Language Models (LLM)

Ask Astro has been a popular tool for developers and customers seeking answers about Astronomer products and Apache Airflow® since its initial release. Built on a RAG-based LLM chat system, Ask Astro aims to give users swift, straightforward answers to their questions, along with the related documents as source information. It was released in the #airflow-astronomer channel on Apache Airflow®’s Slack, as well as at ask.astronomer.io, and within a few days it was answering hundreds of questions a day. However, as the application gained popularity and queries became more diverse, the need for more refined document retrieval and greater answer accuracy became evident. To address this, we integrated Weaviate’s Hybrid Search and Cohere Rerank into the existing system, significantly improving the quality of Ask Astro’s responses.
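At a high level, hybrid search blends a keyword-based score with a vector-similarity score for each candidate document before a reranker reorders the results. The sketch below is purely illustrative (it is not Ask Astro's actual code, and the score fusion shown is a simplification of what Weaviate does internally with its `alpha` parameter), but it conveys the core idea:

```python
# Illustrative sketch of hybrid-score fusion, NOT Ask Astro's implementation.
# Weaviate's hybrid search exposes a similar `alpha` knob: alpha=1 is pure
# vector search, alpha=0 is pure keyword (BM25) search.

def hybrid_scores(keyword_scores: dict, vector_scores: dict, alpha: float = 0.5) -> dict:
    """Fuse per-document keyword and vector scores into one hybrid score."""
    docs = set(keyword_scores) | set(vector_scores)
    return {
        d: alpha * vector_scores.get(d, 0.0) + (1 - alpha) * keyword_scores.get(d, 0.0)
        for d in docs
    }

# Hypothetical scores for three candidate documents:
keyword = {"doc_a": 0.9, "doc_b": 0.2}          # BM25-style keyword relevance
vector = {"doc_b": 0.8, "doc_c": 0.6}           # embedding cosine similarity
fused = hybrid_scores(keyword, vector, alpha=0.5)
ranked = sorted(fused, key=fused.get, reverse=True)
# A reranker such as Cohere Rerank would then rescore `ranked`
# against the original query before the LLM sees the documents.
```

In a real deployment the fused candidate list is passed to a cross-encoder reranker (Cohere Rerank, in Ask Astro's case), which scores each document directly against the query text rather than relying on the retrieval scores alone.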

 

Meet Ask Astro

Ask Astro, an LLM-powered chatbot, harnesses Airflow knowledge from various platforms to deliver Astronomer's extensive expertise on demand. As an open-source project, it also serves as a starting point for operationalizing your LLM applications.