Contributions

Article
We began building with Pinecone and then decided we wanted to switch to Qdrant (open source, able to run on-prem, with a managed cloud option and an emphasis on speed and storage efficiency). But we already had thousands of vectors representing our data saved with the initial provider. This blog post explains our priorities, our thinking, and what we learned as we navigated the switch.
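The shape of such a migration is roughly: read vectors (and their metadata) out of the old index in batches, then upsert them into a new collection. Below is a minimal sketch of that idea using the Pinecone and Qdrant Python clients; the index and collection names, vector dimension, Qdrant URL, batch size, and id handling are all assumptions for illustration, not the actual migration code from the post.

```python
# Hypothetical sketch: copy vectors from a Pinecone index into a Qdrant collection.
# Names, credentials, dimension, and batching are placeholders; adjust to your setup.
from pinecone import Pinecone
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

pc = Pinecone(api_key="PINECONE_API_KEY")           # placeholder credentials
source = pc.Index("our-old-index")                   # hypothetical index name

qdrant = QdrantClient(url="http://localhost:6333")   # or a Qdrant Cloud URL
qdrant.create_collection(
    collection_name="our-new-collection",            # hypothetical collection name
    vectors_config=VectorParams(size=1536, distance=Distance.COSINE),
)

# Ids to migrate, gathered from your own records or Pinecone's listing API.
vector_ids: list[str] = []

BATCH = 100
for start in range(0, len(vector_ids), BATCH):
    batch_ids = vector_ids[start:start + BATCH]
    fetched = source.fetch(ids=batch_ids)
    points = [
        PointStruct(
            # Qdrant point ids must be integers or UUIDs, so we assign new
            # integer ids and keep the original Pinecone id in the payload.
            id=start + offset,
            vector=v.values,
            payload={**(v.metadata or {}), "pinecone_id": pid},
        )
        for offset, (pid, v) in enumerate(fetched.vectors.items())
    ]
    qdrant.upsert(collection_name="our-new-collection", points=points)
```

One wrinkle the sketch glosses over is id mapping: Pinecone ids are strings while Qdrant requires integers or UUIDs, so the original id is carried along in the payload here.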
Article
This post is the first in a series exploring the limits of large language models (LLMs) with respect to memory overhead and context windows. The goal is to impart a high-level understanding of what an LLM is and the limitations of such a system as of late 2023.
Article
New blog from Revelry engineer Brandon Bennett.