There are going to be A LOT of apps developed in the coming years that essentially let you chat with, or ask questions about, your data.
Will that be passé, or even what we want, by then? Who knows, but it’s the hot topic right now and a more novel approach than “read this really long thing”.
As existing large language models (LLMs) evolve, new models are introduced, and developers become more attuned to our needs, we’re going to be spoiled with options.
That’s why it’s important to have a baseline understanding of how the tech behind some of these tools works. If you get a pretty bad answer, the explanation may be something quite simple: the tool is using an older, cheaper model that can’t remember that far back or can’t process the entire document.
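One concrete version of “can’t process the entire document”: every model has a fixed context window measured in tokens, and anything beyond it simply gets cut off or never seen. Here’s a minimal sketch of that check, assuming the common rough heuristic of ~4 characters per token (a real tokenizer, such as OpenAI’s tiktoken, would give exact counts; the window sizes below are just illustrative):

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_in_context(text: str, context_window: int, reserved_for_answer: int = 500) -> bool:
    """True if the document, plus room left for the model's answer,
    fits inside the model's context window."""
    return estimate_tokens(text) + reserved_for_answer <= context_window

doc = "word " * 4000  # ~20,000 characters, roughly 5,000 tokens

print(fits_in_context(doc, context_window=4096))   # a smaller, older-style window -> False
print(fits_in_context(doc, context_window=16384))  # a larger window -> True
```

So the same question over the same document can succeed or fail depending purely on which model a tool picked behind the scenes.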
When we speak of an AI's "understanding" of context, we mean its ability to grasp how different elements of a text relate to one another. This understanding is crucial in Q&A applications, where accurate and meaningful responses depend on the AI's grasp of the text's overall context.
Think of a long, intricate movie where every scene connects to the rest. Just like you need to understand all the scenes to get the full picture, an AI needs to understand the "story" of a document to give you the right answer. It does this by recognizing connections between parts of the text.
<aside> 🗣 Think of it as being a detective in a mystery novel, piecing together clues from different parts of the story to solve the case. The more context the AI has, the more accurately it can 'solve' your query, and the more comprehensive its answer can be.
</aside>