
Precision in the knowledge economy

by Patrick Penzo | September 26, 2024

If you think about it carefully, Large Language Models (LLMs) are really just very good at guessing the most likely next word in a sentence.
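
To make that concrete, here is a toy sketch of what "picking the most likely next word" looks like. The words and scores are made up purely for illustration; no real model works from a table this small.

```python
# Toy illustration of next-word prediction (hypothetical scores, not a real model).
import math

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign after "The capital of France is ..."
candidates = {"Paris": 9.1, "Lyon": 4.2, "pizza": 0.3}
for word, prob in zip(candidates, softmax(list(candidates.values()))):
    print(f"{word}: {prob:.3f}")
# "Paris" wins because it is the most probable continuation, not because the
# model looked the fact up anywhere.
```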

Given that, it’s almost a miracle how extensively AI is helping us across a range of jobs. The ability to consume information and return summarized answers to any query is incredible. 

There is, however, a limit to how useful probable answers are. In the knowledge economy, where the accuracy of the information is key to delivering value, probable doesn’t cut it. Imagine an account executive offering false information about how their product will solve a particular customer problem. Or think of a wealth manager advising a client about industry trends that aren’t real. 

This challenge with getting AI to deliver consistently trustworthy information is at the heart of our focus on the Quench Precision Engine. 

Before we talk about the Precision Engine, here are a few examples of how LLMs can let you down. 

Hallucinations & flawed reasoning

First, the most well-known risk with LLMs is hallucination. That's when the AI produces information that is incorrect, fabricating data where it has none to refer to.

LLMs can also get caught up in flawed reasoning and deliver the wrong conclusion to a simple, sometimes obvious, question.

Companies continue to invest in improving the reasoning capabilities of LLMs. Even so, mistakes where the model confidently tells you things that simply never happened remain hard to eliminate.

Inconsistent results

Additionally, very similar or even identical prompts can generate dramatically different results. Sometimes these results are accurate, sometimes they are not. While it makes sense for an answer to vary based on the specific context of a user, the knowledge contained within the answer should consistently be accurate and complete. 
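
One reason for this, sketched below, is that most LLM APIs sample from a probability distribution rather than always returning the single most likely continuation. The probabilities and the temperature value are illustrative; parameter names vary by provider.

```python
# Sketch of why identical prompts can produce different outputs: generation is
# sampling, not lookup.
import random

def sample_next_word(word_probs, temperature=0.8):
    """Re-weight probabilities by temperature, then draw a random sample."""
    weights = [p ** (1.0 / temperature) for p in word_probs.values()]
    return random.choices(list(word_probs.keys()), weights=weights, k=1)[0]

word_probs = {"rose": 0.5, "grew": 0.3, "fell": 0.2}
# The same "prompt" run twice can continue differently.
print(sample_next_word(word_probs))
print(sample_next_word(word_probs))
```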

Capped data

Answering a user's question takes significant computing power, and the more data a prompt contains, the less accurately the LLM processes it and delivers a result. To limit the cost of running any one query, many companies cap how much data a user can supply to a Large Language Model.
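
As a rough sketch of what such a cap looks like in practice (the 8,000-token limit and the word-to-token ratio below are illustrative, not any specific vendor's numbers):

```python
# Illustrative context cap: if a document exceeds the limit, it has to be
# truncated or split before it can be sent along with a query.
MAX_CONTEXT_TOKENS = 8_000  # hypothetical limit

def rough_token_count(text: str) -> int:
    # Crude heuristic: English text averages roughly 0.75 words per token.
    return int(len(text.split()) / 0.75)

def fits_in_context(document: str) -> bool:
    return rough_token_count(document) <= MAX_CONTEXT_TOKENS

two_hour_transcript = "word " * 20_000  # ~20,000 words of webinar transcript
print(fits_in_context(two_hour_transcript))  # False: too big to send as-is
```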

What is the Quench Precision Engine?

At its core, Quench takes a company’s knowledge stored in files, recordings and slides, and repurposes it to help knowledge workers thrive at work.

Quench helps knowledge workers whether they are looking for a specific resource, want to chat with an asset or need to prepare for real-world scenarios with a role play. But that value is only as good as the precision of our results.

There are three pillars to our Precision Engine. 

User intent

The first step is understanding the context of a user. 

“How can I persuade my customer?” is a question that requires very different answers based on who is asking it and who the customer is. 

In practice, that starts with understanding what the user’s intent is. That is why we break down our AI functionality into key features: Search, Chat and Role Play. This helps us take a generic prompt from a user and make it more specific by triggering a different feature. More specific prompts deliver better results for users.
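
As a simplified illustration of that routing step (the keyword rules below are purely hypothetical; a production system would rely on a trained classifier rather than keyword matching):

```python
# Hypothetical intent router: map a generic prompt onto one of the features
# named above so it can be turned into a more specific request.
def route_intent(prompt: str) -> str:
    p = prompt.lower()
    if any(k in p for k in ("find", "where is", "show me")):
        return "Search"
    if any(k in p for k in ("role play", "practice", "pretend you are")):
        return "Role Play"
    return "Chat"

print(route_intent("Where is the latest pricing deck?"))  # Search
print(route_intent("Pretend you are a skeptical CFO"))    # Role Play
print(route_intent("How can I persuade my customer?"))    # Chat
```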

Understanding content meaning

The more content a Large Language Model has to process, the less accurate its responses become. This problem is often called being “lost in the middle”: information buried deep in a long prompt tends to get overlooked.

That is why Quench focuses significant resources on extracting specific pieces of knowledge from a larger resource.

If a two-hour webinar contains only 10 minutes that are relevant to a user’s query, providing the full recording to a large language model only dilutes the model’s attention across unnecessary data.

Quench breaks down information into component parts and only submits the relevant parts to the LLM, producing more focused responses to a user’s query.
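
A minimal sketch of that idea, assuming simple keyword overlap as the relevance score (a production system would use embeddings or a trained retriever instead):

```python
# Split a long transcript into chunks and keep only the chunks that overlap
# with the query; only those chunks are sent to the LLM.
def chunk(text: str, size: int = 50) -> list[str]:
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def top_chunks(query: str, chunks: list[str], k: int = 2) -> list[str]:
    q_terms = set(query.lower().split())
    scored = [(len(q_terms & set(c.lower().split())), c) for c in chunks]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for score, c in scored[:k] if score > 0]

transcript = (
    "welcome everyone and thanks for joining todays webinar " * 20
    + "when handling renewal objections focus on budget timing and champions "
    + "thanks again see you next quarter " * 20
)
relevant = top_chunks("handling renewal objections", chunk(transcript))
print(len(relevant), "relevant chunk(s) sent to the LLM instead of the full recording")
```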

Matching user intent and content meaning

The final step is a matching exercise. We use mathematical techniques to map our specific understanding of the user’s intent to the most relevant parcels of knowledge extracted from across the company’s content.

This mapping exercise is a key part of our secret sauce, and it is what makes our Precision Engine more accurate than a direct integration with an LLM.
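
Quench doesn’t publish the details, but one common way to implement this kind of matching is to represent both the user’s intent and each parcel of knowledge as vectors and rank parcels by cosine similarity. The tiny hand-made vectors below stand in for what an embedding model would produce; this is a sketch of the general approach, not Quench’s exact method.

```python
# Hedged sketch of vector matching: rank knowledge parcels by cosine
# similarity to the intent vector. Vectors here are illustrative stand-ins.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

intent_vec = [0.9, 0.1, 0.3]  # e.g. "handle pricing objections"
parcels = {
    "pricing objection playbook": [0.8, 0.2, 0.4],
    "holiday party logistics":    [0.1, 0.9, 0.0],
}
best = max(parcels, key=lambda name: cosine(intent_vec, parcels[name]))
print(best)  # pricing objection playbook
```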

Want to try it yourself?

If you are curious about how Quench delivers high precision, or you want to benchmark our precision against your internal solution, we are up for the challenge. 

Go to our homepage and click Get Started. A member of our team will be happy to help you get set up. 
