Gleen AI wants to stop AI hallucinations using enterprise data

Head to our on-demand library to view sessions from VB Transform 2023. Register here


As organizations around the world look to rapidly evaluate, test, and deploy generative AI in their workflows, whether on the backend, the customer-facing frontend, or both, many decision-makers remain genuinely concerned about lingering issues, one of which is AI hallucinations.

But a new startup, Gleen AI, has come on the scene claiming to “solve hallucinations,” according to Ashu Dubey, CEO and co-founder of Gleen AI, who spoke to VentureBeat exclusively in a video call interview.

Today, Gleen AI is announcing a $4.9M funding round from Slow Ventures, 6th Man Ventures, South Park Commons, Spartan Group, and other venture firms and angel investors, including former Vice President of Product Management at Facebook/Meta Platforms Sam Lessin, to keep building its enterprise anti-hallucination data layer software, initially aimed at helping companies build AI chatbots for customer support.

The problem with hallucinations

Generative AI tools, including popular commercially available large language models (LLMs) such as ChatGPT, Claude 2, LLaMA 2, and Bard, are trained to respond to human-input prompts and queries by producing data associated with the words and ideas entered by the human user.

But generative AI models don’t always get this right, and in many cases they produce inaccurate or irrelevant information that the model’s training had previously associated with something the human user entered.

One good recent example: asked “When did Earth eclipse Mars?”, ChatGPT offered a smooth, convincing, and wholly inaccurate explanation (the premise of the question itself is flawed; Mars cannot be eclipsed by Earth).

While these inaccurate responses can sometimes be funny or intriguing, for companies trying to rely on them to produce accurate data for employees or users, the results can be incredibly risky, especially in highly regulated, life-and-death industries such as healthcare, medicine, and heavy industry.

What does Gleen do to prevent hallucinations?

“What we do is, when we send data [from a user] to an LLM, we provide facts that can create a good answer,” Dubey said. “If we don’t think we have enough facts, we won’t send the data to the LLM.”

Specifically, Gleen has created a proprietary AI and machine learning (ML) layer that is independent of any LLM the enterprise customer wants to deploy.

This layer securely sifts through the organization’s internal data, turns it into a vector database, and uses this data to improve the quality of the AI model’s answers.
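
The vector-database step can be sketched roughly as follows. This is a minimal illustration only: it uses toy bag-of-words vectors in place of a real embedding model, and none of these class or function names come from Gleen, whose implementation is not public.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words frequency vector.
    A production system would use a learned embedding model instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Minimal vector database: store document vectors, query by similarity."""
    def __init__(self):
        self.docs = []  # list of (text, vector) pairs

    def add(self, text):
        self.docs.append((text, embed(text)))

    def search(self, query, k=2):
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = VectorStore()
store.add("Refunds are processed within 5 business days.")
store.add("Our API rate limit is 100 requests per minute.")
print(store.search("how long do refunds take", k=1))
```

At query time, the most similar internal documents are retrieved and handed to the LLM as grounding context, rather than letting the model answer from its training data alone.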

Screenshot showing how the Gleen AI layer is created on the user side. Credit: Gleen AI

The Gleen layer does the following:

  1. Compiles structured and unstructured corporate knowledge from multiple sources such as help documents, FAQs, product specs, manuals, wikis, forums, and past chat logs.
  2. Organizes the knowledge and extracts essential facts, eliminating noise and redundancy. This “allows us to glean the signal from the noise,” Dubey said. (It is also the origin of Gleen’s name.)
  3. Creates a knowledge graph to understand the relationships between entities. The graph helps retrieve the most relevant facts for a particular query.
  4. Checks the LLM response against the gathered facts before delivering output. If there is no supporting evidence, the chatbot says “I don’t know” rather than risk a hallucination.
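
The four steps above amount to a grounding gate placed in front of the LLM. A hedged sketch, assuming a keyword-overlap retriever as a stand-in for Gleen's (unpublished) vector index and knowledge graph, and a stub in place of a real LLM call:

```python
def retrieve_facts(query, knowledge):
    """Stand-in for steps 1-3: return facts whose words overlap the query.
    (A real system would query a vector index and knowledge graph.)"""
    q_words = set(query.lower().split())
    return [f for f in knowledge if q_words & set(f.lower().split())]

def grounded_answer(query, knowledge, llm, min_facts=1):
    """Step 4: only call the LLM when enough facts support an answer;
    otherwise say 'I don't know' instead of risking a hallucination."""
    facts = retrieve_facts(query, knowledge)
    if len(facts) < min_facts:
        return "I don't know."
    return llm(query, facts)

def stub_llm(query, facts):
    """Stub LLM that simply cites the facts it was given."""
    return "Based on our docs: " + " ".join(facts)

kb = ["Refunds are processed within 5 business days."]
print(grounded_answer("when are refunds processed", kb, stub_llm))
print(grounded_answer("what is the meaning of life", kb, stub_llm))
```

The key design choice is that the gate sits before the model is ever called: an unsupported question never reaches the LLM, so there is no hallucinated answer to filter out afterward.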

The AI layer acts as a checkpoint, validating the LLM’s response before it is delivered to the end user. This eliminates the risk of the chatbot providing false or fabricated information. It’s like having a quality-control manager for the chatbot.

“We only engage with the LLM when we have great confidence that the facts are exhaustive,” Dubey explained. “Otherwise, we are transparent about needing more information from the user.”

Gleen also enables users to quickly create customer support chatbots for their customers, adjusting the bot’s “persona” according to the use case.
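
Persona tuning of this kind typically amounts to parameterizing the system prompt sent to the underlying LLM. A hypothetical sketch (the field names here are invented for illustration, not Gleen's actual settings):

```python
from dataclasses import dataclass

@dataclass
class BotPersona:
    """Hypothetical persona settings for a support chatbot."""
    name: str = "Support Bot"
    tone: str = "friendly"          # e.g. "friendly", "formal", "concise"
    sign_off: str = "Happy to help!"

    def system_prompt(self) -> str:
        """Render the persona as a system prompt for the underlying LLM."""
        return (f"You are {self.name}, a customer support assistant. "
                f"Answer in a {self.tone} tone, using only the facts provided. "
                f"If the facts are insufficient, say you don't know. "
                f"End every reply with: {self.sign_off}")

prompt = BotPersona(name="Ava", tone="formal").system_prompt()
print(prompt)
```

Because the persona lives in configuration rather than in the model, the same grounded knowledge base can back a formal bot for one product line and a casual one for another.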

Screenshot of Gleen AI’s bot persona control settings. Credit: Gleen AI

Gleen’s solution is LLM-agnostic, and can support any of the many leading models available that offer API integrations.

For customers who want the most popular LLM, it supports OpenAI’s GPT-3.5 Turbo model. For those concerned about sending data to the LLM’s hosting company, it also supports running LLaMA 2 on the company’s private servers (although OpenAI has said repeatedly that it does not collect or use customer data to train its models, except when the customer expressly allows it).

For some security-sensitive customers, Gleen offers the option of using a private LLM that never touches the open internet. But Dubey believes that the LLMs themselves are not the source of the hallucinations.

“LLMs hallucinate when they are not given enough relevant facts to ground a response,” Dubey said. “Our precision layer solves this problem by controlling the inputs to the LLM.”

Early feedback is promising

Right now, the end result for a customer using Gleen is a custom chatbot that can plug into their Slack or serve as an end-user-facing support agent.

An example of a Gleen AI chatbot. Credit: Gleen AI

Gleen AI is already being used by clients in quantum computing, cryptocurrency, and other technical fields where accuracy is critical.

“Implementing Gleen AI took almost zero effort on our part,” said Estevan Vilar, community support at Matter Labs, a company working to make the Ethereum cryptocurrency more enterprise-friendly. “We just made a few connections, and the rest was seamless.”

Example of a Gleen AI chatbot integrated into Slack. Credit: Gleen AI

Gleen is offering potential customers a free “AI playground” where they can build their own custom chatbot using their company’s data.

As more companies look to harness the power of LLMs while mitigating their drawbacks, Gleen AI’s precision layer may provide a path for them to deploy generative AI at the level of accuracy that they and their customers demand.

“Our vision is that every company will have an AI assistant powered by their own knowledge graph,” said Dubey. “This vector database will become as invaluable as their website, enabling custom automation across the entire customer lifecycle.”

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.
