Large language models (LLMs) like OpenAI’s GPT-4 are powerful, paradigm-shifting tools that promise to upend industries. But they suffer from limitations that make them less attractive to enterprise organizations with strict compliance and governance requirements. For example, LLMs have a tendency to make up information with high confidence, and they’re architected in a way that makes it difficult to remove — or even revise — their knowledge base.

To address these and other roadblocks, Douwe Kiela co-founded Contextual AI, which today launched out of stealth with $20 million in seed funding. Backed by investors including Bain Capital Ventures (which led the seed), Lightspeed, Greycroft and SV Angel, Contextual AI ambitiously aims to build the “next generation” of LLMs for the enterprise.

“We created the company to address the needs of enterprises in the burgeoning area of generative AI, which has thus far largely focused on consumers,” Kiela told TechCrunch via email. “Contextual AI is solving for several obstacles that exist today in getting enterprises to adopt generative AI.”

Kiela and Contextual AI’s other co-founder, Amanpreet Singh, worked together at AI startup Hugging Face and at Meta before striking out on their own in early February. While at Meta, Kiela led research into a technique called retrieval augmented generation (RAG), which forms the basis of Contextual AI’s text-generating AI technology.

So what’s RAG? In a nutshell, RAG — which Google’s DeepMind R&D division has also explored — augments LLMs with external sources, like files and webpages, to improve their performance. Given a prompt (e.g. “Who’s the president of the U.S.?”), RAG looks for data within the sources that might be relevant. Then, it packages the results with the original prompt and feeds it to an LLM, generating a “context-aware” response (e.g. “The current president is Joe Biden, according to the official White House website”).
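The retrieve-then-augment flow described above can be sketched in a few lines of Python. This is a toy illustration, not Contextual AI's actual system: the keyword-overlap retriever stands in for a real dense or sparse retriever, and all function names are hypothetical.

```python
import re

def _tokens(text):
    """Lowercase and strip punctuation so 'U.S.?' matches 'U.S.'."""
    return set(re.sub(r"[^\w\s]", " ", text.lower()).split())

def retrieve(query, documents, k=1):
    """Rank documents by naive word overlap with the query (a toy
    stand-in for a production retriever)."""
    q = _tokens(query)
    ranked = sorted(documents,
                    key=lambda d: len(q & _tokens(d)),
                    reverse=True)
    return ranked[:k]

def build_augmented_prompt(query, documents):
    """Package the retrieved passages with the original prompt, so the
    LLM can ground its answer in the supplied context."""
    passages = retrieve(query, documents)
    context = "\n".join(f"- {p}" for p in passages)
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\n"
            f"Answer using only the context above, citing your source.")

docs = [
    "The official White House website lists Joe Biden as the current U.S. president.",
    "Nepal's GDP figures are published annually by the World Bank.",
]
prompt = build_augmented_prompt("Who's the president of the U.S.?", docs)
# `prompt` would then be sent to any LLM completion endpoint.
```

In a real pipeline, the final prompt is handed to the language model of your choice; because the answer is generated from the supplied context, the model can cite its source rather than rely on whatever it memorized during training.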

By contrast, in response to a question like “What’s Nepal’s GDP by year?,” a typical LLM (e.g. ChatGPT) might only return the GDP up to a certain date and fail to cite the source of the information.

Kiela asserts that RAG can solve the other outstanding issues with today’s LLMs, like those around attribution and customization. With conventional LLMs, it can be tough to know why the models respond the way they do, and adding data sources to LLMs often requires retraining or fine-tuning — steps (usually) avoided with RAG.

“RAG language models can be smaller than equivalent language models and still achieve the same performance. This makes them a lot faster, meaning lower latency and lower cost,” Kiela said. “Our solution addresses the shortcomings and inherited issues of existing approaches. We believe that integrating and jointly optimizing different modules for data integration, reasoning, speech and even seeing and listening will unlock the true potential of language models for enterprise use cases.”

My colleague Ron Miller has mused about how generative AI’s future in the enterprise could be smaller, more focused language models. I don’t dispute that. But perhaps instead of exclusively fine-tuned, enterprise-focused LLMs, it’ll be a combination of “smaller” models and existing LLMs augmented with troves of company-specific documents.

Contextual AI isn’t the first to explore this idea. OpenAI and its close partner, Microsoft, recently launched a plug-ins framework that allows third parties to add sources of information to LLMs like GPT-4. Other startups, like LlamaIndex, are experimenting with ways to inject personal or private data, including enterprise data, into LLMs.

But Contextual AI claims to have made inroads in the enterprise. While the company is pre-revenue at present, Kiela says Contextual AI is in talks with Fortune 500 companies to pilot its technology.

“Enterprises need to be certain that the answers they’re getting from generative AI are accurate, reliable and traceable,” Kiela said. “Contextual AI will make it easy for employers and their valuable knowledge workers to gain the efficiency benefits that generative AI can provide, while doing so safely and accurately … Several generative AI companies have stated they will pursue the enterprise market, but Contextual AI will take a different approach by building a much more integrated solution geared specifically for enterprise use cases.”

Contextual AI, which has around eight employees, plans to spend the bulk of its seed funding on product development, which will include investing in a compute cluster to train LLMs. The company plans to grow its workforce to close to 20 people by the end of 2023.

Contextual AI launches from stealth to build enterprise-focused language models by Kyle Wiggers originally published on TechCrunch