Cheat sheet: Enterprise AI terminology

We compiled this handy cheat sheet that includes the terms you’re likely to hear in conversations about enterprise AI.

Lena Webster

Growth Support Manager at Coda

For those of us who aren’t engineers, generative AI can seem a little like magic, complete with its own set of incantations and spell components. Luckily, it’s not quite as arcane as all that. The vocabulary has a small learning curve, but we have you covered. Since the introduction of Coda Brain, our AI solution, plenty of our co-workers have had to learn the difference between ACLs and RBAC, so we compiled this handy cheat sheet to share. It covers the main terms you may come across in conversations about Coda Brain and enterprise AI more generally.

Access Control Lists (ACLs)

This is a list of permissions attached to a document or database that tells Coda Brain who can access or edit which information. In other words, it’s a permissions list that helps Coda Brain navigate an organization’s structure and data according to each employee’s access level. Good security is essential to creating worthwhile enterprise AI: we want employees to get any information they need without exposing sensitive data. So Coda Brain could dig up a widely shared HR policy from the employee handbook, but it wouldn’t share a list of upcoming promotions with anyone who asked.
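
If you like seeing ideas in code, here’s a toy Python sketch of what an ACL boils down to: a per-resource list of who’s allowed in. The document names and users are invented, and this is not how Coda Brain actually stores permissions; it’s just the shape of the idea.

```python
# Toy sketch of an access control list: each resource keeps its own
# allow list. Names are made up for illustration only.

acl = {
    "employee-handbook": {"everyone"},          # broadly shared document
    "promotion-planning": {"maria", "deshawn"}, # restricted document
}

def can_read(user: str, resource: str) -> bool:
    """Return True if the user (or 'everyone') appears on the resource's ACL."""
    allowed = acl.get(resource, set())
    return "everyone" in allowed or user in allowed

print(can_read("alex", "employee-handbook"))    # True
print(can_read("alex", "promotion-planning"))   # False
```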

Citations

In an AI context, this means exactly the same thing it did when you were writing high school English papers. When Coda Brain pulls up information, it always cites its sources, so you know you can trust the answers it generates. Citations also let you trace the data or insight the AI provides back to its origin.

Consumer AI

This refers to AI solutions meant to be used by the general public. ChatGPT is the most famous example, though plenty of other external-facing AI solutions (like Google Translate or all those AI additions to your favorite apps and websites) fall under this umbrella, too. Consumer AI usually pulls from large amounts of publicly available data to generate answers and help users with everyday tasks.

Enterprise AI

Enterprise AI, as opposed to consumer AI, is built by and for cutting-edge enterprise teams. Generally, enterprise AI solutions are integrated into an organization’s systems, which gives them much better access to its data. Enterprise AI solutions like Coda Brain prioritize security and accuracy, and the answers enterprise AI retrieves for employees are contextualized by a company’s accumulated institutional knowledge.

Hallucinations

When an AI model provides an incorrect or made-up answer to a query, it’s called a hallucination. With consumer AI, this happens fairly often, as the models can’t always tell the difference between an academic paper and a joke on Reddit. It’s annoying to be fed made-up information when you’re asking ChatGPT to help with your weekend plans. It could be disastrous when you’re using AI in an enterprise context. Our enterprise AI solution, Coda Brain, cites its sources, so you always know where an answer comes from.

Knowledge workers

If you have an email job, this is probably you. A knowledge worker is anyone who traffics in specialized information or whose main contribution to their work is knowledge. That’s everyone from engineers and marketers to lawyers and pharmacists. Knowledge workers are Coda Brain’s ideal users, as it was designed to help people sort through (and actually act upon!) huge amounts of data.

Large Language Model (LLM)

A large language model (LLM) is a computational model trained on enormous amounts of text in order to generate text. LLMs learn language as a set of statistics, picking up which words are most likely to follow one another, and generate answers to queries accordingly.
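
To make that “set of statistics” idea concrete, here’s a toy Python sketch that counts which word most often follows each word in a tiny text sample, then uses those counts to continue a phrase. Real LLMs use neural networks trained on billions of words, but the underlying intuition is similar.

```python
# Toy next-word predictor: count word-to-word transitions in a sample
# text, then greedily pick the most likely continuation.

from collections import Counter, defaultdict

text = "the report is ready the report is late the meeting is ready"
words = text.split()

next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def continue_phrase(word: str, steps: int = 3) -> list[str]:
    """Repeatedly append the word most likely to follow the last one."""
    phrase = [word]
    for _ in range(steps):
        candidates = next_word_counts.get(phrase[-1])
        if not candidates:
            break
        phrase.append(candidates.most_common(1)[0][0])
    return phrase

print(continue_phrase("the"))  # ['the', 'report', 'is', 'ready']
```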

Prompt engineering

If you’ve ever struggled to get consumer AI to do what you want, someone may have told you you’re simply asking it the wrong questions. As we discussed under hallucinations, AI issues aren’t always user error, especially if you’re using consumer AI that may be pulling information from dodgy sources. But it sometimes takes a bit of iteration to communicate exactly what you want from a model so it can give you the best possible results. That’s the art of prompt engineering. It involves understanding how an AI model processes information and using that knowledge to formulate questions or statements that guide the AI toward useful, accurate, or creative responses. Prompt engineering also includes adding relevant context to a prompt. For example, in Coda Brain you can reference a specific document, table, or even a row in a table by using an @, so Coda Brain knows where to look for the latest numbers.
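
Here’s a small, made-up illustration of the difference context makes. The ask_model function is just a stand-in for whatever model you’d actually call, and the campaign numbers are invented; the point is how much more the second prompt gives the model to work with.

```python
# Two prompts for the same question: one vague, one with the relevant
# data pasted in. ask_model is a placeholder for a real LLM call.

def ask_model(prompt: str) -> str:
    # Placeholder: a real implementation would send the prompt to a model.
    return f"(model response to: {prompt!r})"

vague_prompt = "How did the campaign do?"

specific_prompt = (
    "Using the Q3 email campaign results below, summarize the open rate "
    "and click-through rate in two sentences.\n\n"
    "campaign, open_rate, ctr\n"
    "Q3 newsletter, 42%, 3.1%\n"
    "Q3 product launch, 55%, 4.8%\n"
)

print(ask_model(vague_prompt))
print(ask_model(specific_prompt))
```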

Retrieval Augmented Generation (RAG)

Retrieval Augmented Generation is what makes enterprise AI like Coda Brain so trustworthy. General-purpose large language models are, more or less, very sophisticated predictive text machines, not search engines. They’re made to give you an answer, any answer, so they’ll put together a sentence when you ask. RAG gives LLMs boundaries. With RAG, Coda Brain won’t go scanning the entire internet to tell you which of your company’s ad campaigns performed best; it has much better search capabilities because it knows which documents to check. AI equipped with RAG is like a librarian who can point you to the exact page in the exact book you’re looking for, instead of just telling you which section to browse. The “generation” part of RAG means this librarian would even help you write a first draft. And, with the addition of ACLs and RBAC, Coda Brain knows better than to share sensitive information with employees who don’t have the right permissions.
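
For the curious, here’s a minimal sketch of the retrieve-then-generate loop that RAG describes. The documents and the simple word-overlap scoring are invented for illustration; production systems use far more sophisticated retrieval and a real LLM for the generation step.

```python
# Minimal retrieve-then-generate loop: rank a handful of documents
# against the question, then hand the best match to the "generator".

documents = {
    "ads-q3-review": "The spring ad campaign had the highest click-through rate in Q3.",
    "vacation-policy": "Employees accrue 20 vacation days per year.",
}

def retrieve(question: str, top_k: int = 1) -> list[str]:
    """Rank documents by how many words they share with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def generate(question: str, context: list[str]) -> str:
    """Stand-in for the LLM call: show the augmented prompt it would see."""
    return f"Answer '{question}' using only: {' '.join(context)}"

question = "Which ad campaign performed best?"
print(generate(question, retrieve(question)))
```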

Role-Based Access Control (RBAC)

Role-Based Access Control (RBAC) is another way to set who can access or interact with any given document or database. It’s a list of roles in an organization and the permissions associated with each role. Explicitly defining which positions within an organization can read or edit which data sets boundaries for how information is shared across the network. Roles could be anything from hierarchical titles like manager or technician to departmental assignments like IT or HR. RBAC is a bit easier to set up than ACLs, since permissions aren’t tied to individual names: when an employee is hired or promoted, they’re automatically granted their role’s access. RBAC gives Coda Brain boundaries for which information it can share with which employees.
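
In code-sketch form, RBAC is just a mapping from roles to permissions, plus a mapping from people to roles. The role names and permissions below are invented, and this isn’t Coda Brain’s implementation; it simply shows why adding a new hire is easier than editing per-person lists.

```python
# Toy RBAC: permissions attach to roles, and people inherit whatever
# their role allows. All names are made up for illustration.

role_permissions = {
    "hr": {"read:handbook", "edit:handbook", "read:compensation"},
    "engineer": {"read:handbook", "read:runbooks"},
}

user_roles = {
    "priya": "hr",
    "alex": "engineer",
}

def has_permission(user: str, permission: str) -> bool:
    """Check whether the user's role grants the requested permission."""
    role = user_roles.get(user)
    return permission in role_permissions.get(role, set())

print(has_permission("alex", "read:handbook"))      # True
print(has_permission("alex", "read:compensation"))  # False
```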

Structured data

Structured data is simply data that’s already organized into a consistent format, like a chart or table. When it comes to organizational data, storing things in a structured way (usually in a database or spreadsheet) makes them much easier to search and analyze. Unlike many other AI solutions, Coda Brain can respond to a query with structured responses, pre-sorting answers into live, editable tables that can be easily added to a doc. Better yet, those tables will continue to pull in whatever information you searched for.

Text to SQL Engine

This element of Coda Brain’s build translates natural-language queries into SQL queries. You can ask Coda Brain a question the way you’d phrase it to a human being, and the Text to SQL Engine turns it into a request the system can run against structured data. Not many generative AI solutions can turn an unstructured question into a structured answer, but you can ask Coda Brain a question and expect the results you need sorted into a table.
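
Here’s an invented example of the kind of translation a text-to-SQL step performs, using a throwaway SQLite table. The table, columns, and the SQL itself are illustrative only; a real engine generates the query automatically from your actual data and schema.

```python
# Illustration of text-to-SQL: a natural-language question and the kind
# of SQL query an engine might produce for it, run against a toy table.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE campaigns (name TEXT, clicks INTEGER, impressions INTEGER)")
conn.executemany(
    "INSERT INTO campaigns VALUES (?, ?, ?)",
    [("Spring launch", 480, 12000), ("Summer sale", 320, 15000)],
)

question = "Which campaign had the best click-through rate?"

# What a text-to-SQL engine might generate for that question:
sql = """
    SELECT name, CAST(clicks AS REAL) / impressions AS ctr
    FROM campaigns
    ORDER BY ctr DESC
    LIMIT 1
"""

print(question)
print(conn.execute(sql).fetchone())  # ('Spring launch', 0.04)
```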

Turnkey

A turnkey solution is one that’s ready to go with minimal to no customization. It’s commonly used in the construction industry to denote a house that’s ready for a new owner to move right in (or simply turn the key and carry in the first box). Coda Brain is a turnkey solution because all a user has to do is start asking questions. It’s ready for immediate deployment and needs no technical experience to operate.

Unstructured data

This is a broad catch-all term to indicate anything that isn’t already organized into a model or stored in a structured format like a table or database. When it comes to Coda Brain, this mostly refers to text. When a user asks Coda Brain a question like, “What’s our vacation policy?,” they’re entering unstructured data into the model. When Coda Brain reads the employee handbook to retrieve the answer, it’s digging through more unstructured data. And when it generates a written answer for the user, Coda Brain is responding in even more unstructured data.

Test drive Coda Brain.

We hope this primer on the language of generative AI leaves you feeling empowered to talk about this latest feat of engineering wizardry. Coda Brain is quickly becoming our team’s favorite know-it-all, helping Coda employees dig through years of data so we can provide the best possible product to our community of makers. If you’re curious to see what Coda Brain could do for your team, learn more here.
