Your Staff Asked a Question. The Answer Came From Your Own Kitchen.
AI search tools pull from the internet. Your staff need YOUR answers. Here's how to make that work.
Eamonn Best
Founder, Lattify · April 10, 2026

It's 11pm on a Wednesday. Your closer has been on the job for two weeks and she's doing fine, mostly. She's got the tills, the floors, the bins. But tonight the kitchen ran a special with salmon, and a customer left a comment card asking whether the salmon was gluten-free. She's standing in the pass, card in hand, and she doesn't know. The chef who prepped it has gone home. The allergen folder is in the office, which is locked.
She could Google it. She'd get an answer in seconds - probably something like "salmon is naturally gluten-free." And that answer would be correct in the abstract, completely useless in practice. Because the question isn't whether salmon contains gluten. The question is whether YOUR salmon - the way your kitchen preps it, with your marinade, on your grill, next to your breaded dishes - is safe to serve to someone who can't eat gluten. Google doesn't know that. Neither does ChatGPT.
The gap between captured and found
If you've been building out your training - filming guides, writing up procedures, getting the knowledge out of your best people's heads and into a system - you've done the hard part. That work matters, and I've written about why it matters before. But having forty guides in a system doesn't help your closer at 11pm if she can't find the right answer in the next thirty seconds.
The capture problem and the retrieval problem are two different things. You can solve capture completely and still have staff calling you at closing time because they can't navigate to the one piece of information they need, right now, while they're standing in front of the thing they're trying to do. Last week I wrote about how training content goes to die in binders and WhatsApp threads. This is the other side of that problem - what happens when the content is structured, searchable, and sitting on a server, and you need something smarter than a filename to get people to the right answer.
What happens when your staff ask the internet
According to McKinsey, employees spend an average of 1.8 hours every day searching for and gathering information. That's 9.3 hours a week. For knowledge workers at desks with keyboards, that's already a problem. For someone standing in a kitchen at 11pm with a customer's dietary question in their hand, it's a disaster, because the information they need isn't on the internet at all.
Your staff could ask ChatGPT. And ChatGPT would give them a confident, well-structured, completely generic answer drawn from its training data. It would tell them about salmon in general, about gluten in general, about cross-contamination risks in general. What it would never tell them is that your kitchen marinates the salmon in soy sauce on Thursdays, or that the grill station shares a surface with the breaded chicken, or that your head chef changed the prep method two weeks ago and the old allergen matrix is out of date.
This is where general-purpose AI becomes genuinely dangerous in a hospitality setting. Shield Safety reviewed thousands of allergen incidents over a multi-year period and found that 30% occurred because allergen information wasn't clearly communicated between the guest, front-of-house, and kitchen teams. Another 21% were linked to incorrect or incomplete allergen matrices - outdated, inconsistent, or simply not followed. These are failures of specific, local, constantly-changing information. The kind of information that lives inside your business and nowhere else.
An AI that pulls from the open internet has no access to any of it.
Grounded in your kitchen
There's a different way to build AI search, and it starts with a simple constraint: the AI only sees what your team has created.
When your team films a guide in Lattify - the allergen walkthrough for your menu, the close procedure, the keg change, the morning prep checklist - the AI breaks it into structured steps, substeps, safety warnings, ingredient lists, equipment notes. Each of those pieces gets indexed separately, tagged with metadata linking it back to the exact guide and exact step it came from. When someone searches, the system matches their question against those pieces and returns the closest matches, ranked by relevance.
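To make that concrete, here's a simplified sketch of what one indexed piece might look like as a record. The field names are my illustration of the idea, not the production schema:

```typescript
// Simplified sketch of one indexed piece of a guide.
// Field names are illustrative, not the production schema.
interface GuideChunk {
  chunkId: string;
  guideId: string;    // links back to the exact guide...
  stepNumber: number; // ...and the exact step it came from
  kind: "step" | "substep" | "safety_warning" | "ingredients" | "equipment";
  text: string;       // the content that queries are matched against
  updatedAt: string;  // ISO date - staleness stays visible at the source
}

// One filmed guide becomes many independently searchable chunks.
function indexGuide(
  guideId: string,
  steps: { text: string; kind: GuideChunk["kind"] }[],
): GuideChunk[] {
  return steps.map((step, i) => ({
    chunkId: `${guideId}:${i + 1}`,
    guideId,
    stepNumber: i + 1,
    kind: step.kind,
    text: step.text,
    updatedAt: new Date().toISOString(),
  }));
}
```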
That search fires instantly, on every keystroke. Your closer types "is the salmon gluten-free" and results start populating before she finishes the sentence - each one linking directly to a specific step in a specific guide that someone in your business created.
For straightforward queries, that's enough. She sees the result, taps it, lands on the allergen step of the Thursday specials guide, and she has her answer.
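The search-as-you-type behaviour is a standard pattern: fire on every keystroke, debounce briefly so fast typists don't flood the backend. A minimal sketch, assuming a hypothetical searchChunks endpoint that returns chunks ranked by relevance:

```typescript
// Minimal search-as-you-type wiring. searchChunks and renderDropdown
// are placeholders for illustration, not a real API.
type ChunkHit = { guideId: string; stepNumber: number; text: string; score: number };

declare function searchChunks(query: string): Promise<ChunkHit[]>;
declare function renderDropdown(results: ChunkHit[]): void;

function debounce<T extends unknown[]>(fn: (...args: T) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

const onKeystroke = debounce(async (query: string) => {
  if (query.trim().length < 2) return;       // skip one-character noise
  const results = await searchChunks(query); // ranked by relevance
  renderDropdown(results);                   // each hit links to guide + step
}, 150); // ~150ms still feels instant but coalesces rapid keystrokes
```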
But some questions need more than a list of matching results. Sometimes the answer spans several guides, pulled from multiple steps across multiple sources - "what allergens are in tonight's specials" might draw from three different guides. When the system detects that kind of query - question patterns, results spanning multiple guides, or safety-critical keywords like allergens or hazards - a second layer activates: an AI model takes the retrieved chunks as context and generates a written answer, with every claim citing back to the specific step in the specific guide it came from.
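A sketch of what that kind of trigger heuristic might look like - the patterns and keywords below are illustrative guesses, not the exact rules:

```typescript
// Illustrative heuristic for escalating from plain retrieval to a
// synthesised, cited answer. Patterns and keywords are examples only.
type RankedChunk = { guideId: string; text: string; score: number };

const QUESTION_PATTERN = /^(what|which|how|is|are|does|do|can|when|where|why)\b/i;
const SAFETY_KEYWORDS = /\b(allergen|allergy|gluten|nut|hazard|chemical|burn|raw)\b/i;

function shouldSynthesize(query: string, results: RankedChunk[]): boolean {
  const looksLikeQuestion = QUESTION_PATTERN.test(query.trim());
  const guidesSpanned = new Set(results.map(r => r.guideId)).size;
  const safetyCritical = SAFETY_KEYWORDS.test(query);
  return looksLikeQuestion || guidesSpanned > 1 || safetyCritical;
}
```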
Each citation is structured data from the API, a link the employee can tap to navigate directly to the source material. For an allergen query in a restaurant at 11pm, that citation chain is the entire point - your closer can verify the AI's answer by looking at the guide your head chef filmed.
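In sketch form, a citation might be shaped something like this - again, names and fields for illustration only:

```typescript
// Illustrative citation shape. Every claim in the generated answer
// carries a machine-readable pointer to its source step.
interface Citation {
  guideId: string;
  guideTitle: string; // e.g. "Thursday Specials - Allergen Walkthrough"
  stepNumber: number;
  updatedAt: string;  // lets the reader judge how fresh the source is
}

// Tapping a citation navigates straight to the source step.
function citationLink(c: Citation): string {
  return `/guides/${c.guideId}/steps/${c.stepNumber}`;
}
```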
What happens when it doesn't know
Here's the part most AI products skip over, and it's the part that matters most if you're thinking about allergens and safety.
If there's nothing relevant in your guides, the AI answer doesn't appear. The system doesn't guess. It doesn't fill the gap with internet knowledge or general reasoning. If your team hasn't created a guide covering the salmon marinade ingredients, the search will return whatever partial matches exist in the chunk results, but no synthesised AI answer will be generated, because there isn't enough grounded context to produce one responsibly.
And when an AI answer does appear, it carries a disclaimer for safety-critical information. Because even with citations, even with the answer grounded entirely in your own training materials, the system is still an AI generating text. The allergen guide your chef filmed is the source of truth. The AI answer is a fast path to that source, with a clear trail back to it, and an honest acknowledgment that the employee should verify safety-critical details against the original material.
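Both policies - no grounded context means no generated answer, safety-critical queries carry a disclaimer - amount to a small gate in front of the model. A simplified sketch, with the relevance threshold as a stand-in value:

```typescript
// Simplified response policy: never generate without grounded
// context; always attach a disclaimer on safety-critical queries.
type RetrievedChunk = { text: string; score: number };
type AiAnswer = { text: string; disclaimer?: string };

const MIN_RELEVANCE = 0.55; // stand-in threshold, not a real tuned value

function buildAnswer(
  results: RetrievedChunk[],
  generate: (context: string[]) => string,
  safetyCritical: boolean,
): AiAnswer | null {
  const grounded = results.filter(r => r.score >= MIN_RELEVANCE);
  if (grounded.length === 0) return null; // no guessing, no internet fallback

  const answer: AiAnswer = { text: generate(grounded.map(r => r.text)) };
  if (safetyCritical) {
    answer.disclaimer =
      "AI-generated summary. Verify safety-critical details against the cited guide.";
  }
  return answer;
}
```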
That honesty is deliberate. The salmon scenario is a real edge case - if the marinade changed last Tuesday and the guide hasn't been updated yet, the AI would generate an answer based on the old information, because that's what's in the index. The citation still links to the guide step, so the employee can see when it was last updated and make a judgment call. The disclaimer exists because that judgment call should always rest with a human when someone's health is on the line.
What your team actually sees
Your closer types her question into the search bar. Chunk results start appearing in a dropdown as she types - instant, ranked, each one a link to a specific step in a specific guide. The guides grid is still visible underneath. If the query triggers the AI layer, a synthesised answer appears above the chunk results with inline citations she can tap. The whole interaction takes seconds.
She finds the allergen step. She reads it. She answers the customer's question with confidence, because the answer came from her own kitchen - from a guide her head chef filmed, broken into steps by the system, indexed, and retrieved at the moment she needed it. She didn't call you. She didn't Google it. She didn't get a generic answer from an AI that has never been inside your restaurant.
The search data you've never had
There's a second layer to this that most owners won't think about until they see it. When your team searches, you can see what they're searching for. If eight people in a month search for "how to change the keg," that tells you something specific. Maybe the keg room signage is wrong. Maybe the guide needs updating. Maybe you switched supplier and nobody filmed the new process.
Those search patterns are live training intelligence - real questions from real staff at the moments they actually needed help. No survey, no feedback form, no one-to-one meeting gives you that data with the same clarity or honesty.
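If you wanted to mine those logs yourself, the aggregation is straightforward. A sketch, assuming a hypothetical export of search events:

```typescript
// Count queries by normalised text and flag the frequent ones.
// SearchEvent is a hypothetical export shape, not a real endpoint.
interface SearchEvent { query: string; hadResults: boolean }

function topQueries(events: SearchEvent[], minCount = 5) {
  const buckets = new Map<string, { count: number; misses: number }>();
  for (const e of events) {
    const key = e.query.trim().toLowerCase();
    const b = buckets.get(key) ?? { count: 0, misses: 0 };
    b.count += 1;
    if (!e.hadResults) b.misses += 1;
    buckets.set(key, b);
  }
  return Array.from(buckets.entries())
    .filter(([, b]) => b.count >= minCount)
    .map(([query, b]) => ({ query, count: b.count, missRate: b.misses / b.count }))
    .sort((a, b) => b.count - a.count);
}
// Eight searches for "how to change the keg" with a high missRate
// is a guide that needs filming.
```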
The answer already existed
Your closer's salmon question had an answer. Your head chef knew it. He filmed the allergen walkthrough last month. The information was in the system, structured, indexed, waiting. The only question was whether she could find it at 11pm on a Wednesday, standing in the pass, without calling anyone.
She could.
If any of this sounded familiar, we built Lattify for exactly this problem.
Join the Waitlist