
Unlocking AI Accuracy: How Curation and Orchestration Shape Your Scores

Understanding the connection between updating your knowledge base, tracking AI performance, and the timing of large language models.

If you have started using the Bonafide Curator module to update, refine, and verify your brand's data, you might be wondering: Will these curation efforts actually change my Accuracy scores?

The short answer is yes. Your accuracy scores will change as you use Curator—and surprisingly, they can begin to change even before the Large Language Models (LLMs) have actually read your newly deployed files.

To understand why this happens and when the AI models will actually start using your data, it helps to look at how curation and orchestration work together.

In Simpler Terms: Curation as the "Answer Key"

To explain how curation affects accuracy, it helps to think of the process like a school test. The AI model (like ChatGPT or Gemini) is the student taking the test, and your "Source of Truth"—the verified facts stored within the Bonafide platform—is the teacher's answer key.

When you use Curator to verify, edit, or add an "Official Response" to a prompt, you are effectively updating the teacher's answer key in the background.

Every month, the Bonafide system performs a "re-interrogation." It asks the AI models the exact same set of questions and extracts their factual answers. Bonafide then directly compares the AI's answers against your newly updated answer key.

Here is the crucial part: Your curation efforts can be reflected in your Bonafide scores even if you haven't "orchestrated" or published your files to the web yet. Because you updated the hidden answer key, if an AI happens to guess the answer correctly, or if it pulls a correct fact from a random blog "in the wild" that now perfectly matches your curated answer, your accuracy score will instantly register as a match and improve. On the flip side, any prompts you leave as "Unverified" are not added to the answer key, which can cause scores to flatten out or dip.
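The comparison described above can be pictured with a small sketch. This is a hypothetical illustration, not Bonafide's actual implementation: the function names, the exact-match normalization, and the scoring formula are all assumptions made for clarity.

```python
# Hypothetical sketch of the monthly "re-interrogation" comparison.
# All names and the exact-match rule are illustrative assumptions,
# not Bonafide's actual API or scoring method.

def normalize(text: str) -> str:
    """Lower-case and collapse whitespace so trivial differences don't count as misses."""
    return " ".join(text.lower().split())

def score_accuracy(ai_answers: dict[str, str], answer_key: dict[str, str]) -> float:
    """Compare the AI's extracted answers against the curated answer key.

    Only prompts present in the answer key (i.e. verified prompts) are
    scored; unverified prompts contribute nothing, which is why leaving
    prompts unverified can flatten or dip the score.
    """
    if not answer_key:
        return 0.0
    matches = sum(
        1 for prompt, official in answer_key.items()
        if normalize(ai_answers.get(prompt, "")) == normalize(official)
    )
    return matches / len(answer_key)

# A correct answer matches the key regardless of how the AI sourced it.
answer_key = {"Does the hotel allow pets?": "Yes, pets are welcome."}
ai_answers = {"Does the hotel allow pets?": "Yes, pets are welcome."}
print(score_accuracy(ai_answers, answer_key))  # 1.0
```

Note that the AI's answer matches whether it came from your orchestrated files or from a lucky guess, which is why curation alone can move the score before the LLMs have ever read your files.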

Orchestration and the AI Crawl Timeline

While curation updates the answer key in private, Orchestration is the act of publishing that answer key to the world so the AI "students" can finally study it. During orchestration, your verified answers are compiled into specialized, machine-readable FAQ markdown and HTML files and placed on your website.
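To make the publishing step concrete, here is a minimal sketch of compiling verified Q&A pairs into an FAQ markdown file. The field names, the "verified" status flag, and the layout are assumptions for illustration; Bonafide's actual file format may differ.

```python
# Illustrative sketch of orchestration: rendering verified Q&A pairs as a
# machine-readable FAQ markdown document. Field names and layout are
# assumptions, not Bonafide's actual output format.

def build_faq_markdown(entries: list[dict[str, str]]) -> str:
    """Render only verified entries; unverified answers never ship to the web."""
    lines = ["# Frequently Asked Questions", ""]
    for entry in entries:
        if entry.get("status") != "verified":
            continue  # skip anything not yet verified in Curator
        lines.append(f"## {entry['question']}")
        lines.append("")
        lines.append(entry["answer"])
        lines.append("")
    return "\n".join(lines)

entries = [
    {"question": "Is breakfast included?",
     "answer": "Yes, daily from 7 to 10 am.",
     "status": "verified"},
    {"question": "Is parking free?",
     "answer": "Pending review.",
     "status": "unverified"},
]
print(build_faq_markdown(entries))
```

The one-question-per-heading structure is the kind of plain, predictable layout that crawlers can parse without ambiguity, which is the point of producing dedicated machine-readable files rather than relying on pages designed for humans.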

However, exposing these files through orchestration does not mean the LLMs will use them immediately.

The Timing:

  • The Crawl Delay: It takes time for the large language models to crawl your website and pick up that new knowledge base.
  • Unpredictable Timelines: The exact timing for when an AI ingests the files varies and depends on the LLM itself, along with other variables such as new content created in the wild. Think of all the new blogs, Reddit posts, or TripAdvisor reviews about your product that are formatted in a way LLMs like to consume. This new content can distract the LLMs from using your knowledge base as the single source of truth. Sometimes crawlers pick up the new knowledge base very quickly, while for other brands it can take two to three months for the LLMs to fully absorb the data.

The Long-Term Result: If you do not orchestrate your curated files, your accuracy scores will continue to fluctuate based on whatever random sources the large language models happen to draw on that month.

But over time, as the LLMs digest your orchestrated files, your curated knowledge base begins to heavily influence their answers. Instead of relying on unverified, non-authoritative sources out on the larger web, the AI models will at some point actively start citing your curated knowledge base as their primary source of truth. As the AIs align with your orchestrated data, your accuracy scores will steadily improve and stabilize.