

The Performance Module: Measuring Recommendation Frequency

The Performance Module is the foundational screen for reviewing your data and content, providing actionable insights to optimize your content and brand visibility.

Performance for Destinations:

Performance for Hotels:

 

The measurement focuses specifically on whether your brand or product appears in the top five recommendations. Generative AI tools typically provide two to seven suggestions, with three to five being the most common, so if a travel brand makes it into the top five, it is considered recommended and visible.
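To make the measurement concrete, here is a minimal sketch in Python using invented prompts and responses rather than Bonafide's actual pipeline: each AI response is treated as a ranked list of suggestions, and the brand counts as recommended only if it appears within the first five.

```python
# Minimal sketch of the top-five recommendation frequency.
# The responses below are invented stand-ins, not real AI output.

TOP_N = 5  # a brand only "counts" if it lands in the top five suggestions

def is_recommended(brand: str, suggestions: list[str], top_n: int = TOP_N) -> bool:
    """True if the brand appears within the first top_n suggestions."""
    return brand in suggestions[:top_n]

def recommendation_frequency(brand: str, responses: list[list[str]]) -> float:
    """Share of AI responses (as a percentage) where the brand makes the top-five cutoff."""
    hits = sum(is_recommended(brand, r) for r in responses)
    return 100.0 * hits / len(responses)

# Four simulated responses for one traveler type and itinerary:
responses = [
    ["Brand A", "Brand B", "Your Brand", "Brand C", "Brand D"],  # in the top five
    ["Brand B", "Brand C", "Brand D", "Brand E", "Brand F"],     # missing
    ["Brand A", "Brand C", "Brand E", "Brand F", "Brand G"],     # missing
    ["Brand D", "Brand B", "Brand A", "Brand E", "Brand F"],     # missing
]

print(recommendation_frequency("Your Brand", responses))  # -> 25.0
```

One top-five appearance out of four responses yields the 25.0% example score discussed in the benchmarks below.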

Performance Benchmarks

 
 

Average Score: Average performance across travel brands in a competitive set is typically around 58%.

"Good" Threshold: A travel brand scoring above 50% is generally considered "Good".

Example Score (25.0%): For a given traveler type and itinerary, the brand appears in the top five rankings only 1 of 4 times. This indicates significant work is needed to improve visibility.
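As a quick, hypothetical illustration of how these benchmarks combine, the snippet below checks a score against the 50% "Good" threshold and the roughly 58% competitive-set average noted above:

```python
# Hypothetical helper that reads a Performance score against the benchmarks
# above: the 50% "Good" threshold and the ~58% typical comp-set average.

GOOD_THRESHOLD = 50.0
TYPICAL_COMP_SET_AVERAGE = 58.0

def interpret_score(score: float) -> str:
    if score > GOOD_THRESHOLD:
        return f"{score:.1f}% is above the {GOOD_THRESHOLD:.0f}% threshold and generally considered Good."
    return (f"{score:.1f}% is below the {GOOD_THRESHOLD:.0f}% threshold and the "
            f"~{TYPICAL_COMP_SET_AVERAGE:.0f}% average; significant work is needed to improve visibility.")

print(interpret_score(25.0))
```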

 

Step 1: Accessing the Destination or Property-Level View

The initial view of the Performance module analyzes data at the destination or property level, focusing on the individual travel brand.

  1. Action: Locate and select the Performance module in the main navigation.

  2. Locate the Group(s) Console: It sits in the center of the screen, underneath the filter (+) symbol.

  3. Confirm Destination/Property View: Ensure the Destination/Property tab within the Group(s) Console is selected to show the individual Destination/Property's overall score.

Destination DMO Example

Hotel Example

Step 2: Comparing Performance Against Peers

Performance measurements—including Bias and Perception—are based on comparison against competitors (Comp Set).

  1. Action: Locate the Peer toggle button underneath the Sorting area and click it to switch to the comparison view.

  2. Analysis: This view shows how the destination or property stacks up against its competitors.

    For example, a destination like the Chicago DMO is indexing above average, while others, such as King of Prussia, PA, are not performing well at all.

For example, a hotel like the Four Seasons New York is indexing about average, while others, such as the St. Regis New York, are performing much better, and the Baccarat Hotel New York is indexing much lower.
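For illustration only, the sketch below assumes the score is the top-five recommendation frequency described earlier and that indexing simply compares each brand against its comp set average; the scores are invented, not actual Bonafide data.

```python
# Illustrative comp set comparison with invented scores. An index of 100
# means exactly average for the comp set; higher is better.

comp_set_scores = {
    "Four Seasons New York": 53.0,
    "St. Regis New York": 74.0,
    "Baccarat Hotel New York": 32.0,
}

average = sum(comp_set_scores.values()) / len(comp_set_scores)

for brand, score in sorted(comp_set_scores.items(), key=lambda kv: -kv[1]):
    index = 100.0 * score / average
    print(f"{brand:26s} score={score:5.1f}%  index={index:6.1f}")
```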
 

 

Step 3: Filtering Performance by AI Platform

Bonafide uses a rigorous, multi-platform approach to collecting its data, which is summarized in the AI Platform view.

  1. Action: Navigate to the Group(s) Console and click the AI Platform tab.

  2. Explanation (Interrogation Methodology): Bonafide ensures the assessment is robust by interrogating multiple AI platforms (not just one LLM). The system always uses the latest and most advanced versions of these AI models (e.g., new Gemini or updated GPT) because AI technology is constantly evolving.


The filtered view showing performance data across multiple, named AI platforms (LLMs), demonstrating Bonafide’s comprehensive data collection approach.
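Conceptually, the per-platform breakdown amounts to grouping top-five hits by the AI platform that produced each response. The sketch below is only an assumption of how such a grouping could work; the platform names and result records are invented, and this is not Bonafide's actual collection code.

```python
from collections import defaultdict

# Invented result records: each notes which AI platform answered and whether
# the brand made the top five in that response.
results = [
    {"platform": "ChatGPT", "in_top_five": True},
    {"platform": "ChatGPT", "in_top_five": False},
    {"platform": "Gemini",  "in_top_five": True},
    {"platform": "Gemini",  "in_top_five": True},
    {"platform": "Claude",  "in_top_five": False},
    {"platform": "Claude",  "in_top_five": True},
]

hits, totals = defaultdict(int), defaultdict(int)
for record in results:
    totals[record["platform"]] += 1
    hits[record["platform"]] += record["in_top_five"]

for platform in totals:
    score = 100.0 * hits[platform] / totals[platform]
    print(f"{platform}: brand appears in the top five in {score:.1f}% of responses")
```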

Step 4: Filtering Performance by Customer Segment

Performance is broken down into granular results based on different traveler types, known as Customer Segments.

  1. Action: Navigate to the Group(s) Console and click the Customer Segment (or Customer Types) tab.

  2. Analysis:

    Example for DMOs: This view helps DMOs understand their strengths and weaknesses across visitor segments (e.g., solo, luxury, leisure travelers). A low score in a non-target segment makes sense; a below-average score in a target segment (e.g., leisure travelers) shows exactly where the DMO needs to focus its data efforts.


The filtered view showing scores broken down by various Customer Segments (e.g., business travelers, families), highlighting the granular data available for strategic planning.

Example for Hotels: This view helps hotels understand their strengths and weaknesses across guest segments (e.g., solo, luxury, leisure travelers). A below-average score in a non-target segment, such as budget travelers for the Four Seasons (a luxury brand), makes sense; a below-average score in a target segment (e.g., luxury travelers) shows exactly where the hotel needs to focus its data efforts. On the other hand, over-indexing on a segment the hotel had not previously considered (such as bleisure travelers) may reveal an opportunity.
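To show the idea behind the segment breakdown, here is a hedged sketch with invented segment scores, an assumed set of target segments, and an illustrative comp-set average; it flags target segments that fall below average and non-target segments that over-index.

```python
# Invented per-segment scores for one property, an assumed list of target
# segments, and an illustrative comp-set average.
segment_scores = {
    "Luxury": 44.0,
    "Leisure": 61.0,
    "Business": 72.0,
    "Bleisure": 80.0,
    "Budget": 12.0,
}
target_segments = {"Luxury", "Leisure"}
comp_set_average = 58.0

for segment, score in segment_scores.items():
    if segment in target_segments and score < comp_set_average:
        print(f"Focus here: {segment} is a target segment but scores only {score:.0f}%.")
    elif segment not in target_segments and score > comp_set_average:
        print(f"Possible opportunity: {segment} over-indexes at {score:.0f}%.")
```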


Conclusion: Performance in the AI Travel Landscape

The Performance Module reveals whether your brand is making the critical top-five cutoff in AI recommendations. If your brand appears in the top five, it is visible; if it falls outside those few spots, you are effectively invisible to the traveler.

Integrated Analogy Summary:

The Performance Score is your AI Travel Agent Recommendation Score. If a traveler asks an LLM (the agent) for top places to visit, a low score (e.g., 25.0%) means the agent suggests your brand only one out of every four times a relevant traveler asks. Using the Customer Segment filter is like asking the agent to refine their criteria (e.g., focusing on leisure or luxury travelers), allowing the DMO or hotel to identify opportunities or critical gaps in its target markets. The core goal of using Bonafide is to increase this probability, ensuring travelers are presented with your brand in AI recommendations, which ultimately drives visitation.