Citation North is a Vancouver-based Generative Engine Optimization (GEO) agency that helps Canadian professional services firms — law firms, accountants, financial advisors, and healthcare practices — become visible and cited in AI-generated search responses. One of the foundational questions in any GEO engagement is deceptively simple: how do you actually measure AI visibility? The AI Visibility Score is Citation North's answer to that question — a proprietary, reproducible metric built on 15 queries, 4 query tiers, and inverse position weighting across multiple AI platforms, normalised to a score out of 100.
This article explains the methodology behind the number, why each design decision was made, and how to interpret your score in the context of competitive benchmarking.
Why Measurement Matters
Before you can improve AI visibility, you need to know where you stand. Without a consistent, objective scoring methodology, every GEO claim is anecdote: "I searched and my firm didn't appear" is observation, not measurement. A rigorous metric answers several questions that anecdote cannot:
- How visible is your firm across the full range of queries prospective clients actually use?
- Where do you rank relative to specific competitors in your market?
- Which query types are you strong in, and which represent gaps?
- How has your score changed after implementing GEO improvements?
- What score would qualify as "strong" in your practice area and city?
Citation North designed the AI Visibility Score specifically to answer these questions in a way that is objective (runs standardised queries, not cherry-picked ones), comparable (the same methodology applied to every firm means scores are directly comparable), and actionable (the score breakdown reveals exactly which query tiers and platforms to address first).
The Scoring Methodology
The AI Visibility Score is produced by the following process, repeated consistently for every firm audited:
Step 1: Query selection. Citation North selects 15 queries designed to reflect the actual search behaviour of prospective clients in your practice area and geography. These are not generic queries — they are specific to your location, practice area, and competitive context. A Vancouver employment law firm receives different queries from a Calgary family law firm, reflecting genuine differences in how clients search for legal services in those markets.
Step 2: Platform execution. Each query is run across ChatGPT, Perplexity, Gemini, Claude, and Google AI Overviews. The responses are recorded verbatim, including the names of businesses mentioned, the position at which each business is mentioned, and the sentiment of each mention (neutral, positive, or cautionary).
Step 3: Citation extraction. Each mention of a firm in an AI response is identified using semantic classification — not simple keyword matching. This matters because AI responses refer to businesses in varied ways: by firm name, by practitioner name, by practice area description, or by implication. Citation North's methodology captures all forms of reference.
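Citation North's semantic classifier is proprietary, so the following sketch is only a point of contrast: a naive alias lookup (all firm names and aliases here are invented for illustration). Even a hand-built table like this misses mentions made "by implication", which is exactly the gap semantic classification is described as closing.

```python
import re

# Hypothetical alias set for one firm; a real system would use semantic
# classification rather than this hand-built lookup (per the article).
FIRM_ALIASES = {
    "smith & partners",
    "smith and partners",
    "smith partners llp",
    "jane smith",  # a named practitioner can stand in for the firm
}

def mentions_firm(response_text):
    """Naive alias matching: a floor that semantic classification must beat."""
    text = re.sub(r"\s+", " ", response_text.lower())
    return any(alias in text for alias in FIRM_ALIASES)

print(mentions_firm("You could contact Jane Smith, a well-regarded employment lawyer."))
```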
Step 4: Position scoring using inverse weighting. Each citation earns points based on its position in the AI response, using Citation North's V3 inverse position scoring algorithm. The first business mentioned earns 15 points; the second earns 14 points; continuing down to the 15th position, which earns 1 point. A business not mentioned earns 0 points. This weighting reflects the commercial reality that the first recommendation in an AI response receives dramatically more attention and action than subsequent mentions.
Position 1  = 15 points
Position 2  = 14 points
Position 3  = 13 points
...
Position 15 = 1 point
Not mentioned = 0 points

Normalisation:
AI Visibility Score = (Raw score ÷ Maximum possible score) × 100
Step 5: Normalisation. The raw point total is divided by the maximum possible score (which would be achieved if the firm appeared first in every query across every platform) and multiplied by 100, producing a final score between 0 and 100. This normalisation enables direct comparison between firms and between score periods.
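The weighting and normalisation steps above can be sketched in a few lines of Python. This is an illustrative reconstruction, not Citation North's production V3 algorithm; only the point values, the 15-query set, and the five platforms are taken from the article, and the example positions are invented.

```python
MAX_POSITION = 15  # positions 1..15 earn points; deeper or absent mentions earn 0

def position_points(position):
    """Inverse position weighting: position 1 -> 15 points, position 15 -> 1 point."""
    if position is None or position > MAX_POSITION:
        return 0
    return MAX_POSITION - position + 1

def ai_visibility_score(positions):
    """Normalise raw points to a 0-100 score.

    `positions` holds one entry per (query, platform) pair: the firm's
    mention position in that response, or None when it was not mentioned.
    """
    raw = sum(position_points(p) for p in positions)
    maximum = MAX_POSITION * len(positions)  # first place in every response
    return round(raw / maximum * 100, 1)

# 15 queries x 5 platforms = 75 responses; suppose the firm appears
# first in 10 of them, third in 20, and is absent from the rest.
positions = [1] * 10 + [3] * 20 + [None] * 45
print(ai_visibility_score(positions))  # -> 36.4
```

Note how the normalisation makes the score independent of how many responses were collected, which is what allows scores from different firms and different audit periods to be compared directly.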
The 4 Query Tiers
The 15 queries are structured across 4 tiers, each representing a different stage in the client decision journey and a different type of AI query intent. Understanding the tiers helps interpret where your score is strong and where gaps exist.
Tier 1 · List Queries
Discovery
The client is seeking a list of options with no specific context. These queries cast the widest net and represent early-stage awareness.
"Who are the best employment lawyers in Vancouver?"
Tier 2 · Situational
Problem-Led
The client describes a specific situation. Answers here require the AI to match expertise to context — higher specificity and intent.
"I was wrongfully dismissed — which Vancouver lawyer should I call?"
Tier 3 · Comparison
Competitive
The client is comparing specific firms. AI systems must have sufficient knowledge of both entities to make a substantive comparison.
"Compare [Firm A] vs [Firm B] for family law in BC"
Tier 4 · Authority
Expertise-Led
The client asks who the definitive expert is. These queries reveal whether the AI treats your firm as a category leader.
"Who is the leading immigration lawyer in British Columbia?"
Tier 1 and Tier 2 queries are typically the most accessible for firms beginning GEO investment — they reward clear entity definition and geographic specificity. Tier 3 and Tier 4 queries are harder to win and represent more advanced GEO maturity; a firm scoring well in Authority tier queries has typically built substantial entity authority across multiple platforms over time.
Inverse Position Scoring: Why Position Matters More Than Presence
The most important design decision in the AI Visibility Score methodology is the inverse position weighting. An alternative approach — counting citations without weighting for position — would tell you whether your firm appears at all, but not how prominently. For professional services client acquisition, this distinction is critical.
Research on AI-generated recommendations consistently shows that users are disproportionately influenced by the first business mentioned in a list response. When ChatGPT says "You might consider Firm A, Firm B, or Firm C," the caller most often dials Firm A. When Perplexity leads with "Smith & Partners is a well-regarded immigration firm in Vancouver," that framing carries a recommendation weight that a late-listed mention cannot replicate.
"Appearing fifth in an AI response is vastly better than not appearing at all — but it is not the same as appearing first. Our scoring reflects that commercial reality."
Inverse position scoring also creates a more sensitive instrument for measuring GEO progress over time. A firm that improves from appearing third in a query to appearing first has made meaningful commercial progress — and the score reflects that improvement proportionally. Simple binary (cited / not cited) measurement would record no change.
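To make the sensitivity argument concrete, here is a small comparison (the numbers are illustrative, not drawn from a real audit): a firm that moves from third to first in a single response gains points under inverse weighting, while a binary cited/not-cited count records no change at all.

```python
def inverse_points(position, max_position=15):
    """Inverse position weighting as described above: 1st -> 15, 15th -> 1, absent -> 0."""
    return 0 if position is None or position > max_position else max_position - position + 1

def binary_points(position):
    """Binary measurement: 1 if cited at all, else 0."""
    return 0 if position is None else 1

before, after = 3, 1  # the firm improves from third to first in one response
print(inverse_points(after) - inverse_points(before))  # -> 2: the weighted score moves
print(binary_points(after) - binary_points(before))    # -> 0: binary sees nothing
```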
How to Interpret Your Score
The AI Visibility Score is most meaningful in context: compared to competitors in your market, and compared to your own previous scores over time. As a general orientation framework, Citation North uses the following bands:
High Visibility
Your firm appears consistently and prominently across multiple query types. You are a strong AI recommendation in your market. Focus shifts to maintaining position and expanding into Tier 3 and Tier 4 query wins.
Partial Visibility
Your firm appears in some queries but inconsistently, or consistently in lower positions. Structural and content gaps are present. The AI Foundation Sprint typically produces the greatest gains at this score band.
Low Visibility
Your firm rarely or never appears in AI responses. Significant structural gaps exist in entity markup, content, and third-party citation signals. Immediate structural investment is the highest-leverage action.
These bands should be read alongside your competitor benchmark. A score of 40 may be strong if your top competitor scores 35 — or concerning if they score 72. The AI Visibility Snapshot includes a competitor leaderboard that contextualises your score against the 3–5 firms most commonly recommended alongside you in AI responses for your practice area and geography.
What a Good Score Looks Like
The definition of a "good" AI Visibility Score is market-dependent and changes over time as overall AI search adoption increases and as competitors invest in GEO. In Canadian professional services markets in 2026, most firms start with scores between 0 and 25 — the market is early and GEO investment has been limited. First-mover advantage is still available in most Canadian cities and practice areas.
Citation North's Retainer clients typically reach scores in the 45–70 range within 6–9 months of consistent GEO investment, depending on starting position and competitive intensity. Authority Programme clients targeting category dominance work toward scores above 75 in their core query set — a position that reflects genuine AI recommendation leadership in their market.
If you want to know where your firm stands right now — before any GEO investment — the AI Visibility Snapshot delivers your current score alongside your competitor leaderboard, a query-by-query breakdown across all four tiers, and a prioritised 90-day roadmap. It is among the most informative investments a Canadian professional services firm can make in its marketing strategy in 2026.
Ready to See Where Your Firm Stands?
Your AI Visibility Snapshot includes a scored baseline, competitor benchmarking leaderboard, query-by-query tier breakdown, and a 90-day roadmap — delivered within 2 business days.
Request Your Snapshot
