Money prompts, not keywords

How to choose the exact questions your brand needs to win

Keyword-centric SEO assumed a stable interface: users typed short queries, clicked a ranked list, and evaluated options on destination pages. Assistant-mediated discovery breaks that contract. In Google’s AI Overviews and conversational systems such as ChatGPT, Gemini, and Perplexity, the user increasingly receives a resolved answer (often with a shortlist) before any click occurs. As a result, the most valuable unit of optimisation shifts from the keyword to the prompt: a decision-shaped question that invites recommendations, comparisons, and justification.

This article introduces “money prompts” - a deliberately chosen set of commercially meaningful questions that an organisation decides to win - and presents a rigorous, repeatable method for selecting them. The method is designed to be pragmatic (fast enough to execute in weeks) but academically grounded (explicit criteria, falsifiable scoring, and benchmarkable outputs). It also avoids a common failure mode: optimising for prompts that are unwinnable because the model cannot safely recommend the brand, cannot verify the claims, or cannot reliably retrieve the necessary information.

Why keywords stop being the centre of gravity

Keyword strategies were optimised for document retrieval. In classical web search, the ranking system’s primary job is to select documents that satisfy an information need. The user then performs the expensive work: reading, comparing, and deciding. In contrast, modern assistants optimise for resolution: reduce uncertainty until the user can proceed to an action (choose, buy, book, contact) or the next question. This reallocation of cognitive work changes which queries matter most and how influence is exerted.

Two empirical trends make the shift visible. First, the web has moved steadily toward zero-click behaviour, where the user’s need is satisfied on-platform without navigating outward. Second, AI Overviews push synthesis to the top of the search experience and compress the attention available for organic results. In that environment, many high-value interactions happen before a site visit - meaning the optimisation target must be earlier than the click.

A useful framing: from lexical matching to decision leverage

Keywords are primarily lexical objects: strings that approximate what a user might type. Money prompts are decision objects: questions that predict a choice. They tend to have three characteristics:

- They invite a recommendation or shortlist rather than a list of documents.
- They carry implicit evaluation criteria that the assistant must surface and justify.
- They sit close to a decision: the answer resolves an objection, narrows options, or names a next step.

The practical consequence is that money prompt selection is closer to product strategy and buyer psychology than to keyword research. You are not trying to capture language at scale; you are choosing the questions whose answers most strongly shape revenue outcomes.

What is a money prompt?

A money prompt is a commercially meaningful question that (a) your ideal customer profile (ICP) plausibly asks pre-purchase, (b) an assistant can answer by naming options, and (c) if your brand is included and framed correctly, the buyer is meaningfully closer to conversion.

Money prompts are not high volume by definition. In assistant-mediated discovery, leverage often concentrates in mid-volume, high-intent prompts where the assistant forms a shortlist or specifies evaluation criteria. This is consistent with established distinctions in web search intent. Broder’s taxonomy (informational, navigational, transactional) remains useful, but assistant prompts often blend categories - for example, informational questions that explicitly request transactional options (recommendations) or navigational questions disguised as evaluation (Is {brand} good for {use case}?).

A money prompt list is a strategic commitment

The defining move is not discovering a large inventory of prompts - it is locking a small, explicit target set. At Recrawled, projects typically lock 10 to 30 money prompts after an initial diagnosis phase, distributed across the highest-value intent clusters. This list becomes a benchmark (for measuring model outputs over time) and a governance artifact (for what claims must be true, provable, and consistently presented).

The anatomy of prompts assistants can resolve

In practice, prompt performance depends on how the assistant interprets intent and what retrieval and verification pathways are available. The same category can produce dramatically different outputs depending on constraint specificity and risk. A useful way to think about prompt design is as a structure with slots that shape resolution:

- Category: the class of product or service under consideration.
- Use case: the job, audience, or scenario the buyer cares about.
- Constraints: qualifiers such as industry, geography, or compliance that narrow the field.
- Risk: how costly a wrong recommendation would be, which governs how conservative the assistant is.

Money prompts are rarely single-slot. The highest-leverage prompts typically include category plus use case plus at least one constraint, because that combination naturally invites a defensible shortlist.
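As a rough illustration, the slot structure can be sketched as a template. The slot names follow the article's framing (category, use case, constraint); the wording and the example values are hypothetical, not a prescribed format:

```python
# Illustrative sketch: composing a decision-shaped prompt from slots.
# The concrete phrasing and the CRM example are assumptions for the sketch.

def build_prompt(category: str, use_case: str, constraint: str = "") -> str:
    """Compose a money prompt from category, use case, and an optional constraint."""
    prompt = f"What is the best {category} for {use_case}"
    if constraint:
        prompt += f" that {constraint}"
    return prompt + "?"

print(build_prompt("CRM", "small law firms", "meets UK GDPR requirements"))
# → "What is the best CRM for small law firms that meets UK GDPR requirements?"
```

Adding the constraint slot is what turns a generic, incumbent-favouring question into one the assistant can answer with a defensible shortlist.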

Prompt archetypes that commonly produce shortlists

- Selection: best {category} for {use case}.
- Comparison: {brand A} vs {brand B} for {use case}.
- How to choose: how do I choose a {category} for {use case}?
- Objections and risks: is {brand} good for {use case}? What are the risks of {category}?
- Use-case-specific: which {category} works for {industry, geo, or compliance constraint}?

Building a prompt longlist from real buyer language

A money prompt programme should start from buyer language, not internal messaging. In B2B especially, internal teams often over-index on category terms they want to own, while buyers use problem language, constraints, and comparisons. The goal of a longlist is breadth: capture the plausible questions an ICP asks across awareness, discovery, and evaluation.

High-signal inputs (ranked by practical usefulness)

- Sales calls and support tickets: the questions, objections, and constraints buyers raise closest to purchase.
- RFPs and procurement questionnaires: evaluation criteria formalised in the buyer’s own structure.
- External comparisons and reviews: the language third parties use to frame alternatives.

If you only have time for one method, start with sales and support. Those channels surface the questions closest to purchase, including objections, constraints, and evaluation criteria.

Don’t start by selecting the best prompts. Start with intent clusters.

A typical longlist contains hundreds of candidates. The mistake is attempting to select the best 30 directly. Instead, first cluster by intent. In Recrawled’s workflow, the canonical answer system (CAS) is drafted as intent-cluster modules - each module covers 10 to 30 prompt variants - before the final money prompt lock. This sequencing ensures that prompt selection is supported by an answer strategy and proof plan, rather than being an isolated research exercise.
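The sequencing can be made concrete with a minimal sketch: group the longlist into intent clusters before any selection happens. The prompts and cluster labels below are hypothetical, and in practice the labels come from manual review rather than automation:

```python
from collections import defaultdict

# Hypothetical longlist entries: (prompt, intent_cluster) pairs.
longlist = [
    ("best CRM for small law firms", "selection"),
    ("HubSpot vs Salesforce for legal teams", "comparison"),
    ("how to choose a CRM for a law firm", "how-to-choose"),
    ("is a cloud CRM safe for client data", "objections"),
    ("best CRM for UK conveyancing workflows", "selection"),
]

# Cluster first; select money prompts per cluster afterwards.
clusters: dict[str, list[str]] = defaultdict(list)
for prompt, intent in longlist:
    clusters[intent].append(prompt)

for intent, prompts in clusters.items():
    print(intent, len(prompts))
```

Selecting per cluster, rather than from the raw longlist, is what keeps the final set distributed across intents instead of dominated by one archetype.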

The critical filter: resolution fit (achievability)

The most expensive mistake is optimising for prompts you cannot plausibly win. In assistant-mediated discovery, winning requires more than relevance. The assistant must be able to retrieve your entity, disambiguate it, and verify enough proof to recommend it without taking on risk. We call this achievability “resolution fit”.

A quick resolution-fit test (pass, revise, reject)

Before scoring prompts, apply a fast pre-filter. For each candidate prompt, ask:

- Retrieval: can the assistant find and disambiguate your entity in response to this prompt?
- Proof: can it verify enough evidence to recommend you with confidence?
- Risk: can it include you without taking on reputational or factual risk?

If the prompt fails on retrieval, proof, or risk, it should be revised into a more winnable form (add constraints, specify the use case) or rejected. This is the difference between strategic selection and vanity optimisation.
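The pass/revise/reject logic can be written down as a small function. The decision rule here is one plausible interpretation (revise when the entity is retrievable but proof or risk fails; reject when retrieval itself fails), not a canonical algorithm:

```python
def resolution_fit(retrievable: bool, provable: bool, low_risk: bool) -> str:
    """Fast pre-filter: pass, revise, or reject a candidate prompt.

    retrievable: the assistant can find and disambiguate the entity.
    provable:    enough verifiable proof exists to support a recommendation.
    low_risk:    recommending the brand does not expose the assistant to risk.
    """
    if retrievable and provable and low_risk:
        return "pass"
    if retrievable:
        # Findable but not yet safe to recommend: narrow the prompt
        # (add constraints, specify the use case) and build proof.
        return "revise"
    return "reject"

print(resolution_fit(True, True, True))   # pass
print(resolution_fit(True, False, True))  # revise
```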

Scoring, weighting, and locking the final 10 to 30 prompts

After the resolution-fit filter, scoring becomes a disciplined way to prioritise. The point is not mathematical precision; it is explicit trade-offs. Recrawled uses a worksheet approach: score each prompt on a small set of criteria (1 to 5) and apply weights if necessary to force prioritisation toward revenue-relevant prompts.

A practical scoring rubric (1 to 5 per item)

Score each candidate prompt on the following dimensions:

- Commercial value: how strongly a won answer moves revenue.
- ICP plausibility: how likely your ideal customer is to ask this question pre-purchase.
- Shortlist potential: how naturally the prompt invites named options rather than generic guidance.
- Resolution fit: how plausibly the assistant can retrieve, verify, and recommend you today.

The output is a locked set of 10 to 30 prompts distributed across the highest-value intent clusters. Distribution matters: if your list is dominated by a single archetype (for example, best X), you have created fragility. A robust list includes selection, comparison, how to choose, objections and risks, and use-case-specific prompts.
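A minimal version of the scoring worksheet might look like the following. The criteria names, weights, and example scores are assumptions for the sketch; the article prescribes only 1-to-5 scores with optional weights:

```python
# Hypothetical weights; they sum to 1 so results stay on the 1-5 scale.
WEIGHTS = {"commercial_value": 0.4, "icp_plausibility": 0.3, "resolution_fit": 0.3}

def weighted_score(scores: dict[str, int]) -> float:
    """Weighted 1-5 score for one candidate prompt."""
    return sum(WEIGHTS[criterion] * score for criterion, score in scores.items())

# Hypothetical candidates with 1-5 scores per criterion.
candidates = {
    "best CRM for small law firms":
        {"commercial_value": 5, "icp_plausibility": 4, "resolution_fit": 4},
    "what is a CRM":
        {"commercial_value": 2, "icp_plausibility": 3, "resolution_fit": 5},
}

# Rank, then lock the top prompts (10 to 30 in a real project).
ranked = sorted(candidates, key=lambda p: weighted_score(candidates[p]), reverse=True)
print(ranked[0])
```

Note how the weighting pushes the high-intent, revenue-relevant prompt above the high-achievability but low-value one, which is exactly the trade-off the rubric is meant to make explicit.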

Benchmarking: turning prompts into a measurement system

Once prompts are locked, they become a benchmark set. The benchmark is used to measure how assistants currently answer, and how those answers change after interventions. Crucially, this is measurement, not discovery. Prompt discovery happens earlier; benchmarking happens after lock.

What to define for each locked prompt

For each prompt, define the minimum structure of a good answer. At a minimum:

- Whether and how your brand should be included and framed.
- The claims that must remain true, provable, and consistently presented.
- The claims that must never be made.
- The evaluation criteria or shortlist shape a good answer should exhibit.

This structure creates two benefits. First, it makes measurement reproducible: you can score outputs consistently over time. Second, it forces governance: you cannot optimise for a prompt without specifying what must remain true and what must never be claimed.
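A sketch of what a reproducible check could look like, assuming a simple substring test for brand inclusion and forbidden claims (real scoring would need to handle paraphrase and context; the field names and the brand are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class PromptBenchmark:
    """Minimum structure defined for one locked prompt (illustrative fields)."""
    prompt: str
    brand: str            # entity that should appear in a good answer
    must_never_claim: list  # claims the answer must never contain

def score_answer(bench: PromptBenchmark, answer: str) -> dict:
    """Reproducible, re-runnable checks against an assistant's answer text."""
    low = answer.lower()
    return {
        "brand_included": bench.brand.lower() in low,
        "no_forbidden_claims": not any(c.lower() in low for c in bench.must_never_claim),
    }

bench = PromptBenchmark(
    prompt="best CRM for small law firms",
    brand="Examplo",  # hypothetical brand
    must_never_claim=["number one", "most trusted"],
)
print(score_answer(bench, "For small law firms, Examplo and two rivals stand out."))
```

Because the checks are pure functions of the answer text, the same benchmark can be re-run after each intervention and the results compared directly.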

Run 1 to 2 phrasing variants, but keep intent stable

Assistants are sensitive to phrasing. For a benchmark, it is usually sufficient to test each money prompt plus one or two paraphrases that preserve intent (for example, “best” vs “recommended”, or adding a clarifying constraint). The goal is not to brute-force every permutation; it is to detect whether your inclusion and framing are robust to normal human variation.
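Variant generation can be as simple as one or two controlled substitutions; anything fancier risks drifting the intent. A naive sketch, assuming a “best”-to-“recommended” swap and one hypothetical clarifying constraint (real paraphrases would be written or reviewed by hand):

```python
def phrasing_variants(prompt: str) -> list:
    """Return the prompt plus up to two intent-preserving paraphrases."""
    variants = [prompt]
    if "best" in prompt:
        # Swap the superlative for a softer synonym.
        variants.append(prompt.replace("best", "recommended"))
    # A clarifying constraint is another low-risk variant (hypothetical geo).
    variants.append(prompt + " in the UK")
    return variants

print(phrasing_variants("best CRM for small law firms"))
```

Running the benchmark checks across all variants, rather than the canonical phrasing alone, is what reveals whether inclusion is robust or fragile.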

Common failure modes (and how to avoid them)

Vanity prompts

Teams often select prompts because they sound prestigious (best {category}) rather than because they reflect buyer behaviour. These prompts are frequently unwinnable for challengers because they default to incumbents, aggregators, or generic guidance. Fix: add constraints (use case, industry, geo, compliance) so the assistant can justify non-incumbent options.

Brand-internal language

If your prompt list is built from internal positioning language, you will optimise for questions buyers do not ask. Fix: prioritise language from sales and support, RFPs, and external comparisons; treat internal messaging as a mapping layer, not a discovery source.

Superlatives without proof

Assistants are risk-averse. Prompts that force unverifiable superlatives (number one, most trusted, best in the world) invite conservative outputs or omission. Fix: revise prompts toward verifiable dimensions (for example, best for regulated healthcare workflows), and build proof anchors that travel.

Misaligned funnel weighting

If your list is dominated by either pure top-of-funnel curiosity or pure bottom-of-funnel brand navigation, you miss the leverage zone where assistants form a shortlist. Fix: overweight discovery and evaluation prompts - where the assistant is likely to recommend, compare, and justify.

From prompts to reality: mapping each prompt to answers, proof, and surfaces

A money prompt list only creates value if it is connected to an execution system. In Recrawled’s blueprint, locked prompts map to:

- Canonical answers: the intent-cluster modules of the canonical answer system (CAS).
- Proof: the evidence and external validation that make each claim verifiable.
- Surfaces: the retrieval surfaces the model actually uses, which must carry both answer and proof.

This mapping prevents a common trap: writing content that feels relevant but does not change assistant behaviour because it does not land on the retrieval surfaces the model actually uses.

Conclusion

In the answer economy, keywords remain useful, but they are no longer the primary strategic object. The strategic object is the prompt: a decision-shaped question whose answer forms a shortlist, establishes evaluation criteria, or resolves an objection. A money prompt list is therefore a commitment to win a defined set of revenue-relevant questions - supported by accessible surfaces, defensible proof, and a governance layer that keeps claims boringly correct over time.

Teams that treat money prompts as a benchmarkable system - rather than a brainstorming artifact - gain two compounding advantages: they can measure what assistants do, and they can reliably change it.



