# Retrieval Prompt Variants: Experiment Log
## Protocol
Prompt variants were grouped by specificity, temporal constraints, and citation explicitness.
The same topic clusters were reused to preserve comparability across variants.
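The grouping described above can be sketched as a small data model. This is a minimal illustration, assuming a hypothetical topic-cluster name and three specificity levels; the field names and enum values are not the study's actual schema.

```python
# Illustrative encoding of the three grouping axes (specificity, temporal
# constraints, citation explicitness). All names here are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptVariant:
    topic_cluster: str           # reused across variants to preserve comparability
    specificity: str             # "low" | "medium" | "high"
    temporal_constraint: bool    # prompt carries a date/recency qualifier
    explicit_citation_ask: bool  # prompt explicitly requests sources

# One full grid per topic cluster: 3 specificity levels x 2 x 2 = 12 variants.
variants = [
    PromptVariant("example-cluster", spec, temporal, cite)
    for spec in ("low", "medium", "high")
    for temporal in (False, True)
    for cite in (False, True)
]
```

Reusing the same `topic_cluster` value across the grid is what keeps variant comparisons apples-to-apples.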
## Measurements
Primary metrics were citation precision, duplicate-source rate, and domain entropy.
All measurements were recorded per engine and per prompt class.
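The three primary metrics can be computed per response roughly as follows. This is a sketch under assumptions: the function name, input schema (a list of cited URLs plus a rater-judged relevant subset), and the exact definitions of duplicate-source rate and domain entropy are illustrative, not the study's published formulas.

```python
# Hypothetical per-response metric computation; field and function names
# are assumptions for illustration.
import math
from collections import Counter
from urllib.parse import urlparse

def citation_metrics(citations: list[str], relevant: set[str]) -> dict:
    """Compute citation precision, duplicate-source rate, and domain
    entropy for one engine response."""
    total = len(citations)
    domains = [urlparse(u).netloc for u in citations]
    counts = Counter(domains)
    # Citation precision: fraction of cited URLs judged on-topic by a rater.
    precision = sum(1 for u in citations if u in relevant) / total
    # Duplicate-source rate: fraction of citations repeating an
    # already-cited domain.
    duplicate_rate = sum(c - 1 for c in counts.values()) / total
    # Domain entropy (bits): higher values mean citations are spread
    # across more distinct domains.
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return {"precision": precision,
            "duplicate_rate": duplicate_rate,
            "entropy": entropy}
```

Aggregating these per-response values by engine and by prompt class yields the recorded measurements.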
## Observations
High-specificity prompts improved citation precision but reduced source diversity.
Temporal constraints materially changed source freshness patterns in recommendation outputs.
Per-engine rates:

| Engine | Citation rate (%) | Presence rate (%) |
|---|---|---|
| ChatGPT | 42 | 67 |
| Gemini | 35 | 61 |
*Figure: Precision by Prompt Variant (chart not reproduced).*
## Author
Sofia Laurent
Head of Experimental Research
Sofia leads controlled experiments on prompt sensitivity, source diversity, and ranking signal interactions across major AI answer engines.