AI Content Policy
Travel Risk Calculator uses AI to write the prose of its travel-risk briefings. This page describes how the AI works, what we feed it, how we keep it from inventing facts, and what stays under human control.
Where AI is and isn't used
AI is used to write: the plain-English summary at the top of each assessment, the "top drivers" explanations, and the action recommendations. The AI takes a structured input — baseline scores, live search snippets, and the user's trip details — and produces a coherent briefing.
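The exact request schema is internal, but for illustration, the structured input handed to the model looks roughly like this (all field names here are hypothetical, not our actual schema):

```python
# Illustrative sketch only: field names are hypothetical, not our actual schema.
briefing_input = {
    "trip": {"origin": "Toronto, CA", "destination": "Lima, PE", "days": 7},
    "baseline": {"composite": 52, "crime": 61, "health": 38},  # human-curated scores
    "snippets": [  # live search results, each tagged with the ID the AI must cite
        {"id": 1, "title": "...", "url": "...", "text": "..."},
        {"id": 2, "title": "...", "url": "...", "text": "..."},
    ],
}
```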
AI is not used to:
- set the threat scores (those come from our human-curated city baselines plus the live data feeds);
- decide what counts as a real city or a real country (that's our database, sourced from SimpleMaps and admin review);
- choose which sources to cite (we hand the AI a fixed set of snippets retrieved from the live data feeds and tell it which ones to ground its claims in);
- set the editorial policy of the site (humans do that — this page is the proof).
The model we use
The primary model is gpt-oss:20b-cloud, accessed via Ollama Cloud. We selected it after a head-to-head test against ministral-3:14b-cloud, gpt-oss:120b-cloud, and gemma4:31b-cloud on grounding accuracy, latency, and citation discipline. gpt-oss:20b-cloud scored best on grounded drivers per second and produced the cleanest JSON output. We may change models in the future and will update this page when we do.
Live search uses Ollama Cloud's /api/web_search as the primary provider, with a self-hosted SearXNG instance as the fallback. If both fail, the briefing falls back to baseline-only text and is marked accordingly.
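A minimal sketch of that fallback chain, assuming the providers' public interfaces; the endpoint paths and response fields are our best guess at the shape, and `OLLAMA_KEY` and `SEARXNG_URL` are hypothetical configuration values, not our production setup:

```python
import requests

OLLAMA_KEY = "..."                     # hypothetical: our Ollama Cloud API key
SEARXNG_URL = "https://searx.example"  # hypothetical: our self-hosted SearXNG

def live_search(query: str) -> list | None:
    """Try Ollama Cloud web search first, then SearXNG; None means baseline-only."""
    try:
        r = requests.post(
            "https://ollama.com/api/web_search",
            headers={"Authorization": f"Bearer {OLLAMA_KEY}"},
            json={"query": query},
            timeout=10,
        )
        r.raise_for_status()
        return r.json()["results"]  # assumed response shape
    except requests.RequestException:
        pass  # primary failed; try the fallback
    try:
        r = requests.get(
            f"{SEARXNG_URL}/search",
            params={"q": query, "format": "json"},  # SearXNG's JSON output mode
            timeout=10,
        )
        r.raise_for_status()
        return r.json()["results"]
    except requests.RequestException:
        return None  # both providers failed; caller renders baseline-only text
```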
How we keep the AI honest
Hallucination — an AI making up plausible-sounding but false claims — is the central failure mode of generative systems. We use four guardrails:
- Snippet-ID grounding. The model is told to reference each claim by the numerical ID of the search snippet it came from. After generation, we re-parse the output and drop any sentence whose cited ID doesn't appear in the snippet set we actually retrieved (a sketch of this filter appears after this list).
- Fail-closed publishing. If the AI returns malformed JSON, fails to cite any sources, or returns an empty briefing, we save the result as a draft (not publicly visible) and fall back to baseline-only content. The page is marked "AI synthesis unavailable".
- Domestic-trip filtering. When the origin and destination are in the same country, the prompt explicitly forbids passport/visa/embassy recommendations. A post-filter also drops those if they slip through (also sketched below).
- Tier discipline. The AI cannot label a score of 40/100 as "high risk" or a 70/100 as "low risk": the tier label is computed mechanically from the composite score (Low 0-24, Moderate 25-49, Elevated 50-69, Extreme 70-100). The AI only writes the prose (see the tier mapping below).
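Here is a minimal sketch of the snippet-ID grounding filter, assuming the model cites sources with bracketed IDs like `[3]`; the citation syntax and function names are illustrative, not our production code:

```python
import re

def drop_ungrounded(briefing: str, snippet_ids: set[int]) -> str:
    """Keep only sentences that cite at least one snippet we actually retrieved."""
    kept = []
    for sentence in re.split(r"(?<=[.!?])\s+", briefing):
        cited = {int(n) for n in re.findall(r"\[(\d+)\]", sentence)}
        if cited and cited <= snippet_ids:  # has citations, and every ID is valid
            kept.append(sentence)
    return " ".join(kept)
```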
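The domestic-trip post-filter can be as simple as a keyword screen over the generated recommendations (the term list here is illustrative):

```python
BORDER_TERMS = ("passport", "visa", "embassy", "consulate")  # illustrative list

def filter_domestic(recommendations: list[str], is_domestic: bool) -> list[str]:
    """On same-country trips, drop any recommendation about border paperwork."""
    if not is_domestic:
        return recommendations
    return [r for r in recommendations
            if not any(term in r.lower() for term in BORDER_TERMS)]
```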
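And the tier label itself is a pure function of the composite score, exactly as the ranges above describe:

```python
def tier_label(score: int) -> str:
    """Map a 0-100 composite score to its tier; the AI never overrides this."""
    if score <= 24:
        return "Low"
    if score <= 49:
        return "Moderate"
    if score <= 69:
        return "Elevated"
    return "Extreme"  # 70-100

assert tier_label(40) == "Moderate"  # a 40/100 can never be called "high risk"
```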
What you'll see on a finished page
Every assessment page that was AI-synthesized shows:
- The status: `fresh` (AI ran in the last 6 hours), `pending` (AI is queued), or `failed` (AI couldn't ground its output, baseline shown);
- The "Sources the AI drew from" panel: the actual list of search results the AI saw, with title, URL, and a snippet;
- A timestamp for the last AI refresh and the last human review.
What the AI never does
- It does not write or alter user content (assessments, feedback, contact messages).
- It does not write code on this site — the codebase is human-authored.
- It does not have access to user accounts, watchlists, trips, or any persistent user data.
- It does not run with elevated permissions — it only sees the data we explicitly pass in each call.
Disclosure of AI involvement
Every page synthesized with AI carries a visible "AI-assisted" tag and timestamp. We do not present AI-generated content as if it were written by a human author. We do not generate fictional bylines.
Reporting an AI mistake
If you see a claim on an AI-synthesized page that looks wrong, use the feedback widget at the bottom of the assessment page (it appears once the AI status is `fresh`). For more serious corrections, see Corrections.
Updates to this policy
This AI policy was last updated on May 15, 2026.