How We Verify Claims
ANN Verify uses a proprietary 7-Layer AI analysis pipeline — patent pending — to evaluate claims from public figures, institutions, and viral media. Every verdict is traceable to explicit evidence and human editorial review.
How We Select Claims
We apply a selection filter to ensure resources are focused on claims that matter to the public. Per IFCN requirements, at least 75% of our fact-checks address claims related to public welfare, health, governance, or widely circulated misinformation.
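The 75% threshold lends itself to a simple internal monitoring check. The sketch below is illustrative only and not ANN Verify's actual tooling; the `public_interest` field name is a hypothetical tag on each published fact-check.

```python
def public_interest_share(fact_checks: list[dict]) -> float:
    """Fraction of fact-checks tagged as public-interest
    (public welfare, health, governance, or viral misinformation)."""
    if not fact_checks:
        return 0.0
    flagged = sum(1 for fc in fact_checks if fc.get("public_interest"))
    return flagged / len(fact_checks)

def meets_ifcn_target(fact_checks: list[dict], threshold: float = 0.75) -> bool:
    """True when the public-interest share meets the stated minimum."""
    return public_interest_share(fact_checks) >= threshold
```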
The 7-Layer Verification Pipeline
Each claim is processed through seven sequential analysis layers. Layers 1–6 are AI-automated; Layer 7 applies the cryptographic integrity seal. For all high-stakes verdicts, a human editor reviews the final output before publication.
- Layer 1
  - output: source_scores[], domain_authority[], bias_flag
  - model: claude-sonnet-4-6 + Tavily search
- Layer 2
  - output: corroboration_count, conflicting_sources[], confidence_band
  - method: Tavily API · live web retrieval · date-anchored
- Layer 3
  - output: temporal_accuracy, context_change_flag, retroactive_alert
  - note: TSL time-series tracking (Patent #3)
- Layer 4
  - output: fallacy_flags[], framing_score, misleading_index
  - model: claude-sonnet-4-6 extended reasoning
- Layer 5
  - output: stat_accuracy[], deviation_pct, manipulation_flag
  - sources: WHO, World Bank, IMF, government data APIs
- Layer 6
  - output: consensus_alignment, expert_disagreement_flag
  - escalation: human editor review for score < 40
- Layer 7 (integrity seal)
  - output: bisl_hash (hex), timestamp, version_id
  - note: BISL = Blockchain Integrity Seal Layer (Patent #1)
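The sequential structure above can be sketched in a few lines. This is a minimal illustration, not the proprietary pipeline: each layer is assumed to be a callable that merges its outputs into a shared result, and SHA-256 stands in for the actual BISL sealing scheme, whose details are not public.

```python
import hashlib
import json
from datetime import datetime, timezone

def seal_result(result: dict, version_id: str) -> dict:
    """Layer 7 stand-in: attach a hex digest over the canonicalized result.

    SHA-256 over a sorted-key JSON payload illustrates a content-addressed
    integrity seal; the real BISL scheme is proprietary."""
    timestamp = datetime.now(timezone.utc).isoformat()
    payload = json.dumps(
        {"result": result, "timestamp": timestamp, "version_id": version_id},
        sort_keys=True,
    )
    return {
        **result,
        "bisl_hash": hashlib.sha256(payload.encode()).hexdigest(),
        "timestamp": timestamp,
        "version_id": version_id,
    }

def run_pipeline(claim: str, layers, version_id: str = "v1") -> dict:
    """Run the AI-automated layers (1-6) in sequence, then seal (Layer 7)."""
    result = {"claim": claim}
    for layer in layers:  # each layer sees the claim plus prior outputs
        result.update(layer(claim, result))
    return seal_result(result, version_id)
```

A caller would pass the six analysis layers in order, e.g. `run_pipeline(claim, [source_layer, corroboration_layer, ...])`; later layers can read earlier outputs from the shared result.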
AI + Human Review Structure
ANN Verify is AI-assisted — not fully automated. AI handles evidence retrieval and scoring at scale. Human editors maintain editorial control over final verdicts on sensitive or high-impact topics.
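The division of labor above implies a routing rule: AI output goes to a human editor when a claim is sensitive or high-impact, or when Layer 6 escalates (score < 40). The sketch below is a hypothetical illustration; the topic set is invented, and treating the Layer 6 score as the escalation trigger is an assumption.

```python
# Illustrative only: ANN Verify's actual high-stakes topic list is not public.
HIGH_STAKES_TOPICS = {"health", "elections", "public-safety"}

def requires_human_review(topic: str, consensus_score: float) -> bool:
    """Route a draft verdict to human editorial review before publication.

    Triggers on sensitive/high-impact topics, or on the score < 40
    escalation threshold (assumed here to be the Layer 6 score)."""
    return topic in HIGH_STAKES_TOPICS or consensus_score < 40
```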
Verdict Scale & Definitions
Every fact-check results in one of six verdict labels, applied consistently across all topics and political positions.
| VERDICT | SCORE | DEFINITION |
|---|---|---|
| TRUE | 90–100 | Accurate and complete. All key elements verified by multiple independent primary sources. |
| MOSTLY TRUE | 75–89 | Substantially accurate but omits important context or contains minor inaccuracies that don't change the overall meaning. |
| MIXED | 50–74 | Contains both accurate and inaccurate elements; the accompanying analysis identifies which parts the evidence supports and which it does not. |
| MOSTLY FALSE | 25–49 | Primary claim is inaccurate or exaggerated. A small element may be technically accurate but used out of context. |
| FALSE | 0–24 | Directly contradicted by multiple credible, independent primary sources. No element of the core claim holds up. |
| UNVERIFIED | — | Insufficient evidence to render a verdict at the time of analysis. |
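The score bands above reduce to a simple lookup. The function below is a sketch of that mapping, not ANN Verify's implementation; per the table, a missing score yields UNVERIFIED.

```python
from typing import Optional

def verdict_label(score: Optional[int]) -> str:
    """Map a 0-100 score to the six-label verdict scale."""
    if score is None:
        return "UNVERIFIED"  # insufficient evidence at analysis time
    if not 0 <= score <= 100:
        raise ValueError(f"score out of range: {score}")
    if score >= 90:
        return "TRUE"
    if score >= 75:
        return "MOSTLY TRUE"
    if score >= 50:
        return "MIXED"
    if score >= 25:
        return "MOSTLY FALSE"
    return "FALSE"
```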
Evidence Standards
Every source used in a verdict is cited so readers can independently verify our findings.
Known Limitations
Transparency about what we cannot do is as important as confidence in what we can.
- AI language models may carry training biases. We mitigate this through human review and multi-source cross-referencing, but cannot guarantee complete neutrality.
- Claims requiring specialized expertise are escalated to human editorial review, but we do not employ domain-specific experts for every field.
- Real-time retrieval is limited to publicly available web content. Claims supported only by paywalled research receive an UNVERIFIED verdict.
- Our analysis reflects evidence available at the time of publication. New information may emerge — we encourage correction requests when this happens.
- ANN Verify does not evaluate intent or motivation — only factual accuracy.