IFCN Principle 4 · Methodology Transparency

How We Verify Claims

ANN Verify uses a proprietary 7-Layer AI analysis pipeline — patent pending — to evaluate claims from public figures, institutions, and viral media. Every verdict is traceable to explicit evidence and human editorial review.

Patent Pending · KIPO 2026.02.19 · PCT International Filing in Progress · Inventors: Park Minwoo · Kim Baekjun
// 01

How We Select Claims

We apply a selection filter to ensure resources are focused on claims that matter to the public. Per IFCN requirements, at least 75% of our fact-checks address claims related to public welfare, health, governance, or widely circulated misinformation.

🏛️
Public Figure Statements
Politicians, executives, officials — claims made in speeches, interviews, or official documents.
📊
Statistical Claims
Figures cited in media, scientific papers, or social media that are unverified or misattributed.
🦠
Viral Misinformation
Claims spreading across platforms related to health, safety, elections, or finance.
🌐
Institutional Assertions
Claims by governments, NGOs, or international bodies that are contested or disputed.
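The 75% selection threshold above can be checked mechanically. This is a hypothetical sketch of such a compliance check; the category names and function are illustrative, not ANN Verify's actual code.

```python
# Assumed priority categories matching the four selection buckets above.
PRIORITY = {"public_figure", "statistical", "viral_misinfo", "institutional"}

def priority_share(checks: list[str]) -> float:
    """Fraction of published fact-checks in a priority category."""
    if not checks:
        return 0.0
    return sum(1 for c in checks if c in PRIORITY) / len(checks)

def meets_ifcn_threshold(checks: list[str], threshold: float = 0.75) -> bool:
    """True when the published mix satisfies the stated 75% requirement."""
    return priority_share(checks) >= threshold
```

For example, a month with three priority checks out of four published pieces sits exactly at the 0.75 boundary and passes.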
// 02

The 7-Layer Verification Pipeline

Each claim is processed through seven sequential analysis layers. Layers 1–6 are AI-automated, with Layer 6 escalating low-scoring claims to human review; Layer 7 applies a cryptographic integrity seal. A human editor reviews the final output of all high-stakes verdicts before publication.

L1
Source Credibility Analysis
AI · AUTOMATED
All referenced sources are evaluated for domain authority, publication history, editorial independence, and known bias patterns. Sources are scored 0–100 and weighted in downstream layers.
input: claim text + cited URLs
output: source_scores[], domain_authority[], bias_flag
model: claude-sonnet-4-6 + Tavily search
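The L1 output shape can be sketched as a per-source record whose 0–100 credibility score feeds downstream layers as an evidence weight. The component weights and the bias-halving rule here are assumptions for illustration, not the production scoring model.

```python
from dataclasses import dataclass

@dataclass
class SourceScore:
    url: str
    domain_authority: int        # 0-100, from L1 analysis
    editorial_independence: int  # 0-100, from L1 analysis
    bias_flag: bool              # known bias pattern detected

    def credibility(self) -> float:
        """Weighted 0-100 score; a bias flag halves it (assumed rule)."""
        base = 0.6 * self.domain_authority + 0.4 * self.editorial_independence
        return base * 0.5 if self.bias_flag else base
```

Downstream layers can then multiply each piece of evidence by its source's credibility rather than treating all citations equally.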
L2
Cross-Reference Verification
REAL-TIME SEARCH
The core claim is matched against real-time web evidence via Tavily search. We require corroboration from at least two independent primary sources before a claim receives a definitive verdict.
input: L1 source scores + claim
output: corroboration_count, conflicting_sources[], confidence_band
method: Tavily API · live web retrieval · date-anchored
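The two-source corroboration rule above can be sketched as a small gate: a claim is only eligible for a definitive verdict once at least two independent primary sources agree. The band names are illustrative assumptions; the field names mirror the listed outputs.

```python
def confidence_band(corroborating: list[str], conflicting: list[str]) -> str:
    """Map corroboration counts to a coarse confidence band (assumed bands)."""
    if len(corroborating) >= 2 and not conflicting:
        return "definitive"   # two+ independent primary sources, none opposing
    if len(corroborating) >= 2:
        return "contested"    # corroborated, but credible sources disagree
    return "insufficient"     # fewer than two independent primary sources
```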
L3
Temporal Consistency Check
REAL-TIME
Claims are evaluated against a timeline of events. We check whether the claim was accurate at the time it was made — date-sensitive claims are verified against the specific date of the original statement.
input: claim_date, TODAY date, event_timeline
output: temporal_accuracy, context_change_flag, retroactive_alert
note: TSL time-series tracking (Patent #3)
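The date-anchored check can be sketched as follows: score the claim against the evidence available on the date it was made, then flag it if later events changed the picture. The timeline representation is an illustrative assumption.

```python
from datetime import date

def temporal_check(claim_date: date, event_timeline: list[tuple[date, bool]]):
    """event_timeline: (date, claim_was_accurate_then) pairs, oldest first."""
    # Evidence state as of the original statement's date.
    as_of_claim = [ok for d, ok in event_timeline if d <= claim_date]
    accurate_then = as_of_claim[-1] if as_of_claim else None
    # Most recent evidence state, for detecting context drift.
    latest = event_timeline[-1][1] if event_timeline else None
    return {
        "temporal_accuracy": accurate_then,
        "context_change_flag": accurate_then is not None and accurate_then != latest,
    }
```

A claim that was accurate when made but contradicted by later events gets `temporal_accuracy: True` with the context-change flag raised, rather than a retroactive FALSE.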
L4
Logical Coherence Analysis
AI · AUTOMATED
The internal logic of the claim is evaluated for fallacies, false equivalences, misleading framing, and selective context. A claim can be factually accurate in parts but receive a lower verdict if the framing is designed to mislead.
input: claim structure, L2 evidence set
output: fallacy_flags[], framing_score, misleading_index
model: claude-sonnet-4-6 extended reasoning
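The rule that misleading framing lowers an otherwise factual claim's verdict can be sketched as a score cap. The cap formula here is an assumption for illustration; only the principle (framing can pull a score down, never up) comes from the text above.

```python
def apply_framing_penalty(factual_score: float, misleading_index: float) -> float:
    """misleading_index in [0, 1]; higher means more deceptive framing.
    A maximally misleading frame caps even a solid claim at 50 (assumed cap)."""
    cap = 100 - 50 * misleading_index
    return min(factual_score, cap)
```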
L5
Statistical & Data Verification
DATA LOOKUP
Numerical claims — percentages, growth rates, counts, rankings — are retrieved from authoritative databases and verified against original data sources.
input: numerical_claims[], date_context
output: stat_accuracy[], deviation_pct, manipulation_flag
sources: WHO, World Bank, IMF, government data APIs
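The numeric verification step reduces to comparing the claimed figure against the authoritative value and flagging large deviations. The 10% tolerance here is an assumed default, not a documented threshold.

```python
def verify_stat(claimed: float, authoritative: float, tolerance_pct: float = 10.0):
    """Compare a claimed figure to the authoritative source value."""
    deviation_pct = abs(claimed - authoritative) / abs(authoritative) * 100
    return {
        "deviation_pct": round(deviation_pct, 2),
        "manipulation_flag": deviation_pct > tolerance_pct,
    }
```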
L6
Expert Consensus Evaluation
AI · HUMAN ESCALATION
For scientific, medical, or technical claims, we assess alignment with established expert consensus. Claims that contradict overwhelming scientific consensus receive an automatic flag regardless of other scores.
input: domain_classification, L2+L5 evidence
output: consensus_alignment, expert_disagreement_flag
escalation: Human editor review for score < 40
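The Layer 6 rules above can be sketched as a gate: contradicting overwhelming consensus forces a flag regardless of other scores, and scores below 40 route to a human editor. The threshold comes from the escalation line; the function shape is illustrative.

```python
def consensus_gate(consensus_alignment: int, contradicts_consensus: bool):
    """consensus_alignment: 0-100 alignment with established expert consensus."""
    return {
        "expert_disagreement_flag": contradicts_consensus,
        "escalate_to_human": contradicts_consensus or consensus_alignment < 40,
    }
```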
L7
BISL Integrity Hash
CRYPTOGRAPHIC SEAL
A SHA-256 hash of the complete fact-check result is generated via the browser-native Web Crypto API. This tamper-evident seal ensures any post-publication alteration is immediately detectable.
method: SHA-256 · crypto.subtle.digest() · browser-native
output: bisl_hash (hex), timestamp, version_id
note: BISL = Blockchain Integrity Seal Layer (Patent #1)
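SHA-256 produces the same digest regardless of implementation, so the browser-native `crypto.subtle.digest()` call can be mirrored in Python for a runnable sketch. The sorted-key JSON serialization of the fact-check record is an assumption; only the hash algorithm and the output fields come from the description above.

```python
import hashlib
import json
from datetime import datetime, timezone

def bisl_seal(fact_check: dict) -> dict:
    """Generate a tamper-evident seal for a completed fact-check record."""
    # Canonical serialization (assumed): sorted keys, no whitespace, so the
    # same record always yields byte-identical input to the hash.
    canonical = json.dumps(fact_check, sort_keys=True, separators=(",", ":"))
    return {
        "bisl_hash": hashlib.sha256(canonical.encode("utf-8")).hexdigest(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "version_id": 1,
    }
```

Any post-publication edit, even a one-point score change, yields a different 64-character hash, which is what makes alterations detectable.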
// 03

AI + Human Review Structure

ANN Verify is AI-assisted — not fully automated. AI handles evidence retrieval and scoring at scale. Human editors maintain editorial control over final verdicts on sensitive or high-impact topics.

🤖
What AI Does
Layers 1–6 analysis, real-time evidence retrieval, scoring, cross-referencing, logical fallacy detection, statistical verification, and BISL hash generation.
👁️
What Humans Do
Final editorial review on all verdicts scoring below 50 or flagged as high-stakes. Editors can override AI verdicts, escalate to senior review, and add Editor's Notes.
Editorial Independence Guarantee
All AI-generated analyses are subject to human editorial review before publication. No funder, advertiser, investor, or external party has any influence over the verdict rendered by our editorial process.
// 04

Verdict Scale & Definitions

Every fact-check results in one of six verdict labels, applied consistently across all topics and political positions.

VERDICT        SCORE    DEFINITION
TRUE           90–100   Accurate and complete. All key elements verified by multiple independent primary sources.
MOSTLY TRUE    75–89    Substantially accurate but omits important context or contains minor inaccuracies that don't change the overall meaning.
MIXED          50–74    Contains both accurate and inaccurate elements. Context determines which parts stand.
MOSTLY FALSE   25–49    Primary claim is inaccurate or exaggerated. A small element may be technically accurate but used out of context.
FALSE          0–24     Directly contradicted by multiple credible, independent primary sources. No element of the core claim holds up.
UNVERIFIED     n/a      Insufficient evidence to render a verdict at the time of analysis.
// 05

Evidence Standards

Every source used in a verdict is cited so readers can independently verify our findings.

E1
Primary sources are always preferred. Official government publications, peer-reviewed research, and institutional reports take precedence over secondary sources.
E2
All significant sources are cited with links. Readers can replicate our research. We do not use sources we cannot publicly link to, except where source safety would be compromised.
E3
Date context is mandatory. We note the date of each source and flag when a source predates the claim being evaluated.
E4
Real-time retrieval via Tavily API ensures freshness. Live search is performed at analysis time — we do not rely solely on model training data.
E5
Conflicting sources are disclosed, not suppressed. If credible sources disagree, we present the disagreement transparently.
// 06

Known Limitations

Transparency about what we cannot do is as important as confidence in what we can.

HONEST LIMITATIONS
  • AI language models may carry training biases. We mitigate this through human review and multi-source cross-referencing, but cannot guarantee complete neutrality.
  • Claims requiring specialized expertise are escalated to human editorial review, but we do not employ domain-specific experts for every field.
  • Real-time retrieval is limited to publicly available web content. Claims supported only by paywalled research receive an UNVERIFIED verdict.
  • Our analysis reflects evidence available at the time of publication. New information may emerge — we encourage correction requests when this happens.
  • ANN Verify does not evaluate intent or motivation — only factual accuracy.