OpenTruth

Our Methodology

How OpenTruth verifies political discourse

We do not evaluate political discourse. We structure it as data.

OpenTruth is an automated fact-checking infrastructure. Our pipeline analyzes videos of political speeches, parliamentary debates, and media appearances to extract, verify, and score factual claims.

Claim Classification

11 claim types (I1-I11): Attribution, Event, Factual Definition, Interpretation, Legal, Scientific, Statistical, Comparative, Predictive/Promise, Historical, Causal

24 themes (D1-D24): Politics, Economy, Health, Environment, etc.

When a claim falls under multiple types, we retain the one that determines the verification strategy.
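The taxonomy and the tie-breaking rule can be sketched in code. The assignment of codes I1-I11 to type names below assumes the page lists the types in code order, and the priority ordering used to break ties is purely illustrative — the page does not publish OpenTruth's actual rule:

```python
# Claim-type taxonomy. Code-to-name mapping is an assumption
# (types listed in the order the page gives them).
CLAIM_TYPES = {
    "I1": "Attribution", "I2": "Event", "I3": "Factual Definition",
    "I4": "Interpretation", "I5": "Legal", "I6": "Scientific",
    "I7": "Statistical", "I8": "Comparative", "I9": "Predictive/Promise",
    "I10": "Historical", "I11": "Causal",
}

# Hypothetical priority: when a claim matches several types, keep the
# one whose verification strategy is most specific. Illustrative only.
VERIFICATION_PRIORITY = [
    "I7", "I6", "I5", "I10", "I2", "I8", "I9", "I1", "I3", "I11", "I4",
]

def primary_type(matched: set[str]) -> str:
    """Pick the single type that determines the verification strategy."""
    return next(code for code in VERIFICATION_PRIORITY if code in matched)
```

For example, a claim tagged both Statistical and Interpretation would be verified as Statistical, since that type dictates a concrete verification strategy (checking the numbers).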


Ingestion and Transcription

Each video is automatically transcribed (YouTube Transcript API or OpenAI Whisper). Raw text is segmented into coherent thematic units through automatic topic change detection.
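One common way to detect topic changes — and a plausible reading of "automatic topic change detection," though the page does not specify the method — is to compare sentence embeddings of consecutive units and start a new segment when their similarity drops below a threshold. A minimal sketch with precomputed embeddings (the threshold value is an assumption):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def segment(sentences, embeddings, threshold=0.5):
    """Start a new thematic unit when consecutive embeddings diverge.
    `threshold` is illustrative, not a published pipeline value."""
    segments = [[sentences[0]]]
    for prev, cur, sent in zip(embeddings, embeddings[1:], sentences[1:]):
        if cosine(prev, cur) < threshold:
            segments.append([sent])   # topic change: open a new segment
        else:
            segments[-1].append(sent)
    return segments
```

With toy 2-D embeddings, two similar sentences followed by a dissimilar one yield two segments.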


Claim Extraction

AI agents (GPT-5) identify verifiable claims in each segment. Each claim is classified according to our taxonomy of 11 types and 24 themes.
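A minimal record for an extracted claim might look like the following. The field names and shape are illustrative — the actual pipeline schema is not published:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """One verifiable claim extracted from a transcript segment.
    Fields are an assumed shape, not OpenTruth's actual schema."""
    text: str
    claim_type: str   # one of the 11 types, I1-I11
    theme: str        # one of the 24 themes, D1-D24
    speaker: str
    segment_id: int
```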


Source Research

For each claim, our agents search for a minimum of 4 independent sources following a reliability hierarchy from public institutions to media outlets.


Correspondence Evaluation

Each claim-source pair is evaluated along three quality dimensions: independence, proximity to primary source, and methodological quality.


NLI Scoring

A language model (mDeBERTa-v3, multilingual) automatically evaluates the logical consistency between each claim and its sources.


Verdicts

Each claim receives a verdict on an 8-level scale, from True to Satire, based on the combined NLI scores and source reliability.

Source Research

For each claim, our agents search for a minimum of 4 independent sources following a reliability hierarchy:

Public institutions (95%): government websites, EU, UN

Scientific publications (90%): PubMed, CNRS, universities

Official databases (85%): INSEE, Eurostat, FSO

Media (60%): AFP, Reuters, Le Monde, BBC

NGOs and think tanks (55%): Amnesty, Brookings

We systematically prioritize primary sources. Anonymous or unverifiable sources are excluded.
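The hierarchy and the minimum-source rule can be sketched as follows. The reliability weights come from the page; the category keys, the input shape, and the filtering logic are illustrative assumptions:

```python
# Reliability weights from the hierarchy above (values from the page).
RELIABILITY = {
    "public_institution": 0.95,
    "scientific_publication": 0.90,
    "official_database": 0.85,
    "media": 0.60,
    "ngo_think_tank": 0.55,
}

MIN_SOURCES = 4

def enough_sources(sources):
    """Check the minimum of 4 independent sources for a claim.
    `sources` is a list of (category, is_anonymous) pairs — an assumed
    shape. Anonymous or unverifiable sources are excluded, per the page."""
    usable = [cat for cat, anon in sources if not anon and cat in RELIABILITY]
    return len(usable) >= MIN_SOURCES
```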

Correspondence Evaluation

  • Independence: Is the source independent from the subject matter?
  • Proximity to primary source: Primary (1), secondary (2), or tertiary (3)?
  • Methodological quality: Is the methodology transparent and verifiable?

Correspondence is classified as: confirms, contradicts, partial, or no data.
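One way the three dimensions could feed into a per-pair quality weight is sketched below. The record shape and the weighting formula (halving the weight for non-independent sources or opaque methodology, scaling down by proximity level) are hypothetical — the page names the dimensions but not how they are combined:

```python
from dataclasses import dataclass

@dataclass
class Correspondence:
    """Quality dimensions for one claim-source pair (assumed shape)."""
    independent: bool
    proximity: int          # 1 = primary, 2 = secondary, 3 = tertiary
    methodology_ok: bool
    stance: str             # "confirms" | "contradicts" | "partial" | "no data"

def weight(c: Correspondence) -> float:
    """Hypothetical quality weight: closer, independent, transparent
    sources count more. The exact formula is an illustration."""
    w = 1.0 / c.proximity
    if not c.independent:
        w *= 0.5
    if not c.methodology_ok:
        w *= 0.5
    return w
```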

NLI Scoring

Natural Language Inference

A language model (mDeBERTa-v3, multilingual) automatically evaluates the logical consistency between each claim and its sources. The score combines:

1. NLI result (entailment/contradiction/neutral)
2. Source reliability (weighted by the G1 hierarchy)
3. Semantic relevance (cosine similarity of embeddings)
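A simple way to combine the three components is a weighted sum, with the NLI label rescaled to [0, 1]. The component weights below are illustrative assumptions — OpenTruth does not publish its values:

```python
def combined_score(nli, reliability, relevance, weights=(0.5, 0.3, 0.2)):
    """Weighted combination of the three scoring components.
    nli: +1 entailment, -1 contradiction, 0 neutral (rescaled to [0, 1]).
    reliability, relevance: already in [0, 1].
    The default weights are an assumption, not published values."""
    nli01 = (nli + 1) / 2
    w1, w2, w3 = weights
    return w1 * nli01 + w2 * reliability + w3 * relevance
```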

Verdicts

Each claim receives a verdict on an 8-level scale:

True (H1.1): Confirmed by at least 2 concordant reliable sources

Mostly True (H1.2): Broadly confirmed with minor nuances

Mixed (H1.3): Contains both true and false elements

Misleading (H1.4): Technically accurate but missing or distorted context

False (H1.5): Contradicted by reliable sources

Unverifiable (H1.6): Cannot be confirmed or refuted with available sources

Opinion (H1.7): Value judgment, not factually verifiable

Satire (H1.8): Humorous or satirical content identified
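A decision rule over the aggregated evidence might look like the sketch below. Only the "at least 2 concordant reliable sources" condition for True comes from the page; every other threshold is an illustrative assumption, and Misleading (H1.4) is omitted because it requires context analysis that source counts alone cannot capture:

```python
def verdict(confirms, contradicts, is_opinion=False, is_satire=False):
    """Map evidence counts to the 8-level scale (thresholds illustrative).
    confirms / contradicts: counts of reliable sources on each side."""
    if is_satire:
        return "Satire"          # H1.8
    if is_opinion:
        return "Opinion"         # H1.7
    if confirms == 0 and contradicts == 0:
        return "Unverifiable"    # H1.6
    if confirms >= 2 and contradicts == 0:
        return "True"            # H1.1 (condition from the page)
    if confirms > contradicts:
        return "Mostly True"     # H1.2
    if contradicts > 0 and confirms == 0:
        return "False"           # H1.5
    return "Mixed"               # H1.3
```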

Transparency and Limitations

What we do

  • We publish all sources used with their URLs
  • We apply the same standards to all political actors
  • Our verdicts are interoperable with the ClaimReview standard (schema.org/Google)
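ClaimReview interoperability means each verdict can be emitted as schema.org JSON-LD. The sketch below uses real ClaimReview fields (`claimReviewed`, `reviewRating`, `alternateName`), but the numeric mapping of verdicts onto a 1-5 `ratingValue` scale is an assumption:

```python
import json

def to_claim_review(claim_text, verdict_name, rating_value, url):
    """Emit a minimal ClaimReview JSON-LD record (schema.org).
    rating_value mapping (e.g. 5 = True ... 1 = False) is illustrative."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "ClaimReview",
        "claimReviewed": claim_text,
        "reviewRating": {
            "@type": "Rating",
            "ratingValue": rating_value,
            "bestRating": 5,
            "worstRating": 1,
            "alternateName": verdict_name,  # human-readable verdict label
        },
        "url": url,
    }, ensure_ascii=False)
```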

What we do not do

  • We do not take political positions
  • We do not censor or recommend censorship
  • We do not claim infallibility — our models have known limitations

Known limitations

  • Detecting "partially true" claims is our biggest technical challenge
  • Sources in uncovered languages (outside FR/EN/DE) may be missed
  • The pipeline is optimized for Swiss and French political discourse

Corrections

If you identify an error in our verifications, you can contact us at contact[at]opentruth[dot]ch.

We commit to:

  • Correcting any verified error within 48 hours
  • Publishing a visible correction note on the affected verification
  • Documenting corrections in a public annual register