Multilingual Fact-Checked Claim Retrieval
Fact-checked claim retrieval task requiring annotators to assess whether a fact-check article matches a given claim, supporting multilingual evaluation. Based on SemEval-2025 Task 7.
Configuration file: config.yaml
# Multilingual Fact-Checked Claim Retrieval
# Based on Nakov et al., SemEval 2025
# Paper: https://aclanthology.org/volumes/2025.semeval-1/
# Dataset: https://github.com/SemEval/semeval-2025-task7
#
# This task asks annotators to assess whether a previously fact-checked
# article matches a given claim. Annotators judge the degree of match
# and provide justification for their decision.

annotation_task_name: "Multilingual Fact-Checked Claim Retrieval"
task_dir: "."

data_files:
  - sample-data.json

item_properties:
  id_key: "id"
  text_key: "text"

output_annotation_dir: "annotation_output/"
output_annotation_format: "json"

port: 8000
server_name: localhost

annotation_schemes:
  - annotation_type: radio
    name: match_judgment
    description: "How well does the fact-check match the claim?"
    labels:
      - "Exact Match"
      - "Partial Match"
      - "Related But Different"
      - "Not Related"
    keyboard_shortcuts:
      "Exact Match": "1"
      "Partial Match": "2"
      "Related But Different": "3"
      "Not Related": "4"
    tooltips:
      "Exact Match": "The fact-check directly addresses the exact same claim"
      "Partial Match": "The fact-check addresses part of the claim or a closely related variant"
      "Related But Different": "The fact-check covers the same topic but a different specific claim"
      "Not Related": "The fact-check is about a different topic or claim entirely"
  - annotation_type: text
    name: justification
    description: "Provide a brief justification for your match judgment."

annotation_instructions: |
  You will be shown a claim and a fact-check article. Your task is to:
  1. Read the claim carefully.
  2. Read the fact-check text and consider its content.
  3. Judge how well the fact-check matches the claim using the four categories.
  4. Provide a brief justification explaining your reasoning.

html_layout: |
  <div style="padding: 15px; max-width: 800px; margin: auto;">
    <div style="background: #f0f9ff; border: 1px solid #bae6fd; border-radius: 8px; padding: 16px; margin-bottom: 16px;">
      <strong style="color: #0369a1;">Claim:</strong>
      <p style="font-size: 16px; line-height: 1.7; margin: 8px 0 0 0;">{{text}}</p>
    </div>
    <div style="background: #fefce8; border: 1px solid #fde68a; border-radius: 8px; padding: 16px; margin-bottom: 16px;">
      <strong style="color: #a16207;">Fact-Check:</strong>
      <p style="font-size: 15px; line-height: 1.6; margin: 8px 0 0 0;">{{fact_check}}</p>
    </div>
    <div style="background: #f0fdf4; border: 1px solid #bbf7d0; border-radius: 8px; padding: 12px;">
      <strong style="color: #166534;">Source Language:</strong> <span>{{source_language}}</span>
    </div>
  </div>

allow_all_users: true
instances_per_annotator: 50
annotation_per_instance: 2
allow_skip: true
skip_reason_required: false
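The `{{...}}` placeholders in `html_layout` are filled in from each item's fields at render time. As a minimal sketch of that substitution (plain regex replacement; Potato's actual templating engine may behave differently, e.g. around missing fields):

```python
import re

def render_layout(layout: str, item: dict) -> str:
    """Replace each {{field}} placeholder with the item's value for that field.

    Unknown fields are replaced with an empty string (an assumption made
    here for illustration, not necessarily Potato's behavior).
    """
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(item.get(m.group(1), "")), layout)

# A stripped-down layout using the same placeholders as the config above.
layout = "<p>{{text}}</p><p>{{fact_check}}</p><span>{{source_language}}</span>"
item = {
    "id": "factclaim_001",
    "text": "COVID-19 vaccines contain microchips that track people's movements.",
    "fact_check": "Fact-checkers have verified the vaccines contain no microchips.",
    "source_language": "English",
}
html = render_layout(layout, item)
```

This makes explicit which item keys the layout depends on: `text`, `fact_check`, and `source_language` must all be present in the data file.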
Sample data: sample-data.json
[
{
"id": "factclaim_001",
"text": "COVID-19 vaccines contain microchips that track people's movements.",
"fact_check": "Multiple fact-checking organizations have verified that COVID-19 vaccines do not contain microchips. The vaccines contain mRNA or viral vector components, lipids, salts, and sugars. No tracking devices are present in any approved vaccine.",
"source_language": "English"
},
{
"id": "factclaim_002",
"text": "The Great Wall of China is visible from space with the naked eye.",
"fact_check": "NASA astronauts have confirmed that the Great Wall of China is not visible from low Earth orbit without aid. The wall is narrow and blends with the natural landscape, making it indistinguishable from orbit.",
"source_language": "English"
}
]
// ... and 8 more items

Download this design
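Before starting the server, it can help to sanity-check that every item carries the fields the config and layout reference (`id` and `text` from `item_properties`, plus `fact_check` and `source_language` used in `html_layout`). A small standalone check (the key set is read off the config above, not a Potato API):

```python
import json

# Fields the config's item_properties and html_layout placeholders rely on.
REQUIRED_KEYS = {"id", "text", "fact_check", "source_language"}

def check_items(items):
    """Return (item_id, missing_keys) pairs for items lacking required fields."""
    problems = []
    for i, item in enumerate(items):
        missing = REQUIRED_KEYS - item.keys()
        if missing:
            problems.append((item.get("id", f"index {i}"), sorted(missing)))
    return problems

# Same shape as sample-data.json; the second item is deliberately incomplete.
items = json.loads("""[
    {"id": "factclaim_001",
     "text": "COVID-19 vaccines contain microchips that track people's movements.",
     "fact_check": "Fact-checkers have verified the vaccines contain no microchips.",
     "source_language": "English"},
    {"id": "factclaim_003",
     "text": "An item missing its fact-check fields."}
]""")
print(check_items(items))  # [('factclaim_003', ['fact_check', 'source_language'])]
```

Running this over the real `sample-data.json` before `potato start` catches items that would render with empty fact-check or language panels.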
Clone or download from the repository
Quick start:
git clone https://github.com/davidjurgens/potato-showcase.git
cd potato-showcase/semeval/2025/task07-factchecked-claim-retrieval
potato start config.yaml
Found a problem, or want to improve this design? Open an issue.

Related Designs
PromiseEval - Promise Verification in Political Discourse
Verification of political promises, requiring annotators to identify whether promises were made, kept, or broken based on political statements and context. Based on SemEval-2025 Task 6.
Argument Reasoning in Civil Procedure
Legal argument reasoning task requiring annotators to answer multiple-choice questions about civil procedure by selecting the best answer and providing legal reasoning. Based on SemEval-2024 Task 5.
BRAINTEASER - Commonsense-Defying QA
Lateral thinking and commonsense-defying question answering task requiring annotators to select answers to brain teasers that defy default commonsense assumptions and provide explanations. Based on SemEval-2024 Task 9 (BRAINTEASER).