XNLI - Cross-Lingual Natural Language Inference
Natural language inference annotation for cross-lingual evaluation, based on the XNLI benchmark (Conneau et al., EMNLP 2018). Annotators classify the relationship between a premise and hypothesis as entailment, neutral, or contradiction.
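To make the three-way label scheme concrete, one premise can pair with hypotheses of each type. The pairs below are invented for illustration and are not taken from XNLI:

```python
# Invented example pairs illustrating the three NLI labels (not from XNLI)
premise = "A man is playing a guitar on stage."
examples = {
    "Entailment": "A man is making music.",            # must be true given the premise
    "Neutral": "The man is a professional musician.",  # might or might not be true
    "Contradiction": "The man is sleeping at home.",   # cannot be true given the premise
}
for label, hypothesis in examples.items():
    print(f"{label}: {hypothesis}")
```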
Configuration file: config.yaml
# XNLI - Cross-Lingual Natural Language Inference
# Based on Conneau et al., EMNLP 2018
# Paper: https://aclanthology.org/D18-1269/
# Dataset: https://github.com/facebookresearch/XNLI
#
# This task presents a premise and hypothesis pair for natural language
# inference classification. The XNLI benchmark extends MultiNLI to 15
# languages for cross-lingual evaluation. This configuration uses
# English sentence pairs representative of the benchmark.
#
# Labels:
# - Entailment: The hypothesis is definitely true given the premise
# - Neutral: The hypothesis might be true given the premise
# - Contradiction: The hypothesis is definitely false given the premise
#
# Annotation Guidelines:
# 1. Read the premise carefully
# 2. Read the hypothesis and determine its relationship to the premise
# 3. Select the appropriate label
# 4. Optionally provide reasoning or notes about your decision
# 5. Base your judgment only on the information in the premise
annotation_task_name: "XNLI - Cross-Lingual Natural Language Inference"
task_dir: "."
data_files:
  - sample-data.json
item_properties:
  id_key: "id"
  text_key: "text"
output_annotation_dir: "annotation_output/"
output_annotation_format: "json"
port: 8000
server_name: localhost
annotation_schemes:
  # Step 1: Classify the relationship
  - annotation_type: radio
    name: nli_label
    description: "What is the relationship between the premise and hypothesis?"
    labels:
      - "Entailment"
      - "Neutral"
      - "Contradiction"
    keyboard_shortcuts:
      "Entailment": "1"
      "Neutral": "2"
      "Contradiction": "3"
    tooltips:
      "Entailment": "The hypothesis is definitely true given the premise"
      "Neutral": "The hypothesis might or might not be true given the premise"
      "Contradiction": "The hypothesis is definitely false given the premise"
  # Step 2: Optional reasoning notes
  - annotation_type: text
    name: reasoning_notes
    description: "Optionally explain your reasoning for the chosen label"
annotation_instructions: |
  You will be shown a premise sentence and a hypothesis sentence. Your task is to determine
  the relationship between them:
  - **Entailment**: If the premise is true, the hypothesis must also be true.
  - **Neutral**: The hypothesis could be true or false given the premise; there is not enough information.
  - **Contradiction**: If the premise is true, the hypothesis must be false.
  Base your judgment only on the premise. Do not use outside knowledge. Optionally, you may
  add a note explaining your reasoning.
html_layout: |
  <div style="padding: 15px; max-width: 800px; margin: auto;">
    <div style="background: #eff6ff; border: 1px solid #bfdbfe; border-radius: 8px; padding: 16px; margin-bottom: 12px;">
      <strong style="color: #1d4ed8;">Premise:</strong>
      <p style="font-size: 16px; line-height: 1.7; margin: 8px 0 0 0;">{{text}}</p>
    </div>
    <div style="background: #fefce8; border: 1px solid #fde68a; border-radius: 8px; padding: 16px; margin-bottom: 12px;">
      <strong style="color: #a16207;">Hypothesis:</strong>
      <p style="font-size: 16px; line-height: 1.7; margin: 8px 0 0 0;">{{hypothesis}}</p>
    </div>
    <div style="text-align: right; font-size: 13px; color: #6b7280;">
      Language: {{language}}
    </div>
  </div>
allow_all_users: true
instances_per_annotator: 50
annotation_per_instance: 2
allow_skip: true
skip_reason_required: false
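Because `annotation_per_instance: 2` gives each item two independent labels, a natural reliability check is Cohen's kappa over the paired labels. A minimal sketch, assuming the paired labels have already been extracted from the annotation output (the `pairs` data below is hypothetical, not Potato's actual output format):

```python
from collections import Counter

LABELS = ["Entailment", "Neutral", "Contradiction"]

def cohens_kappa(pairs):
    """Cohen's kappa for two annotators, given (label_a, label_b) pairs."""
    n = len(pairs)
    # Observed agreement: fraction of items where both annotators agree
    observed = sum(a == b for a, b in pairs) / n
    # Expected agreement under chance, from each annotator's label distribution
    counts_a = Counter(a for a, _ in pairs)
    counts_b = Counter(b for _, b in pairs)
    expected = sum(counts_a[l] * counts_b[l] for l in LABELS) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical paired labels for five items
pairs = [
    ("Entailment", "Entailment"),
    ("Neutral", "Neutral"),
    ("Contradiction", "Contradiction"),
    ("Entailment", "Neutral"),
    ("Neutral", "Neutral"),
]
print(round(cohens_kappa(pairs), 3))  # → 0.688
```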
Sample data: sample-data.json
[
  {
    "id": "xnli_001",
    "text": "A woman is walking through a busy marketplace carrying bags of vegetables.",
    "hypothesis": "A woman is shopping for food.",
    "language": "English"
  },
  {
    "id": "xnli_002",
    "text": "The children played soccer in the park until it started raining.",
    "hypothesis": "The children were indoors the entire day.",
    "language": "English"
  }
]
// ... and 8 more items
Get this design
Clone or download from the repository
Quick start:
git clone https://github.com/davidjurgens/potato-showcase.git
cd potato-showcase/text/cross-lingual/xnli-cross-lingual-nli
potato start config.yaml
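Before starting the server, it can help to check that every item in the data file carries the fields the config (`id_key`, `text_key`) and the `html_layout` template reference. A minimal sketch, where the `validate` helper is illustrative and not part of Potato:

```python
import json

# Keys expected by config.yaml and the html_layout template
REQUIRED = {"id", "text", "hypothesis", "language"}

def validate(items):
    """Raise ValueError if any item is missing a required key or reuses an id."""
    ids = [item["id"] for item in items]
    if len(ids) != len(set(ids)):
        raise ValueError("duplicate ids in data file")
    for item in items:
        missing = REQUIRED - item.keys()
        if missing:
            raise ValueError(f"{item['id']}: missing keys {missing}")
    return len(items)

# Typical usage:
#   with open("sample-data.json") as f:
#       print(validate(json.load(f)), "items OK")
```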
Related designs
Bias Benchmark for QA (BBQ)
Annotate question-answering examples designed to probe social biases. Based on BBQ (Parrish et al., Findings of ACL 2022). Annotators select the correct answer given a context, assess the direction of bias in the question, categorize the type of bias, and explain their reasoning.
BIG-Bench Task Evaluation
Evaluate language model responses on diverse reasoning tasks from the BIG-Bench benchmark. Annotators assess correctness, provide reasoning explanations, and rate confidence for model outputs across multiple task categories.
Code Review Annotation (CodeReviewer)
Annotation of code review activities based on the CodeReviewer benchmark. Annotators identify issues in code diffs, classify defect types, assign severity levels, make review decisions, and provide natural language review comments, supporting research in automated code review and software engineering.