SNLI - Textual Entailment
Natural language inference over sentence pairs from the Stanford Natural Language Inference corpus (Bowman et al., EMNLP 2015). Annotators classify the relationship between a premise and hypothesis as entailment, contradiction, or neutral.
Configuration file: config.yaml
# SNLI - Textual Entailment
# Based on Bowman et al., EMNLP 2015
# Paper: https://aclanthology.org/D15-1075/
# Dataset: https://nlp.stanford.edu/projects/snli/
#
# This task presents a premise sentence and a hypothesis sentence.
# Annotators determine the inferential relationship between them:
# entailment, contradiction, or neutral.
#
# Label Definitions:
# - Entailment: The hypothesis is definitely true given the premise
# - Contradiction: The hypothesis is definitely false given the premise
# - Neutral: The hypothesis may or may not be true given the premise
#
# Annotation Guidelines:
# 1. Read the premise sentence carefully
# 2. Read the hypothesis sentence
# 3. Determine the relationship between them
# 4. Provide a brief reasoning for your choice
annotation_task_name: "SNLI - Textual Entailment"
task_dir: "."
data_files:
  - sample-data.json
item_properties:
  id_key: "id"
  text_key: "text"
output_annotation_dir: "annotation_output/"
output_annotation_format: "json"
port: 8000
server_name: localhost
annotation_schemes:
  # Step 1: Classify the relationship
  - annotation_type: radio
    name: nli_label
    description: "What is the inferential relationship between the premise and hypothesis?"
    labels:
      - "Entailment"
      - "Contradiction"
      - "Neutral"
    keyboard_shortcuts:
      "Entailment": "1"
      "Contradiction": "2"
      "Neutral": "3"
    tooltips:
      "Entailment": "The hypothesis is definitely true given the premise"
      "Contradiction": "The hypothesis is definitely false given the premise"
      "Neutral": "The hypothesis may or may not be true given the premise"
  # Step 2: Provide reasoning
  - annotation_type: text
    name: reasoning
    description: "Briefly explain your reasoning for the chosen label"
annotation_instructions: |
  You will be shown two sentences: a premise and a hypothesis. Your task is to:
  1. Determine the relationship between the premise and the hypothesis.
  2. Select one of: Entailment, Contradiction, or Neutral.
  3. Provide a brief explanation for your choice.
  - Entailment: If the premise is true, the hypothesis must also be true.
  - Contradiction: If the premise is true, the hypothesis must be false.
  - Neutral: The premise does not give enough information to determine if the hypothesis is true or false.
html_layout: |
  <div style="padding: 15px; max-width: 800px; margin: auto;">
    <div style="background: #f0f9ff; border: 1px solid #bae6fd; border-radius: 8px; padding: 16px; margin-bottom: 16px;">
      <strong style="color: #0369a1; font-size: 14px; text-transform: uppercase; letter-spacing: 0.5px;">Premise:</strong>
      <p style="font-size: 16px; line-height: 1.7; margin: 8px 0 0 0;">{{text}}</p>
    </div>
    <div style="background: #fefce8; border: 1px solid #fde68a; border-radius: 8px; padding: 16px; margin-bottom: 16px;">
      <strong style="color: #a16207; font-size: 14px; text-transform: uppercase; letter-spacing: 0.5px;">Hypothesis:</strong>
      <p style="font-size: 16px; line-height: 1.7; margin: 8px 0 0 0;">{{hypothesis}}</p>
    </div>
  </div>
allow_all_users: true
instances_per_annotator: 50
annotation_per_instance: 2
allow_skip: true
skip_reason_required: false
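Since annotation_per_instance is 2, every item receives labels from two annotators, and a quick agreement check is a natural first analysis step. The sketch below assumes a simplified shape for the collected labels (one dict per annotator mapping item ids to labels); Potato's actual files in annotation_output/ will need a small loading step adapted to their real format.

```python
def pairwise_agreement(ann_a, ann_b):
    """Fraction of shared item ids on which two annotators chose the same label.

    ann_a / ann_b: dicts mapping item id -> chosen nli_label.
    """
    shared = ann_a.keys() & ann_b.keys()
    if not shared:
        return 0.0
    agree = sum(1 for item_id in shared if ann_a[item_id] == ann_b[item_id])
    return agree / len(shared)
```

For a more robust measure on larger batches, chance-corrected statistics such as Cohen's kappa are preferable to raw agreement.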
Sample data: sample-data.json
[
  {
    "id": "snli_001",
    "text": "A man wearing a hard hat is dancing on a street corner.",
    "hypothesis": "A man is outdoors."
  },
  {
    "id": "snli_002",
    "text": "Two women are embracing while holding to-go packages.",
    "hypothesis": "The women are fighting each other."
  }
]
// ... and 8 more items
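Each item must carry the keys the config refers to: "id" and "text" (named by item_properties) plus "hypothesis", which the {{hypothesis}} placeholder in html_layout expects. A minimal pre-flight check along these lines can catch malformed items before the server renders them (the script is illustrative and not part of Potato itself):

```python
import json

# Keys referenced by item_properties (id_key, text_key) and by the
# {{hypothesis}} placeholder in html_layout.
REQUIRED_KEYS = {"id", "text", "hypothesis"}

def missing_keys(items):
    """Return (index, sorted missing keys) for each item lacking a required field."""
    problems = []
    for i, item in enumerate(items):
        missing = REQUIRED_KEYS - item.keys()
        if missing:
            problems.append((i, sorted(missing)))
    return problems

if __name__ == "__main__":
    with open("sample-data.json", encoding="utf-8") as f:
        print(missing_keys(json.load(f)) or "all items OK")
```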
Get this design
Clone or download from the repository.
Quick start:
git clone https://github.com/davidjurgens/potato-showcase.git
cd potato-showcase/text/natural-language-inference/snli-textual-entailment
potato start config.yaml
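To grow the task beyond the bundled sample items, you can convert records from the official SNLI distribution into the id/text/hypothesis item format used here. The field names below (pairID, sentence1, sentence2, gold_label) are from the snli_1.0 JSONL release; verify them against the files you actually download.

```python
import json

def snli_to_items(jsonl_lines, limit=10):
    """Convert SNLI JSONL records into the id/text/hypothesis item format."""
    items = []
    for line in jsonl_lines:
        record = json.loads(line)
        if record.get("gold_label") == "-":  # no annotator consensus; skip
            continue
        items.append({
            "id": record["pairID"],
            "text": record["sentence1"],
            "hypothesis": record["sentence2"],
        })
        if len(items) >= limit:
            break
    return items
```

Dump the result with json.dump(items, f, indent=2) to produce a drop-in replacement for sample-data.json.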
Found an issue or want to improve this design? Open an issue.
Related Designs
MultiNLI - Multi-Genre Natural Language Inference
Natural language inference across multiple genres of text, based on the Multi-Genre NLI corpus (Williams et al., NAACL 2018). Annotators classify premise-hypothesis relationships with genre-diverse examples from fiction, government, travel, and more.
Safe Biomedical NLI
Safe biomedical natural language inference task requiring annotators to determine entailment or contradiction between clinical premise-hypothesis pairs and provide reasoning. Based on SemEval-2024 Task 2 (Safe Biomedical NLI).
Argument Reasoning in Civil Procedure
Legal argument reasoning task requiring annotators to answer multiple-choice questions about civil procedure by selecting the best answer and providing legal reasoning. Based on SemEval-2024 Task 5.