Predicting Multilingual and Cross-Lingual Lexical Entailment
Determine whether the meaning of a word in one context entails, is entailed by, or is unrelated to the meaning of another word, based on SemEval-2020 Task 2 (Glavaš et al.). Supports graded lexical entailment across multiple languages.
Configuration file: config.yaml
# Predicting Multilingual and Cross-Lingual Lexical Entailment
# Based on Glavaš et al., SemEval-2020 Task 2
# Paper: https://aclanthology.org/2020.semeval-1.2/
# Dataset: https://competitions.codalab.org/competitions/22192
#
# Annotators judge whether the meaning of one word in context entails,
# is entailed by, or does not entail another word. This captures graded
# lexical semantic relations across and within languages.
annotation_task_name: "Predicting Multilingual and Cross-Lingual Lexical Entailment"
task_dir: "."
data_files:
  - sample-data.json
item_properties:
  id_key: "id"
  text_key: "text"
output_annotation_dir: "annotation_output/"
output_annotation_format: "json"
port: 8000
server_name: localhost
annotation_schemes:
  - annotation_type: radio
    name: entailment_judgment
    description: "Does the meaning of word 1 in this context entail the meaning of word 2?"
    labels:
      - "Entails"
      - "Does Not Entail"
      - "Reverse Entailment"
    keyboard_shortcuts:
      "Entails": "1"
      "Does Not Entail": "2"
      "Reverse Entailment": "3"
    tooltips:
      "Entails": "Word 1 in this context entails the meaning of word 2 (word 1 is more specific)"
      "Does Not Entail": "There is no entailment relation between the two words in these contexts"
      "Reverse Entailment": "Word 2 entails word 1 (word 2 is more specific)"
annotation_instructions: |
  You will see a sentence containing a word in context, along with a second word
  and its language. Your task is to:
  1. Read the sentence and understand the meaning of word 1 in that context.
  2. Consider whether the meaning of word 1 entails (implies) the meaning of word 2.
  3. Select "Entails" if word 1 is more specific (e.g., "dog" entails "animal").
  4. Select "Reverse Entailment" if word 2 is more specific.
  5. Select "Does Not Entail" if there is no clear entailment relationship.
html_layout: |
  <div style="padding: 15px; max-width: 800px; margin: auto;">
    <div style="background: #f0f9ff; border: 1px solid #bae6fd; border-radius: 8px; padding: 16px; margin-bottom: 16px;">
      <strong style="color: #0369a1;">Word 1 in Context:</strong>
      <p style="font-size: 16px; line-height: 1.7; margin: 8px 0 0 0;">{{text}}</p>
    </div>
    <div style="background: #fefce8; border: 1px solid #fde68a; border-radius: 8px; padding: 16px; margin-bottom: 16px;">
      <strong style="color: #a16207;">Word 2:</strong>
      <span style="font-size: 18px; font-weight: bold; color: #b45309;">{{word_2}}</span>
    </div>
    <div style="background: #f0fdf4; border: 1px solid #bbf7d0; border-radius: 8px; padding: 16px; margin-bottom: 16px;">
      <strong style="color: #166534;">Language:</strong>
      <span style="font-size: 14px;">{{language}}</span>
    </div>
  </div>
allow_all_users: true
instances_per_annotator: 50
annotation_per_instance: 2
allow_skip: true
skip_reason_required: false
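With `annotation_per_instance: 2`, every item is labeled by two annotators, so inter-annotator agreement is worth checking once annotations come back. A minimal sketch using Cohen's kappa over the three labels from this config; the annotation lists below are hypothetical examples, not real output:

```python
from collections import Counter

LABELS = ["Entails", "Does Not Entail", "Reverse Entailment"]

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' equal-length label lists."""
    assert len(a) == len(b) and a, "need two aligned, non-empty label lists"
    n = len(a)
    # Observed agreement: fraction of items where both annotators agree.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement under chance, from each annotator's label distribution.
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[lab] * cb[lab] for lab in LABELS) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical annotations for four shared items (illustration only).
ann1 = ["Entails", "Entails", "Does Not Entail", "Reverse Entailment"]
ann2 = ["Entails", "Does Not Entail", "Does Not Entail", "Reverse Entailment"]
print(round(cohens_kappa(ann1, ann2), 3))  # → 0.636
```

The exact layout of Potato's annotation output files varies by version, so in practice the two label lists would be read from `annotation_output/` before computing kappa.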
Sample data: sample-data.json
[
{
"id": "le_001",
"text": "The sparrow perched on the fence and began to sing at dawn.",
"word_2": "bird",
"language": "English"
},
{
"id": "le_002",
"text": "She drove her sedan to the office every morning through heavy traffic.",
"word_2": "vehicle",
"language": "English"
}
]
// ... and 8 more items

Get this design
Clone or download from the repository
Quick start:
git clone https://github.com/davidjurgens/potato-showcase.git
cd potato-showcase/semeval/2020/task02-lexical-entailment
potato start config.yaml
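Before launching, it can help to sanity-check that every item in sample-data.json carries the keys the design references: `id` and `text` via `item_properties`, plus `word_2` and `language` used by `html_layout`. A minimal stdlib sketch (the inline sample is the first item from above):

```python
import json

# Keys this design expects on every item: id_key/text_key from the config,
# plus the {{word_2}} and {{language}} template variables in html_layout.
REQUIRED_KEYS = {"id", "text", "word_2", "language"}

def validate(items):
    """Return a list of error messages; an empty list means the data looks usable."""
    errors, seen_ids = [], set()
    for i, item in enumerate(items):
        missing = REQUIRED_KEYS - item.keys()
        if missing:
            errors.append(f"item {i}: missing keys {sorted(missing)}")
        if item.get("id") in seen_ids:
            errors.append(f"item {i}: duplicate id {item.get('id')!r}")
        seen_ids.add(item.get("id"))
    return errors

sample = json.loads("""[
  {"id": "le_001",
   "text": "The sparrow perched on the fence and began to sing at dawn.",
   "word_2": "bird", "language": "English"}
]""")
print(validate(sample))  # → []
```

In practice the full file would be loaded with `json.load(open("sample-data.json"))` and the same check applied before starting the server.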
Found a problem or want to improve this design? Submit an issue.

Related designs
ADMIRE - Multimodal Idiomaticity Recognition
Multimodal idiomaticity detection task requiring annotators to identify whether expressions are used idiomatically or literally, with supporting cue analysis. Based on SemEval-2025 Task 1 (ADMIRE).
AfriSenti - African Language Sentiment
Sentiment analysis for tweets in African languages, classifying text as positive, negative, or neutral. Covers 14 African languages including Amharic, Hausa, Igbo, Yoruba, and Swahili. Based on SemEval-2023 Task 12 (Muhammad et al.).
Argument Reasoning in Civil Procedure
Legal argument reasoning task requiring annotators to answer multiple-choice questions about civil procedure by selecting the best answer and providing legal reasoning. Based on SemEval-2024 Task 5.