Plausible Clarifications of Implicit and Underspecified Instructions
Binary judgment task: annotators decide whether a proposed clarification of an underspecified or ambiguous instruction is plausible or implausible. Based on SemEval-2022 Task 7 (Roth and Anthonio).
Configuration file: config.yaml
# Plausible Clarifications of Implicit and Underspecified Instructions
# Based on Roth and Anthonio, SemEval 2022
# Paper: https://aclanthology.org/2022.semeval-1.133/
# Dataset: https://github.com/Anthonio/SemEval2022-Task7
#
# This task asks annotators to judge whether a proposed clarification
# of an underspecified instruction is plausible or implausible. The
# clarification fills in implicit information that was left unstated
# in the original instruction.
#
# Plausibility Labels:
# - Plausible: The clarification is a reasonable interpretation of the instruction
# - Implausible: The clarification is unreasonable or contradicts the instruction
annotation_task_name: "Plausible Clarifications of Instructions"
task_dir: "."
data_files:
- sample-data.json
item_properties:
id_key: "id"
text_key: "text"
output_annotation_dir: "annotation_output/"
output_annotation_format: "json"
port: 8000
server_name: localhost
annotation_schemes:
- annotation_type: radio
name: plausibility
description: "Is this clarification a plausible interpretation of the instruction?"
labels:
- "Plausible"
- "Implausible"
keyboard_shortcuts:
"Plausible": "1"
"Implausible": "2"
tooltips:
"Plausible": "The clarification is a reasonable, sensible interpretation of what the instruction means"
"Implausible": "The clarification is unreasonable, contradictory, or does not make sense in context"
annotation_instructions: |
You will see an instruction and a proposed clarification of that instruction.
The clarification attempts to make explicit something that was left implicit or underspecified.
Judge whether the clarification is plausible (reasonable) or implausible (unreasonable).
html_layout: |
<div style="padding: 15px; max-width: 800px; margin: auto;">
<div style="background: #f0f9ff; border: 1px solid #bae6fd; border-radius: 8px; padding: 16px; margin-bottom: 16px;">
<strong style="color: #0369a1;">Instruction:</strong>
<p style="font-size: 16px; line-height: 1.7; margin: 8px 0 0 0;">{{text}}</p>
</div>
<div style="background: #fef3c7; border: 1px solid #fde68a; border-radius: 8px; padding: 16px; margin-bottom: 16px;">
<strong style="color: #92400e;">Proposed Clarification:</strong>
<p style="font-size: 16px; line-height: 1.7; margin: 8px 0 0 0;">{{clarification}}</p>
</div>
</div>
allow_all_users: true
instances_per_annotator: 50
annotation_per_instance: 3
allow_skip: true
skip_reason_required: false
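Because `annotation_per_instance: 3` collects three judgments per item, the collected labels can be aggregated by majority vote. The exact layout of potato's output files varies, so the sketch below works from a simplified, hypothetical list of `(item_id, label)` records rather than any particular file format:

```python
from collections import Counter, defaultdict

def majority_vote(records):
    """Aggregate (item_id, label) pairs into one label per item.

    Ties (possible when an item ends up with an even number of
    judgments) are reported as None so they can be re-adjudicated.
    """
    by_item = defaultdict(list)
    for item_id, label in records:
        by_item[item_id].append(label)

    aggregated = {}
    for item_id, labels in by_item.items():
        (top, top_n), *rest = Counter(labels).most_common()
        tied = bool(rest) and rest[0][1] == top_n
        aggregated[item_id] = None if tied else top
    return aggregated

# Hypothetical judgments, three per item as configured above
records = [
    ("plausible_001", "Plausible"),
    ("plausible_001", "Plausible"),
    ("plausible_001", "Implausible"),
    ("plausible_002", "Implausible"),
    ("plausible_002", "Implausible"),
    ("plausible_002", "Implausible"),
]
print(majority_vote(records))
# {'plausible_001': 'Plausible', 'plausible_002': 'Implausible'}
```

With three annotators per instance a two-way label set can never tie, but the `None` branch keeps the helper safe if items are only partially annotated.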
Sample data: sample-data.json
[
{
"id": "plausible_001",
"text": "Add the butter to the pan and heat until melted.",
"clarification": "Heat the butter on medium heat until it is fully liquefied and starts to bubble slightly."
},
{
"id": "plausible_002",
"text": "Add the butter to the pan and heat until melted.",
"clarification": "Heat the butter to 500 degrees Fahrenheit until it turns black and produces thick smoke."
}
]
// ... and 8 more items
Clone or download from the repository
Quick start:

git clone https://github.com/davidjurgens/potato-showcase.git
cd potato-showcase/semeval/2022/task07-plausible-clarifications
potato start config.yaml
Related templates
ADMIRE - Multimodal Idiomaticity Recognition
Multimodal idiomaticity detection task requiring annotators to identify whether expressions are used idiomatically or literally, with supporting cue analysis. Based on SemEval-2025 Task 1 (ADMIRE).
AfriSenti - African Language Sentiment
Sentiment analysis for tweets in African languages, classifying text as positive, negative, or neutral. Covers 14 African languages including Amharic, Hausa, Igbo, Yoruba, and Swahili. Based on SemEval-2023 Task 12 (Muhammad et al.).
Argument Reasoning in Civil Procedure
Legal argument reasoning task requiring annotators to answer multiple-choice questions about civil procedure by selecting the best answer and providing legal reasoning. Based on SemEval-2024 Task 5.