Commonsense QA Explanation (ECQA)
Annotate explanations for commonsense QA with positive and negative properties. Based on ECQA (Aggarwal et al., ACL 2021). Explain why an answer is correct and why others are wrong.
Configuration file: config.yaml
# Commonsense QA Explanation (ECQA)
# Based on Aggarwal et al., ACL 2021
# Paper: https://aclanthology.org/2021.acl-long.238/
# Dataset: https://github.com/dair-iitd/ECQA-Dataset
#
# ECQA provides explanations for CommonsenseQA that:
# 1. Justify why the correct answer is right (positive properties)
# 2. Explain why incorrect answers are wrong (negative properties)
# 3. Combine into a free-flow explanation
#
# Explanation Structure:
# - Positives: Facts that support the correct answer
# - Negatives: Facts that refute the incorrect answers
# - Full explanation: Natural language justification
#
# Annotation Guidelines:
# 1. First identify what makes the correct answer correct
# 2. Then identify why each wrong answer fails
# 3. Use commonsense knowledge, not just word matching
# 4. Explanations should be clear to someone unfamiliar with the question
# 5. Focus on the KEY distinguishing properties
annotation_task_name: "Commonsense QA Explanation"
task_dir: "."
data_files:
  - sample-data.json
item_properties:
  id_key: "id"
  text_key: "question"
output_annotation_dir: "annotation_output/"
output_annotation_format: "json"
annotation_schemes:
  # Step 1: Verify the correct answer
  - annotation_type: radio
    name: answer_verification
    description: "Is the marked answer correct?"
    labels:
      - "Yes, correct"
      - "No, incorrect"
      - "Ambiguous"
    tooltips:
      "Yes, correct": "The marked answer is definitely correct"
      "No, incorrect": "The marked answer is wrong"
      "Ambiguous": "Multiple answers could be correct"
  # Step 2: Explanation completeness
  - annotation_type: radio
    name: explanation_quality
    description: "How complete is a good explanation for this question?"
    labels:
      - "Simple - one fact needed"
      - "Moderate - few facts needed"
      - "Complex - many facts needed"
    tooltips:
      "Simple - one fact needed": "One commonsense fact explains the answer"
      "Moderate - few facts needed": "2-3 facts are needed"
      "Complex - many facts needed": "Requires combining multiple pieces of knowledge"
  # Step 3: Knowledge type required
  - annotation_type: radio
    name: knowledge_type
    description: "What type of knowledge is primarily needed to answer this?"
    labels:
      - "Physical/spatial"
      - "Social/cultural"
      - "Temporal"
      - "Causal"
      - "Definitional"
      - "Other"
    tooltips:
      "Physical/spatial": "Knowledge about physical properties or locations"
      "Social/cultural": "Knowledge about social norms or cultural practices"
      "Temporal": "Knowledge about time, sequences, or duration"
      "Causal": "Knowledge about cause and effect"
      "Definitional": "Knowledge about what things are or mean"
      "Other": "Other type of commonsense knowledge"
  # Step 4: Difficulty
  - annotation_type: likert
    name: difficulty
    description: "How difficult is this question for most people?"
    min_value: 1
    max_value: 5
    labels:
      1: "Very easy"
      2: "Easy"
      3: "Moderate"
      4: "Hard"
      5: "Very hard"
allow_all_users: true
instances_per_annotator: 50
annotation_per_instance: 3
allow_skip: true
skip_reason_required: false
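The explanation structure described in the config comments (positives support the correct answer, negatives refute the wrong ones, and both combine into a free-flow justification) can be sketched as a small Python helper. The function and field names here are illustrative, not the official ECQA schema.

```python
# Sketch of an ECQA-style explanation record: positive properties for the
# correct answer, negative properties against the distractors, and a
# combined free-flow explanation. Field names are illustrative only.

def build_explanation(positives, negatives):
    """Combine positive and negative properties into one explanation record."""
    return {
        "positives": positives,                       # facts supporting the correct answer
        "negatives": negatives,                       # facts refuting each wrong answer
        "free_flow": " ".join(positives + negatives), # natural-language justification
    }

record = build_explanation(
    positives=["A windowsill receives direct sunlight through the window."],
    negatives=[
        "A basement is underground and gets little sunlight.",
        "A closet and a refrigerator are enclosed and dark.",
    ],
)
print(record["free_flow"])
```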
Sample data: sample-data.json
[
  {
    "id": "ecqa_001",
    "question": "Where would you put a plant if you want it to get lots of sunlight?",
    "choices": [
      "A) windowsill",
      "B) basement",
      "C) closet",
      "D) refrigerator",
      "E) garden center"
    ],
    "correct_answer": "A) windowsill"
  },
  {
    "id": "ecqa_002",
    "question": "What do people typically do when they feel cold?",
    "choices": [
      "A) open windows",
      "B) wear shorts",
      "C) turn on heater",
      "D) go swimming",
      "E) eat ice cream"
    ],
    "correct_answer": "C) turn on heater"
  }
]
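Items in this format can be sanity-checked before loading them into the tool. The helper below is a sketch, not part of Potato itself; the required key names mirror the config above (`id_key: "id"`, `text_key: "question"`).

```python
# Minimal validator for items shaped like sample-data.json. Illustrative
# only; key names follow the item_properties section of config.yaml.
REQUIRED_KEYS = {"id", "question", "choices", "correct_answer"}

def validate_items(items):
    """Raise ValueError on the first malformed item; return the item count."""
    for item in items:
        missing = REQUIRED_KEYS - item.keys()
        if missing:
            raise ValueError(f"{item.get('id', '?')}: missing keys {missing}")
        if item["correct_answer"] not in item["choices"]:
            raise ValueError(f"{item['id']}: correct_answer not among choices")
    return len(items)

sample = [{
    "id": "ecqa_001",
    "question": "Where would you put a plant if you want it to get lots of sunlight?",
    "choices": ["A) windowsill", "B) basement", "C) closet",
                "D) refrigerator", "E) garden center"],
    "correct_answer": "A) windowsill",
}]
print(validate_items(sample), "item(s) validated")
```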
// ... and 6 more items

Get this design
Clone or download from the repository.

Quick start:

git clone https://github.com/davidjurgens/potato-showcase.git
cd potato-showcase/text/commonsense-ethics/commonsense-qa-explanation
potato start config.yaml
Found a problem or want to improve this design? Open an issue.

Related designs
NLI with Explanations (e-SNLI)
Natural language inference with human explanations. Based on e-SNLI (Camburu et al., NeurIPS 2018). Classify entailment/contradiction/neutral and provide natural language justifications.
Social Chemistry 101 (Social Norms)
Annotate rules-of-thumb for social and moral norms. Based on Forbes et al., EMNLP 2020. Capture 12 dimensions of social judgment including cultural pressure, moral foundations, and legality.
Moral Stories Annotation
Annotate moral reasoning in situated narratives. Based on Emelin et al., EMNLP 2021. Evaluate whether actions adhere to or diverge from social norms given situations and intentions.