Rationale Annotation (ERASER)
Annotate rationales (evidence spans) that justify classification decisions. Based on the ERASER benchmark (DeYoung et al., ACL 2020). Identify which parts of the text are necessary and sufficient for making a prediction.
text annotation
Configuration File: config.yaml
# Rationale Annotation (ERASER-style)
# Based on DeYoung et al., ACL 2020
# Paper: https://aclanthology.org/2020.acl-main.408/
# Benchmark: https://www.eraserbenchmark.com
#
# Rationales are text spans that provide evidence for a classification decision.
# Good rationales should be:
# - SUFFICIENT: The rationale alone should support the label
# - NECESSARY: Removing the rationale should change the prediction
# - MINIMAL: Don't include extra context that isn't needed
#
# Annotation Guidelines:
# 1. First make the classification decision for the full text
# 2. Then identify which spans were most important for that decision
# 3. Ask: "If I only saw these spans, would I make the same decision?"
# 4. Ask: "If these spans were removed, would the label change?"
# 5. Prefer shorter, more precise spans over longer ones
# 6. Multiple separate rationales are okay
#
# Common Patterns:
# - For sentiment: words expressing opinion, emotion
# - For topics: key phrases indicating subject matter
# - For NLI: phrases showing contradiction or entailment
port: 8000
server_name: localhost
task_name: "Rationale Annotation"
data_files:
  - sample-data.json
id_key: id
text_key: text
output_file: annotations.json
annotation_schemes:
  # Step 1: Make the classification
  - annotation_type: radio
    name: classification
    description: "What is the sentiment of this movie review?"
    labels:
      - "Positive"
      - "Negative"
    tooltips:
      "Positive": "The reviewer liked the movie (recommending it)"
      "Negative": "The reviewer disliked the movie (not recommending it)"
  # Step 2: Highlight rationales
  - annotation_type: span
    name: rationales
    description: "Highlight the text spans that JUSTIFY your classification (evidence)"
    labels:
      - "Rationale"
    label_colors:
      "Rationale": "#3b82f6"
    tooltips:
      "Rationale": "Spans that provide evidence for the classification. Should be sufficient (enough to justify the label) and necessary (removing them would change the label)"
    allow_overlapping: false
  # Step 3: Rationale sufficiency check
  - annotation_type: radio
    name: sufficiency
    description: "Looking ONLY at your highlighted rationales, would you make the same classification?"
    labels:
      - "Yes, definitely"
      - "Probably yes"
      - "Uncertain"
      - "Probably no"
    tooltips:
      "Yes, definitely": "The rationales alone are completely sufficient"
      "Probably yes": "The rationales are mostly sufficient"
      "Uncertain": "Hard to tell from rationales alone"
      "Probably no": "More context would be needed"
allow_all_users: true
instances_per_annotator: 100
annotation_per_instance: 2
allow_skip: true
skip_reason_required: false
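The SUFFICIENT and NECESSARY criteria described in the config comments correspond to the sufficiency and comprehensiveness metrics from the ERASER paper. A minimal sketch of how they are computed, assuming you have a classifier's probability for the predicted label on the full text, on the rationale alone, and on the text with the rationale deleted (the probabilities below are illustrative placeholders, not real model outputs):

```python
def sufficiency(prob_full, prob_rationale_only):
    """ERASER sufficiency: probability drop when the model sees ONLY the
    rationale. A value near 0 means the rationale alone supports the
    label (i.e., it is sufficient)."""
    return prob_full - prob_rationale_only

def comprehensiveness(prob_full, prob_without_rationale):
    """ERASER comprehensiveness: probability drop when the rationale is
    removed from the text. A large value means the rationale was needed
    for the prediction (i.e., it is necessary)."""
    return prob_full - prob_without_rationale

# Illustrative numbers from a hypothetical sentiment classifier:
print(round(sufficiency(0.97, 0.94), 2))        # small drop -> sufficient
print(round(comprehensiveness(0.97, 0.31), 2))  # large drop -> necessary
```

Annotator-highlighted rationales that score well on both metrics are the ones the guidelines above aim for: short spans that carry the evidence, whose removal flips the model's (or a reader's) decision.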
Sample Data: sample-data.json
[
  {
    "id": "rat_001",
    "text": "I went into this movie with low expectations, but I was completely blown away. The performances were incredible, especially the lead actor who delivered a career-defining role. The cinematography was breathtaking. Highly recommend!"
  },
  {
    "id": "rat_002",
    "text": "What a waste of two hours. The plot made no sense, the dialogue was cringeworthy, and I found myself checking my watch constantly. Save your money and skip this one."
  }
]
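Once annotations are collected, the span scheme records each highlighted rationale as character offsets into the instance text. A hedged post-processing sketch, assuming a simplified record shape (the exact JSON layout in the output file may differ by Potato version):

```python
# Hypothetical annotation record; the real output schema may differ.
record = {
    "id": "rat_002",
    "classification": "Negative",
    "rationales": [{"start": 0, "end": 26, "label": "Rationale"}],
}

text = ("What a waste of two hours. The plot made no sense, the dialogue "
        "was cringeworthy, and I found myself checking my watch constantly. "
        "Save your money and skip this one.")

# Recover the highlighted evidence spans from the character offsets
spans = [text[r["start"]:r["end"]] for r in record["rationales"]]
print(spans)  # ['What a waste of two hours.']
```

Pairing each extracted span with the `classification` value yields (text, rationale, label) triples in the shape ERASER-style evaluations expect.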
// ... and 6 more items
Get This Design
Clone or download from the repository
Quick start:
git clone https://github.com/davidjurgens/potato-showcase.git
cd potato-showcase/rationale-annotation
potato start config.yaml
Related Designs
Coreference Resolution (OntoNotes)
Link pronouns and noun phrases to the entities they refer to in text. Based on the OntoNotes coreference annotation guidelines and CoNLL shared tasks. Identify mention spans and cluster coreferent mentions together.
Dialogue Relation Extraction (DialogRE)
Extract relations between entities in dialogue. Based on Yu et al., ACL 2020. Identify 36 relation types between speakers and entities mentioned in conversations.
Emotion Cause Extraction (RECCON)
Extract emotion causes from conversational text based on RECCON (Poria et al., EMNLP 2020). Identify which utterances and specific spans caused an emotion expressed in dialogue.