
ArgSciChat Scientific Argumentation Dialogue

Annotation of argumentative dialogues about scientific papers based on the ArgSciChat dataset. Annotators label dialogue turns for argument components (claim, evidence, rebuttal) and assess argument quality dimensions such as clarity, relevance, and persuasiveness.


Configuration File: config.yaml

# ArgSciChat Scientific Argumentation Dialogue
# Based on Ruggeri et al., ACL 2023
# Paper: https://aclanthology.org/2023.acl-long.414/
# Dataset: https://github.com/UKPLab/ArgSciChat
#
# Task: Annotate argumentative dialogues about scientific papers.
# Label each dialogue turn for argument components and assess
# argument quality dimensions.
#
# Argument Components:
# - Claim: A statement the speaker wants the listener to accept
# - Evidence: Data, findings, or references supporting a claim
# - Rebuttal: A counter-argument challenging a previous claim or evidence
# - Clarification: Requesting or providing clarification on an argument
# - Concession: Acknowledging the validity of a counterpoint
#
# Guidelines:
# - Consider the scientific context when labeling argument components
# - A single turn may contain multiple argument components
# - Evidence should be grounded in the paper or external references
# - Assess quality based on how well arguments are constructed, not agreement

annotation_task_name: "ArgSciChat Scientific Argumentation Annotation"
task_dir: "."

data_files:
  - sample-data.json
item_properties:
  id_key: "id"
  text_key: "text"

output_annotation_dir: "annotation_output/"
output_annotation_format: "json"

annotation_schemes:
  # Argument component type
  - annotation_type: radio
    name: argument_component
    description: "Select the PRIMARY argument component type for this dialogue turn"
    labels:
      - "Claim"
      - "Evidence"
      - "Rebuttal"
      - "Clarification Request"
      - "Clarification Response"
      - "Concession"
      - "Question"
      - "Agreement"
      - "Meta-Discussion"
      - "Other"
    keyboard_shortcuts:
      "Claim": "c"
      "Evidence": "e"
      "Rebuttal": "r"
      "Clarification Request": "q"
      "Clarification Response": "a"
      "Concession": "o"
      "Question": "u"
      "Agreement": "g"
    tooltips:
      "Claim": "A statement or assertion the speaker wants the listener to accept (e.g., 'This method outperforms the baseline')"
      "Evidence": "Data, experimental results, citations, or facts supporting or undermining a claim"
      "Rebuttal": "A counter-argument that challenges a previous claim, evidence, or reasoning"
      "Clarification Request": "Asking for more details, definitions, or explanation of an argument"
      "Clarification Response": "Providing additional details or explanation in response to a question"
      "Concession": "Acknowledging the validity of a counterpoint while potentially maintaining one's position"
      "Question": "A general question about the paper or topic (not specifically requesting clarification)"
      "Agreement": "Expressing agreement with a previous claim or argument"
      "Meta-Discussion": "Discussion about the dialogue process itself, not the scientific content"
      "Other": "Turns that do not fit the above categories (e.g., greetings, tangents)"

  # Argument quality dimensions (multiselect)
  - annotation_type: multiselect
    name: quality_dimensions
    description: "Select all quality dimensions that are WELL demonstrated in this argument turn"
    labels:
      - "Clarity"
      - "Relevance"
      - "Sufficiency"
      - "Logical Soundness"
      - "Specificity"
      - "Novelty"
      - "Grounded in Paper"
      - "Acknowledges Limitations"
      - "None Notable"
    tooltips:
      "Clarity": "The argument is clearly articulated and easy to understand"
      "Relevance": "The argument is directly relevant to the paper or topic under discussion"
      "Sufficiency": "Adequate evidence or reasoning is provided to support the claim"
      "Logical Soundness": "The reasoning follows logically without fallacies"
      "Specificity": "The argument references specific methods, results, or sections of the paper"
      "Novelty": "The argument raises a new point or perspective not previously discussed"
      "Grounded in Paper": "The argument is directly supported by content from the paper"
      "Acknowledges Limitations": "The speaker acknowledges limitations or caveats in their argument"
      "None Notable": "No quality dimensions are particularly well demonstrated"

  # Persuasiveness rating
  - annotation_type: radio
    name: persuasiveness
    description: "How persuasive is this argument turn?"
    labels:
      - "Very Persuasive"
      - "Somewhat Persuasive"
      - "Neutral"
      - "Somewhat Unpersuasive"
      - "Not Persuasive"
      - "Not Applicable"
    keyboard_shortcuts:
      "Very Persuasive": "1"
      "Somewhat Persuasive": "2"
      "Neutral": "3"
      "Somewhat Unpersuasive": "4"
      "Not Persuasive": "5"
      "Not Applicable": "6"
    tooltips:
      "Very Persuasive": "Strong, well-supported argument that would likely convince most readers"
      "Somewhat Persuasive": "Reasonable argument with some support, moderately convincing"
      "Neutral": "Neither particularly persuasive nor unpersuasive"
      "Somewhat Unpersuasive": "Weak argument with limited support or questionable reasoning"
      "Not Persuasive": "Poorly constructed argument, unsupported, or logically flawed"
      "Not Applicable": "The turn is not argumentative (e.g., a question or greeting)"

allow_all_users: true
instances_per_annotator: 150
annotation_per_instance: 3
allow_skip: true
skip_reason_required: false

Sample Data: sample-data.json

[
  {
    "id": "argscichat_001",
    "text": "I think the main contribution of this paper is the novel attention mechanism they propose. They claim it reduces computational complexity from quadratic to linear while maintaining comparable performance on machine translation benchmarks.",
    "speaker": "Reviewer A",
    "paper_title": "Efficient Attention: Attention with Linear Complexities"
  },
  {
    "id": "argscichat_002",
    "text": "That's a strong claim, but if you look at Table 3, the BLEU scores on WMT'14 En-De are actually 1.2 points below the standard Transformer. That's not exactly 'comparable performance' in my view.",
    "speaker": "Reviewer B",
    "paper_title": "Efficient Attention: Attention with Linear Complexities"
  }
]

// ... and 8 more items
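Before launching the task, it is worth checking that every data item carries the keys named under `item_properties` in config.yaml. A minimal validation sketch (the inlined items are abbreviated from sample-data.json; any error-message wording is illustrative):

```python
import json

ID_KEY, TEXT_KEY = "id", "text"  # must match item_properties in config.yaml

def validate_items(items):
    """Return a list of problems; an empty list means the items look loadable."""
    errors = []
    seen_ids = set()
    for i, item in enumerate(items):
        # Every item needs the configured id and text keys
        for key in (ID_KEY, TEXT_KEY):
            if key not in item:
                errors.append(f"item {i}: missing '{key}'")
        # Ids must be unique across the file
        item_id = item.get(ID_KEY)
        if item_id in seen_ids:
            errors.append(f"item {i}: duplicate id {item_id!r}")
        seen_ids.add(item_id)
    return errors

# Two items from sample-data.json, texts abbreviated for illustration
items = json.loads('[{"id": "argscichat_001", "text": "..."}, '
                   '{"id": "argscichat_002", "text": "..."}]')
print(validate_items(items))  # -> []
```

Extra per-item fields such as `speaker` and `paper_title` are untouched by this check; only the keys the config actually reads are enforced.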

Get This Design

View on GitHub

Clone or download from the repository

Quick start:

git clone https://github.com/davidjurgens/potato-showcase.git
cd potato-showcase/text/dialogue/argscichat-scientific-argumentation
potato start config.yaml
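Because `annotation_per_instance: 3` collects three labels per item, downstream analysis usually starts with an aggregation step. A minimal majority-vote sketch, assuming each item's `argument_component` labels have been gathered into a plain list (Potato's actual output layout under `annotation_output/` may differ):

```python
from collections import Counter

def majority_label(labels):
    """Return the most common label, or None on a tie needing adjudication."""
    counts = Counter(labels).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # no clear majority; route to an adjudicator
    return counts[0][0]

# Hypothetical labels for one dialogue turn from three annotators
votes = ["Claim", "Claim", "Evidence"]
print(majority_label(votes))  # -> Claim
```

For the ordinal `persuasiveness` scale, a median (after mapping labels to the 1-5 shortcut values) may be a better aggregate than a plain majority, since it respects the ordering of the scale.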

Details

Annotation Types

multiselect, radio

Domain

NLP, Scientific Text, Argumentation

Use Cases

Argument Mining, Scientific Dialogue Analysis, Argumentation Quality

Tags

argumentation, scientific-dialogue, claims, evidence, rebuttal, persuasiveness

Found an issue or want to improve this design?

Open an Issue