Likert Scale Survey
Multi-question survey using Likert scales to measure agreement, satisfaction, or frequency.
Configuration file: config.yaml
# Likert Scale Survey Configuration
# Multi-question survey for measuring attitudes and opinions
task_dir: "."
annotation_task_name: "User Experience Survey"

data_files:
  - "data/survey_items.json"

item_properties:
  id_key: "id"
  text_key: "product_description"
  text_display_key: "product_description"

user_config:
  allow_all_users: true

annotation_schemes:
  - annotation_type: "likert"
    name: "ease_of_use"
    description: "The product is easy to use"
    size: 5
    min_label: "Strongly Disagree"
    max_label: "Strongly Agree"

  - annotation_type: "likert"
    name: "meets_needs"
    description: "The product meets my needs"
    size: 5
    min_label: "Strongly Disagree"
    max_label: "Strongly Agree"

  - annotation_type: "likert"
    name: "would_recommend"
    description: "I would recommend this product to others"
    size: 5
    min_label: "Strongly Disagree"
    max_label: "Strongly Agree"

  - annotation_type: "likert"
    name: "value_for_money"
    description: "The product provides good value for money"
    size: 5
    min_label: "Strongly Disagree"
    max_label: "Strongly Agree"

  - annotation_type: "likert"
    name: "overall_satisfaction"
    description: "Overall, I am satisfied with the product"
    size: 7
    min_label: "Very Dissatisfied"
    max_label: "Very Satisfied"

  - annotation_type: "multiselect"
    name: "best_features"
    description: "Which features do you like most?"
    labels:
      - name: "Design"
      - name: "Performance"
      - name: "Reliability"
      - name: "Customer Support"
      - name: "Documentation"
      - name: "Price"

  - annotation_type: "text"
    name: "feedback"
    description: "Any additional feedback?"
    required: false

output_annotation_dir: "annotation_output/"
output_annotation_format: "json"
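Each Likert scheme above renders as one row of scale points between `min_label` and `max_label`. Before launching a survey, it can help to sanity-check the scheme definitions. The sketch below is a hypothetical helper, not part of Potato itself; it mirrors a couple of the `annotation_schemes` entries as plain dicts and checks the fields a Likert scheme needs:

```python
# Sanity-check Likert scheme definitions (hypothetical helper, not Potato API).
# Each dict mirrors an annotation_schemes entry from config.yaml.
schemes = [
    {"annotation_type": "likert", "name": "ease_of_use", "size": 5,
     "min_label": "Strongly Disagree", "max_label": "Strongly Agree"},
    {"annotation_type": "likert", "name": "overall_satisfaction", "size": 7,
     "min_label": "Very Dissatisfied", "max_label": "Very Satisfied"},
]

def check_scheme(s):
    """Return a list of problems found in a single Likert scheme dict."""
    problems = []
    if s.get("annotation_type") == "likert":
        # A Likert scale needs at least two points.
        if not isinstance(s.get("size"), int) or s["size"] < 2:
            problems.append(f"{s.get('name')}: size must be an int >= 2")
        # The endpoint labels and the scheme name must all be present.
        for key in ("name", "min_label", "max_label"):
            if not s.get(key):
                problems.append(f"{s.get('name')}: missing {key}")
    return problems

all_problems = [p for s in schemes for p in check_scheme(s)]
print("OK" if not all_problems else all_problems)  # → OK
```

Running this once before `potato start` catches typos like a missing endpoint label or a non-numeric `size`.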
Sample data: sample-data.json
[
  {
    "id": "survey_001",
    "product_description": "Please rate your experience with Potato Annotation Tool - a lightweight, configurable annotation platform for NLP research."
  },
  {
    "id": "survey_002",
    "product_description": "Please rate your experience with our new documentation system - an interactive guide for setting up annotation tasks."
  }
]
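The keys in each data item must match the `item_properties` section of the config: `id_key` is `"id"` and `text_key` is `"product_description"`. A small, hypothetical validation script (`validate_items` is not a Potato function) can confirm this before the task is served:

```python
# Validate that survey items carry the keys named in item_properties
# (hypothetical helper; id_key/text_key values come from config.yaml).
items = [
    {"id": "survey_001",
     "product_description": "Please rate your experience with Potato Annotation Tool."},
    {"id": "survey_002",
     "product_description": "Please rate your experience with our documentation system."},
]

def validate_items(items, id_key="id", text_key="product_description"):
    """Check required keys and unique ids; return the item count."""
    seen = set()
    for item in items:
        assert id_key in item and text_key in item, f"missing key in {item}"
        assert item[id_key] not in seen, f"duplicate id {item[id_key]}"
        seen.add(item[id_key])
    return len(items)

print(validate_items(items))  # → 2
```

Duplicate ids are worth checking explicitly, since each annotator's responses are keyed by item id.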
// ... and 1 more item

Get this template
Clone or download from the repository
Quick start:
git clone https://github.com/davidjurgens/potato-showcase.git
cd potato-showcase/templates/surveys/likert-scale-survey
potato start config.yaml
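With `output_annotation_format: "json"`, completed surveys land under `annotation_output/`. The exact output layout depends on your Potato version, so the sketch below assumes a simplified shape: one dict per annotator per item, mapping each Likert scheme name to the selected scale point. The `mean_ratings` helper is hypothetical, not part of Potato:

```python
from collections import defaultdict

# Hypothetical response records: one dict per annotator per item, mapping
# each Likert scheme name to the chosen scale point (1..size).
responses = [
    {"ease_of_use": 4, "meets_needs": 5, "overall_satisfaction": 6},
    {"ease_of_use": 3, "meets_needs": 4, "overall_satisfaction": 5},
]

def mean_ratings(responses):
    """Average the scale point chosen for each question across annotators."""
    scores = defaultdict(list)
    for record in responses:
        for question, score in record.items():
            scores[question].append(score)
    return {q: sum(v) / len(v) for q, v in scores.items()}

print(mean_ratings(responses))
# → {'ease_of_use': 3.5, 'meets_needs': 4.5, 'overall_satisfaction': 5.5}
```

Remember that the two scales differ in size (5-point agreement vs. 7-point satisfaction), so means are only comparable within a question, not across questions.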
Found a problem or want to improve this template? Open an issue.

Related templates
UltraFeedback Multi-Aspect Rating
Multi-aspect quality rating of AI model responses based on the UltraFeedback dataset (Cui et al., ICML 2024). Annotators rate responses on helpfulness, honesty, instruction following, and truthfulness, then provide a Likert agreement rating and overall feedback.
AnnoMI Counselling Dialogue Annotation
Annotation of motivational interviewing counselling dialogues based on the AnnoMI dataset. Annotators label therapist and client utterances for MI techniques (open questions, reflections, affirmations) and client change talk (sustain talk, change talk), with quality ratings for therapeutic interactions.
Argument Reasoning Comprehension (ARCT)
Identify implicit warrants in arguments. Based on Habernal et al., NAACL 2018 / SemEval 2018 Task 12. Given a claim and premise, choose the correct warrant that connects them.