LIP Human Parsing
Pixel-level human body part segmentation based on the LIP dataset (Gong et al., CVPR 2017). Label human images with 19 semantic part and clothing categories (20 classes including background), covering hair, face, arms, legs, and clothing items.
Configuration File: config.yaml
# LIP Human Parsing Configuration
# Based on Gong et al., CVPR 2017
annotation_task_name: "LIP Human Body Part Parsing"
data_files:
  - "sample-data.json"
item_properties:
  id_key: "id"
  text_key: "image_url"
  context_key: "context"
user_config:
  allow_all_users: true
annotation_schemes:
  - annotation_type: "multiselect"
    name: "body_parts"
    description: "Select all visible body parts"
    labels:
      - name: "hat"
        tooltip: "Hat or head covering"
      - name: "hair"
        tooltip: "Hair"
      - name: "face"
        tooltip: "Face"
      - name: "sunglasses"
        tooltip: "Sunglasses or glasses"
      - name: "upper_clothes"
        tooltip: "Upper body clothing"
      - name: "dress"
        tooltip: "Dress"
      - name: "coat"
        tooltip: "Coat or jacket"
      - name: "socks"
        tooltip: "Socks"
      - name: "pants"
        tooltip: "Pants"
      - name: "gloves"
        tooltip: "Gloves"
      - name: "scarf"
        tooltip: "Scarf"
      - name: "skirt"
        tooltip: "Skirt"
      - name: "jumpsuits"
        tooltip: "Jumpsuits"
      - name: "left_arm"
        tooltip: "Left arm"
      - name: "right_arm"
        tooltip: "Right arm"
      - name: "left_leg"
        tooltip: "Left leg"
      - name: "right_leg"
        tooltip: "Right leg"
      - name: "left_shoe"
        tooltip: "Left shoe"
      - name: "right_shoe"
        tooltip: "Right shoe"
  - annotation_type: "radio"
    name: "pose_type"
    description: "What is the primary pose?"
    labels:
      - name: "standing"
        tooltip: "Person is standing"
      - name: "sitting"
        tooltip: "Person is sitting"
      - name: "walking"
        tooltip: "Person is walking"
      - name: "other"
        tooltip: "Other pose"
  - annotation_type: "radio"
    name: "occlusion"
    description: "Are body parts occluded?"
    labels:
      - name: "no_occlusion"
        tooltip: "All visible parts are clear"
      - name: "partial_occlusion"
        tooltip: "Some parts are partially hidden"
      - name: "heavy_occlusion"
        tooltip: "Significant occlusion"
interface_config:
  item_display_format: "<img src='{{text}}' style='max-width:100%; max-height:500px;'/><br/><small>{{context}}</small>"
output_annotation_format: "json"
output_annotation_dir: "annotations"
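Before launching the task, it can help to sanity-check the configuration and list the schemes it defines. A minimal sketch, assuming PyYAML is installed; the script name check_config.py is hypothetical and the check is not part of Potato itself:

# check_config.py -- illustrative sanity check for config.yaml (not part of Potato)
import yaml  # PyYAML, assumed to be installed

with open("config.yaml") as f:
    config = yaml.safe_load(f)

# Print each annotation scheme and how many labels it offers.
for scheme in config["annotation_schemes"]:
    labels = scheme.get("labels", [])
    print(f"{scheme['annotation_type']}  {scheme['name']}: {len(labels)} labels")

With the configuration above, this should report 19 labels for body_parts, 4 for pose_type, and 3 for occlusion.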
Sample Data: sample-data.json
[
  {
    "id": "lip_001",
    "image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/c/c2/Aiga_toiletsq_men.svg/800px-Aiga_toiletsq_men.svg.png",
    "context": "Parse this human image into semantic body parts. Label all visible body parts and clothing items."
  },
  {
    "id": "lip_002",
    "image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/e/e5/Red_coat.jpg/800px-Red_coat.jpg",
    "context": "Segment body parts: face, hair, arms, legs, and clothing."
  }
]
// ... and 1 more items
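New items can be appended to the data file programmatically, as long as each record carries the keys named in item_properties. A rough sketch; the add_items helper and the example URL are hypothetical, not part of this design:

# build_items.py -- illustrative helper for extending sample-data.json (hypothetical)
import json

def add_items(path, image_urls, context, start_index):
    """Append records with the id/image_url/context keys this config expects."""
    with open(path) as f:
        items = json.load(f)
    for i, url in enumerate(image_urls, start=start_index):
        items.append({
            "id": f"lip_{i:03d}",
            "image_url": url,
            "context": context,
        })
    with open(path, "w") as f:
        json.dump(items, f, indent=2)

# Example call with a placeholder URL:
# add_items("sample-data.json", ["https://example.org/person.jpg"],
#           "Parse this human image into semantic body parts.", start_index=4)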
Get This Design
Clone or download from the repository
Quick start:
git clone https://github.com/davidjurgens/potato-showcase.git
cd potato-showcase/lip-human-parsing
potato start config.yaml
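Since every item is rendered through item_display_format, it is worth confirming that each data file contains the keys named in item_properties before starting the server. A small sketch, again outside Potato itself and assuming PyYAML; the script name check_data.py is hypothetical:

# check_data.py -- illustrative check that data files match item_properties
import json
import yaml  # PyYAML, assumed to be installed

with open("config.yaml") as f:
    config = yaml.safe_load(f)
props = config["item_properties"]
required = {props["id_key"], props["text_key"], props["context_key"]}

for data_file in config["data_files"]:
    with open(data_file) as f:
        items = json.load(f)
    for item in items:
        missing = required - item.keys()
        if missing:
            print(f"{data_file}: item {item.get(props['id_key'])} is missing {missing}")
print("check complete")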
Found an issue or want to improve this design?
Open an Issue
Related Designs
ADE20K Semantic Segmentation
Comprehensive scene parsing with 150 semantic categories (Zhou et al., CVPR 2017). Annotate indoor and outdoor scenes with pixel-level labels covering objects, parts, and stuff classes.
BDD100K Autonomous Driving Segmentation
Large-scale diverse driving video dataset (Yu et al., CVPR 2020). Annotate driving scenes with bounding boxes, lane markings, drivable areas, and full-frame instance segmentation.
Cityscapes Instance Segmentation
Urban scene understanding with instance-level semantic labeling (Cordts et al., CVPR 2016). Annotate street scenes with pixel-level labels for 30 classes across vehicles, humans, construction, and nature.