
Parquet Export

Export annotations to Apache Parquet format for efficient large-scale data processing.


New in v2.3.0

Apache Parquet is a columnar storage format optimized for analytical workloads. It offers significant advantages over JSON and CSV for large annotation datasets: smaller file sizes (typically 5-10x compression), faster reads for column-subset queries, and native support in virtually every data science tool (pandas, DuckDB, PyArrow, Spark, Polars, Hugging Face Datasets).

Potato can export annotations directly to Parquet format, producing three structured files that cover all annotation types.

Enabling Parquet Export

As Primary Output Format

yaml
output_annotation_dir: "output/"
output_annotation_format: "parquet"

As Secondary Export (Keep JSON Primary)

yaml
output_annotation_dir: "output/"
output_annotation_format: "jsonl"
 
parquet_export:
  enabled: true
  output_dir: "output/parquet/"
  auto_export: true              # export after each annotation session

On-Demand via CLI

bash
python -m potato.export parquet --config config.yaml --output ./parquet_output/

Output Files

Parquet export produces three files, each representing a different level of the annotation data.

1. annotations.parquet

The primary output file. One row per (instance, annotator, schema) combination.

| Column | Type | Description |
|---|---|---|
| instance_id | string | Instance identifier |
| annotator | string | Annotator username |
| schema_name | string | Annotation schema name |
| value | string | Annotation value (JSON-encoded for complex types) |
| timestamp | timestamp | When the annotation was created |
| duration_ms | int64 | Time spent on this instance (milliseconds) |
| session_id | string | Annotation session identifier |

For simple annotation types (radio, likert, text), value contains the raw value. For complex types (multiselect, spans, events), value contains a JSON string.
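
Because complex values are stored as JSON strings, they need one decoding step before analysis. A minimal pandas sketch, assuming the default output path; the multiselect schema name "topics" is hypothetical and only used for illustration:

python
import json
import pandas as pd

annotations = pd.read_parquet("output/parquet/annotations.parquet")

# Simple types (radio, likert, text): the raw value can be used directly.
sentiment = annotations[annotations["schema_name"] == "sentiment"]

# Complex types (multiselect, spans, events): decode the JSON string first.
# "topics" is a hypothetical multiselect schema name.
topics = annotations[annotations["schema_name"] == "topics"].copy()
topics["value"] = topics["value"].apply(json.loads)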

2. spans.parquet

For span-based annotation types (span, span_link, event_annotation, coreference). One row per annotated span.

| Column | Type | Description |
|---|---|---|
| instance_id | string | Instance identifier |
| annotator | string | Annotator username |
| schema_name | string | Annotation schema name |
| span_id | string | Unique span identifier |
| text | string | Span text content |
| start_offset | int32 | Character start offset |
| end_offset | int32 | Character end offset |
| label | string | Span label |
| field | string | Source field (for multi-field span annotation) |
| links | string | JSON-encoded link data (for span_link) |
| attributes | string | JSON-encoded additional attributes |
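
The links and attributes columns are likewise JSON strings, so they decode the same way as complex values in annotations.parquet. A small sketch, assuming rows without link data are empty or null:

python
import json
import pandas as pd

spans = pd.read_parquet("output/parquet/spans.parquet")

# Decode the JSON-encoded columns; leave missing or empty values as None.
for col in ["links", "attributes"]:
    spans[col] = spans[col].apply(
        lambda v: json.loads(v) if isinstance(v, str) and v else None
    )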

3. items.parquet

Metadata about each instance in the dataset. One row per instance.

| Column | Type | Description |
|---|---|---|
| instance_id | string | Instance identifier |
| text | string | Primary text content |
| annotation_count | int32 | Number of annotations received |
| annotators | string | JSON list of annotator usernames |
| status | string | Instance status (pending, in_progress, complete) |
| metadata | string | JSON-encoded instance metadata |
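
Because status and annotation_count are plain columns, basic progress statistics are one-liners in pandas. A sketch assuming the default output path; the per-instance target of 3 annotations is a made-up example:

python
import pandas as pd

items = pd.read_parquet("output/parquet/items.parquet")

# Instances by status (pending, in_progress, complete)
print(items["status"].value_counts())

# Fraction of instances that have reached a hypothetical target of 3 annotations
print((items["annotation_count"] >= 3).mean())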

Compression Options

yaml
parquet_export:
  enabled: true
  output_dir: "output/parquet/"
 
  compression: snappy            # snappy (default), gzip, zstd, lz4, brotli, none
  row_group_size: 50000          # rows per row group (affects read performance)
  use_dictionary: true           # dictionary encoding for string columns
  write_statistics: true         # column statistics for query optimization

Compression Comparison

| Algorithm | Compression Ratio | Write Speed | Read Speed | Best For |
|---|---|---|---|---|
| snappy | Moderate | Fast | Fast | General use (default) |
| gzip | High | Slow | Moderate | Archival, small files |
| zstd | High | Fast | Fast | Best balance of size and speed |
| lz4 | Low | Very Fast | Very Fast | Speed-critical workloads |
| brotli | Very High | Very Slow | Moderate | Maximum compression |
| none | None | Fastest | Fastest | Debugging |

For most annotation projects, the default snappy compression is a good choice. For large datasets where file size matters, use zstd.
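
If you want to verify which codec an exported file actually uses, PyArrow exposes it per column chunk in the file metadata. A quick check, assuming the default output path:

python
import pyarrow.parquet as pq

meta = pq.ParquetFile("output/parquet/annotations.parquet").metadata
# Compression is recorded per column chunk; the first column of the first
# row group is enough to see which codec was used.
print(meta.row_group(0).column(0).compression)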

Loading Parquet Data

pandas

python
import pandas as pd
 
annotations = pd.read_parquet("output/parquet/annotations.parquet")
spans = pd.read_parquet("output/parquet/spans.parquet")
items = pd.read_parquet("output/parquet/items.parquet")
 
# Filter to a specific schema
sentiment = annotations[annotations["schema_name"] == "sentiment"]
 
# Compute inter-annotator agreement (Cohen's kappa assumes exactly two annotators)
from sklearn.metrics import cohen_kappa_score
pivot = sentiment.pivot(index="instance_id", columns="annotator", values="value")
pivot = pivot.dropna()  # keep only instances labeled by both annotators
kappa = cohen_kappa_score(pivot.iloc[:, 0], pivot.iloc[:, 1])

DuckDB

sql
-- Direct query without loading into memory
SELECT instance_id, value, COUNT(*) as annotator_count
FROM 'output/parquet/annotations.parquet'
WHERE schema_name = 'sentiment'
GROUP BY instance_id, value
ORDER BY annotator_count DESC;
 
-- Join annotations with items
SELECT a.instance_id, i.text, a.value, a.annotator
FROM 'output/parquet/annotations.parquet' a
JOIN 'output/parquet/items.parquet' i
  ON a.instance_id = i.instance_id
WHERE a.schema_name = 'sentiment';

PyArrow

python
import pyarrow.parquet as pq
 
# Read specific columns only (fast for wide tables)
table = pq.read_table(
    "output/parquet/annotations.parquet",
    columns=["instance_id", "value", "annotator"]
)
 
# Convert to pandas
df = table.to_pandas()
 
# Read with row group filtering
parquet_file = pq.ParquetFile("output/parquet/annotations.parquet")
print(f"Row groups: {parquet_file.metadata.num_row_groups}")
print(f"Total rows: {parquet_file.metadata.num_rows}")

Hugging Face Datasets

python
from datasets import load_dataset
 
# Load directly from Parquet files
dataset = load_dataset("parquet", data_files={
    "annotations": "output/parquet/annotations.parquet",
    "spans": "output/parquet/spans.parquet",
    "items": "output/parquet/items.parquet",
})
 
# Access as a regular HF dataset
print(dataset["annotations"][0])
 
# Push to Hugging Face Hub
dataset["annotations"].push_to_hub("my-org/my-annotations", split="train")

Polars

python
import polars as pl
 
annotations = pl.read_parquet("output/parquet/annotations.parquet")
 
# Fast aggregation
label_counts = (
    annotations
    .filter(pl.col("schema_name") == "sentiment")
    .group_by("value")
    .agg(pl.count().alias("count"))
    .sort("count", descending=True)
)
print(label_counts)

Incremental Export

For long-running annotation projects, enable incremental export to avoid re-exporting the entire dataset each time:

yaml
parquet_export:
  enabled: true
  output_dir: "output/parquet/"
  incremental: true
  partition_by: date             # date, annotator, or none

With partition_by: date, Parquet files are organized into date-partitioned directories:

text
output/parquet/
  annotations/
    date=2026-03-01/part-0.parquet
    date=2026-03-02/part-0.parquet
    date=2026-03-03/part-0.parquet
  spans/
    date=2026-03-01/part-0.parquet
  items/
    part-0.parquet

Partitioned datasets can be read as a single logical table by all major tools:

python
import pandas as pd

# pandas (with the pyarrow engine) reads partitioned directories automatically
# and surfaces the partition key ("date") as a column
df = pd.read_parquet("output/parquet/annotations/")

# DuckDB handles partitions natively:
# SELECT * FROM 'output/parquet/annotations/**/*.parquet'
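
For larger partitioned exports, pyarrow.dataset can also push a partition filter down so only the matching date directories are scanned. A sketch assuming the hive-style layout shown above:

python
import pyarrow as pa
import pyarrow.dataset as ds

# Declare the hive-style partition column explicitly so "date" stays a string.
part = ds.partitioning(pa.schema([("date", pa.string())]), flavor="hive")
dataset = ds.dataset("output/parquet/annotations/", format="parquet", partitioning=part)

# Only files under date=2026-03-02/ are read for this filter.
table = dataset.to_table(filter=ds.field("date") == "2026-03-02")
print(table.num_rows)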

Configuration Reference

yaml
parquet_export:
  enabled: true
  output_dir: "output/parquet/"
 
  # When to export
  auto_export: true              # export after each session (default: false)
  export_on_shutdown: true       # export when server stops (default: true)
 
  # File settings
  compression: snappy
  row_group_size: 50000
  use_dictionary: true
  write_statistics: true
 
  # Incremental settings
  incremental: false
  partition_by: none             # none, date, annotator
 
  # Schema-specific options
  flatten_complex_types: false   # flatten JSON values into columns
  include_raw_json: true         # include raw JSON alongside flattened columns
 
  # Span export
  export_spans: true             # generate spans.parquet
  export_items: true             # generate items.parquet

Full Example

yaml
task_name: "NER Annotation Project"
task_dir: "."
 
data_files:
  - "data/documents.jsonl"
 
item_properties:
  id_key: doc_id
  text_key: text
 
annotation_schemes:
  - annotation_type: span
    name: entities
    labels:
      - name: PERSON
        color: "#3b82f6"
      - name: ORGANIZATION
        color: "#22c55e"
      - name: LOCATION
        color: "#f59e0b"
 
output_annotation_dir: "output/"
output_annotation_format: "jsonl"
 
parquet_export:
  enabled: true
  output_dir: "output/parquet/"
  compression: zstd
  auto_export: true
  export_spans: true
  export_items: true

After annotation, load and analyze:

python
import pandas as pd
 
spans = pd.read_parquet("output/parquet/spans.parquet")
 
# Entity type distribution
print(spans["label"].value_counts())
 
# Average span length by type
spans["length"] = spans["end_offset"] - spans["start_offset"]
print(spans.groupby("label")["length"].mean())

Further Reading

For implementation details, see the source documentation.