
RefactorBench Multi-File Evaluation

Evaluate multi-file refactoring operations generated by coding agents. Annotators assess whether refactorings preserve behavior, identify the types of refactoring applied, rate code improvement, and provide detailed review comments.


Configuration File: config.yaml

# RefactorBench Multi-File Evaluation
# Based on "RefactorBench: Evaluating Stateful Reasoning in Language Agents Through Code" (Gautam et al., ICLR 2025)
# Task: Evaluate multi-file refactoring operations for correctness, type, and quality improvement

annotation_task_name: "RefactorBench Multi-File Evaluation"
task_dir: "."

data_files:
  - sample-data.json
item_properties:
  id_key: "id"
  text_key: "text"

output_annotation_dir: "annotation_output/"
output_annotation_format: "json"

html_layout: |
  <div class="container" style="font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif; max-width: 1500px; margin: 0 auto;">
    <div style="background: #24292f; color: #ffffff; padding: 10px 16px; border-radius: 6px 6px 0 0; display: flex; justify-content: space-between; align-items: center;">
      <span style="font-weight: 600;">{{repo_name}}</span>
      <span style="font-size: 12px; color: #8b949e;">Changed files: {{file_list}}</span>
    </div>
    <div style="border: 1px solid #d0d7de; border-radius: 6px; padding: 16px; margin: 8px 0; background: #f6f8fa;">
      <h3 style="margin-top: 0; color: #24292f;">Refactoring Task</h3>
      <div style="white-space: pre-wrap; font-size: 14px; line-height: 1.6; color: #1f2328;">{{text}}</div>
    </div>
    <div style="display: flex; gap: 12px; margin-top: 8px;">
      <div style="flex: 0 0 200px; border: 1px solid #d0d7de; border-radius: 6px; overflow: hidden;">
        <div style="background: #2d333b; color: #adbac7; padding: 8px 12px; font-weight: 600; font-size: 13px;">Changed Files</div>
        <div style="padding: 8px 12px; background: #22272e; font-family: monospace; font-size: 12px; line-height: 2;">
          {{file_list}}
        </div>
      </div>
      <div style="flex: 1; border: 1px solid #30363d; border-radius: 6px; overflow: hidden;">
        <div style="background: #161b22; color: #c9d1d9; padding: 8px 16px; font-weight: 600; font-size: 13px;">Multi-File Diff</div>
        <pre style="margin: 0; padding: 12px 16px; background: #0d1117; color: #c9d1d9; font-family: 'SFMono-Regular', Consolas, 'Liberation Mono', Menlo, monospace; font-size: 12px; line-height: 1.5; overflow-x: auto; white-space: pre;">{{file_diffs}}</pre>
      </div>
    </div>
    <div style="margin-top: 8px; border: 1px solid #30363d; border-radius: 6px; overflow: hidden;">
      <div style="background: #0d1117; color: #3fb950; padding: 8px 12px; font-weight: 600; font-size: 13px;">Test Results</div>
      <pre style="margin: 0; padding: 12px; background: #161b22; color: #c9d1d9; font-family: 'SFMono-Regular', Consolas, monospace; font-size: 12px; line-height: 1.5; white-space: pre;">{{test_results}}</pre>
    </div>
  </div>

annotation_schemes:
  - name: "refactoring_success"
    description: "Does the refactoring correctly preserve the original behavior?"
    annotation_type: radio
    labels:
      - "Correct — refactoring preserves behavior"
      - "Partially Correct — some files correct"
      - "Incorrect — breaks functionality"
      - "Regression — introduces new bugs"
    keyboard_shortcuts:
      "Correct — refactoring preserves behavior": "1"
      "Partially Correct — some files correct": "2"
      "Incorrect — breaks functionality": "3"
      "Regression — introduces new bugs": "4"

  - name: "refactoring_types"
    description: "Select all refactoring types applied in this change"
    annotation_type: multiselect
    labels:
      - "Extract Method"
      - "Rename Symbol"
      - "Move Class/Function"
      - "Inline Variable"
      - "Change Signature"
      - "Extract Interface"
      - "Simplify Conditional"
      - "Other"

  - name: "code_improvement"
    description: "Overall, did the refactoring improve the code?"
    annotation_type: likert
    min_label: "Much Worse"
    max_label: "Much Better"
    size: 5

  - name: "review_comments"
    description: "Code review comments referencing specific files and changes"
    annotation_type: text

allow_all_users: true
instances_per_annotator: 50
annotation_per_instance: 2
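
With `annotation_per_instance: 2`, each item receives two independent judgments, which makes an inter-annotator agreement check possible on the `refactoring_success` radio labels. A minimal Cohen's kappa sketch (illustrative analysis code, not part of Potato; the label values are examples):

```python
from collections import Counter

def cohens_kappa(a: list[str], b: list[str]) -> float:
    """Cohen's kappa for two annotators labeling the same items in order."""
    assert len(a) == len(b) and a, "need paired, non-empty label lists"
    n = len(a)
    # Observed agreement: fraction of items where both annotators match.
    p_o = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement under independent labeling, from marginal rates.
    ca, cb = Counter(a), Counter(b)
    p_e = sum((ca[lab] / n) * (cb[lab] / n) for lab in set(ca) | set(cb))
    return 1.0 if p_e == 1.0 else (p_o - p_e) / (1 - p_e)

ann1 = ["Correct", "Correct", "Incorrect", "Incorrect"]
ann2 = ["Correct", "Incorrect", "Incorrect", "Incorrect"]
print(cohens_kappa(ann1, ann2))  # 0.5
```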

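The `html_layout` above fills double-brace placeholders (`{{text}}`, `{{repo_name}}`, `{{file_diffs}}`, `{{file_list}}`, `{{test_results}}`) from each data item's fields. A minimal sketch of that substitution, assuming simple mustache-style replacement (Potato's actual template engine may behave differently):

```python
import re

def fill_template(layout: str, item: dict) -> str:
    """Replace each {{key}} placeholder with the item's value for that key.

    Unknown placeholders are replaced with an empty string rather than
    raising. This is an illustrative sketch, not Potato's rendering code.
    """
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(item.get(m.group(1), "")),
                  layout)

layout = '<span style="font-weight: 600;">{{repo_name}}</span>'
item = {"repo_name": "ecommerce/backend"}
print(fill_template(layout, item))
# <span style="font-weight: 600;">ecommerce/backend</span>
```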
Sample Data: sample-data.json

[
  {
    "id": "refactor-001",
    "text": "Extract the database connection logic from UserService and OrderService into a shared DatabasePool class. Both services currently create their own connections with duplicated configuration. The extracted class should manage connection pooling, health checks, and graceful shutdown.",
    "repo_name": "ecommerce/backend",
    "file_diffs": "diff --git a/src/services/user_service.py b/src/services/user_service.py\nindex a1b2c3d..e4f5a6b 100644\n--- a/src/services/user_service.py\n+++ b/src/services/user_service.py\n@@ -1,25 +1,12 @@\n-import psycopg2\n-from psycopg2.pool import ThreadedConnectionPool\n+from src.db.pool import DatabasePool\n \n class UserService:\n-    def __init__(self, db_url: str, min_conn: int = 2, max_conn: int = 10):\n-        self.pool = ThreadedConnectionPool(\n-            min_conn, max_conn, db_url\n-        )\n-\n-    def _get_conn(self):\n-        return self.pool.getconn()\n-\n-    def _put_conn(self, conn):\n-        self.pool.putconn(conn)\n+    def __init__(self, db_pool: DatabasePool):\n+        self.db = db_pool\n \n     def get_user(self, user_id: int):\n-        conn = self._get_conn()\n-        try:\n+        with self.db.connection() as conn:\n             cur = conn.cursor()\n             cur.execute(\"SELECT * FROM users WHERE id = %s\", (user_id,))\n             return cur.fetchone()\n-        finally:\n-            self._put_conn(conn)\n\ndiff --git a/src/services/order_service.py b/src/services/order_service.py\nindex b2c3d4e..f5a6b7c 100644\n--- a/src/services/order_service.py\n+++ b/src/services/order_service.py\n@@ -1,22 +1,12 @@\n-import psycopg2\n-from psycopg2.pool import ThreadedConnectionPool\n+from src.db.pool import DatabasePool\n \n class OrderService:\n-    def __init__(self, db_url: str, min_conn: int = 2, max_conn: int = 10):\n-        self.pool = ThreadedConnectionPool(\n-            min_conn, max_conn, db_url\n-        )\n+    def __init__(self, db_pool: DatabasePool):\n+        self.db = db_pool\n \n     def create_order(self, user_id: int, items: list):\n-        conn = self.pool.getconn()\n-        try:\n+        with self.db.connection() as conn:\n             cur = conn.cursor()\n             cur.execute(\n                 \"INSERT INTO orders (user_id) VALUES (%s) RETURNING id\",\n                 (user_id,)\n             )\n             order_id = cur.fetchone()[0]\n             for item in items:\n                 cur.execute(\n                     \"INSERT INTO order_items (order_id, product_id, qty) VALUES (%s, %s, %s)\",\n                     (order_id, item['product_id'], item['qty'])\n                 )\n             conn.commit()\n             return order_id\n-        finally:\n-            self.pool.putconn(conn)\n\ndiff --git a/src/db/__init__.py b/src/db/__init__.py\nnew file mode 100644\nindex 0000000..e69de29\n\ndiff --git a/src/db/pool.py b/src/db/pool.py\nnew file mode 100644\nindex 0000000..a1b2c3d\n--- /dev/null\n+++ b/src/db/pool.py\n@@ -0,0 +1,38 @@\n+from contextlib import contextmanager\n+from psycopg2.pool import ThreadedConnectionPool\n+\n+class DatabasePool:\n+    def __init__(self, db_url: str, min_conn: int = 2, max_conn: int = 10):\n+        self._pool = ThreadedConnectionPool(min_conn, max_conn, db_url)\n+        self._closed = False\n+\n+    @contextmanager\n+    def connection(self):\n+        conn = self._pool.getconn()\n+        try:\n+            yield conn\n+        finally:\n+            self._pool.putconn(conn)\n+\n+    def health_check(self) -> bool:\n+        try:\n+            with self.connection() as conn:\n+                cur = conn.cursor()\n+                cur.execute(\"SELECT 1\")\n+                return True\n+        except Exception:\n+            return False\n+\n+    def shutdown(self):\n+        if not self._closed:\n+            self._pool.closeall()\n+            self._closed = True",
    "file_list": "src/services/user_service.py, src/services/order_service.py, src/db/__init__.py, src/db/pool.py",
    "test_results": "$ python -m pytest tests/ -v\ntests/test_user_service.py::test_get_user PASSED\ntests/test_user_service.py::test_get_user_not_found PASSED\ntests/test_order_service.py::test_create_order PASSED\ntests/test_order_service.py::test_create_order_empty_items PASSED\ntests/test_db_pool.py::test_connection_context_manager PASSED\ntests/test_db_pool.py::test_health_check PASSED\ntests/test_db_pool.py::test_shutdown PASSED\n\n7 passed in 1.42s"
  },
  {
    "id": "refactor-002",
    "text": "Rename the 'process' method to 'transform' across the entire data pipeline codebase. Update all call sites, docstrings, tests, and type hints. The method exists in DataProcessor, BatchProcessor, and StreamProcessor classes.",
    "repo_name": "datapipeline/core",
    "file_diffs": "diff --git a/src/processor/base.py b/src/processor/base.py\nindex c3d4e5f..a7b8c9d 100644\n--- a/src/processor/base.py\n+++ b/src/processor/base.py\n@@ -8,12 +8,12 @@ class DataProcessor(ABC):\n     \"\"\"Base class for all data processors.\n \n-    Subclasses must implement the `process` method.\n+    Subclasses must implement the `transform` method.\n     \"\"\"\n \n     @abstractmethod\n-    def process(self, data: pd.DataFrame) -> pd.DataFrame:\n-        \"\"\"Process the input data and return transformed result.\n+    def transform(self, data: pd.DataFrame) -> pd.DataFrame:\n+        \"\"\"Transform the input data and return the result.\n \n         Args:\n             data: Input DataFrame to process.\n\ndiff --git a/src/processor/batch.py b/src/processor/batch.py\nindex d4e5f6a..b8c9d0e 100644\n--- a/src/processor/batch.py\n+++ b/src/processor/batch.py\n@@ -5,7 +5,7 @@ class BatchProcessor(DataProcessor):\n     def __init__(self, batch_size: int = 1000):\n         self.batch_size = batch_size\n \n-    def process(self, data: pd.DataFrame) -> pd.DataFrame:\n+    def transform(self, data: pd.DataFrame) -> pd.DataFrame:\n         results = []\n         for start in range(0, len(data), self.batch_size):\n             batch = data.iloc[start:start + self.batch_size]\n@@ -14,7 +14,7 @@ class BatchProcessor(DataProcessor):\n \n     def _process_batch(self, batch: pd.DataFrame) -> pd.DataFrame:\n-        \"\"\"Process a single batch. Override in subclasses.\"\"\"\n+        \"\"\"Transform a single batch. Override in subclasses.\"\"\"\n         return batch\n\ndiff --git a/src/processor/stream.py b/src/processor/stream.py\nindex e5f6a7b..c9d0e1f 100644\n--- a/src/processor/stream.py\n+++ b/src/processor/stream.py\n@@ -5,7 +5,7 @@ class StreamProcessor(DataProcessor):\n     def __init__(self, window_size: int = 100):\n         self.window_size = window_size\n \n-    def process(self, data: pd.DataFrame) -> pd.DataFrame:\n+    def transform(self, data: pd.DataFrame) -> pd.DataFrame:\n         # Process in sliding windows\n         for i in range(0, len(data), self.window_size):\n             window = data.iloc[i:i + self.window_size]\n\ndiff --git a/src/pipeline.py b/src/pipeline.py\nindex f6a7b8c..d0e1f2a 100644\n--- a/src/pipeline.py\n+++ b/src/pipeline.py\n@@ -15,7 +15,7 @@ class Pipeline:\n     def run(self, data: pd.DataFrame) -> pd.DataFrame:\n         result = data\n         for step in self.steps:\n-            result = step.process(result)\n+            result = step.transform(result)\n         return result\n\ndiff --git a/tests/test_processors.py b/tests/test_processors.py\nindex a7b8c9d..e1f2a3b 100644\n--- a/tests/test_processors.py\n+++ b/tests/test_processors.py\n@@ -12,14 +12,14 @@ class TestBatchProcessor:\n     def test_process_small_data(self):\n         proc = BatchProcessor(batch_size=10)\n         data = pd.DataFrame({'val': range(5)})\n-        result = proc.process(data)\n+        result = proc.transform(data)\n         assert len(result) == 5\n \n     def test_process_large_data(self):\n         proc = BatchProcessor(batch_size=100)\n         data = pd.DataFrame({'val': range(500)})\n-        result = proc.process(data)\n+        result = proc.transform(data)\n         assert len(result) == 500",
    "file_list": "src/processor/base.py, src/processor/batch.py, src/processor/stream.py, src/pipeline.py, tests/test_processors.py",
    "test_results": "$ python -m pytest tests/ -v\ntests/test_processors.py::TestBatchProcessor::test_process_small_data PASSED\ntests/test_processors.py::TestBatchProcessor::test_process_large_data PASSED\ntests/test_processors.py::TestStreamProcessor::test_window_processing PASSED\ntests/test_pipeline.py::test_pipeline_run PASSED\ntests/test_pipeline.py::test_pipeline_empty PASSED\n\n5 passed in 0.67s"
  }
]

// ... and 6 more items
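
Every item must carry each field the `html_layout` references, or the rendered panes come up empty. A quick pre-flight check (the required-key list comes from the config and layout above; loading the real file from `sample-data.json` is shown as a comment since the path depends on your checkout):

```python
import json

# id/text come from item_properties; the rest are {{placeholders}}
# referenced by the html_layout template.
REQUIRED_KEYS = {"id", "text", "repo_name", "file_diffs",
                 "file_list", "test_results"}

def missing_keys(items: list[dict]) -> dict[str, set[str]]:
    """Map each offending item's id to the set of required keys it lacks."""
    return {
        item.get("id", f"<item {i}>"): REQUIRED_KEYS - item.keys()
        for i, item in enumerate(items)
        if REQUIRED_KEYS - item.keys()
    }

# In practice: items = json.load(open("sample-data.json"))
items = [{"id": "refactor-001", "text": "...", "repo_name": "ecommerce/backend",
          "file_diffs": "...", "file_list": "...", "test_results": "..."}]
assert missing_keys(items) == {}, "sample data is missing layout fields"
```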

Get This Design

View on GitHub

Clone or download from the repository

Quick start:

git clone https://github.com/davidjurgens/potato-showcase.git
cd potato-showcase/agentic/refactorbench-multi-file-eval
potato start config.yaml

Details

Annotation Types

radio, multiselect, likert, text

Domain

Software Engineering, Code Refactoring

Use Cases

Refactoring Evaluation, Code Review

Tags

refactoring, multi-file, code-quality, benchmark, agentic-coding

Found an issue or want to improve this design?

Open an Issue