
@KRRT7 KRRT7 commented Jan 21, 2026

📄 144% (1.44x) speedup for standardize_quotes in unstructured/metrics/text_extraction.py

⏱️ Runtime : 128 microseconds → 52.2 microseconds (best of 170 runs)

📝 Explanation and details

The optimized code achieves a 144% speedup by replacing a loop-based character replacement approach with Python's built-in str.translate() method using a pre-computed translation table.

Key Optimizations

1. Pre-computed Translation Table at Module Load

  • The quote dictionaries and translation table are now created once at module import time (module-level constants prefixed with _)
  • Original code recreated these 40+ entry dictionaries on every function call (6.1% + 6.5% = 12.6% of runtime just for dictionary creation)
  • Translation table maps Unicode codepoints directly to ASCII quote codepoints, eliminating repeated string operations (see the sketch after this list)
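
A minimal sketch of this pre-computed table approach, assuming illustrative constant names and only a handful of the ~40 mappings; it is not the verbatim table in text_extraction.py.

```python
# Illustrative sketch only: the real module builds a larger (~40 entry) table,
# and the constant name here is made up for the example.
_QUOTE_TRANSLATION_TABLE = str.maketrans({
    "\u2018": "'",  # ‘ left single quotation mark
    "\u2019": "'",  # ’ right single quotation mark
    "\u201c": '"',  # “ left double quotation mark
    "\u201d": '"',  # ” right double quotation mark
    "\u00ab": '"',  # « left-pointing double angle quotation mark
    "\u00bb": '"',  # » right-pointing double angle quotation mark
    # ... remaining quote variants omitted for brevity
})


def standardize_quotes(text: str) -> str:
    # One C-level pass; characters without a table entry pass through unchanged.
    return text.translate(_QUOTE_TRANSLATION_TABLE)
```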

2. Single-Pass O(n) Algorithm with str.translate()

  • Original: Two loops iterating through ~40 quote types, calling unicode_to_char() 3,096 times (67.5% of total runtime) and performing substring searches with the in operator (5.9% of runtime); a rough reconstruction follows this list
  • Optimized: Single str.translate() call that processes the entire string in one pass using efficient C-level implementation
  • Eliminates 3,096 function calls to unicode_to_char() and all associated string parsing/conversion overhead
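
For contrast, a rough reconstruction of the loop-based pattern being replaced; the dictionary contents, the unicode_to_char implementation, and the exact structure are assumptions based on the description above, not the original source.

```python
# Rough reconstruction for illustration; names, dictionary contents, and the
# unicode_to_char implementation are assumed, not copied from the original.
def unicode_to_char(escape: str) -> str:
    # Assumed helper: turn an escape string like "\\u2018" into its character.
    return escape.encode().decode("unicode_escape")


def standardize_quotes_loop(text: str) -> str:
    # Dictionaries are rebuilt on every call (the 12.6% overhead noted above).
    double_quotes = {"\\u00ab": '"', "\\u201c": '"', "\\u201d": '"'}  # ~20 entries
    single_quotes = {"\\u2018": "'", "\\u2019": "'"}                  # ~20 entries
    for escape, replacement in double_quotes.items():
        char = unicode_to_char(escape)              # per-call conversion
        if char in text:                            # substring scan per quote type
            text = text.replace(char, replacement)  # new string on each hit
    for escape, replacement in single_quotes.items():
        char = unicode_to_char(escape)
        if char in text:
            text = text.replace(char, replacement)
    return text
```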

3. Algorithmic Complexity Improvement

  • Original: O(n × m) where n = text length, m = number of quote types (~40), with repeated text.replace() creating new string objects
  • Optimized: O(n) single pass through the text, with translation table lookups being O(1) (a quick timing sketch follows this list)
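
A quick, informal way to observe the difference, reusing the two sketches above; the numbers it prints will vary by machine and are not the benchmark figures quoted in this report.

```python
import timeit

# Build a sample with plenty of quote variants so both paths do real work.
sample = "\u201cquoted\u201d text with \u2018mixed\u2019 quotes and \u00abguillemets\u00bb " * 50

loop_time = timeit.timeit(lambda: standardize_quotes_loop(sample), number=1_000)
table_time = timeit.timeit(lambda: standardize_quotes(sample), number=1_000)
print(f"loop-based: {loop_time:.4f}s   str.translate: {table_time:.4f}s")
```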

Performance Context

Based on the function references, this function is called from calculate_edit_distance(), which is likely in a hot path for text extraction metrics. The function processes strings before edit distance calculations, meaning:

  • Any text comparison workflow will call this repeatedly
  • The 144% speedup compounds when processing multiple documents or performing batch comparisons (illustrated in the sketch after this list)
  • Reduced memory allocation pressure from eliminating repeated dictionary creation and intermediate string objects
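
A hypothetical batch-comparison loop showing why the per-call saving compounds; the two-argument call shape for calculate_edit_distance below is assumed from its name and has not been checked against the unstructured source.

```python
# Hypothetical usage: every document pair goes through calculate_edit_distance,
# which (per the function references above) standardizes quotes internally.
from unstructured.metrics.text_extraction import calculate_edit_distance


def score_batch(extracted_texts, source_texts):
    # Quote standardization cost is paid once per pair, so it scales with
    # batch size.
    return [
        calculate_edit_distance(output, source)
        for output, source in zip(extracted_texts, source_texts)
    ]
```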

Test Case Insights

The test with input "«'" (containing both double and single quote variants) shows the optimization handles mixed quote types efficiently in a single pass, whereas the original code would iterate through all 40 quote types regardless of actual presence in the text.
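
As a concrete check, assuming the concolic input is the left guillemet (U+00AB) followed by the right single quotation mark (U+2019), and that both map to their ASCII counterparts as described above:

```python
# « and ’ collapse to ASCII quotes in one translate() pass (assumed mapping).
assert standardize_quotes("\u00ab\u2019") == "\"'"
```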

Correctness verification report:

| Test | Status |
| --- | --- |
| ⚙️ Existing Unit Tests | 24 Passed |
| 🌀 Generated Regression Tests | 🔘 None Found |
| ⏪ Replay Tests | 🔘 None Found |
| 🔎 Concolic Coverage Tests | 1 Passed |
| 📊 Tests Coverage | |

⚙️ Existing Unit Tests

| Test File::Test Function | Original ⏱️ | Optimized ⏱️ | Speedup |
| --- | --- | --- | --- |
| metrics/test_text_extraction.py::test_standardize_quotes | 117μs | 51.6μs | 128% ✅ |

🔎 Concolic Coverage Tests

| Test File::Test Function | Original ⏱️ | Optimized ⏱️ | Speedup |
| --- | --- | --- | --- |
| codeflash_concolic_qdmvy_uv/tmpooe6tmfm/test_concolic_coverage.py::test_standardize_quotes | 9.96μs | 625ns | 1493% ✅ |

To edit these changes, run `git checkout codeflash/optimize-standardize_quotes-mklcp188` and push.

Codeflash

codeflash-ai bot and others added 5 commits January 19, 2026 15:59