
feat: Cleanup HTML page to reduce token usage #1073

Open

mguella wants to merge 2 commits into ItzCrazyKns:master from mguella:feature/cleanup-html-page-to-reduce-token-usage

Conversation


@mguella mguella commented Mar 20, 2026

Issue

As reported in #1031, some pages are consuming too many tokens.

Cause

This happens because the page is converted directly to Markdown after being fetched, so the result contains data we don't really care about (e.g. comments, styles, scripts).

Solution

There is another PR, #1035, that tries to reduce the number of tokens sent to the LLM by truncating the HTML content.
However, that approach risks deleting data we care about, because it truncates at a fixed point. HTML pages tend to put style and script tags before the body content, so truncating at a fixed offset could leave the resulting text with only scripts and styles and none of the main page content.

This PR takes a different approach: clean up the HTML by removing things the LLM doesn't need, such as comments, script tags, and style tags, so we can limit token usage.
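The cleanup idea can be sketched in a few lines. The PR itself uses jsdom inside scrapeURL.ts for robust parsing; the regex-based function below (`cleanupHtml` is a hypothetical name, not from the PR) only illustrates what gets stripped before the Markdown conversion.

```typescript
// Minimal sketch of the cleanup step, assuming a regex-based approach
// (the actual PR parses the page with jsdom instead).
function cleanupHtml(html: string): string {
  return html
    // remove HTML comments, non-greedy and across newlines
    .replace(/<!--[\s\S]*?-->/g, "")
    // remove script, style, and template tags together with their contents
    .replace(/<(script|style|template)\b[\s\S]*?<\/\1>/gi, "")
    // collapse runs of whitespace into a single space
    // (note: this would also flatten <pre> blocks, a known trade-off)
    .replace(/\s+/g, " ")
    .trim();
}
```

Everything removed here is invisible to the reader of the rendered page, so the Markdown produced afterwards keeps the same visible content at a fraction of the token count.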

Next steps

If the approach in this PR is not enough, we could parse the page with Mozilla's Readability.js to keep only the main page content.
If that is still not enough, we can combine both approaches above (HTML cleanup + Readability.js) with the truncation approach from #1035.
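The combined fallback described above could look roughly like this: clean first, so scripts, styles, and comments never count against the budget, and truncate only if the cleaned text is still too large. Function names and the character budget are illustrative assumptions, not code from either PR.

```typescript
// Hypothetical combination of the two strategies discussed in this PR
// and #1035: cleanup first, truncation only as a last resort.
function stripNoise(html: string): string {
  return html
    .replace(/<!--[\s\S]*?-->/g, "")
    .replace(/<(script|style|template)\b[\s\S]*?<\/\1>/gi, "")
    .replace(/\s+/g, " ")
    .trim();
}

function cleanThenTruncate(html: string, maxChars = 8000): string {
  const cleaned = stripNoise(html);
  // Truncate only when cleanup alone was not enough, so the cut point
  // falls inside real content far less often than with raw HTML.
  return cleaned.length <= maxChars ? cleaned : cleaned.slice(0, maxChars);
}
```

Ordering matters here: truncating before cleanup would spend the budget on the scripts and styles that typically precede the body.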


Summary by cubic

Clean up HTML pages before converting to Markdown to lower token usage while keeping the main content. We strip comments, scripts, styles, templates, and extra whitespace when the response is HTML.

  • New Features

    • Clean HTML in scrapeURL.ts using jsdom when Content-Type is text/html.
    • Strip comments and excess whitespace.
    • Remove script, style, and template tags before Markdown conversion.
  • Bug Fixes

    • Fixed the comment-removal regex to reliably strip HTML comments.

Written for commit 5be0f5b.


@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 3 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="src/lib/agents/search/researcher/actions/scrapeURL.ts">

<violation number="1" location="src/lib/agents/search/researcher/actions/scrapeURL.ts:53">
P2: Comment-stripping regex only matches whitespace/dot comments, so most HTML comments remain and the cleanup fails to remove common comments.</violation>
</file>

Since this is your first cubic review, here's how it works:

  • cubic automatically reviews your code and comments on bugs and improvements
  • Teach cubic by replying to its comments. cubic learns from your replies and gets better over time
  • Add one-off context when rerunning by tagging @cubic-dev-ai with guidance or docs links (including llms.txt)
  • Ask questions if you need clarification on any suggestion

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

@mguella mguella changed the title Cleanup HTML page to reduce token usage feat: Cleanup HTML page to reduce token usage Mar 20, 2026