
Conversation

@jeremio (Contributor) commented Jul 3, 2025

Description

  • Moves the translation logic to a web worker to run in the browser.
  • Updates transformers.js to v3 to enable GPU support.
  • Removes the old translation worker from the Electron main process.
  • Fixes various TypeScript errors and configuration issues.
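Moving translation into a web worker implies a small message protocol between the UI thread and the worker. A minimal sketch of such a protocol (all names here are illustrative, not the PR's actual code):

```javascript
// Illustrative UI <-> worker message protocol for translation.
// The PR's actual message shapes may differ.

// Message the UI sends to the worker.
function makeTranslateRequest(id, text, srcLang, tgtLang) {
  return { type: 'translate', id, text, srcLang, tgtLang };
}

// Messages the worker sends back: download progress, then one result per request.
function makeProgressMessage(file, loaded, total) {
  return { type: 'progress', file, percent: total ? Math.round((loaded / total) * 100) : 0 };
}

function makeResultMessage(id, translation) {
  return { type: 'result', id, translation };
}

// UI-side dispatcher: routes worker messages to the matching handler.
function handleWorkerMessage(msg, handlers) {
  switch (msg.type) {
    case 'progress': return handlers.onProgress(msg);
    case 'result':   return handlers.onResult(msg);
    case 'error':    return handlers.onError(msg);
    default: throw new Error(`unknown message type: ${msg.type}`);
  }
}
```

In the browser the UI side would attach the dispatcher via `worker.onmessage = (e) => handleWorkerMessage(e.data, handlers)`.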

What is the purpose of this pull request?

  • Bug fix
  • New Feature
  • Documentation update
  • Other

@naeruru (Owner) commented Jul 4, 2025

awesome!! the worker works on the web side now. however, it seems to have issues using device: 'webgpu': it errors out while also never passing any messages back to the UI (such as model download progress). the error is not very helpful, and I'm not immediately sure what could be causing it.

it works fine with device: 'wasm' (what it used back in 2.x) though. any clue what might be causing webgpu to fail?

(screenshot of the error)
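One common reason webgpu fails while wasm works is that the browser exposes no usable WebGPU adapter for that page or GPU, in which case guarding with a fallback to 'wasm' is a reasonable workaround. A hedged sketch of such a check (`pickDevice` is a hypothetical helper, not part of transformers.js; the adapter object is injected so the logic is testable outside a browser):

```javascript
// Hypothetical device-selection helper: prefer WebGPU when the browser
// exposes a usable adapter, otherwise fall back to the wasm backend that
// worked in 2.x. `gpu` stands in for `navigator.gpu`.
async function pickDevice(gpu) {
  if (!gpu) return 'wasm';               // browser has no WebGPU API at all
  try {
    const adapter = await gpu.requestAdapter();
    return adapter ? 'webgpu' : 'wasm';  // API present but no usable adapter
  } catch {
    return 'wasm';                       // adapter request itself failed
  }
}

// In the worker, one would then do something like (sketch):
//   const device = await pickDevice(self.navigator?.gpu);
//   const translator = await pipeline('translation', MODEL_ID, { device });
```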

@jeremio (Contributor, Author) commented Jul 8, 2025

Hi @naeruru,

To debug this webgpu issue, I need:

  • OS and browser version
  • GPU model
  • Steps to reproduce the issue

I haven't seen these errors on my Ubuntu/Chrome/GTX 1660 setup, so this looks like a webgpu compatibility issue specific to your environment; the browser console errors should help narrow it down.

Thanks

@naeruru (Owner) commented Jul 11, 2025

can't seem to get it to work on multiple rigs. I've tried changing the revision: 'v3' in order to use this branch of the translation model, and I've also tried multiple dtypes (such as fp32, fp16, q4). still no good. I can see from my network tab that it is indeed downloading the model files (despite no longer reporting the progress below the text field), but it errors out at some point while evaluating the text in the model itself (pipeline(...)).

(screenshot of the error)
  • First rig:
    OS: macOS Sequoia 15.4.1 (MacBook Pro, arm64)
    Browser: Chrome 135.0.7049.116 (Official Build) (arm64)
    GPU: Apple M3

  • Second rig:
    OS: Windows 11 26100.4652
    Browser: Chrome 138.0.7204.101 (Official Build) (64-bit)
    GPU: RTX 4080

steps to reproduce are simply:

  • turn on translations
  • type 'test' into the text bar
  • wait a couple of minutes for the model to download
  • at this point I can see there is no loading bar
  • at some point later it spits out the number error
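The missing loading bar suggests the worker never forwards transformers.js download-progress events to the UI. A hedged sketch of wiring that up (`progress_callback` is the transformers.js option name; the event fields shown and the `toProgressUpdate` helper are illustrative assumptions, not the PR's actual code):

```javascript
// Sketch of forwarding model-download progress from the worker to the UI.
// transformers.js invokes `progress_callback` with events that include a
// `status` field; only 'progress' events carry a usable percentage for a
// loading bar, so everything else is filtered out.
function toProgressUpdate(event) {
  if (event.status !== 'progress') return null;
  return { file: event.file, percent: Math.round(event.progress) };
}

// Worker side (illustrative):
//   const translator = await pipeline('translation', MODEL_ID, {
//     device: 'webgpu',        // or 'wasm' as a fallback
//     dtype: 'fp32',           // fp16 / q4 were also tried in this thread
//     progress_callback: (e) => {
//       const update = toProgressUpdate(e);
//       if (update) self.postMessage({ type: 'progress', ...update });
//     },
//   });
```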

