
Optimization & Bug Fixes #215

Open
Riffaells wants to merge 2 commits into gfxholo:main from Riffaells:main

Conversation

@Riffaells

  • refreshManagers() - Fixed categories?.includes -> categories.includes (the array is always truthy, so the optional chain was redundant)
  • File cache - Added invalidation on vault events (create/rename/delete)
  • Title rendering - Fixed metadata-container nesting inside iconic-title-wrapper
  • Replaced LRUCache(1) with version-based cache
  • Cached Object.keys() for tags, debounced invalidation (100ms)
  • Event-based cache invalidation
  • LRU caches: rulings (1000), paths (500), metadata (200), regex (50)
  • Set-based O(1) lookups vs Array O(n)
  • Lazy toLowerCase (~40% reduction)
  • LRUCache, Debouncer, Memoizer, BatchProcessor, RequestDeduplicator
  • LRU cache (100)
  • throttled refreshes (100ms)
  • debounced events, deferred rendering
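The Set-based lookup swap mentioned above can be sketched as follows (illustrative TypeScript, not the PR's actual code; the names are made up):

```typescript
// Array.includes() scans O(n) per call; Set.has() is O(1) on average.
// Build the Set once, then answer many membership queries cheaply.
const paths: string[] = [];
for (let i = 0; i < 10_000; i++) paths.push(`note-${i}.md`);

const pathSet = new Set(paths); // one-time O(n) construction

// Both answer the same membership question; only the cost differs.
const viaArray = paths.includes("note-9999.md"); // O(n) linear scan
const viaSet = pathSet.has("note-9999.md");      // O(1) hash lookup
```

The trade-off is the one-time cost of building the Set plus keeping it in sync, which pays off when the same collection is queried repeatedly.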

@gfxholo
Owner

gfxholo commented Feb 14, 2026

Hi, thanks for submitting what looks like a lot of hard work! I can't merge this pull request in its current form (I don't understand everything that's happening here) but I might be able to if it's broken up into smaller PRs.

The main questions on my mind are:

  • What does the LRU cache achieve?
  • What does the Memoizer achieve?
  • Why use a custom Debouncer class over the built-in Debouncer?
  • Are there any specific tests I can run to see how these changes are effective?

I don't have the free time to do complex code reviews, so some spoon-feeding may be necessary. 💚

@gfxholo gfxholo added the performance Laggy behavior caused by the plugin label Feb 14, 2026
@Riffaells
Author

I had experimented with Memoizer, BatchProcessor, and RequestDeduplicator, but forgot to remove them from the commit and the project; that's corrected now.

The Debouncer is custom because the built-in one creates a single debounced function, while here I need to manage several independent operations keyed by name, each with its own timer.
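A minimal sketch of that idea, assuming a Map of per-key timers (illustrative, not the PR's actual class):

```typescript
// Keyed debouncer: each key gets its own timer, so scheduling one
// operation does not cancel or delay an unrelated one.
class KeyedDebouncer {
  private timers = new Map<string, ReturnType<typeof setTimeout>>();

  constructor(private delayMs: number) {}

  // Schedule fn under `key`; a newer call with the same key resets its timer.
  run(key: string, fn: () => void): void {
    const existing = this.timers.get(key);
    if (existing !== undefined) clearTimeout(existing);
    this.timers.set(key, setTimeout(() => {
      this.timers.delete(key);
      fn();
    }, this.delayMs));
  }

  cancel(key: string): void {
    const existing = this.timers.get(key);
    if (existing !== undefined) {
      clearTimeout(existing);
      this.timers.delete(key);
    }
  }
}
```

A plain debounce wrapping one function cannot express this: two calls with different keys must both fire, while repeated calls with the same key must coalesce.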

The bugs I found were related to file creation (I had a conflict with datacorejs: when I created a file through it, the classes there got confused). Another problem was that invalidateFileItemsCache was not called on create/rename/delete, so the cache only refreshed after restarting Obsidian; I also replaced LRUCache(1).

There was also a bug with Object.keys(allTags): it rebuilt the array on every call, so the result is now cached in cachedTagIds.
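The caching pattern described above, sketched in isolation (cachedTagIds is the name from the PR; the surrounding helpers are hypothetical):

```typescript
// Object.keys() allocates a fresh array on every call, so cache the
// result once and reset it only when the tag set actually changes.
let cachedTagIds: string[] | null = null;

function getTagIds(allTags: Record<string, number>): string[] {
  if (cachedTagIds === null) {
    cachedTagIds = Object.keys(allTags);
  }
  return cachedTagIds;
}

function invalidateTagIds(): void {
  cachedTagIds = null; // call when tags are added or removed
}
```

Besides skipping the rebuild, returning the same array instance lets downstream code use cheap identity checks to detect "tags unchanged".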

I tested manually: opening files one by one and renaming a file (in the title field), the file context used to appear after about a second. Now it takes about 200ms, and even that remaining delay is no longer caused by the plugin; as I later found out, it came from the tags.

@gfxholo
Owner

gfxholo commented Feb 23, 2026

Thank you for simplifying the PR! I still have two questions before I run some tests on this:

  • What does the LRU cache improve performance-wise?
  • Is it possible to split your smaller bugfixes into separate PRs? (e.g. the changes to refreshManagers)

@Riffaells
Author

LRU caching is used in two places, RuleManager and ColorUtils. In RuleManager, checking the rules for each file calls splitFilePath() and getFileCache(); without a cache, this happens on every access. The LRU stores results and returns them instantly, and old entries are evicted automatically. On a vault with 1000+ files (like mine) this is noticeable. In ColorUtils, converting a color to RGB requires calling getComputedStyle() and touching a DOM element; LRU(100) caches the result, so repeated calls with the same color don't touch the DOM at all.
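A minimal sketch of a Map-backed LRU cache like the ones described (illustrative; the PR's real class may differ):

```typescript
// LRU cache built on a Map, which preserves insertion order:
// re-inserting on get() marks an entry as most recently used,
// so the first key in iteration order is always the LRU victim.
class LRUCache<K, V> {
  private map = new Map<K, V>();

  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    this.map.delete(key);
    this.map.set(key, value); // move to most-recent position
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    else if (this.map.size >= this.capacity) {
      // Evict the least recently used entry (first key in order)
      this.map.delete(this.map.keys().next().value as K);
    }
    this.map.set(key, value);
  }
}
```

With this structure, a hot path like splitFilePath() per file becomes a single Map lookup after the first computation, and memory stays bounded by the capacity.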

The IconicPlugin had LRUCache(1), which is meaningless: a single slot is just a variable with the overhead of a Map. I replaced it with a simple version-based cache (when the version changes, the cache is rebuilt on the next access).
