Conversation
I like the approach, even though it makes the files bigger and adds more content, but if the concept works and we can reduce garbage collector work, this definitely has some benefit :) can you pls rebase? there are some conflicts
If you like the concept, then I should work on applying this to more objects so we can see whether it makes a difference
Instead of spawning a goroutine for each wild pokemon that sleeps waiting for an encounter, use a single pending queue with a background sweeper. This reduces memory pressure and goroutine overhead under high load.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Force-pushed from eb6983d to d238589
Resolved conflicts by keeping typed webhook structs from json_structure while using dirty flag pattern (oldValues, newRecord) for change detection.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…nto dirtyflag_prototype
Pull request overview
Explores a “dirty flag + in-place mutation” pattern to reduce allocations/GC pressure by storing pointers in caches and only persisting records when modified, with additional debug instrumentation and a pending Pokémon queue.
Changes:
- Refactors several cached DB-backed structs to use dirty flags + setter methods, and updates caches to store pointers (including a new generic sharded cache for concurrency).
- Updates webhook generation to compare against captured “old values” instead of passing old/new copies.
- Adds a pending Pokémon queue + sweeper to delay wild updates until an encounter arrives (or timeout).
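To make the dirty-flag pattern concrete, here is a minimal sketch of a setter with old-value snapshotting. The `weatherRecord` type, its fields, and the `map[string]any` snapshot are illustrative assumptions, not the PR's actual decoder structs: a changed field marks the record dirty exactly once, an unchanged write is a no-op, and only dirty records are persisted.

```go
package main

import "fmt"

// weatherRecord is an illustrative stand-in for the PR's cached DB-backed
// structs (weather, gym, pokestop, ...); the real types carry more fields.
type weatherRecord struct {
	id        string
	gameplay  int8
	dirty     bool
	oldValues map[string]any // snapshot used later for webhook diffs
}

// SetGameplay is a dirty-flag setter: it records the old value once and only
// marks the record dirty when the field actually changes, so unchanged
// records are never re-persisted.
func (w *weatherRecord) SetGameplay(v int8) {
	if w.gameplay == v {
		return
	}
	if w.oldValues == nil {
		w.oldValues = make(map[string]any)
	}
	if _, seen := w.oldValues["gameplay"]; !seen {
		w.oldValues["gameplay"] = w.gameplay
	}
	w.gameplay = v
	w.dirty = true
}

// save persists only dirty records, then clears the flag and snapshot.
func (w *weatherRecord) save() bool {
	if !w.dirty {
		return false // nothing to write: no allocation, no DB round trip
	}
	w.dirty = false
	w.oldValues = nil
	return true
}

func main() {
	w := &weatherRecord{id: "cell-1", gameplay: 1}
	w.SetGameplay(1) // same value: no-op
	fmt.Println(w.dirty)

	w.SetGameplay(3) // real change: snapshot old value, mark dirty
	fmt.Println(w.dirty, w.oldValues["gameplay"])
	fmt.Println(w.save())
}
```

Because the cache stores a pointer to the record, the setter mutates in place instead of allocating a fresh copy per update, which is where the GC savings come from.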
Reviewed changes
Copilot reviewed 27 out of 27 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| routes.go | Introduces a typed StatusResponse for consistent JSON responses. |
| main.go | Initializes the new Pokémon pending queue. |
| decoder/weather.go | Applies dirty-flag + old-values snapshotting to weather + webhook payload struct. |
| decoder/tappable.go | Applies dirty-flag setters and pointer caching for tappables. |
| decoder/stats.go | Updates stats logic to use in-object oldValues rather than old/new parameters. |
| decoder/station.go | Applies dirty-flag setters + old-values snapshotting for station + webhooks. |
| decoder/spawnpoint.go | Applies dirty-flag setters + pointer caching for spawnpoints. |
| decoder/sharded_cache.go | Adds a generic sharded cache wrapper around ttlcache to reduce lock contention. |
| decoder/s2cell.go | Switches S2 cell cache values to pointers and updates bulk-save behavior. |
| decoder/routes.go | Applies dirty-flag setters + pointer caching for routes. |
| decoder/pokestop_showcase.go | Routes showcase field updates through new Pokestop setters. |
| decoder/pokestop.go | Major refactor to dirty-flag setters, pointer caching, and webhook comparison via oldValues. |
| decoder/pokemonRtree.go | Updates eviction handling for pointer-based, sharded Pokémon cache. |
| decoder/player.go | Applies dirty-flag setters + pointer caching for player records. |
| decoder/pending_pokemon.go | Adds pending Pokémon queue + background sweeper to process expired entries. |
| decoder/main.go | Switches caches to pointer storage, adds sharded caches, updates cache helpers. |
| decoder/incident.go | Applies dirty-flag setters + old-values snapshotting for incidents + webhooks. |
| decoder/gym.go | Major refactor to dirty-flag setters, pointer caching, and webhook comparison via oldValues. |
| decoder/fort.go | Switches fort change webhooks to a typed payload struct; updates shared-field copy logic. |
| decoder/db_debug.go / decoder/db_debug_off.go | Adds build-tag controlled DB debug logging. |
| decoder/api_pokemon_scan_v1.go / v2.go / v3.go | Updates API builders for pointer-based Pokémon cache values. |
| decoder/api_pokemon.go | Updates Pokémon search API to return pointer-based cache values. |
Feat: reduce updates config
Pull request overview
Copilot reviewed 33 out of 34 changed files in this pull request and generated 3 comments.
Comments suppressed due to low confidence (1)
routes.go:505
`GET /api/pokestop/id/:fort_id` now uses `decoder.PeekPokestopRecord`, which is cache-only. If the pokestop isn't already in cache, this will return `null` with 202 rather than loading from DB (previous behavior). Either restore a DB-backed getter for this endpoint or return a 404 when `pokestop == nil` to avoid silently returning empty responses.
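A hedged sketch of the 404-on-miss option, with framework-agnostic stand-ins: `peekPokestopRecord` mimics the cache-only `decoder.PeekPokestopRecord`, the `pokestop` struct and JSON shape are assumptions, and the actual route wiring is omitted.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// pokestop is a minimal illustrative record; the real struct has many more fields.
type pokestop struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

// peekPokestopRecord is a cache-only lookup stub standing in for
// decoder.PeekPokestopRecord; nil means cache miss, never a DB read.
func peekPokestopRecord(cache map[string]*pokestop, id string) *pokestop {
	return cache[id]
}

// getPokestop returns 404 on a cache miss instead of 202 with a null body,
// so misses no longer look like successes to callers.
func getPokestop(cache map[string]*pokestop, id string) (int, string) {
	stop := peekPokestopRecord(cache, id)
	if stop == nil {
		return http.StatusNotFound, `{"status":"error","reason":"pokestop not found"}`
	}
	body, _ := json.Marshal(stop)
	return http.StatusOK, string(body)
}

func main() {
	cache := map[string]*pokestop{"f1": {ID: "f1", Name: "Fountain"}}

	status, body := getPokestop(cache, "f1")
	fmt.Println(status, body) // cache hit: 200 with the record

	status, _ = getPokestop(cache, "missing")
	fmt.Println(status) // cache miss: 404, not 202 + null
}
```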
```go
func NewShardedCache[K comparable, V any](config ShardedCacheConfig[K, V]) *ShardedCache[K, V] {
	sc := &ShardedCache[K, V]{
		shards:     make([]*ttlcache.Cache[K, V], config.NumShards),
		keyToShard: config.KeyToShard,
	}

	for i := 0; i < config.NumShards; i++ {
		opts := []ttlcache.Option[K, V]{
			ttlcache.WithTTL[K, V](config.TTL),
		}
		if config.DisableTouchOnHit {
			opts = append(opts, ttlcache.WithDisableTouchOnHit[K, V]())
		}
		sc.shards[i] = ttlcache.New[K, V](opts...)
		go sc.shards[i].Start()
	}

	return sc
}

// getShard returns the cache shard for the given key
func (sc *ShardedCache[K, V]) getShard(key K) *ttlcache.Cache[K, V] {
	return sc.shards[sc.keyToShard(key)%uint64(len(sc.shards))]
}
```
`NewShardedCache`/`getShard` assume `config.NumShards > 0` and `config.KeyToShard != nil`. If either is zero/nil, `getShard` will panic (divide by zero / nil function call). Add input validation (and/or sensible defaults) in `NewShardedCache` and return an error or panic with a clear message.
Pull request overview
Copilot reviewed 33 out of 34 changed files in this pull request and generated 2 comments.
Comments suppressed due to low confidence (1)
routes.go:505
`GetPokestop` now uses `PeekPokestopRecord` (cache-only) and returns `202` with a potentially nil body. This changes the endpoint semantics vs DB-backed reads and makes cache misses look like successes. Consider restoring a DB-backed read with a timeout (or returning `404` when the cache miss occurs). Also remove the commented-out context/cancel code.
```go
// NewShardedCache creates a new sharded cache with the given configuration.
// The keyToShard function converts keys to uint64 for shard selection.
func NewShardedCache[K comparable, V any](config ShardedCacheConfig[K, V]) *ShardedCache[K, V] {
	sc := &ShardedCache[K, V]{
		shards:     make([]*ttlcache.Cache[K, V], config.NumShards),
		keyToShard: config.KeyToShard,
	}
```
`NewShardedCache` does not validate `NumShards` (> 0) or that `KeyToShard` is non-nil. As a public constructor, invalid config will panic later (division by zero / nil call). Add upfront validation (return an error or panic with a clear message).
Pull request overview
Copilot reviewed 63 out of 64 changed files in this pull request and generated 2 comments.
```go
func CreateFortWebhooks(ctx context.Context, dbDetails db.DbDetails, ids []string, fortType FortType, change FortChange) {
	var gyms []Gym
	var stops []Pokestop
	if fortType == GYM {
		for _, id := range ids {
			gym, err := GetGymRecord(ctx, dbDetails, id)
			if err != nil {
				continue
			}
			if gym == nil {
				gym, unlock, err := GetGymRecordReadOnly(ctx, dbDetails, id)
				if err != nil || gym == nil {
					if unlock != nil {
						unlock()
					}
					continue
				}
				gyms = append(gyms, *gym)

				fort := InitWebHookFortFromGym(gym)
				unlock()

				CreateFortWebHooks(fort, &FortWebhook{}, change)
			}
		}
	}
	if fortType == POKESTOP {
		for _, id := range ids {
			stop, err := GetPokestopRecord(ctx, dbDetails, id)
			if err != nil {
				stop, unlock, err := getPokestopRecordReadOnly(ctx, dbDetails, id)
				if err != nil || stop == nil {
					if unlock != nil {
						unlock()
					}
					continue
				}
				if stop == nil {
					continue
				}
				stops = append(stops, *stop)

				fort := InitWebHookFortFromPokestop(stop)
				unlock()

				CreateFortWebHooks(fort, &FortWebhook{}, change)
			}
		}
	}
```
`CreateFortWebhooks` always calls `CreateFortWebHooks(fort, &FortWebhook{}, change)`. This is only correct for `REMOVAL`; for `NEW` or `EDIT` it will emit webhooks with an empty new payload (and `MatchStatsGeofence` will use 0/0 coords), and stats may be attributed to the wrong fort type. Adjust the call site to pass `(nil, fort)` for `NEW`, `(fort, nil)` for `REMOVAL`, and `(oldFort, newFort)` for `EDIT` (or split the helper per change type).
The combination of all of these items brings my instance down to about 40% of its previous CPU usage
Thanks to Claude or I wouldn't have bothered!