Conversation
Walkthrough

ReloadConfig now invokes r.config.ReloadFunc() when provided. reloadServers was reworked to build per-server updates in goroutines (serverUpdate entries) and apply them in bulk under a lock, preserving existing Redirects counters and initializing Prometheus counters for new servers; removed servers are pruned after the update. check.go moves setting of the User-Agent header to before the request is sent, defers res.Body.Close(), and adds mutex protection around mutations to server.Protocols. redirector.go runs ListenAndServe in a goroutine with explicit error logging and Fatal on failure. servers.go adds a mu sync.RWMutex to Server, protects status updates, and refactors list access and Closest to use the centralized server list.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Actionable comments posted: 1
Fix all issues with AI Agents 🤖
In @config.go:
- Line 110: r.config.ReloadFunc may be nil and calling it unconditionally can
panic; update the call site to check for nil before invoking it (e.g., if
r.config != nil && r.config.ReloadFunc != nil { r.config.ReloadFunc() }) so the
reload callback is only executed when set; use the existing r.config and
ReloadFunc symbols to locate the code and avoid introducing behavioral changes
beyond the nil-check.
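A minimal sketch of that guard, assuming ReloadConfig is a method on Redirector and ReloadFunc is an optional callback on Config (as the symbols above suggest):

```go
// ReloadConfig re-reads the configuration and then notifies the optional
// reload callback. The nil-checks ensure an unset callback cannot panic.
func (r *Redirector) ReloadConfig() {
	// ... re-read the config file and rebuild the server list ...

	if r.config != nil && r.config.ReloadFunc != nil {
		r.config.ReloadFunc()
	}
}
```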
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Jira integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
config.go
🧰 Additional context used
🧬 Code graph analysis (1)
config.go (2)
- servers.go (1): Server (21-35)
- db/structs.go (1): Country (18-22)
🔇 Additional comments (4)
config.go (4)
203-222: Good deferred-update pattern for thread safety.

The approach of collecting updates during concurrent processing and applying them afterward under a lock is correct. The serverUpdate struct cleanly separates new servers (index -1) from existing ones.

However, serversLock is local to this function, so it only protects the update phase within a single reloadServers call. If ReloadConfig can be triggered concurrently (e.g., multiple HTTP reload requests), there's still a potential race on r.servers. Consider whether ReloadConfig needs a mutex at the Redirector level to prevent concurrent reloads, or verify that callers already serialize reload requests.
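For illustration, a rough sketch of the collect-then-apply shape described here; serverUpdate's exact fields and the buildServerUpdate helper are assumptions, not the code under review:

```go
// serverUpdate pairs a rebuilt server with the index of the entry it
// replaces; index == -1 marks a brand-new server.
type serverUpdate struct {
	index  int
	server *Server
}

func (r *Redirector) reloadServers(hosts []string) {
	var (
		serversLock sync.Mutex
		updates     []serverUpdate
		wg          sync.WaitGroup
	)

	for _, host := range hosts {
		wg.Add(1)
		go func(host string) {
			defer wg.Done()
			upd := buildServerUpdate(r, host) // hypothetical per-server work
			serversLock.Lock()
			updates = append(updates, upd)
			serversLock.Unlock()
		}(host)
	}
	wg.Wait()

	// Apply every update in one critical section so readers never see a
	// half-reconciled list.
	serversLock.Lock()
	defer serversLock.Unlock()
	for _, upd := range updates {
		if upd.index == -1 {
			r.servers = append(r.servers, upd.server)
		} else {
			r.servers[upd.index] = upd.server
		}
	}
}
```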
251-259: LGTM! The lock usage is correct with minimal critical sections. The goroutines properly synchronize access to shared state.
264-286: LGTM! The bulk update logic correctly handles both cases:

- Existing servers preserve their Prometheus Redirects counter (avoiding duplicate registration)
- New servers get a fresh counter via promauto.NewCounter

The bounds check at line 267 prevents index-out-of-range panics.
288-298: LGTM! The backward iteration for safe removal while iterating is the correct pattern. The lock correctly spans both the update and removal phases, ensuring atomicity of the entire server list reconciliation.
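In isolation, the backward-removal idiom being endorsed looks roughly like this (the pruning condition and the `wanted` set are placeholders):

```go
// Iterate backwards so removing index i never shifts elements that have
// not been visited yet.
for i := len(r.servers) - 1; i >= 0; i-- {
	if _, keep := wanted[r.servers[i].Host]; !keep {
		r.servers = append(r.servers[:i], r.servers[i+1:]...)
	}
}
```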
Actionable comments posted: 4
Fix all issues with AI Agents 🤖
In @check.go:
- Around line 80-82: Reads of the shared slice server.Protocols are not
synchronized; wrap every read access (e.g., the lo.Contains calls in
checkProtocol and at the sites where lo.Contains(server.Protocols, "https") is
used) with server.mu.RLock() and server.mu.RUnlock() to match the existing write
locking around Remove(server.Protocols, "http"); ensure any function that
inspects server.Protocols (including checkProtocol and the two locations with
"https" checks) acquires the read lock before reading and releases it afterward
so all accesses are protected by the mutex.
In @redirector.go:
- Around line 131-135: The goroutine that calls
http.ListenAndServe(r.config.BindAddress, router) must not call
log.WithError(...).Fatal asynchronously because Start() can return success
before bind failures occur; modify Start()/redirector logic to surface bind
errors synchronously: either call ListenAndServe directly (blocking) and return
its error from Start(), or start the server in a goroutine but capture
ListenAndServe errors on an error channel and return or propagate that error
from Start(); if you must not return an error, replace Fatal with a non-fatal
log and signal failure via a status channel so callers can detect binding
failure instead.
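One hedged way to get the synchronous bind-error behaviour asked for above is to split the bind from the serve: net.Listen fails immediately on a port conflict and its error can be returned from Start(), while Serve keeps running in a goroutine. The Start signature and field names are assumptions:

```go
func (r *Redirector) Start(router http.Handler) error {
	// Bind synchronously so address/port problems surface as a returned error.
	ln, err := net.Listen("tcp", r.config.BindAddress)
	if err != nil {
		return err
	}

	go func() {
		// Serve only returns on failure or shutdown; log instead of Fatal so
		// the process owner decides how to react.
		if err := http.Serve(ln, router); err != nil {
			log.WithError(err).Error("HTTP server stopped")
		}
	}()

	return nil
}
```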
In @servers.go:
- Around line 147-154: checkLoop reads r.servers without synchronization,
causing a data race when ReloadConfig or other code mutates r.servers; add
synchronization by protecting r.servers with a shared lock or atomic swap and
update checkLoop to use it. Specifically, add a sync.RWMutex (e.g., serversMu)
to the Redirector struct, acquire a read lock in ServerList.checkLoop before
accessing r.servers (and release it afterward), and update all writers (e.g.,
ReloadConfig in config.go and any other code that assigns r.servers) to acquire
the write lock when modifying the slice; alternatively implement atomic pointer
swapping for r.servers and have checkLoop load the pointer atomically to iterate
a stable snapshot.
- Around line 204-333: Closest performs unsynchronized reads of Server fields
(Available, Protocols, Country, Latitude, Longitude, Weight, Host, Rules)
leading to data races; fix by acquiring server.mu.RLock() whenever reading a
server inside Closest (e.g., in the lo.Filter and lo.Map closures) or, better,
by RLocking, copying the needed fields into local variables/struct (Available,
Protocols slice, Rules slice, Country, Latitude, Longitude, Weight, Host) and
then RUnlocking before doing rule checks, Distance calculations, or building
randutil.Choice; if server.checkRules currently reads server state, refactor it
or add a helper that accepts the copied Rules/inputs so you can call rule checks
without holding the server mutex. Ensure every read of Server.<field> in Closest
uses this RLock-and-copy pattern to eliminate races.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Jira integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
check.go
redirector.go
servers.go
🧰 Additional context used
🧬 Code graph analysis (2)
servers.go (2)
- redirector.go (1): Redirector (31-43)
- db/structs.go (4): City (5-10), Country (18-22), ASN (37-40), Location (24-28)
check.go (1)
- config.go (1): Remove (101-105)
🔇 Additional comments (4)
check.go (2)
58-64: LGTM: Proper resource management and header setting. The User-Agent header is correctly set before the request, and the response body is properly closed with defer, preventing resource leaks.
252-253: LGTM: Consistent User-Agent header addition. User-Agent headers are properly set before HTTP requests in the version check paths, consistent with the HTTPCheck implementation.
Also applies to: 298-299
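The approved pattern, sketched outside the actual check code (the URL, client, and User-Agent string are placeholders):

```go
req, err := http.NewRequest(http.MethodGet, checkURL, nil)
if err != nil {
	return false, err
}
// Set headers before the request is sent, not after.
req.Header.Set("User-Agent", "redirector-check")

res, err := httpClient.Do(req)
if err != nil {
	return false, err
}
// Always close the body so connections are not leaked.
defer res.Body.Close()
```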
servers.go (2)
23-23: Good addition of mutex for thread safety.

Adding mu sync.RWMutex to the Server struct enables thread-safe access to server fields. Ensure all reads use RLock/RUnlock and all writes use Lock/Unlock consistently across the codebase.
74-75: Verify synchronization scope in checkStatus.

The mutex is acquired after all checks have completed (line 56). While the lock protects updates to Available, LastChange, and Reason, the checks themselves may read or write server state (e.g., Protocols in check.go). Confirm that:
- All server field accesses within the check implementations are properly synchronized
- The current locking scope is sufficient given that checks now use their own locks for protocol mutations
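Roughly the locking scope being discussed, with the slow check work outside the lock and only the status mutation inside it (runChecks and the exact field handling are assumptions):

```go
// Run the potentially slow checks without holding the server mutex.
available, reason := runChecks(server, checks)

// Take the write lock only for the status mutation itself.
server.mu.Lock()
if server.Available != available {
	server.LastChange = time.Now()
}
server.Available = available
server.Reason = reason
server.mu.Unlock()
```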
```go
server.mu.Lock()
server.Protocols = Remove(server.Protocols, "http")
server.mu.Unlock()
```
Critical: Incomplete synchronization of server.Protocols accesses.
While mutations to server.Protocols are now protected with server.mu, the reads remain unprotected, creating data races:
- Line 108: lo.Contains(server.Protocols, scheme) reads without lock
- Line 226: lo.Contains(server.Protocols, "https") reads without lock
- Line 289: lo.Contains(server.Protocols, "https") reads without lock
All accesses (reads and writes) to shared mutable state must be synchronized.
🔎 Proposed fix: Protect reads with RLock
For Line 108 in checkProtocol:

```diff
 func (h *HTTPCheck) checkProtocol(server *Server, scheme string) {
 	res, err := h.checkHTTPScheme(server, scheme, log.Fields{})
 	if !res || err != nil {
 		return
 	}
+	server.mu.RLock()
+	hasScheme := lo.Contains(server.Protocols, scheme)
+	server.mu.RUnlock()
-	if !lo.Contains(server.Protocols, scheme) {
+	if !hasScheme {
 		server.mu.Lock()
 		server.Protocols = append(server.Protocols, scheme)
 		server.mu.Unlock()
 	}
 }
```

Apply similar patterns to lines 226 and 289.
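Applied to one of the "https" read sites, the same guard would look roughly like this (the surrounding condition is illustrative):

```go
server.mu.RLock()
hasHTTPS := lo.Contains(server.Protocols, "https")
server.mu.RUnlock()

if hasHTTPS {
	// run the https variant of the check
}
```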
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @check.go around lines 80-82, reads of the shared slice server.Protocols are
not synchronized; wrap every read access (e.g., the lo.Contains calls in
checkProtocol and at the sites where lo.Contains(server.Protocols, "https") is
used) with server.mu.RLock() and server.mu.RUnlock() to match the existing write
locking around Remove(server.Protocols, "http"); ensure any function that
inspects server.Protocols (including checkProtocol and the two locations with
"https" checks) acquires the read lock before reading and releases it afterward
so all accesses are protected by the mutex.
```diff
 // every 60 seconds. It uses r.servers to always check the current list
 func (s ServerList) checkLoop(r *Redirector, checks []ServerCheck) {
 	t := time.NewTicker(60 * time.Second)

 	for {
 		<-t.C
-		s.Check(r, checks)
+		// Use r.servers instead of s to always check the current server list
+		r.servers.Check(r, checks)
```
Critical: Unsynchronized access to r.servers in concurrent context.
Line 154 accesses r.servers without synchronization, but r.servers can be modified by ReloadConfig (per the PR objectives and AI summary). This creates a data race on the slice header and could lead to crashes or corrupted reads.
According to the AI summary, config.go applies server updates under a lock, but checkLoop doesn't acquire that same lock before reading r.servers. You need either:
- A shared sync.RWMutex on Redirector to protect r.servers
- Atomic pointer swapping for the server list
- Pass a stable snapshot of servers to the check loop
🔎 Example: Use RWMutex in Redirector
In redirector.go, add a mutex to the Redirector struct:
```diff
 type Redirector struct {
+	serversMu sync.RWMutex
 	config  *Config
 	db      *maxminddb.Reader
 	asnDB   *maxminddb.Reader
 	servers ServerList
 	// ...
 }
```

Then in checkLoop:

```diff
 func (s ServerList) checkLoop(r *Redirector, checks []ServerCheck) {
 	t := time.NewTicker(60 * time.Second)
 	for {
 		<-t.C
 		// Use r.servers instead of s to always check the current server list
+		r.serversMu.RLock()
+		currentServers := r.servers
+		r.serversMu.RUnlock()
-		r.servers.Check(r, checks)
+		currentServers.Check(r, checks)
 	}
 }
```

And protect all reads/writes to r.servers in config.go and other locations.
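The atomic-swap alternative listed above could look like the sketch below, using sync/atomic's Pointer type (Go 1.19+). It changes the field type, so it is a larger refactor than the RWMutex; the surrounding method shapes are assumptions:

```go
type Redirector struct {
	servers atomic.Pointer[ServerList] // swapped wholesale on reload
	// ...
}

// Reader side: load a stable snapshot and iterate it without locking.
func (r *Redirector) checkLoop(checks []ServerCheck) {
	t := time.NewTicker(60 * time.Second)
	for range t.C {
		if current := r.servers.Load(); current != nil {
			current.Check(r, checks)
		}
	}
}

// Writer side: build the new list off to the side, then publish it atomically.
func (r *Redirector) publishServers(next ServerList) {
	r.servers.Store(&next)
}
```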
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @servers.go around lines 147-154, checkLoop reads r.servers without
synchronization, causing a data race when ReloadConfig or other code mutates
r.servers; add synchronization by protecting r.servers with a shared lock or
atomic swap and update checkLoop to use it. Specifically, add a sync.RWMutex
(e.g., serversMu) to the Redirector struct, acquire a read lock in
ServerList.checkLoop before accessing r.servers (and release it afterward), and
update all writers (e.g., ReloadConfig in config.go and any other code that
assigns r.servers) to acquire the write lock when modifying the slice;
alternatively implement atomic pointer swapping for r.servers and have checkLoop
load the pointer atomically to iterate a stable snapshot.
```go
func (s ServerList) Closest(r *Redirector, scheme string, ip net.IP) (*Server, float64, error) {
	cacheKey := scheme + "_" + ip.String()

	if cached, exists := r.serverCache.Get(cacheKey); exists {
		if comp, ok := cached.(ComputedDistance); ok {
			log.Infof("Cache hit: %s", comp.Server.Host)
			return comp.Server, comp.Distance, nil
		}
		r.serverCache.Remove(cacheKey)
	}

	var city db.City
	if err := r.db.Lookup(ip, &city); err != nil {
		log.WithError(err).Warning("Unable to lookup client location")
		return nil, -1, err
	}
	clientCountry := city.Country.IsoCode

	var asn db.ASN
	if r.asnDB != nil {
		if err := r.asnDB.Lookup(ip, &asn); err != nil {
			log.WithError(err).Warning("Unable to load ASN information")
			return nil, -1, err
		}
	}

	ruleInput := RuleInput{
		IP:       ip.String(),
		ASN:      asn,
		Location: city,
	}

	validServers := lo.Filter(s, func(server *Server, _ int) bool {
		if !server.Available || !lo.Contains(server.Protocols, scheme) {
			return false
		}
		if len(server.Rules) > 0 && !server.checkRules(ruleInput) {
			log.WithField("host", server.Host).Debug("Skipping server due to rules")
			return false
		}
		return true
	})

	if len(validServers) < 2 {
		validServers = s
	}

	localServers := lo.Filter(validServers, func(server *Server, _ int) bool {
		return server.Country == clientCountry
	})

	if len(localServers) > 0 {
		computedLocal := lo.Map(localServers, func(server *Server, _ int) ComputedDistance {
			d := Distance(city.Location.Latitude, city.Location.Longitude, server.Latitude, server.Longitude)
			return ComputedDistance{
				Server:   server,
				Distance: d,
			}
		})

		sort.Slice(computedLocal, func(i, j int) bool {
			return computedLocal[i].Distance < computedLocal[j].Distance
		})

		if computedLocal[0].Distance < r.config.SameCityThreshold {
			chosen := computedLocal[0]
			r.serverCache.Add(cacheKey, chosen)
			return chosen.Server, chosen.Distance, nil
		}

		choiceCount := r.config.TopChoices
		if len(computedLocal) < choiceCount {
			choiceCount = len(computedLocal)
		}

		choices := make([]randutil.Choice, choiceCount)
		for i, item := range computedLocal[:choiceCount] {
			choices[i] = randutil.Choice{
				Weight: item.Server.Weight,
				Item:   item,
			}
		}

		choice, err := randutil.WeightedChoice(choices)
		if err != nil {
			log.WithError(err).Warning("Unable to choose a weighted choice")
			return nil, -1, err
		}

		dist := choice.Item.(ComputedDistance)
		r.serverCache.Add(cacheKey, dist)
		return dist.Server, dist.Distance, nil
	}

	// Fallback: if no local servers exist, simply select the nearest server among all valid servers.
	computed := lo.Map(validServers, func(server *Server, _ int) ComputedDistance {
		d := Distance(city.Location.Latitude, city.Location.Longitude, server.Latitude, server.Longitude)
		return ComputedDistance{
			Server:   server,
			Distance: d,
		}
	})

	sort.Slice(computed, func(i, j int) bool {
		return computed[i].Distance < computed[j].Distance
	})

	choiceCount := r.config.TopChoices
	if len(computed) < choiceCount {
		choiceCount = len(computed)
	}

	choices := make([]randutil.Choice, choiceCount)
	for i, item := range computed[:choiceCount] {
		choices[i] = randutil.Choice{
			Weight: item.Server.Weight,
			Item:   item,
		}
	}

	choice, err := randutil.WeightedChoice(choices)
	if err != nil {
		log.WithError(err).Warning("Unable to choose a weighted choice")
		return nil, -1, err
	}

	dist := choice.Item.(ComputedDistance)
	r.serverCache.Add(cacheKey, dist)
	return dist.Server, dist.Distance, nil
}
```
Critical: Unprotected reads of server fields in Closest.
Line 237 reads server.Available and server.Protocols without acquiring server.mu.RLock(). These fields are modified concurrently:
- server.Available is updated in checkStatus (line 81, 97)
- server.Protocols is modified in check.go
This creates data races that can lead to inconsistent reads or panics when iterating over server.Protocols.
🔎 Proposed fix: Protect server field access
```diff
 validServers := lo.Filter(s, func(server *Server, _ int) bool {
+	server.mu.RLock()
+	available := server.Available
+	hasProtocol := lo.Contains(server.Protocols, scheme)
+	passesRules := len(server.Rules) == 0 || server.checkRules(ruleInput)
+	server.mu.RUnlock()
+
-	if !server.Available || !lo.Contains(server.Protocols, scheme) {
+	if !available || !hasProtocol {
 		return false
 	}
-	if len(server.Rules) > 0 && !server.checkRules(ruleInput) {
+	if !passesRules {
 		log.WithField("host", server.Host).Debug("Skipping server due to rules")
 		return false
 	}
 	return true
 })
```

Note: You'll also need to protect reads of server.Country at line 252 and other server field accesses in this method.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
func (s ServerList) Closest(r *Redirector, scheme string, ip net.IP) (*Server, float64, error) {
	cacheKey := scheme + "_" + ip.String()

	if cached, exists := r.serverCache.Get(cacheKey); exists {
		if comp, ok := cached.(ComputedDistance); ok {
			log.Infof("Cache hit: %s", comp.Server.Host)
			return comp.Server, comp.Distance, nil
		}
		r.serverCache.Remove(cacheKey)
	}

	var city db.City
	if err := r.db.Lookup(ip, &city); err != nil {
		log.WithError(err).Warning("Unable to lookup client location")
		return nil, -1, err
	}
	clientCountry := city.Country.IsoCode

	var asn db.ASN
	if r.asnDB != nil {
		if err := r.asnDB.Lookup(ip, &asn); err != nil {
			log.WithError(err).Warning("Unable to load ASN information")
			return nil, -1, err
		}
	}

	ruleInput := RuleInput{
		IP:       ip.String(),
		ASN:      asn,
		Location: city,
	}

	validServers := lo.Filter(s, func(server *Server, _ int) bool {
		server.mu.RLock()
		available := server.Available
		hasProtocol := lo.Contains(server.Protocols, scheme)
		passesRules := len(server.Rules) == 0 || server.checkRules(ruleInput)
		server.mu.RUnlock()
		if !available || !hasProtocol {
			return false
		}
		if !passesRules {
			log.WithField("host", server.Host).Debug("Skipping server due to rules")
			return false
		}
		return true
	})

	if len(validServers) < 2 {
		validServers = s
	}

	localServers := lo.Filter(validServers, func(server *Server, _ int) bool {
		return server.Country == clientCountry
	})

	if len(localServers) > 0 {
		computedLocal := lo.Map(localServers, func(server *Server, _ int) ComputedDistance {
			d := Distance(city.Location.Latitude, city.Location.Longitude, server.Latitude, server.Longitude)
			return ComputedDistance{
				Server:   server,
				Distance: d,
			}
		})

		sort.Slice(computedLocal, func(i, j int) bool {
			return computedLocal[i].Distance < computedLocal[j].Distance
		})

		if computedLocal[0].Distance < r.config.SameCityThreshold {
			chosen := computedLocal[0]
			r.serverCache.Add(cacheKey, chosen)
			return chosen.Server, chosen.Distance, nil
		}

		choiceCount := r.config.TopChoices
		if len(computedLocal) < choiceCount {
			choiceCount = len(computedLocal)
		}

		choices := make([]randutil.Choice, choiceCount)
		for i, item := range computedLocal[:choiceCount] {
			choices[i] = randutil.Choice{
				Weight: item.Server.Weight,
				Item:   item,
			}
		}

		choice, err := randutil.WeightedChoice(choices)
		if err != nil {
			log.WithError(err).Warning("Unable to choose a weighted choice")
			return nil, -1, err
		}

		dist := choice.Item.(ComputedDistance)
		r.serverCache.Add(cacheKey, dist)
		return dist.Server, dist.Distance, nil
	}

	// Fallback: if no local servers exist, simply select the nearest server among all valid servers.
	computed := lo.Map(validServers, func(server *Server, _ int) ComputedDistance {
		d := Distance(city.Location.Latitude, city.Location.Longitude, server.Latitude, server.Longitude)
		return ComputedDistance{
			Server:   server,
			Distance: d,
		}
	})

	sort.Slice(computed, func(i, j int) bool {
		return computed[i].Distance < computed[j].Distance
	})

	choiceCount := r.config.TopChoices
	if len(computed) < choiceCount {
		choiceCount = len(computed)
	}

	choices := make([]randutil.Choice, choiceCount)
	for i, item := range computed[:choiceCount] {
		choices[i] = randutil.Choice{
			Weight: item.Server.Weight,
			Item:   item,
		}
	}

	choice, err := randutil.WeightedChoice(choices)
	if err != nil {
		log.WithError(err).Warning("Unable to choose a weighted choice")
		return nil, -1, err
	}

	dist := choice.Item.(ComputedDistance)
	r.serverCache.Add(cacheKey, dist)
	return dist.Server, dist.Distance, nil
}
```
🤖 Prompt for AI Agents
In @servers.go around lines 204-333, Closest performs unsynchronized reads of
Server fields (Available, Protocols, Country, Latitude, Longitude, Weight, Host,
Rules) leading to data races; fix by acquiring server.mu.RLock() whenever
reading a server inside Closest (e.g., in the lo.Filter and lo.Map closures) or,
better, by RLocking, copying the needed fields into local variables/struct
(Available, Protocols slice, Rules slice, Country, Latitude, Longitude, Weight,
Host) and then RUnlocking before doing rule checks, Distance calculations, or
building randutil.Choice; if server.checkRules currently reads server state,
refactor it or add a helper that accepts the copied Rules/inputs so you can call
rule checks without holding the server mutex. Ensure every read of
Server.<field> in Closest uses this RLock-and-copy pattern to eliminate races.
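A hedged sketch of the RLock-and-copy idea from this prompt: a small helper that snapshots the fields Closest reads, so distance and rule computations run without holding the server mutex. The snapshot type, the Rule element type, and the exact field set are assumptions:

```go
// serverSnapshot is an immutable copy of the Server fields Closest needs.
type serverSnapshot struct {
	Available bool
	Protocols []string
	Rules     []Rule
	Country   string
	Latitude  float64
	Longitude float64
	Weight    int
	Host      string
}

// snapshot copies the mutable fields under the read lock; callers can use
// the returned value without any further locking.
func (s *Server) snapshot() serverSnapshot {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return serverSnapshot{
		Available: s.Available,
		Protocols: append([]string(nil), s.Protocols...), // copy, don't alias
		Rules:     append([]Rule(nil), s.Rules...),
		Country:   s.Country,
		Latitude:  s.Latitude,
		Longitude: s.Longitude,
		Weight:    s.Weight,
		Host:      s.Host,
	}
}
```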
Re-read config during config reload and fix some potential data races