Add multi-user monitoring, proxy support, and authentication #1

codearranger wants to merge 3 commits into main
Conversation
- Support monitoring multiple Roblox users concurrently via ROBLOX_USER_IDS env var
- Add selective notifications via NOTIFY_ROBLOX_USER_IDS to control which users trigger alerts
- Implement HTTP proxy support for API requests (configurable via PROXY_* env vars)
- Add Roblox authentication with cookie persistence using rbxauth library
- Refactor metrics to use labeled GaugeVec for tracking multiple users
- Add persistent data volume in docker-compose for storing authentication cookies

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
The LastOnline field was not being parsed from the Roblox API response because the struct fields lacked JSON tags. This caused the field to default to the zero time value (0001-01-01), resulting in incorrect "last online" calculations showing very large numbers (e.g., 153722867 minutes).

Added proper JSON tags to the UserPresence and UserPresenceResponse structs to match the camelCase field names returned by the Roblox presence API.

Fixes: "last online X minutes ago" now shows accurate values.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
- Integrated SOCKS5 proxy support using golang.org/x/net/proxy
- Route all Roblox API requests through warp-socks (Cloudflare WARP)
- Added retry logic (3 attempts) to handle transient proxy errors
- Fixed "last online" display to show "currently active" for active users instead of incorrect large minute values
- Disabled HTTP keep-alives to prevent EOF errors with proxy
- Added service dependency to ensure warp-socks is healthy before starting
- Updated both getUsernameFromID() and checkPresence() with retry logic

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Pull request overview
This pull request adds significant new functionality to the Roblox tracker application, including multi-user monitoring, proxy support, and authentication capabilities. The changes enable concurrent monitoring of multiple users with selective notifications and support for SOCKS5 proxies.
Key Changes:
- Multi-user monitoring via goroutines with configurable user IDs and selective notification filtering
- SOCKS5 proxy support with authentication for all Roblox API requests
- Authentication module using the rbxauth library with cookie persistence to a /data volume
Reviewed changes
Copilot reviewed 7 out of 40 changed files in this pull request and generated 14 comments.
| File | Description |
|---|---|
| main.go | Refactored to support multiple user monitoring via goroutines with rate-limited startup; added selective notification filtering |
| roblox.go | Added proxy support via getHTTPClient(), retry logic for API requests, and improved last online time formatting |
| metrics.go | Changed metrics from Gauge to GaugeVec with "userid" label for per-user tracking |
| pushover.go | Updated notification message to use new formatLastOnline() helper |
| auth.go | New authentication module with cookie persistence (contains test code with hardcoded credentials) |
| go.mod | Added rbxauth and golang.org/x/net dependencies; downgraded Go version from 1.19 to 1.18 |
| docker-compose.yml | Added warp-socks service with privileged mode, data volume for cookie storage, and service health dependencies |
| vendor/* | Vendored dependencies for rbxauth, golang.org/x/net, and golang.org/x/crypto |
Comments suppressed due to low confidence (1)
main.go:81
- When an error occurs in the monitorUser function (lines 62-64 or 78-81), the goroutine returns and stops monitoring the user. This means that transient errors cause permanent monitoring failures for that user. The error is logged, but there is no retry mechanism or recovery strategy.
Consider implementing a retry mechanism with exponential backoff for transient errors, or at minimum log more clearly that monitoring for this user has stopped permanently.
```go
func monitorUser(reg *prometheus.Registry, userID int64, metrics *Metrics, notifyIDs []int64) {
	// Get the username string
	user, err := getUsernameFromID(userID)
	if err != nil {
		log.Println(err)
		return
	}

	user.LastPresenceChange = time.Now().UTC()

	// Start presence checker
	presenceState := 0
	user.LastPresenceType = presenceState
	t := time.NewTicker(time.Second * 5)

	// Check presence every 5 seconds
	for range t.C {
		// Check presence
		user.Presence, err = checkPresence(user.ID)
		if err != nil {
			log.Println(err)
			return
		}
```
```go
	"github.com/anaminus/rbxauth"
)

const cookieFilename = "/data/cookies.txt"
```
The cookie file path /data/cookies.txt is hardcoded. If the /data directory doesn't exist or isn't writable (e.g., in environments where the volume isn't mounted), the authentication functions will fail when trying to create or read the file.
Consider making the cookie file path configurable via an environment variable with a sensible default, and adding proper error messages that indicate when the directory is missing or not writable.
```go
for _, userID := range userIDs {
	go monitorUser(reg, userID, metrics, notifyIDs)
	time.Sleep(time.Second * 1) // Rate limit requests to the API by staggering the requests
}

// Expose metrics and custom registry via an HTTP server
http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{Registry: reg}))
log.Fatal(http.ListenAndServe(":8080", nil))
```
The HTTP server is started without waiting for the monitoring goroutines to be fully initialized. If all monitoring goroutines exit early due to errors (e.g., invalid user IDs), the main function will immediately call log.Fatal(http.ListenAndServe(":8080", nil)) and the server will start even though no monitoring is occurring.
Consider checking that at least one goroutine successfully initialized before starting the HTTP server, or use a WaitGroup to ensure proper synchronization.
```go
func getHTTPClient() (*http.Client, error) {
	proxyHost := os.Getenv("PROXY_HOST")
	proxyPort := os.Getenv("PROXY_PORT")
	proxyUser := os.Getenv("PROXY_USER")
	proxyPassword := os.Getenv("PROXY_PASSWORD")

	// If no proxy is configured, return a default client
	if proxyHost == "" || proxyPort == "" {
		return &http.Client{}, nil
	}

	proxyAddr := fmt.Sprintf("%s:%s", proxyHost, proxyPort)

	// Create SOCKS5 dialer with optional authentication
	var auth *proxy.Auth
	if proxyUser != "" && proxyPassword != "" {
		auth = &proxy.Auth{
			User:     proxyUser,
			Password: proxyPassword,
		}
	}

	dialer, err := proxy.SOCKS5("tcp", proxyAddr, auth, proxy.Direct)
	if err != nil {
		return nil, fmt.Errorf("failed to create SOCKS5 dialer: %w", err)
	}

	// Create HTTP transport with SOCKS5 dialer
	// Disable keep-alives to avoid EOF errors when proxy closes connections
	transport := &http.Transport{
		DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
			return dialer.Dial(network, addr)
		},
		DisableKeepAlives:   true,
		MaxIdleConns:        0,
		IdleConnTimeout:     0,
		TLSHandshakeTimeout: 10 * time.Second,
	}

	return &http.Client{
		Transport: transport,
		Timeout:   30 * time.Second,
	}, nil
}
```
The getHTTPClient() function is called on every API request, which means the proxy dialer and HTTP transport are recreated each time. This is inefficient and could lead to resource leaks or connection pool exhaustion under high request volumes.
Consider creating the HTTP client once (as a package-level variable or singleton) and reusing it across requests, or implement proper caching/pooling of HTTP clients.
```diff
@@ -1,4 +1,4 @@
-version: "3.7"
+---
```
[nitpick] The version line was removed from docker-compose.yml (previously version: "3.7"). While docker-compose version 1.27.0+ supports compose files without a version field (defaulting to the latest schema), explicitly specifying the version is considered a best practice for clarity and compatibility.
Consider adding back the version field (e.g., version: "3.8" or later) to make the compose file schema version explicit.
Suggested change:

```yaml
---
version: "3.8"
```
```go
// Disable keep-alives to avoid EOF errors when proxy closes connections
transport := &http.Transport{
	DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
		return dialer.Dial(network, addr)
	},
	DisableKeepAlives: true,
	MaxIdleConns:      0,
	IdleConnTimeout:   0,
```
The HTTP client is configured with DisableKeepAlives: true, which forces the client to create a new connection for every request. This significantly impacts performance and increases latency, especially when making many requests to the same host.
While the comment mentions avoiding EOF errors when the proxy closes connections, this is typically a symptom of a misconfigured proxy or timeout settings rather than a reason to disable keep-alives entirely. Consider investigating the root cause of the EOF errors and using proper timeout configurations instead of disabling keep-alives.
Suggested change:

```diff
-// Disable keep-alives to avoid EOF errors when proxy closes connections
-transport := &http.Transport{
-	DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
-		return dialer.Dial(network, addr)
-	},
-	DisableKeepAlives: true,
-	MaxIdleConns:      0,
-	IdleConnTimeout:   0,
+// Set reasonable idle connection timeouts to avoid EOF errors when proxy closes connections
+transport := &http.Transport{
+	DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
+		return dialer.Dial(network, addr)
+	},
+	MaxIdleConns:    10,
+	IdleConnTimeout: 30 * time.Second,
```
```yaml
warp-socks:
  image: monius/docker-warp-socks:latest
  privileged: true
```
Running the warp-socks container with privileged: true grants the container full access to the host system, which is a significant security risk. While the container also has specific capabilities (NET_ADMIN and SYS_ADMIN), the privileged flag is overly permissive.
Consider removing privileged: true and relying only on the specific capabilities that are actually needed. If privileged mode is absolutely necessary, add a comment explaining why.
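A sketch of the narrower configuration, keeping only the capabilities the review notes are already declared. Whether these two capabilities are sufficient for this image is an assumption to verify against the warp-socks documentation:

```yaml
warp-socks:
  image: monius/docker-warp-socks:latest
  # privileged: true removed; grant only the specific capabilities needed
  cap_add:
    - NET_ADMIN
    - SYS_ADMIN
```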
Suggested change:

```diff
-privileged: true
```
```diff
 module github.com/joecryptotoo/robloxtracker

-go 1.19
+go 1.18
```
[nitpick] The Go version was downgraded from 1.19 to 1.18 in go.mod. This is generally not recommended as it may remove access to bug fixes, security patches, and language features available in 1.19.
Unless there's a specific compatibility requirement, consider keeping the Go version at 1.19 or upgrading to a newer version.
Suggested change:

```diff
-go 1.18
+go 1.19
```
```go
notifyIDsStrSlice := strings.Split(notifyIDsStr, ",")
notifyIDs := make([]int64, len(notifyIDsStrSlice))

for i, idStr := range notifyIDsStrSlice {
	id, err := strconv.ParseInt(idStr, 10, 64)
	if err != nil {
		log.Println(err)
		return
	}
	notifyIDs[i] = id
```
The parsing of NOTIFY_ROBLOX_USER_IDS will fail if the environment variable is empty or not set, as strings.Split("", ",") returns a slice with one empty string element. This will cause strconv.ParseInt to fail with an error, causing the application to exit.
Consider checking if the variable is empty before attempting to parse it, or handle the empty string case gracefully.
Suggested change:

```diff
-notifyIDsStrSlice := strings.Split(notifyIDsStr, ",")
-notifyIDs := make([]int64, len(notifyIDsStrSlice))
-for i, idStr := range notifyIDsStrSlice {
-	id, err := strconv.ParseInt(idStr, 10, 64)
-	if err != nil {
-		log.Println(err)
-		return
-	}
-	notifyIDs[i] = id
+var notifyIDs []int64
+if strings.TrimSpace(notifyIDsStr) != "" {
+	notifyIDsStrSlice := strings.Split(notifyIDsStr, ",")
+	notifyIDs = make([]int64, 0, len(notifyIDsStrSlice))
+	for _, idStr := range notifyIDsStrSlice {
+		idStr = strings.TrimSpace(idStr)
+		if idStr == "" {
+			continue
+		}
+		id, err := strconv.ParseInt(idStr, 10, 64)
+		if err != nil {
+			log.Println(err)
+			return
+		}
+		notifyIDs = append(notifyIDs, id)
+	}
+}
```
```go
userIDsStr := os.Getenv("ROBLOX_USER_IDS")
userIDsStrSlice := strings.Split(userIDsStr, ",")
userIDs := make([]int64, len(userIDsStrSlice))

for i, idStr := range userIDsStrSlice {
	id, err := strconv.ParseInt(idStr, 10, 64)
	if err != nil {
		log.Println(err)
		return
	}
	userIDs[i] = id
}
```
The parsing of ROBLOX_USER_IDS will fail if the environment variable is empty or not set, as strings.Split("", ",") returns a slice with one empty string element. This will cause strconv.ParseInt to fail with an error, causing the application to exit.
Consider adding validation to ensure at least one valid user ID is provided before starting the monitoring process.
```go
func printCookies(cookies []*http.Cookie) {
	fmt.Println("Cookies:")
	for _, cookie := range cookies {
		fmt.Printf("%s: %s\n", cookie.Name, cookie.Value)
	}
}
```
The printCookies() function prints authentication cookies to stdout, which could expose sensitive session tokens in logs. This is a security risk as these cookies could be used to hijack authenticated sessions.
Consider removing this function or ensuring it's only used in development/debugging contexts, never in production.
Summary
- Monitor multiple users via the ROBLOX_USER_IDS environment variable (comma-separated list)
- NOTIFY_ROBLOX_USER_IDS controls which users trigger push notifications
- SOCKS5 proxy support (PROXY_USER, PROXY_PASSWORD, PROXY_HOST, PROXY_PORT environment variables)
- Authentication via the rbxauth library

Technical Changes

- Changed UserPresenceType from Gauge to GaugeVec with a "userid" label
- Added getHTTPClient() function with proxy configuration support
- Added robloxtracker-data volume mapped to /data
- Added github.com/anaminus/rbxauth v0.4.0 dependency

Test plan

- Run with ROBLOX_USER_IDS=123,456,789
- Set NOTIFY_ROBLOX_USER_IDS=123

Notes
🤖 Generated with Claude Code