
Fix worker configuration #25

Merged
ionutz89 merged 29 commits into main from fix-worker-configuration on Jun 20, 2025
Conversation

Contributor

@ionutz89 ionutz89 commented Jun 20, 2025

Summary by CodeRabbit

  • New Features

    • Enabled observability logging in the production environment for improved monitoring.
    • Introduced a network latency and jitter measurement tool providing detailed connection stability reports.
    • Version and build metadata are now dynamically available via environment variables in API responses.
    • Enhanced the /speed endpoint to support customizable response sizes and data patterns with validation.
  • Chores

    • Updated deployment configuration to include new environment variables for versioning and build information.
    • Added an API token to the production environment configuration.
    • Simplified environment variable handling and rate limiting logic for improved reliability.
    • Removed outdated environment mocking in test setup for cleaner testing environment.

ionutz89 added 10 commits June 20, 2025 12:29
- Remove condition check from deploy step to allow testing on feature branch
- This will enable testing the deployment workflow on feature branches

Signed-off-by: Ionut Iorgu <git@h-all.co>
- Fix worker event handler registration
- Resolve duplicate declarations in worker code
- Fix lint errors in worker.test.js
- Add proper ES module exports
- Configure wrangler.toml for ESM and service-worker target

Signed-off-by: Ionut Iorgu <git@h-all.co>
Signed-off-by: Ionut Iorgu <git@h-all.co>
Signed-off-by: Ionut Iorgu <git@h-all.co>
Signed-off-by: Ionut Iorgu <git@h-all.co>
…tion

Signed-off-by: Ionut Iorgu <git@h-all.co>
Signed-off-by: Ionut Iorgu <git@h-all.co>
Signed-off-by: Ionut Iorgu <git@h-all.co>
Signed-off-by: Ionut Iorgu <git@h-all.co>
Signed-off-by: Ionut Iorgu <git@h-all.co>
Contributor

coderabbitai bot commented Jun 20, 2025

Warning

Rate limit exceeded

@ionutz89 has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 0 minutes and 6 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between eaada2f and 8e54b1e.

📒 Files selected for processing (2)
  • README.md (2 hunks)
  • src/index.js (10 hunks)

Walkthrough

The deployment workflow was updated to remove globally scoped secrets and instead define new environment variables (VERSION, GIT_COMMIT, BUILD_TIME) specifically for the deploy step. The production configuration template now includes an API_PROBE_TOKEN variable within [env.production.vars] and enables observability logs. Additionally, a new script was added to measure network latency and jitter by sending repeated HTTPS requests and reporting statistics. The /info and /version endpoints were updated to read version metadata from environment variables with fallback to constants. Test and Jest setup code was simplified by removing mocks for import.meta.env.
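For illustration only, a minimal sketch of the fallback pattern described above: reading version metadata from environment variables and falling back to constants. The handler shape and names here are assumptions, not the exact code in src/index.js:

// Sketch (assumed names): prefer values injected at deploy time, else fall back to defaults.
const DEFAULT_VERSION = 'v1.0.0';
const DEFAULT_GIT_COMMIT = 'unknown';
const DEFAULT_BUILD_TIME = 'unknown';

function versionResponse(env) {
  const body = {
    version: env.VERSION || DEFAULT_VERSION,
    gitCommit: env.GIT_COMMIT || DEFAULT_GIT_COMMIT,
    buildTime: env.BUILD_TIME || DEFAULT_BUILD_TIME,
  };
  return new Response(JSON.stringify(body), {
    headers: { 'content-type': 'application/json' },
  });
}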

Changes

File(s) Change Summary
.github/workflows/deploy.yml Removed global secret env vars; added VERSION, GIT_COMMIT, and BUILD_TIME env vars to "Generate wrangler.toml" step; reduced env vars in deploy step.
wrangler.toml.template Moved API_PROBE_TOKEN, VERSION, GIT_COMMIT, and BUILD_TIME into [env.production.vars]; added [observability.logs] with enabled = true.
scripts/measure-jitter.js Added new script to measure network latency and jitter with repeated HTTPS requests and reporting.
src/index.js Updated /info and /version endpoints to read version metadata from environment variables with fallback to constants; removed dynamic env detection logic; enhanced /speed endpoint and rate limiting logic.
jest.setup.js, test/worker.test.js Removed mocking of import.meta.env from the Jest setup and tests; the global env mock in the tests is retained but unused.

Possibly related PRs

  • fix: update deploy.yml #18: Both PRs modify environment variables scoped to the "Deploy to Cloudflare Workers" step in the deployment workflow.
  • Feature/network probe implementation #7: This PR builds upon initial environment variable handling and deployment workflow changes introduced in that PR, modifying the same files and version metadata usage.

Poem

The deploy script hops anew,
With secrets tucked away from view.
Logs now sparkle, shining bright,
And tokens join the production night.
With every build, a versioned cheer—
The rabbit grins: deployment’s here!
🐇✨

Signed-off-by: Ionut I <git@iops.pro>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0eb3a75 and f8c6504.

📒 Files selected for processing (2)
  • .github/workflows/deploy.yml (1 hunks)
  • wrangler.toml.template (1 hunks)
🔇 Additional comments (1)
wrangler.toml.template (1)

27-29: Review observability logs table placement
The [observability.logs] block is currently defined at the top level. If you intend to enable logs only in production, consider moving it under the production environment ([env.production.observability.logs]) or verify via Wrangler’s documentation that root‐level placement is supported.

binding = "RATE_LIMIT_KV"
id = "${RATE_LIMIT_KV_ID}"

API_PROBE_TOKEN = "${API_PROBE_TOKEN}"

⚠️ Potential issue

Ensure placeholder substitution for API_PROBE_TOKEN
The new API_PROBE_TOKEN = "${API_PROBE_TOKEN}" entry won’t be replaced during templating because the deploy workflow’s sed commands don’t handle this variable. You’ll end up with an unresolved placeholder in the generated wrangler.toml.

🤖 Prompt for AI Agents
In wrangler.toml.template at line 25, the placeholder ${API_PROBE_TOKEN} is
added but the deploy workflow's sed commands do not substitute this variable,
leaving it unresolved in the final wrangler.toml. Update the deploy workflow's
sed or templating script to include a replacement rule for API_PROBE_TOKEN so
that the placeholder is correctly replaced with the actual token value during
deployment.

ionutz89 added 2 commits June 20, 2025 15:22
- Add measure-jitter.js script with emoji and color output
- Update wrangler.toml.template with API_PROBE_TOKEN
- Improve script output formatting and readability

Signed-off-by: Ionut Iorgu <git@h-all.co>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
scripts/measure-jitter.js (2)

1-21: LGTM! Clean setup and configuration.

The imports, color utilities, and configuration are well-structured. The hardcoded endpoint and parameters work fine for this specific tool.

Consider making the configuration values configurable via command-line arguments or environment variables for broader usability:

+const ENDPOINT = process.env.PING_ENDPOINT || 'https://cb.apigw.io/ping';
+const REQUESTS = parseInt(process.env.REQUESTS) || 20;
+const INTERVAL = parseInt(process.env.INTERVAL) || 200;
-const ENDPOINT = 'https://cb.apigw.io/ping';
-const REQUESTS = 20; // Number of requests to make
-const INTERVAL = 200; // ms between requests

28-32: Consider simplifying readline usage.

The readline interface is created but minimally used - progress updates are handled via direct stdout manipulation rather than the readline interface.

If you're not using readline features, you could simplify by removing it:

-// Create readline interface for progress updates
-const rl = createInterface({
-  input: stdin,
-  output: stdout
-});

And remove the corresponding cleanup in the main function:

-  // Close readline
-  rl.close();
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f8c6504 and 0206e8d.

📒 Files selected for processing (2)
  • scripts/measure-jitter.js (1 hunks)
  • wrangler.toml.template (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • wrangler.toml.template
🧰 Additional context used
🪛 GitHub Check: Deploy
scripts/measure-jitter.js

[failure] 25-25:
'completed' is assigned a value but never used

🪛 ESLint
scripts/measure-jitter.js

[error] 25-25: 'completed' is assigned a value but never used.

(no-unused-vars)

🪛 GitHub Actions: Deploy to Cloudflare Workers
scripts/measure-jitter.js

[error] 25-25: ESLint: 'completed' is assigned a value but never used. (no-unused-vars)

🔇 Additional comments (6)
scripts/measure-jitter.js (6)

34-71: LGTM! Robust request implementation.

The function correctly implements high-resolution timing and handles both successful responses and errors gracefully. The dual timing approach (request latency via hrtime and TCP RTT from API response) provides comprehensive network metrics.
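As a rough, hypothetical sketch of the request-latency half of that dual timing (not the script's actual code; endpoint handling and names are placeholders):

// Node.js sketch: time one HTTPS request with process.hrtime.bigint().
import { request } from 'node:https';

function timeRequest(url) {
  return new Promise((resolve, reject) => {
    const start = process.hrtime.bigint();
    const req = request(url, (res) => {
      res.on('data', () => {}); // drain the body
      res.on('end', () => {
        const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
        resolve(elapsedMs); // request latency in milliseconds
      });
    });
    req.on('error', reject);
    req.end();
  });
}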


73-95: LGTM! Mathematically sound statistics calculation.

The jitter calculation using TCP RTT standard deviation is appropriate, and the function correctly handles the edge case of no successful requests.
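For reference, a standard-deviation-based jitter estimate over RTT samples could look roughly like this (names are illustrative, not the script's code):

// samples: array of successful round-trip times in milliseconds.
function jitterStats(samples) {
  if (samples.length === 0) return null; // edge case: no successful requests
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  const variance = samples.reduce((acc, v) => acc + (v - mean) ** 2, 0) / samples.length;
  return { mean, jitter: Math.sqrt(variance) };
}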


97-131: LGTM! Excellent user-friendly output formatting.

The colorized output with clear verdicts and helpful interpretation guidance provides an excellent user experience. The jitter thresholds are reasonable for categorizing connection quality.


133-154: LGTM! Clean orchestration and progress handling.

The async/await flow is clean and the progress indication provides good user feedback. The output cleanup before showing results ensures a professional presentation.


156-161: LGTM! Graceful interruption handling.

The SIGINT handler appropriately shows partial results when the test is interrupted, providing value even when not completing the full test suite.
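A sketch of what such an interruption handler typically looks like; `samples` stands in for the script's own results array:

// Print whatever was collected so far, then exit with the conventional SIGINT code.
const samples = [];

process.on('SIGINT', () => {
  console.log(`\nInterrupted after ${samples.length} requests; partial results:`);
  console.log(samples);
  process.exit(130); // 128 + SIGINT(2)
});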


163-164: LGTM! Clean script execution.

The main execution with error handling is appropriate for the script's purpose.

ionutz89 and others added 11 commits June 20, 2025 15:27
Signed-off-by: Ionut Iorgu <git@h-all.co>
Signed-off-by: Ionut Iorgu <git@h-all.co>
Signed-off-by: Ionut Iorgu <git@h-all.co>
Signed-off-by: Ionut Iorgu <git@h-all.co>
* fix env vars

* Use envsubst for wrangler.toml

* fix: load vars from wrangler env
Signed-off-by: Ionut Iorgu <git@h-all.co>
… logic

Signed-off-by: Ionut Iorgu <git@h-all.co>
… limit call

Signed-off-by: Ionut Iorgu <git@h-all.co>
Signed-off-by: Ionut Iorgu <git@h-all.co>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d425b62 and eaada2f.

📒 Files selected for processing (1)
  • src/index.js (10 hunks)
🔇 Additional comments (4)
src/index.js (4)

42-48: LGTM: Binary data handling is well implemented.

The binary data detection logic correctly identifies ArrayBuffer and TypedArray objects and sets appropriate content-type headers.
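For illustration, detection of this sort can be done with a small helper like the one below (helper name and header choices are assumptions, not the PR's exact code):

// Covers ArrayBuffer plus all TypedArray/DataView instances.
function contentTypeFor(body) {
  const isBinary = body instanceof ArrayBuffer || ArrayBuffer.isView(body);
  return isBinary ? 'application/octet-stream' : 'application/json; charset=utf-8';
}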


175-177: LGTM: Environment variable usage with proper fallbacks.

The implementation correctly reads version metadata from environment variables with appropriate fallback to default constants.

Also applies to: 208-210


234-264: LGTM: Well-implemented /speed endpoint with robust validation.

The expanded /speed endpoint includes:

  • Proper parameter validation with size limits
  • Multiple pattern options (asterisk, zero, rand)
  • Appropriate error responses for invalid inputs
  • Correct binary buffer generation and response handling

The implementation is secure and follows best practices.
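As a hedged sketch of the kind of validation and buffer generation described above (parameter names, limits, and defaults are assumptions, not the endpoint's exact code):

const MAX_BYTES = 10 * 1024 * 1024; // assumed upper bound for this sketch

function buildSpeedPayload(url) {
  const size = parseInt(url.searchParams.get('bytes') || '1048576', 10);
  const pattern = url.searchParams.get('pattern') || 'asterisk';

  if (!Number.isFinite(size) || size <= 0 || size > MAX_BYTES) {
    return { error: 'invalid size' };
  }

  const buf = new Uint8Array(size); // zero-filled by default, which covers the 'zero' pattern
  if (pattern === 'rand') {
    // crypto.getRandomValues caps each call at 65536 bytes, so fill in chunks.
    for (let off = 0; off < size; off += 65536) {
      crypto.getRandomValues(buf.subarray(off, Math.min(off + 65536, size)));
    }
  } else if (pattern === 'asterisk') {
    buf.fill(42); // ASCII '*'
  }
  return { body: buf };
}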


99-99: Verify rate limit threshold logic.

Ensure the condition next > RATE_LIMIT aligns with your intended rate limiting behavior. Currently, this allows exactly RATE_LIMIT requests before blocking.

#!/bin/bash
# Verify rate limiting configuration and usage patterns
rg -A 5 -B 5 "RATE_LIMIT.*=" src/
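To make that boundary concrete, a tiny illustration of the `next > RATE_LIMIT` check (assumed limit value):

const RATE_LIMIT = 3;
for (let next = 1; next <= 4; next++) {
  // Requests 1..3 pass; request 4 is the first one blocked.
  console.log(next, next > RATE_LIMIT ? 'blocked' : 'allowed');
}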

Comment on lines +95 to +97
// Fetch current count (stored as string). Default to 0 if not set.
const current = parseInt(await env.RATE_LIMIT_KV.get(key) || '0', 10);
const next = current + 1;

⚠️ Potential issue

Critical: Rate limiting has a race condition.

The change from atomic increment to fetch-parse-increment introduces a race condition where concurrent requests can bypass the rate limit by reading the same count value simultaneously.

Apply this diff to restore atomic behavior:

-  // Fetch current count (stored as string). Default to 0 if not set.
-  const current = parseInt(await env.RATE_LIMIT_KV.get(key) || '0', 10);
-  const next = current + 1;
-
-  if (next > RATE_LIMIT) {
+  // Use atomic increment to avoid race conditions
+  const count = await env.RATE_LIMIT_KV.get(key, { type: 'json' }) || 0;
+  
+  if (count >= RATE_LIMIT) {
     return {
       error: true,
       response: createSecureResponse(
@@ -108,8 +105,8 @@
     };
   }
 
-  // Store the updated count with a TTL so it resets automatically.
-  await env.RATE_LIMIT_KV.put(key, next.toString(), { expirationTtl: RATE_LIMIT_WINDOW });
+  // Atomic increment with TTL
+  await env.RATE_LIMIT_KV.put(key, count + 1, { expirationTtl: RATE_LIMIT_WINDOW });

Or use a more robust atomic approach if KV supports it.

Also applies to: 109-110

🤖 Prompt for AI Agents
In src/index.js around lines 95-97 and 109-110, the current rate limiting logic
fetches the count, parses it, and increments it separately, causing a race
condition with concurrent requests. To fix this, replace the
fetch-parse-increment sequence with an atomic increment operation provided by
the KV store, or implement a transaction or locking mechanism if supported,
ensuring that increments happen atomically to prevent race conditions.
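If Workers KV cannot provide a true atomic increment, one alternative sometimes used on Cloudflare is a Durable Object counter. The sketch below is hypothetical and not part of this PR; class and key names are made up:

export class RateLimitCounter {
  constructor(state) {
    this.state = state;
  }
  async fetch() {
    // Durable Object input gates keep other events from interleaving while a
    // storage operation is in flight, so this read-modify-write does not race.
    let count = (await this.state.storage.get('count')) || 0;
    count += 1;
    await this.state.storage.put('count', count);
    return new Response(String(count));
  }
}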

 * Main worker fetch handler
 */
-const workerFetch = async (request, env, ctx) => {
+const workerFetch = async (request, env) => {

⚠️ Potential issue

Critical: Function signature mismatch with event listener.

The workerFetch function signature was changed to remove the ctx parameter, but the event listener on line 303 still passes three arguments including event.ctx.

Apply this diff to fix the event listener:

 if (typeof addEventListener !== 'undefined') {
   addEventListener('fetch', event => {
-    event.respondWith(workerFetch(event.request, event.env, event.ctx))
+    event.respondWith(workerFetch(event.request, event.env))
   });
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-const workerFetch = async (request, env) => {
+if (typeof addEventListener !== 'undefined') {
+  addEventListener('fetch', event => {
+    event.respondWith(workerFetch(event.request, event.env))
+  });
+}
🤖 Prompt for AI Agents
In src/index.js at line 138, the workerFetch function signature was changed to
accept only two parameters (request, env), but the event listener at line 303
still calls it with three arguments including event.ctx. To fix this, update the
event listener to call workerFetch with only the request and env arguments,
removing the third ctx argument to match the function signature.

Comment on lines +1 to +5
// Default values used when environment variables are not provided
const DEFAULT_VERSION = 'v1.0.0';
const DEFAULT_GIT_COMMIT = 'abcdef0';
// YYYY-MM-DD format
const DEFAULT_BUILD_TIME = new Date().toISOString().split('T')[0];

⚠️ Potential issue

Fix build time constant to use actual build time.

The DEFAULT_BUILD_TIME constant is evaluated at module load time rather than actual build time, which defeats its purpose as a build-time constant.

Apply this diff to use a static build time:

-// YYYY-MM-DD format
-const DEFAULT_BUILD_TIME = new Date().toISOString().split('T')[0];
+// YYYY-MM-DD format - this should be replaced during build
+const DEFAULT_BUILD_TIME = 'unknown';

Alternatively, if you want a more meaningful default, use a fixed date:

-const DEFAULT_BUILD_TIME = new Date().toISOString().split('T')[0];
+const DEFAULT_BUILD_TIME = '1970-01-01';
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-// Default values used when environment variables are not provided
-const DEFAULT_VERSION = 'v1.0.0';
-const DEFAULT_GIT_COMMIT = 'abcdef0';
-// YYYY-MM-DD format
-const DEFAULT_BUILD_TIME = new Date().toISOString().split('T')[0];
+// Default values used when environment variables are not provided
+const DEFAULT_VERSION = 'v1.0.0';
+const DEFAULT_GIT_COMMIT = 'abcdef0';
+// YYYY-MM-DD format - this should be replaced during build
+const DEFAULT_BUILD_TIME = 'unknown';
🤖 Prompt for AI Agents
In src/index.js lines 1 to 5, the DEFAULT_BUILD_TIME is set using new Date() at
module load time, which does not reflect the actual build time. To fix this,
replace the dynamic date assignment with a static string representing the build
date, either by hardcoding a fixed date or by injecting the build time during
the build process so that DEFAULT_BUILD_TIME holds a constant value reflecting
the actual build time.

ionutz89 added 2 commits June 20, 2025 17:25
Signed-off-by: Ionut Iorgu <git@h-all.co>
ionutz89 added 3 commits June 20, 2025 17:30
…declarations

Signed-off-by: Ionut Iorgu <git@h-all.co>
Signed-off-by: Ionut Iorgu <git@h-all.co>
Signed-off-by: Ionut Iorgu <git@h-all.co>
@ionutz89 ionutz89 merged commit 34391c1 into main Jun 20, 2025
2 of 3 checks passed
@ionutz89 ionutz89 deleted the fix-worker-configuration branch June 20, 2025 14:47