-
-
Notifications
You must be signed in to change notification settings - Fork 90
feat(cursor): add ConnectRPC protobuf encoder/decoder and HTTP/2 executor #525
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Conversation
- Split protobuf into 3 files: schema, encoder, decoder for <200 LOC constraint - Implement ConnectRPC wire format encoding (varint, fields, messages, frames) - Implement protobuf decoder with gzip decompression support - Port Cursor executor with HTTP/2 support and checksum generation (Jyh cipher) - Add OpenAI to Cursor message translator (system→user, tool results handling) - Transform protobuf responses to both SSE and JSON formats - Use Node.js built-in crypto/zlib (no uuid dependency) - All files TypeScript strict mode compliant
CRITICAL FIX: - CursorCredentials interface now matches types.ts (machineId, ghostMode as top-level) - Fixes runtime error when cursor-auth saves credentials and cursor-executor reads them HIGH: - Replace 18+ non-null assertions with guard clauses across executor and decoder - Prefix unused params in translator (_model, _stream, _credentials) - HTTP/2 client closes on connection error to prevent leak - AbortSignal listener leak documented with TODO (inline arrow prevents cleanup) MEDIUM: - Export concatArrays from encoder, remove duplicate from protobuf.ts - Varint decoder now enforces 5-byte max to prevent overflow - Buffer slice bounds check prevents out-of-range read - Empty messages array validation with explicit error - Buffered streaming limitation documented with TODO comment All edge cases from code review now addressed.
Code Review: PR #525 —
|
| Check | Status | Notes |
|---|---|---|
| Injection (command/SQL/XSS) | OK | No shell commands or DOM interaction |
| Auth token handling | OK | Token passed via header, not logged |
| Secrets in source | OK | No hardcoded secrets |
| Input validation | WARN | No message content size limits before protobuf encoding |
| SSRF risk | OK | baseUrl is hardcoded to api2.cursor.sh |
| Dependency security | OK | Uses only Node.js built-ins (crypto, zlib, http2) |
| Error message leakage | OK | Errors are wrapped in generic OpenAI-format responses |
Code Quality Checklist
| Check | Status | Notes |
|---|---|---|
| TypeScript strict mode | OK | All files compile under strict |
No any types |
OK | Uses unknown where needed |
No non-null assertions (!) |
OK | Replaced with guard clauses in fix commit |
prefer-const / no-var |
OK | Consistent const/let usage |
Unused variables prefixed _ |
OK | _body, _model, _stream, _credentials |
| Conventional commit format | OK | feat(cursor): and fix(cursor): |
| No emojis in CLI output | OK | No CLI output in these modules |
| Cross-platform | OK | Uses process.platform/process.arch for OS detection |
| Test coverage | FAIL | No tests for any of the 6 new files |
| DRY compliance | WARN | COMPRESS_FLAG duplicated with different values |
Recommendations
| Priority | Issue | Action |
|---|---|---|
| HIGH | No tests | Add unit tests for encoder/decoder roundtrips, frame parsing, message translation, and checksum generation |
| HIGH | FIXED64/FIXED32 missing bounds check | Add buffer length validation before slicing |
| HIGH | Bitwise shift overflow in checksum | Add comment documenting 32-bit limitation or use BigInt / Number arithmetic |
| MEDIUM | COMPRESS_FLAG duplication |
Consolidate into schema or differentiate naming |
| MEDIUM | Token split edge case | Use slice + indexOf instead of split |
| MEDIUM | AbortSignal listener leak | Store handler reference and remove on completion |
| LOW | Field number namespace collision risk | Consider grouping or documenting per-message scope more explicitly |
| LOW | SSE chunk building duplication | Extract helper function for building SSE data lines |
| LOW | Synchronous gzip | Convert to async when streaming is implemented |
Overall Assessment
APPROVED WITH NOTES
The architecture is clean and well-decomposed. The fix commit addresses many edge cases raised in prior review. However, the complete absence of tests for binary protocol handling code is a significant risk — protobuf off-by-one errors and varint edge cases are exactly the kind of bugs that only surface in production. The FIXED64/FIXED32 bounds check gap and the misleading bitwise shifts in generateChecksum should be addressed before this code handles real traffic.
I recommend merging to dev for integration testing but strongly suggest adding unit tests as a fast-follow before promoting to main.
Reviewed by
gemini-claude-opus-4-6-thinking
- Fix bitwise shift overflow in generateChecksum: use Math.trunc
division for >>40/>>32 which wrap modulo 32 in JS
- Add FIXED64/FIXED32 bounds checks in protobuf decoder to prevent
out-of-bounds slice on truncated buffers
- Consolidate COMPRESS_FLAG to single definition in schema (DRY):
executor now imports from schema, added GZIP_ALT/GZIP_BOTH values
- Fix token split edge case: use indexOf+slice instead of split('::')[1]
to handle tokens containing multiple :: delimiters
- Fix AbortSignal listener leak: store handler ref, use once:true,
remove listener on request end/error
Code Review: PR #525 —
|
| Value | Keys |
|---|---|
1 |
REQUEST, MESSAGES, MSG_CONTENT, TOOL_RESULT_CALL_ID, MODEL_NAME, INSTRUCTION_TEXT, SETTING_PATH, META_PLATFORM, MSGID_ID, MCP_TOOL_NAME, TOOL_CALL, MCP_TOOLS_LIST, MCP_NESTED_NAME, RESPONSE_TEXT, THINKING_TEXT, SETTING6_FIELD_1 |
2 |
UNKNOWN_2, MSG_ROLE, TOOL_RESULT_NAME, RESPONSE, MODEL_EMPTY, ... |
This works correctly because each field number is used only within its message context, but it makes the code fragile — a developer could easily use FIELD.REQUEST when they meant FIELD.MESSAGES and get the same result by accident. Consider grouping by message type:
export const FIELD = {
TopLevel: { REQUEST: 1 },
ChatRequest: { MESSAGES: 1, UNKNOWN_2: 2, INSTRUCTION: 3, ... },
Message: { CONTENT: 1, ROLE: 2, ID: 13, ... },
// ...
} as const;🟡 P1 — Hardcoded values that will go stale
| Location | Value | Risk |
|---|---|---|
cursor-executor.ts:160 |
https://api2.cursor.sh |
Cannot be overridden for testing or alternative endpoints |
cursor-executor.ts:232 |
2.3.41 (client version) |
Will become outdated; Cursor may reject old versions |
cursor-executor.ts:228 |
connect-es/1.6.1 (user-agent) |
Same concern |
Suggestion: Make these configurable via constructor params or constants that can be updated in one place.
🟡 P1 — Silent error swallowing
Multiple catch blocks silently discard errors with only // Continue comments:
cursor-executor.ts:107-109— JSON parse failure during decompressioncursor-executor.ts:456-458— JSON parse failure during protobuf-to-JSON transformcursor-executor.ts:611-613— JSON parse failure during SSE transformcursor-protobuf-decoder.ts:70-72— Thinking parse errorcursor-protobuf-decoder.ts:107-109— MCP parse errorcursor-executor.ts:75-77— EdgeRuntime detection
For a protocol decoder, silently dropping data can cause hard-to-diagnose bugs. At minimum, consider logging at debug level (the project has CCS_DEBUG support).
🟡 P2 — stream !== false comparison is semantically loose
cursor-executor.ts:393:
const transformedResponse =
stream !== false
? this.transformProtobufToSSE(response.body, model, body)
: this.transformProtobufToJSON(response.body, model, body);This means stream: undefined is treated as true (streaming). Given stream is typed as boolean in ExecutorParams, this should be a simple truthiness check (stream ? ... : ...) or strict equality (stream === true).
🟢 P3 — Deprecated Buffer.slice
cursor-executor.ts:446,601 and cursor-protobuf-decoder.ts:55,62,67,120 use buffer.slice() which creates a copy. In Node.js 17+, Buffer.slice is deprecated in favor of buffer.subarray() (which shares memory). Since the code already uses Uint8Array.slice in some places and buffer.slice in others, standardize on subarray for consistency and to avoid deprecation warnings.
🟢 P3 — Unused exported type FieldNumber
cursor-protobuf-schema.ts:147 exports FieldNumber but it is never imported anywhere in the codebase. With noUnusedLocals: true in tsconfig, this may cause a typecheck error if not consumed (exports are exempt, but it's still dead code).
🟢 P3 — Redundant default export
cursor-executor.ts:785 — export default CursorExecutor is redundant since the class is already a named export (export class CursorExecutor). The codebase generally uses named exports (per exploration); only router files and a few legacy modules use default exports.
🟢 P3 — isCloudEnv() may be unnecessary
cursor-executor.ts:53-79 — Checks for Cloudflare Workers (caches in globalThis) and EdgeRuntime. This is a Node.js CLI tool — when would it run in a cloud edge environment? If this is defensive code for potential future use, consider removing it per YAGNI.
🔒 Security Considerations
| Check | Status | Notes |
|---|---|---|
| No hardcoded secrets | ✅ | Tokens passed at runtime via credentials parameter |
| No command injection | ✅ | No shell commands executed |
| Token handling | ✅ | accessToken is extracted and passed via HTTPS headers only |
No eval() or dynamic code |
✅ | Clean |
| Input validation | machineId is validated as non-empty (cursor-executor.ts:216-218), but accessToken is not validated before use |
|
| Buffer parsing bounds checks | ✅ | Varint decoder, field decoder, and frame parser all check buffer bounds |
| HTTPS only | ✅ | baseUrl hardcoded to https:// |
| No sensitive data in logs | ✅ | No logging of tokens or headers |
| Compression bomb risk | zlib.gunzipSync (cursor-executor.ts:118, cursor-protobuf-decoder.ts:125) has no size limit on decompressed output |
📊 Code Quality Checklist
| Check | Status | Notes |
|---|---|---|
| TypeScript strict mode compliance | ✅ | No any, no ! assertions |
| ESLint compliance | ✅ | Unused params prefixed with _ |
| Conventional commit format | ✅ | feat(cursor): ... |
| No emojis in CLI output | ✅ | No CLI output in this PR |
| ASCII status markers | N/A | No terminal output |
| Cross-platform compatibility | ✅ | process.platform used for OS detection |
| Test coverage | ❌ | Zero tests for 1,861 LOC |
| Help text updated | N/A | No new CLI commands |
| Documentation updated | PR description is thorough, no docs/ changes |
💡 Recommendations
🔴 High Priority
- Add unit tests — At minimum: varint round-trip, protobuf encode/decode, ConnectRPC frame parsing, message translation, checksum generation, and header building. This is 1,861 LOC of binary protocol handling with no test coverage.
- Address buffered streaming — Either implement true streaming via
TransformStreamor document this as a known limitation and create a tracking issue. Users will experience the full response latency before seeing any output.
🟡 Medium Priority
- Export
CursorCredentialsfrom the schema file and import it in the executor to unify the type as claimed in the PR description. - Make hardcoded values configurable —
baseUrl,clientVersion, anduserAgentshould be constructor parameters or class constants for easier updates. - Add debug-level logging in catch blocks instead of silent swallowing, using the project's existing
CCS_DEBUGpattern.
🟢 Low Priority
- Replace
Buffer.slice()withBuffer.subarray()to avoid deprecation warnings. - Remove unused
FieldNumberexport and redundantexport default. - Consider removing
isCloudEnv()per YAGNI unless there's a concrete use case.
🎯 Overall Assessment
⚠️ APPROVED WITH NOTES
The implementation is architecturally sound with good separation of concerns and careful buffer handling. The protobuf codec is well-implemented with proper bounds checking. However, shipping 1,861 lines of binary protocol code with zero tests is a significant risk — protobuf encoding bugs are notoriously hard to debug in production. The fake streaming (full buffering before SSE output) is also a functional gap that should be explicitly tracked.
Recommended action: Merge after adding at least encoder/decoder round-trip tests and creating a follow-up issue for true streaming support.
🤖 Reviewed by
gemini-claude-opus-4-6-thinking
HIGH Priority: - Add comprehensive unit tests (27 tests covering encoder, decoder, translator, executor) * encodeVarint/decodeVarint round-trip (0, 1, 127, 128, 16383, 0xFFFFFFFF) * encodeField/decodeField round-trip (VARINT, LEN string, LEN binary) * wrapConnectRPCFrame/parseConnectRPCFrame (compressed/uncompressed) * buildCursorRequest message translation (system, user, assistant, tool) * generateChecksum header format validation * buildHeaders output validation * transformProtobufToJSON basic conversion - Create GitHub issue #531 for true streaming implementation - Update TODO comment to reference issue #531 MEDIUM Priority: - Export CursorCredentials from cursor-protobuf-schema.ts - Add JSDoc grouping comments to FIELD constant for clarity - Make hardcoded values configurable (CURSOR_CLIENT_VERSION, CURSOR_USER_AGENT) - Add debug logging to 9 silent catch blocks (respects CCS_DEBUG env var) - Fix stream check: stream !== false → stream === true Bug Fixes: - Fix decodeVarint to return unsigned values (>>> 0) - Fix test assertion for Response.text() async API
|
/review |
Code Review: PR #525 — feat(cursor): add ConnectRPC protobuf encoder/decoder and HTTP/2 executor📋 SummaryThis PR introduces a complete Cursor API integration layer with ConnectRPC binary protobuf encoding/decoding, HTTP/2 transport with fallback to fetch, OpenAI-compatible request/response translation, and Jyh Cipher checksum header generation. The module is well-structured across 6 source files with clean separation of concerns, following the class-based pattern established by ✅ Strengths
🔍 Observations & Suggestions🔴 HIGH — Significant code duplication between JSON and SSE transformers
File: Suggestion: Extract the shared buffer parsing loop into a private method that yields parsed results, then have each transformer format the output: private *parseProtobufFrames(buffer: Buffer): Generator<{
text: string | null;
error: string | null;
toolCall: ToolCallResult | null;
}> {
let offset = 0;
while (offset < buffer.length) {
if (offset + 5 > buffer.length) break;
const flags = buffer[offset];
const length = buffer.readUInt32BE(offset + 1);
if (offset + 5 + length > buffer.length) break;
let payload = buffer.slice(offset + 5, offset + 5 + length);
offset += 5 + length;
payload = decompressPayload(payload, flags);
// ... JSON error check, extractTextFromResponse ...
yield result;
}
}🔴 HIGH —
|
| Component | Tested | Missing |
|---|---|---|
| Varint encode/decode | ✅ | — |
| Field encode/decode | ✅ | — |
| ConnectRPC frame | ✅ | — |
| Message translation | ✅ | Unknown role handling |
transformProtobufToJSON |
✅ (basic) | Error responses, tool calls |
transformProtobufToSSE |
❌ | Entire function untested |
decompressPayload |
❌ | Gzip path, error fallback |
encodeRequest / generateCursorBody |
❌ | Full request construction |
extractToolCall |
❌ | Tool call decoding |
execute |
❌ | Integration flow |
| Malformed input | ❌ | Truncated buffers, corrupt proto |
🟢 LOW — Misleading test for checksum time variance
The test at cursor-protobuf.test.ts:300-314 waits 10ms then checks if checksums differ. The checksum timestamp granularity is Math.floor(Date.now() / 1000000) ≈ 16.7 minutes. A 10ms wait will almost never produce a different timestamp. The test passes because it doesn't actually assert inequality, but the name "should generate different checksums over time" and comment are misleading.
File: tests/unit/cursor/cursor-protobuf.test.ts:300-314
Suggestion: Either increase the description accuracy ("should generate valid checksums at different times") or remove the test in favor of a deterministic one that mocks Date.now.
🟢 LOW — extractTextFromResponse swallows all exceptions
At cursor-protobuf-decoder.ts:326-330, all parse errors are caught and return {text: null, error: null, ...}. This makes debugging protobuf decoding issues difficult even in development — a corrupted response is indistinguishable from an empty one.
File: cursor-protobuf-decoder.ts:326-331
🟢 LOW — Debug log in SSE transformer has wrong function name
At cursor-executor.ts:611, the error message says transformProtobufToJSON but the code is in transformProtobufToSSE.
File: cursor-executor.ts:611
// Current (wrong):
console.error('[cursor] transformProtobufToJSON error parsing failed:', err);
// Should be:
console.error('[cursor] transformProtobufToSSE error parsing failed:', err);🟢 LOW — TOOL_RESULT_RESULT defined but never used
FIELD.TOOL_RESULT_RESULT = 8 is defined in the schema but never referenced in encoder or decoder. If it's needed for future work, a comment explaining its purpose would help.
File: cursor-protobuf-schema.ts:78
🔒 Security Considerations
| Check | Status | Notes |
|---|---|---|
| No credential logging | ✅ | Tokens never logged, even in debug mode |
| Token sanitization | ✅ | :: prefix stripped from access tokens (cursor-executor.ts:214-215) |
| SHA-256 hashing for client key | ✅ | Token not sent in plaintext as client key (cursor-executor.ts:224) |
| HTTPS only | ✅ | baseUrl is hardcoded to https:// (cursor-executor.ts:152) |
| No command injection | ✅ | No shell commands or user input interpolation |
| Input validation | ✅ | machineId required check (cursor-executor.ts:210-212), empty messages check (cursor-protobuf.ts:49-51) |
| Buffer bounds checking | ✅ | All buffer reads check offset + N > buffer.length before access |
| Metadata exposure | process.cwd(), process.version, process.platform sent to API (cursor-protobuf-encoder.ts:166-170) — standard for this type of API client but worth documenting |
📊 Code Quality Checklist
| Criterion | Status | Notes |
|---|---|---|
| TypeScript strict mode | ✅ | No any, proper type narrowing |
| No non-null assertions in src | ✅ | Guard clauses used throughout |
| ESLint compliance | ✅ | Based on passing validation |
| Conventional commit format | ✅ | feat(cursor): ... |
| ASCII-only CLI output | ✅ | [cursor] prefix, no emojis |
| Cross-platform | ✅ | Platform detection for OS headers, no OS-specific paths |
| Error handling | ✅ | All error paths produce valid JSON responses |
| HTTP/2 resource cleanup | ✅ | client.close() on all paths |
| DRY | JSON/SSE transformers share ~60% code | |
| Test coverage | Core encode/decode covered, SSE/tool calls/error paths missing |
💡 Recommendations
| Priority | Item | Effort |
|---|---|---|
| 🔴 High | Extract shared buffer parsing from JSON/SSE transformers | Moderate |
| 🔴 High | Use COMPRESS_FLAG constants in parseConnectRPCFrame |
Trivial |
| 🟡 Medium | Fix silent data corruption on gzip failure — return empty buffer | Trivial |
| 🟡 Medium | Fix copy-paste debug log name in SSE transformer (line 611) | Trivial |
| 🟡 Medium | Add tests for transformProtobufToSSE and error response paths |
Moderate |
| 🟡 Medium | Add debug warning for unknown message roles in translator | Trivial |
| 🟢 Low | Track FIELD namespace refactoring as follow-up issue | — |
| 🟢 Low | Fix misleading checksum time-variance test | Trivial |
| 🟢 Low | Document or remove unused TOOL_RESULT_RESULT field |
Trivial |
🎯 Overall Assessment
⚠️ APPROVED WITH NOTES
The module is architecturally sound with good separation of concerns, proper security practices, and solid encode/decode test coverage. The critical issues are:
- The debug log copy-paste error (
transformProtobufToJSONin the SSE function) — trivial fix. - Hardcoded compression flags in
parseConnectRPCFrame— trivial fix. - DRY violation in the two transform methods — worth addressing before the module grows further.
None of these are blocking for merge, but the first two should ideally be fixed before merging. The DRY refactoring and additional test coverage can be tracked as follow-up work.
🤖 Reviewed by
claude-opus-4-6-thinking
HIGH PRIORITY FIXES: - Extract shared buffer parsing logic into parseProtobufFrames generator method (DRY violation fix) - both JSON and SSE transformers now use common frame parsing loop, eliminating ~60% code duplication - Use COMPRESS_FLAG constants instead of hardcoded 0x01/0x02/0x03 in parseConnectRPCFrame for better maintainability MEDIUM PRIORITY FIXES: - Return empty buffer on gzip decompression failure (prevents silent data corruption) - ALREADY FIXED - Add debug warning for unknown message roles in convertMessages - Create GitHub issue #535 for FIELD namespace refactoring follow-up - Add test coverage: transformProtobufToSSE, error paths, unknown roles LOW PRIORITY FIXES: - Rename checksum test to clarify timestamp granularity (~16 min) - Fix debug log function name in SSE transformer - FIXED BY REFACTOR - Add comment to TOOL_RESULT_RESULT field documenting future use All tests pass (1593 pass, 0 fail) All validation checks pass (typecheck + lint + format + tests)
Code Review: PR #525 —
|
| Component | Status | Notes |
|---|---|---|
| Varint encode/decode | Covered | 6 boundary values |
| Field encode/decode | Covered | VARINT, LEN (string + binary) |
| ConnectRPC framing | Covered | Compressed, uncompressed, incomplete |
| Message translation | Covered | 5 scenarios |
| Checksum generation | Covered | Format + determinism |
| Header building | Covered | Required headers, token parsing, ghostMode, missing machineId |
transformProtobufToJSON |
Covered | Text + error |
transformProtobufToSSE |
Partially | Has duplicated/broken tests (see items 1-3) |
encodeRequest / generateCursorBody |
Missing | Full request construction |
extractToolCall (decoder) |
Missing | Tool call decoding |
convertMessages edge cases |
Missing | Trailing tool results, empty content |
| Malformed/truncated input | Missing | Corrupt protobuf, truncated frames |
parseProtobufFrames generator |
Missing | Multi-frame buffers |
Security Considerations
| Check | Status | Notes |
|---|---|---|
| No hardcoded secrets | OK | Tokens passed at runtime |
| No command injection | OK | No shell commands |
| Token handling | OK | SHA-256 hashed for x-client-key, not logged |
| HTTPS only | OK | baseUrl hardcoded to https:// |
| Buffer bounds checking | OK | All reads check offset + N > buffer.length |
No eval() / dynamic code |
OK | Clean |
| Compression bomb risk | WARN | zlib.gunzipSync has no decompressed size limit |
| Metadata exposure | INFO | process.cwd(), process.version sent to API — standard for API clients |
| Error message leakage | OK | Errors wrapped in generic OpenAI format |
| Input validation | OK | machineId required, empty messages throw |
Code Quality Checklist
| Criterion | Status | Notes |
|---|---|---|
| TypeScript strict mode | OK | No any, proper type narrowing |
| No non-null assertions | OK | Guard clauses throughout |
| ESLint compliance | OK | Unused params prefixed _ |
| Conventional commit format | OK | feat(cursor):, fix(cursor): |
| ASCII-only CLI output | OK | [cursor] prefix for debug, no emojis |
| Cross-platform | OK | process.platform/process.arch for headers |
| DRY | OK | parseProtobufFrames generator extracts shared logic |
| YAGNI | WARN | isCloudEnv() for CLI tool |
| Test correctness | FAIL | Two tests use wrong wrapConnectRPCFrame call signature |
| Test coverage | WARN | Core codec covered; SSE, tool calls, edge cases missing |
Recommendations
| Priority | Issue | Action |
|---|---|---|
| HIGH | Test bug: wrapConnectRPCFrame called with object instead of boolean |
Fix lines 497-499 and 527-529 to pass false instead of { compress: false } |
| MEDIUM | System message content array not handled | Add type narrowing for array content in convertMessages system branch |
| MEDIUM | Duplicate/broken SSE test describe blocks | Merge the two transformProtobufToSSE blocks, remove the broken tests |
| MEDIUM | Hardcoded 429 for all protobuf errors | Infer error type from message content or add an error code field |
| MEDIUM | Add tests for encodeRequest, extractToolCall, edge cases |
Expand test coverage for untested paths |
| LOW | NaN status from missing HTTP/2 header | Default to 500 if :status is missing |
| LOW | Empty cleaned token not validated | Add non-empty check after :: stripping |
| LOW | Remove isCloudEnv() per YAGNI |
Dead code for CLI context |
Overall Assessment
APPROVED WITH NOTES
The implementation is architecturally solid with clean module separation, proper buffer safety, and good security practices. The fix commits have addressed most of the critical issues from prior reviews (DRY extraction via parseProtobufFrames, guard clauses, COMPRESS_FLAG consolidation, abort listener cleanup, debug logging).
Remaining blockers for clean merge:
- The test file has two calls with wrong argument types to
wrapConnectRPCFrame(object instead of boolean) — these tests are testing compressed behavior while claiming to test uncompressed. This is a straightforward fix. - The system message content type mismatch could produce garbage output for array-format system messages.
Non-blocking follow-ups:
- Expand test coverage for tool calls,
encodeRequest, and malformed input - Track true streaming implementation (currently buffered SSE, documented via TODO)
- Remove
isCloudEnv()per YAGNI
Reviewed by
claude-opus-4-6-thinking
Code Review: PR #525 — feat(cursor): add ConnectRPC protobuf encoder/decoder and HTTP/2 executor📋 SummaryThis PR adds a complete ConnectRPC binary protobuf encoding/decoding layer for Cursor API communication, including an HTTP/2 executor with Jyh Cipher checksum, gzip compression, streaming support, and an OpenAI-compatible translation layer. The implementation is well-structured across 6 source files and 1 test file (2,557 additions), with clear separation of concerns between schema, encoder, decoder, orchestrator, translator, and executor. The code demonstrates strong defensive programming with proper buffer bounds checks, guard clauses replacing non-null assertions, and comprehensive error handling throughout. ✅ Strengths
🔍 Observations & Suggestions🔴 HIGH — Fragile Rate Limit Detection via String MatchingFile: const isRateLimit =
result.error.toLowerCase().includes('rate') ||
result.error.toLowerCase().includes('limit');This will false-positive on any error containing "rate" or "limit" (e.g., "character limit exceeded", "generation rate changed", "moderate content"). The JSON error path at line 112 correctly checks for Suggestion: Match on more specific patterns or use the structured error code when available: const isRateLimit =
result.error.toLowerCase().includes('rate limit') ||
result.error.toLowerCase().includes('resource_exhausted') ||
result.error.toLowerCase().includes('too many requests');🟡 MEDIUM — Duplicate Decompression LogicFiles: Both functions implement the same gzip decompression with COMPRESS_FLAG checking. The executor's version additionally checks for JSON error payloads before decompression, but the core gzip logic is duplicated. Currently Suggestion: Consider having 🟡 MEDIUM — Inconsistent Frame Length ParsingFile: const length = (buffer[1] << 24) | (buffer[2] << 16) | (buffer[3] << 8) | buffer[4];File: const length = buffer.readUInt32BE(offset + 1);The decoder uses bitwise shifts while the executor uses Suggestion: Use const length = ((buffer[1] << 24) | (buffer[2] << 16) | (buffer[3] << 8) | buffer[4]) >>> 0;🟡 MEDIUM —
|
| Check | Status | Notes |
|---|---|---|
| No hardcoded secrets | ✅ | Tokens passed via credentials, not stored |
| Token handling | ✅ | Bearer token extracted safely; x-client-key is SHA-256 hash, not raw token |
| Input validation | ✅ | Buffer bounds checked, varint max bytes enforced, empty messages rejected |
| No command injection | ✅ | No shell execution; pure data transformation |
| No path traversal | ✅ | process.cwd() used for metadata only, not for file operations |
| Debug logging safety | ✅ | Only logs errors, not token values; gated behind CCS_DEBUG |
| HTTP/2 TLS | ✅ | Always connects via https:// (cursor-executor.ts:130) |
| Dependency security | ✅ | Uses only Node.js built-ins (crypto, zlib, http2) — no external deps |
| Abort signal cleanup | ✅ | Prevents listener leaks via stored ref + once: true + removeEventListener |
📊 Code Quality Checklist
| Criterion | Status | Notes |
|---|---|---|
| TypeScript strict mode | ✅ | Full compliance, no any, no ! assertions |
| ESLint rules | ✅ | No violations; unused params use _ prefix |
| No emojis in CLI output | ✅ | Debug logs use [cursor] prefix only |
| Error handling | ✅ | All catch blocks log with CCS_DEBUG, graceful fallbacks |
| Test coverage | ✅ | 27+ tests covering encoder, decoder, translator, executor, edge cases |
| DRY | ✅ | parseProtobufFrames generator, shared concatArrays, COMPRESS_FLAG |
| KISS | ✅ | No over-abstraction; functional style with clear data flow |
| YAGNI | ✅ | No speculative features; TODO/issues track future work |
| Cross-platform | ✅ | OS detection for headers, no platform-specific code |
| Documentation | ✅ | JSDoc on all exported functions, grouping comments on FIELD constant |
| Conventional commits | ✅ | All 7 commits follow feat(cursor):/fix(cursor): pattern |
| Module isolation | ✅ | Fully self-contained, no imports from or exports to other modules |
💡 Recommendations
| Priority | Item | Location |
|---|---|---|
| 🔴 High | Fix rate limit detection to use more specific string matching | cursor-executor.ts:441-443 |
| 🟡 Medium | Consolidate duplicate decompression logic | cursor-executor.ts:64-94 / cursor-protobuf-decoder.ts:124-137 |
| 🟡 Medium | Make frame length parsing consistent (readUInt32BE or >>> 0) |
cursor-protobuf-decoder.ts:117 |
| 🟡 Medium | Throw on unsupported wire types in encodeField instead of silent empty return |
cursor-protobuf-encoder.ts:60 |
| 🟢 Low | Remove duplicate decompression failure test | cursor-protobuf.test.ts:577-624 |
| 🟢 Low | Add test for empty messages validation | cursor-protobuf.ts:49-51 |
🎯 Overall Assessment
⚠️ APPROVED WITH NOTES
This is a high-quality, well-architected implementation of a complex protocol layer. The code demonstrates excellent TypeScript practices, thorough defensive programming, and proper resource management. The module is fully self-contained, follows existing codebase patterns, and has comprehensive test coverage.
The one high-priority item (fragile rate limit detection) should be addressed before merging to prevent false-positive rate limit errors in production. The medium-priority items are code quality improvements that can be addressed in follow-up PRs without blocking this one.
The phased rollout approach (PR #519 auth, #525 protobuf core, #520 daemon/CLI, #521 config/dashboard) is a sound architecture decision that keeps each PR reviewable and testable in isolation.
🤖 Reviewed by
claude-opus-4-6-thinking
Use specific patterns ('rate limit', 'resource_exhausted', 'too many
requests') instead of broad 'rate' or 'limit' substrings that would
false-positive on unrelated errors like 'character limit exceeded'.
Code Review: PR #525 —
|
| Check | Status | Notes |
|---|---|---|
| No hardcoded secrets | ✅ | Tokens passed via credentials parameter at runtime |
| Token handling | ✅ | SHA-256 hash for x-client-key, Bearer token via HTTPS, not logged |
| No command injection | ✅ | No shell commands; pure data transformation |
| Input validation | ✅ | Buffer bounds checks, varint max bytes, empty messages rejected, machineId required, empty token check |
| HTTPS only | ✅ | baseUrl hardcoded to https://api2.cursor.sh |
| Debug logging safety | ✅ | Only logs errors, never token values; gated behind CCS_DEBUG |
| Dependency security | ✅ | Node.js built-ins only (crypto, zlib, http2) — no external deps |
| Abort signal cleanup | ✅ | Stored handler ref + removeEventListener prevents listener leaks |
| Error message leakage | ✅ | Errors wrapped in generic OpenAI-format responses |
| Compression bomb | No size limit on gunzipSync output (low risk for trusted endpoint) |
📊 Code Quality Checklist
| Criterion | Status | Notes |
|---|---|---|
| TypeScript strict mode | ✅ | No any, no ! assertions, proper type narrowing |
| ESLint compliance | ✅ | Unused params prefixed _, prefer-const, eqeqeq |
| No emojis in CLI output | ✅ | Debug logs use [cursor] prefix, no emojis |
| Conventional commits | ✅ | All commits follow feat(cursor):/fix(cursor): |
| Cross-platform | ✅ | process.platform/process.arch for OS detection headers |
| Error handling | ✅ | All catch blocks log with CCS_DEBUG, graceful fallbacks |
| DRY | ✅ | parseProtobufFrames generator, shared concatArrays, unified COMPRESS_FLAG |
| KISS | ✅ | No over-abstraction; clear data flow through encode → frame → send → receive → decode → translate |
| YAGNI | ✅ | No speculative features; TODOs track future work with issue refs |
| Test coverage | ✅ | 27+ tests covering core codec, translator, executor, and edge cases |
| Module isolation | ✅ | Fully self-contained; no imports from or exports to other CCS modules |
💡 Recommendations
| Priority | Item | Location |
|---|---|---|
| 🟡 Medium | Consolidate duplicate decompression logic (or align failure behavior) | cursor-executor.ts:64-94 / cursor-protobuf-decoder.ts:124-137 |
| 🟡 Medium | Use consistent unsigned frame length parsing | cursor-protobuf-decoder.ts:117 |
| 🟡 Medium | Throw on unsupported wire types in encodeField |
cursor-protobuf-encoder.ts:60 |
| 🟢 Low | Remove duplicate decompression failure test | cursor-protobuf.test.ts:577-624 |
| 🟢 Low | Narrow buildCursorRequest spread to needed fields only |
cursor-translator.ts:151-154 |
| 🟢 Low | Add test for empty messages validation | cursor-protobuf.ts:49-51 |
| 🟢 Low | Remove unused exports (FieldNumber, export default, TOOL_RESULT_RESULT) |
schema + executor |
🎯 Overall Assessment
✅ APPROVED
This is a well-architected, thoroughly hardened implementation of a complex binary protocol layer. The code demonstrates strong TypeScript practices (strict mode compliance, no any or ! assertions), proper resource management (HTTP/2 client cleanup, abort signal handling), and comprehensive defensive programming (buffer bounds checking at every level).
Five rounds of review feedback have been systematically addressed: guard clauses replaced all non-null assertions, parseProtobufFrames generator eliminated DRY violations, COMPRESS_FLAG was consolidated, rate limit detection was tightened with specific string matching, system message array content is properly handled, and token parsing edge cases are covered.
The remaining items are all non-blocking improvements (decompression consolidation, consistent frame length parsing, unsupported wire type handling, minor test cleanup). These can be addressed in follow-up work without risk to the current implementation.
The phased rollout approach (PR #519 auth → #525 protobuf core → #520 daemon/CLI → #521 config/dashboard) keeps each PR reviewable and testable in isolation.
🤖 Reviewed by
claude-opus-4-6-thinking
Summary
Files Added (6)
src/cursor/cursor-protobuf-schema.ts— Protobuf field/wire type definitionssrc/cursor/cursor-protobuf-encoder.ts— Binary protobuf message encodingsrc/cursor/cursor-protobuf-decoder.ts— Protobuf message decoding with field mappingsrc/cursor/cursor-protobuf.ts— Request/response serialization orchestratorsrc/cursor/cursor-translator.ts— OpenAI ↔ Cursor format translationsrc/cursor/cursor-executor.ts— HTTP/2 executor with ConnectRPC protocolEdge Cases Fixed (code review)
_Test plan