Conversation

@serefyarar serefyarar commented Jan 27, 2026

Introduces an agent-based endpoint for generating friendly, personalized first message suggestions when starting a new chat. Updates frontend chat components to support initial message prefill and auto-suggestion, refactors chat input to use a textarea, and passes mutual intent context for improved relevance. Removes unused ChatWindow component and updates StreamChatContext to support initial message handling.

Summary by CodeRabbit

  • New Features

    • Suggested, auto-generated opening messages for new conversations via a suggestion service.
    • Chat input upgraded to an auto-resizing textarea and supports pre-filled initial messages.
  • Documentation

    • Added comprehensive in-repo operational and developer documentation covering architecture, workflows, and environment setup.
  • Refactor

    • Simplified chat architecture to centralize message initialization and pass initial-message context to chat windows (legacy chat UI removed).



coderabbitai bot commented Jan 27, 2026

📝 Walkthrough

Adds comprehensive repository documentation; refactors frontend chat UI to use ChatView with textarea and initialMessage support; propagates initialMessage through StreamChatContext and ClientWrapper; extends ConnectionActions props; adds IntroMakerGenerator agent and a new POST /suggest-intro route (duplicate implementation present).

Changes

**Documentation**
`CLAUDE.md`
New comprehensive operational and developer documentation covering architecture, dev commands, DB schema, queues, API routes, frontend structure, environment, testing, debugging, and conventions.

**Frontend — Chat UI & Context**
`frontend/src/components/chat/ChatView.tsx`, `frontend/src/components/chat/ChatWindow.tsx`, `frontend/src/contexts/StreamChatContext.tsx`, `frontend/src/components/ClientWrapper.tsx`
Removed legacy ChatWindow.tsx. Enhanced ChatView.tsx: added an `initialMessage` prop, an auto-resizing textarea, suggested-intro fetching, and UI updates. StreamChatContext updated to accept and propagate `initialMessage`. ClientWrapper now passes `activeChat.initialMessage` to ChatView.

**Frontend — Connection UI Props**
`frontend/src/components/ConnectionActions.tsx`, `frontend/src/components/InboxContent.tsx`
ConnectionActionsProps extended with optional `mutualIntents` and `synthesis` props. InboxContent updated to pass `mutualIntents={intents}` and `synthesis={syntheses[user.id]}` into ConnectionActions.

**Backend — Intro Generation API & Agent**
`protocol/src/agents/intent/stake/intro/intro-maker.generator.ts`, `protocol/src/routes/chat.ts`
New IntroMakerGenerator agent (Zod schema, input/result types, run method, factory). New POST `/suggest-intro` route that derives reasonings from mutual intents, invokes the generator, and returns the generated message (note: the route is registered twice in the file).

Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant ChatView as ChatView (Frontend)
    participant API as /suggest-intro (Backend)
    participant Generator as IntroMakerGenerator (Agent)
    participant LLM as Language Model

    User->>ChatView: Open new conversation
    ChatView->>API: POST /suggest-intro (targetUserId, mutualIntents, synthesis)
    API->>API: Fetch current & target user data
    API->>API: Derive reasonings from mutual intents/synthesis
    API->>Generator: run(input with reasonings)
    Generator->>LLM: Invoke model with SystemMessage & prompt
    LLM-->>Generator: Return structured response
    Generator-->>API: Return generated intro
    API-->>ChatView: { message, timestamp }
    ChatView->>ChatView: Set initialMessage and render pre-filled intro
```
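The `{ message, timestamp }` response contract in the diagram can be checked at the client boundary. A minimal TypeScript sketch; the `isSuggestIntroResponse` guard is hypothetical, and only the field names come from the diagram:

```typescript
// Shape of the /suggest-intro response as drawn in the sequence diagram.
// The guard itself is illustrative, not part of the PR.
interface SuggestIntroResponse {
  message: string;
  timestamp: string;
}

function isSuggestIntroResponse(value: unknown): value is SuggestIntroResponse {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.message === "string" && typeof v.timestamp === "string";
}
```

A guard like this keeps a malformed backend response from being written into the draft state.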

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~30 minutes

Poem

🐰 I hop in code where chats begin,
I tuck a friendly first-line in,
Generators hum, suggestions bloom,
Textareas cradle the intro's tune,
Conversations wake — let's grin!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

- **Docstring Coverage** ⚠️ Warning: Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

- **Description Check** ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
- **Title Check** ✅ Passed: The title 'Add AI-powered intro message suggestion to chat' directly and clearly summarizes the main objective of this PR: implementing an AI-generated message suggestion feature for chat initialization.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
frontend/src/components/chat/ChatView.tsx (1)

38-68: Reset messageText when switching chats to avoid stale drafts.

When switching between chats (different userId), the messageText state retains the previous draft because useState initialization only runs once. The existing useEffect at lines 250-255 only clears text when messages are already loaded and the conversation is not new, leaving stale drafts in scenarios where chats switch before messages load.

Add a useEffect that resets messageText when userId changes:

Proposed fix
```diff
+  const lastChatUserIdRef = useRef<string | null>(null);
+
   const scrollToBottom = useCallback(() => {
     messagesEndRef.current?.scrollIntoView({ behavior: 'auto' });
   }, []);
+
+  useEffect(() => {
+    if (lastChatUserIdRef.current !== userId) {
+      lastChatUserIdRef.current = userId;
+      setMessageText(initialMessage || '');
+    }
+  }, [userId, initialMessage]);
```
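The reset rule in the fix above can be stated as a pure function, which makes the intended behavior easy to unit-test. A sketch; the name and signature are illustrative, not the component's actual state handling:

```typescript
// Pure form of the reset rule: when the chat partner changes, the draft
// becomes the (optional) initial message; otherwise the draft is kept.
function draftOnChatSwitch(
  prevUserId: string | null,
  userId: string,
  currentDraft: string,
  initialMessage?: string
): string {
  return prevUserId === userId ? currentDraft : initialMessage ?? "";
}
```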
🤖 Fix all issues with AI agents
In `@CLAUDE.md`:
- Line 176: Replace bare URLs in the CLAUDE.md text (e.g., the sentence "Use
`bun run admin-queues` to start BullBoard UI at
http://localhost:3001/admin/queues" and the other two occurrences of the same
URL) with Markdown links; update the line to something like: Use `bun run
admin-queues` to start BullBoard UI at
[http://localhost:3001/admin/queues](http://localhost:3001/admin/queues) or use
a labeled link such as [BullBoard UI](http://localhost:3001/admin/queues) to
satisfy MD034 and maintain clarity.
- Around line 76-83: The fenced monorepo tree code block (the block showing
"index/ ├── protocol/ ...") is missing a language tag which triggers MD040;
update the opening fence from ``` to ```text so the block becomes a fenced
"text" code block—locate the monorepo tree block in CLAUDE.md and add the
language tag to the opening backticks.

In `@frontend/src/components/chat/ChatView.tsx`:
- Line 8: The async suggestion response can overwrite a user's in-progress edit
because in-flight requests are not gated; update the logic that calls
setMessageText(response.message) to ignore results when the user has started
editing after the request began by adding a ref flag (e.g., userEditedRef) and
an initialMessageRef: set userEditedRef.current = false before sending the
suggestion, capture initialMessageRef.current = messageText at send time,
subscribe input changes to set userEditedRef.current = true, and when the
response arrives only call setMessageText if userEditedRef.current is false AND
current messageText === initialMessageRef.current (or otherwise unchanged);
modify the suggestion-fetching code paths that use setMessageText and the input
change handler accordingly to locate these symbols (setMessageText, messageText,
and the suggestion fetch function) and ignore stale responses.

In `@protocol/src/agents/intent/stake/intro/intro-maker.generator.ts`:
- Around line 73-81: The buildUserMessage function currently inserts a
fabricated fallback "Working on interesting projects" for senderReasonings and
recipientReasonings when none are provided; remove these invented defaults and
instead require explicit reasonings by either leaving the reasoning section
empty/omitting the list when input.sender.reasonings or
input.recipient.reasonings is empty or surface a validation/error earlier (e.g.,
validateIntroMakerInput) so callers must supply reasonings; update the logic in
buildUserMessage (referencing buildUserMessage, senderReasonings,
recipientReasonings) to stop injecting the fallback and handle empty arrays
consistently (empty string/omitted section or validation failure).

In `@protocol/src/routes/chat.ts`:
- Around line 499-503: The current selects that populate currentUser and
targetUser (the db.select() from users that set currentUser and targetUser) do
not exclude soft-deleted rows, so add a condition to each query to filter out
records where users.deletedAt is set (i.e., require deletedAt IS NULL /
isNull(users.deletedAt)) so deactivated accounts are not used for intro
generation; apply the same change to the other similar select(s) mentioned
around the 505-507 area to ensure all user lookups exclude soft-deleted users.
- Around line 513-521: The loop over mutualIntents can push undefined or empty
strings into senderReasonings and recipientReasonings; update the
mutualIntents.forEach block (the intentPair handling inside
protocol/src/routes/chat.ts) to derive the candidate text via
intentPair.intent.summary || intentPair.intent.payload, then check that
candidate is a non-empty trimmed string (e.g., truthy after .trim()) before
pushing it into senderReasonings and recipientReasonings so no undefined/empty
values appear in the prompt.
- Around line 532-545: The intro generation call to IntroMakerGenerator is
synchronous and can hang; wrap the introMaker.run(...) invocation in a timeout
mechanism (e.g., Promise.race with a timer that rejects or returns a fallback
after a configurable timeout like 5–10s) so the route doesn't block
indefinitely; locate the IntroMakerGenerator usage and replace the direct await
introMaker.run(...) with the timed wrapper, ensure you handle the timeout
rejection by logging via the existing logger and returning a safe fallback
response or throwing an HTTP timeout error so the request thread is released.

Comment on lines +76 to +83
```
index/
├── protocol/ # Backend API & Agent Engine (Bun, Express, TypeScript)
├── frontend/ # Next.js 15 App with React 19
├── contracts/ # Smart contracts (blockchain)
├── redis/ # Redis configuration
└── txt/ # Documentation/knowledge base
```

⚠️ Potential issue | 🟡 Minor

Add a language tag to the monorepo tree code block.
Markdownlint (MD040) flags the fenced block missing a language.

🔧 Proposed fix
````diff
-```
+```text
 index/
 ├── protocol/          # Backend API & Agent Engine (Bun, Express, TypeScript)
 ├── frontend/          # Next.js 15 App with React 19
 ├── contracts/         # Smart contracts (blockchain)
 ├── redis/            # Redis configuration
 └── txt/              # Documentation/knowledge base
````
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

76-76: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


- Cleanup: Completed jobs removed after 24h, failed after 7d
- Default concurrency: 1 (sequential processing)

**Monitoring**: Use `bun run admin-queues` to start BullBoard UI at http://localhost:3001/admin/queues

⚠️ Potential issue | 🟡 Minor

Replace bare URLs with Markdown links.
Markdownlint (MD034) flags bare URLs; wrapping them improves consistency.

🔧 Proposed fix
```diff
-**Monitoring**: Use `bun run admin-queues` to start BullBoard UI at http://localhost:3001/admin/queues
+**Monitoring**: Use `bun run admin-queues` to start BullBoard UI at [http://localhost:3001/admin/queues](http://localhost:3001/admin/queues)

-Each preset is configured at https://openrouter.ai/settings/presets with specific model, temperature, and max_tokens settings.
+Each preset is configured at [https://openrouter.ai/settings/presets](https://openrouter.ai/settings/presets) with specific model, temperature, and max_tokens settings.

-- View traces at https://us.cloud.langfuse.com
+- View traces at [https://us.cloud.langfuse.com](https://us.cloud.langfuse.com)
```

Also applies to: 287-287, 398-398

🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

176-176: Bare URL used

(MD034, no-bare-urls)


```typescript
import { useStreamChat } from '@/contexts/StreamChatContext';
import { useNotifications } from '@/contexts/NotificationContext';
import { useDiscover } from '@/contexts/APIContext';
import { useAuthenticatedAPI } from '@/lib/api';
```

⚠️ Potential issue | 🟠 Major


Prevent async suggestion from overwriting user input after they start typing.
The suggestion fetch can complete after the user begins editing, calling setMessageText(response.message) and overwriting their draft. The current check at line 259 (messageText.trim()) only prevents new suggestions from being fetched when text is present, but doesn't guard in-flight requests that were initiated before the user started typing. Track user edits explicitly with a ref to prevent overwriting.

Proposed fix
```diff
+  const userEditedRef = useRef(false);
+
+  useEffect(() => {
+    userEditedRef.current = false;
+  }, [userId, isNewConversation]);

   // Fetch suggested intro message for new conversations
   useEffect(() => {
     if (!isNewConversation || !isReady || loading || messageText.trim()) return;
@@ -268,7 +275,9 @@
         const response = await api.post<{ message: string }>('/chat/suggest-intro', {
           targetUserId: userId,
         });
-        if (mounted && response.message) {
+        if (
+          mounted &&
+          response.message &&
+          !userEditedRef.current &&
+          !inputRef.current?.value.trim()
+        ) {
           setMessageText(response.message);
         }
@@ -604,7 +613,9 @@
                 <textarea
                   ref={inputRef}
                   value={messageText}
-                  onChange={(e) => setMessageText(e.target.value)}
+                  onChange={(e) => {
+                    userEditedRef.current = true;
+                    setMessageText(e.target.value);
+                  }}
```
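The gating logic in the diff reduces to a single predicate. A hedged sketch; `shouldApplySuggestion` is a hypothetical helper, not code from the PR:

```typescript
// Decide whether an async intro suggestion may overwrite the draft:
// apply only when there is a suggestion, the user has not typed,
// and the current draft is still empty.
function shouldApplySuggestion(
  userEdited: boolean,
  currentDraft: string,
  suggestion: string | undefined
): boolean {
  return Boolean(suggestion) && !userEdited && currentDraft.trim() === "";
}
```

Factoring the check out this way makes the stale-response rule testable without rendering the component.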

Comment on lines +73 to +81
```typescript
  private buildUserMessage(input: IntroMakerInput): string {
    const senderReasonings = input.sender.reasonings.length > 0
      ? input.sender.reasonings.map((r, i) => `${i + 1}. ${r}`).join('\n')
      : '1. Working on interesting projects';

    const recipientReasonings = input.recipient.reasonings.length > 0
      ? input.recipient.reasonings.map((r, i) => `${i + 1}. ${r}`).join('\n')
      : '1. Working on interesting projects';
```


⚠️ Potential issue | 🟡 Minor

Avoid fabricated fallback “reasonings.”
Lines 74-81 insert "Working on interesting projects," which conflicts with the "use only explicit information" rule and can lead to invented claims. Prefer requiring explicit reasonings.

🔧 Proposed fix
```diff
-    const senderReasonings = input.sender.reasonings.length > 0
-      ? input.sender.reasonings.map((r, i) => `${i + 1}. ${r}`).join('\n')
-      : '1. Working on interesting projects';
-
-    const recipientReasonings = input.recipient.reasonings.length > 0
-      ? input.recipient.reasonings.map((r, i) => `${i + 1}. ${r}`).join('\n')
-      : '1. Working on interesting projects';
+    const senderReasonings = input.sender.reasonings
+      .map(r => r?.trim())
+      .filter((r): r is string => Boolean(r));
+    const recipientReasonings = input.recipient.reasonings
+      .map(r => r?.trim())
+      .filter((r): r is string => Boolean(r));
+
+    if (senderReasonings.length === 0 || recipientReasonings.length === 0) {
+      throw new Error('IntroMakerGenerator requires explicit reasonings for both users');
+    }
+
+    const senderReasoningsText = senderReasonings.map((r, i) => `${i + 1}. ${r}`).join('\n');
+    const recipientReasoningsText = recipientReasonings.map((r, i) => `${i + 1}. ${r}`).join('\n');
-${senderReasonings}
+${senderReasoningsText}

-${recipientReasonings}
+${recipientReasoningsText}
```
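The validate-then-format behavior proposed above can be isolated as a pure helper. A sketch under the assumption that callers must supply explicit reasonings; `formatReasonings` is illustrative, not the generator's actual method:

```typescript
// Format explicit reasonings as a numbered list; reject empty input
// instead of fabricating a fallback, per the review comment.
function formatReasonings(reasonings: string[]): string {
  const cleaned = reasonings
    .map((r) => r.trim())
    .filter((r) => r.length > 0);
  if (cleaned.length === 0) {
    throw new Error("explicit reasonings are required");
  }
  return cleaned.map((r, i) => `${i + 1}. ${r}`).join("\n");
}
```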

Comment on lines +499 to +503
```typescript
      // Get user info
      const [currentUser, targetUser] = await Promise.all([
        db.select().from(users).where(eq(users.id, userId)).limit(1),
        db.select().from(users).where(eq(users.id, targetUserId)).limit(1)
      ]);
```

⚠️ Potential issue | 🟠 Major

Exclude soft-deleted users from intro generation.
Lines 499-503 pull users without a deletedAt filter, so intros can be generated for deactivated accounts.

🔧 Proposed fix
```diff
-      const [currentUser, targetUser] = await Promise.all([
-        db.select().from(users).where(eq(users.id, userId)).limit(1),
-        db.select().from(users).where(eq(users.id, targetUserId)).limit(1)
-      ]);
+      const [currentUser, targetUser] = await Promise.all([
+        db.select().from(users).where(and(eq(users.id, userId), isNull(users.deletedAt))).limit(1),
+        db.select().from(users).where(and(eq(users.id, targetUserId), isNull(users.deletedAt))).limit(1)
+      ]);
```

Note: `and` and `isNull` must be imported from `drizzle-orm` if they are not already in scope in this file.

Also applies to: 505-507


Comment on lines 513 to 521
```typescript
      if (mutualIntents && mutualIntents.length > 0) {
        // Use intent summaries/payloads as reasoning
        mutualIntents.forEach((intentPair: any) => {
          if (intentPair.intent) {
            const text = intentPair.intent.summary || intentPair.intent.payload;
            senderReasonings.push(text);
            recipientReasonings.push(text);
          }
        });
```

⚠️ Potential issue | 🟡 Minor

Filter out empty/undefined intent text before prompting.
Lines 515-520 can push undefined (or empty strings) into the reasonings, which surfaces as "undefined" in the prompt.

🔧 Proposed fix
```diff
-        mutualIntents.forEach((intentPair: any) => {
-          if (intentPair.intent) {
-            const text = intentPair.intent.summary || intentPair.intent.payload;
-            senderReasonings.push(text);
-            recipientReasonings.push(text);
-          }
-        });
+        mutualIntents.forEach((intentPair: any) => {
+          if (intentPair.intent) {
+            const text = intentPair.intent.summary || intentPair.intent.payload;
+            if (typeof text === 'string' && text.trim()) {
+              senderReasonings.push(text.trim());
+              recipientReasonings.push(text.trim());
+            }
+          }
+        });
```
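The same filtering can be expressed as a standalone helper with simplified types. A sketch; `IntentPair` and `deriveReasonings` are illustrative names, not the route's actual code:

```typescript
// Extract non-empty reasoning text from mutual intent pairs,
// preferring the summary over the raw payload.
interface IntentPair {
  intent?: { summary?: string; payload?: string };
}

function deriveReasonings(mutualIntents: IntentPair[]): string[] {
  const out: string[] = [];
  for (const pair of mutualIntents) {
    const text = pair.intent?.summary || pair.intent?.payload;
    if (typeof text === "string" && text.trim()) {
      out.push(text.trim());
    }
  }
  return out;
}
```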

Comment on lines 532 to 545
```typescript
      // Generate intro using agent
      const introMaker = new IntroMakerGenerator();

      const result = await introMaker.run({
        sender: {
          name: currentUser[0].name,
          reasonings: senderReasonings.slice(0, 3) // Limit to top 3
        },
        recipient: {
          name: targetUser[0].name,
          reasonings: recipientReasonings.slice(0, 3)
        }
      });
```


⚠️ Potential issue | 🟠 Major

Add a timeout to the intro generation call.
This route awaits the model call directly; without a timeout, the request can hang indefinitely on long model latency.

🔧 Proposed fix
```diff
-      const result = await introMaker.run({
-        sender: {
-          name: currentUser[0].name,
-          reasonings: senderReasonings.slice(0, 3) // Limit to top 3
-        },
-        recipient: {
-          name: targetUser[0].name,
-          reasonings: recipientReasonings.slice(0, 3)
-        }
-      });
+      const timeoutMs = 15000;
+      const result = await Promise.race([
+        introMaker.run({
+          sender: {
+            name: currentUser[0].name,
+            reasonings: senderReasonings.slice(0, 3) // Limit to top 3
+          },
+          recipient: {
+            name: targetUser[0].name,
+            reasonings: recipientReasonings.slice(0, 3)
+          }
+        }),
+        new Promise<never>((_, reject) =>
+          setTimeout(() => reject(new Error('Intro generation timeout')), timeoutMs)
+        )
+      ]);
```

Replaces manual mutual intents extraction with discoverUsers API for fetching mutual intents and reasonings between users. Simplifies request body validation and improves context generation for intro messages.