Add LTI guest authentication for anonymous chatbot access#5016
rschlaefli wants to merge 4 commits into v3 from
Conversation
…ccounts

Enable LTI-verified users to use the tutor chatbot without creating a Klicker participant account. This implements the core plumbing from PLAN-chat-semi-anonymous-lti-mode.md:

- Add LTI guest identity module (ltiGuest.ts) with deterministic HMAC-based guest SSO IDs scoped per course, chat-only JWT tokens signed with a separate secret, and guest persona creation/lookup
- Add /auth/lti entry route that verifies the short-lived LTI JWT, creates or finds a guest persona, issues a chat_participant_token, and redirects to the chatbot
- Update apiGuards.ts to accept both chat_participant_token (anonymous) and participant_token (account), returning authMode to callers
- Update middleware.ts to allow chat_participant_token and bypass the /auth/lti route from auth checks
- Force anonymous users to the fallback (cheapest) model in the chat and credits endpoints
- Update noLogin page with LTI-specific messaging

https://claude.ai/code/session_01RKzoRiYSNuVRB5DDevf2xk
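The deterministic, course-scoped guest identity described above can be sketched with Node's crypto module. This is a minimal illustration, not the PR's actual implementation: the seed value, the `lti-guest:` prefix, and the length-prefixed encoding are assumptions.

```typescript
import { createHmac } from 'node:crypto'

// Illustrative stand-ins: the real seed comes from env configuration and the
// prefix/encoding may differ in ltiGuest.ts.
const GUEST_SEED = 'example-guest-seed'
const GUEST_SSO_PREFIX = 'lti-guest:'

// Deterministically derive a guest SSO ID scoped to one course: the same LMS
// subject always maps to the same guest identity within a course, but to a
// different one in any other course.
function deriveGuestSsoId(ltiSub: string, courseId: string): string {
  const digest = createHmac('sha256', GUEST_SEED)
    // Length-prefix the first field so values containing ':' cannot collide.
    .update(`${ltiSub.length}:${ltiSub}:${courseId}`)
    .digest('base64url')
  return `${GUEST_SSO_PREFIX}${digest}`
}

const sameUserSameCourse = deriveGuestSsoId('lms-user-42', 'course-1')
const repeat = deriveGuestSsoId('lms-user-42', 'course-1')
const otherCourse = deriveGuestSsoId('lms-user-42', 'course-2')
console.log(sameUserSameCourse === repeat, sameUserSameCourse === otherCourse)
```

Because the ID is an HMAC under a server-side secret, the LMS subject cannot be recovered from the stored SSO ID, yet repeated launches resolve to the same guest persona.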
Important: Review skipped. Draft detected. Please check the settings in the CodeRabbit UI.
📝 Walkthrough

Adds LTI guest authentication: new LTI auth route, deterministic guest persona creation, guest JWT issuance/verification, middleware support for guest tokens, propagation of authMode through auth guards, and anonymous-only fallback model restrictions in chat and credits endpoints.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant LTI_Route as "LTI Auth Route"
    participant DB as "Database / Prisma"
    participant GuestModule as "ltiGuest (derive/sign/verify)"
    participant JWT_Signer as "JWT Signer"
    participant ChatAPI as "Chat/Credits API"
    Client->>LTI_Route: GET /auth/lti?jwt=...&courseId=...&chatbotId=...
    LTI_Route->>GuestModule: verifyLtiToken(jwt)
    GuestModule-->>LTI_Route: LTI payload (sub, scope)
    LTI_Route->>DB: find account by ssoId / upsert participation
    alt existing non-guest account
        DB-->>LTI_Route: account found
        LTI_Route->>Client: redirect to chatbot (account mode)
    else no account / guest needed
        LTI_Route->>GuestModule: findOrCreateGuestPersona(sub, scope, courseId)
        GuestModule->>DB: create Participant, SSO, Participation
        GuestModule-->>LTI_Route: guest participantId
        LTI_Route->>GuestModule: signChatGuestToken(participantId)
        GuestModule-->>LTI_Route: chat_participant_token
        LTI_Route->>Client: set host-only cookie + redirect
    end
    Client->>ChatAPI: Request chat/credits with chat_participant_token cookie
    ChatAPI->>GuestModule: verifyChatGuestToken(cookie)
    GuestModule-->>ChatAPI: authMode='anonymous' + participantId
    ChatAPI->>DB: fetch models + modelRegistry (determine fallback IDs)
    ChatAPI-->>Client: respond with models filtered to fallback (authMode included)
```
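The launch flow in the diagram can be condensed into one function with every external collaborator injected as a stub. All names and signatures here are assumptions for illustration, not the PR's actual APIs:

```typescript
// Stand-ins for the route's collaborators (verification, persistence, signing).
type LtiPayload = { sub: string; scope: string }

type Deps = {
  verifyLtiToken: (jwt: string) => LtiPayload
  findOrCreateGuestPersona: (
    sub: string,
    scope: string,
    courseId: string
  ) => string
  signChatGuestToken: (participantId: string) => string
}

// Condenses the diagram's guest branch: verify the LTI JWT, resolve a guest
// persona, mint a chat-only token, and answer with a cookie plus a redirect.
function handleLtiLaunch(
  jwt: string,
  courseId: string,
  chatbotId: string,
  deps: Deps
): { cookie: string; redirect: string } {
  const payload = deps.verifyLtiToken(jwt)
  const participantId = deps.findOrCreateGuestPersona(
    payload.sub,
    payload.scope,
    courseId
  )
  const token = deps.signChatGuestToken(participantId)
  return {
    cookie: `chat_participant_token=${token}`, // set as a host-only cookie
    redirect: `/chatbots/${chatbotId}`,
  }
}

const result = handleLtiLaunch('stub-jwt', 'course-1', 'bot-1', {
  verifyLtiToken: () => ({ sub: 'lms-user', scope: 'course' }),
  findOrCreateGuestPersona: () => 'guest-123',
  signChatGuestToken: (id) => `tok-${id}`,
})
console.log(result.cookie, result.redirect)
```

Injecting the dependencies keeps the sequencing (verify, persona, token, cookie, redirect) testable without a database or real JWTs.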
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: f695bf1c94
```ts
// Override model selection if modelSelection is disabled
let userCredits: { current: number; total: number } | null = null
if (!chatbot.modelSelection) {
```
Preserve fallback model for anonymous when selection is disabled
The anonymous fallback restriction is applied first, but this block immediately overwrites selectedModel with getAutomaticModelId(userCredits) when modelSelection is off. Since getAutomaticModelId returns the primary (non-fallback) model whenever credits are positive, LTI guest users can still hit premium models on chatbots with automatic selection enabled, which defeats the cost-control guarantee introduced in this change.
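The ordering problem described in this comment can be shown with a small pure function: applying the anonymous restriction after the automatic-selection branch preserves the guarantee. All names here are stand-ins for the route's actual logic, with `getAutomaticModelId` and `CreditsService` replaced by an injected `automaticId`:

```typescript
type Model = { id: string; fallback?: boolean }

// Hypothetical pure helper mirroring the route's selection branches.
function resolveModel(
  requested: string,
  authMode: 'account' | 'anonymous',
  modelSelection: boolean,
  registry: Model[],
  automaticId: string
): string {
  let selected = requested
  // The automatic override runs first...
  if (!modelSelection) {
    selected = automaticId
  }
  // ...and the anonymous restriction runs last, so no earlier branch can
  // undo the cost-control guarantee.
  if (authMode === 'anonymous') {
    const fallback = registry.find((m) => m.fallback)
    if (fallback) selected = fallback.id
  }
  return selected
}

const registry: Model[] = [{ id: 'premium' }, { id: 'cheap', fallback: true }]
console.log(resolveModel('premium', 'anonymous', false, registry, 'premium')) // 'cheap'
console.log(resolveModel('premium', 'account', false, registry, 'premium')) // 'premium'
```

With the restriction placed last, an anonymous guest lands on the fallback model even when `modelSelection` is disabled and the automatic branch would have chosen the premium model.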
```ts
if (!chatbot) {
  return NextResponse.json({ error: 'Chatbot not found' }, { status: 404 })
}
```
Validate chatbot belongs to requested course before enrollment
This route only checks that both courseId and chatbotId exist, but never enforces chatbot.courseId === courseId before creating/upserting participation for courseId. Because the LTI JWT validation does not bind these IDs, a caller with a valid LTI token and knowledge of UUIDs can submit a mismatched pair and get enrolled into an unintended course context, then access that course’s chatbot if IDs are paired correctly.
```ts
// environments without APP_CHAT_GUEST_SECRET, the middleware will fall
// through to the participant_token check. The API route guards in
// apiGuards.ts handle the authoritative verification.
return null
```
Derive middleware guest secret when APP_CHAT_GUEST_SECRET is absent
Returning null here means middleware can never validate chat_participant_token unless APP_CHAT_GUEST_SECRET is explicitly set, even though token signing/verification in ltiGuest.ts supports an APP_SECRET-derived fallback. In environments that rely on that fallback (e.g., dev or partial rollout), valid guest tokens will always be treated as invalid and users are redirected to /noLogin before API guards can authenticate them.
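One way to implement the suggested fallback is to derive a dedicated guest secret from `APP_SECRET` when `APP_CHAT_GUEST_SECRET` is absent. This is a sketch in Node terms (the Edge middleware runtime would need the Web Crypto equivalent); the derivation label and scheme are assumptions and would have to match whatever ltiGuest.ts actually derives, or tokens would not verify:

```typescript
import { createHmac } from 'node:crypto'

// Sketch: when APP_CHAT_GUEST_SECRET is absent, derive a dedicated guest
// secret from APP_SECRET instead of returning null.
function getChatGuestSecret(
  env: Record<string, string | undefined>
): string | null {
  if (env.APP_CHAT_GUEST_SECRET) return env.APP_CHAT_GUEST_SECRET
  if (env.APP_SECRET) {
    return createHmac('sha256', env.APP_SECRET)
      .update('chat-guest-token') // fixed label scopes the derived key
      .digest('hex')
  }
  return null // only now is verification truly impossible
}

console.log(getChatGuestSecret({ APP_CHAT_GUEST_SECRET: 'explicit' }))
console.log(getChatGuestSecret({ APP_SECRET: 'base' }) !== null)
```

Deriving rather than returning `null` keeps the middleware and the token-issuing module in agreement on the same fallback key.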
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/chat/src/app/api/chatbots/[chatbotId]/chat/route.ts (1)
767-782: ⚠️ Potential issue | 🟠 Major: Anonymous model restriction can be overridden by the `modelSelection`-disabled path.

When `authMode === 'anonymous'`, the code forces `selectedModel` to the fallback model (Line 772). However, if `chatbot.modelSelection` is `false` (Lines 778-781), `getAutomaticModelId(userCredits)` overrides `selectedModel` unconditionally and may assign a non-fallback (premium) model, effectively bypassing the anonymous cost constraint. Consider either skipping the automatic override for anonymous users or re-applying the fallback restriction after it:
Proposed fix:

```diff
 // Override model selection if modelSelection is disabled
 let userCredits: { current: number; total: number } | null = null
 if (!chatbot.modelSelection) {
   // Get current user credits to determine automatic model selection
   userCredits = await CreditsService.getUserCredits(participantId, chatbotId)
-  selectedModel = getAutomaticModelId(userCredits)
+  selectedModel =
+    authMode === 'anonymous'
+      ? getAutomaticModelId({ current: 0 })
+      : getAutomaticModelId(userCredits)
 }
```
🤖 Fix all issues with AI agents
In `apps/chat/src/app/api/chatbots/[chatbotId]/chat/route.ts`:
- Around line 769-774: When authMode === 'anonymous' and you attempt to look up
a fallback model via modelRegistry.find((m) => m.fallback), add an explicit
failure path when fallbackModel is undefined instead of silently continuing; in
the anonymous branch (where fallbackModel is assigned and selectedModel would be
overridden), return/throw an error response indicating "no fallback model
configured for anonymous users" (or similar) and do not allow the request to
proceed or use the caller-provided selectedModel, so the route handler
(route.ts) enforces cost-control guarantees for anonymous users.
In `apps/chat/src/app/api/chatbots/[chatbotId]/credits/route.ts`:
- Around line 63-68: getAutomaticModelId(credits) can return a premium model for
users with credits while anonymous users (authMode indicates anonymous) are
limited to fallback models, causing automaticModelId to point to a model not
present in availableModels; update the response logic in the route handler to
filter or override the computed automaticModelId when authMode is anonymous so
it always selects an ID from the availableModels list (or a configured fallback
model), e.g., compute models (availableModels) first, then set automaticModelId
= getAutomaticModelId(credits) but if authMode indicates anonymous and
automaticModelId is not in availableModels, replace it with an allowed fallback
from availableModels or null before returning the JSON.
In `apps/chat/src/app/auth/lti/route.ts`:
- Around line 75-80: The log currently emits ltiPayload.sub (ltiSub) in
plaintext which may be a privacy/compliance risk; change the logging to either
omit ltiSub from info-level logs or replace it with a non-reversible identifier
by hashing/HMACing ltiPayload.sub with a server-side secret (e.g., using an env
var like LTI_SUB_HASH_SECRET) before logging, and move the raw or reversible
value to a more restricted debug/secure log if absolutely needed; update the log
call that uses LTI_AUTH_LOG_PREFIX to include the hashedId (or remove ltiSub)
while still logging courseId and chatbotId.
- Around line 4-8: Replace the relative import in route.ts with the project
path-alias form: change the import that brings in findOrCreateGuestPersona,
signChatGuestToken, and verifyLtiToken from "../../../lib/server/ltiGuest" to
use the "@/..." alias (e.g., import from "@/lib/server/ltiGuest") so the three
symbols (findOrCreateGuestPersona, signChatGuestToken, verifyLtiToken) are
imported via the alias per the coding guidelines.
- Around line 85-101: Ensure the chatbot actually belongs to the requested
course by comparing chatbot.courseId to the incoming courseId after the existing
prisma queries: in the route handler (where prisma.course.findUnique and
prisma.chatbot.findUnique are called), add a check that chatbot.courseId ===
courseId and if it does not match return an error (e.g., NextResponse.json({
error: 'Chatbot does not belong to course' }, { status: 404 or 403 })) so you
cannot create participations for a chatbot from a different course.
In `apps/chat/src/app/noLogin/page.tsx`:
- Line 13: The isLtiContext check (resolvedSearchParams.lti === '1') in page.tsx
is dead because the redirect to /noLogin never includes lti=1; update the code
paths that redirect to /noLogin so the LTI context is preserved: when token
verification fails in the middleware (the middleware redirect branch) and in the
LTI route error handlers, append lti=1 to the redirect URL (alongside
redirectTo) so resolvedSearchParams and isLtiContext in page.tsx become
reachable and the LTI-specific rendering executes.
In `apps/chat/src/lib/server/ltiGuest.ts`:
- Around line 60-66: The HMAC input in deriveGuestSsoId uses a plain ":" join of
ltiSub and courseId which can collide when either value contains ":"; update
deriveGuestSsoId (and its use of getChatGuestSeed/createHmac/GUEST_SSO_PREFIX)
to build an unambiguous input, e.g. prefix each field with its length or use a
reserved separator plus escaping (or serialize as JSON) before hashing so
different (ltiSub, courseId) pairs cannot map to the same HMAC input.
- Around line 129-135: The ParticipantAccount.type value used in the
accounts.create block (accounts.create with ssoId: guestSsoId and ssoType:
ltiScope) is set to 'lti_guest' which contradicts the documented valid values
("sso" or "affiliation"); update that literal to a documented value (e.g.,
change 'lti_guest' -> 'sso') or, if 'lti_guest' must be supported, update the
ParticipantAccount.type schema comment to include 'lti_guest' so the code and
schema remain consistent.
🧹 Nitpick comments (2)
apps/chat/src/middleware.ts (1)
31-43: Dev-environment limitation: guest tokens silently fall through when `APP_CHAT_GUEST_SECRET` is not set.

Without `APP_CHAT_GUEST_SECRET`, the middleware cannot verify guest tokens and falls through to the `participant_token` check. If the guest has no `participant_token`, they'll be incorrectly redirected to `/noLogin` despite having a valid guest session. The authoritative check in `apiGuards.ts` handles this correctly for API routes, but page-level middleware will still redirect. The trade-off is well-documented in the comments, but consider logging a warning at startup or on first miss so developers aren't puzzled by unexpected redirects during local testing.
apps/chat/src/lib/server/ltiGuest.ts (1)
116-143: Hash the random password or document why guest SSO accounts store plaintext passwords.

Guest participants are created with `isSSOAccount: true` and authenticate via JWT tokens (`chat_participant_token`), not password-based login. The plaintext `randomPassword` is never used for authentication and remains unused in the database. While this isn't a security issue for guests (which bypass password validation), it's inconsistent with the pattern used everywhere else in the codebase: all other participant creation operations hash passwords with bcrypt. Either hash it for consistency, or add a comment explaining that SSO guests authenticate via JWT and the password field is intentionally unused.
```ts
if (authMode === 'anonymous') {
  const fallbackModel = modelRegistry.find((m) => m.fallback)
  if (fallbackModel) {
    selectedModel = fallbackModel.id
  }
}
```
Silent no-op when no fallback model exists in the registry.
If modelRegistry.find((m) => m.fallback) returns undefined, the anonymous user silently proceeds with whatever model they selected. This undermines the cost control guarantee. Consider returning an error when no fallback model is available for anonymous users.
Proposed fix:

```diff
 if (authMode === 'anonymous') {
   const fallbackModel = modelRegistry.find((m) => m.fallback)
-  if (fallbackModel) {
-    selectedModel = fallbackModel.id
+  if (!fallbackModel) {
+    return NextResponse.json(
+      { error: 'No model available for anonymous access' },
+      { status: 403 }
+    )
   }
+  selectedModel = fallbackModel.id
 }
```
```diff
 return NextResponse.json({
   ...credits,
-  availableModels,
+  availableModels: models,
   automaticModelId: getAutomaticModelId(credits),
   authMode,
 })
```
automaticModelId may reference a non-fallback model for anonymous users.
getAutomaticModelId(credits) can return a premium model ID when the user has credits, but anonymous users are restricted to fallback models (Line 42). The response could advertise an automaticModelId that isn't in the availableModels list, which may confuse the client.
Proposed fix:

```diff
 return NextResponse.json({
   ...credits,
   availableModels: models,
-  automaticModelId: getAutomaticModelId(credits),
+  automaticModelId:
+    authMode === 'anonymous'
+      ? getAutomaticModelId({ current: 0 })
+      : getAutomaticModelId(credits),
   authMode,
 })
```
```ts
import {
  findOrCreateGuestPersona,
  signChatGuestToken,
  verifyLtiToken,
} from '../../../lib/server/ltiGuest'
```
🛠️ Refactor suggestion | 🟠 Major
Use @/ path alias instead of relative import.
Proposed fix:

```diff
 import {
   findOrCreateGuestPersona,
   signChatGuestToken,
   verifyLtiToken,
-} from '../../../lib/server/ltiGuest'
+} from '@/src/lib/server/ltiGuest'
```

As per coding guidelines: "Use @ and ~ path aliases for imports."
```ts
console.info(LTI_AUTH_LOG_PREFIX, 'LTI token verified', {
  ltiSub: ltiPayload.sub,
  ltiScope: ltiPayload.scope,
  courseId,
  chatbotId,
})
```
Consider whether logging ltiSub poses a compliance risk.
ltiPayload.sub is the LMS-issued subject identifier. While pseudonymous, it could be correlated back to a specific student. Depending on your privacy posture (GDPR, FERPA), logging this value in plaintext may be undesirable. Consider hashing it or omitting it from info-level logs.
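A minimal sketch of the suggested pseudonymization, assuming an HMAC keyed by a server-side secret. The helper name and the secret value are hypothetical; the review's proposed `LTI_SUB_HASH_SECRET` env var is not part of the PR:

```typescript
import { createHmac } from 'node:crypto'

// Hypothetical helper: replace the raw LMS subject with a keyed,
// non-reversible digest before it reaches info-level logs.
function pseudonymizeForLog(ltiSub: string, secret: string): string {
  // Truncated HMAC: stable per user, but useless for correlation
  // without the server-side key.
  return createHmac('sha256', secret).update(ltiSub).digest('hex').slice(0, 16)
}

const hashedSub = pseudonymizeForLog('lms-user-42', 'log-secret')
console.info('[lti-auth]', 'LTI token verified', {
  ltiSub: hashedSub,
  courseId: 'course-1',
  chatbotId: 'bot-1',
})
```

A keyed hash (rather than a plain SHA-256) matters here: unkeyed hashes of low-entropy identifiers can be brute-forced back to the original subject.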
```ts
const redirectTo = Array.isArray(redirectToParam)
  ? redirectToParam[0]
  : redirectToParam
const isLtiContext = resolvedSearchParams.lti === '1'
```
The LTI-context conditional rendering is unreachable dead code.
Lines 29-45 that display LTI-specific messaging can never execute because the lti=1 query parameter is never set by any authentication flow. The middleware redirects to /noLogin with only the redirectTo parameter (lines 77-84 and 96-103), and the LTI route returns JSON errors on failures rather than redirecting to /noLogin.
To enable the LTI messaging path, append lti=1 to the /noLogin redirect URL when appropriate—either in the middleware (when a token verification fails in an LTI context) or in the LTI route's error handlers.
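The suggested change can be sketched with the WHATWG URL API available in Next.js middleware. The helper name is hypothetical, and how the middleware determines `isLtiContext` (e.g. from a failed `chat_participant_token`) is an assumption; the resulting URL would be passed to `NextResponse.redirect`:

```typescript
// Sketch of preserving LTI context on the /noLogin redirect.
function buildNoLoginRedirect(
  origin: string,
  redirectTo: string,
  isLtiContext: boolean
): string {
  const url = new URL('/noLogin', origin)
  url.searchParams.set('redirectTo', redirectTo)
  if (isLtiContext) {
    // Makes the page's `resolvedSearchParams.lti === '1'` branch reachable.
    url.searchParams.set('lti', '1')
  }
  return url.toString()
}

console.log(buildNoLoginRedirect('https://chat.example.com', '/chatbots/b1', true))
```

Using `URLSearchParams` keeps `redirectTo` correctly percent-encoded, so the page can recover it verbatim alongside the new `lti` flag.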
```ts
export function deriveGuestSsoId(ltiSub: string, courseId: string): string {
  const seed = getChatGuestSeed()
  const hmac = createHmac('sha256', seed)
    .update(`${ltiSub}:${courseId}`)
    .digest('base64url')
  return `${GUEST_SSO_PREFIX}${hmac}`
}
```
Potential HMAC input collision in deriveGuestSsoId.
If ltiSub contains a : character, different (ltiSub, courseId) pairs could produce the same HMAC input. For example, ("a:b", "c") and ("a", "b:c") would both yield "a:b:c". Consider using a non-ambiguous separator or length-prefixing.
Proposed fix:

```diff
 export function deriveGuestSsoId(ltiSub: string, courseId: string): string {
   const seed = getChatGuestSeed()
   const hmac = createHmac('sha256', seed)
-    .update(`${ltiSub}:${courseId}`)
+    .update(`${ltiSub.length}:${ltiSub}:${courseId}`)
     .digest('base64url')
   return `${GUEST_SSO_PREFIX}${hmac}`
 }
```
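The ambiguity is easy to demonstrate, along with why length-prefixing resolves it. The values here are toy inputs chosen only to show the collision:

```typescript
// With a plain ':' join, two distinct (ltiSub, courseId) pairs serialize to
// the same HMAC input:
const joined1 = `${'a:b'}:${'c'}`
const joined2 = `${'a'}:${'b:c'}`
console.log(joined1 === joined2) // both are "a:b:c"

// Length-prefixing the first field makes the encoding unambiguous, because a
// parser can recover ltiSub exactly before reading courseId:
const encode = (ltiSub: string, courseId: string) =>
  `${ltiSub.length}:${ltiSub}:${courseId}`
console.log(encode('a:b', 'c')) // "3:a:b:c"
console.log(encode('a', 'b:c')) // "1:a:b:c"
```

Since the length prefix determines exactly where `ltiSub` ends, no two distinct pairs can map to the same HMAC input.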
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| export function deriveGuestSsoId(ltiSub: string, courseId: string): string { | |
| const seed = getChatGuestSeed() | |
| const hmac = createHmac('sha256', seed) | |
| .update(`${ltiSub}:${courseId}`) | |
| .digest('base64url') | |
| return `${GUEST_SSO_PREFIX}${hmac}` | |
| } | |
| export function deriveGuestSsoId(ltiSub: string, courseId: string): string { | |
| const seed = getChatGuestSeed() | |
| const hmac = createHmac('sha256', seed) | |
| .update(`${ltiSub.length}:${ltiSub}:${courseId}`) | |
| .digest('base64url') | |
| return `${GUEST_SSO_PREFIX}${hmac}` | |
| } |
```ts
accounts: {
  create: {
    ssoId: guestSsoId,
    ssoType: ltiScope,
    type: 'lti_guest',
  },
},
```
type: 'lti_guest' does not match the documented valid values for ParticipantAccount.type.
The type field is a String with documented valid values of "sso" and "affiliation" (per the schema comment). The value 'lti_guest' is not in this list. While the field is not an enum and won't cause a database error, this represents a logic error that contradicts the schema design. Update the value to one of the documented valid types or extend the schema comment to include 'lti_guest' as a valid value.
Pull request overview
Adds an anonymous (LTI-guest) authentication path to the chat app so users can launch chatbots via LTI without creating a Klicker account, while attempting to enforce cost controls by restricting anonymous users to fallback models.
Changes:

- Introduces LTI guest persona + JWT helpers (`ltiGuest.ts`) and a new `/auth/lti` entry route.
- Updates auth/middleware and API guards to accept `chat_participant_token` and propagate `authMode`.
- Restricts anonymous users to fallback models in credits + chat endpoints; updates `/noLogin` messaging.
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 10 comments.
| File | Description |
|---|---|
| apps/chat/src/middleware.ts | Adds Edge middleware check for chat_participant_token before participant_token. |
| apps/chat/src/lib/server/ltiGuest.ts | Implements guest persona creation and JWT verification for LTI + guest tokens. |
| apps/chat/src/lib/server/apiGuards.ts | Extends auth extraction to return authMode and accept guest tokens. |
| apps/chat/src/app/auth/lti/route.ts | Adds LTI launch route to verify LTI token, create guest persona, set cookie, redirect. |
| apps/chat/src/app/api/chatbots/[chatbotId]/credits/route.ts | Filters available models to fallback-only for anonymous users and returns authMode. |
| apps/chat/src/app/api/chatbots/[chatbotId]/chat/route.ts | Forces fallback model selection for anonymous users. |
| apps/chat/src/app/noLogin/page.tsx | Adds LTI-context messaging based on lti query param. |
Comments suppressed due to low confidence (1)
apps/chat/src/app/api/chatbots/[chatbotId]/chat/route.ts:782
- Anonymous-user model enforcement is applied before the `!chatbot.modelSelection` automatic selection logic, but `selectedModel` is overwritten to `getAutomaticModelId(userCredits)` afterwards. That means anonymous users can still end up on the primary (non-fallback) model whenever `modelSelection` is disabled and credits are > 0. Apply the anonymous fallback restriction after all model-selection branches (or make `getAutomaticModelId` aware of `authMode`) so the cost-control guarantee holds regardless of `modelSelection` configuration.
```ts
// Anonymous (LTI guest) users are restricted to fallback models only.
// This enforces the cost constraint from the semi-anonymous LTI mode design.
if (authMode === 'anonymous') {
  const fallbackModel = modelRegistry.find((m) => m.fallback)
  if (fallbackModel) {
    selectedModel = fallbackModel.id
  }
}

// Override model selection if modelSelection is disabled
let userCredits: { current: number; total: number } | null = null
if (!chatbot.modelSelection) {
  // Get current user credits to determine automatic model selection
  userCredits = await CreditsService.getUserCredits(participantId, chatbotId)
  selectedModel = getAutomaticModelId(userCredits)
}
```
```ts
// 2. Create new guest participant + account + participation
const randomSuffix = randomBytes(8).toString('hex')
const randomPassword = randomBytes(16).toString('hex')

const newParticipant = await prisma.participant.create({
  data: {
    username: `guest_${randomSuffix}`,
    password: randomPassword,
    email: null,
    isEmailValid: false,
    isSSOAccount: true,
    isProfilePublic: false,
    isActive: true,
```
Guest participants are created with password: randomPassword stored as a plain hex string. The main login flow uses bcrypt.compare(password, participant.password) without a try/catch; if a guest username is ever used in the login form, bcrypt.compare can throw on a non-bcrypt hash and cause a 500. Store a valid bcrypt hash here (or set a known-disabled value and ensure username login cannot select SSO accounts) to avoid triggering errors and to keep password storage consistent with the rest of the system.
}
if (!chatbot) {
  return NextResponse.json({ error: 'Chatbot not found' }, { status: 404 })
}
courseId and chatbotId are validated to exist, but the code doesn’t verify that chatbot.courseId === courseId. This can lead to confusing behavior (creating participation for one course but redirecting to a chatbot in another), and it becomes more important if you later bind course context to the LTI token. Add an explicit check and return a 400/403 when the chatbot is not part of the provided course.
  }
}
+ if (chatbot.courseId !== courseId) {
+   console.error(LTI_AUTH_LOG_PREFIX, 'Chatbot does not belong to course', {
+     courseId,
+     chatbotId,
+     chatbotCourseId: chatbot.courseId,
+   })
+   return NextResponse.json(
+     { error: 'Chatbot does not belong to the specified course' },
+     { status: 403 }
+   )
+ }
const fallbackModel = modelRegistry.find((m) => m.fallback)
if (fallbackModel) {
  selectedModel = fallbackModel.id
}
For anonymous users you pick the first global fallback model from modelRegistry, but this may not be allowed by the chatbot’s allowedModelIds allow-list. In that case the request will fail later with “Model not available for this chatbot”. Consider selecting a fallback model that also satisfies the allow-list (and return a clearer error if no fallback model is permitted for that chatbot).
- const fallbackModel = modelRegistry.find((m) => m.fallback)
- if (fallbackModel) {
-   selectedModel = fallbackModel.id
- }
+ const fallbackModel = modelRegistry.find(
+   (m) => m.fallback && (!allowedIds || allowedIds.has(m.id))
+ )
+ if (!fallbackModel) {
+   return NextResponse.json(
+     { error: 'No fallback model is available for this chatbot' },
+     { status: 400 }
+   )
+ }
+ selectedModel = fallbackModel.id
export async function verifyLtiToken(token: string): Promise<LtiTokenPayload> {
  const appSecret = process.env.APP_SECRET
  if (!appSecret) {
    throw new Error('APP_SECRET is required')
  }

  const payload = await verifyJWT(token, appSecret)

  if (
    !payload.sub ||
    (payload.scope !== 'LTI1.3' && payload.scope !== 'LTI1.1')
  ) {
    throw new Error('Invalid LTI token: missing sub or wrong scope')
  }
The LTI JWT validation does not check the token issuer. Since apps/lti signs these tokens with issuer: APP_ORIGIN_LTI, verifyLtiToken() should pass the expected issuer to verifyJWT(..., { issuer: process.env.APP_ORIGIN_LTI }) (and fail if it’s not configured). Without this, any HS256 token signed with APP_SECRET and containing a matching scope/sub could be accepted as an LTI launch token.
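The effect of pinning the issuer can be shown with a minimal, self-contained HS256 sign/verify pair. This is an illustration of the review point only, not the project's `verifyJWT`; the claim names (`iss`, `sub`, `scope`) follow the snippet above, and a real implementation should also check `exp`.

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto'

// Minimal HS256 signer for the demo (the real tokens come from apps/lti).
function signHs256(payload: Record<string, unknown>, secret: string): string {
  const enc = (obj: unknown) =>
    Buffer.from(JSON.stringify(obj)).toString('base64url')
  const head = enc({ alg: 'HS256', typ: 'JWT' })
  const body = enc(payload)
  const sig = createHmac('sha256', secret)
    .update(`${head}.${body}`)
    .digest('base64url')
  return `${head}.${body}.${sig}`
}

// Verify the signature, then reject any token whose iss claim does not
// match the expected issuer — even a correctly signed one.
function verifyHs256(
  token: string,
  secret: string,
  expectedIssuer: string
): Record<string, unknown> {
  const [head, body, sig] = token.split('.')
  const expected = createHmac('sha256', secret).update(`${head}.${body}`).digest()
  const given = Buffer.from(sig ?? '', 'base64url')
  if (given.length !== expected.length || !timingSafeEqual(given, expected)) {
    throw new Error('Invalid signature')
  }
  const payload = JSON.parse(Buffer.from(body, 'base64url').toString('utf8'))
  if (payload.iss !== expectedIssuer) {
    throw new Error('Unexpected issuer')
  }
  return payload
}
```

Without the `iss` check, any HS256 token signed with the shared `APP_SECRET` would pass; with it, only tokens minted by the LTI app's configured origin are accepted.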
export interface LtiTokenPayload {
  sub: string
  email?: string
  scope: string // 'LTI1.3' or 'LTI1.1'
}

/**
 * Verifies the short-lived LTI JWT issued by `apps/lti` on LTI launch.
 * This token is signed with `APP_SECRET` and has a 5-minute expiry.
 */
export async function verifyLtiToken(token: string): Promise<LtiTokenPayload> {
  const appSecret = process.env.APP_SECRET
  if (!appSecret) {
    throw new Error('APP_SECRET is required')
  }

  const payload = await verifyJWT(token, appSecret)

  if (
    !payload.sub ||
    (payload.scope !== 'LTI1.3' && payload.scope !== 'LTI1.1')
  ) {
    throw new Error('Invalid LTI token: missing sub or wrong scope')
  }

  return {
    sub: payload.sub,
    email: payload.email,
    scope: payload.scope,
  }
verifyLtiToken() only validates { sub, scope } and the /auth/lti route trusts courseId/chatbotId from query params. Since the JWT payload does not bind the launch to a specific course/chatbot, a valid (but stolen/replayed) LTI token could potentially be reused with arbitrary courseId/chatbotId values to create participations and gain access outside the intended context. Consider including courseId (and ideally chatbotId or an LTI deployment/resource identifier) in the signed LTI JWT and validating it here, or otherwise deriving/validating the context from a trusted server-side mapping rather than query parameters.
} catch (error) {
  console.error(LTI_AUTH_LOG_PREFIX, 'LTI JWT verification failed:', error)
  return NextResponse.json(
    { error: 'Invalid or expired LTI token' },
    { status: 401 }
  )
The new /noLogin page supports an lti=1 context message, but this route returns JSON errors on invalid/expired LTI JWTs instead of redirecting to /noLogin?lti=1 (and preserving redirectTo). If the intended UX is to guide LMS users to re-launch, consider redirecting to the /noLogin page (or another user-facing error page) rather than returning JSON for these failures.
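Building the redirect target the comment suggests is straightforward with the WHATWG `URL` API. A small sketch — the `lti=1` flag and `redirectTo` parameter names come from this PR's description, so treat the exact query contract as an assumption:

```typescript
// Construct a user-facing /noLogin URL instead of returning a JSON error,
// preserving the original target so a re-launch can resume it.
function noLoginRedirectUrl(origin: string, redirectTo?: string): string {
  const url = new URL('/noLogin', origin)
  url.searchParams.set('lti', '1')
  if (redirectTo) {
    url.searchParams.set('redirectTo', redirectTo)
  }
  return url.toString()
}
```

In the route handler this would feed something like `NextResponse.redirect(noLoginRedirectUrl(...))` in place of the JSON response above.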
response.cookies.set('chat_participant_token', chatGuestToken, {
  httpOnly: true,
  secure: isProduction,
  sameSite: 'lax',
  path: '/',
  maxAge: 60 * 60 * 24 * 14, // 14 days
  // Do NOT set domain — host-only cookie ensures it never leaves the chat subdomain
})
The chat_participant_token cookie is set with sameSite: 'lax'. If the chat app is launched inside an LMS iframe (common for LTI), browsers will generally not send SameSite=Lax cookies in that third-party iframe context, breaking the guest session on subsequent requests. Consider using sameSite: 'none' (and secure: true) in non-dev environments, similar to the existing LTI cookie handling in apps/lti.
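The suggested fix amounts to switching the cookie attributes by environment. A hedged sketch — `isDev` is an assumed flag mirroring the route's `isProduction` check, and the option shape follows Next.js cookie options:

```typescript
type SameSite = 'lax' | 'none' | 'strict'

// Inside an LMS iframe the chat app is a third-party context, so the
// production cookie must be SameSite=None; browsers require Secure to
// accompany SameSite=None. Dev (plain http) keeps Lax.
function guestCookieOptions(isDev: boolean) {
  const sameSite: SameSite = isDev ? 'lax' : 'none'
  return {
    httpOnly: true,
    secure: !isDev, // mandatory whenever sameSite is 'none'
    sameSite,
    path: '/',
    maxAge: 60 * 60 * 24 * 14, // 14 days, matching the route above
  }
}
```

This mirrors the existing LTI cookie handling in apps/lti referenced by the comment.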
  return NextResponse.json({
    ...credits,
-   availableModels,
+   availableModels: models,
    automaticModelId: getAutomaticModelId(credits),
    authMode,
  })
For authMode === 'anonymous', availableModels is filtered to fallback-only, but automaticModelId is still computed from credits and can point to a non-fallback primary model. This is inconsistent with the returned availableModels and can cause the client to request a disallowed model by default. Consider returning an automaticModelId that is guaranteed to be one of the (possibly filtered) availableModels for anonymous users (e.g., the fallback model id).
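One way to guarantee the invariant is to clamp the credit-based choice to the already-filtered list. A hedged sketch with hypothetical names (the real code would pass the output of `getAutomaticModelId(credits)` as `creditBasedId`):

```typescript
interface Model {
  id: string
  fallback?: boolean
}

// Return the credit-based model only if it survived the anonymous
// filtering; otherwise fall back to the first model the caller is
// actually allowed to use.
function automaticModelIdFor(
  availableModels: Model[],
  creditBasedId: string
): string {
  return availableModels.some((m) => m.id === creditBasedId)
    ? creditBasedId
    : availableModels[0].id
}
```

For an anonymous user whose `availableModels` contains only the fallback, the automatic id can then never point at a primary model.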
function getChatGuestSecretForMiddleware(): string | null {
  if (process.env.APP_CHAT_GUEST_SECRET) {
    return process.env.APP_CHAT_GUEST_SECRET
  }
  // For the middleware we cannot easily replicate the HMAC derivation from
  // ltiGuest.ts (which uses Node crypto). Instead, when APP_CHAT_GUEST_SECRET
  // is not set, fall back to a convention: the hex HMAC is pre-computed at
  // startup by the API routes. For middleware, we accept that in dev
  // environments without APP_CHAT_GUEST_SECRET, the middleware will fall
  // through to the participant_token check. The API route guards in
  // apiGuards.ts handle the authoritative verification.
  return null
}
getChatGuestSecretForMiddleware() currently returns null unless APP_CHAT_GUEST_SECRET is set, but the comment and verifyChatGuestTokenMiddleware() behavior imply a derived fallback (like ltiGuest.ts does). As written, anonymous LTI guests will always fail middleware auth in any environment that relies on the derived fallback, leading to redirects to /noLogin even with a valid chat_participant_token. Consider deriving the fallback secret in Edge using WebCrypto (HMAC-SHA-256 of APP_SECRET with the same 'chat-guest-secret' label) or updating the middleware/comments to hard-require APP_CHAT_GUEST_SECRET and fail loudly in production misconfiguration.
const result = await jwtVerify(token, new TextEncoder().encode(secret))
return (
  typeof result.payload.sub === 'string' &&
  result.payload.scope === 'CHAT_GUEST'
)
jwtVerify() is called without constraining accepted algorithms. To reduce token-confusion risk, explicitly pass algorithms: ['HS256'] (and optionally expected issuer/aud if you add them to the token) when verifying chat_participant_token.
… fix, and frontend differentiation
- Validate chatbot belongs to the requested course in /auth/lti (prevents cross-course access)
- Fix middleware Edge runtime: derive chat guest secret via Web Crypto API instead of returning null
- Store authMode ('account'|'anonymous') in settingsStore from credits endpoint
- Show "Guest" badge in sidebar for anonymous LTI users
- Adapt settings panel: show static model info instead of selector for anonymous users
- Differentiate credits messaging for anonymous vs regular users
https://claude.ai/code/session_01RKzoRiYSNuVRB5DDevf2xk
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/chat/src/stores/settingsStore.ts (1)
212-244: 🛠️ Refactor suggestion | 🟠 Major
`authMode` integration in `loadCredits` looks correct, but the `catch` block lacks fallback state.

The defensive defaulting to `'account'` on line 213 is solid. However, the `catch` block (lines 246–248) only logs the error and doesn't reset `authMode` (or any other state) to a safe default. If the credits fetch fails on a guest session where `authMode` was previously set to `'anonymous'`, the stale value persists in the store.

This is a pre-existing gap in the `catch` block (it was already missing fallback state before this PR), but the new `authMode` field makes it more impactful: a stale `'anonymous'` value could hide the model selector and show incorrect messaging.

Proposed fix

  } catch (error) {
    console.error('Error loading credits:', error)
+   set({
+     authMode: 'account',
+   })
  }

As per coding guidelines: "In Zustand stores, async actions must set fallback state in `catch` blocks to prevent UI stuck in loading/broken state on network errors."
🧹 Nitpick comments (2)
apps/chat/src/components/app-sidebar.tsx (1)
51-55: Missing spacing between chatbot name and Guest badge.

When `chatbotName` is present, the badge renders immediately adjacent to the name text with no gap. Consider adding a left margin to the badge.

Proposed fix

  {authMode === 'anonymous' && (
-   <span className="inline-flex shrink-0 items-center rounded-full bg-blue-100 px-1.5 py-0.5 text-[10px] font-medium text-blue-700">
+   <span className="ml-1.5 inline-flex shrink-0 items-center rounded-full bg-blue-100 px-1.5 py-0.5 text-[10px] font-medium text-blue-700">
      Guest
    </span>
  )}

apps/chat/src/components/settings-panel.tsx (1)
99-108: Reuse existing `selectedModelOption` instead of repeating the `find` call.

`selectedModelOption` is already computed on lines 42–44. The `.find()` on line 102 duplicates that lookup.

Proposed fix

  {authMode === 'anonymous' ? (
    <div className="rounded-md border bg-blue-50 px-3 py-2 text-sm">
      <p className="font-medium text-blue-800">
-       {modelOptions.find((option) => option.id === selectedModel)
-         ?.name || selectedModel}
+       {selectedModelOption?.name || selectedModel}
      </p>
      <p className="text-muted-foreground mt-1 text-xs">
        Guest access uses the standard model.
      </p>
    </div>
Design document for extending the LTI semi-anonymous auth pattern to support any third-party embedding app via a launch token API, EmbedIntegration model, and shared guest identity core. https://claude.ai/code/session_01RKzoRiYSNuVRB5DDevf2xk
Summary
This PR implements LTI guest authentication for the chat app, enabling anonymous access to chatbots via LTI launch while maintaining cost controls and security boundaries.
Key Changes
New LTI Guest Authentication System
- ltiGuest.ts: Core library for LTI guest persona management
  - deriveGuestSsoId(): Deterministic, non-reversible guest SSO ID derivation using HMAC
  - findOrCreateGuestPersona(): Creates or retrieves anonymous guest participants with random credentials
  - signChatGuestToken() / verifyChatGuestToken(): JWT operations for chat_participant_token (14-day expiry)
  - verifyLtiToken(): Validates short-lived LTI tokens from the LTI app (5-minute expiry)
  - Separate signing secret (APP_CHAT_GUEST_SECRET) for guest tokens to prevent cross-service token reuse

LTI Authentication Route

- auth/lti/route.ts: New GET endpoint handling the LTI launch flow
  - Sets the chat_participant_token cookie (host-only, 14-day max-age)

Middleware Updates

- Added verifyChatGuestTokenMiddleware() to check chat_participant_token before participant_token
- Exempts the /auth/lti route from auth checks

API Guard Enhancements

- getParticipantId() now returns authMode: 'account' | 'anonymous'
- withChatbotAuth() propagates authMode to route handlers

Cost Control for Anonymous Users

- credits/route.ts: Anonymous users restricted to fallback models only
- chat/route.ts: Enforces fallback model selection for anonymous participants

UX Improvements

- noLogin/page.tsx: Enhanced error messaging for expired/invalid LTI sessions

Implementation Details

- Guest SSO IDs are derived with an HMAC, so the original LTI sub cannot be recovered from the stored value
- chat_participant_token uses APP_CHAT_GUEST_SECRET (or an HMAC-derived fallback) to prevent token confusion with the main API
- chat_participant_token is set without a domain to ensure it never leaves the chat subdomain

https://claude.ai/code/session_01RKzoRiYSNuVRB5DDevf2xk
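As a sketch of the HMAC-based guest ID derivation this PR describes — the function name matches the PR, but the label and input format here are assumptions; the authoritative logic lives in ltiGuest.ts:

```typescript
import { createHmac } from 'node:crypto'

// Deterministic, non-reversible guest SSO ID: an HMAC keyed with the app
// secret over a course-scoped LTI subject. The same launch always yields
// the same persona, and the raw sub cannot be recovered from the output.
function deriveGuestSsoId(
  appSecret: string,
  ltiSub: string,
  courseId: string
): string {
  return createHmac('sha256', appSecret)
    .update(`lti-guest:${courseId}:${ltiSub}`)
    .digest('hex')
}
```

Scoping by course means the same LMS user maps to distinct guest personas in different courses, which limits cross-course linkability.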
Summary by CodeRabbit