[Feat] Detailed Discord webhook logging — internal tracing of the roadmap create/update pipeline #70
Conversation
- job_start log to mark when a create/update request begins
- llm_router automatically traces every OpenAI API call (stage, model, tier, latency)
- On skeleton completion, include a slot summary and whether repair/autofix ran
- Aggregate per-stage fallback statistics for Google Places searches
- Split the finalize stage into three steps: desc/time/summary
- Add detailed logs for intent analysis and mutate search
- Automatically truncate detail-level entries when the payload exceeds 4,000 characters
- Patch notify_callback_failure as a no-op to avoid polluting call_count
@CodeRabbit review
✅ Actions performed: Review triggered.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
app/graph/roadmap/nodes/places.py (1)
Lines 147-165: ⚠️ Potential issue | 🟡 Minor

Slots without a region_bbox are currently mis-attributed in fb_stage.

When get_region_bbox(region) returns None, the first search is already unfiltered, but because fallback_stage defaults to "restriction", a success is still counted as a restriction hit in fb_stats and the places_search detail log is skipped. That makes the new fallback statistics unreliable, so either split the missing-bbox case into its own stage or set the initial value based on whether the region has a bbox. Suggested fix:

```diff
- fallback_stage = "restriction"
+ fallback_stage = "pending"
  restriction_used = False
  bias_used = False
  unfiltered_used = False
  region_bbox = get_region_bbox(region)
  if region_bbox is None:
      geo_missing_region_bbox = True
+     fallback_stage = "no_region_bbox"
+ else:
+     fallback_stage = "restriction"
```

Also applies to: 265-271, 280-286, 355-359
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/graph/roadmap/nodes/places.py` around lines 147 - 165, In search_for_slot, when get_region_bbox(region) returns None the initial search is already unfiltered but fallback_stage is initialized to "restriction", causing fb_stats and places_search detail logs to misattribute the hit; change the initialization so fallback_stage reflects whether region_bbox is present (e.g., set fallback_stage = "unfiltered" or a distinct "missing_region" when region_bbox is None) and/or explicitly set fallback_stage immediately after computing region_bbox; update the same pattern in the other similar blocks (the later search branches that also call get_region_bbox and use fallback_stage/fb_stats) so that fb_stats counts and places_search detail logging use the correct stage when bbox is missing.
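As a self-contained illustration of the fix above, the stage initialization could be factored into a helper. This is a sketch, not the repo's code: `REGION_BBOXES`, `get_region_bbox`, and `init_fallback_stage` here are hypothetical stand-ins for the real lookup in `places.py`.

```python
# Hypothetical stand-in for the real region lookup in places.py.
REGION_BBOXES = {"seoul": (126.76, 37.41, 127.18, 37.70)}

def get_region_bbox(region: str):
    """Return the (min_lng, min_lat, max_lng, max_lat) bbox, or None."""
    return REGION_BBOXES.get(region)

def init_fallback_stage(region: str):
    """Return (bbox, initial_stage) so fb_stats attributes hits correctly.

    Without a bbox the very first search is already unfiltered, so it
    must not start in the "restriction" stage.
    """
    region_bbox = get_region_bbox(region)
    if region_bbox is None:
        return None, "no_region_bbox"
    return region_bbox, "restriction"
```

Starting from a distinct stage keeps the per-stage counters honest: a hit with no bbox lands in its own bucket instead of inflating the restriction count.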
🧹 Nitpick comments (1)
app/core/job_log_context.py (1)
Lines 36-38: It would be safer to prevent extra from overwriting reserved fields.

As of this PR, level is a reserved field that controls log-truncation behavior, but entry.update(extra) runs afterwards, so a value like extra={"level": ...} can silently overwrite the explicit level argument. Reserving at least stage, message, elapsed_ms, and level is the safer option. Suggested fix:

```diff
- entry: dict[str, Any] = {"stage": stage, "message": message, "elapsed_ms": elapsed_ms, "level": level}
- if extra:
-     entry.update(extra)
+ entry: dict[str, Any] = {"stage": stage, "message": message, "elapsed_ms": elapsed_ms, "level": level}
+ if extra:
+     reserved = {"stage", "message", "elapsed_ms", "level"}
+     entry.update({key: value for key, value in extra.items() if key not in reserved})
```
Verify each finding against the current code and only fix it if needed. In `@app/core/job_log_context.py` around lines 36 - 38, The current code unconditionally does entry.update(extra) which allows callers to overwrite reserved fields; modify the code that builds and updates the log `entry` (the block that creates entry: dict[str, Any] = {"stage": stage, "message": message, "elapsed_ms": elapsed_ms, "level": level}) to filter `extra` so any keys in the reserved set {"stage", "message", "elapsed_ms", "level"} are dropped before merging (e.g. build a new dict from extra excluding those keys or pop those keys from extra) and then update `entry` with the filtered dict to ensure reserved fields cannot be overwritten.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/core/llm_router.py`:
- Around line 202-207: The llm_error paths currently log only the exception
type; update every error logging call that uses append_job_log with event
"llm_error" (and related error places noted around the append_job_log calls) to
include the same minimum common fields as the success path: include latency_ms
in the extra dict, include tier (use tier.value if tier else 'N/A') and fallback
(true/false) in the message or extra so webhook traces show which tier/fallback
path and latency caused the failure; keep the existing stage and selected_model
fields consistent with the success log format used by append_job_log("llm",
...).
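What this prompt asks for might look roughly like the following. It is a sketch under stated assumptions: `log_llm_error` is a hypothetical helper, and the `append_job_log` signature and field names mirror the success-path format described above.

```python
import time

def log_llm_error(append_job_log, stage, selected_model, tier, fallback, started_at, exc):
    """Log an LLM failure with the same minimum fields as the success path."""
    latency_ms = int((time.monotonic() - started_at) * 1000)
    append_job_log(
        "llm",
        f"llm_error stage={stage} model={selected_model} "
        f"tier={tier.value if tier else 'N/A'} fallback={fallback} "
        f"err={type(exc).__name__}",
        extra={"latency_ms": latency_ms, "level": "warn"},
    )
```

Carrying latency, tier, and fallback on the error path means a webhook trace alone is enough to tell which routing path failed and how long it hung before failing.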
In `@app/services/chat_service.py`:
- Around line 127-128: init_job_log and append_job_log("job_start", f"type=chat
job_id={request.job_id} query={request.user_query[:60]}") currently log part of
the raw user_query as-is, which risks leaking sensitive information. Instead of
recording the raw request.user_query, log only its length (len), a safe hash
(e.g., SHA-256), and a summary of the already-parsed intent or entities, e.g.
append_job_log("job_start", f"type=chat job_id={request.job_id}
query_len={len(request.user_query)} query_hash={hash}
intent={intent_summary}"). Then find the other log patterns used in the same
file (strings containing kw= or q=) and replace them the same way so the raw
text is never recorded (reference symbols: init_job_log, append_job_log,
request.user_query, "job_start", any "kw=" / "q=" usages).
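The suggested replacement can be sketched as a small helper. The name `safe_query_fields` and the 8-character prefix follow this first-round suggestion; a later comment in this thread tightens the hash to an HMAC.

```python
import hashlib

def safe_query_fields(user_query: str) -> dict[str, object]:
    """Derive log-safe metadata from a user query without storing its text."""
    digest = hashlib.sha256(user_query.encode("utf-8")).hexdigest()
    return {"query_len": len(user_query), "query_hash": digest[:8]}
```

The length and hash are enough to correlate repeated queries across log entries without ever writing the original text.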
In `@app/services/google_places_service.py`:
- Around line 107-113: The detail log currently includes the raw query substring
(query[:50]) which may leak sensitive user text; update the append_job_log call
in google_places_service (around variables query, geo, price_tag, min_rating) to
omit the raw query and instead log safe metadata—e.g. include q_len (len(query))
or a short hash of query and the existing derived fields (geo, price_tag,
min_rating) so the original query text is never written to the detail log.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3aeabd3a-c0cd-4b8b-963f-a0a321ce759b
📒 Files selected for processing (12)
- app/core/job_log_context.py
- app/core/llm_router.py
- app/graph/chat/nodes/analyze_intent.py
- app/graph/chat/nodes/mutate.py
- app/graph/roadmap/nodes/finalize.py
- app/graph/roadmap/nodes/places.py
- app/graph/roadmap/nodes/skeleton.py
- app/services/chat_service.py
- app/services/generate_service.py
- app/services/google_places_service.py
- app/services/webhook_notification.py
- tests/test_callback_delivery.py
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/services/chat_service.py`:
- Around line 129-132: Replace the weak 8-char sha256 prefix used for query_hash
with an HMAC-based digest using a server-side secret and a longer prefix:
compute HMAC-SHA256 over request.user_query with a secret loaded from config/env
(fail loudly if missing), take a longer hex substring (e.g., 32 chars) and use
that value for query_hash in the append_job_log call; update references to
query_hash and ensure the secret access is performed securely (not checked into
source) so the change affects the calculation used in
append_job_log("job_start", ...).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: fd3bdb93-7e34-40c6-b9dd-81f1bddc3f45
📒 Files selected for processing (3)
- app/core/llm_router.py
- app/services/chat_service.py
- app/services/google_places_service.py
🚧 Files skipped from review as they are similar to previous changes (2)
- app/services/google_places_service.py
- app/core/llm_router.py
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/core/config.py`:
- Line 18: You made HMAC_SECRET a required setting (HMAC_SECRET in Settings), so
update the sample env file (.env.example) to include HMAC_SECRET with a clear
placeholder value and a short comment; add a line like
HMAC_SECRET=your_hmac_secret_here (or similar) so new deployments won’t fail
Settings initialization, and ensure any README or docs referencing required env
vars mention HMAC_SECRET as mandatory.
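The requested `.env.example` entry might look like this (placeholder value and comment wording are illustrative):

```env
# Required: secret key for HMAC-SHA256 hashing of user queries in job logs.
# Settings initialization fails if this is missing.
HMAC_SECRET=your_hmac_secret_here
```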
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 26d1ace2-0039-4051-9c9c-cd6d3a90bd82
📒 Files selected for processing (8)
- app/core/config.py
- app/services/chat_service.py
- app/services/google_places_service.py
- tests/test_callback_delivery.py
- tests/test_main.py
- tests/test_readiness.py
- tests/test_recommend_service.py
- tests/test_timeout_policy.py
✅ Files skipped from review due to trivial changes (2)
- tests/test_readiness.py
- tests/test_recommend_service.py
🚧 Files skipped from review as they are similar to previous changes (3)
- tests/test_callback_delivery.py
- app/services/chat_service.py
- app/services/google_places_service.py
@CodeRabbit ignore

✅ Actions performed: Reviews paused.
1. Overview
2. Work done
3. AI usage and verification
   - Contains AI-generated code
   - 100% hand-written
   - Verification: `uv run pytest` (35 passed)
4. Screenshots (optional)
5. Checklist
   - Ran `uv run pre-commit run --all-files` and it passed?

Summary by CodeRabbit
- New features
- Improvements
- Tests
- Docs/environment