Commit 153cd2d

sidneyswift, sweetmantech, and claude authored
feat: add modular content creation primitive endpoints (#390)
* feat: add modular content creation primitive endpoints

  New endpoints under POST /api/content/create/:
  - /image — triggers create-image task
  - /video — triggers create-video task
  - /audio — triggers create-audio task
  - /render — triggers create-render task
  - /upscale — triggers create-upscale task
  - /text — inline LLM text generation (no task)

  DRY shared factories eliminate boilerplate: triggerPrimitive (one function replaces 5 trigger files), validatePrimitiveBody (shared auth + Zod parsing), createPrimitiveHandler (factory for async handlers), and createPrimitiveRoute (shared CORS + dynamic config). The existing POST /api/content/create (V1 full pipeline) is untouched.

  Made-with: Cursor

* fix: inline route segment config (Next.js 16 requires static analysis)

  Next.js 16 Turbopack requires dynamic, fetchCache, and revalidate to be declared directly in route files — they cannot be re-exported from shared modules. Moved these exports inline into each route file.

* fix: address CodeRabbit review comments

  Added JSDoc to all 6 route files per API route convention, added numeric bounds (.nonnegative/.positive) to timing fields in schemas, added a 30s AbortController timeout to the text handler's upstream fetch, and ran prettier to fix formatting.

* fix: resolve all lint errors in new primitive files

  Added @param descriptions and @returns to all JSDoc blocks, and replaced `as any` with proper types in tests (satisfies AuthContext, Awaited<ReturnType<typeof triggerPrimitive>>). All new files pass lint and format checks.

* refactor: make primitives run inline instead of triggering tasks

  Image, video, audio, text, and upscale now call fal.ai directly from the API handler — no Trigger.dev task needed — so they work on any Vercel deployment (including previews) without task infrastructure. Render still triggers a Trigger.dev task because it needs ffmpeg. Added the @fal-ai/client dependency, created inline handlers for image, video, audio, and upscale, deleted the handlePrimitiveTrigger factory and the primitiveRoute helper (no longer used), and updated all route files to use the inline handlers.

* fix: enforce validateAuthContext at handler level in content primitives

  Moved auth out of validatePrimitiveBody into each handler directly, matching the standard pattern used by the pulse, sandbox, and flamingo handlers. Auth is now visible at the top of every handler.

* feat: add POST /api/content/create/analyze (Twelve Labs video analysis)

  New content primitive that accepts a video URL and prompt, analyzes the video content, and returns generated text. Follows the standard handler pattern with validateAuthContext at the top.

* fix: rename content/create/analyze to content/analyze

  Analysis is a separate action, not content creation.

* refactor: rename content primitive routes to verb-qualifier pattern

  - content/create/image → content/generate-image
  - content/create/video → content/generate-video
  - content/create/text → content/generate-caption
  - content/create/audio → content/transcribe-audio
  - content/create/render → content/render
  - content/create/upscale → content/upscale
  - content/analyze → content/analyze-video

  Each route name now honestly describes what it does, following the cli-for-agents convention of consistent verb-based naming.

* refactor: make content primitives generic + replace render with edit

  Renamed music-specific params (face_guide_url → reference_image_url, song_url → audio_url, songs → audio_urls, song → topic), removed unused required fields (artist_account_id, template, lipsync) from the primitive schemas (kept in the pipeline schema), added an optional model param to generate-image, generate-video, and transcribe-audio for caller-specified fal model IDs, and replaced content/render with content/edit, which accepts an operations array (trim, crop, resize, overlay_text, mux_audio) or a template name for a deterministic edit config.

* fix: address CodeRabbit review — split text handler, DRY route factory

  Split createTextHandler into composeCaptionPrompt, callRecoupGenerate, and normalizeGeneratedText helpers (SRP); created a createPrimitiveRoute factory for shared OPTIONS + POST wiring across all 7 content primitive routes (DRY); added JSDoc to all new helpers and the factory; ran pnpm format on touched files.

* fix: make image_url optional in generate-video, add prompt field

  Video generation shouldn't assume image-to-video is the only mode. Now accepts optional prompt, optional image_url, and optional audio_url — the model determines what's needed.

* chore: redeploy with FAL_KEY

* chore: redeploy with updated FAL_KEY

* fix: upgrade to nano-banana-2, auto-select t2i vs edit model

  No reference images → fal-ai/nano-banana-2 (text-to-image); with reference images → fal-ai/nano-banana-2/edit (image editing). The edit model uses an image_urls array (not a singular image_url). Both verified working with live fal.ai calls.

* feat: add num_images, aspect_ratio, resolution to generate-image

  Exposes the essential controls from fal's nano-banana-2: num_images (1–4, default 1) to generate multiple images to pick from, aspect_ratio (auto, 9:16, 16:9, etc.) to match the platform format, and resolution (0.5K–4K, default 1K) for the quality vs cost tradeoff. The response now returns both imageUrl (first) and images (all URLs) when generating multiple.

* feat: set optimal internal defaults for image generation

  Server-side defaults not exposed to users: output_format png (lossless for downstream editing), safety_tolerance 6 (least restrictive for a creative platform), enable_web_search true (better results for real references), thinking_level high (best quality, +$0.002), limit_generations true (predictable output count).

* feat: auto-select Veo 3.1 model variant based on inputs

  Prompt only → veo3.1/text-to-video (standard quality); image + prompt → veo3.1/image-to-video (standard quality); lipsync + audio → ltx-2-19b/audio-to-video. Removed hardcoded music-specific motion prompt defaults.

* feat: full generate-video upgrade — extend mode, duration, resolution, audio

  Schema additions: video_url (extend an existing video), aspect_ratio (auto, 16:9, 9:16), duration (4s, 6s, 7s, 8s — default 8s), resolution (720p, 1080p, 4k — default 720p), negative_prompt, and generate_audio (default true). Auto-selects the model based on inputs: prompt only → veo3.1/text-to-video; image → veo3.1/image-to-video; video → veo3.1/extend-video; lipsync → ltx-2-19b/audio-to-video. Server-side defaults: safety_tolerance 6, auto_fix true.

* feat: expose missing params for transcribe + upscale, fix generate_audio default

  Transcribe audio: language (default "en", was hardcoded), chunk_level (word/segment/none, default "word"), diarize (default false — identify different speakers). Upscale: upscale_factor (1–4, default 2), target_resolution (720p/1080p/1440p/2160p — overrides the factor). Generate video: generate_audio default changed to false (was true).

* feat: add mode param to generate-video with 6 modes

  Modes: prompt, animate, reference, extend, first-last, lipsync. Each maps to a specific Veo 3.1 / LTX model variant and is auto-inferred from the inputs when omitted. New params: mode, end_image_url. Removed the lipsync boolean (replaced by mode: "lipsync"). Added first-last-frame and reference-to-video support.

* fix: correct text-to-video model ID (fal-ai/veo3.1, not /text-to-video)

* fix: correct fal field mappings for reference and first-last modes

  Reference mode: image_url → image_urls (array, like nano-banana-2/edit). First-last mode: image_url → first_frame_url, end_image_url → last_frame_url. Both verified working with live fal calls.

* refactor: simplify endpoint paths per code review

  generate-image → image, generate-video → video, generate-caption → caption, transcribe-audio → transcribe, analyze-video → analyze. Edit merged into video as PATCH.

* chore: redeploy with updated RECOUP_API_KEY

* chore: redeploy with new RECOUP_API_KEY for caption

* fix: always send prompt field to fal (LTX lipsync requires it even when empty)

* refactor: caption handler calls AI SDK directly instead of HTTP self-call

  Removes the fetch to /api/chat/generate (the API calling itself over HTTP). Now uses generateText from lib/ai/generateText with LIGHTWEIGHT_MODEL, eliminating the network round trip, the RECOUP_API_KEY dependency for captions, the 30s timeout, and the env var debugging headache.

* refactor: extract configureFal and buildFalInput (DRY + SRP)

  DRY: the FAL_KEY check + fal.config() call was duplicated in 4 handlers; extracted to a single shared configureFal() helper. SRP: the video handler was doing mode inference, field mapping, and the fal call in one function; extracted buildFalInput() to handle mode-specific field name mapping (reference → image_urls, first-last → first_frame_url/last_frame_url, etc.).

* fix: remove unused stream from analyze schema, DRY video OPTIONS handler

  Removed the stream field from createAnalyzeBodySchema (it was always hardcoded to false in the handler — misleading), and extracted primitiveOptionsHandler from createPrimitiveRoute for reuse in the video route (which needs both POST and PATCH).

* feat: add template support to all content primitives

  Templates are static JSON configs that each primitive applies server-side when the template param is passed: generate-image uses the template prompt, picks a random reference image, and appends style rules; generate-caption injects the template caption guide + examples into the LLM system prompt; generate-video picks a random mood + movement from the template for the motion prompt; edit (PATCH video) loads template edit operations as defaults. 4 templates shipped: artist-caption-bedroom, artist-caption-outside, artist-caption-stage, album-record-store. Reference images are uploaded to Supabase storage with signed URLs. GET /api/content/templates now returns id + description (like skills). Override priority: caller params > template defaults.

* feat: content V2 — edit route, template detail, malleable mode, MCP tools

  Moved the PATCH edit handler from /api/content/video to /api/content, added a GET /api/content/templates/[id] detail endpoint, added a template field to the video body schema, made the pipeline template optional (removed the default), and created 9 content MCP tools via the fetch-proxy DRY pattern (generate_image, generate_video, generate_caption, transcribe_audio, edit_content, upscale_content, analyze_video, list_templates, create_content). All 1749 tests pass.

* fix: add packages field to pnpm-workspace.yaml

  pnpm 9 requires the packages field in the workspace config; added an empty array since this is not a monorepo. Also fixed the onlyBuiltDependencies format to use proper YAML array syntax.

* fix: remove pnpm-workspace.yaml

  Not needed for this non-monorepo project. The file was causing CI failures (packages field missing or empty).

* revert: remove lint-only changes to focus PR on content primitives

  Reverts 76 files that only had JSDoc @param tag additions or import reordering — no functional changes. Keeps the PR focused on the content primitive endpoints feature.

* fix: make template optional in content create, fix edit type error

  Removed the default template from validateCreateContentBody (malleable mode), only validate template when provided, and cast template edit operations to satisfy the discriminated union type.

* refactor: remove createPrimitiveRoute, use standard route pattern

  Replaced the createPrimitiveRoute abstraction with explicit OPTIONS/POST exports matching the convention used by every other endpoint in the repo.

* refactor: use standard route pattern for template detail endpoint

  Replaced the re-export with an explicit GET function definition to match the convention used by every other endpoint in the repo.

* refactor: move configureFal to lib/fal/server.ts

  Moved fal client configuration out of content primitives into its own domain directory, consistent with lib/ organization conventions.

* refactor: replace primitives/ with domain subdirectories under lib/content/

  Moved each handler into a descriptive subdirectory matching its API route:
  - primitives/createImageHandler.ts → image/createImageHandler.ts
  - primitives/createVideoHandler.ts → video/createVideoHandler.ts
  - primitives/createTextHandler.ts → caption/createTextHandler.ts
  - primitives/createAudioHandler.ts → transcribe/createAudioHandler.ts
  - primitives/createUpscaleHandler.ts → upscale/createUpscaleHandler.ts
  - primitives/createAnalyzeHandler.ts → analyze/createAnalyzeHandler.ts
  - primitives/editHandler.ts → edit/editHandler.ts
  - primitives/schemas.ts → content/schemas.ts (shared)
  - primitives/validatePrimitiveBody.ts → content/validatePrimitiveBody.ts (shared)

  All 80 content tests pass.

* refactor: address PR review comments (SRP, KISS)

  1. lib/fal/server.ts: export a configured fal client (like the supabase serverClient). 2. caption/composeCaptionPrompt.ts: extracted to its own file (SRP). 3. lib/twelvelabs/analyzeVideo.ts: extracted fetch + API key handling (SRP). 4. image/buildImageInput.ts: extracted URL generation logic (SRP). 5. templates/index.ts: use satisfies instead of an unknown cast (KISS). All 80 content tests pass.

* refactor: extract business logic from handlers (SRP)

  Addressed 5 additional PR review comments: extracted transcribe/transcribeAudio.ts (fal transcription logic), upscale/upscaleMedia.ts (fal upscale logic), video/inferMode.ts (mode inference), video/buildFalInput.ts (fal input builder), and video/generateVideo.ts (fal generation logic). Each handler now only does auth, validation, and response formatting. All 80 content tests pass.

* refactor: move schemas into validate functions, fix naming and abbreviations

  Moved all schemas from schemas.ts into domain-specific validate files (validateCreateImageBody, validateCreateVideoBody, etc.), included validateAuthContext inside each validate function, made analyzeVideo accept the raw validated object (KISS), renamed tpl → template (no abbreviations), deleted schemas.ts and validatePrimitiveBody.ts, and fixed formatting. All 78 content tests pass.

* refactor: convert template JSON to typed TypeScript exports (KISS)

  Converted all 4 template JSON files to .ts files that export natively typed Template objects, removing all casts and satisfies workarounds from index.ts — templates are now imported with full type safety. All 78 content tests pass.

* refactor: split templates/index.ts into SRP files

  types.ts (Template and TemplateEditOperation interfaces), loadTemplate.ts (load a template by ID), listTemplates.ts (list all template summaries), index.ts (re-exports only, no logic). Also fixed a circular import: template files now import from types.ts. All 78 content tests pass.

* refactor: delete callContentEndpoint abstraction (KISS)

  Inlined auth resolution and the fetch directly into each MCP tool, removing the opaque proxy layer that obfuscated auth logic. Each tool now explicitly resolves accountId, validates the API key, and makes the fetch call. All 1792 tests pass.

* refactor: remove all new MCP content tools

  Removed the entire lib/mcp/tools/content/ directory and its registration. These MCP tools are not defined in the API docs and should not be included in this PR, which focuses on REST endpoints only. All 1792 tests pass.

* refactor: single TEMPLATES definition, enum validation, raw validated passthrough

  1. Extracted TEMPLATES to templates.ts — a single source of truth for both loadTemplate and listTemplates. 2. Exported a TEMPLATE_IDS const array and used z.enum(TEMPLATE_IDS) in all validate functions to fast-fail on an invalid template. 3. editHandler passes the raw validated body to triggerPrimitive (KISS). All 78 content tests pass.

* refactor: delete triggerPrimitive wrapper, use tasks.trigger directly (KISS)

* refactor: derive TEMPLATE_IDS from TEMPLATES keys (single source of truth)

* fix: use fal-ai/veo3.1/fast/image-to-video model for video generation

  Aligns with the working model from the tasks codebase; the previous fal-ai/veo3.1 model returned "Unprocessable Entity".

* refactor: edit endpoint requires video_url, remove audio/mux_audio

  PATCH /api/content is for ffmpeg post-processing of video only: video_url is now required (not optional), the audio_url param and the mux_audio edit operation type are removed, schema tests are updated to match, and 8 new validation tests are added for the edit schema. All 1790 tests pass.

* rename: create-render → ffmpeg-edit task ID

  A more specific name that clearly describes what the task does.

---------

Co-authored-by: Sidney Swift <158200036+sidneyswift@users.noreply.github.com>
Co-authored-by: Sweets Sweetman <sweetmantech@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
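The mode auto-inference described in the generate-video commits (six modes, inferred from the request body when mode is omitted) can be sketched as follows. This is an illustrative reconstruction from the commit messages: the name inferMode appears in the commits, but the exact types and precedence here are assumptions.

```typescript
// Hypothetical sketch of the mode inference the commits describe —
// not the repo's actual video/inferMode.ts implementation.
type VideoMode = "prompt" | "animate" | "reference" | "extend" | "first-last" | "lipsync";

interface VideoBody {
  mode?: VideoMode;
  prompt?: string;
  image_url?: string;
  end_image_url?: string;
  video_url?: string;
  audio_url?: string;
}

function inferMode(body: VideoBody): VideoMode {
  if (body.mode) return body.mode; // an explicit mode always wins
  if (body.audio_url) return "lipsync"; // audio-to-video (ltx-2-19b)
  if (body.video_url) return "extend"; // extend an existing video
  if (body.image_url && body.end_image_url) return "first-last";
  if (body.image_url) return "animate"; // image-to-video
  return "prompt"; // text-to-video
}
```

Each mode then maps to a specific Veo 3.1 / LTX model variant, per the commit messages; "reference" cannot be inferred from inputs alone, so it would be requested explicitly.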
1 parent a6f64ab commit 153cd2d

51 files changed: +2493 −13 lines changed

app/api/content/analyze/route.ts

Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
import { NextRequest, NextResponse } from "next/server";
import { getCorsHeaders } from "@/lib/networking/getCorsHeaders";
import { createAnalyzeHandler } from "@/lib/content/analyze/createAnalyzeHandler";

/**
 * OPTIONS handler for CORS preflight requests.
 */
export async function OPTIONS() {
  return new NextResponse(null, { status: 204, headers: getCorsHeaders() });
}

/**
 * POST /api/content/analyze
 *
 * Analyze a video with AI — describe scenes, check quality, evaluate content.
 */
export async function POST(request: NextRequest): Promise<NextResponse> {
  return createAnalyzeHandler(request);
}

export const dynamic = "force-dynamic";
export const fetchCache = "force-no-store";
export const revalidate = 0;

app/api/content/caption/route.ts

Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
import { NextRequest, NextResponse } from "next/server";
import { getCorsHeaders } from "@/lib/networking/getCorsHeaders";
import { createTextHandler } from "@/lib/content/caption/createTextHandler";

/**
 * OPTIONS handler for CORS preflight requests.
 */
export async function OPTIONS() {
  return new NextResponse(null, { status: 204, headers: getCorsHeaders() });
}

/**
 * POST /api/content/caption
 *
 * Generate on-screen caption text for a social video.
 */
export async function POST(request: NextRequest): Promise<NextResponse> {
  return createTextHandler(request);
}

export const dynamic = "force-dynamic";
export const fetchCache = "force-no-store";
export const revalidate = 0;

app/api/content/image/route.ts

Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
import { NextRequest, NextResponse } from "next/server";
import { getCorsHeaders } from "@/lib/networking/getCorsHeaders";
import { createImageHandler } from "@/lib/content/image/createImageHandler";

/**
 * OPTIONS handler for CORS preflight requests.
 */
export async function OPTIONS() {
  return new NextResponse(null, { status: 204, headers: getCorsHeaders() });
}

/**
 * POST /api/content/image
 *
 * Generate an image from a prompt and optional reference image.
 */
export async function POST(request: NextRequest): Promise<NextResponse> {
  return createImageHandler(request);
}

export const dynamic = "force-dynamic";
export const fetchCache = "force-no-store";
export const revalidate = 0;

app/api/content/route.ts

Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
import { NextRequest, NextResponse } from "next/server";
import { getCorsHeaders } from "@/lib/networking/getCorsHeaders";
import { editHandler } from "@/lib/content/edit/editHandler";

/**
 * OPTIONS handler for CORS preflight requests.
 */
export async function OPTIONS() {
  return new NextResponse(null, { status: 204, headers: getCorsHeaders() });
}

/**
 * PATCH /api/content
 *
 * Edit media with operations or a template preset.
 */
export async function PATCH(request: NextRequest): Promise<NextResponse> {
  return editHandler(request);
}

export const dynamic = "force-dynamic";
export const fetchCache = "force-no-store";
export const revalidate = 0;
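The editHandler behind this route resolves its edit operations from either the request body or a template preset, with the override priority the commit message states (caller params > template defaults). A minimal sketch of that resolution, where the function name, types, and the template entry are all hypothetical illustrations rather than the repo's actual code:

```typescript
// Illustrative sketch of "caller params > template defaults" for the edit
// endpoint. Operation shapes and the template default below are made up.
interface EditOperation {
  type: "trim" | "crop" | "resize" | "overlay_text";
  [key: string]: unknown;
}

interface EditBody {
  video_url: string; // required, per the final edit-endpoint commit
  operations?: EditOperation[];
  template?: string;
}

// Hypothetical template defaults — the shipped templates' real edit configs differ.
const TEMPLATE_EDIT_DEFAULTS: Record<string, EditOperation[]> = {
  "album-record-store": [{ type: "crop" }],
};

function resolveOperations(body: EditBody): EditOperation[] {
  // Caller-supplied operations take priority over template defaults.
  if (body.operations && body.operations.length > 0) return body.operations;
  if (body.template) return TEMPLATE_EDIT_DEFAULTS[body.template] ?? [];
  return [];
}
```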
app/api/content/templates/[id]/route.ts

Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@
import { NextRequest, NextResponse } from "next/server";
import { getCorsHeaders } from "@/lib/networking/getCorsHeaders";
import { getContentTemplateDetailHandler } from "@/lib/content/getContentTemplateDetailHandler";

/**
 * OPTIONS handler for CORS preflight requests.
 */
export async function OPTIONS() {
  return new NextResponse(null, { status: 204, headers: getCorsHeaders() });
}

/**
 * GET /api/content/templates/[id]
 *
 * Returns the full template configuration for a given template id.
 */
export async function GET(
  request: NextRequest,
  context: { params: Promise<{ id: string }> },
): Promise<NextResponse> {
  return getContentTemplateDetailHandler(request, context);
}

export const dynamic = "force-dynamic";
export const fetchCache = "force-no-store";
export const revalidate = 0;
app/api/content/transcribe/route.ts

Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
import { NextRequest, NextResponse } from "next/server";
import { getCorsHeaders } from "@/lib/networking/getCorsHeaders";
import { createAudioHandler } from "@/lib/content/transcribe/createAudioHandler";

/**
 * OPTIONS handler for CORS preflight requests.
 */
export async function OPTIONS() {
  return new NextResponse(null, { status: 204, headers: getCorsHeaders() });
}

/**
 * POST /api/content/transcribe
 *
 * Transcribe audio into text with word-level timestamps.
 */
export async function POST(request: NextRequest): Promise<NextResponse> {
  return createAudioHandler(request);
}

export const dynamic = "force-dynamic";
export const fetchCache = "force-no-store";
export const revalidate = 0;

app/api/content/upscale/route.ts

Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
import { NextRequest, NextResponse } from "next/server";
import { getCorsHeaders } from "@/lib/networking/getCorsHeaders";
import { createUpscaleHandler } from "@/lib/content/upscale/createUpscaleHandler";

/**
 * OPTIONS handler for CORS preflight requests.
 */
export async function OPTIONS() {
  return new NextResponse(null, { status: 204, headers: getCorsHeaders() });
}

/**
 * POST /api/content/upscale
 *
 * Upscale an image or video to higher resolution.
 */
export async function POST(request: NextRequest): Promise<NextResponse> {
  return createUpscaleHandler(request);
}

export const dynamic = "force-dynamic";
export const fetchCache = "force-no-store";
export const revalidate = 0;

app/api/content/video/route.ts

Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
import { NextRequest, NextResponse } from "next/server";
import { getCorsHeaders } from "@/lib/networking/getCorsHeaders";
import { createVideoHandler } from "@/lib/content/video/createVideoHandler";

/**
 * OPTIONS handler for CORS preflight requests.
 */
export async function OPTIONS() {
  return new NextResponse(null, { status: 204, headers: getCorsHeaders() });
}

/**
 * POST /api/content/video
 *
 * Generate a video from a prompt, image, or existing video.
 */
export async function POST(request: NextRequest): Promise<NextResponse> {
  return createVideoHandler(request);
}

export const dynamic = "force-dynamic";
export const fetchCache = "force-no-store";
export const revalidate = 0;
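Behind this route, the commits describe a buildFalInput helper that maps the generic request params to mode-specific fal field names (reference → image_urls, first-last → first_frame_url/last_frame_url) and always sends the prompt field because LTX lipsync requires it even when empty. A hedged sketch of that mapping; the real video/buildFalInput.ts may differ in shape and coverage:

```typescript
// Illustrative sketch of the mode-specific field mapping the commits describe —
// not the repo's actual buildFalInput implementation.
interface VideoParams {
  prompt?: string;
  image_url?: string;
  end_image_url?: string;
}

function buildFalInput(mode: string, params: VideoParams): Record<string, unknown> {
  // Always include prompt: LTX lipsync requires the field even when empty.
  const prompt = params.prompt ?? "";
  switch (mode) {
    case "reference":
      // Reference mode takes an image_urls array, like nano-banana-2/edit.
      return { prompt, image_urls: params.image_url ? [params.image_url] : [] };
    case "first-last":
      return {
        prompt,
        first_frame_url: params.image_url,
        last_frame_url: params.end_image_url,
      };
    default:
      return { prompt, image_url: params.image_url };
  }
}
```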
Lines changed: 94 additions & 0 deletions
@@ -0,0 +1,94 @@
import { beforeEach, describe, expect, it, vi } from "vitest";
import { NextRequest, NextResponse } from "next/server";
import { getContentTemplateDetailHandler } from "@/lib/content/getContentTemplateDetailHandler";
import { validateAuthContext } from "@/lib/auth/validateAuthContext";
import { loadTemplate } from "@/lib/content/templates";

vi.mock("@/lib/networking/getCorsHeaders", () => ({
  getCorsHeaders: vi.fn(() => ({ "Access-Control-Allow-Origin": "*" })),
}));

vi.mock("@/lib/auth/validateAuthContext", () => ({
  validateAuthContext: vi.fn(),
}));

vi.mock("@/lib/content/templates", () => ({
  loadTemplate: vi.fn(),
}));

describe("getContentTemplateDetailHandler", () => {
  beforeEach(() => {
    vi.clearAllMocks();
  });

  it("returns 401 when not authenticated", async () => {
    vi.mocked(validateAuthContext).mockResolvedValue(
      NextResponse.json({ status: "error", error: "Unauthorized" }, { status: 401 }),
    );
    const request = new NextRequest("http://localhost/api/content/templates/bedroom", {
      method: "GET",
    });

    const result = await getContentTemplateDetailHandler(request, {
      params: Promise.resolve({ id: "bedroom" }),
    });

    expect(result.status).toBe(401);
  });

  it("returns 404 for unknown template", async () => {
    vi.mocked(validateAuthContext).mockResolvedValue({
      accountId: "acc_123",
      orgId: null,
      authToken: "test-key",
    });
    vi.mocked(loadTemplate).mockReturnValue(null);

    const request = new NextRequest("http://localhost/api/content/templates/nonexistent", {
      method: "GET",
    });

    const result = await getContentTemplateDetailHandler(request, {
      params: Promise.resolve({ id: "nonexistent" }),
    });
    const body = await result.json();

    expect(result.status).toBe(404);
    expect(body.error).toBe("Template not found");
  });

  it("returns full template for valid id", async () => {
    vi.mocked(validateAuthContext).mockResolvedValue({
      accountId: "acc_123",
      orgId: null,
      authToken: "test-key",
    });
    const mockTemplate = {
      id: "artist-caption-bedroom",
      description: "Moody purple bedroom setting",
      image: { prompt: "test", reference_images: [], style_rules: {} },
      video: { moods: ["calm"], movements: ["slow pan"] },
      caption: { guide: { tone: "dreamy", rules: [], formats: [] }, examples: [] },
      edit: { operations: [] },
    };
    vi.mocked(loadTemplate).mockReturnValue(mockTemplate);

    const request = new NextRequest(
      "http://localhost/api/content/templates/artist-caption-bedroom",
      { method: "GET" },
    );

    const result = await getContentTemplateDetailHandler(request, {
      params: Promise.resolve({ id: "artist-caption-bedroom" }),
    });
    const body = await result.json();

    expect(result.status).toBe(200);
    expect(body.id).toBe("artist-caption-bedroom");
    expect(body.description).toBe("Moody purple bedroom setting");
    expect(body.image).toBeDefined();
    expect(body.video).toBeDefined();
    expect(body.caption).toBeDefined();
    expect(body.edit).toBeDefined();
  });
});
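The loadTemplate function mocked in these tests sits on top of the "single source of truth" refactor from the commit message: TEMPLATE_IDS is derived from the TEMPLATES object keys, so loadTemplate, listTemplates, and schema validation all stay in sync. A sketch under that description, where the template contents are placeholders and the real repo feeds TEMPLATE_IDS into z.enum for Zod validation:

```typescript
// Sketch of deriving TEMPLATE_IDS from TEMPLATES keys, per the commit message.
// Template bodies here are placeholders, not the repo's actual template data.
const TEMPLATES = {
  "artist-caption-bedroom": { description: "Moody purple bedroom setting" },
  "artist-caption-outside": { description: "placeholder" },
  "artist-caption-stage": { description: "placeholder" },
  "album-record-store": { description: "placeholder" },
} as const;

// Derived from the keys, so adding a template updates validation automatically.
const TEMPLATE_IDS = Object.keys(TEMPLATES) as Array<keyof typeof TEMPLATES>;

// Hypothetical loadTemplate: returns null for unknown ids, matching the 404 test.
function loadTemplate(id: string) {
  return id in TEMPLATES ? TEMPLATES[id as keyof typeof TEMPLATES] : null;
}
```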
