
Conversation

@kliu2python

No description provided.

kliu2python and others added 30 commits August 6, 2025 07:23

- feat: add cross-platform automation and reporting
- Expose automation runner via FastAPI service (see the sketch after this list)
- Add asynchronous runner support for concurrent runs
- Add Redis-backed queue for asynchronous task execution
- Add tasks aggregation endpoint
- Add NiceGUI frontend for AI Testing Tool API
- Replace NiceGUI frontend with React TypeScript interface
- Add authentication, authorization, and result tabs
- Enhance frontend layout and task handling
- Add persistent task storage and report folder restructuring
- Add Dockerfiles for frontend, API, and queue runner services
- Refactor project structure to backend and frontend servers
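The FastAPI and async-runner commits above suggest a small HTTP layer in front of the automation runner. Below is a minimal sketch of that shape, assuming FastAPI and Pydantic; the route path, the TaskPayload fields, and the runner's body are illustrative assumptions (run_tasks_async itself is named in a later commit message, but its signature is not shown in this PR):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class TaskPayload(BaseModel):
    # Illustrative fields; the project's real request model is richer.
    task: str
    platform: str = "ios"


async def run_tasks_async(tasks: list[str]) -> list[str]:
    """Stand-in for the project's concurrent runner.

    run_tasks_async is named in a later commit message; this body and
    signature are assumptions for illustration only.
    """
    return [f"executed {t}" for t in tasks]


@app.post("/run")
async def run(payload: TaskPayload) -> dict:
    # Dispatch the automation run through the async runner.
    results = await run_tasks_async([payload.task])
    return {"status": "completed", "results": results}
```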
kliu2python and others added 29 commits October 9, 2025 11:51

- feat: improve task management table experience
- Improve task status layout and scroll behavior
- Add human feedback scoring support
- Add human scoring UI and API for generated code
- Remove manual score input from code library
- Adjust grid lines to avoid frame overlap
- Set iOS as the default test device and surface the selector
- Add admin portal to review user task statuses
- Add chrome-devtools-mcp web driver integration
- Expand README with multi-agent email integration guidance
- Simplify subscription IMAP settings
- Fix subscription keyword input sanitization
This commit updates the frontend and backend so that users can configure LLM parameters for their automation tasks:

Backend changes:
- Extended the RunRequest model with llm_model, temperature, and max_tokens fields (see the sketch after this list)
- Updated generate_next_action() to use the configurable LLM parameters
- Modified _run_tasks(), run_tasks(), and run_tasks_async() to accept and pass through the LLM config
- Updated queue_runner to extract LLM parameters from the task payload (sketched after the Features list below)
- Parameters fall back to environment variables when not specified
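A minimal sketch of the extended request model and the environment fallback, assuming Pydantic; the field names and their ranges come from this PR, while the environment variable names and default values are assumptions:

```python
import os
from typing import Optional

from pydantic import BaseModel, Field


class RunRequest(BaseModel):
    # Existing task fields omitted for brevity.
    # New LLM configuration; all three fields are optional,
    # so existing clients and stored tasks keep working unchanged.
    llm_model: Optional[str] = None
    temperature: Optional[float] = Field(default=None, ge=0.0, le=2.0)
    max_tokens: Optional[int] = Field(default=None, ge=1, le=16384)


def resolve_llm_config(request: RunRequest) -> dict:
    """Fall back to environment variables when a parameter is unset.

    The env var names and defaults below are assumptions, not
    confirmed by the commit message.
    """
    return {
        "model": request.llm_model or os.getenv("LLM_MODEL", "default-model"),
        "temperature": request.temperature
        if request.temperature is not None
        else float(os.getenv("LLM_TEMPERATURE", "0.2")),
        "max_tokens": request.max_tokens
        if request.max_tokens is not None
        else int(os.getenv("LLM_MAX_TOKENS", "4096")),
    }
```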

Frontend changes:
- Added an LLM configuration section to RunTaskForm with:
  - Model selection field (with examples)
  - Temperature slider (0.0-2.0)
  - Max tokens input (1-16384)
- Added the same LLM controls to TaskEditDialog for editing stored tasks
- Updated the TypeScript types to include the new optional fields
- Payload construction includes the LLM parameters only when they are provided (see the example below)
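For illustration, the payload the frontend constructs could be exercised from Python as follows, assuming a /tasks/run route (the path and the task fields are assumptions); the LLM keys are attached only when the user set them, which is what keeps older servers and stored tasks working:

```python
import requests

payload = {
    "task": "login smoke test",  # illustrative task fields
    "platform": "ios",
}

# Only include LLM parameters the user actually provided;
# unset values stay out of the payload entirely.
llm_overrides = {
    "llm_model": "example-model",  # hypothetical model name
    "temperature": 0.7,
    "max_tokens": None,  # unset: filtered out below
}
payload.update({k: v for k, v in llm_overrides.items() if v is not None})

# The endpoint path is an assumption for illustration.
response = requests.post("http://localhost:8000/tasks/run", json=payload)
print(response.json())
```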

Features:
- Users can now specify custom LLM models per task
- Temperature control for adjusting response randomness
- Max tokens limit for controlling response length
- All parameters are optional with sensible defaults
- Backward compatible with existing tasks
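On the queue side, extracting the LLM parameters from a stored payload could look like the following sketch, assuming a Redis list queue (Redis-backed queuing is introduced by an earlier commit in this PR); the queue name, function name, and payload layout beyond the three LLM keys are assumptions:

```python
import json

import redis

r = redis.Redis()


def pop_and_run_task(queue_name: str = "tasks") -> None:
    """Pop one task payload and extract its optional LLM parameters.

    The queue name and payload layout are assumptions for illustration;
    only the llm_model/temperature/max_tokens keys come from this commit.
    """
    _, raw = r.blpop(queue_name)
    payload = json.loads(raw)

    llm_config = {
        "model": payload.get("llm_model"),
        "temperature": payload.get("temperature"),
        "max_tokens": payload.get("max_tokens"),
    }
    # None values fall back to environment defaults downstream,
    # so previously stored tasks run unchanged.
    print("running task with LLM config:", llm_config)
```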
@kliu2python
Author

merge
