Issue with Magentic Marketplace Simulation Stuck in Retry Loop #160
Description
As a student and AI beginner, I find this project truly impressive. Thank you for your outstanding contributions.
However, I've been unable to get it running, which is quite frustrating. After reviewing several existing issues, I noticed that most problems seem related to adapting and invoking large models.
I would be very grateful for an updated version. I'm really looking forward to experiencing this amazing project!
Environment Setup
Platform with Docker restrictions
PostgreSQL managed via Conda
Local LLM served with vllm
Configuration Details
```bash
# Local model serving
vllm serve /dssg/home/acct-luymin/yelu/.cache/modelscope/hub/models/Qwen/Qwen2.5-7B-Instruct --host 0.0.0.0 --port 8000

# Environment variables
LLM_PROVIDER=openai
LLM_MODEL=Qwen2.5-7B-Instruct
LLM_API_BASE=http://localhost:8000/v1
LLM_API_KEY=token-abc123
db_type="postgres"
LLM_MAX_CONCURRENCY=2
LLM_TIMEOUT=60
LLM_TEMPERATURE=0.3
```
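Before running the simulation, it may help to confirm the endpoint and model name independently of the project code. The sketch below (standard library only; the URL and key are taken from the config above) lists the model IDs the server actually registered. Note that vLLM registers the model under the path passed to `vllm serve` unless `--served-model-name` is given, so `LLM_MODEL` must match one of the listed IDs exactly:

```python
import json
import urllib.request
import urllib.error


def probe_models(base_url: str, api_key: str, timeout: float = 5.0):
    """Return the model IDs the server reports, or None if unreachable."""
    req = urllib.request.Request(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            data = json.load(resp)
        return [m["id"] for m in data.get("data", [])]
    except (urllib.error.URLError, OSError):
        return None


if __name__ == "__main__":
    # Values from the environment configuration above
    models = probe_models("http://localhost:8000/v1", "token-abc123", timeout=2.0)
    print(models)  # LLM_MODEL must match one of these IDs exactly
```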
Problem Description
The simulation gets stuck in an infinite retry loop after agent initialization, preventing any meaningful marketplace interactions.
Steps to Reproduce
Start vllm server with Qwen2.5-7B-Instruct model
Set environment variables as shown
Run: python ./experiments/example.py
Actual Behavior
Agents initialize successfully but then enter an endless retry cycle. Custom debug instrumentation shows the system stuck in retry loops:
```text
INFO [customer_0002] Starting autonomous shopping agent
INFO [openai._base_client] Retrying request to /chat/completions in 0.379s
INFO [openai._base_client] Retrying request to /chat/completions in 0.489s
```
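The short, growing delays (0.379s, then 0.489s) look like jittered exponential backoff, which suggests each request is failing quickly and being retried rather than hanging until `LLM_TIMEOUT`. A rough sketch of that backoff shape (the constants are illustrative, not the OpenAI client's exact values):

```python
import random


def backoff_delays(retries: int, base: float = 0.5, cap: float = 8.0, seed: int = 0):
    """Illustrative jittered exponential backoff: base * 2**attempt, jittered, capped."""
    rng = random.Random(seed)
    delays = []
    for attempt in range(retries):
        raw = min(cap, base * (2 ** attempt))
        delays.append(raw * rng.uniform(0.75, 1.0))  # shave off up to 25% as jitter
    return delays


print(backoff_delays(4))  # sub-second delays at first, roughly doubling
```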
Additional Context
- Incomplete data files can be generated
- PostgreSQL is managed via Conda due to Docker restrictions
- The vllm server is confirmed running at localhost:8000
- The retry-loop behavior was identified through custom debug instrumentation
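Since the OpenAI client retries silently, the server's actual error never reaches the logs. A single raw `/chat/completions` request with no retry logic (standard library only; endpoint, key, and model name taken from the config above) surfaces the real response body, e.g. a 404 "model not found" if the name doesn't match what vLLM registered:

```python
import json
import urllib.request
import urllib.error


def chat_once(base_url: str, api_key: str, model: str, prompt: str,
              timeout: float = 30.0):
    """Send one /chat/completions request with no retries; return (status, body)."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status, resp.read().decode()
    except urllib.error.HTTPError as e:
        return e.code, e.read().decode()  # the error body the retry loop hides
    except OSError as e:
        return None, str(e)  # server unreachable


if __name__ == "__main__":
    status, body = chat_once("http://localhost:8000/v1", "token-abc123",
                             "Qwen2.5-7B-Instruct", "Hello", timeout=5.0)
    print(status)
    print(body[:500])
```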