
Conversation

@d42me (Contributor) commented Jan 14, 2026

Note

Introduces adaptive, concurrent batching for pushing evaluation samples in both sync and async clients.

  • New push_samples implementations split samples into size-aware batches via _build_batches and upload concurrently (ThreadPoolExecutor for sync, semaphore-limited coroutines for async)
  • Configurable limits: max_payload_bytes (default 512KB), max_workers/max_concurrent with validation
  • Returns {"samples_pushed": <count>} and aggregates per-batch errors into a single EvalsAPIError
  • Helper methods _upload_batch (sync) and _build_batches added; existing create/finalize/list/get/update logic unchanged aside from minor formatting
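The size-aware splitting described above can be pictured as a greedy packer over the serialized payload size. The sketch below is an illustrative reconstruction only; the function name, signature, and exact packing strategy of the PR's _build_batches are assumptions, not the actual implementation:

```python
import json

def build_batches(samples, max_payload_bytes=512 * 1024):
    """Greedily pack samples into batches whose JSON-serialized size
    stays under max_payload_bytes (hypothetical sketch, not the PR's
    actual _build_batches)."""
    batches, current, current_size = [], [], 0
    for sample in samples:
        size = len(json.dumps(sample).encode("utf-8"))
        # Flush the current batch if adding this sample would exceed the cap.
        if current and current_size + size > max_payload_bytes:
            batches.append(current)
            current, current_size = [], 0
        current.append(sample)
        current_size += size
    if current:
        batches.append(current)
    return batches
```

A greedy packer like this never splits a single sample, so one oversized sample still becomes its own batch rather than being rejected outright.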

Written by Cursor Bugbot for commit b67f8cc. This will update automatically on new commits.

@d42me d42me marked this pull request as ready for review January 14, 2026 19:13
try:
    total_samples_pushed += future.result()
except Exception as e:
    errors.append(f"Batch {futures[future] + 1}: {e}")

Concurrent thread access to non-thread-safe httpx client

Medium Severity

The sync push_samples method uses ThreadPoolExecutor with max_workers=4 by default to upload batches concurrently. Each worker thread calls self.client.request() on the shared client instance. The prime_evals.core.APIClient provided by this package uses httpx.Client, which is documented as not thread-safe. Users following the README's documented pattern of using APIClient with EvalsClient would encounter potential race conditions in the HTTP connection pool, which could cause intermittent failures.
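Assuming the shared client really is unsafe for concurrent use, one common mitigation is to give each worker thread its own client via thread-local storage rather than sharing one connection pool. The wrapper below is a hypothetical sketch: PerThreadClient and client_factory are not part of this package, and client_factory stands in for any zero-argument callable that constructs a client (e.g. an httpx.Client factory):

```python
import threading

class PerThreadClient:
    """Lazily create one client per thread so no connection pool is
    ever shared across threads (illustrative sketch, not this
    package's API)."""

    def __init__(self, client_factory):
        self._factory = client_factory
        self._local = threading.local()

    @property
    def client(self):
        # First access on a given thread builds that thread's client.
        if not hasattr(self._local, "client"):
            self._local.client = self._factory()
        return self._local.client
```

The alternative of serializing all requests behind a lock would also avoid races but defeats the purpose of the ThreadPoolExecutor; per-thread clients keep the uploads genuinely concurrent at the cost of extra connections.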


