docs/AGENT_LEARNINGS.md (new file, 75 additions)
# Agent learnings: er-client merge and test patterns

**Purpose:** Domain-specific context for agents and humans continuing the JoshuaVulcan PR merge work (main → open PR branches). Append to this file as new learnings arise.

**Repo:** PADAS/er-client. **Base branch:** `main`. Use the `gh` CLI (not the GitHub MCP server; the org enforces SAML SSO).

---

## 1. Project state (as of 2026-02-11)

- **#23** (API versioning): merged. `_api_root(version)`, `base_url` params, `api_paths.py`, service_root normalization are on `main`.
- **#45** (canonical async _delete + 204 in _call): merged. One-liner `_delete` and 204 handling are on `main`.
- **#46** (shared sync_client conftest): merged. Canonical `tests/sync_client/conftest.py` and `__init__.py` are on `main`.
- **#38**: closed. Use **#41** for analyzers/choices/gear/reports.
- **#30**: merge **almost last**; coordinate with #23’s developer.

---

## 2. Respx / async test URL pattern (PR #24)

After #23, the client builds request URLs with `_api_root(version)` (e.g. `https://example.org/api/v1.0/...`). Conftest’s `er_server_info["service_root"]` may still be the full URL including `/api/v1.0`.

**Rule:** In async tests that use `respx.mock()`, set the mock **base_url** to the same base the client uses:

- **Use:** `base_url=er_client._api_root("v1.0")`
- **Do not use:** `base_url=er_client.service_root`

Otherwise you get `RESPX: <Request('GET', '.../api/v1.0/subjects')> not mocked!` because the mock was registered against a different base.

**Reference:** Latest commit on PR #24 (`ERA-12672/async-post-observation`), file `tests/async_client/test_post_observation.py`.

**Files updated so far with this fix:**
`test_get_subjects.py`, `test_get_subject_tracks.py`, `test_get_sources.py` (PR #27).

---

## 3. Merging main into a PR branch

1. `git fetch origin && git checkout <PR-branch> && git merge origin/main -m "Merge main to resolve conftest and infra conflicts"`.
2. **Conflicts on conftest:**
For `tests/sync_client/conftest.py` and `tests/sync_client/__init__.py`, keep **main’s** version:
`git checkout --theirs -- tests/sync_client/conftest.py tests/sync_client/__init__.py` then `git add` those files.
3. Resolve any other conflicts in `erclient/client.py` (or elsewhere) by combining main’s infra with the PR’s new methods; remove duplicate method definitions (e.g. keep one `get_event`).
4. Run tests: `uv run pytest tests/sync_client/ tests/async_client/ -q --tb=line`. Fix any new failures (often respx `base_url` as above).
5. Commit and push. If push is rejected (branch behind remote), `git pull origin <branch> --no-rebase`, resolve any new conftest conflict again with main’s version, commit, push.
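The happy path of steps 1–2 can be sketched in Python. This is a sketch only: the function name is invented here, the commit message and conflict file set come from the steps above, and step 3's hand-resolution of `erclient/client.py` is deliberately not automated.

```python
import subprocess

# Files where main's version always wins on conflict (step 2).
KEEP_MAIN = ["tests/sync_client/conftest.py", "tests/sync_client/__init__.py"]

def merge_main_into(branch: str) -> bool:
    """Merge origin/main into `branch`, auto-resolving only the conftest files.

    Returns True if the merge completed cleanly, False if other conflicts
    remain and need hand-resolution (e.g. erclient/client.py).
    """
    def run(*cmd):
        subprocess.run(cmd, check=True)

    run("git", "fetch", "origin")
    run("git", "checkout", branch)
    merge = subprocess.run(
        ["git", "merge", "origin/main", "-m",
         "Merge main to resolve conftest and infra conflicts"])
    if merge.returncode == 0:
        return True
    # During `git merge origin/main`, --theirs selects main's side.
    run("git", "checkout", "--theirs", "--", *KEEP_MAIN)
    run("git", "add", *KEEP_MAIN)
    return False
```

After a False return, resolve the remaining conflicts by hand, then run the test suite and commit as in steps 3–5.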

---

## 4. Sync client `get_event` and tests

- **#26** keeps a single `get_event(event_id, ...)` (positional) for both sync and async; main’s keyword-only `get_event(*, event_id=None, ...)` was removed to avoid a duplicate definition.
- Sync tests that assert exact URL equality with `er_client.service_root` can fail after #23. Prefer asserting that the path and `event_id` appear in the URL (e.g. `assert f"activity/event/{event_id}" in url`).
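The robust assertion style in isolation (URL and id are hypothetical):

```python
event_id = "a1b2c3"
url = f"https://example.org/api/v1.0/activity/event/{event_id}"

# Brittle: exact equality against er_client.service_root breaks once the
# client builds URLs from _api_root(version).
#   assert url == f"{service_root}/activity/event/{event_id}"

# Robust: assert only that the path and event_id appear in the URL.
assert f"activity/event/{event_id}" in url
```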

---

## 5. Strategy and merge order

- Full merge strategy and wave order: see **ER_CLIENT_MERGE_STRATEGY_RECOMMENDATION.md** in the DAS repo (`/Users/joshuak/Projects/das/`) if available, or the same filename in notes.
- Merges are done by hand in the GitHub UI (merge commits). This doc and the learnings here support the person/agent doing the merge-main and conflict resolution on each branch.

---

## 6. Progress (merge main into PR branches)

| PR | Branch | Status |
|----|--------|--------|
| 24 | ERA-12672/async-post-observation | Already up to date with main |
| 25 | ERA-12658/v2-event-type-schema-updates | Merged main, resolved client.py, pushed |
| 26 | ERA-12654/sync-get-event | Merged main, kept #26 get_event + main conftest, pushed |
| 27 | ERA-12652/async-subject-source-read | Merged main; respx base_url fixed; **commit/push + next PR pending** |
| 28–44 | (see strategy doc) | Pending |

---

*Append new learnings below as you go.*
erclient/client.py (75 additions, inserted after line 1613; context: `get_feature_group`)

async def get_feature_group(self, feature_group_id: str):
"""
return await self._get(f"spatialfeaturegroup/{feature_group_id}", params={})

async def get_subjects(self, **kwargs):
    """
    Get the list of subjects to whom the user has access.

    Optional kwargs passed as query params:
        subject_group: filter to a single subject group.
        include_inactive: include inactive subjects. Default False.

    Returns:
        list: list of subject data
    """
    params = {k: v for k, v in kwargs.items()
              if k in ('subject_group', 'include_inactive')}
    return await self._get('subjects', params=params)

async def get_subject(self, subject_id: str):
    """
    Get a single subject by subject_id.

    Args:
        subject_id: the UUID for the subject

    Returns:
        dict: subject data
    """
    return await self._get(f'subject/{subject_id}')

async def get_subject_tracks(self, subject_id: str, start=None, end=None):
    """
    Get the latest tracks for the Subject having the given subject_id.

    Args:
        subject_id: the UUID for the subject
        start: optional datetime, filter tracks since this time
        end: optional datetime, filter tracks until this time

    Returns:
        dict: track data for the subject
    """
    p = {}
    # isinstance() already rejects None, so no separate None check is needed.
    if isinstance(start, datetime):
        p['since'] = start.isoformat()
    if isinstance(end, datetime):
        p['until'] = end.isoformat()
    return await self._get(f'subject/{subject_id}/tracks', params=p)

async def get_sources(self, **kwargs):
    """
    Returns an async generator to iterate over sources (paginated).

    Optional kwargs:
        page_size: Change the page size. Default 100.
        batch_size: The generator yields sources in batches (lists)
            instead of one by one. Default 0 (no batching).
    """
    page_size = kwargs.get('page_size', 100)
    batch_size = kwargs.get('batch_size', 0)
    params = {'page_size': page_size}
    # When batching, request batch_size items per page so each page
    # yields exactly one batch.
    if batch_size and page_size:
        params['page_size'] = batch_size
    async for source in self._get_data(endpoint='sources', params=params, batch_size=batch_size):
        yield source

async def get_source_by_id(self, source_id: str):
    """
    Get a source by its UUID.

    Args:
        source_id: the source UUID

    Returns:
        dict: source data
    """
    return await self._get(f'source/{source_id}')

async def _get_data(self, endpoint, params, batch_size=0):
    # Use cursor paginator unless the user has specified a page.
    if "page" not in params:
        params["use_cursor"] = "true"