
Conversation

@venkat1701

Context

Memory strategies are now represented as named, top-level memory scopes (e.g., a long-term sqlite store) that agents reference explicitly, rather than through the older, implicit memory configuration patterns. This PR updates the memory demo example and the sqlite memory backend plumbing so that the strategy-based configuration works cleanly and consistently and CI lint/tests pass.

What I did

Added a new driver-based memory subsystem, implemented the initial sqlite backend, and wired memory scopes into team loading/runtime.

Before / After

Before

There was no unified “memory scope” model: team loading couldn’t reliably create per-agent memory tools from named memory configurations, and there was no strategy-aware validation for memory backends/configs.
After

  • Memory is modeled as named scopes with explicit backends/strategies and validated centrally.
  • Team loading creates memory drivers once, and agents get memory tools based on their referenced scopes.
  • The existing memory tool behavior is preserved via an adapter, while enabling future backends/strategies beyond sqlite.
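
At a high level, the driver abstraction looks roughly like this (a simplified sketch; the identifier names here are illustrative and may not match the exact names in the diff):

// Simplified sketch of the driver abstraction; names are illustrative,
// not the actual identifiers in this PR.
package memory

import "context"

// Driver is the pluggable backend behind a named memory scope.
// Backends such as the sqlite store own real resources (database handles),
// so a driver is also a Closer and must be released on team shutdown.
type Driver interface {
    // Store persists a memory entry for the given agent.
    Store(ctx context.Context, agent, entry string) error
    // Query returns up to limit entries relevant to the query string.
    Query(ctx context.Context, agent, query string, limit int) ([]string, error)
    // Close releases the underlying resources.
    Close() error
}

Team loading resolves each named scope to a single driver instance, and every agent that references that scope gets its memory tool wired to the same driver through the adapter mentioned above.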

@krissetto
Contributor

/review

@github-actions bot left a comment

Code Review Summary

Reviewed the pluggable memory system implementation. Found 1 high-severity issue that needs addressing.

Issues Found

  • HIGH: Resource leak - memory drivers are never closed

Note

The SQL query construction that uses fmt.Sprintf for the LIMIT clause is safe, since query.Limit is typed as an int rather than a string.
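
For illustration, the pattern under discussion reduces to something like the following (simplified; the table, function, and parameter names are invented for this note, not taken from the diff):

// Simplified illustration only; names are invented for this example.
package memory

import (
    "context"
    "database/sql"
    "fmt"
)

func queryMemories(ctx context.Context, db *sql.DB, agent string, limit int) (*sql.Rows, error) {
    q := "SELECT content FROM memories WHERE agent = ?"
    if limit > 0 {
        // limit is an int, so %d can only ever render digits;
        // no attacker-controlled SQL can travel through it.
        q += fmt.Sprintf(" LIMIT %d", limit)
    }
    // The user-controlled agent value still goes through a placeholder.
    return db.QueryContext(ctx, q, agent)
}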

The redundant conditional check in validate.go (lines 168-172) is a minor code quality issue but doesn't affect correctness.

if err != nil {
    return nil, fmt.Errorf("failed to create memory driver %q: %w", name, err)
}
memoryDrivers[name] = driver

Resource Leak: Memory drivers are never closed

The memory drivers created here implement io.Closer (database connections need cleanup), but they're never closed. This will leak database connections and file descriptors.

Impact: Each time agents are loaded, database connections accumulate without being released. This will cause:

  • File descriptor exhaustion
  • Database connection pool exhaustion
  • Memory leaks from unclosed resources

Fix: Add cleanup logic, for example:

// Store drivers for cleanup
team.memoryDrivers = memoryDrivers

// In team cleanup/shutdown:
for _, driver := range team.memoryDrivers {
    driver.Close()
}

Or use a defer pattern if drivers should be closed when LoadWithConfig returns (though this seems incorrect for the use case - drivers likely need to live as long as the agents).
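
If the team does end up owning the drivers, a Close method along these lines would release them in one place (the Team type, field, and method names below are assumptions, not the existing API):

// Sketch only; the Team type and its memoryDrivers field are assumed,
// not taken from this repository.
package teamloader

import (
    "errors"
    "fmt"
    "io"
)

type Team struct {
    memoryDrivers map[string]io.Closer
}

// Close shuts down every memory driver and reports all failures together.
func (t *Team) Close() error {
    var errs []error
    for name, driver := range t.memoryDrivers {
        if err := driver.Close(); err != nil {
            errs = append(errs, fmt.Errorf("closing memory driver %q: %w", name, err))
        }
    }
    return errors.Join(errs...)
}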
