Fix database accumulation in gzip snapshots across workflow runs #6
Database snapshots (`database.sql.gz`, `database2.sql.gz`) were being overwritten on each run instead of accumulating historical auction data.

## Root Cause

`__main__.py` creates fresh databases on each run. After `prepare_db_snapshots.py` removes the `.db` files, subsequent runs had no restoration step, causing data loss.

## Changes
Added `restore_database_from_gzip()` to `__main__.py`:

- Decompresses the `.sql.gz` files using `gzip.open()`
- Replays the dumped SQL via `sqlite3.executescript()` inside a context manager
- Called at the start of `main()`, before any new data is written

A hedged sketch of the restore step follows the list.
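A minimal sketch of the restore helper, assuming file names that match the snapshots above; the exact signature and paths in `__main__.py` may differ:

```python
import gzip
import sqlite3
from pathlib import Path


def restore_database_from_gzip(db_path: str, snapshot_path: str) -> None:
    """Rebuild a SQLite database from a gzipped SQL dump, if one exists."""
    snapshot = Path(snapshot_path)
    if not snapshot.exists():
        # First run (or missing snapshot): nothing to restore.
        return
    # Decompress the dump in text mode and read the full SQL script.
    with gzip.open(snapshot, "rt", encoding="utf-8") as dump:
        script = dump.read()
    # Replay schema + data; the connection context manager commits on success.
    conn = sqlite3.connect(db_path)
    try:
        with conn:
            conn.executescript(script)
    finally:
        conn.close()


def main() -> None:
    # Restore prior history before any tables are (re)created or appended to.
    # These .db/.sql.gz pairings are assumptions based on the snapshot names.
    restore_database_from_gzip("database.db", "database.sql.gz")
    restore_database_from_gzip("database2.db", "database2.sql.gz")
    # ... scrape new auctions and insert them as before ...
```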
Now each workflow run restores existing data, appends new auctions, then `prepare_db_snapshots.py` creates updated gzip snapshots containing the full history.
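For context, the snapshot side of that cycle can be approximated with `sqlite3.Connection.iterdump()`. This is a sketch under that assumption; `create_db_snapshot` is a hypothetical name, not necessarily the function in `prepare_db_snapshots.py`:

```python
import gzip
import sqlite3


def create_db_snapshot(db_path: str, snapshot_path: str) -> None:
    """Dump a SQLite database to a gzipped SQL script (full schema + rows)."""
    conn = sqlite3.connect(db_path)
    try:
        with gzip.open(snapshot_path, "wt", encoding="utf-8") as dump:
            for statement in conn.iterdump():
                dump.write(f"{statement}\n")
    finally:
        conn.close()
```

Dumping SQL text rather than copying the raw `.db` file keeps the snapshots diffable and lets the restore step replay them with `executescript()`.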