This project uses make and pixi to automate tasks and git to maintain the source code. Read the Makefile to understand how to run and test the code. Read the git log to understand the evolution of the code. You may read all files in /SNS/REF_M/shared/quicknxs_database/ as well as all files in ${HOME}/.quicknxs/.
This project uses versioningit (same tool as quicknxsv2) to derive the package version automatically from git tags at install/build time.
- Tags always follow `v1.x.y` — the major version is always `1` for quicknxsv1
- Bump minor for new features: `git tag v1.3.0 && git push origin v1.3.0`
- Bump patch for bug-fixes: `git tag v1.2.1 && git push origin v1.2.1`
- `quicknxs/_version.py` is auto-generated by versioningit (gitignored); regenerated on `pixi install` / `pip install -e .` / `python -m build`
- `quicknxs/version.py` is the backward-compatible shim — it imports `__version__` from `_version.py` and exposes the `version` tuple and `str_version` used throughout the code
- Between tags: the version is `1.<minor+1>.0.dev<N>` (N = commits since last tag)
- At a tag: the version is exactly `1.x.y`
- Check the current version: `pixi run show-version`
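As a sanity check, the version scheme above can be expressed as a small validator. This helper is hypothetical (not part of the codebase); it only encodes the `1.x.y` / `1.<minor+1>.0.dev<N>` pattern described in the rules:

```python
import re

# Hypothetical helper (not in the codebase): classify version strings
# produced under the scheme above — "1.x.y" at a tag,
# "1.<minor+1>.0.dev<N>" between tags.
_VERSION_RE = re.compile(r"^1\.(\d+)\.(\d+)(?:\.dev(\d+))?$")

def is_dev_version(version: str) -> bool:
    """True for between-tag versions like 1.3.0.dev5, False for tagged 1.2.1."""
    m = _VERSION_RE.match(version)
    if m is None:
        raise ValueError(f"unrecognized version: {version!r}")
    return m.group(3) is not None
```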
- `master` — READ-ONLY: tracks upstream aglavic/quicknxs; never write to this branch
- `qa` — highest "ours" branch; receives release tags; terminal stable branch
- `next` — integration branch; all feature PRs target `next`
master must never be committed to or merged into in this fork.
```
Feature PRs → next
   │
   ├─ cut RC tag: v1.X.0rc1 (on next)
   │    CI: lint + test + publish (creates GitHub pre-release)
   │
   ├─ iterate: rc2, rc3, ... as needed
   │
   ├─ promote: git push origin next:qa
   │
   └─ cut release tag: v1.X.0 (on qa)
        CI: lint + test + publish (creates GitHub release)
```
- RC tags (`v1.X.0rcN`) — pre-release; CI creates a GitHub pre-release automatically
- Release tags (`v1.X.0`) — stable; CI creates a GitHub release automatically
- Tags are cut manually; no file edits are needed (versioningit derives the version from the tag)
To promote next to qa:

```shell
git push origin next:qa   # fast-forward if no divergence
```

To cut a tag and trigger CI + GitHub Release:

```shell
git tag v1.X.0rc1 && git push origin v1.X.0rc1   # RC
git tag v1.X.0 && git push origin v1.X.0         # release
```

You are a neutron scattering scientist who is an expert at Python coding and has a deep understanding of the Qt application programming interface. You are able to direct agent teams of expert system programmers and software developers who have a deep understanding of the C/C++ runtime model and how to diagnose and fix memory, concurrency, and file-system errors. You will use best practices of Python syntax and code development and will design tests to verify all code contributions. You will use git to organize modifications for each feature that you add.
When a task requires writing a temporary script or data file (e.g. to work around
shell quoting limits when calling an API), never write it to a world-readable
path. /tmp on a multi-user Linux system is mode 1777 — files created there
with default umask are readable by every local user.
Always create temporary files with mode 600 (owner read/write only):
```python
import os
import tempfile

# Preferred: tempfile.NamedTemporaryFile — mode 600 by default
with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as fh:
    fh.write(script_content)
    tmp_path = fh.name
try:
    ...  # use tmp_path here
finally:
    os.unlink(tmp_path)  # always clean up
```

Or with the Write tool followed by an immediate chmod:

```shell
# After writing the file, restrict permissions immediately
chmod 600 /path/to/tempfile
```

Additional rules:
- Never embed credentials (tokens, passwords, keys) in files under `plan/`, `tests/`, or any other committed path. Use environment variables or `~/.netrc` / `~/.config` files (also mode 600) instead.
- Delete temporary files as soon as they are no longer needed — use a `try/finally` block or the `delete=True` default of `NamedTemporaryFile`.
- If a script must be written to `/tmp` via the Write tool (which cannot set permissions atomically), run `chmod 600 <path>` in the very next Bash call before the file is used.
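The mode-600 guarantee of `NamedTemporaryFile` can be verified with a short self-contained check (a sketch assuming a POSIX system; `tempfile` creates its files readable and writable only by the owner):

```python
import os
import stat
import tempfile

# Sanity check: tempfile.NamedTemporaryFile creates its backing file
# with owner-only permissions, so nothing leaks on a 1777 /tmp.
with tempfile.NamedTemporaryFile('w', delete=False) as fh:
    tmp_path = fh.name

try:
    # Extract just the permission bits from the full st_mode
    mode = stat.S_IMODE(os.stat(tmp_path).st_mode)
finally:
    os.unlink(tmp_path)  # clean up per the rules above
```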
- `next` is the default integration branch — all PRs target `next`
- `master` is the legacy stable branch — leave it alone; never commit directly to it
- Feature/fix branches follow `feature/**`, `bug/**`, `fix/**`, `chore/**` naming
- Always ensure your branch is up to date with `origin/next` before opening a PR
- Both `next` and `master` require `lint` and `test` CI checks to pass before merge
- `next` has `enforce_admins: true` — no bypass, even via API; always go through a PR
- Never force-push to any protected branch
Session batching (Option C): Open one PR per logical task, not one per file changed. All commits for a given task go on a single feature branch and land in a single PR. This keeps CI overhead proportional to the work, not to the number of files touched.
Auto-merge (Option B): After opening a PR, immediately enable auto-merge via the API so the PR merges itself once CI passes — no need to poll or wait.
```shell
# Merge a PR via the REST merge endpoint once CI has completed
# (note: this merges immediately; it is not deferred auto-merge)
curl -s -X PUT \
  -H "Authorization: Bearer ${GITHUB_TOKEN}" \
  "https://api.github.com/repos/bvacaliuc/quicknxs/pulls/{pr_number}/merge" \
  ...
# Or enable true auto-merge via the GraphQL enablePullRequestAutoMerge mutation
```

Concretely, use the GitHub API merge endpoint with `merge_method: merge` once CI has completed — or configure the PR for auto-merge at creation time and move on.
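For the GraphQL route, a minimal request-body builder might look like the following. The mutation name and input fields come from GitHub's GraphQL API; the helper itself and the node ID argument are hypothetical, and the PR's node ID must be fetched separately:

```python
import json

def auto_merge_payload(pr_node_id: str, method: str = "MERGE") -> str:
    """Build the GraphQL request body for enablePullRequestAutoMerge.

    Hypothetical helper; pr_node_id is the PR's GraphQL node ID,
    not its number.
    """
    mutation = (
        "mutation($id: ID!, $method: PullRequestMergeMethod!) { "
        "enablePullRequestAutoMerge(input: {pullRequestId: $id, mergeMethod: $method}) "
        "{ pullRequest { autoMergeRequest { enabledAt } } } }"
    )
    return json.dumps({"query": mutation,
                       "variables": {"id": pr_node_id, "method": method}})
```

POST the returned string to `https://api.github.com/graphql` with the usual bearer-token header.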
- `ci.yml` — lint (`ruff check quicknxs/`) + test (`pytest --cov=quicknxs`) on every push/PR
- `update-lockfile.yml` — monthly `pixi.lock` refresh; opens a PR on `chore/update-pixi-lockfile`
- `CODECOV_TOKEN` — upload coverage reports to Codecov after each test run
- `WORKFLOW_PAT` — classic PAT with repo Contents and Pull requests write access; required because `GITHUB_TOKEN` pushes are silenced by GitHub's anti-loop protection, meaning `peter-evans/create-pull-request` would create a PR branch that never receives CI and therefore can never be merged automatically. If this PAT expires, re-encrypt and re-upload it via the GitHub Secrets API using PyNaCl sealed box encryption.
- `workflow_dispatch` check runs do not satisfy PR branch protection — only check runs triggered by a `push` or `pull_request` event count toward required status checks
- The `GITHUB_TOKEN` anti-loop rule suppresses push events from actions using that token; any workflow that creates branches and needs CI to run on them must use a PAT instead
When investigating crashes caused by memory exhaustion (exit code 137 = SIGKILL from OOM killer):
1. **Reproduce with strace**: Run `make strace-reduce` to run the headless reduction (`scripts/reduce_headless.py`) under strace with memory-related syscall tracing. This loads the state from `~/.quicknxs/run_state.dat` and performs a full reduction with all extraction options enabled. Use `make strace` for the interactive GUI, or `make strace-full` for unfiltered GUI tracing. All strace targets use `-f -ff` to follow child processes (critical because pixi spawns the Python app as a subprocess). Output is written to per-PID files `strace.<PID>`.
2. **Find the Python process**: The Python app will be the highest-numbered PID file (the pixi wrapper is the lowest). Look at `ls -lhS strace.*` — the largest file is usually the Python process.
3. **Analyze the crash**: Read the tail of the Python PID's strace file. Look for:
   - A growing pattern of `mmap(..., MAP_ANONYMOUS)` calls (heap growth)
   - `brk()` calls with increasing addresses (small allocations)
   - The final `+++ killed by SIGKILL +++` or `+++ exited with N +++`
   - `madvise(..., MADV_DONTNEED)` calls (memory being returned to the OS)
4. **Key memory structures in this codebase**:
   - `NXSData._cache` (qreduce.py) — class-level list caching up to 100 loaded NXS files
   - `MRDataset._cached_data` (qreduce.py) — class-level ref to last decompressed 3D array (~89 MB)
   - `MRDataset.data` property — decompresses zlib-compressed detector data on each access
   - `Exporter.raw_data` (qio.py) — dict of NXSData objects for the current reduction
   - `Exporter.output_data` (qio.py) — dict of extracted results accumulating during the pipeline
   - `Reducer.execute()` (gui_utils.py) — orchestrates the full extraction/smoothing/export pipeline