Warning: Experimental research software. Basic functionality works, but the software and results have not been validated by peer review yet.

LPSA — Analysis of continuous time‑resolved crystallographic data

This project applies a modified Low‑Pass Spectral Analysis (LPSA) to continuous time‑resolved crystallographic datasets. It extracts time‑dependent signals and decomposes them into interpretable components.

Highlights

  • Works with unmerged Partialator output (.hkl) and an event‑time list
  • Pre/post LPSA processing pipelines and export utilities
  • Config‑driven workflow (JSON) with sensible defaults
  • Intended to run on a Slurm cluster; depending on the parameters, it may submit a large number of jobs
  • Uses Xtrapol8 for map creation at the moment (subject to change; q‑weighting sometimes produces odd results)

Installation (Conda + editable pip)

Steps

# Create and activate an environment (adjust Python if needed)
conda create -n lpsa python=3.11 -y
conda activate lpsa

# From the project root (this folder)
python -m pip install --upgrade pip
pip install -e .
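
A quick sanity check that the editable install is importable (this only confirms the import path, not the full pipeline):

python -c "import lpsa; print(lpsa.__file__)"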

Inputs you need

  1. Unmerged Partialator output (.hkl)
  • Set the path via the config key partialator_unmerged.
  2. Event time list (tab‑separated)
  • A plain text file with whitespace/TSV columns: file event time
  • Parsed by lpsa.io_functions.load_time_from_list()
  • Set the path via the config key times_file.

Example (three tab‑separated columns):

image_0001.h5	/1	12.3
image_0002.h5	/2	12.4
image_0003.h5	/3	-0.1
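
If you assemble this list yourself, a minimal sketch in plain Python (file names and event identifiers are illustrative):

# Write an event-time list in the expected three-column layout:
# file <TAB> event <TAB> time
rows = [
    ("image_0001.h5", "/1", 12.3),
    ("image_0002.h5", "/2", 12.4),
    ("image_0003.h5", "/3", -0.1),
]
with open("event_times.tsv", "w") as fh:
    for fname, event, t in rows:
        fh.write(f"{fname}\t{event}\t{t}\n")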

Creating a config (Python API)

Do not hand‑craft JSON—use the API. Create a minimal config with defaults and then edit as needed.

from lpsa.organise_config import make_config

# directory is the working folder where config.json and outputs will live
config = make_config(
    directory="/path/to/work",
    partialator_unmerged="/path/to/unmerged.hkl",
    times_file="/path/to/event_times.tsv",
)

# The function writes config.json into the directory and returns the dict
print(config["configfile"])  # /path/to/work/config.json

Key defaults come from lpsa.organise_config.make_config; you can override any parameter by passing it explicitly or by editing the written JSON later.
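
For example, a default can be overridden at creation time, and the written JSON can be edited afterwards (the values shown are placeholders; see the key list below for what each parameter controls):

import json

from lpsa.organise_config import make_config

# Override defaults by passing them explicitly (placeholder values)
config = make_config(
    directory="/path/to/work",
    partialator_unmerged="/path/to/unmerged.hkl",
    times_file="/path/to/event_times.tsv",
    q=50,            # number of temporal samples/components
    time_sigma=0.5,  # temporal broadening
)

# ...or edit the written config.json later
with open(config["configfile"]) as fh:
    cfg = json.load(fh)
cfg["fmax"] = 10  # jcut = 2 * fmax + 1 is derived from this
with open(config["configfile"], "w") as fh:
    json.dump(cfg, fh, indent=2)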

Important config keys

Core inputs

  • partialator_unmerged (str): Path to the unmerged Partialator .hkl file.
  • times_file (str): Path to the tab‑separated list with columns (file, event, time).

Timing and sampling

  • min_t, max_t (float): Time window to include.
  • time_sigma (float), relative_ts (float): Temporal broadening and relative sigma options.
  • periodic (bool), mirroring (bool): Period handling and time‑axis mirroring.
  • down_sampled_rate (int), resample_x (bool): Downsampling and resampling controls.

Decomposition and model

  • q (int): Number of temporal samples/components for basis construction.
  • m (int): Model order.
  • fmax (int): Frequency cutoff; jcut = 2 * fmax + 1 is derived.
  • mode_encoding (str): Encoding mode (e.g., late_weights_parallel, late_weights_parallel_relative_ts).
  • mode_low_pass_filter (str): Temporal filtering mode (e.g., use_real_time).
  • use_weights (bool), self_referencing (bool): Weighting and reference strategies.

Crystallography

  • cell (list[6]), cif (str), dark_pdb (str), dark_mtz (str)
  • cut_res (bool), max_res (float), min_res (float)
  • point_group (str), symmetry (str)

I/O and storage

  • x_out_dir (str): Folder for X/P sparse matrices and metadata. Helper sets:
    • x_as_sparse, p_as_sparse, x_time_series, x_as_memmap, x_as_df_path, x_meta_path_as_df, x_resampled, dataframe_times_path, etc.
  • preLPSA, postLPSA (str): Subfolders for pre/post processing outputs; helpers populate:
    • Pre: hkl_out_pre_LPSA, mtz_out_pre_LPSA, difference_mtz_out_pre_LPSA, difference_map_out_pre_LPAS
    • Post: reconstruction_out_hkl, reconstruction_out_mtz, reconstruction_out_difference_mtz, reconstruction_out_difference_map
  • sub_dir_a, sub_dir_SVD (str): Result subfolders for A and SVD; auto‑derived names via set_q_paths using q, m, sigma/mode.

Misc

  • nproc (int): Parallel workers.
  • min_count (int): Minimum observations per bin.
  • load_all (bool): Load entire dataset vs. streamed.
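
Putting a few of these keys together, a written config.json might contain an excerpt like the one below. The values are illustrative, and the [a, b, c, alpha, beta, gamma] ordering of cell is an assumption based on the usual crystallographic convention:

{
  "partialator_unmerged": "/path/to/unmerged.hkl",
  "times_file": "/path/to/event_times.tsv",
  "min_t": -0.5,
  "max_t": 100.0,
  "q": 50,
  "m": 4,
  "fmax": 10,
  "cell": [62.3, 62.3, 111.1, 90.0, 90.0, 120.0],
  "point_group": "6",
  "nproc": 8
}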

Running via Python

Typical high‑level pattern (pseudocode; adapt it to your workflow APIs):

from lpsa import LPSA

config_path = "/path/to/work/config.json"
runner = LPSA.LPSA(config_path)

# Example actions exposed by the internal pipeline; consult source for details
# runner.make_x()
# runner.make_A()
# runner.calc_ev()
# runner.calc_modes()

Using the GUI

The GUI lets you create/load configs and run analyses interactively.

Launch

python -m lpsa.LPSA_GUI.source.gui
# or
python lpsa/LPSA_GUI/source/gui.py

In the GUI

  1. Create new config: pick a JSON path; the GUI calls make_config(os.path.dirname(path)).
  2. Set partialator_unmerged and times_file to your actual files.
  3. Adjust parameters (q, m, time_sigma, fmax, windows).
  4. Save and start processing; monitor status and plots.

Outputs are written to the folders configured in your JSON (see keys above). Eigenvalue plots, chronos, and reconstructions land under the auto‑generated sub_dir_a/sub_dir_SVD and pre/post‑LPSA directories.

Troubleshooting

  • Install/build issues
    • Ensure a compiler toolchain is present; upgrade pip/setuptools/wheel.
  • Event time file parsing
    • Must be whitespace/TSV with three columns (file, event, time). See lpsa/io_functions.py:load_time_from_list.
  • Paths moved or renamed
    • Use the update helpers update_config, set_paths_x, set_paths_pre_LPSA, and set_paths_post_LPSA to re‑wire outputs safely (see the sketch below).
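
A minimal sketch of re‑pointing a moved working directory. The exact signatures of these helpers are not documented here, so the calls are assumptions and are left commented, as in the pipeline example above:

import json

from lpsa import organise_config

config_path = "/new/location/config.json"
with open(config_path) as fh:
    cfg = json.load(fh)

cfg["directory"] = "/new/location"  # point the config at its new home

# Hypothetical calls; consult lpsa/organise_config.py for the real signatures
# organise_config.update_config(cfg)
# organise_config.set_paths_x(cfg)
# organise_config.set_paths_pre_LPSA(cfg)
# organise_config.set_paths_post_LPSA(cfg)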
