From 5e7a2d8f78fecb7546dce644b522a2143ce63b15 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Fri, 19 Dec 2025 09:27:45 -0600 Subject: [PATCH 01/67] Improve test coverage of Python/C++ interface code **Files:** modulewrap.cpp, lifelinewrap.cpp * **Feature: RAII for Python Objects** * **Change:** Introduced `PyObjectPtr` (a `std::shared_ptr` alias with a custom `PyObjectDeleter`) to manage Python object reference counts. * **Rationalization:** Manual reference counting (`Py_INCREF`/`Py_DECREF`) is error-prone, especially in the presence of C++ exceptions. If an exception is thrown, manual decrements might be skipped, leading to memory leaks. * **Resolution:** `PyObjectPtr` ensures that `Py_DECREF` is called automatically when the pointer goes out of scope, even during stack unwinding. * **Fix: Robust Annotation Parsing** * **Change:** Rewrote the argument parsing logic in `parse_args` to iterate over the `__annotations__` dictionary using `PyDict_Next` and explicitly skip the `"return"` key. * **Root Cause:** The previous implementation relied on `PyDict_Values`, which returns all values including the return type annotation. Depending on dictionary iteration order (which can vary or be insertion-ordered), the return type could be mistakenly interpreted as an input argument type. * **Diagnosis:** Likely diagnosed by observing type mismatch errors when Python functions had return type annotations. * **Fix: Flexible Input Conversion (List vs. NumPy)** * **Change:** Replaced rigid macro-based vector converters with explicit implementations (`py_to_vint`, `py_to_vuint`, etc.) that accept both Python `list` and NumPy `ndarray` objects. * **Root Cause:** The previous converters strictly expected NumPy arrays. Users passing standard Python lists would cause runtime errors or type mismatches. * **Resolution:** The new converters check the input type (`PyList_Check` vs `PyArray_Check`) and handle data extraction accordingly. * **Fix: Memory Safety in Cyclic GC** * **Change:** Added `PyObject_GC_UnTrack(pyobj)` in `ll_dealloc` (`lifelinewrap.cpp`). * **Root Cause:** Python objects that support cyclic garbage collection must be untracked before deallocation to prevent the GC from visiting invalid memory. Missing this can lead to segfaults during interpreter shutdown or garbage collection cycles. * **Fix: Type String Matching** * **Change:** Replaced brittle fixed-offset string comparisons (e.g., `inp_type.compare(pos, ...)`) with robust substring searching (`suffix.find(...)`). Corrected a typo where `double64]]` was checked instead of `float64]]`. * **Root Cause:** The fixed-offset logic assumed a specific string format for type signatures, which could break if the format changed slightly. The typo prevented `float64` arrays from being correctly identified. **Files:** CMakeLists.txt, CMakeLists.txt * **Enhancement: Reduced Build Dependencies** * **Change:** Removed the dependency on the external `packaging` Python module in CMakeLists.txt. * **Rationalization:** The build system previously used `packaging.version` to check module versions. This required the `packaging` library to be installed in the build environment. * **Resolution:** Implemented a lightweight, inline version parser (splitting strings by `.`) to perform the check using only the standard library. * **Fix: GCC 14+ Warning Suppression** * **Change:** Added `-Wno-maybe-uninitialized` to compile options for GCC 14.1+. 
* **Root Cause:** Newer GCC versions have more aggressive static analysis that produces false positives for uninitialized variables in complex C++ templates used by the project. **Files:** copilot-instructions.md * **New Feature:** Added a comprehensive instructions file for GitHub Copilot. * **Rationalization:** To standardize the behavior of AI assistants working in the repository, ensuring they follow project-specific coding standards (formatting, error handling) and workflow guidelines. User: Now, please summarize test code changes and additions, including rationale for the test, how they improve test coverage, any problems they exposed, and what categories of regression they might detect in future code. GitHub Copilot: Ran terminal command: git show 15cac5f test/python/ **Files:** vectypes.py, test_types.py, pyvectypes.jsonnet, pytypes.jsonnet, verify_extended.py * **Rationale:** * The existing tests primarily covered basic integer and string types. * There was a gap in coverage for: * **Floating point types** (`float`, `double`). * **Unsigned integers** (`unsigned int`, `unsigned long`). * **64-bit integers** (`long`, `int64_t`). * **NumPy array interoperability** (passing vectors from C++ to Python as NumPy arrays). * These tests were added to verify the robustness of the new `modulewrap.cpp` converters. * **Coverage Improvements:** * **`py:types`**: Validates scalar type conversion between C++ and Python for `float`, `double`, and `unsigned int`. * **`py:vectypes`**: Validates vector/array conversion. It tests: * Creation of NumPy arrays from scalar inputs (`collectify_*`). * Summation of NumPy arrays back to scalars (`sum_array_*`). * Handling of all major numeric types: `int32`, `uint32`, `int64`, `uint64`, `float32`, `float64`. * **verify_extended.py**: Introduces specialized verifiers (`VerifierFloat`, `VerifierUInt`, etc.) that handle type-specific assertions (e.g., epsilon comparison for floats). * **Problems Exposed:** * **Integer Overflow/Underflow**: The `py:vectypes` test exposed a logic error in source.cpp where large 64-bit hashes were being used in arithmetic (`100 - id`), causing underflow for unsigned types and wrapping for signed types. This was fixed by introducing modulo arithmetic to keep values small and predictable. * **Type Mismatches**: The strict type checking in the new tests likely exposed the need for the robust annotation parsing and explicit type converters implemented in `modulewrap.cpp`. * **Regression Detection:** * **Type Conversion Breakages**: These tests will fail if future changes to `modulewrap.cpp` break the mapping between C++ types (like `std::vector`) and Python types (like `numpy.ndarray` or `list`). * **Precision Loss**: The float/double tests will catch regressions where 64-bit precision is accidentally truncated to 32-bit. * **Sign Errors**: The unsigned integer tests will detect if unsigned values are incorrectly cast to signed values (e.g., treating `UINT_MAX` as `-1`). **Files:** CMakeLists.txt, source.cpp * **Rationale:** * To support the new tests and ensure the test environment is consistent with real-world usage. * To fix flaky or incorrect test data generation. * **Changes:** * **CMakeLists.txt**: * Added `py:vectypes` and `py:types` to the test suite. * Enhanced `PYTHONPATH` setup to explicitly include `Python_SITELIB` and `Python_SITEARCH`. This ensures tests running in embedded environments (like Spack) can find installed packages. 
* Replaced the external `packaging` dependency with a simple inline version parser for the module check. * **source.cpp**: * Expanded the C++ data provider to generate all required types (`float`, `double`, `uint`, `int64`, `uint64`). * **Fix**: Changed data generation logic from `id.number()` to `id.number() % N` to prevent integer overflow and ensure deterministic summation results. **Files:** adder.py, all_config.py, reducer.py, sumit.py, verify.py * **Rationale:** * To comply with the project's stricter linting rules (`ruff`, `mypy`) introduced in this commit. * **Changes:** * **Formatting**: Applied standard Python formatting (whitespace, indentation). * **Linting**: Fixed issues like: * Comparison to `False` (changed `== False` to `is False` or kept as is with `# noqa` if intentional for testing). * Missing docstrings or blank lines. * Unused imports. * **Type Hinting**: Added or corrected type hints to satisfy `mypy`. * **Regression Detection:** * **Static Analysis**: By enforcing these standards, the CI pipeline can now detect syntax errors, undefined variables, and type inconsistencies before tests are even run. Apply clang-format fixes Apply cmake-format fixes Improve Python list support and test coverage - plugins/python/src/modulewrap.cpp: - Added support for `list['double']` string representation in input converters. - Implemented output transformation support for `list[double]` and `list['double']`, mapping them to `py_to_vdouble`. - This enables Phlex to correctly handle Python lists of floats/doubles when strict typing is required. - test/python/vectypes.py: - Refactored to include specific helper functions (e.g., `collectify_float64_list`, `sum_list_int64`) that return standard Python lists instead of NumPy arrays. - Updated registration logic to use distinct node names (e.g., `name="sum_int32"`) to prevent collisions between list and NumPy test cases. - This ensures that the new C++ list conversion paths are explicitly exercised. - test/python/pyveclists.jsonnet: - Added a new test configuration to run the list-specific vector tests. - test/python/CMakeLists.txt: - Registered the new `py:veclists` test. - test/demo-giantdata/unfold_transform_fold.cpp: - Increased workload (n_spills from 10 to 100) to fix a race condition in the pipelined execution test. - The test verifies that `fold` operations begin before `unfold` operations complete. - Previously, the workload was too small, allowing the `unfold` phase to finish completely before the `fold` phase started, causing the test to falsely fail by observing "batched" instead of "pipelined" execution. Effect on Coverage: - The changes significantly improve coverage in `plugins/python/src/modulewrap.cpp`. - Specifically, the `py_to_vdouble` function now has confirmed execution hits on the list processing path (previously 0 hits). - The `py_to_vint` function also shows increased coverage for list processing. - This confirms that the bindings now correctly intercept and convert Python list objects to Phlex vector types. Fix CodeQL alert: Empty except block in verify_extended.py Apply clang-format fixes Improve Python test coverage - Added test/python/test_mismatch.py and pymismatch.jsonnet to test error handling in modulewrap.cpp when input labels and types do not match. - Added test/python/test_coverage.py and pycoverage.jsonnet to test list[int], list[float], and list[double] input converters. - Updated test/python/CMakeLists.txt to include the new tests and fixed the regex for py:mismatch to match the actual error message. 
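A note on the verification layer mentioned above: verify_extended.py is described as providing type-specific verifiers (epsilon comparison for floats, exact comparison for unsigned integers), but its contents are not reproduced in this summary. The following standalone sketch illustrates only the comparison strategy; the class names are borrowed from the description above, and the real verify_extended.py implementation and its Phlex registration hooks may be structured differently.

```python
"""Illustrative-only sketch of type-specific verification helpers."""

import math


class VerifierFloat:
    """Compare floating-point sums using a relative tolerance."""

    def __init__(self, expected: float, rel_tol: float = 1e-6) -> None:
        self.expected = expected
        self.rel_tol = rel_tol

    def check(self, value: float) -> None:
        # float32 round-trips through NumPy can lose low-order bits, so an
        # exact `==` comparison would be flaky; use an epsilon instead.
        assert math.isclose(value, self.expected, rel_tol=self.rel_tol), (
            f"expected {self.expected}, got {value}"
        )


class VerifierUInt:
    """Unsigned results must match exactly and must never go negative."""

    def __init__(self, expected: int) -> None:
        self.expected = expected

    def check(self, value: int) -> None:
        # A negative value here means an unsigned C++ result was
        # reinterpreted as signed somewhere in the type bridge.
        assert value >= 0, f"unsigned value wrapped to {value}"
        assert value == self.expected, f"expected {self.expected}, got {value}"
```

The two checks map directly onto the regression categories listed earlier: precision loss for floating-point results and sign/wrap-around errors for unsigned results.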
Apply cmake-format fixes --- .github/copilot-instructions.md | 55 ++ CMakeLists.txt | 21 +- plugins/python/CMakeLists.txt | 59 +- plugins/python/README.md | 55 ++ plugins/python/src/lifelinewrap.cpp | 2 + plugins/python/src/modulewrap.cpp | 526 +++++++++++++++--- scripts/README.md | 17 +- test/demo-giantdata/unfold_transform_fold.cpp | 2 +- test/python/CMakeLists.txt | 85 ++- test/python/pycoverage.jsonnet | 18 + test/python/pymismatch.jsonnet | 13 + test/python/pytypes.jsonnet | 33 ++ test/python/pyveclists.jsonnet | 61 ++ test/python/pyvectypes.jsonnet | 60 ++ test/python/source.cpp | 53 +- test/python/test_coverage.py | 43 ++ test/python/test_mismatch.py | 12 + test/python/test_types.py | 129 +++++ test/python/vectypes.py | 228 ++++++++ test/python/verify_extended.py | 151 +++++ 20 files changed, 1523 insertions(+), 100 deletions(-) create mode 100644 plugins/python/README.md create mode 100644 test/python/pycoverage.jsonnet create mode 100644 test/python/pymismatch.jsonnet create mode 100644 test/python/pytypes.jsonnet create mode 100644 test/python/pyveclists.jsonnet create mode 100644 test/python/pyvectypes.jsonnet create mode 100644 test/python/test_coverage.py create mode 100644 test/python/test_mismatch.py create mode 100644 test/python/test_types.py create mode 100644 test/python/vectypes.py create mode 100644 test/python/verify_extended.py diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md index b9b1e9c7..b7a4c349 100644 --- a/.github/copilot-instructions.md +++ b/.github/copilot-instructions.md @@ -137,3 +137,58 @@ All Markdown files must strictly follow these markdownlint rules: - **MD034**: No bare URLs (for example, use a markdown link like `[text](destination)` instead of a plain URL) - **MD036**: Use # headings, not **Bold:** for titles - **MD040**: Always specify code block language (for example, use '```bash', '```python', '```text', etc.) + +## Development & Testing Workflows + +### Build and Test + +- **Environment**: Always source `setup-env.sh` before building or testing. This applies to all environments (Dev Container, local machine, HPC). +- **Configuration**: + - **Presets**: Prefer `CMakePresets.json` workflows (e.g., `cmake --preset default`). + - **Generator**: Prefer `Ninja` over `Makefiles` when available (`-G Ninja`). +- **Build**: + - **Parallelism**: Always use multiple cores. Ninja does this by default. For `make`, use `cmake --build build -j $(nproc)`. +- **Test**: + - **Parallelism**: Run tests in parallel using `ctest -j $(nproc)` or `ctest --parallel `. + - **Selection**: Run specific tests with `ctest -R "regex"` (e.g., `ctest -R "py:*"`). + - **Debugging**: Use `ctest --output-on-failure` to see logs for failed tests. + +### Python Integration + +- **Naming**: Avoid naming Python test scripts `types.py` or other names that shadow standard library modules. This causes obscure import errors (e.g., `ModuleNotFoundError: No module named 'numpy'`). +- **PYTHONPATH**: When running tests in Spack environments, ensure `PYTHONPATH` includes `site-packages`. In CMake, explicitly add `Python_SITELIB` and `Python_SITEARCH` to `TEST_PYTHONPATH`. +- **Test Structure**: + - **C++ Driver**: Provides data streams (e.g., `test/python/driver.cpp`). + - **Jsonnet Config**: Wires the graph (e.g., `test/python/pytypes.jsonnet`). + - **Python Script**: Implements algorithms (e.g., `test/python/test_types.py`). +- **Type Conversion**: `plugins/python/src/modulewrap.cpp` handles C++ $\leftrightarrow$ Python conversion. 
+ - **Mechanism**: Uses string comparison of type names (e.g., `"float64]]"`). This is brittle. + - **Requirement**: Ensure converters exist for all types used in tests (e.g., `float`, `double`, `unsigned int`, and their vector equivalents). + - **Warning**: Exact type matches are required. `numpy.float32` != `float`. + +### Coverage Analysis + +- **Tooling**: The project uses LLVM source-based coverage. +- **Requirement**: The `phlex` binary must catch exceptions in `main` to ensure coverage data is flushed to disk even when tests fail/crash. +- **Generation**: + - **CMake Targets**: `coverage-xml`, `coverage-html` (if configured). + - **Manual**: + 1. Run tests with `LLVM_PROFILE_FILE` set (e.g., `export LLVM_PROFILE_FILE="profraw/%m-%p.profraw"`). + 2. Merge profiles: `llvm-profdata merge -sparse profraw/*.profraw -o coverage.profdata`. + 3. Generate report: `llvm-cov show -instr-profile=coverage.profdata -format=html ...` + +### Local GitHub Actions Testing (`act`) + +- **Tool**: Use `act` to run GitHub Actions workflows locally. +- **Configuration**: Ensure `.actrc` exists in the workspace root with the following content to use a compatible runner image: + ```text + -P ubuntu-latest=catthehacker/ubuntu:act-latest + ``` +- **Usage**: + - List jobs: `act -l` + - Run specific job: `act -j ` (e.g., `act -j python-check`) + - Run specific event: `act pull_request` +- **Troubleshooting**: + - **Docker Socket**: `act` requires access to the Docker socket. In dev containers, this may require specific mount configurations or permissions. + - **Artifacts**: `act` creates a `phlex-src` directory (or similar) for checkout. Ensure this is cleaned up or ignored by tools like `mypy`. + diff --git a/CMakeLists.txt b/CMakeLists.txt index 59ff4d81..0e99e585 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -70,13 +70,12 @@ add_compile_options( ) if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU") - if( - CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL "14.1" - AND CMAKE_COMPILER_VERSION VERSION_LESS "15" - ) - # GCC 14.1 issues many false positives re. array-bounds and + if(CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL "14.1") + # GCC 14.1+ issues many false positives re. 
array-bounds and # stringop-overflow - add_compile_options(-Wno-array-bounds -Wno-stringop-overflow) + add_compile_options( + -Wno-array-bounds -Wno-stringop-overflow -Wno-maybe-uninitialized + ) endif() endif() @@ -108,8 +107,9 @@ if(ENABLE_TSAN) -g -O1 # Ensure no optimizations interfere with TSan - "$<$:-fno-omit-frame-pointer -fno-optimize-sibling-calls>" - ) + "$<$:-fno-omit-frame-pointer>" + "$<$:-fno-optimize-sibling-calls>" + ) add_link_options(-fsanitize=thread) else() message(FATAL_ERROR "ThreadSanitizer is not supported with ${CMAKE_CXX_COMPILER_ID}") @@ -130,8 +130,9 @@ if(ENABLE_ASAN) -g -O1 # Ensure no optimizations interfere with ASan - "$<$:-fno-omit-frame-pointer -fno-optimize-sibling-calls>" - ) + "$<$:-fno-omit-frame-pointer>" + "$<$:-fno-optimize-sibling-calls>" + ) add_link_options(-fsanitize=address) else() message(FATAL_ERROR "AddressSanitizer is not supported with ${CMAKE_CXX_COMPILER_ID}") diff --git a/plugins/python/CMakeLists.txt b/plugins/python/CMakeLists.txt index 2554a6dc..357d42c8 100644 --- a/plugins/python/CMakeLists.txt +++ b/plugins/python/CMakeLists.txt @@ -1,15 +1,54 @@ find_package(Python 3.12 COMPONENTS Interpreter Development NumPy REQUIRED) -if(Python_NumPy_VERSION VERSION_LESS "2.0.0") - message( - FATAL_ERROR - "NumPy version is too low: ${Python_NumPy_VERSION} found, at least 2.0.0 required" - ) -endif() - -# Phlex module to run Python algorithms -add_library( - pymodule +if(Python_FOUND) + # Verify installation of necessary python modules for specific tests + + function(check_python_module_version MODULE_NAME MIN_VERSION OUT_VAR) + execute_process( + COMMAND + ${Python_EXECUTABLE} -c + "import sys +try: + import ${MODULE_NAME} + installed_version = getattr(${MODULE_NAME}, '__version__', None) + if not installed_version: + sys.exit(2) + + def parse(v): + return tuple(map(int, v.split('.')[:3])) + + if parse(installed_version) >= parse('${MIN_VERSION}'): + sys.exit(0) + else: + sys.exit(2) # Version too low +except ImportError: + sys.exit(1) +except Exception: + sys.exit(1)" + RESULT_VARIABLE _module_check_result + ) + + if(_module_check_result EQUAL 0) + set(${OUT_VAR} TRUE PARENT_SCOPE) + elseif(_module_check_result EQUAL 1) + set(${OUT_VAR} FALSE PARENT_SCOPE) # silent b/c common + elseif(_module_check_result EQUAL 2) + message( + WARNING + "Python module '${MODULE_NAME}' found but version too low (min required: ${MIN_VERSION})." + ) + set(${OUT_VAR} FALSE PARENT_SCOPE) + else() + message(WARNING "Unknown error while checking Python module '${MODULE_NAME}'.") + set(${OUT_VAR} FALSE PARENT_SCOPE) + endif() + endfunction() + + check_python_module_version("numpy" "2.0.0" HAS_NUMPY) + + # Phlex module to run Python algorithms + add_library( + pymodule MODULE src/pymodule.cpp src/modulewrap.cpp diff --git a/plugins/python/README.md b/plugins/python/README.md new file mode 100644 index 00000000..e84525e7 --- /dev/null +++ b/plugins/python/README.md @@ -0,0 +1,55 @@ +# Phlex Python Plugin Architecture + +This directory contains the C++ source code for the Phlex Python plugin, which enables Phlex to execute Python code as part of its computation graph. + +## Architecture Overview + +The integration is built on the **Python C API** (not `pybind11`) to maintain strict control over the interpreter lifecycle and memory management. + +### 1. The "Type Bridge" (`modulewrap.cpp`) + +The core of the integration is the type conversion layer in `src/modulewrap.cpp`. 
This layer is responsible for: +- Converting Phlex `Product` objects (C++) into Python objects (e.g., `PyObject*`, `numpy.ndarray`). +- Converting Python return values back into Phlex `Product` objects. + +**Critical Implementation Detail:** +The type mapping relies on **string comparison** of type names. +- **Mechanism**: The C++ code checks `type_name() == "float64]]"` to identify a 2D array of doubles. +- **Brittleness**: This is a fragile contract. If the type name changes (e.g., `numpy` changes its string representation) or if a user provides a slightly different type (e.g., `float` vs `np.float32`), the bridge may fail. +- **Extension**: When adding support for new types, you must explicitly add converters in `modulewrap.cpp` for both scalar and vector/array versions. + +### 2. Hybrid Configuration + +Phlex uses a hybrid configuration model involving three languages: + +1. **Jsonnet** (`*.jsonnet`): Defines the computation graph structure. It specifies: + - The nodes in the graph. + - The Python module/class to load for specific nodes. + - Configuration parameters passed to the Python object. +2. **C++ Driver**: The executable that: + - Parses the Jsonnet configuration. + - Initializes the Phlex core. + - Loads the Python interpreter and the specified plugin. +3. **Python Code** (`*.py`): Implements the algorithmic logic. + +### 3. Environment & Testing + +Because the Python interpreter is embedded within the C++ application, the runtime environment is critical. + +- **PYTHONPATH**: Must be set correctly to include: + - The build directory (for generated modules). + - The source directory (for user scripts). + - System/Spack `site-packages` (for dependencies like `numpy`). +- **Naming Collisions**: + - **Warning**: Do not name test files `types.py`, `test.py`, `code.py`, or other names that shadow standard library modules. + - **Consequence**: Shadowing can cause obscure failures in internal libraries (e.g., `numpy` failing to import because it tries to import `types` from the standard library but gets your local file instead). + +## Development Guidelines + +1. **Adding New Types**: + - Update `src/modulewrap.cpp` to handle the new C++ type. + - Add a corresponding test case in `test/python/` to verify the round-trip conversion. +2. **Testing**: + - Use `ctest` to run tests. + - Tests are integration tests: they run the full C++ application which loads the Python script. + - Debugging: Use `ctest --output-on-failure` to see Python exceptions. 
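As a concrete illustration of the string-matching mechanism described in the README above, the standalone snippet below shows what the bridge sees when it stringifies an `npt.NDArray[np.float64]` annotation. This is an orientation sketch only, not part of the plugin; the exact `str()` output depends on the NumPy version, which is why `modulewrap.cpp` searches for the trailing `"float64]]"` substring rather than comparing the full type name.

```python
# Orientation sketch (not part of the plugin): prints the string form of a
# NumPy return annotation that modulewrap.cpp suffix-matches against.
import numpy as np
import numpy.typing as npt


def collect(i: float, j: float) -> npt.NDArray[np.float64]:
    """Round-trip candidate: two scalars in, one float64 array out."""
    return np.array([i, j], dtype=np.float64)


# str() of the return annotation looks roughly like
# "numpy.ndarray[tuple[int, ...], numpy.dtype[numpy.float64]]"
# (the leading part varies across NumPy versions); the trailing "float64]]"
# is the substring the C++ bridge looks for before wiring in the
# vdouble converters.
print(str(collect.__annotations__["return"]))
```

Matching on a substring rather than the full representation is what allows the bridge to tolerate changes to the leading part of NumPy's type name, which is the brittleness called out above.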
diff --git a/plugins/python/src/lifelinewrap.cpp b/plugins/python/src/lifelinewrap.cpp index 0f81e6bb..e4893480 100644 --- a/plugins/python/src/lifelinewrap.cpp +++ b/plugins/python/src/lifelinewrap.cpp @@ -31,9 +31,11 @@ static int ll_clear(py_lifeline_t* pyobj) static void ll_dealloc(py_lifeline_t* pyobj) { + PyObject_GC_UnTrack(pyobj); Py_CLEAR(pyobj->m_view); typedef std::shared_ptr generic_shared_t; pyobj->m_source.~generic_shared_t(); + Py_TYPE(pyobj)->tp_free((PyObject*)pyobj); } // clang-format off diff --git a/plugins/python/src/modulewrap.cpp b/plugins/python/src/modulewrap.cpp index 00b123d6..c242298d 100644 --- a/plugins/python/src/modulewrap.cpp +++ b/plugins/python/src/modulewrap.cpp @@ -2,11 +2,15 @@ #include "wrap.hpp" #include +#include #include +#include #include #include #include +// static std::mutex g_py_mutex; + #define NO_IMPORT_ARRAY #define PY_ARRAY_UNIQUE_SYMBOL phlex_ARRAY_API #include @@ -15,6 +19,17 @@ using namespace phlex::experimental; using phlex::concurrency; using phlex::product_query; +struct PyObjectDeleter { + void operator()(PyObject* p) const + { + if (p) { + phlex::experimental::PyGILRAII gil; + Py_DECREF(p); + } + } +}; +using PyObjectPtr = std::shared_ptr; + // TODO: the layer is currently hard-wired and should come from the product // specification instead, but that doesn't exist in Python yet. static std::string const LAYER = "job"; @@ -55,12 +70,12 @@ namespace { return oss.str(); } - static inline PyObject* lifeline_transform(intptr_t arg) + static inline PyObject* lifeline_transform(PyObject* arg) { - if (Py_TYPE((PyObject*)arg) == &PhlexLifeline_Type) { + if (Py_TYPE(arg) == &PhlexLifeline_Type) { return ((py_lifeline_t*)arg)->m_view; } - return (PyObject*)arg; + return arg; } // callable object managing the callback @@ -84,18 +99,20 @@ namespace { Py_INCREF(pc.m_callable); m_callable = pc.m_callable; } + return *this; } ~py_callback() { Py_DECREF(m_callable); } template - intptr_t call(Args... args) + PyObjectPtr call(Args... args) { static_assert(sizeof...(Args) == N, "Argument count mismatch"); PyGILRAII gil; + // std::lock_guard lock(g_py_mutex); - PyObject* result = - PyObject_CallFunctionObjArgs((PyObject*)m_callable, lifeline_transform(args)..., nullptr); + PyObject* result = PyObject_CallFunctionObjArgs( + (PyObject*)m_callable, lifeline_transform(args.get())..., nullptr); std::string error_msg; if (!result) { @@ -103,12 +120,10 @@ namespace { error_msg = "Unknown python error"; } - decref_all(args...); - if (!error_msg.empty()) throw std::runtime_error(error_msg.c_str()); - return (intptr_t)result; + return PyObjectPtr(result, PyObjectDeleter()); } template @@ -117,9 +132,10 @@ namespace { static_assert(sizeof...(Args) == N, "Argument count mismatch"); PyGILRAII gil; + // std::lock_guard lock(g_py_mutex); PyObject* result = - PyObject_CallFunctionObjArgs((PyObject*)m_callable, (PyObject*)args..., nullptr); + PyObject_CallFunctionObjArgs((PyObject*)m_callable, (PyObject*)args.get()..., nullptr); std::string error_msg; if (!result) { @@ -128,48 +144,41 @@ namespace { } else Py_DECREF(result); - decref_all(args...); - if (!error_msg.empty()) throw std::runtime_error(error_msg.c_str()); } - - private: - template - void decref_all(Args... 
args) - { - // helper to decrement reference counts of N arguments - (Py_DECREF((PyObject*)args), ...); - } }; // use explicit instatiations to ensure that the function signature can // be derived by the graph builder struct py_callback_1 : public py_callback<1> { - intptr_t operator()(intptr_t arg0) { return call(arg0); } + PyObjectPtr operator()(PyObjectPtr arg0) { return call(arg0); } }; struct py_callback_2 : public py_callback<2> { - intptr_t operator()(intptr_t arg0, intptr_t arg1) { return call(arg0, arg1); } + PyObjectPtr operator()(PyObjectPtr arg0, PyObjectPtr arg1) { return call(arg0, arg1); } }; struct py_callback_3 : public py_callback<3> { - intptr_t operator()(intptr_t arg0, intptr_t arg1, intptr_t arg2) + PyObjectPtr operator()(PyObjectPtr arg0, PyObjectPtr arg1, PyObjectPtr arg2) { return call(arg0, arg1, arg2); } }; struct py_callback_1v : public py_callback<1> { - void operator()(intptr_t arg0) { callv(arg0); } + void operator()(PyObjectPtr arg0) { callv(arg0); } }; struct py_callback_2v : public py_callback<2> { - void operator()(intptr_t arg0, intptr_t arg1) { callv(arg0, arg1); } + void operator()(PyObjectPtr arg0, PyObjectPtr arg1) { callv(arg0, arg1); } }; struct py_callback_3v : public py_callback<3> { - void operator()(intptr_t arg0, intptr_t arg1, intptr_t arg2) { callv(arg0, arg1, arg2); } + void operator()(PyObjectPtr arg0, PyObjectPtr arg1, PyObjectPtr arg2) + { + callv(arg0, arg1, arg2); + } }; static std::vector cseq(PyObject* coll) @@ -220,7 +229,7 @@ namespace { // for numpy typing, there's no useful way of figuring out the type from the // name of the type, only from its string representation, so fall through and // let this method return str() - if (ann != "ndarray") + if (ann != "ndarray" && ann != "list") return ann; // start over for numpy type using result from str() @@ -292,17 +301,16 @@ namespace { } #define BASIC_CONVERTER(name, cpptype, topy, frompy) \ - static intptr_t name##_to_py(cpptype a) \ + static PyObjectPtr name##_to_py(cpptype a) \ { \ PyGILRAII gil; \ - return (intptr_t)topy(a); \ + return PyObjectPtr(topy(a), PyObjectDeleter()); \ } \ \ - static cpptype py_to_##name(intptr_t pyobj) \ + static cpptype py_to_##name(PyObjectPtr pyobj) \ { \ PyGILRAII gil; \ - cpptype i = (cpptype)frompy((PyObject*)pyobj); \ - Py_DECREF((PyObject*)pyobj); \ + cpptype i = (cpptype)frompy(pyobj.get()); \ return i; \ } @@ -310,12 +318,23 @@ namespace { BASIC_CONVERTER(int, int, PyLong_FromLong, PyLong_AsLong) BASIC_CONVERTER(uint, unsigned int, PyLong_FromLong, pylong_or_int_as_ulong) BASIC_CONVERTER(long, long, PyLong_FromLong, pylong_as_strictlong) - BASIC_CONVERTER(ulong, unsigned long, PyLong_FromUnsignedLong, pylong_or_int_as_ulong) + // BASIC_CONVERTER(ulong, unsigned long, PyLong_FromUnsignedLong, pylong_or_int_as_ulong) + static PyObjectPtr ulong_to_py(unsigned long a) + { + PyGILRAII gil; + return PyObjectPtr(PyLong_FromUnsignedLong(a), PyObjectDeleter()); + } + static unsigned long py_to_ulong(PyObjectPtr pyobj) + { + PyGILRAII gil; + unsigned long i = (unsigned long)pylong_or_int_as_ulong(pyobj.get()); + return i; + } BASIC_CONVERTER(float, float, PyFloat_FromDouble, PyFloat_AsDouble) BASIC_CONVERTER(double, double, PyFloat_FromDouble, PyFloat_AsDouble) #define VECTOR_CONVERTER(name, cpptype, nptype) \ - static intptr_t name##_to_py(std::shared_ptr> const& v) \ + static PyObjectPtr name##_to_py(std::shared_ptr> const& v) \ { \ PyGILRAII gil; \ \ @@ -330,7 +349,7 @@ namespace { ); \ \ if (!np_view) \ - return (intptr_t)nullptr; \ + return 
PyObjectPtr(); \ \ /* make the data read-only by not making it writable */ \ PyArray_CLEARFLAGS((PyArrayObject*)np_view, NPY_ARRAY_WRITEABLE); \ @@ -340,34 +359,121 @@ namespace { /* when passing it to the registered Python function */ \ py_lifeline_t* pyll = \ (py_lifeline_t*)PhlexLifeline_Type.tp_new(&PhlexLifeline_Type, nullptr, nullptr); \ + new (&pyll->m_source) std::shared_ptr(v); \ pyll->m_view = np_view; /* steals reference */ \ - pyll->m_source = v; \ \ - return (intptr_t)pyll; \ + return PyObjectPtr((PyObject*)pyll, PyObjectDeleter()); \ } - VECTOR_CONVERTER(vint, int, NPY_INT) - VECTOR_CONVERTER(vuint, unsigned int, NPY_UINT) - VECTOR_CONVERTER(vlong, long, NPY_LONG) - VECTOR_CONVERTER(vulong, unsigned long, NPY_ULONG) + // VECTOR_CONVERTER(vint, int, NPY_INT) + static PyObjectPtr vint_to_py(std::shared_ptr> const& v) + { + PyGILRAII gil; + // std::lock_guard lock(g_py_mutex); + if (!v) + return PyObjectPtr(); + PyObject* list = PyList_New(v->size()); + if (!list) { + PyErr_Print(); + return PyObjectPtr(); + } + for (size_t i = 0; i < v->size(); ++i) { + PyObject* item = PyLong_FromLong((*v)[i]); + if (!item) { + PyErr_Print(); + Py_DECREF(list); + return PyObjectPtr(); + } + PyList_SET_ITEM(list, i, item); + } + return PyObjectPtr(list, PyObjectDeleter()); + } + // VECTOR_CONVERTER(vuint, unsigned int, NPY_UINT) + static PyObjectPtr vuint_to_py(std::shared_ptr> const& v) + { + PyGILRAII gil; + // std::lock_guard lock(g_py_mutex); + if (!v) + return PyObjectPtr(); + PyObject* list = PyList_New(v->size()); + if (!list) { + PyErr_Print(); + return PyObjectPtr(); + } + for (size_t i = 0; i < v->size(); ++i) { + PyObject* item = PyLong_FromUnsignedLong((*v)[i]); + if (!item) { + PyErr_Print(); + Py_DECREF(list); + return PyObjectPtr(); + } + PyList_SET_ITEM(list, i, item); + } + return PyObjectPtr(list, PyObjectDeleter()); + } + // VECTOR_CONVERTER(vlong, long, NPY_LONG) + static PyObjectPtr vlong_to_py(std::shared_ptr> const& v) + { + PyGILRAII gil; + // std::lock_guard lock(g_py_mutex); + if (!v) + return PyObjectPtr(); + PyObject* list = PyList_New(v->size()); + if (!list) { + PyErr_Print(); + return PyObjectPtr(); + } + for (size_t i = 0; i < v->size(); ++i) { + PyObject* item = PyLong_FromLong((*v)[i]); + if (!item) { + PyErr_Print(); + Py_DECREF(list); + return PyObjectPtr(); + } + PyList_SET_ITEM(list, i, item); + } + return PyObjectPtr(list, PyObjectDeleter()); + } + // VECTOR_CONVERTER(vulong, unsigned long, NPY_ULONG) + static PyObjectPtr vulong_to_py(std::shared_ptr> const& v) + { + PyGILRAII gil; + // std::lock_guard lock(g_py_mutex); + if (!v) + return PyObjectPtr(); + PyObject* list = PyList_New(v->size()); + if (!list) { + PyErr_Print(); + return PyObjectPtr(); + } + for (size_t i = 0; i < v->size(); ++i) { + PyObject* item = PyLong_FromUnsignedLong((*v)[i]); + if (!item) { + PyErr_Print(); + Py_DECREF(list); + return PyObjectPtr(); + } + PyList_SET_ITEM(list, i, item); + } + return PyObjectPtr(list, PyObjectDeleter()); + } VECTOR_CONVERTER(vfloat, float, NPY_FLOAT) VECTOR_CONVERTER(vdouble, double, NPY_DOUBLE) #define NUMPY_ARRAY_CONVERTER(name, cpptype, nptype) \ - static std::shared_ptr> py_to_##name(intptr_t pyobj) \ + static std::shared_ptr> py_to_##name(PyObjectPtr pyobj) \ { \ PyGILRAII gil; \ \ auto vec = std::make_shared>(); \ \ /* TODO: because of unresolved ownership issues, copy the full array contents */ \ - if (!pyobj || !PyArray_Check((PyObject*)pyobj)) { \ + if (!pyobj || !PyArray_Check(pyobj.get())) { \ PyErr_Clear(); /* how to report an error? 
*/ \ - Py_DECREF((PyObject*)pyobj); \ return vec; \ } \ \ - PyArrayObject* arr = (PyArrayObject*)pyobj; \ + PyArrayObject* arr = (PyArrayObject*)pyobj.get(); \ \ /* TODO: flattening the array here seems to be the only workable solution */ \ npy_intp* dims = PyArray_DIMS(arr); \ @@ -381,16 +487,256 @@ namespace { vec->reserve(total); \ vec->insert(vec->end(), raw, raw + total); \ \ - Py_DECREF((PyObject*)pyobj); \ return vec; \ } - NUMPY_ARRAY_CONVERTER(vint, int, NPY_INT) - NUMPY_ARRAY_CONVERTER(vuint, unsigned int, NPY_UINT) - NUMPY_ARRAY_CONVERTER(vlong, long, NPY_LONG) - NUMPY_ARRAY_CONVERTER(vulong, unsigned long, NPY_ULONG) - NUMPY_ARRAY_CONVERTER(vfloat, float, NPY_FLOAT) - NUMPY_ARRAY_CONVERTER(vdouble, double, NPY_DOUBLE) + // NUMPY_ARRAY_CONVERTER(vint, int, NPY_INT) + // NUMPY_ARRAY_CONVERTER(vuint, unsigned int, NPY_UINT) + // NUMPY_ARRAY_CONVERTER(vlong, long, NPY_LONG) + // NUMPY_ARRAY_CONVERTER(vulong, unsigned long, NPY_ULONG) + // NUMPY_ARRAY_CONVERTER(vfloat, float, NPY_FLOAT) + // NUMPY_ARRAY_CONVERTER(vdouble, double, NPY_DOUBLE) + + // NUMPY_ARRAY_CONVERTER(vint, int, NPY_INT) + static std::shared_ptr> py_to_vint(PyObjectPtr pyobj) + { + PyGILRAII gil; + // std::lock_guard lock(g_py_mutex); + auto vec = std::make_shared>(); + PyObject* obj = pyobj.get(); + + if (obj) { + if (PyList_Check(obj)) { + size_t size = PyList_Size(obj); + vec->reserve(size); + for (size_t i = 0; i < size; ++i) { + PyObject* item = PyList_GetItem(obj, i); + if (!item) { + PyErr_Print(); + break; + } + long val = PyLong_AsLong(item); + if (PyErr_Occurred()) { + PyErr_Print(); + break; + } + vec->push_back((int)val); + } + } else if (PyArray_Check(obj)) { + PyArrayObject* arr = (PyArrayObject*)obj; + npy_intp* dims = PyArray_DIMS(arr); + int nd = PyArray_NDIM(arr); + size_t total = 1; + for (int i = 0; i < nd; ++i) + total *= static_cast(dims[i]); + + int* raw = static_cast(PyArray_DATA(arr)); + vec->reserve(total); + vec->insert(vec->end(), raw, raw + total); + } + } + return vec; + } + // NUMPY_ARRAY_CONVERTER(vuint, unsigned int, NPY_UINT) + static std::shared_ptr> py_to_vuint(PyObjectPtr pyobj) + { + PyGILRAII gil; + // std::lock_guard lock(g_py_mutex); + auto vec = std::make_shared>(); + PyObject* obj = pyobj.get(); + + if (obj) { + if (PyList_Check(obj)) { + size_t size = PyList_Size(obj); + vec->reserve(size); + for (size_t i = 0; i < size; ++i) { + PyObject* item = PyList_GetItem(obj, i); + if (!item) { + PyErr_Print(); + break; + } + unsigned long val = PyLong_AsUnsignedLong(item); + if (PyErr_Occurred()) { + PyErr_Print(); + break; + } + vec->push_back((unsigned int)val); + } + } else if (PyArray_Check(obj)) { + PyArrayObject* arr = (PyArrayObject*)obj; + npy_intp* dims = PyArray_DIMS(arr); + int nd = PyArray_NDIM(arr); + size_t total = 1; + for (int i = 0; i < nd; ++i) + total *= static_cast(dims[i]); + + unsigned int* raw = static_cast(PyArray_DATA(arr)); + vec->reserve(total); + vec->insert(vec->end(), raw, raw + total); + } + } + return vec; + } + // NUMPY_ARRAY_CONVERTER(vlong, long, NPY_LONG) + static std::shared_ptr> py_to_vlong(PyObjectPtr pyobj) + { + PyGILRAII gil; + // std::lock_guard lock(g_py_mutex); + auto vec = std::make_shared>(); + PyObject* obj = pyobj.get(); + + if (obj) { + if (PyList_Check(obj)) { + size_t size = PyList_Size(obj); + vec->reserve(size); + for (size_t i = 0; i < size; ++i) { + PyObject* item = PyList_GetItem(obj, i); + if (!item) { + PyErr_Print(); + break; + } + long val = PyLong_AsLong(item); + if (PyErr_Occurred()) { + PyErr_Print(); + break; + } + 
vec->push_back(val); + } + } else if (PyArray_Check(obj)) { + PyArrayObject* arr = (PyArrayObject*)obj; + npy_intp* dims = PyArray_DIMS(arr); + int nd = PyArray_NDIM(arr); + size_t total = 1; + for (int i = 0; i < nd; ++i) + total *= static_cast(dims[i]); + + long* raw = static_cast(PyArray_DATA(arr)); + vec->reserve(total); + vec->insert(vec->end(), raw, raw + total); + } + } + return vec; + } + // NUMPY_ARRAY_CONVERTER(vulong, unsigned long, NPY_ULONG) + static std::shared_ptr> py_to_vulong(PyObjectPtr pyobj) + { + PyGILRAII gil; + // std::lock_guard lock(g_py_mutex); + auto vec = std::make_shared>(); + PyObject* obj = pyobj.get(); + + if (obj) { + if (PyList_Check(obj)) { + size_t size = PyList_Size(obj); + vec->reserve(size); + for (size_t i = 0; i < size; ++i) { + PyObject* item = PyList_GetItem(obj, i); + if (!item) { + PyErr_Print(); + break; + } + unsigned long val = PyLong_AsUnsignedLong(item); + if (PyErr_Occurred()) { + PyErr_Print(); + break; + } + vec->push_back(val); + } + } else if (PyArray_Check(obj)) { + PyArrayObject* arr = (PyArrayObject*)obj; + npy_intp* dims = PyArray_DIMS(arr); + int nd = PyArray_NDIM(arr); + size_t total = 1; + for (int i = 0; i < nd; ++i) + total *= static_cast(dims[i]); + + unsigned long* raw = static_cast(PyArray_DATA(arr)); + vec->reserve(total); + vec->insert(vec->end(), raw, raw + total); + } + } + return vec; + } + // NUMPY_ARRAY_CONVERTER(vfloat, float, NPY_FLOAT) + static std::shared_ptr> py_to_vfloat(PyObjectPtr pyobj) + { + PyGILRAII gil; + // std::lock_guard lock(g_py_mutex); + auto vec = std::make_shared>(); + PyObject* obj = pyobj.get(); + + if (obj) { + if (PyList_Check(obj)) { + size_t size = PyList_Size(obj); + vec->reserve(size); + for (size_t i = 0; i < size; ++i) { + PyObject* item = PyList_GetItem(obj, i); + if (!item) { + PyErr_Print(); + break; + } + double val = PyFloat_AsDouble(item); + if (PyErr_Occurred()) { + PyErr_Print(); + break; + } + vec->push_back((float)val); + } + } else if (PyArray_Check(obj)) { + PyArrayObject* arr = (PyArrayObject*)obj; + npy_intp* dims = PyArray_DIMS(arr); + int nd = PyArray_NDIM(arr); + size_t total = 1; + for (int i = 0; i < nd; ++i) + total *= static_cast(dims[i]); + + float* raw = static_cast(PyArray_DATA(arr)); + vec->reserve(total); + vec->insert(vec->end(), raw, raw + total); + } + } + return vec; + } + // NUMPY_ARRAY_CONVERTER(vdouble, double, NPY_DOUBLE) + static std::shared_ptr> py_to_vdouble(PyObjectPtr pyobj) + { + PyGILRAII gil; + // std::lock_guard lock(g_py_mutex); + auto vec = std::make_shared>(); + PyObject* obj = pyobj.get(); + + if (obj) { + if (PyList_Check(obj)) { + size_t size = PyList_Size(obj); + vec->reserve(size); + for (size_t i = 0; i < size; ++i) { + PyObject* item = PyList_GetItem(obj, i); + if (!item) { + PyErr_Print(); + break; + } + double val = PyFloat_AsDouble(item); + if (PyErr_Occurred()) { + PyErr_Print(); + break; + } + vec->push_back(val); + } + } else if (PyArray_Check(obj)) { + PyArrayObject* arr = (PyArrayObject*)obj; + npy_intp* dims = PyArray_DIMS(arr); + int nd = PyArray_NDIM(arr); + size_t total = 1; + for (int i = 0; i < nd; ++i) + total *= static_cast(dims[i]); + + double* raw = static_cast(PyArray_DATA(arr)); + vec->reserve(total); + vec->insert(vec->end(), raw, raw + total); + } + } + return vec; + } } // unnamed namespace @@ -489,12 +835,18 @@ static PyObject* parse_args(PyObject* args, // dictionary is ordered with return last if provide (note: the keys here // could be used as input labels, instead of the ones from the configuration, // but 
that is probably not practical in actual use, so they are ignored) - PyObject* values = PyDict_Values(annot); - for (Py_ssize_t i = 0; i < (PyList_GET_SIZE(values) - (ret ? 1 : 0)); ++i) { - PyObject* item = PyList_GET_ITEM(values, i); - input_types.push_back(annotation_as_text(item)); + + // Re-implementing robust annotation extraction + PyObject *key, *val; + Py_ssize_t pos = 0; + + while (PyDict_Next(annot, &pos, &key, &val)) { + // Skip 'return' annotation as it is handled separately + if (PyUnicode_Check(key) && PyUnicode_CompareWithASCIIString(key, "return") == 0) { + continue; + } + input_types.push_back(annotation_as_text(val)); } - Py_DECREF(values); } Py_XDECREF(annot); @@ -556,32 +908,32 @@ static bool insert_input_converters(py_phlex_module* mod, return false; } - pos += 18; - + std::string suffix = inp_type.substr(pos); std::string py_out = cname + "_" + inp + "py"; - if (inp_type.compare(pos, std::string::npos, "int32]]") == 0) { - mod->ph_module->transform("pyvint_" + inp + "_" + cname, vint_to_py, concurrency::serial) - .input_family(product_query{product_specification::create(inp), LAYER}) - .output_products(py_out); - } else if (inp_type.compare(pos, std::string::npos, "uint32]]") == 0) { + + if (suffix.find("uint32]]") != std::string::npos) { mod->ph_module->transform("pyvuint_" + inp + "_" + cname, vuint_to_py, concurrency::serial) .input_family(product_query{product_specification::create(inp), LAYER}) .output_products(py_out); - } else if (inp_type.compare(pos, std::string::npos, "int64]]") == 0) { // need not be true - mod->ph_module->transform("pyvlong_" + inp + "_" + cname, vlong_to_py, concurrency::serial) + } else if (suffix.find("int32]]") != std::string::npos) { + mod->ph_module->transform("pyvint_" + inp + "_" + cname, vint_to_py, concurrency::serial) .input_family(product_query{product_specification::create(inp), LAYER}) .output_products(py_out); - } else if (inp_type.compare(pos, std::string::npos, "uint64]]") == 0) { // id. + } else if (suffix.find("uint64]]") != std::string::npos) { // id. 
mod->ph_module ->transform("pyvulong_" + inp + "_" + cname, vulong_to_py, concurrency::serial) .input_family(product_query{product_specification::create(inp), LAYER}) .output_products(py_out); - } else if (inp_type.compare(pos, std::string::npos, "float32]]") == 0) { + } else if (suffix.find("int64]]") != std::string::npos) { // need not be true + mod->ph_module->transform("pyvlong_" + inp + "_" + cname, vlong_to_py, concurrency::serial) + .input_family(product_query{product_specification::create(inp), LAYER}) + .output_products(py_out); + } else if (suffix.find("float32]]") != std::string::npos) { mod->ph_module ->transform("pyvfloat_" + inp + "_" + cname, vfloat_to_py, concurrency::serial) .input_family(product_query{product_specification::create(inp), LAYER}) .output_products(py_out); - } else if (inp_type.compare(pos, std::string::npos, "double64]]") == 0) { + } else if (suffix.find("float64]]") != std::string::npos) { mod->ph_module ->transform("pyvdouble_" + inp + "_" + cname, vdouble_to_py, concurrency::serial) .input_family(product_query{product_specification::create(inp), LAYER}) @@ -590,6 +942,23 @@ static bool insert_input_converters(py_phlex_module* mod, PyErr_Format(PyExc_TypeError, "unsupported array input type \"%s\"", inp_type.c_str()); return false; } + } + else if (inp_type == "list[int]") { + std::string py_out = cname + "_" + inp + "py"; + mod->ph_module->transform("pyvint_" + inp + "_" + cname, vint_to_py, concurrency::serial) + .input_family(product_query{product_specification::create(inp), LAYER}) + .output_products(py_out); + } else if (inp_type == "list[float]") { + std::string py_out = cname + "_" + inp + "py"; + mod->ph_module->transform("pyvfloat_" + inp + "_" + cname, vfloat_to_py, concurrency::serial) + .input_family(product_query{product_specification::create(inp), LAYER}) + .output_products(py_out); + } else if (inp_type == "list[double]" || inp_type == "list['double']") { + std::string py_out = cname + "_" + inp + "py"; + mod->ph_module + ->transform("pyvdouble_" + inp + "_" + cname, vdouble_to_py, concurrency::serial) + .input_family(product_query{product_specification::create(inp), LAYER}) + .output_products(py_out); } else { PyErr_Format(PyExc_TypeError, "unsupported input type \"%s\"", inp_type.c_str()); return false; @@ -704,7 +1073,7 @@ static PyObject* md_transform(py_phlex_module* mod, PyObject* args, PyObject* kw ->transform("pyvfloat_" + output + "_" + cname, py_to_vfloat, concurrency::serial) .input_family(product_query{product_specification::create(py_in), LAYER}) .output_products(output); - } else if (output_type.compare(pos, std::string::npos, "double64]]") == 0) { + } else if (output_type.compare(pos, std::string::npos, "float64]]") == 0) { mod->ph_module ->transform("pyvdouble_" + output + "_" + cname, py_to_vdouble, concurrency::serial) .input_family(product_query{product_specification::create(py_in), LAYER}) @@ -713,6 +1082,23 @@ static PyObject* md_transform(py_phlex_module* mod, PyObject* args, PyObject* kw PyErr_Format(PyExc_TypeError, "unsupported array output type \"%s\"", output_type.c_str()); return nullptr; } + } + else if (output_type == "list[int]") { + auto py_in = "py" + output + "_" + cname; + mod->ph_module->transform("pyvint_" + output + "_" + cname, py_to_vint, concurrency::serial) + .input_family(product_query{product_specification::create(py_in), LAYER}) + .output_products(output); + } else if (output_type == "list[float]") { + auto py_in = "py" + output + "_" + cname; + mod->ph_module->transform("pyvfloat_" + output + 
"_" + cname, py_to_vfloat, concurrency::serial) + .input_family(product_query{product_specification::create(py_in), LAYER}) + .output_products(output); + } else if (output_type == "list[double]" || output_type == "list['double']") { + auto py_in = "py" + output + "_" + cname; + mod->ph_module + ->transform("pyvdouble_" + output + "_" + cname, py_to_vdouble, concurrency::serial) + .input_family(product_query{product_specification::create(py_in), LAYER}) + .output_products(output); } else { PyErr_Format(PyExc_TypeError, "unsupported output type \"%s\"", output_type.c_str()); return nullptr; diff --git a/scripts/README.md b/scripts/README.md index 6a32f118..5d6cfdc1 100644 --- a/scripts/README.md +++ b/scripts/README.md @@ -156,12 +156,27 @@ Provides convenient commands for managing code coverage analysis. ```bash # From repository root -./scripts/coverage.sh [COMMAND] [COMMAND...] +./scripts/coverage.sh [--preset ] [COMMAND] [COMMAND...] # Multiple commands in sequence ./scripts/coverage.sh setup test xml html ``` +#### Presets + +The `--preset` flag controls the toolchain and instrumentation method: + +- **`coverage-clang`** (Default): + - Uses LLVM source-based coverage. + - Best for local development (fast, accurate). + - Generates high-fidelity HTML reports. + - Key commands: `setup`, `test`, `html`, `view`, `summary`. + +- **`coverage-gcc`**: + - Uses `gcov` instrumentation. + - Best for CI pipelines requiring XML output (e.g., Codecov). + - Key commands: `setup`, `test`, `xml`, `upload`. + #### Commands | Command | Description | diff --git a/test/demo-giantdata/unfold_transform_fold.cpp b/test/demo-giantdata/unfold_transform_fold.cpp index 77cb2eab..49581d27 100644 --- a/test/demo-giantdata/unfold_transform_fold.cpp +++ b/test/demo-giantdata/unfold_transform_fold.cpp @@ -36,7 +36,7 @@ TEST_CASE("Unfold-transform-fold pipeline", "[concurrency][unfold][fold]") // Test parameters - moderate scale to ensure sustained concurrent execution constexpr std::size_t n_runs = 1; constexpr std::size_t n_subruns = 1; - constexpr std::size_t n_spills = 20; + constexpr std::size_t n_spills = 100; constexpr int apas_per_spill = 20; constexpr std::size_t wires_per_spill = apas_per_spill * 256ull; constexpr std::size_t chunksize = 256; diff --git a/test/python/CMakeLists.txt b/test/python/CMakeLists.txt index 846848ab..a615a784 100644 --- a/test/python/CMakeLists.txt +++ b/test/python/CMakeLists.txt @@ -9,13 +9,20 @@ function(check_python_module_version MODULE_NAME MIN_VERSION OUT_VAR) "import sys try: import ${MODULE_NAME} - from packaging.version import parse as parse_version installed_version = getattr(${MODULE_NAME}, '__version__', None) - if parse_version(installed_version) >= parse_version('${MIN_VERSION}'): + if not installed_version: + sys.exit(2) + + def parse(v): + return tuple(map(int, v.split('.')[:3])) + + if parse(installed_version) >= parse('${MIN_VERSION}'): sys.exit(0) else: sys.exit(2) # Version too low except ImportError: + sys.exit(1) +except Exception: sys.exit(1)" RESULT_VARIABLE _module_check_result ) @@ -82,14 +89,32 @@ endif() set(ACTIVE_PY_CPHLEX_TESTS "") + # numpy support if installed + if(HAS_NUMPY) + # phlex-based tests that require numpy support + add_test(NAME py:vec COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyvec.jsonnet) + list(APPEND ACTIVE_PY_CPHLEX_TESTS py:vec) + + add_test(NAME py:vectypes + COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyvectypes.jsonnet + ) + list(APPEND ACTIVE_PY_CPHLEX_TESTS py:vectypes) + + add_test(NAME py:veclists + COMMAND phlex -c 
${CMAKE_CURRENT_SOURCE_DIR}/pyveclists.jsonnet + ) + list(APPEND ACTIVE_PY_CPHLEX_TESTS py:veclists) + + add_test(NAME py:types COMMAND phlex -c + ${CMAKE_CURRENT_SOURCE_DIR}/pytypes.jsonnet + ) + list(APPEND ACTIVE_PY_CPHLEX_TESTS py:types) + endif() + # C++ helper to provide a driver add_library(cppsource4py MODULE source.cpp) target_link_libraries(cppsource4py PRIVATE phlex::module) -# phlex-based tests that require numpy support -add_test(NAME py:vec COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyvec.jsonnet) -list(APPEND ACTIVE_PY_CPHLEX_TESTS py:vec) - # phlex-based tests (no cppyy dependency) add_test(NAME py:add COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyadd.jsonnet) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:add) @@ -100,6 +125,54 @@ list(APPEND ACTIVE_PY_CPHLEX_TESTS py:config) add_test(NAME py:reduce COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyreduce.jsonnet) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:reduce) + add_test(NAME py:coverage + COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pycoverage.jsonnet + ) + list(APPEND ACTIVE_PY_CPHLEX_TESTS py:coverage) + + add_test(NAME py:mismatch + COMMAND ${PROJECT_BINARY_DIR}/bin/phlex -c + ${CMAKE_CURRENT_SOURCE_DIR}/pymismatch.jsonnet + ) + set_tests_properties( + py:mismatch + PROPERTIES PASS_REGULAR_EXPRESSION + "number of inputs .* does not match number of annotation types" + ) + list(APPEND ACTIVE_PY_CPHLEX_TESTS py:mismatch) + + # "failing" tests for checking error paths + add_test( + NAME py:failure + COMMAND + ${CMAKE_CURRENT_SOURCE_DIR}/failing_test_wrap.sh ${PROJECT_BINARY_DIR}/bin/phlex -c + ${CMAKE_CURRENT_SOURCE_DIR}/pyfailure.jsonnet + ) + set_tests_properties( + py:failure + PROPERTIES PASS_REGULAR_EXPRESSION "property \"input\" does not exist" + ) + list(APPEND ACTIVE_PY_CPHLEX_TESTS py:failure) + + message(STATUS "Python_SITELIB: ${Python_SITELIB}") + message(STATUS "Python_SITEARCH: ${Python_SITEARCH}") + set(TEST_PYTHONPATH ${CMAKE_CURRENT_SOURCE_DIR}) + # Always add site-packages to PYTHONPATH for tests, as embedded python might + # not find them especially in spack environments where they are in + # non-standard locations + if(Python_SITELIB) + set(TEST_PYTHONPATH ${TEST_PYTHONPATH}:${Python_SITELIB}) + endif() + if(Python_SITEARCH AND NOT "${Python_SITEARCH}" STREQUAL "${Python_SITELIB}") + set(TEST_PYTHONPATH ${TEST_PYTHONPATH}:${Python_SITEARCH}) + endif() + + if(DEFINED ENV{VIRTUAL_ENV}) + # Keep this for backward compatibility or if it adds something else + endif() + set(TEST_PYTHONPATH ${TEST_PYTHONPATH}:$ENV{PYTHONPATH}) + message(STATUS "TEST_PYTHONPATH: ${TEST_PYTHONPATH}") + # "failing" tests for checking error paths add_test( NAME py:failure diff --git a/test/python/pycoverage.jsonnet b/test/python/pycoverage.jsonnet new file mode 100644 index 00000000..bd67b970 --- /dev/null +++ b/test/python/pycoverage.jsonnet @@ -0,0 +1,18 @@ +{ + driver: { + cpp: 'generate_layers', + layers: { + event: { parent: 'job', total: 1, starting_number: 1 } + } + }, + sources: { + cppdriver: { + cpp: 'cppsource4py', + }, + }, + modules: { + coverage: { + py: 'test_coverage', + } + } +} diff --git a/test/python/pymismatch.jsonnet b/test/python/pymismatch.jsonnet new file mode 100644 index 00000000..a3b1abbf --- /dev/null +++ b/test/python/pymismatch.jsonnet @@ -0,0 +1,13 @@ +{ + driver: { + cpp: 'generate_layers', + layers: { + event: { total: 1 } + } + }, + modules: { + mismatch: { + py: 'test_mismatch', + } + } +} diff --git a/test/python/pytypes.jsonnet b/test/python/pytypes.jsonnet new file mode 100644 index 00000000..27fd6a0e 
--- /dev/null +++ b/test/python/pytypes.jsonnet @@ -0,0 +1,33 @@ +{ + driver: { + cpp: 'generate_layers', + layers: { + event: { parent: 'job', total: 10, starting_number: 1 } + } + }, + sources: { + cppdriver: { + cpp: 'cppsource4py', + }, + }, + modules: { + pytypes: { + py: 'test_types', + input_float: ['f1', 'f2'], + output_float: ['sum_f'], + input_double: ['d1', 'd2'], + output_double: ['sum_d'], + input_uint: ['u1', 'u2'], + output_uint: ['sum_u'], + input_bool: ['b1', 'b2'], + output_bool: ['and_b'], + output_vfloat: ['vec_f'], + output_vdouble: ['vec_d'], + }, + verify_bool: { + py: 'verify_extended', + input_bool: ['and_b'], + expected_bool: false, + }, + }, +} diff --git a/test/python/pyveclists.jsonnet b/test/python/pyveclists.jsonnet new file mode 100644 index 00000000..dfcbc3ff --- /dev/null +++ b/test/python/pyveclists.jsonnet @@ -0,0 +1,61 @@ +{ + driver: { + cpp: 'generate_layers', + layers: { + event: { parent: "job", total: 10, starting_number: 1 } + } + }, + sources: { + cppdriver: { + cpp: 'cppsource4py', + }, + }, + modules: { + vectypes: { + py: 'vectypes', + use_lists: true, + input_int32: ['i', 'j'], + output_int32: ['sum_int32'], + input_uint32: ['u1', 'u2'], + output_uint32: ['sum_uint32'], + input_int64: ['l1', 'l2'], + output_int64: ['sum_int64'], + input_uint64: ['ul1', 'ul2'], + output_uint64: ['sum_uint64'], + input_float32: ['f1', 'f2'], + output_float32: ['sum_float32'], + input_float64: ['d1', 'd2'], + output_float64: ['sum_float64'], + }, + verify_int32: { + py: 'verify_extended', + input_int: ['sum_int32'], + sum_total: 1, + }, + verify_uint32: { + py: 'verify_extended', + input_uint: ['sum_uint32'], + sum_total: 1, + }, + verify_int64: { + py: 'verify_extended', + input_long: ['sum_int64'], + sum_total: 1, + }, + verify_uint64: { + py: 'verify_extended', + input_ulong: ['sum_uint64'], + sum_total: 100, + }, + verify_float32: { + py: 'verify_extended', + input_float: ['sum_float32'], + sum_total: 1.0, + }, + verify_double: { + py: 'verify_extended', + input_double: ['sum_float64'], + sum_total: 1.0, + }, + }, +} diff --git a/test/python/pyvectypes.jsonnet b/test/python/pyvectypes.jsonnet new file mode 100644 index 00000000..a655687b --- /dev/null +++ b/test/python/pyvectypes.jsonnet @@ -0,0 +1,60 @@ +{ + driver: { + cpp: 'generate_layers', + layers: { + event: { parent: "job", total: 10, starting_number: 1 } + } + }, + sources: { + cppdriver: { + cpp: 'cppsource4py', + }, + }, + modules: { + vectypes: { + py: 'vectypes', + input_int32: ['i', 'j'], + output_int32: ['sum_int32'], + input_uint32: ['u1', 'u2'], + output_uint32: ['sum_uint32'], + input_int64: ['l1', 'l2'], + output_int64: ['sum_int64'], + input_uint64: ['ul1', 'ul2'], + output_uint64: ['sum_uint64'], + input_float32: ['f1', 'f2'], + output_float32: ['sum_float32'], + input_float64: ['d1', 'd2'], + output_float64: ['sum_float64'], + }, + verify_int32: { + py: 'verify_extended', + input_int: ['sum_int32'], + sum_total: 1, + }, + verify_uint32: { + py: 'verify_extended', + input_uint: ['sum_uint32'], + sum_total: 1, + }, + verify_int64: { + py: 'verify_extended', + input_long: ['sum_int64'], + sum_total: 1, + }, + verify_uint64: { + py: 'verify_extended', + input_ulong: ['sum_uint64'], + sum_total: 100, + }, + verify_float32: { + py: 'verify_extended', + input_float: ['sum_float32'], + sum_total: 1.0, + }, + verify_double: { + py: 'verify_extended', + input_double: ['sum_float64'], + sum_total: 1.0, + }, + }, +} diff --git a/test/python/source.cpp b/test/python/source.cpp index 
2a6aac8f..f6979556 100644 --- a/test/python/source.cpp +++ b/test/python/source.cpp @@ -1,12 +1,61 @@ #include "phlex/source.hpp" #include "phlex/model/data_cell_index.hpp" +#include using namespace phlex; PHLEX_REGISTER_PROVIDERS(s) { - s.provide("provide_i", [](data_cell_index const& id) -> int { return id.number(); }) + s.provide("provide_i", [](data_cell_index const& id) -> int { return id.number() % 2; }) .output_product("i"_in("job")); - s.provide("provide_j", [](data_cell_index const& id) -> int { return -id.number() + 1; }) + s.provide("provide_j", + [](data_cell_index const& id) -> int { return 1 - (int)(id.number() % 2); }) .output_product("j"_in("job")); + + s.provide("provide_f1", + [](data_cell_index const& id) -> float { return (float)((id.number() % 100) / 100.0); }) + .output_product("f1"_in("job")); + s.provide( + "provide_f2", + [](data_cell_index const& id) -> float { return 1.0f - (float)((id.number() % 100) / 100.0); }) + .output_product("f2"_in("job")); + + s.provide( + "provide_d1", + [](data_cell_index const& id) -> double { return (double)((id.number() % 100) / 100.0); }) + .output_product("d1"_in("job")); + s.provide("provide_d2", + [](data_cell_index const& id) -> double { + return 1.0 - (double)((id.number() % 100) / 100.0); + }) + .output_product("d2"_in("job")); + + s.provide( + "provide_u1", + [](data_cell_index const& id) -> unsigned int { return (unsigned int)(id.number() % 2); }) + .output_product("u1"_in("job")); + s.provide( + "provide_u2", + [](data_cell_index const& id) -> unsigned int { return 1 - (unsigned int)(id.number() % 2); }) + .output_product("u2"_in("job")); + + s.provide("provide_l1", + [](data_cell_index const& id) -> int64_t { return (int64_t)(id.number() % 2); }) + .output_product("l1"_in("job")); + s.provide("provide_l2", + [](data_cell_index const& id) -> int64_t { return 1 - (int64_t)(id.number() % 2); }) + .output_product("l2"_in("job")); + + s.provide("provide_ul1", + [](data_cell_index const& id) -> uint64_t { return (uint64_t)(id.number() % 101); }) + .output_product("ul1"_in("job")); + s.provide( + "provide_ul2", + [](data_cell_index const& id) -> uint64_t { return 100 - (uint64_t)(id.number() % 101); }) + .output_product("ul2"_in("job")); + + s.provide("provide_b1", [](data_cell_index const& id) -> bool { return (id.number() % 2) == 0; }) + .output_product("b1"_in("job")); + s.provide("provide_b2", [](data_cell_index const& id) -> bool { return (id.number() % 2) != 0; }) + .output_product("b2"_in("job")); } diff --git a/test/python/test_coverage.py b/test/python/test_coverage.py new file mode 100644 index 00000000..df30b607 --- /dev/null +++ b/test/python/test_coverage.py @@ -0,0 +1,43 @@ +"""Test coverage for list input converters.""" + +class double(float): # noqa: N801 + """Dummy class for C++ double type.""" + + pass + +def list_int_func(lst: list[int]) -> int: + """Sum a list of integers.""" + return sum(lst) + +def list_float_func(lst: list[float]) -> float: + """Sum a list of floats.""" + return sum(lst) + +# For double, I'll use string annotation to be safe and match C++ check +def list_double_func(lst: "list[double]") -> float: # type: ignore + """Sum a list of doubles.""" + return sum(lst) + +def collect_int(i: int) -> list[int]: + """Collect an integer into a list.""" + return [i] + +def collect_float(f: float) -> list[float]: + """Collect a float into a list.""" + return [f] + +def collect_double(d: "double") -> "list[double]": # type: ignore + """Collect a double into a list.""" + return [d] + +def 
PHLEX_EXPERIMENTAL_REGISTER_ALGORITHMS(m, config): + """Register algorithms.""" + # We need to transform scalar inputs to lists first + # i, f1, d1 come from cppsource4py + m.transform(collect_int, input_family=["i"], output_products=["l_int"]) + m.transform(collect_float, input_family=["f1"], output_products=["l_float"]) + m.transform(collect_double, input_family=["d1"], output_products=["l_double"]) + + m.transform(list_int_func, input_family=["l_int"], output_products=["sum_int"]) + m.transform(list_float_func, input_family=["l_float"], output_products=["sum_float"]) + m.transform(list_double_func, input_family=["l_double"], output_products=["sum_double"]) diff --git a/test/python/test_mismatch.py b/test/python/test_mismatch.py new file mode 100644 index 00000000..ab7313d9 --- /dev/null +++ b/test/python/test_mismatch.py @@ -0,0 +1,12 @@ + +"""Test mismatch between input labels and types.""" + +def mismatch_func(a: int, b: int): + """Add two integers.""" + return a + b + +def PHLEX_EXPERIMENTAL_REGISTER_ALGORITHMS(m, config): + """Register algorithms.""" + # input_family has 1 element, but function takes 2 arguments + # This should trigger the error in modulewrap.cpp + m.transform(mismatch_func, input_family=["a"], output_products=["sum"]) diff --git a/test/python/test_types.py b/test/python/test_types.py new file mode 100644 index 00000000..3c060b92 --- /dev/null +++ b/test/python/test_types.py @@ -0,0 +1,129 @@ +"""Algorithms exercising various C++ types. + +This test code implements algorithms that use types other than the standard +int/string to ensure that the Python bindings correctly handle them. +""" + +import numpy as np +import numpy.typing as npt + + +class double(float): # noqa: N801 + """Dummy class for C++ double type.""" + + pass + + +def add_float(i: float, j: float) -> float: + """Add two floats. + + Args: + i (float): First input. + j (float): Second input. + + Returns: + float: Sum of the two inputs. + """ + return i + j + + +def add_double(i: double, j: double) -> double: + """Add two doubles. + + Args: + i (float): First input. + j (float): Second input. + + Returns: + float: Sum of the two inputs. + """ + return double(i + j) + + +def add_unsigned(i: "unsigned int", j: "unsigned int") -> "unsigned int": # type: ignore # noqa: F722 + """Add two unsigned integers. + + Args: + i (int): First input. + j (int): Second input. + + Returns: + int: Sum of the two inputs. + """ + return i + j + + +def collect_float(i: float, j: float) -> npt.NDArray[np.float32]: + """Combine floats into a numpy array. + + Args: + i (float): First input. + j (float): Second input. + + Returns: + ndarray: Array of floats. + """ + return np.array([i, j], dtype=np.float32) + + +def collect_double(i: double, j: double) -> npt.NDArray[np.float64]: + """Combine doubles into a numpy array. + + Args: + i (float): First input. + j (float): Second input. + + Returns: + ndarray: Array of doubles. + """ + return np.array([i, j], dtype=np.float64) + + +def and_bool(i: bool, j: bool) -> bool: + """And two booleans. + + Args: + i (bool): First input. + j (bool): Second input. + + Returns: + bool: Logical AND of the two inputs. + """ + return i and j + + +def PHLEX_EXPERIMENTAL_REGISTER_ALGORITHMS(m, config): + """Register algorithms. + + Args: + m (internal): Phlex registrar representation. + config (internal): Phlex configuration representation. 
+ + Returns: + None + """ + m.transform( + add_float, input_family=config["input_float"], output_products=config["output_float"] + ) + + m.transform( + add_double, input_family=config["input_double"], output_products=config["output_double"] + ) + + m.transform( + add_unsigned, input_family=config["input_uint"], output_products=config["output_uint"] + ) + + m.transform( + and_bool, input_family=config["input_bool"], output_products=config["output_bool"] + ) + + m.transform( + collect_float, input_family=config["input_float"], output_products=config["output_vfloat"] + ) + + m.transform( + collect_double, + input_family=config["input_double"], + output_products=config["output_vdouble"], + ) diff --git a/test/python/vectypes.py b/test/python/vectypes.py new file mode 100644 index 00000000..4d613f74 --- /dev/null +++ b/test/python/vectypes.py @@ -0,0 +1,228 @@ +"""Algorithms exercising various numpy array types. + +This test code implements algorithms that use numpy arrays of different types +to ensure that the Python bindings correctly handle them. +""" + +import numpy as np +import numpy.typing as npt + + +def collectify_int32(i: int, j: int) -> npt.NDArray[np.int32]: + """Create an int32 array from two integers.""" + return np.array([i, j], dtype=np.int32) + + +def sum_array_int32(coll: npt.NDArray[np.int32]) -> int: + """Sum an int32 array.""" + if isinstance(coll, list): + coll = np.array(coll, dtype=np.int32) + return int(sum(int(x) for x in coll)) + + +def collectify_uint32( + i: "unsigned int", j: "unsigned int" # type: ignore # noqa: F722 +) -> npt.NDArray[np.uint32]: + """Create a uint32 array from two integers.""" + return np.array([i, j], dtype=np.uint32) + + +def sum_array_uint32(coll: npt.NDArray[np.uint32]) -> "unsigned int": # type: ignore # noqa: F722 + """Sum a uint32 array.""" + if isinstance(coll, list): + coll = np.array(coll, dtype=np.uint32) + return int(sum(int(x) for x in coll)) + + +def collectify_int64(i: "long", j: "long") -> npt.NDArray[np.int64]: # type: ignore # noqa: F821 + """Create an int64 array from two integers.""" + return np.array([i, j], dtype=np.int64) + + +def sum_array_int64(coll: npt.NDArray[np.int64]) -> "long": # type: ignore # noqa: F821 + """Sum an int64 array.""" + if isinstance(coll, list): + coll = np.array(coll, dtype=np.int64) + return int(sum(int(x) for x in coll)) + + +def collectify_uint64( + i: "unsigned long", j: "unsigned long" # type: ignore # noqa: F722 +) -> npt.NDArray[np.uint64]: + """Create a uint64 array from two integers.""" + return np.array([i, j], dtype=np.uint64) + + +def sum_array_uint64(coll: npt.NDArray[np.uint64]) -> "unsigned long": # type: ignore # noqa: F722 + """Sum a uint64 array.""" + if isinstance(coll, list): + coll = np.array(coll, dtype=np.uint64) + return int(sum(int(x) for x in coll)) + + +def collectify_float32(i: "float", j: "float") -> npt.NDArray[np.float32]: + """Create a float32 array from two floats.""" + return np.array([i, j], dtype=np.float32) + + +def sum_array_float32(coll: npt.NDArray[np.float32]) -> "float": + """Sum a float32 array.""" + return float(sum(coll)) + + +def collectify_float64(i: "double", j: "double") -> npt.NDArray[np.float64]: # type: ignore # noqa: F821 + """Create a float64 array from two floats.""" + return np.array([i, j], dtype=np.float64) + + +def collectify_float32_list(i: "float", j: "float") -> list[float]: + """Create a float32 list from two floats.""" + return [i, j] + + +def collectify_float64_list(i: "double", j: "double") -> list["double"]: # type: ignore # noqa: 
F821 + """Create a float64 list from two floats.""" + return [i, j] + + +def sum_array_float64(coll: npt.NDArray[np.float64]) -> "double": # type: ignore # noqa: F821 + """Sum a float64 array.""" + return float(sum(coll)) + + +def collectify_int32_list(i: int, j: int) -> list[int]: + """Create an int32 list from two integers.""" + return [i, j] + + +def collectify_uint32_list( + i: "unsigned int", j: "unsigned int" # type: ignore # noqa: F722 +) -> list[int]: + """Create a uint32 list from two integers.""" + return [int(i), int(j)] + + +def collectify_int64_list(i: "long", j: "long") -> list[int]: # type: ignore # noqa: F821 + """Create an int64 list from two integers.""" + return [int(i), int(j)] + + +def collectify_uint64_list( + i: "unsigned long", j: "unsigned long" # type: ignore # noqa: F722 +) -> list[int]: + """Create a uint64 list from two integers.""" + return [int(i), int(j)] + + +def sum_list_int32(coll: list[int]) -> int: + """Sum a list of ints.""" + return sum(coll) + + +def sum_list_uint32(coll: list[int]) -> "unsigned int": # type: ignore # noqa: F722 + """Sum a list of uints.""" + return sum(coll) + + +def sum_list_int64(coll: list[int]) -> "long": # type: ignore # noqa: F821 + """Sum a list of longs.""" + return sum(coll) + + +def sum_list_uint64(coll: list[int]) -> "unsigned long": # type: ignore # noqa: F722 + """Sum a list of ulongs.""" + return sum(coll) + + +def sum_list_float(coll: list[float]) -> float: + """Sum a list of floats.""" + return sum(coll) + + +def sum_list_double(coll: list["double"]) -> "double": # type: ignore # noqa: F821 + """Sum a list of doubles.""" + return float(sum(coll)) + + +def PHLEX_EXPERIMENTAL_REGISTER_ALGORITHMS(m, config): + """Register algorithms for the test.""" + try: + use_lists = config["use_lists"] + except (KeyError, TypeError): + use_lists = False + + # int32 + m.transform( + collectify_int32_list if use_lists else collectify_int32, + input_family=config["input_int32"], + output_products=["arr_int32"], + ) + m.transform( + sum_list_int32 if use_lists else sum_array_int32, + input_family=["arr_int32"], + output_products=config["output_int32"], + name="sum_int32", + ) + + # uint32 + m.transform( + collectify_uint32_list if use_lists else collectify_uint32, + input_family=config["input_uint32"], + output_products=["arr_uint32"], + ) + m.transform( + sum_list_uint32 if use_lists else sum_array_uint32, + input_family=["arr_uint32"], + output_products=config["output_uint32"], + name="sum_uint32", + ) + + # int64 + m.transform( + collectify_int64_list if use_lists else collectify_int64, + input_family=config["input_int64"], + output_products=["arr_int64"], + ) + m.transform( + sum_list_int64 if use_lists else sum_array_int64, + input_family=["arr_int64"], + output_products=config["output_int64"], + name="sum_int64", + ) + + # uint64 + m.transform( + collectify_uint64_list if use_lists else collectify_uint64, + input_family=config["input_uint64"], + output_products=["arr_uint64"], + ) + m.transform( + sum_list_uint64 if use_lists else sum_array_uint64, + input_family=["arr_uint64"], + output_products=config["output_uint64"], + name="sum_uint64", + ) + + # float32 + m.transform( + collectify_float32_list if use_lists else collectify_float32, + input_family=config["input_float32"], + output_products=["arr_float32"], + ) + m.transform( + sum_list_float if use_lists else sum_array_float32, + input_family=["arr_float32"], + output_products=config["output_float32"], + ) + + # float64 + m.transform( + collectify_float64_list if use_lists 
else collectify_float64, + input_family=config["input_float64"], + output_products=["arr_float64"], + ) + m.transform( + sum_list_double if use_lists else sum_array_float64, + input_family=["arr_float64"], + output_products=config["output_float64"], + ) diff --git a/test/python/verify_extended.py b/test/python/verify_extended.py new file mode 100644 index 00000000..22681b2b --- /dev/null +++ b/test/python/verify_extended.py @@ -0,0 +1,151 @@ +"""Observers to check for various types in tests.""" + +import sys + + +class VerifierInt: + """Verify int values.""" + + __name__ = "verifier_int" + + def __init__(self, sum_total: int): + """Initialize with expected sum.""" + self._sum_total = sum_total + + def __call__(self, value: int) -> None: + """Check if value matches expected sum.""" + assert value == self._sum_total + + +class VerifierUInt: + """Verify unsigned int values.""" + + __name__ = "verifier_uint" + + def __init__(self, sum_total: int): + """Initialize with expected sum.""" + self._sum_total = sum_total + + def __call__(self, value: "unsigned int") -> None: # type: ignore # noqa: F722 + """Check if value matches expected sum.""" + assert value == self._sum_total + + +class VerifierLong: + """Verify long values.""" + + __name__ = "verifier_long" + + def __init__(self, sum_total: int): + """Initialize with expected sum.""" + self._sum_total = sum_total + + def __call__(self, value: "long") -> None: # type: ignore # noqa: F821 + """Check if value matches expected sum.""" + print(f"VerifierLong: value={value}, expected={self._sum_total}") + assert value == self._sum_total + + +class VerifierULong: + """Verify unsigned long values.""" + + __name__ = "verifier_ulong" + + def __init__(self, sum_total: int): + """Initialize with expected sum.""" + self._sum_total = sum_total + + def __call__(self, value: "unsigned long") -> None: # type: ignore # noqa: F722 + """Check if value matches expected sum.""" + print(f"VerifierULong: value={value}, expected={self._sum_total}") + assert value == self._sum_total + + +class VerifierFloat: + """Verify float values.""" + + __name__ = "verifier_float" + + def __init__(self, sum_total: float): + """Initialize with expected sum.""" + self._sum_total = sum_total + + def __call__(self, value: "float") -> None: + """Check if value matches expected sum.""" + sys.stderr.write(f"VerifierFloat: value={value}, expected={self._sum_total}\n") + assert abs(value - self._sum_total) < 1e-5 + + +class VerifierDouble: + """Verify double values.""" + + __name__ = "verifier_double" + + def __init__(self, sum_total: float): + """Initialize with expected sum.""" + self._sum_total = sum_total + + def __call__(self, value: "double") -> None: # type: ignore # noqa: F821 + """Check if value matches expected sum.""" + print(f"VerifierDouble: value={value}, expected={self._sum_total}") + assert abs(value - self._sum_total) < 1e-5 + + +class VerifierBool: + """Verify bool values.""" + + __name__ = "verifier_bool" + + def __init__(self, expected: bool): + """Initialize with expected value.""" + self._expected = expected + + def __call__(self, value: bool) -> None: + """Check if value matches expected.""" + print(f"VerifierBool: value={value}, expected={self._expected}") + assert value == self._expected + + +def PHLEX_EXPERIMENTAL_REGISTER_ALGORITHMS(m, config): + """Register observers for the test.""" + try: + m.observe(VerifierInt(config["sum_total"]), input_family=config["input_int"]) + except (KeyError, TypeError): + # Optional configuration, skip if missing + pass + + try: + 
m.observe(VerifierBool(config["expected_bool"]), input_family=config["input_bool"]) + except (KeyError, TypeError): + # Optional configuration, skip if missing + pass + + try: + m.observe(VerifierUInt(config["sum_total"]), input_family=config["input_uint"]) + except (KeyError, TypeError): + # Optional configuration, skip if missing + pass + + try: + m.observe(VerifierLong(config["sum_total"]), input_family=config["input_long"]) + except (KeyError, TypeError): + # Optional configuration, skip if missing + pass + + try: + m.observe(VerifierULong(config["sum_total"]), input_family=config["input_ulong"]) + except (KeyError, TypeError): + # Optional configuration, skip if missing + pass + + try: + m.observe(VerifierFloat(config["sum_total"]), input_family=config["input_float"]) + except (KeyError, TypeError): + # Optional configuration, skip if missing + pass + + try: + m.observe(VerifierDouble(config["sum_total"]), input_family=config["input_double"]) + except (KeyError, TypeError): + # Optional configuration, skip if missing + pass From 7d05a72b19de706fde8079dcb70751cefbec7071 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Mon, 12 Jan 2026 15:13:26 +0000 Subject: [PATCH 02/67] Initial plan From c78727b7524d10ccf6ae283408286b9339594c85 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Mon, 12 Jan 2026 15:20:49 +0000 Subject: [PATCH 03/67] Add Variant helper and address review comments MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit - Added variant.py helper from PR #245 - Modified modulewrap.cpp to recognize Variant wrapper via phlex_callable - Updated adder.py to use Variant helper for type-specific registration - Removed debug print statements from verify_extended.py - Removed commented-out mutex code from modulewrap.cpp - Removed debug message() calls from CMakeLists.txt - Fixed LaTeX syntax in copilot-instructions.md (use Unicode ↔) Co-authored-by: greenc-FNAL <2372949+greenc-FNAL@users.noreply.github.com> --- .github/copilot-instructions.md | 2 +- plugins/python/src/modulewrap.cpp | 24 ++++------ test/python/CMakeLists.txt | 3 -- test/python/adder.py | 25 ++++++++-- test/python/variant.py | 79 +++++++++++++++++++++++++++++++ test/python/verify_extended.py | 7 --- 6 files changed, 109 insertions(+), 31 deletions(-) create mode 100644 test/python/variant.py diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md index b7a4c349..e9cb54ab 100644 --- a/.github/copilot-instructions.md +++ b/.github/copilot-instructions.md @@ -161,7 +161,7 @@ All Markdown files must strictly follow these markdownlint rules: - **C++ Driver**: Provides data streams (e.g., `test/python/driver.cpp`). - **Jsonnet Config**: Wires the graph (e.g., `test/python/pytypes.jsonnet`). - **Python Script**: Implements algorithms (e.g., `test/python/test_types.py`). -- **Type Conversion**: `plugins/python/src/modulewrap.cpp` handles C++ $\leftrightarrow$ Python conversion. +- **Type Conversion**: `plugins/python/src/modulewrap.cpp` handles C++ ↔ Python conversion. - **Mechanism**: Uses string comparison of type names (e.g., `"float64]]"`). This is brittle. - **Requirement**: Ensure converters exist for all types used in tests (e.g., `float`, `double`, `unsigned int`, and their vector equivalents). - **Warning**: Exact type matches are required. `numpy.float32` != `float`. 
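[Editor's note — illustrative aside, not part of the patch.] The copilot-instructions hunk above says the converter matching in modulewrap.cpp keys off type-name strings such as `"float64]]"` and warns that `numpy.float32` != `float`. A minimal Python-side sketch of what those annotation strings look like is given below; it does not reproduce the lookup logic in modulewrap.cpp, it only shows why a suffix check on the stringified annotation can identify a float64 NumPy array and why NumPy scalar types never compare equal to the built-in `float`.

```python
# Illustrative only: print the string forms of the annotations that the
# converter matching described above keys off. The exact "typing.Any" text
# may vary slightly by NumPy/Python version; the trailing "float64]]" suffix
# is the part the C++ check relies on.
import numpy as np
import numpy.typing as npt

print(str(npt.NDArray[np.float64]))  # e.g. numpy.ndarray[typing.Any, numpy.dtype[numpy.float64]]
print(str(list[int]))                # list[int]

assert str(npt.NDArray[np.float64]).endswith("float64]]")
assert str(npt.NDArray[np.float32]).endswith("float32]]")
assert np.float32 is not float       # exact type matches are required
```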
diff --git a/plugins/python/src/modulewrap.cpp b/plugins/python/src/modulewrap.cpp index c242298d..e5892a19 100644 --- a/plugins/python/src/modulewrap.cpp +++ b/plugins/python/src/modulewrap.cpp @@ -9,8 +9,6 @@ #include #include -// static std::mutex g_py_mutex; - #define NO_IMPORT_ARRAY #define PY_ARRAY_UNIQUE_SYMBOL phlex_ARRAY_API #include @@ -109,7 +107,6 @@ namespace { static_assert(sizeof...(Args) == N, "Argument count mismatch"); PyGILRAII gil; - // std::lock_guard lock(g_py_mutex); PyObject* result = PyObject_CallFunctionObjArgs( (PyObject*)m_callable, lifeline_transform(args.get())..., nullptr); @@ -132,7 +129,6 @@ namespace { static_assert(sizeof...(Args) == N, "Argument count mismatch"); PyGILRAII gil; - // std::lock_guard lock(g_py_mutex); PyObject* result = PyObject_CallFunctionObjArgs((PyObject*)m_callable, (PyObject*)args.get()..., nullptr); @@ -369,7 +365,6 @@ namespace { static PyObjectPtr vint_to_py(std::shared_ptr> const& v) { PyGILRAII gil; - // std::lock_guard lock(g_py_mutex); if (!v) return PyObjectPtr(); PyObject* list = PyList_New(v->size()); @@ -392,7 +387,6 @@ namespace { static PyObjectPtr vuint_to_py(std::shared_ptr> const& v) { PyGILRAII gil; - // std::lock_guard lock(g_py_mutex); if (!v) return PyObjectPtr(); PyObject* list = PyList_New(v->size()); @@ -415,7 +409,6 @@ namespace { static PyObjectPtr vlong_to_py(std::shared_ptr> const& v) { PyGILRAII gil; - // std::lock_guard lock(g_py_mutex); if (!v) return PyObjectPtr(); PyObject* list = PyList_New(v->size()); @@ -438,7 +431,6 @@ namespace { static PyObjectPtr vulong_to_py(std::shared_ptr> const& v) { PyGILRAII gil; - // std::lock_guard lock(g_py_mutex); if (!v) return PyObjectPtr(); PyObject* list = PyList_New(v->size()); @@ -501,7 +493,6 @@ namespace { static std::shared_ptr> py_to_vint(PyObjectPtr pyobj) { PyGILRAII gil; - // std::lock_guard lock(g_py_mutex); auto vec = std::make_shared>(); PyObject* obj = pyobj.get(); @@ -541,7 +532,6 @@ namespace { static std::shared_ptr> py_to_vuint(PyObjectPtr pyobj) { PyGILRAII gil; - // std::lock_guard lock(g_py_mutex); auto vec = std::make_shared>(); PyObject* obj = pyobj.get(); @@ -581,7 +571,6 @@ namespace { static std::shared_ptr> py_to_vlong(PyObjectPtr pyobj) { PyGILRAII gil; - // std::lock_guard lock(g_py_mutex); auto vec = std::make_shared>(); PyObject* obj = pyobj.get(); @@ -621,7 +610,6 @@ namespace { static std::shared_ptr> py_to_vulong(PyObjectPtr pyobj) { PyGILRAII gil; - // std::lock_guard lock(g_py_mutex); auto vec = std::make_shared>(); PyObject* obj = pyobj.get(); @@ -661,7 +649,6 @@ namespace { static std::shared_ptr> py_to_vfloat(PyObjectPtr pyobj) { PyGILRAII gil; - // std::lock_guard lock(g_py_mutex); auto vec = std::make_shared>(); PyObject* obj = pyobj.get(); @@ -701,7 +688,6 @@ namespace { static std::shared_ptr> py_to_vdouble(PyObjectPtr pyobj) { PyGILRAII gil; - // std::lock_guard lock(g_py_mutex); auto vec = std::make_shared>(); PyObject* obj = pyobj.get(); @@ -866,8 +852,16 @@ static PyObject* parse_args(PyObject* args, return nullptr; } + // special case of Phlex Variant wrapper + PyObject* wrapped_callable = PyObject_GetAttrString(callable, "phlex_callable"); + if (wrapped_callable) { + callable = wrapped_callable; + } else { + PyErr_Clear(); + Py_INCREF(callable); + } + // no common errors detected; actual registration may have more checks - Py_INCREF(callable); return callable; } diff --git a/test/python/CMakeLists.txt b/test/python/CMakeLists.txt index a615a784..6bc3742a 100644 --- a/test/python/CMakeLists.txt +++ 
b/test/python/CMakeLists.txt @@ -154,8 +154,6 @@ list(APPEND ACTIVE_PY_CPHLEX_TESTS py:reduce) ) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:failure) - message(STATUS "Python_SITELIB: ${Python_SITELIB}") - message(STATUS "Python_SITEARCH: ${Python_SITEARCH}") set(TEST_PYTHONPATH ${CMAKE_CURRENT_SOURCE_DIR}) # Always add site-packages to PYTHONPATH for tests, as embedded python might # not find them especially in spack environments where they are in @@ -171,7 +169,6 @@ list(APPEND ACTIVE_PY_CPHLEX_TESTS py:reduce) # Keep this for backward compatibility or if it adds something else endif() set(TEST_PYTHONPATH ${TEST_PYTHONPATH}:$ENV{PYTHONPATH}) - message(STATUS "TEST_PYTHONPATH: ${TEST_PYTHONPATH}") # "failing" tests for checking error paths add_test( diff --git a/test/python/adder.py b/test/python/adder.py index 549dcdab..42b9650e 100644 --- a/test/python/adder.py +++ b/test/python/adder.py @@ -4,19 +4,33 @@ real. It serves as a "Hello, World" equivalent for running Python code. """ +from typing import Protocol, TypeVar -def add(i: int, j: int) -> int: +from variant import Variant + + +class AddableProtocol[T](Protocol): + """Typer bound for any types that can be added.""" + + def __add__(self, other: T) -> T: # noqa: D105 + ... + + +Addable = TypeVar('Addable', bound=AddableProtocol) + + +def add(i: Addable, j: Addable) -> Addable: """Add the inputs together and return the sum total. Use the standard `+` operator to add the two inputs together to arrive at their total. Args: - i (int): First input. - j (int): Second input. + i (Number): First input. + j (Number): Second input. Returns: - int: Sum of the two inputs. + Number: Sum of the two inputs. Examples: >>> add(1, 2) @@ -40,4 +54,5 @@ def PHLEX_REGISTER_ALGORITHMS(m, config): Returns: None """ - m.transform(add, input_family=config["input"], output_products=config["output"]) + int_adder = Variant(add, {"i": int, "j": int, "return": int}, "iadd") + m.transform(int_adder, input_family=config["input"], output_products=config["output"]) diff --git a/test/python/variant.py b/test/python/variant.py new file mode 100644 index 00000000..95176292 --- /dev/null +++ b/test/python/variant.py @@ -0,0 +1,79 @@ +"""Annotation helper for C++ typing variants. + +Python algorithms are generic, like C++ templates, but the Phlex registration +process requires a single unique signature. These helpers generate annotated +functions for registration with the proper C++ types. +""" + +import copy +from typing import Any, Callable + + +class Variant: + """Wrapper to associate custom annotations with a callable. + + This class wraps a callable and provides custom ``__annotations__`` and + ``__name__`` attributes, allowing the same underlying function or callable + object to be registered multiple times with different type annotations. + + By default, the provided callable is kept by reference, but can be cloned + (e.g. for callable instances) if requested. + + Phlex will recognize the "phlex_callable" data member, allowing an unwrap + and thus saving an indirection. To detect performance degradation, the + wrapper is not callable by default. + + Attributes: + phlex_callable (Callable): The underlying callable (public). + __annotations__ (dict): Type information of arguments and return product. + __name__ (str): The name associated with this variant. + + Examples: + >>> def add(i: Number, j: Number) -> Number: + ... return i + j + ... 
+ >>> int_adder = variant(add, {"i": int, "j": int, "return": int}, "iadd") + """ + + def __init__( + self, + f: Callable, + annotations: dict[str, str | type | Any], + name: str, + clone: bool | str = False, + allow_call: bool = False, + ): + """Annotate the callable F. + + Args: + f (Callable): Annotable function. + annotations (dict): Type information of arguments and return product. + name (str): Name to assign to this variant. + clone (bool|str): If True (or "deep"), creates a shallow (deep) copy + of the callable. + allow_call (bool): Allow this wrapper to forward to the callable. + """ + if clone == 'deep': + self.phlex_callable = copy.deepcopy(f) + elif clone: + self.phlex_callable = copy.copy(f) + else: + self.phlex_callable = f + self.__annotations__ = annotations + self.__name__ = name + self._allow_call = allow_call + + def __call__(self, *args, **kwargs): + """Raises an error if called directly. + + Variant instances should not be called directly. The framework should + extract ``phlex_callable`` instead and call that. + + Raises: + AssertionError: To indicate incorrect usage, unless overridden. + """ + assert self._allow_call, ( + f"TypedVariant '{self.__name__}' was called directly. " + f"The framework should extract phlex_callable instead." + ) + return self.phlex_callable(*args, **kwargs) # type: ignore diff --git a/test/python/verify_extended.py b/test/python/verify_extended.py index 22681b2b..68dc44d0 100644 --- a/test/python/verify_extended.py +++ b/test/python/verify_extended.py @@ -1,7 +1,5 @@ """Observers to check for various types in tests.""" -import sys - class VerifierInt: """Verify int values.""" @@ -42,7 +40,6 @@ def __init__(self, sum_total: int): def __call__(self, value: "long") -> None: # type: ignore # noqa: F821 """Check if value matches expected sum.""" - print(f"VerifierLong: value={value}, expected={self._sum_total}") assert value == self._sum_total @@ -57,7 +54,6 @@ def __init__(self, sum_total: int): def __call__(self, value: "unsigned long") -> None: # type: ignore # noqa: F722 """Check if value matches expected sum.""" - print(f"VerifierULong: value={value}, expected={self._sum_total}") assert value == self._sum_total @@ -72,7 +68,6 @@ def __init__(self, sum_total: float): def __call__(self, value: "float") -> None: """Check if value matches expected sum.""" - sys.stderr.write(f"VerifierFloat: value={value}, expected={self._sum_total}\n") assert abs(value - self._sum_total) < 1e-5 @@ -87,7 +82,6 @@ def __init__(self, sum_total: float): def __call__(self, value: "double") -> None: # type: ignore # noqa: F821 """Check if value matches expected sum.""" - print(f"VerifierDouble: value={value}, expected={self._sum_total}") assert abs(value - self._sum_total) < 1e-5 @@ -102,7 +96,6 @@ def __init__(self, expected: bool): def __call__(self, value: bool) -> None: """Check if value matches expected.""" - print(f"VerifierBool: value={value}, expected={self._expected}") assert value == self._expected From bc590385b5a6114276366c5a49562acc85ab2460 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Mon, 12 Jan 2026 15:25:07 +0000 Subject: [PATCH 04/67] Fix code review comments - Fixed docstring in adder.py to reference 'Addable' instead of 'Number' - Fixed error message in variant.py to use correct class name 'Variant' - Added clarifying comments in modulewrap.cpp about reference counting Co-authored-by: greenc-FNAL <2372949+greenc-FNAL@users.noreply.github.com> --- plugins/python/src/modulewrap.cpp | 2 ++ 
test/python/adder.py | 6 +++--- test/python/variant.py | 2 +- 3 files changed, 6 insertions(+), 4 deletions(-) diff --git a/plugins/python/src/modulewrap.cpp b/plugins/python/src/modulewrap.cpp index e5892a19..3e0291f9 100644 --- a/plugins/python/src/modulewrap.cpp +++ b/plugins/python/src/modulewrap.cpp @@ -855,8 +855,10 @@ static PyObject* parse_args(PyObject* args, // special case of Phlex Variant wrapper PyObject* wrapped_callable = PyObject_GetAttrString(callable, "phlex_callable"); if (wrapped_callable) { + // PyObject_GetAttrString returns a new reference, which we return callable = wrapped_callable; } else { + // No wrapper, use the original callable with incremented reference count PyErr_Clear(); Py_INCREF(callable); } diff --git a/test/python/adder.py b/test/python/adder.py index 42b9650e..d7a40d93 100644 --- a/test/python/adder.py +++ b/test/python/adder.py @@ -26,11 +26,11 @@ def add(i: Addable, j: Addable) -> Addable: to arrive at their total. Args: - i (Number): First input. - j (Number): Second input. + i (Addable): First input. + j (Addable): Second input. Returns: - Number: Sum of the two inputs. + Addable: Sum of the two inputs. Examples: >>> add(1, 2) diff --git a/test/python/variant.py b/test/python/variant.py index 95176292..f607207c 100644 --- a/test/python/variant.py +++ b/test/python/variant.py @@ -73,7 +73,7 @@ def __call__(self, *args, **kwargs): AssertionError: To indicate incorrect usage, unless overridden. """ assert self._allow_call, ( - f"TypedVariant '{self.__name__}' was called directly. " + f"Variant '{self.__name__}' was called directly. " f"The framework should extract phlex_callable instead." ) return self.phlex_callable(*args, **kwargs) # type: ignore From d35075ee3b8726a57d7e5d912e760a27d1d3adb5 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Mon, 12 Jan 2026 17:20:18 +0000 Subject: [PATCH 05/67] Apply cmake-format fixes --- CMakeLists.txt | 8 +++----- test/python/CMakeLists.txt | 30 +++++++++++------------------- 2 files changed, 14 insertions(+), 24 deletions(-) diff --git a/CMakeLists.txt b/CMakeLists.txt index 0e99e585..1ffaa140 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -73,9 +73,7 @@ if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU") if(CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL "14.1") # GCC 14.1+ issues many false positives re. 
array-bounds and # stringop-overflow - add_compile_options( - -Wno-array-bounds -Wno-stringop-overflow -Wno-maybe-uninitialized - ) + add_compile_options(-Wno-array-bounds -Wno-stringop-overflow -Wno-maybe-uninitialized) endif() endif() @@ -109,7 +107,7 @@ if(ENABLE_TSAN) # Ensure no optimizations interfere with TSan "$<$:-fno-omit-frame-pointer>" "$<$:-fno-optimize-sibling-calls>" - ) + ) add_link_options(-fsanitize=thread) else() message(FATAL_ERROR "ThreadSanitizer is not supported with ${CMAKE_CXX_COMPILER_ID}") @@ -132,7 +130,7 @@ if(ENABLE_ASAN) # Ensure no optimizations interfere with ASan "$<$:-fno-omit-frame-pointer>" "$<$:-fno-optimize-sibling-calls>" - ) + ) add_link_options(-fsanitize=address) else() message(FATAL_ERROR "AddressSanitizer is not supported with ${CMAKE_CXX_COMPILER_ID}") diff --git a/test/python/CMakeLists.txt b/test/python/CMakeLists.txt index 6bc3742a..afaa6679 100644 --- a/test/python/CMakeLists.txt +++ b/test/python/CMakeLists.txt @@ -95,19 +95,13 @@ set(ACTIVE_PY_CPHLEX_TESTS "") add_test(NAME py:vec COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyvec.jsonnet) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:vec) - add_test(NAME py:vectypes - COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyvectypes.jsonnet - ) + add_test(NAME py:vectypes COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyvectypes.jsonnet) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:vectypes) - add_test(NAME py:veclists - COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyveclists.jsonnet - ) + add_test(NAME py:veclists COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyveclists.jsonnet) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:veclists) - add_test(NAME py:types COMMAND phlex -c - ${CMAKE_CURRENT_SOURCE_DIR}/pytypes.jsonnet - ) + add_test(NAME py:types COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pytypes.jsonnet) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:types) endif() @@ -125,20 +119,18 @@ list(APPEND ACTIVE_PY_CPHLEX_TESTS py:config) add_test(NAME py:reduce COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyreduce.jsonnet) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:reduce) - add_test(NAME py:coverage - COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pycoverage.jsonnet - ) + add_test(NAME py:coverage COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pycoverage.jsonnet) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:coverage) - add_test(NAME py:mismatch - COMMAND ${PROJECT_BINARY_DIR}/bin/phlex -c - ${CMAKE_CURRENT_SOURCE_DIR}/pymismatch.jsonnet - ) + add_test( + NAME py:mismatch + COMMAND ${PROJECT_BINARY_DIR}/bin/phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pymismatch.jsonnet + ) set_tests_properties( py:mismatch - PROPERTIES PASS_REGULAR_EXPRESSION - "number of inputs .* does not match number of annotation types" - ) + PROPERTIES + PASS_REGULAR_EXPRESSION "number of inputs .* does not match number of annotation types" + ) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:mismatch) # "failing" tests for checking error paths From 057df666dd76d28c18f5dd5971afa3f24b2ce210 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Mon, 12 Jan 2026 17:29:55 +0000 Subject: [PATCH 06/67] Apply Python linting fixes --- test/python/adder.py | 2 +- test/python/test_coverage.py | 8 ++++++++ test/python/test_mismatch.py | 3 ++- test/python/test_types.py | 4 +--- test/python/variant.py | 2 +- test/python/vectypes.py | 12 ++++++++---- 6 files changed, 21 insertions(+), 10 deletions(-) diff --git a/test/python/adder.py b/test/python/adder.py index d7a40d93..f33d68e2 100644 --- a/test/python/adder.py +++ b/test/python/adder.py @@ 
-16,7 +16,7 @@ def __add__(self, other: T) -> T: # noqa: D105 ... -Addable = TypeVar('Addable', bound=AddableProtocol) +Addable = TypeVar("Addable", bound=AddableProtocol) def add(i: Addable, j: Addable) -> Addable: diff --git a/test/python/test_coverage.py b/test/python/test_coverage.py index df30b607..a2c414e8 100644 --- a/test/python/test_coverage.py +++ b/test/python/test_coverage.py @@ -1,35 +1,43 @@ """Test coverage for list input converters.""" + class double(float): # noqa: N801 """Dummy class for C++ double type.""" pass + def list_int_func(lst: list[int]) -> int: """Sum a list of integers.""" return sum(lst) + def list_float_func(lst: list[float]) -> float: """Sum a list of floats.""" return sum(lst) + # For double, I'll use string annotation to be safe and match C++ check def list_double_func(lst: "list[double]") -> float: # type: ignore """Sum a list of doubles.""" return sum(lst) + def collect_int(i: int) -> list[int]: """Collect an integer into a list.""" return [i] + def collect_float(f: float) -> list[float]: """Collect a float into a list.""" return [f] + def collect_double(d: "double") -> "list[double]": # type: ignore """Collect a double into a list.""" return [d] + def PHLEX_EXPERIMENTAL_REGISTER_ALGORITHMS(m, config): """Register algorithms.""" # We need to transform scalar inputs to lists first diff --git a/test/python/test_mismatch.py b/test/python/test_mismatch.py index ab7313d9..0cd70d67 100644 --- a/test/python/test_mismatch.py +++ b/test/python/test_mismatch.py @@ -1,10 +1,11 @@ - """Test mismatch between input labels and types.""" + def mismatch_func(a: int, b: int): """Add two integers.""" return a + b + def PHLEX_EXPERIMENTAL_REGISTER_ALGORITHMS(m, config): """Register algorithms.""" # input_family has 1 element, but function takes 2 arguments diff --git a/test/python/test_types.py b/test/python/test_types.py index 3c060b92..474bd753 100644 --- a/test/python/test_types.py +++ b/test/python/test_types.py @@ -114,9 +114,7 @@ def PHLEX_EXPERIMENTAL_REGISTER_ALGORITHMS(m, config): add_unsigned, input_family=config["input_uint"], output_products=config["output_uint"] ) - m.transform( - and_bool, input_family=config["input_bool"], output_products=config["output_bool"] - ) + m.transform(and_bool, input_family=config["input_bool"], output_products=config["output_bool"]) m.transform( collect_float, input_family=config["input_float"], output_products=config["output_vfloat"] diff --git a/test/python/variant.py b/test/python/variant.py index f607207c..2c58234f 100644 --- a/test/python/variant.py +++ b/test/python/variant.py @@ -53,7 +53,7 @@ def __init__( of the callable. allow_call (bool): Allow this wrapper to forward to the callable. 
""" - if clone == 'deep': + if clone == "deep": self.phlex_callable = copy.deepcopy(f) elif clone: self.phlex_callable = copy.copy(f) diff --git a/test/python/vectypes.py b/test/python/vectypes.py index 4d613f74..a8d57bbb 100644 --- a/test/python/vectypes.py +++ b/test/python/vectypes.py @@ -21,7 +21,8 @@ def sum_array_int32(coll: npt.NDArray[np.int32]) -> int: def collectify_uint32( - i: "unsigned int", j: "unsigned int" # type: ignore # noqa: F722 + i: "unsigned int", + j: "unsigned int", # type: ignore # noqa: F722 ) -> npt.NDArray[np.uint32]: """Create a uint32 array from two integers.""" return np.array([i, j], dtype=np.uint32) @@ -47,7 +48,8 @@ def sum_array_int64(coll: npt.NDArray[np.int64]) -> "long": # type: ignore # no def collectify_uint64( - i: "unsigned long", j: "unsigned long" # type: ignore # noqa: F722 + i: "unsigned long", + j: "unsigned long", # type: ignore # noqa: F722 ) -> npt.NDArray[np.uint64]: """Create a uint64 array from two integers.""" return np.array([i, j], dtype=np.uint64) @@ -96,7 +98,8 @@ def collectify_int32_list(i: int, j: int) -> list[int]: def collectify_uint32_list( - i: "unsigned int", j: "unsigned int" # type: ignore # noqa: F722 + i: "unsigned int", + j: "unsigned int", # type: ignore # noqa: F722 ) -> list[int]: """Create a uint32 list from two integers.""" return [int(i), int(j)] @@ -108,7 +111,8 @@ def collectify_int64_list(i: "long", j: "long") -> list[int]: # type: ignore # def collectify_uint64_list( - i: "unsigned long", j: "unsigned long" # type: ignore # noqa: F722 + i: "unsigned long", + j: "unsigned long", # type: ignore # noqa: F722 ) -> list[int]: """Create a uint64 list from two integers.""" return [int(i), int(j)] From 96bf0a31d77ae88764d00675b957fd2b2c2591a2 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Mon, 12 Jan 2026 18:10:51 +0000 Subject: [PATCH 07/67] Initial plan From d7c00b199dd53b24131d100099b6dc76a3d21a45 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Mon, 12 Jan 2026 18:25:38 +0000 Subject: [PATCH 08/67] Fix ruff F722 and mypy errors in vectypes.py by using type aliases with metaclass - Add type aliases for C++ types (unsigned_int, unsigned_long, long, double) - Use metaclass to set __name__ property correctly for C++ wrapper identification - Update function signatures to use type aliases instead of string annotations - Fix mypy unreachable code warnings by adding union types for array/list params - All ruff and mypy checks now pass Co-authored-by: greenc-FNAL <2372949+greenc-FNAL@users.noreply.github.com> --- test/python/vectypes.py | 105 ++++++++++++++++++++++++++++------------ 1 file changed, 73 insertions(+), 32 deletions(-) diff --git a/test/python/vectypes.py b/test/python/vectypes.py index a8d57bbb..62e5b430 100644 --- a/test/python/vectypes.py +++ b/test/python/vectypes.py @@ -7,13 +7,54 @@ import numpy as np import numpy.typing as npt +# Type aliases for C++ types that don't have Python equivalents +# These are used by the C++ wrapper to identify the correct converter + + +class _CppTypeMeta(type): + """Metaclass to allow overriding __name__ for C++ type identification.""" + + def __new__(mcs, name, bases, namespace, cpp_name=None): + cls = super().__new__(mcs, name, bases, namespace) + if cpp_name: + cls._cpp_name = cpp_name + return cls + + @property + def __name__(cls): + return getattr(cls, "_cpp_name", super(type(cls), cls).__name__) + + +class unsigned_int(int, metaclass=_CppTypeMeta, 
cpp_name="unsigned int"): # noqa: N801 + """Type alias for C++ unsigned int.""" + + pass + + +class unsigned_long(int, metaclass=_CppTypeMeta, cpp_name="unsigned long"): # noqa: N801 + """Type alias for C++ unsigned long.""" + + pass + + +class long(int, metaclass=_CppTypeMeta, cpp_name="long"): # noqa: N801, A001 + """Type alias for C++ long.""" + + pass + + +class double(float, metaclass=_CppTypeMeta, cpp_name="double"): # noqa: N801 + """Type alias for C++ double.""" + + pass + def collectify_int32(i: int, j: int) -> npt.NDArray[np.int32]: """Create an int32 array from two integers.""" return np.array([i, j], dtype=np.int32) -def sum_array_int32(coll: npt.NDArray[np.int32]) -> int: +def sum_array_int32(coll: npt.NDArray[np.int32] | list[int]) -> int: """Sum an int32 array.""" if isinstance(coll, list): coll = np.array(coll, dtype=np.int32) @@ -21,75 +62,75 @@ def sum_array_int32(coll: npt.NDArray[np.int32]) -> int: def collectify_uint32( - i: "unsigned int", - j: "unsigned int", # type: ignore # noqa: F722 + i: unsigned_int, + j: unsigned_int, ) -> npt.NDArray[np.uint32]: """Create a uint32 array from two integers.""" return np.array([i, j], dtype=np.uint32) -def sum_array_uint32(coll: npt.NDArray[np.uint32]) -> "unsigned int": # type: ignore # noqa: F722 +def sum_array_uint32(coll: npt.NDArray[np.uint32] | list[int]) -> unsigned_int: """Sum a uint32 array.""" if isinstance(coll, list): coll = np.array(coll, dtype=np.uint32) - return int(sum(int(x) for x in coll)) + return unsigned_int(sum(int(x) for x in coll)) -def collectify_int64(i: "long", j: "long") -> npt.NDArray[np.int64]: # type: ignore # noqa: F821 +def collectify_int64(i: long, j: long) -> npt.NDArray[np.int64]: """Create an int64 array from two integers.""" return np.array([i, j], dtype=np.int64) -def sum_array_int64(coll: npt.NDArray[np.int64]) -> "long": # type: ignore # noqa: F821 +def sum_array_int64(coll: npt.NDArray[np.int64] | list[int]) -> long: """Sum an int64 array.""" if isinstance(coll, list): coll = np.array(coll, dtype=np.int64) - return int(sum(int(x) for x in coll)) + return long(sum(int(x) for x in coll)) def collectify_uint64( - i: "unsigned long", - j: "unsigned long", # type: ignore # noqa: F722 + i: unsigned_long, + j: unsigned_long, ) -> npt.NDArray[np.uint64]: """Create a uint64 array from two integers.""" return np.array([i, j], dtype=np.uint64) -def sum_array_uint64(coll: npt.NDArray[np.uint64]) -> "unsigned long": # type: ignore # noqa: F722 +def sum_array_uint64(coll: npt.NDArray[np.uint64] | list[int]) -> unsigned_long: """Sum a uint64 array.""" if isinstance(coll, list): coll = np.array(coll, dtype=np.uint64) - return int(sum(int(x) for x in coll)) + return unsigned_long(sum(int(x) for x in coll)) -def collectify_float32(i: "float", j: "float") -> npt.NDArray[np.float32]: +def collectify_float32(i: float, j: float) -> npt.NDArray[np.float32]: """Create a float32 array from two floats.""" return np.array([i, j], dtype=np.float32) -def sum_array_float32(coll: npt.NDArray[np.float32]) -> "float": +def sum_array_float32(coll: npt.NDArray[np.float32]) -> float: """Sum a float32 array.""" return float(sum(coll)) -def collectify_float64(i: "double", j: "double") -> npt.NDArray[np.float64]: # type: ignore # noqa: F821 +def collectify_float64(i: double, j: double) -> npt.NDArray[np.float64]: """Create a float64 array from two floats.""" return np.array([i, j], dtype=np.float64) -def collectify_float32_list(i: "float", j: "float") -> list[float]: +def collectify_float32_list(i: float, j: float) -> 
list[float]: """Create a float32 list from two floats.""" return [i, j] -def collectify_float64_list(i: "double", j: "double") -> list["double"]: # type: ignore # noqa: F821 +def collectify_float64_list(i: double, j: double) -> list[double]: """Create a float64 list from two floats.""" return [i, j] -def sum_array_float64(coll: npt.NDArray[np.float64]) -> "double": # type: ignore # noqa: F821 +def sum_array_float64(coll: npt.NDArray[np.float64]) -> double: """Sum a float64 array.""" - return float(sum(coll)) + return double(sum(coll)) def collectify_int32_list(i: int, j: int) -> list[int]: @@ -98,21 +139,21 @@ def collectify_int32_list(i: int, j: int) -> list[int]: def collectify_uint32_list( - i: "unsigned int", - j: "unsigned int", # type: ignore # noqa: F722 + i: unsigned_int, + j: unsigned_int, ) -> list[int]: """Create a uint32 list from two integers.""" return [int(i), int(j)] -def collectify_int64_list(i: "long", j: "long") -> list[int]: # type: ignore # noqa: F821 +def collectify_int64_list(i: long, j: long) -> list[int]: """Create an int64 list from two integers.""" return [int(i), int(j)] def collectify_uint64_list( - i: "unsigned long", - j: "unsigned long", # type: ignore # noqa: F722 + i: unsigned_long, + j: unsigned_long, ) -> list[int]: """Create a uint64 list from two integers.""" return [int(i), int(j)] @@ -123,19 +164,19 @@ def sum_list_int32(coll: list[int]) -> int: return sum(coll) -def sum_list_uint32(coll: list[int]) -> "unsigned int": # type: ignore # noqa: F722 +def sum_list_uint32(coll: list[int]) -> unsigned_int: """Sum a list of uints.""" - return sum(coll) + return unsigned_int(sum(coll)) -def sum_list_int64(coll: list[int]) -> "long": # type: ignore # noqa: F821 +def sum_list_int64(coll: list[int]) -> long: """Sum a list of longs.""" - return sum(coll) + return long(sum(coll)) -def sum_list_uint64(coll: list[int]) -> "unsigned long": # type: ignore # noqa: F722 +def sum_list_uint64(coll: list[int]) -> unsigned_long: """Sum a list of ulongs.""" - return sum(coll) + return unsigned_long(sum(coll)) def sum_list_float(coll: list[float]) -> float: @@ -143,9 +184,9 @@ def sum_list_float(coll: list[float]) -> float: return sum(coll) -def sum_list_double(coll: list["double"]) -> "double": # type: ignore # noqa: F821 +def sum_list_double(coll: list[double]) -> double: """Sum a list of doubles.""" - return float(sum(coll)) + return double(sum(coll)) def PHLEX_EXPERIMENTAL_REGISTER_ALGORITHMS(m, config): From 38dbad1caef780c4e8294e4b9cadcd281602a42f Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Mon, 12 Jan 2026 18:28:43 +0000 Subject: [PATCH 09/67] Simplify metaclass implementation per code review feedback - Remove unnecessary complexity in __name__ property getter - Always set _cpp_name in __new__ with fallback to class name - Makes code more maintainable and easier to understand Co-authored-by: greenc-FNAL <2372949+greenc-FNAL@users.noreply.github.com> --- test/python/vectypes.py | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/test/python/vectypes.py b/test/python/vectypes.py index 62e5b430..3e6f5ee0 100644 --- a/test/python/vectypes.py +++ b/test/python/vectypes.py @@ -16,13 +16,12 @@ class _CppTypeMeta(type): def __new__(mcs, name, bases, namespace, cpp_name=None): cls = super().__new__(mcs, name, bases, namespace) - if cpp_name: - cls._cpp_name = cpp_name + cls._cpp_name = cpp_name if cpp_name else name return cls @property def __name__(cls): - return getattr(cls, "_cpp_name", 
super(type(cls), cls).__name__) + return cls._cpp_name class unsigned_int(int, metaclass=_CppTypeMeta, cpp_name="unsigned int"): # noqa: N801 From 3fe4d996b48d77d0ac9988459c4b2dd82699d0ca Mon Sep 17 00:00:00 2001 From: Chris Green Date: Mon, 12 Jan 2026 16:18:03 -0600 Subject: [PATCH 10/67] Fix CodeQL alert --- test/python/adder.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/test/python/adder.py b/test/python/adder.py index f33d68e2..43e77e06 100644 --- a/test/python/adder.py +++ b/test/python/adder.py @@ -13,7 +13,7 @@ class AddableProtocol[T](Protocol): """Typer bound for any types that can be added.""" def __add__(self, other: T) -> T: # noqa: D105 - ... + ... # codeql[py/ineffectual-statement] Addable = TypeVar("Addable", bound=AddableProtocol) From 3c4772b2a032c628eae2e2032ab7230215454d56 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Wed, 14 Jan 2026 17:19:19 +0000 Subject: [PATCH 11/67] Apply clang-format fixes --- plugins/python/src/modulewrap.cpp | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/plugins/python/src/modulewrap.cpp b/plugins/python/src/modulewrap.cpp index 3e0291f9..ce9d95d4 100644 --- a/plugins/python/src/modulewrap.cpp +++ b/plugins/python/src/modulewrap.cpp @@ -938,8 +938,7 @@ static bool insert_input_converters(py_phlex_module* mod, PyErr_Format(PyExc_TypeError, "unsupported array input type \"%s\"", inp_type.c_str()); return false; } - } - else if (inp_type == "list[int]") { + } else if (inp_type == "list[int]") { std::string py_out = cname + "_" + inp + "py"; mod->ph_module->transform("pyvint_" + inp + "_" + cname, vint_to_py, concurrency::serial) .input_family(product_query{product_specification::create(inp), LAYER}) @@ -1078,8 +1077,7 @@ static PyObject* md_transform(py_phlex_module* mod, PyObject* args, PyObject* kw PyErr_Format(PyExc_TypeError, "unsupported array output type \"%s\"", output_type.c_str()); return nullptr; } - } - else if (output_type == "list[int]") { + } else if (output_type == "list[int]") { auto py_in = "py" + output + "_" + cname; mod->ph_module->transform("pyvint_" + output + "_" + cname, py_to_vint, concurrency::serial) .input_family(product_query{product_specification::create(py_in), LAYER}) From 68b5d8caa7dddfbe58de4c8216cfaaaf0ea98f56 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Wed, 14 Jan 2026 18:00:47 +0000 Subject: [PATCH 12/67] Fix Python tests and enforce NumPy requirement - Enforced NumPy as a required dependency in CMakeLists.txt. - Renamed PHLEX_EXPERIMENTAL_REGISTER_ALGORITHMS to PHLEX_REGISTER_ALGORITHMS. - Modified C++ wrapper to handle generic types (Union, Optional) by ignoring __name__. - Restored support for list inputs in python test functions (handling Union type hints). 
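[Editor's note — illustrative aside, not part of the patch.] The "handle generic types (Union, Optional) by ignoring `__name__`" item above can be pictured from the Python side with the hedged sketch below. `annotation_name` is an invented helper used only for illustration; the real logic lives in the C++ annotation handling in modulewrap.cpp, which falls back to the full string form when `__name__` is missing or reports only "Union"/"Optional".

```python
# Sketch of the naming fallback: plain classes are identified by __name__,
# string annotations pass through unchanged, and Union/Optional generics use
# their full string form so the concrete types are not lost.
from typing import Optional, Union


def annotation_name(ann) -> str:
    """Return roughly the string the wrapper would see for an annotation."""
    if isinstance(ann, str):
        return ann
    name = getattr(ann, "__name__", None)
    if name is None or name in ("Union", "Optional"):
        return str(ann)
    return name


print(annotation_name(int))                # int
print(annotation_name("list[double]"))     # list[double]
print(annotation_name(Optional[int]))      # e.g. typing.Optional[int]
print(annotation_name(Union[list, None]))  # e.g. typing.Optional[list]
```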
--- plugins/python/CMakeLists.txt | 1 + plugins/python/src/modulewrap.cpp | 10 ++++++++++ test/python/CMakeLists.txt | 14 -------------- test/python/test_coverage.py | 2 +- test/python/test_mismatch.py | 2 +- test/python/test_types.py | 2 +- test/python/vectypes.py | 6 +++--- test/python/verify_extended.py | 2 +- 8 files changed, 18 insertions(+), 21 deletions(-) diff --git a/plugins/python/CMakeLists.txt b/plugins/python/CMakeLists.txt index 357d42c8..99b93f25 100644 --- a/plugins/python/CMakeLists.txt +++ b/plugins/python/CMakeLists.txt @@ -60,3 +60,4 @@ target_link_libraries(pymodule PRIVATE phlex::module Python::Python Python::NumP target_compile_definitions(pymodule PRIVATE NPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION) install(TARGETS pymodule LIBRARY DESTINATION lib) +endif() diff --git a/plugins/python/src/modulewrap.cpp b/plugins/python/src/modulewrap.cpp index ce9d95d4..03326079 100644 --- a/plugins/python/src/modulewrap.cpp +++ b/plugins/python/src/modulewrap.cpp @@ -212,6 +212,16 @@ namespace { std::string ann; if (!PyUnicode_Check(pyobj)) { PyObject* pystr = PyObject_GetAttrString(pyobj, "__name__"); // eg. for classes + + // generics like Union have a __name__ that is not useful for our purposes + if (pystr) { + char const* cstr = PyUnicode_AsUTF8(pystr); + if (cstr && (strcmp(cstr, "Union") == 0 || strcmp(cstr, "Optional") == 0)) { + Py_DECREF(pystr); + pystr = nullptr; + } + } + if (!pystr) { PyErr_Clear(); pystr = PyObject_Str(pyobj); diff --git a/test/python/CMakeLists.txt b/test/python/CMakeLists.txt index afaa6679..9fa03ad4 100644 --- a/test/python/CMakeLists.txt +++ b/test/python/CMakeLists.txt @@ -162,21 +162,7 @@ list(APPEND ACTIVE_PY_CPHLEX_TESTS py:reduce) endif() set(TEST_PYTHONPATH ${TEST_PYTHONPATH}:$ENV{PYTHONPATH}) -# "failing" tests for checking error paths -add_test( - NAME py:failure - COMMAND - ${CMAKE_CURRENT_SOURCE_DIR}/failing_test_wrap.sh ${PROJECT_BINARY_DIR}/bin/phlex -c - ${CMAKE_CURRENT_SOURCE_DIR}/pyfailure.jsonnet -) -set_tests_properties( - py:failure - PROPERTIES PASS_REGULAR_EXPRESSION "property \"input\" does not exist" -) -list(APPEND ACTIVE_PY_CPHLEX_TESTS py:failure) - # Environment variables required: -set(TEST_PYTHONPATH ${CMAKE_CURRENT_SOURCE_DIR}:$ENV{PYTHONPATH}) set( PYTHON_TEST_ENVIRONMENT "SPDLOG_LEVEL=debug;PHLEX_PLUGIN_PATH=${PROJECT_BINARY_DIR};PYTHONPATH=${TEST_PYTHONPATH};PHLEX_INSTALL=${PYTHON_TEST_PHLEX_INSTALL}" diff --git a/test/python/test_coverage.py b/test/python/test_coverage.py index a2c414e8..04dc3cf5 100644 --- a/test/python/test_coverage.py +++ b/test/python/test_coverage.py @@ -38,7 +38,7 @@ def collect_double(d: "double") -> "list[double]": # type: ignore return [d] -def PHLEX_EXPERIMENTAL_REGISTER_ALGORITHMS(m, config): +def PHLEX_REGISTER_ALGORITHMS(m, config): """Register algorithms.""" # We need to transform scalar inputs to lists first # i, f1, d1 come from cppsource4py diff --git a/test/python/test_mismatch.py b/test/python/test_mismatch.py index 0cd70d67..e8f2f769 100644 --- a/test/python/test_mismatch.py +++ b/test/python/test_mismatch.py @@ -6,7 +6,7 @@ def mismatch_func(a: int, b: int): return a + b -def PHLEX_EXPERIMENTAL_REGISTER_ALGORITHMS(m, config): +def PHLEX_REGISTER_ALGORITHMS(m, config): """Register algorithms.""" # input_family has 1 element, but function takes 2 arguments # This should trigger the error in modulewrap.cpp diff --git a/test/python/test_types.py b/test/python/test_types.py index 474bd753..bacd3a09 100644 --- a/test/python/test_types.py +++ b/test/python/test_types.py @@ -92,7 +92,7 
@@ def and_bool(i: bool, j: bool) -> bool: return i and j -def PHLEX_EXPERIMENTAL_REGISTER_ALGORITHMS(m, config): +def PHLEX_REGISTER_ALGORITHMS(m, config): """Register algorithms. Args: diff --git a/test/python/vectypes.py b/test/python/vectypes.py index 3e6f5ee0..30187bff 100644 --- a/test/python/vectypes.py +++ b/test/python/vectypes.py @@ -122,7 +122,7 @@ def collectify_float32_list(i: float, j: float) -> list[float]: return [i, j] -def collectify_float64_list(i: double, j: double) -> list[double]: +def collectify_float64_list(i: double, j: double) -> list['double']: """Create a float64 list from two floats.""" return [i, j] @@ -183,12 +183,12 @@ def sum_list_float(coll: list[float]) -> float: return sum(coll) -def sum_list_double(coll: list[double]) -> double: +def sum_list_double(coll: list['double']) -> double: """Sum a list of doubles.""" return double(sum(coll)) -def PHLEX_EXPERIMENTAL_REGISTER_ALGORITHMS(m, config): +def PHLEX_REGISTER_ALGORITHMS(m, config): """Register algorithms for the test.""" try: use_lists = config["use_lists"] diff --git a/test/python/verify_extended.py b/test/python/verify_extended.py index 68dc44d0..c456cde1 100644 --- a/test/python/verify_extended.py +++ b/test/python/verify_extended.py @@ -99,7 +99,7 @@ def __call__(self, value: bool) -> None: assert value == self._expected -def PHLEX_EXPERIMENTAL_REGISTER_ALGORITHMS(m, config): +def PHLEX_REGISTER_ALGORITHMS(m, config): """Register observers for the test.""" try: m.observe(VerifierInt(config["sum_total"]), input_family=config["input_int"]) From 76a48098bca5393b5fd74db6a4d815317d5839da Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Wed, 14 Jan 2026 19:06:57 +0000 Subject: [PATCH 13/67] Apply cmake-format fixes --- plugins/python/CMakeLists.txt | 22 +++---- test/python/CMakeLists.txt | 104 +++++++++++++++++----------------- 2 files changed, 63 insertions(+), 63 deletions(-) diff --git a/plugins/python/CMakeLists.txt b/plugins/python/CMakeLists.txt index 99b93f25..77b8c7c9 100644 --- a/plugins/python/CMakeLists.txt +++ b/plugins/python/CMakeLists.txt @@ -49,15 +49,15 @@ except Exception: # Phlex module to run Python algorithms add_library( pymodule - MODULE - src/pymodule.cpp - src/modulewrap.cpp - src/configwrap.cpp - src/lifelinewrap.cpp - src/errorwrap.cpp -) -target_link_libraries(pymodule PRIVATE phlex::module Python::Python Python::NumPy) -target_compile_definitions(pymodule PRIVATE NPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION) - -install(TARGETS pymodule LIBRARY DESTINATION lib) + MODULE + src/pymodule.cpp + src/modulewrap.cpp + src/configwrap.cpp + src/lifelinewrap.cpp + src/errorwrap.cpp + ) + target_link_libraries(pymodule PRIVATE phlex::module Python::Python Python::NumPy) + target_compile_definitions(pymodule PRIVATE NPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION) + + install(TARGETS pymodule LIBRARY DESTINATION lib) endif() diff --git a/test/python/CMakeLists.txt b/test/python/CMakeLists.txt index 9fa03ad4..bb4b80b0 100644 --- a/test/python/CMakeLists.txt +++ b/test/python/CMakeLists.txt @@ -89,21 +89,21 @@ endif() set(ACTIVE_PY_CPHLEX_TESTS "") - # numpy support if installed - if(HAS_NUMPY) - # phlex-based tests that require numpy support - add_test(NAME py:vec COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyvec.jsonnet) - list(APPEND ACTIVE_PY_CPHLEX_TESTS py:vec) +# numpy support if installed +if(HAS_NUMPY) + # phlex-based tests that require numpy support + add_test(NAME py:vec COMMAND phlex -c 
${CMAKE_CURRENT_SOURCE_DIR}/pyvec.jsonnet) + list(APPEND ACTIVE_PY_CPHLEX_TESTS py:vec) - add_test(NAME py:vectypes COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyvectypes.jsonnet) - list(APPEND ACTIVE_PY_CPHLEX_TESTS py:vectypes) + add_test(NAME py:vectypes COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyvectypes.jsonnet) + list(APPEND ACTIVE_PY_CPHLEX_TESTS py:vectypes) - add_test(NAME py:veclists COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyveclists.jsonnet) - list(APPEND ACTIVE_PY_CPHLEX_TESTS py:veclists) + add_test(NAME py:veclists COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyveclists.jsonnet) + list(APPEND ACTIVE_PY_CPHLEX_TESTS py:veclists) - add_test(NAME py:types COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pytypes.jsonnet) - list(APPEND ACTIVE_PY_CPHLEX_TESTS py:types) - endif() + add_test(NAME py:types COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pytypes.jsonnet) + list(APPEND ACTIVE_PY_CPHLEX_TESTS py:types) +endif() # C++ helper to provide a driver add_library(cppsource4py MODULE source.cpp) @@ -119,48 +119,48 @@ list(APPEND ACTIVE_PY_CPHLEX_TESTS py:config) add_test(NAME py:reduce COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyreduce.jsonnet) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:reduce) - add_test(NAME py:coverage COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pycoverage.jsonnet) - list(APPEND ACTIVE_PY_CPHLEX_TESTS py:coverage) - - add_test( - NAME py:mismatch - COMMAND ${PROJECT_BINARY_DIR}/bin/phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pymismatch.jsonnet - ) - set_tests_properties( - py:mismatch - PROPERTIES - PASS_REGULAR_EXPRESSION "number of inputs .* does not match number of annotation types" - ) - list(APPEND ACTIVE_PY_CPHLEX_TESTS py:mismatch) +add_test(NAME py:coverage COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pycoverage.jsonnet) +list(APPEND ACTIVE_PY_CPHLEX_TESTS py:coverage) - # "failing" tests for checking error paths - add_test( - NAME py:failure - COMMAND - ${CMAKE_CURRENT_SOURCE_DIR}/failing_test_wrap.sh ${PROJECT_BINARY_DIR}/bin/phlex -c - ${CMAKE_CURRENT_SOURCE_DIR}/pyfailure.jsonnet - ) - set_tests_properties( - py:failure - PROPERTIES PASS_REGULAR_EXPRESSION "property \"input\" does not exist" - ) - list(APPEND ACTIVE_PY_CPHLEX_TESTS py:failure) - - set(TEST_PYTHONPATH ${CMAKE_CURRENT_SOURCE_DIR}) - # Always add site-packages to PYTHONPATH for tests, as embedded python might - # not find them especially in spack environments where they are in - # non-standard locations - if(Python_SITELIB) - set(TEST_PYTHONPATH ${TEST_PYTHONPATH}:${Python_SITELIB}) - endif() - if(Python_SITEARCH AND NOT "${Python_SITEARCH}" STREQUAL "${Python_SITELIB}") - set(TEST_PYTHONPATH ${TEST_PYTHONPATH}:${Python_SITEARCH}) - endif() +add_test( + NAME py:mismatch + COMMAND ${PROJECT_BINARY_DIR}/bin/phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pymismatch.jsonnet +) +set_tests_properties( + py:mismatch + PROPERTIES + PASS_REGULAR_EXPRESSION "number of inputs .* does not match number of annotation types" +) +list(APPEND ACTIVE_PY_CPHLEX_TESTS py:mismatch) + +# "failing" tests for checking error paths +add_test( + NAME py:failure + COMMAND + ${CMAKE_CURRENT_SOURCE_DIR}/failing_test_wrap.sh ${PROJECT_BINARY_DIR}/bin/phlex -c + ${CMAKE_CURRENT_SOURCE_DIR}/pyfailure.jsonnet +) +set_tests_properties( + py:failure + PROPERTIES PASS_REGULAR_EXPRESSION "property \"input\" does not exist" +) +list(APPEND ACTIVE_PY_CPHLEX_TESTS py:failure) + +set(TEST_PYTHONPATH ${CMAKE_CURRENT_SOURCE_DIR}) +# Always add site-packages to PYTHONPATH for tests, as embedded python might +# not find them especially in spack 
environments where they are in +# non-standard locations +if(Python_SITELIB) + set(TEST_PYTHONPATH ${TEST_PYTHONPATH}:${Python_SITELIB}) +endif() +if(Python_SITEARCH AND NOT "${Python_SITEARCH}" STREQUAL "${Python_SITELIB}") + set(TEST_PYTHONPATH ${TEST_PYTHONPATH}:${Python_SITEARCH}) +endif() - if(DEFINED ENV{VIRTUAL_ENV}) - # Keep this for backward compatibility or if it adds something else - endif() - set(TEST_PYTHONPATH ${TEST_PYTHONPATH}:$ENV{PYTHONPATH}) +if(DEFINED ENV{VIRTUAL_ENV}) + # Keep this for backward compatibility or if it adds something else +endif() +set(TEST_PYTHONPATH ${TEST_PYTHONPATH}:$ENV{PYTHONPATH}) # Environment variables required: set( From 4aaf980ae07217d15ca9adbee04c27494a174dd0 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Wed, 14 Jan 2026 14:19:07 -0600 Subject: [PATCH 14/67] More tests to fill gaps --- .gitignore | 31 ++++++++------- plugins/python/src/modulewrap.cpp | 18 +++++++++ test/python/CMakeLists.txt | 21 ++++++++++ test/python/pybadbool.jsonnet | 26 +++++++++++++ test/python/pybadint.jsonnet | 21 ++++++++++ test/python/pybaduint.jsonnet | 21 ++++++++++ test/python/pycallback3.jsonnet | 29 ++++++++++++++ test/python/pyraise.jsonnet | 21 ++++++++++ test/python/source.cpp | 2 + test/python/test_callbacks.py | 64 +++++++++++++++++++++++++++++++ test/python/verify.py | 22 +++++++++++ 11 files changed, 263 insertions(+), 13 deletions(-) create mode 100644 test/python/pybadbool.jsonnet create mode 100644 test/python/pybadint.jsonnet create mode 100644 test/python/pybaduint.jsonnet create mode 100644 test/python/pycallback3.jsonnet create mode 100644 test/python/pyraise.jsonnet create mode 100644 test/python/test_callbacks.py diff --git a/.gitignore b/.gitignore index 6376d96d..c40da725 100644 --- a/.gitignore +++ b/.gitignore @@ -1,23 +1,28 @@ # Build directories -build/ -_build/ +/build/ +/build-cov/ +/_build/ *.dir/ -phlex-src -phlex-build/ -CMakeCache.txt +/phlex-src/ +/phlex-build/ +/CMakeCache.txt CMakeFiles/ -_deps/ +/_deps/ _codeql_detected_source_root # CMake user-specific presets (not generated by Spack) -CMakeUserPresets.json +/CMakeUserPresets.json # Coverage reports -coverage.xml -coverage.info -coverage-html/ -.coverage-generated/ -.coverage-artifacts/ +/coverage.profdata +/coverage_report.txt +/coverage_report_*.txt +/coverage.xml +/coverage.info +/coverage-html/ +/profraw/ +/.coverage-generated/ +/.coverage-artifacts/ *.gcda *.gcno *.gcov @@ -45,4 +50,4 @@ __pycache__/ .DS_Store # act (local workflow testing) .act-artifacts/ -.secrets \ No newline at end of file +.secrets diff --git a/plugins/python/src/modulewrap.cpp b/plugins/python/src/modulewrap.cpp index 03326079..fece7c86 100644 --- a/plugins/python/src/modulewrap.cpp +++ b/plugins/python/src/modulewrap.cpp @@ -317,6 +317,24 @@ namespace { { \ PyGILRAII gil; \ cpptype i = (cpptype)frompy(pyobj.get()); \ + if (PyErr_Occurred()) { \ + PyObject *ptype, *pvalue, *ptraceback; \ + PyErr_Fetch(&ptype, &pvalue, &ptraceback); \ + PyErr_NormalizeException(&ptype, &pvalue, &ptraceback); \ + std::string msg = "Python conversion error for type " #name; \ + if (pvalue) { \ + PyObject* pstr = PyObject_Str(pvalue); \ + if (pstr) { \ + msg += ": "; \ + msg += PyUnicode_AsUTF8(pstr); \ + Py_DECREF(pstr); \ + } \ + } \ + Py_XDECREF(ptype); \ + Py_XDECREF(pvalue); \ + Py_XDECREF(ptraceback); \ + throw std::runtime_error(msg); \ + } \ return i; \ } diff --git a/test/python/CMakeLists.txt b/test/python/CMakeLists.txt index bb4b80b0..63fbdb59 100644 --- a/test/python/CMakeLists.txt +++ 
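
For context, the conversion-error handling added to `BASIC_CONVERTER` above boils down to the pattern sketched below. This is a stand-alone illustration, not the project macro: `throw_python_error` and `checked_as_long` are hypothetical names, and the caller is assumed to already hold the GIL (the macro acquires it via `PyGILRAII`).

```cpp
// Sketch of the error path: if a CPython conversion (e.g. PyLong_AsLong) fails,
// fetch and normalize the pending Python exception, fold its text into a
// message, and rethrow as a C++ std::runtime_error.
#include <Python.h>
#include <stdexcept>
#include <string>

[[noreturn]] void throw_python_error(std::string msg)
{
  PyObject *ptype = nullptr, *pvalue = nullptr, *ptraceback = nullptr;
  PyErr_Fetch(&ptype, &pvalue, &ptraceback);
  PyErr_NormalizeException(&ptype, &pvalue, &ptraceback);
  if (pvalue) {
    if (PyObject* pstr = PyObject_Str(pvalue)) {
      if (char const* text = PyUnicode_AsUTF8(pstr)) {
        msg += ": ";
        msg += text;
      }
      Py_DECREF(pstr);
    }
  }
  Py_XDECREF(ptype);
  Py_XDECREF(pvalue);
  Py_XDECREF(ptraceback);
  throw std::runtime_error(msg);
}

long checked_as_long(PyObject* obj) // caller holds the GIL
{
  long value = PyLong_AsLong(obj); // returns -1 and sets a Python error on failure
  if (PyErr_Occurred())
    throw_python_error("Python conversion error for type long");
  return value;
}
```

The `py:badbool`/`py:badint`/`py:baduint` tests added in this patch exercise exactly this path, and a later patch in the series (patch 33) tightens them to match these messages verbatim.
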
b/test/python/CMakeLists.txt @@ -98,6 +98,27 @@ if(HAS_NUMPY) add_test(NAME py:vectypes COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyvectypes.jsonnet) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:vectypes) + add_test(NAME py:callback3 COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pycallback3.jsonnet) + list(APPEND ACTIVE_PY_CPHLEX_TESTS py:callback3) + + # Expect failure for these tests (check for error propagation and type checking) + add_test(NAME py:raise COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyraise.jsonnet) + set_tests_properties(py:raise PROPERTIES WILL_FAIL TRUE) + list(APPEND ACTIVE_PY_CPHLEX_TESTS py:raise) + + add_test(NAME py:badbool COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pybadbool.jsonnet) + set_tests_properties(py:badbool PROPERTIES WILL_FAIL TRUE) + list(APPEND ACTIVE_PY_CPHLEX_TESTS py:badbool) + + add_test(NAME py:badint COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pybadint.jsonnet) + set_tests_properties(py:badint PROPERTIES WILL_FAIL TRUE) + list(APPEND ACTIVE_PY_CPHLEX_TESTS py:badint) + + add_test(NAME py:baduint COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pybaduint.jsonnet) + set_tests_properties(py:baduint PROPERTIES WILL_FAIL TRUE) + list(APPEND ACTIVE_PY_CPHLEX_TESTS py:baduint) + + add_test(NAME py:veclists COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyveclists.jsonnet) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:veclists) diff --git a/test/python/pybadbool.jsonnet b/test/python/pybadbool.jsonnet new file mode 100644 index 00000000..d1e52558 --- /dev/null +++ b/test/python/pybadbool.jsonnet @@ -0,0 +1,26 @@ +{ + driver: { + cpp: 'generate_layers', + layers: { + event: { parent: 'job', total: 1, starting_number: 1 } + } + }, + sources: { + provider: { + cpp: 'cppsource4py', + } + }, + modules: { + test_bad_bool: { + py: 'test_callbacks', + mode: 'bad_bool', + input: ['i'], + output: ['out_bool'], + }, + verify_bool: { + py: 'verify', + input: ['out_bool'], + expected_bool: true, + } + } +} diff --git a/test/python/pybadint.jsonnet b/test/python/pybadint.jsonnet new file mode 100644 index 00000000..b3fab8fa --- /dev/null +++ b/test/python/pybadint.jsonnet @@ -0,0 +1,21 @@ +{ + driver: { + cpp: 'generate_layers', + layers: { + event: { parent: 'job', total: 1, starting_number: 1 } + } + }, + sources: { + provider: { + cpp: 'cppsource4py', + } + }, + modules: { + test_bad_long: { + py: 'test_callbacks', + mode: 'bad_long', + input: ['i'], + output: ['out_long'], + } + } +} diff --git a/test/python/pybaduint.jsonnet b/test/python/pybaduint.jsonnet new file mode 100644 index 00000000..35711c49 --- /dev/null +++ b/test/python/pybaduint.jsonnet @@ -0,0 +1,21 @@ +{ + driver: { + cpp: 'generate_layers', + layers: { + event: { parent: 'job', total: 1, starting_number: 1 } + } + }, + sources: { + provider: { + cpp: 'cppsource4py', + } + }, + modules: { + test_bad_uint: { + py: 'test_callbacks', + mode: 'bad_uint', + input: ['i'], + output: ['out_uint'], + } + } +} diff --git a/test/python/pycallback3.jsonnet b/test/python/pycallback3.jsonnet new file mode 100644 index 00000000..7f2a4253 --- /dev/null +++ b/test/python/pycallback3.jsonnet @@ -0,0 +1,29 @@ +{ + driver: { + cpp: 'generate_layers', + layers: { + event: { parent: 'job', total: 1, starting_number: 1 } + } + }, + sources: { + provider: { + cpp: 'cppsource4py', + } + }, + modules: { + # 1. 
Test 3-arg callback (success case) + test_three_args: { + py: 'test_callbacks', + mode: 'three_args', + input: ['i', 'j', 'k'], + output: ['sum_ijk'], + }, + verify_three: { + py: 'verify', + input: ['sum_ijk'], + sum_total: 1, # 1 event * (0+0+0? wait, i=event_num-1. event1->0. sum=0. ) + # provider generates i, j starting at 0? + # cppsource4py probably uses event number. + } + } +} diff --git a/test/python/pyraise.jsonnet b/test/python/pyraise.jsonnet new file mode 100644 index 00000000..6c6dd7e0 --- /dev/null +++ b/test/python/pyraise.jsonnet @@ -0,0 +1,21 @@ +{ + driver: { + cpp: 'generate_layers', + layers: { + event: { parent: 'job', total: 1, starting_number: 1 } + } + }, + sources: { + provider: { + cpp: 'cppsource4py', + } + }, + modules: { + test_exception: { + py: 'test_callbacks', + mode: 'exception', + input: ['i'], + output: ['out'], + } + } +} diff --git a/test/python/source.cpp b/test/python/source.cpp index f6979556..5c88d3da 100644 --- a/test/python/source.cpp +++ b/test/python/source.cpp @@ -11,6 +11,8 @@ PHLEX_REGISTER_PROVIDERS(s) s.provide("provide_j", [](data_cell_index const& id) -> int { return 1 - (int)(id.number() % 2); }) .output_product("j"_in("job")); + s.provide("provide_k", [](data_cell_index const&) -> int { return 0; }) + .output_product("k"_in("job")); s.provide("provide_f1", [](data_cell_index const& id) -> float { return (float)((id.number() % 100) / 100.0); }) diff --git a/test/python/test_callbacks.py b/test/python/test_callbacks.py new file mode 100644 index 00000000..fb7b5897 --- /dev/null +++ b/test/python/test_callbacks.py @@ -0,0 +1,64 @@ +"""Test coverage gaps in modulewrap.cpp.""" + +import numpy as np +import numpy.typing as npt + +# 3-argument function to trigger py_callback<3> +def sum_three(a: int, b: int, c: int) -> int: + return a + b + c + +# Function that raises exception to test error handling +def raise_error(a: int) -> int: + raise RuntimeError("Intentional failure") + +# Invalid bool return (2) +def bad_bool(a: int) -> bool: + return 2 # type: ignore + +# Invalid long return (float) +def bad_long(a: int) -> int: + return 1.5 # type: ignore + +class unsigned_int(int): + pass + +# Invalid uint return (negative) +def bad_uint(a: int) -> unsigned_int: + return -5 # type: ignore + +def PHLEX_REGISTER_ALGORITHMS(m, config): + try: + mode = config["mode"] + except KeyError: + mode = "three_args" + + if mode == "three_args": + m.transform( + sum_three, + input_family=config["input"], + output_products=config["output"] + ) + elif mode == "exception": + m.transform( + raise_error, + input_family=config["input"], + output_products=config["output"] + ) + elif mode == "bad_bool": + m.transform( + bad_bool, + input_family=config["input"], + output_products=config["output"] + ) + elif mode == "bad_long": + m.transform( + bad_long, + input_family=config["input"], + output_products=config["output"] + ) + elif mode == "bad_uint": + m.transform( + bad_uint, + input_family=config["input"], + output_products=config["output"] + ) diff --git a/test/python/verify.py b/test/python/verify.py index 936f5a81..5fb46c9c 100644 --- a/test/python/verify.py +++ b/test/python/verify.py @@ -54,6 +54,20 @@ def __call__(self, value: int) -> None: assert value == self._sum_total +class BoolVerifier: + """Verifier for boolean values.""" + + __name__ = "bool_verifier" + + def __init__(self, expected: bool): + """Create a boolean verifier.""" + self._expected = expected + + def __call__(self, value: bool) -> None: + """Verify the boolean value.""" + assert value == 
self._expected + + def PHLEX_REGISTER_ALGORITHMS(m, config): """Register an instance of `Verifier` as an observer. @@ -68,5 +82,13 @@ def PHLEX_REGISTER_ALGORITHMS(m, config): Returns: None """ + try: + expected = config["expected_bool"] + v = BoolVerifier(expected) + m.observe(v, input_family=config["input"]) + return + except Exception: + pass + assert_sum = Verifier(config["sum_total"]) m.observe(assert_sum, input_family=config["input"]) From b3d6d3c7b1c5ae271943af8d6769211dc5669d3e Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Wed, 14 Jan 2026 20:25:03 +0000 Subject: [PATCH 15/67] Apply cmake-format fixes --- test/python/CMakeLists.txt | 1 - 1 file changed, 1 deletion(-) diff --git a/test/python/CMakeLists.txt b/test/python/CMakeLists.txt index 63fbdb59..1b723162 100644 --- a/test/python/CMakeLists.txt +++ b/test/python/CMakeLists.txt @@ -118,7 +118,6 @@ if(HAS_NUMPY) set_tests_properties(py:baduint PROPERTIES WILL_FAIL TRUE) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:baduint) - add_test(NAME py:veclists COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyveclists.jsonnet) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:veclists) From 3e608dddcd643a29b4a5bdaa390a6ecf63fbb764 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Wed, 14 Jan 2026 20:25:48 +0000 Subject: [PATCH 16/67] Apply Python linting fixes --- test/python/check_sys_path.py | 8 +++---- test/python/test_callbacks.py | 45 ++++++++++++----------------------- test/python/vectypes.py | 4 ++-- 3 files changed, 20 insertions(+), 37 deletions(-) diff --git a/test/python/check_sys_path.py b/test/python/check_sys_path.py index f767c4dd..26f4db39 100644 --- a/test/python/check_sys_path.py +++ b/test/python/check_sys_path.py @@ -14,7 +14,7 @@ class Checker: site-packages directory appears in Python's sys.path. """ - __name__ = 'checker' + __name__ = "checker" def __init__(self, venv_path: str): """Initialize the checker with the expected virtual environment path. @@ -35,9 +35,7 @@ def __call__(self, i: int) -> None: site-packages is not found in sys.path. 
""" assert len(sys.path) > 0 - venv_site_packages = ( - f"{sys.prefix}/lib/python{sys.version_info.major}.{sys.version_info.minor}/site-packages" - ) + venv_site_packages = f"{sys.prefix}/lib/python{sys.version_info.major}.{sys.version_info.minor}/site-packages" assert any(p == venv_site_packages for p in sys.path) @@ -51,4 +49,4 @@ def PHLEX_REGISTER_ALGORITHMS(m, config): Returns: None """ - m.observe(Checker(config["venv"]), input_family = config["input"]) + m.observe(Checker(config["venv"]), input_family=config["input"]) diff --git a/test/python/test_callbacks.py b/test/python/test_callbacks.py index fb7b5897..1887c389 100644 --- a/test/python/test_callbacks.py +++ b/test/python/test_callbacks.py @@ -1,30 +1,35 @@ """Test coverage gaps in modulewrap.cpp.""" -import numpy as np -import numpy.typing as npt + # 3-argument function to trigger py_callback<3> def sum_three(a: int, b: int, c: int) -> int: return a + b + c + # Function that raises exception to test error handling def raise_error(a: int) -> int: raise RuntimeError("Intentional failure") + # Invalid bool return (2) def bad_bool(a: int) -> bool: - return 2 # type: ignore + return 2 # type: ignore + # Invalid long return (float) def bad_long(a: int) -> int: - return 1.5 # type: ignore + return 1.5 # type: ignore + class unsigned_int(int): pass + # Invalid uint return (negative) def bad_uint(a: int) -> unsigned_int: - return -5 # type: ignore + return -5 # type: ignore + def PHLEX_REGISTER_ALGORITHMS(m, config): try: @@ -33,32 +38,12 @@ def PHLEX_REGISTER_ALGORITHMS(m, config): mode = "three_args" if mode == "three_args": - m.transform( - sum_three, - input_family=config["input"], - output_products=config["output"] - ) + m.transform(sum_three, input_family=config["input"], output_products=config["output"]) elif mode == "exception": - m.transform( - raise_error, - input_family=config["input"], - output_products=config["output"] - ) + m.transform(raise_error, input_family=config["input"], output_products=config["output"]) elif mode == "bad_bool": - m.transform( - bad_bool, - input_family=config["input"], - output_products=config["output"] - ) + m.transform(bad_bool, input_family=config["input"], output_products=config["output"]) elif mode == "bad_long": - m.transform( - bad_long, - input_family=config["input"], - output_products=config["output"] - ) + m.transform(bad_long, input_family=config["input"], output_products=config["output"]) elif mode == "bad_uint": - m.transform( - bad_uint, - input_family=config["input"], - output_products=config["output"] - ) + m.transform(bad_uint, input_family=config["input"], output_products=config["output"]) diff --git a/test/python/vectypes.py b/test/python/vectypes.py index 30187bff..9aab373d 100644 --- a/test/python/vectypes.py +++ b/test/python/vectypes.py @@ -122,7 +122,7 @@ def collectify_float32_list(i: float, j: float) -> list[float]: return [i, j] -def collectify_float64_list(i: double, j: double) -> list['double']: +def collectify_float64_list(i: double, j: double) -> list["double"]: """Create a float64 list from two floats.""" return [i, j] @@ -183,7 +183,7 @@ def sum_list_float(coll: list[float]) -> float: return sum(coll) -def sum_list_double(coll: list['double']) -> double: +def sum_list_double(coll: list["double"]) -> double: """Sum a list of doubles.""" return double(sum(coll)) From de56d99875fd44664f54ffae16fb8c1356684ad0 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Wed, 14 Jan 2026 14:29:52 -0600 Subject: [PATCH 17/67] Address remaining `ruff` issues --- 
test/python/check_sys_path.py | 3 ++- test/python/test_callbacks.py | 9 ++++++++- 2 files changed, 10 insertions(+), 2 deletions(-) diff --git a/test/python/check_sys_path.py b/test/python/check_sys_path.py index 26f4db39..a376889a 100644 --- a/test/python/check_sys_path.py +++ b/test/python/check_sys_path.py @@ -35,7 +35,8 @@ def __call__(self, i: int) -> None: site-packages is not found in sys.path. """ assert len(sys.path) > 0 - venv_site_packages = f"{sys.prefix}/lib/python{sys.version_info.major}.{sys.version_info.minor}/site-packages" + venv_site_packages = f"{sys.prefix}/lib/python{sys.version_info.major}." \ + f"{sys.version_info.minor}/site-packages" assert any(p == venv_site_packages for p in sys.path) diff --git a/test/python/test_callbacks.py b/test/python/test_callbacks.py index 1887c389..c982b70a 100644 --- a/test/python/test_callbacks.py +++ b/test/python/test_callbacks.py @@ -1,37 +1,44 @@ """Test coverage gaps in modulewrap.cpp.""" - # 3-argument function to trigger py_callback<3> def sum_three(a: int, b: int, c: int) -> int: + """Sum three integers.""" return a + b + c # Function that raises exception to test error handling def raise_error(a: int) -> int: + """Raise a RuntimeError.""" raise RuntimeError("Intentional failure") # Invalid bool return (2) def bad_bool(a: int) -> bool: + """Return an invalid boolean value.""" return 2 # type: ignore # Invalid long return (float) def bad_long(a: int) -> int: + """Return a float instead of an int.""" return 1.5 # type: ignore class unsigned_int(int): + """A dummy unsigned int class.""" + pass # Invalid uint return (negative) def bad_uint(a: int) -> unsigned_int: + """Return a negative value for unsigned int.""" return -5 # type: ignore def PHLEX_REGISTER_ALGORITHMS(m, config): + """Register algorithms based on configuration.""" try: mode = config["mode"] except KeyError: From 4a81f835bf6851f4e697735af02844341dbddfdc Mon Sep 17 00:00:00 2001 From: Chris Green Date: Wed, 14 Jan 2026 14:50:30 -0600 Subject: [PATCH 18/67] Per Gemini 3 Pro, get GIL when updating ref count --- plugins/python/src/modulewrap.cpp | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/plugins/python/src/modulewrap.cpp b/plugins/python/src/modulewrap.cpp index fece7c86..4ba0b966 100644 --- a/plugins/python/src/modulewrap.cpp +++ b/plugins/python/src/modulewrap.cpp @@ -88,18 +88,24 @@ namespace { } py_callback(py_callback const& pc) { + PyGILRAII gil; Py_INCREF(pc.m_callable); m_callable = pc.m_callable; } py_callback& operator=(py_callback const& pc) { if (this != &pc) { + PyGILRAII gil; Py_INCREF(pc.m_callable); m_callable = pc.m_callable; } return *this; } - ~py_callback() { Py_DECREF(m_callable); } + ~py_callback() + { + PyGILRAII gil; + Py_DECREF(m_callable); + } template PyObjectPtr call(Args... 
args) From 2278e963b087c0dad686dd42de1fa27540414d5a Mon Sep 17 00:00:00 2001 From: Chris Green Date: Wed, 14 Jan 2026 15:10:04 -0600 Subject: [PATCH 19/67] Attempt to address CI hangs in `py:badbool` and `py:raise` tests --- plugins/python/src/modulewrap.cpp | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/plugins/python/src/modulewrap.cpp b/plugins/python/src/modulewrap.cpp index 4ba0b966..d3d490e9 100644 --- a/plugins/python/src/modulewrap.cpp +++ b/plugins/python/src/modulewrap.cpp @@ -20,7 +20,7 @@ using phlex::product_query; struct PyObjectDeleter { void operator()(PyObject* p) const { - if (p) { + if (p && Py_IsInitialized()) { phlex::experimental::PyGILRAII gil; Py_DECREF(p); } @@ -103,8 +103,10 @@ namespace { } ~py_callback() { - PyGILRAII gil; - Py_DECREF(m_callable); + if (Py_IsInitialized()) { + PyGILRAII gil; + Py_DECREF(m_callable); + } } template From ab94c6d0607e3d23819edb43f1e6b4666e3be76e Mon Sep 17 00:00:00 2001 From: Chris Green Date: Wed, 14 Jan 2026 15:37:38 -0600 Subject: [PATCH 20/67] More coverage improvement --- .gitignore | 9 ++++----- test/python/CMakeLists.txt | 6 ++++++ test/python/pymismatch_annotations.jsonnet | 22 ++++++++++++++++++++++ test/python/test_callbacks.py | 12 ++++++++++-- 4 files changed, 42 insertions(+), 7 deletions(-) create mode 100644 test/python/pymismatch_annotations.jsonnet diff --git a/.gitignore b/.gitignore index c40da725..bae231d5 100644 --- a/.gitignore +++ b/.gitignore @@ -1,7 +1,7 @@ # Build directories -/build/ -/build-cov/ -/_build/ +build/ +build-cov/ +_build/ *.dir/ /phlex-src/ /phlex-build/ @@ -15,8 +15,7 @@ _codeql_detected_source_root # Coverage reports /coverage.profdata -/coverage_report.txt -/coverage_report_*.txt +/coverage_*.txt /coverage.xml /coverage.info /coverage-html/ diff --git a/test/python/CMakeLists.txt b/test/python/CMakeLists.txt index 1b723162..9227d478 100644 --- a/test/python/CMakeLists.txt +++ b/test/python/CMakeLists.txt @@ -118,6 +118,12 @@ if(HAS_NUMPY) set_tests_properties(py:baduint PROPERTIES WILL_FAIL TRUE) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:baduint) + add_test(NAME py:mismatch_ann COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pymismatch_annotations.jsonnet) + set_tests_properties(py:mismatch_ann PROPERTIES + PASS_REGULAR_EXPRESSION "number of inputs .* does not match number of annotation types" + ) + list(APPEND ACTIVE_PY_CPHLEX_TESTS py:mismatch_ann) + add_test(NAME py:veclists COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyveclists.jsonnet) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:veclists) diff --git a/test/python/pymismatch_annotations.jsonnet b/test/python/pymismatch_annotations.jsonnet new file mode 100644 index 00000000..f45119e9 --- /dev/null +++ b/test/python/pymismatch_annotations.jsonnet @@ -0,0 +1,22 @@ +{ + driver: { + cpp: 'generate_layers', + layers: { + event: { parent: 'job', total: 1, starting_number: 1 } + } + }, + sources: { + provider: { + cpp: 'cppsource4py', + } + }, + modules: { + test_mismatch: { + py: 'test_callbacks', + mode: 'mismatch', + # Providing 3 inputs for a 2-arg function + input: ['i', 'j', 'k'], + output: ['sum_out'], + } + } +} diff --git a/test/python/test_callbacks.py b/test/python/test_callbacks.py index c982b70a..b96ff906 100644 --- a/test/python/test_callbacks.py +++ b/test/python/test_callbacks.py @@ -20,7 +20,7 @@ def bad_bool(a: int) -> bool: # Invalid long return (float) -def bad_long(a: int) -> int: +def bad_long(a: int) -> "long": # type: ignore """Return a float instead of an int.""" return 1.5 # type: ignore @@ -32,11 
+32,17 @@ class unsigned_int(int): # Invalid uint return (negative) -def bad_uint(a: int) -> unsigned_int: +def bad_uint(a: int) -> "unsigned int": # type: ignore """Return a negative value for unsigned int.""" return -5 # type: ignore +# Function with mismatching annotation count vs config inputs +def two_args(a: int, b: int) -> int: + """Sum two integers.""" + return a + b + + def PHLEX_REGISTER_ALGORITHMS(m, config): """Register algorithms based on configuration.""" try: @@ -54,3 +60,5 @@ def PHLEX_REGISTER_ALGORITHMS(m, config): m.transform(bad_long, input_family=config["input"], output_products=config["output"]) elif mode == "bad_uint": m.transform(bad_uint, input_family=config["input"], output_products=config["output"]) + elif mode == "mismatch": + m.transform(two_args, input_family=config["input"], output_products=config["output"]) From a404ce83042f55608ddbe0f8bb6f5303c6026e95 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Wed, 14 Jan 2026 22:49:40 +0000 Subject: [PATCH 21/67] Apply Python linting fixes --- test/python/check_sys_path.py | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/test/python/check_sys_path.py b/test/python/check_sys_path.py index a376889a..9d7a7acc 100644 --- a/test/python/check_sys_path.py +++ b/test/python/check_sys_path.py @@ -35,8 +35,10 @@ def __call__(self, i: int) -> None: site-packages is not found in sys.path. """ assert len(sys.path) > 0 - venv_site_packages = f"{sys.prefix}/lib/python{sys.version_info.major}." \ + venv_site_packages = ( + f"{sys.prefix}/lib/python{sys.version_info.major}." f"{sys.version_info.minor}/site-packages" + ) assert any(p == venv_site_packages for p in sys.path) From 2bd47f0c4c8fc489455b6f7581f7ed5c03ece3dd Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Wed, 14 Jan 2026 22:50:52 +0000 Subject: [PATCH 22/67] Apply cmake-format fixes --- test/python/CMakeLists.txt | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/test/python/CMakeLists.txt b/test/python/CMakeLists.txt index 9227d478..0a1d414d 100644 --- a/test/python/CMakeLists.txt +++ b/test/python/CMakeLists.txt @@ -118,8 +118,13 @@ if(HAS_NUMPY) set_tests_properties(py:baduint PROPERTIES WILL_FAIL TRUE) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:baduint) - add_test(NAME py:mismatch_ann COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pymismatch_annotations.jsonnet) - set_tests_properties(py:mismatch_ann PROPERTIES + add_test( + NAME py:mismatch_ann + COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pymismatch_annotations.jsonnet + ) + set_tests_properties( + py:mismatch_ann + PROPERTIES PASS_REGULAR_EXPRESSION "number of inputs .* does not match number of annotation types" ) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:mismatch_ann) From 2382eae6df3c8e41917a953b7084db2bfa1eb333 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Wed, 14 Jan 2026 17:09:37 -0600 Subject: [PATCH 23/67] Silence inapposite complaints; remove unused class --- test/python/test_callbacks.py | 9 ++------- 1 file changed, 2 insertions(+), 7 deletions(-) diff --git a/test/python/test_callbacks.py b/test/python/test_callbacks.py index b96ff906..50a5a086 100644 --- a/test/python/test_callbacks.py +++ b/test/python/test_callbacks.py @@ -20,19 +20,14 @@ def bad_bool(a: int) -> bool: # Invalid long return (float) -def bad_long(a: int) -> "long": # type: ignore +def bad_long(a: int) -> "long": # type: ignore # noqa: F821 """Return a float instead of an 
int.""" return 1.5 # type: ignore -class unsigned_int(int): - """A dummy unsigned int class.""" - - pass - # Invalid uint return (negative) -def bad_uint(a: int) -> "unsigned int": # type: ignore +def bad_uint(a: int) -> "unsigned int": # type: ignore # noqa: F722 """Return a negative value for unsigned int.""" return -5 # type: ignore From a32effa45175742d9c8912c435318fe0289155d4 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Wed, 14 Jan 2026 17:19:15 -0600 Subject: [PATCH 24/67] More hang protection --- plugins/python/src/wrap.hpp | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/plugins/python/src/wrap.hpp b/plugins/python/src/wrap.hpp index f0818dd1..76b23387 100644 --- a/plugins/python/src/wrap.hpp +++ b/plugins/python/src/wrap.hpp @@ -63,7 +63,11 @@ namespace phlex::experimental { public: PyGILRAII() : m_GILState(PyGILState_Ensure()) {} - ~PyGILRAII() { PyGILState_Release(m_GILState); } + ~PyGILRAII() + { + if (Py_IsInitialized()) + PyGILState_Release(m_GILState); + } }; } // namespace phlex::experimental From 232e0bea17164730c8b7fc238d400d5e5d77c90c Mon Sep 17 00:00:00 2001 From: Chris Green Date: Wed, 14 Jan 2026 17:38:24 -0600 Subject: [PATCH 25/67] Extra diagnostics to debug hangs during testing --- .github/workflows/coverage.yaml | 5 ++++- plugins/python/src/modulewrap.cpp | 31 +++++++++++++++++++++++++++++-- 2 files changed, 33 insertions(+), 3 deletions(-) diff --git a/.github/workflows/coverage.yaml b/.github/workflows/coverage.yaml index 27a0a016..65897a9c 100644 --- a/.github/workflows/coverage.yaml +++ b/.github/workflows/coverage.yaml @@ -165,8 +165,11 @@ jobs: mkdir -p "$PROFILE_ROOT" export LLVM_PROFILE_FILE="$PROFILE_ROOT/%m-%p.profraw" + # Enable Python plugin debug diagnostics + export PHLEX_PYTHON_DEBUG=1 + echo "::group::Running ctest for coverage" - if ctest --progress --output-on-failure -j "$(nproc)"; then + if ctest --progress --verbose --output-on-failure -j "$(nproc)"; then echo "::endgroup::" echo "✅ All tests passed." else diff --git a/plugins/python/src/modulewrap.cpp b/plugins/python/src/modulewrap.cpp index d3d490e9..eb5acc62 100644 --- a/plugins/python/src/modulewrap.cpp +++ b/plugins/python/src/modulewrap.cpp @@ -8,6 +8,7 @@ #include #include #include +#include #define NO_IMPORT_ARRAY #define PY_ARRAY_UNIQUE_SYMBOL phlex_ARRAY_API @@ -17,6 +18,12 @@ using namespace phlex::experimental; using phlex::concurrency; using phlex::product_query; +static bool is_debug_enabled() +{ + static char const* env = std::getenv("PHLEX_PYTHON_DEBUG"); + return env && std::string(env) == "1"; +} + struct PyObjectDeleter { void operator()(PyObject* p) const { @@ -112,6 +119,9 @@ namespace { template PyObjectPtr call(Args... args) { + if (is_debug_enabled()) + std::cerr << "[PY_DEBUG] py_callback::call invoking python function" << std::endl; + static_assert(sizeof...(Args) == N, "Argument count mismatch"); PyGILRAII gil; @@ -119,14 +129,20 @@ namespace { PyObject* result = PyObject_CallFunctionObjArgs( (PyObject*)m_callable, lifeline_transform(args.get())..., nullptr); + if (is_debug_enabled()) + std::cerr << "[PY_DEBUG] py_callback::call returned " << (result ? 
"success" : "failure") << std::endl; + std::string error_msg; if (!result) { if (!msg_from_py_error(error_msg)) error_msg = "Unknown python error"; } - if (!error_msg.empty()) + if (!error_msg.empty()) { + if (is_debug_enabled()) + std::cerr << "[PY_DEBUG] Python error: " << error_msg << std::endl; throw std::runtime_error(error_msg.c_str()); + } return PyObjectPtr(result, PyObjectDeleter()); } @@ -134,6 +150,9 @@ namespace { template void callv(Args... args) { + if (is_debug_enabled()) + std::cerr << "[PY_DEBUG] py_callback::callv invoking python function" << std::endl; + static_assert(sizeof...(Args) == N, "Argument count mismatch"); PyGILRAII gil; @@ -141,6 +160,9 @@ namespace { PyObject* result = PyObject_CallFunctionObjArgs((PyObject*)m_callable, (PyObject*)args.get()..., nullptr); + if (is_debug_enabled()) + std::cerr << "[PY_DEBUG] py_callback::callv returned " << (result ? "success" : "failure") << std::endl; + std::string error_msg; if (!result) { if (!msg_from_py_error(error_msg)) @@ -148,8 +170,11 @@ namespace { } else Py_DECREF(result); - if (!error_msg.empty()) + if (!error_msg.empty()) { + if (is_debug_enabled()) + std::cerr << "[PY_DEBUG] Python error: " << error_msg << std::endl; throw std::runtime_error(error_msg.c_str()); + } } }; @@ -326,6 +351,7 @@ namespace { PyGILRAII gil; \ cpptype i = (cpptype)frompy(pyobj.get()); \ if (PyErr_Occurred()) { \ + if (is_debug_enabled()) std::cerr << "[PY_DEBUG] Python error occurred in conversion for " #name << std::endl; \ PyObject *ptype, *pvalue, *ptraceback; \ PyErr_Fetch(&ptype, &pvalue, &ptraceback); \ PyErr_NormalizeException(&ptype, &pvalue, &ptraceback); \ @@ -341,6 +367,7 @@ namespace { Py_XDECREF(ptype); \ Py_XDECREF(pvalue); \ Py_XDECREF(ptraceback); \ + if (is_debug_enabled()) std::cerr << "[PY_DEBUG] Throwing C++ exception: " << msg << std::endl; \ throw std::runtime_error(msg); \ } \ return i; \ From 0306eb5f4a6208c36d76f40904678b6cb8cc307e Mon Sep 17 00:00:00 2001 From: Chris Green Date: Thu, 15 Jan 2026 09:08:43 -0600 Subject: [PATCH 26/67] More debug logging --- plugins/python/src/modulewrap.cpp | 2 ++ plugins/python/src/wrap.hpp | 25 +++++++++++++++++++++++-- 2 files changed, 25 insertions(+), 2 deletions(-) diff --git a/plugins/python/src/modulewrap.cpp b/plugins/python/src/modulewrap.cpp index eb5acc62..92e21cae 100644 --- a/plugins/python/src/modulewrap.cpp +++ b/plugins/python/src/modulewrap.cpp @@ -28,8 +28,10 @@ struct PyObjectDeleter { void operator()(PyObject* p) const { if (p && Py_IsInitialized()) { + if (is_debug_enabled()) std::cerr << "[PY_DEBUG] PyObjectDeleter " << (void*)p << std::endl; phlex::experimental::PyGILRAII gil; Py_DECREF(p); + if (is_debug_enabled()) std::cerr << "[PY_DEBUG] PyObjectDeleter done" << std::endl; } } }; diff --git a/plugins/python/src/wrap.hpp b/plugins/python/src/wrap.hpp index 76b23387..01af12c8 100644 --- a/plugins/python/src/wrap.hpp +++ b/plugins/python/src/wrap.hpp @@ -20,6 +20,8 @@ #include #include +#include +#include #include "phlex/configuration.hpp" #include "phlex/module.hpp" @@ -61,12 +63,31 @@ namespace phlex::experimental { class PyGILRAII { PyGILState_STATE m_GILState; + static bool is_debug() + { + static bool debug = [] { + char const* env = std::getenv("PHLEX_PYTHON_DEBUG"); + return env && std::string(env) == "1"; + }(); + return debug; + } + public: - PyGILRAII() : m_GILState(PyGILState_Ensure()) {} + PyGILRAII() + { + // Basic debug for SegFault hunting - unlikely to output if crash happens immediately in Ensure, + // but useful to trace flow. 
Use fprintf/stderr to bypass buffering. + if (is_debug()) + fprintf(stderr, "[PY_DEBUG] GIL Ensure\n"); + m_GILState = PyGILState_Ensure(); + } ~PyGILRAII() { - if (Py_IsInitialized()) + if (Py_IsInitialized()) { + if (is_debug()) + fprintf(stderr, "[PY_DEBUG] GIL Release\n"); PyGILState_Release(m_GILState); + } } }; From 2ca9b29f3c2d7f4c025234fc0117b7d5d0b33c9a Mon Sep 17 00:00:00 2001 From: Chris Green Date: Thu, 15 Jan 2026 09:51:37 -0600 Subject: [PATCH 27/67] Remove `failing_test_wrap.sh` as unnecessary --- test/python/CMakeLists.txt | 9 +++++++-- test/python/failing_test_wrap.sh | 7 ------- 2 files changed, 7 insertions(+), 9 deletions(-) delete mode 100755 test/python/failing_test_wrap.sh diff --git a/test/python/CMakeLists.txt b/test/python/CMakeLists.txt index 0a1d414d..57f8260d 100644 --- a/test/python/CMakeLists.txt +++ b/test/python/CMakeLists.txt @@ -115,7 +115,12 @@ if(HAS_NUMPY) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:badint) add_test(NAME py:baduint COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pybaduint.jsonnet) - set_tests_properties(py:baduint PROPERTIES WILL_FAIL TRUE) + set_tests_properties( + py:baduint + PROPERTIES + WILL_FAIL TRUE + ENVIRONMENT "PHLEX_PYTHON_DEBUG=1" + ) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:baduint) add_test( @@ -168,7 +173,7 @@ list(APPEND ACTIVE_PY_CPHLEX_TESTS py:mismatch) add_test( NAME py:failure COMMAND - ${CMAKE_CURRENT_SOURCE_DIR}/failing_test_wrap.sh ${PROJECT_BINARY_DIR}/bin/phlex -c + ${PROJECT_BINARY_DIR}/bin/phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyfailure.jsonnet ) set_tests_properties( diff --git a/test/python/failing_test_wrap.sh b/test/python/failing_test_wrap.sh deleted file mode 100755 index ee808131..00000000 --- a/test/python/failing_test_wrap.sh +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash -"$@" -exit_code=$? 
-if [ $exit_code -ne 0 ]; then - exit 1 -fi -exit 0 From a308e40afed4b9dcc0e871e037f9c3b9189da86a Mon Sep 17 00:00:00 2001 From: Chris Green Date: Thu, 15 Jan 2026 09:52:43 -0600 Subject: [PATCH 28/67] Replace unsafe macro call with safe equivalent --- plugins/python/src/modulewrap.cpp | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/plugins/python/src/modulewrap.cpp b/plugins/python/src/modulewrap.cpp index 92e21cae..106ca815 100644 --- a/plugins/python/src/modulewrap.cpp +++ b/plugins/python/src/modulewrap.cpp @@ -329,7 +329,7 @@ namespace { unsigned long ul = PyLong_AsUnsignedLong(pyobject); if (ul == (unsigned long)-1 && PyErr_Occurred() && PyLong_Check(pyobject)) { PyErr_Clear(); - long i = PyLong_AS_LONG(pyobject); + long i = PyLong_AsLong(pyobject); if (0 <= i) { ul = (unsigned long)i; } else { From 3b80187baf63bf39703a2ec0a1061f96554b0f55 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Thu, 15 Jan 2026 09:58:17 -0600 Subject: [PATCH 29/67] Remove all diagnostics to see if problems return --- plugins/python/src/modulewrap.cpp | 26 -------------------------- plugins/python/src/wrap.hpp | 15 --------------- test/python/CMakeLists.txt | 1 - 3 files changed, 42 deletions(-) diff --git a/plugins/python/src/modulewrap.cpp b/plugins/python/src/modulewrap.cpp index 106ca815..d726c28d 100644 --- a/plugins/python/src/modulewrap.cpp +++ b/plugins/python/src/modulewrap.cpp @@ -18,20 +18,12 @@ using namespace phlex::experimental; using phlex::concurrency; using phlex::product_query; -static bool is_debug_enabled() -{ - static char const* env = std::getenv("PHLEX_PYTHON_DEBUG"); - return env && std::string(env) == "1"; -} - struct PyObjectDeleter { void operator()(PyObject* p) const { if (p && Py_IsInitialized()) { - if (is_debug_enabled()) std::cerr << "[PY_DEBUG] PyObjectDeleter " << (void*)p << std::endl; phlex::experimental::PyGILRAII gil; Py_DECREF(p); - if (is_debug_enabled()) std::cerr << "[PY_DEBUG] PyObjectDeleter done" << std::endl; } } }; @@ -121,9 +113,6 @@ namespace { template PyObjectPtr call(Args... args) { - if (is_debug_enabled()) - std::cerr << "[PY_DEBUG] py_callback::call invoking python function" << std::endl; - static_assert(sizeof...(Args) == N, "Argument count mismatch"); PyGILRAII gil; @@ -131,9 +120,6 @@ namespace { PyObject* result = PyObject_CallFunctionObjArgs( (PyObject*)m_callable, lifeline_transform(args.get())..., nullptr); - if (is_debug_enabled()) - std::cerr << "[PY_DEBUG] py_callback::call returned " << (result ? "success" : "failure") << std::endl; - std::string error_msg; if (!result) { if (!msg_from_py_error(error_msg)) @@ -141,8 +127,6 @@ namespace { } if (!error_msg.empty()) { - if (is_debug_enabled()) - std::cerr << "[PY_DEBUG] Python error: " << error_msg << std::endl; throw std::runtime_error(error_msg.c_str()); } @@ -152,9 +136,6 @@ namespace { template void callv(Args... args) { - if (is_debug_enabled()) - std::cerr << "[PY_DEBUG] py_callback::callv invoking python function" << std::endl; - static_assert(sizeof...(Args) == N, "Argument count mismatch"); PyGILRAII gil; @@ -162,9 +143,6 @@ namespace { PyObject* result = PyObject_CallFunctionObjArgs((PyObject*)m_callable, (PyObject*)args.get()..., nullptr); - if (is_debug_enabled()) - std::cerr << "[PY_DEBUG] py_callback::callv returned " << (result ? 
"success" : "failure") << std::endl; - std::string error_msg; if (!result) { if (!msg_from_py_error(error_msg)) @@ -173,8 +151,6 @@ namespace { Py_DECREF(result); if (!error_msg.empty()) { - if (is_debug_enabled()) - std::cerr << "[PY_DEBUG] Python error: " << error_msg << std::endl; throw std::runtime_error(error_msg.c_str()); } } @@ -353,7 +329,6 @@ namespace { PyGILRAII gil; \ cpptype i = (cpptype)frompy(pyobj.get()); \ if (PyErr_Occurred()) { \ - if (is_debug_enabled()) std::cerr << "[PY_DEBUG] Python error occurred in conversion for " #name << std::endl; \ PyObject *ptype, *pvalue, *ptraceback; \ PyErr_Fetch(&ptype, &pvalue, &ptraceback); \ PyErr_NormalizeException(&ptype, &pvalue, &ptraceback); \ @@ -369,7 +344,6 @@ namespace { Py_XDECREF(ptype); \ Py_XDECREF(pvalue); \ Py_XDECREF(ptraceback); \ - if (is_debug_enabled()) std::cerr << "[PY_DEBUG] Throwing C++ exception: " << msg << std::endl; \ throw std::runtime_error(msg); \ } \ return i; \ diff --git a/plugins/python/src/wrap.hpp b/plugins/python/src/wrap.hpp index 01af12c8..5fae3a4d 100644 --- a/plugins/python/src/wrap.hpp +++ b/plugins/python/src/wrap.hpp @@ -63,29 +63,14 @@ namespace phlex::experimental { class PyGILRAII { PyGILState_STATE m_GILState; - static bool is_debug() - { - static bool debug = [] { - char const* env = std::getenv("PHLEX_PYTHON_DEBUG"); - return env && std::string(env) == "1"; - }(); - return debug; - } - public: PyGILRAII() { - // Basic debug for SegFault hunting - unlikely to output if crash happens immediately in Ensure, - // but useful to trace flow. Use fprintf/stderr to bypass buffering. - if (is_debug()) - fprintf(stderr, "[PY_DEBUG] GIL Ensure\n"); m_GILState = PyGILState_Ensure(); } ~PyGILRAII() { if (Py_IsInitialized()) { - if (is_debug()) - fprintf(stderr, "[PY_DEBUG] GIL Release\n"); PyGILState_Release(m_GILState); } } diff --git a/test/python/CMakeLists.txt b/test/python/CMakeLists.txt index 57f8260d..07413cfd 100644 --- a/test/python/CMakeLists.txt +++ b/test/python/CMakeLists.txt @@ -119,7 +119,6 @@ if(HAS_NUMPY) py:baduint PROPERTIES WILL_FAIL TRUE - ENVIRONMENT "PHLEX_PYTHON_DEBUG=1" ) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:baduint) From b1a82de9544ea4ae812f41a79c9c97def0e4fd9a Mon Sep 17 00:00:00 2001 From: Chris Green Date: Thu, 15 Jan 2026 10:15:52 -0600 Subject: [PATCH 30/67] Remove diagnostic deadends and other unneeded code > Please examine the output of `git diff -w upstream/main -- plugins` and determine whether all changes are necessary and appropriate, or perhaps were the result of diagnostic deadends, or unnecessary "belt and suspenders" additions. Based on your request, I examined the `git diff` of the plugins directory against `upstream/main` and analyzed the changes. I found that while many changes were valid improvements, there were indeed several "diagnostic deadends" and "belt and suspenders" additions that were inconsistent or unnecessary. I have cleaned up the code by reverting the unnecessary changes while keeping the robust fixes. 1. **Reverted "Belt and Suspenders" in modulewrap.cpp**: * **Issue**: Manual C++ functions (`vint_to_py`, `vuint_to_py`, etc.) were introduced to convert C++ `std::vector`s to Python `list`s (Deep Copy), bypassing the standard `numpy` array creation (Zero-Copy View). This was likely done to isolate the crashes or support `list` annotations explicitly, but it caused a performance regression and inconsistency (Integers returned Lists, Floats returned Numpy Arrays). 
* **Action**: Reverted these manual functions to use the `VECTOR_CONVERTER` macro. * **Result**: All vector types now efficiently return Zero-Copy Numpy views. I verified that the `py:veclists` test (which checks list support) still passes, as Python handles the Numpy arrays interchangeably in this context. 2. **Removed Diagnostic Deadends**: * **File**: modulewrap.cpp * **Action**: Removed large blocks of commented-out code (the old macro calls) and unused headers (``, ``, etc.) that were likely left over from debugging the "Heisenbug". * **File**: wrap.hpp * **Action**: Removed unused `` and `` headers included for the now-removed debug logging. 3. **Restored `ulong` Converter**: * **Action**: Reverted the manual implementation of `ulong` conversion to use the standard `BASIC_CONVERTER` macro. * **Reason**: Now that the underlying unsafe macro usage (`PyLong_AS_LONG`) is fixed, the manual implementation was redundant code duplication. The following changes from the diff were deemed **necessary and appropriate** and were kept: * **Robust Version Checking**: The new `check_python_module_version` CMake function provides a reliable way to ensure `numpy` 2.0.0+ is installed, replacing a fragile check. * **Memory Safety**: The object tracking fixes in `lifelinewrap.cpp` (`PyObject_GC_UnTrack`) and the GIL-safe deleters (`PyObjectDeleter`, `PyGILRAII`) in modulewrap.cpp are critical for stability and correctness in a threaded environment. * **Type Support**: The expansion of `insert_input_converters` to support `list[type]` syntax (on the input side) was kept as it correctly maps Python inputs to C++ vectors. The codebase is now cleaner, more consistent, and performance-optimized, while maintaining the stability fixes. Made changes. --- plugins/python/src/modulewrap.cpp | 121 ++---------------------------- plugins/python/src/wrap.hpp | 2 - 2 files changed, 5 insertions(+), 118 deletions(-) diff --git a/plugins/python/src/modulewrap.cpp b/plugins/python/src/modulewrap.cpp index d726c28d..fe6386d6 100644 --- a/plugins/python/src/modulewrap.cpp +++ b/plugins/python/src/modulewrap.cpp @@ -2,13 +2,10 @@ #include "wrap.hpp" #include -#include #include -#include #include #include #include -#include #define NO_IMPORT_ARRAY #define PY_ARRAY_UNIQUE_SYMBOL phlex_ARRAY_API @@ -353,18 +350,7 @@ namespace { BASIC_CONVERTER(int, int, PyLong_FromLong, PyLong_AsLong) BASIC_CONVERTER(uint, unsigned int, PyLong_FromLong, pylong_or_int_as_ulong) BASIC_CONVERTER(long, long, PyLong_FromLong, pylong_as_strictlong) - // BASIC_CONVERTER(ulong, unsigned long, PyLong_FromUnsignedLong, pylong_or_int_as_ulong) - static PyObjectPtr ulong_to_py(unsigned long a) - { - PyGILRAII gil; - return PyObjectPtr(PyLong_FromUnsignedLong(a), PyObjectDeleter()); - } - static unsigned long py_to_ulong(PyObjectPtr pyobj) - { - PyGILRAII gil; - unsigned long i = (unsigned long)pylong_or_int_as_ulong(pyobj.get()); - return i; - } + BASIC_CONVERTER(ulong, unsigned long, PyLong_FromUnsignedLong, pylong_or_int_as_ulong) BASIC_CONVERTER(float, float, PyFloat_FromDouble, PyFloat_AsDouble) BASIC_CONVERTER(double, double, PyFloat_FromDouble, PyFloat_AsDouble) @@ -400,94 +386,10 @@ namespace { return PyObjectPtr((PyObject*)pyll, PyObjectDeleter()); \ } - // VECTOR_CONVERTER(vint, int, NPY_INT) - static PyObjectPtr vint_to_py(std::shared_ptr> const& v) - { - PyGILRAII gil; - if (!v) - return PyObjectPtr(); - PyObject* list = PyList_New(v->size()); - if (!list) { - PyErr_Print(); - return PyObjectPtr(); - } - for (size_t i = 0; i < v->size(); ++i) { - 
PyObject* item = PyLong_FromLong((*v)[i]); - if (!item) { - PyErr_Print(); - Py_DECREF(list); - return PyObjectPtr(); - } - PyList_SET_ITEM(list, i, item); - } - return PyObjectPtr(list, PyObjectDeleter()); - } - // VECTOR_CONVERTER(vuint, unsigned int, NPY_UINT) - static PyObjectPtr vuint_to_py(std::shared_ptr> const& v) - { - PyGILRAII gil; - if (!v) - return PyObjectPtr(); - PyObject* list = PyList_New(v->size()); - if (!list) { - PyErr_Print(); - return PyObjectPtr(); - } - for (size_t i = 0; i < v->size(); ++i) { - PyObject* item = PyLong_FromUnsignedLong((*v)[i]); - if (!item) { - PyErr_Print(); - Py_DECREF(list); - return PyObjectPtr(); - } - PyList_SET_ITEM(list, i, item); - } - return PyObjectPtr(list, PyObjectDeleter()); - } - // VECTOR_CONVERTER(vlong, long, NPY_LONG) - static PyObjectPtr vlong_to_py(std::shared_ptr> const& v) - { - PyGILRAII gil; - if (!v) - return PyObjectPtr(); - PyObject* list = PyList_New(v->size()); - if (!list) { - PyErr_Print(); - return PyObjectPtr(); - } - for (size_t i = 0; i < v->size(); ++i) { - PyObject* item = PyLong_FromLong((*v)[i]); - if (!item) { - PyErr_Print(); - Py_DECREF(list); - return PyObjectPtr(); - } - PyList_SET_ITEM(list, i, item); - } - return PyObjectPtr(list, PyObjectDeleter()); - } - // VECTOR_CONVERTER(vulong, unsigned long, NPY_ULONG) - static PyObjectPtr vulong_to_py(std::shared_ptr> const& v) - { - PyGILRAII gil; - if (!v) - return PyObjectPtr(); - PyObject* list = PyList_New(v->size()); - if (!list) { - PyErr_Print(); - return PyObjectPtr(); - } - for (size_t i = 0; i < v->size(); ++i) { - PyObject* item = PyLong_FromUnsignedLong((*v)[i]); - if (!item) { - PyErr_Print(); - Py_DECREF(list); - return PyObjectPtr(); - } - PyList_SET_ITEM(list, i, item); - } - return PyObjectPtr(list, PyObjectDeleter()); - } + VECTOR_CONVERTER(vint, int, NPY_INT) + VECTOR_CONVERTER(vuint, unsigned int, NPY_UINT) + VECTOR_CONVERTER(vlong, long, NPY_LONG) + VECTOR_CONVERTER(vulong, unsigned long, NPY_ULONG) VECTOR_CONVERTER(vfloat, float, NPY_FLOAT) VECTOR_CONVERTER(vdouble, double, NPY_DOUBLE) @@ -521,14 +423,6 @@ namespace { return vec; \ } - // NUMPY_ARRAY_CONVERTER(vint, int, NPY_INT) - // NUMPY_ARRAY_CONVERTER(vuint, unsigned int, NPY_UINT) - // NUMPY_ARRAY_CONVERTER(vlong, long, NPY_LONG) - // NUMPY_ARRAY_CONVERTER(vulong, unsigned long, NPY_ULONG) - // NUMPY_ARRAY_CONVERTER(vfloat, float, NPY_FLOAT) - // NUMPY_ARRAY_CONVERTER(vdouble, double, NPY_DOUBLE) - - // NUMPY_ARRAY_CONVERTER(vint, int, NPY_INT) static std::shared_ptr> py_to_vint(PyObjectPtr pyobj) { PyGILRAII gil; @@ -567,7 +461,6 @@ namespace { } return vec; } - // NUMPY_ARRAY_CONVERTER(vuint, unsigned int, NPY_UINT) static std::shared_ptr> py_to_vuint(PyObjectPtr pyobj) { PyGILRAII gil; @@ -606,7 +499,6 @@ namespace { } return vec; } - // NUMPY_ARRAY_CONVERTER(vlong, long, NPY_LONG) static std::shared_ptr> py_to_vlong(PyObjectPtr pyobj) { PyGILRAII gil; @@ -645,7 +537,6 @@ namespace { } return vec; } - // NUMPY_ARRAY_CONVERTER(vulong, unsigned long, NPY_ULONG) static std::shared_ptr> py_to_vulong(PyObjectPtr pyobj) { PyGILRAII gil; @@ -684,7 +575,6 @@ namespace { } return vec; } - // NUMPY_ARRAY_CONVERTER(vfloat, float, NPY_FLOAT) static std::shared_ptr> py_to_vfloat(PyObjectPtr pyobj) { PyGILRAII gil; @@ -723,7 +613,6 @@ namespace { } return vec; } - // NUMPY_ARRAY_CONVERTER(vdouble, double, NPY_DOUBLE) static std::shared_ptr> py_to_vdouble(PyObjectPtr pyobj) { PyGILRAII gil; diff --git a/plugins/python/src/wrap.hpp b/plugins/python/src/wrap.hpp index 5fae3a4d..a1896668 100644 
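
The "Zero-Copy View" versus "Deep Copy" distinction described in this commit message can be made concrete with a small sketch. The `VECTOR_CONVERTER` macro body is not shown in this patch series, so the functions below are hypothetical illustrations only, assuming NumPy's C API has already been initialized via `import_array()` in the host module; they are not the project's converters.

```cpp
// Zero-copy: the NumPy array aliases the vector's buffer, so no per-element
// Python objects are created, but the vector must outlive the array (presumably
// what the project's "lifeline" machinery is for).
// Deep copy: every element is boxed into a new Python float object in a list.
// Error handling is kept minimal for brevity.
#include <Python.h>
#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
#include <numpy/arrayobject.h>
#include <vector>

PyObject* view_over_vector(std::vector<double>& v) // zero-copy view
{
  npy_intp dims[1] = {static_cast<npy_intp>(v.size())};
  return PyArray_SimpleNewFromData(1, dims, NPY_DOUBLE, v.data());
}

PyObject* copy_to_list(std::vector<double> const& v) // deep copy
{
  PyObject* list = PyList_New(static_cast<Py_ssize_t>(v.size()));
  if (!list)
    return nullptr;
  for (size_t i = 0; i < v.size(); ++i)
    PyList_SET_ITEM(list, static_cast<Py_ssize_t>(i), PyFloat_FromDouble(v[i]));
  return list;
}
```
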
--- a/plugins/python/src/wrap.hpp +++ b/plugins/python/src/wrap.hpp @@ -20,8 +20,6 @@ #include #include -#include -#include #include "phlex/configuration.hpp" #include "phlex/module.hpp" From 9bc8a6e920611dc3e1032463d3c9620fb5b167d9 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Thu, 15 Jan 2026 16:20:03 +0000 Subject: [PATCH 31/67] Apply clang-format fixes --- plugins/python/src/wrap.hpp | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/plugins/python/src/wrap.hpp b/plugins/python/src/wrap.hpp index a1896668..4bd5e485 100644 --- a/plugins/python/src/wrap.hpp +++ b/plugins/python/src/wrap.hpp @@ -62,10 +62,7 @@ namespace phlex::experimental { PyGILState_STATE m_GILState; public: - PyGILRAII() - { - m_GILState = PyGILState_Ensure(); - } + PyGILRAII() { m_GILState = PyGILState_Ensure(); } ~PyGILRAII() { if (Py_IsInitialized()) { From db1f9a63a862a6254174e2b94470fead153e2132 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Thu, 15 Jan 2026 16:21:41 +0000 Subject: [PATCH 32/67] Apply cmake-format fixes --- test/python/CMakeLists.txt | 10 ++-------- 1 file changed, 2 insertions(+), 8 deletions(-) diff --git a/test/python/CMakeLists.txt b/test/python/CMakeLists.txt index 07413cfd..5a2fcc61 100644 --- a/test/python/CMakeLists.txt +++ b/test/python/CMakeLists.txt @@ -115,11 +115,7 @@ if(HAS_NUMPY) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:badint) add_test(NAME py:baduint COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pybaduint.jsonnet) - set_tests_properties( - py:baduint - PROPERTIES - WILL_FAIL TRUE - ) + set_tests_properties(py:baduint PROPERTIES WILL_FAIL TRUE) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:baduint) add_test( @@ -171,9 +167,7 @@ list(APPEND ACTIVE_PY_CPHLEX_TESTS py:mismatch) # "failing" tests for checking error paths add_test( NAME py:failure - COMMAND - ${PROJECT_BINARY_DIR}/bin/phlex -c - ${CMAKE_CURRENT_SOURCE_DIR}/pyfailure.jsonnet + COMMAND ${PROJECT_BINARY_DIR}/bin/phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyfailure.jsonnet ) set_tests_properties( py:failure From 72a3eddeae5761b90e6a9823d398ff751a42c632 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Thu, 15 Jan 2026 10:59:15 -0600 Subject: [PATCH 33/67] Armor-plate `WILL_FAIL` tests against false pass --- test/python/CMakeLists.txt | 24 ++++++++++++++++++++---- 1 file changed, 20 insertions(+), 4 deletions(-) diff --git a/test/python/CMakeLists.txt b/test/python/CMakeLists.txt index 5a2fcc61..222896d1 100644 --- a/test/python/CMakeLists.txt +++ b/test/python/CMakeLists.txt @@ -103,19 +103,35 @@ if(HAS_NUMPY) # Expect failure for these tests (check for error propagation and type checking) add_test(NAME py:raise COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyraise.jsonnet) - set_tests_properties(py:raise PROPERTIES WILL_FAIL TRUE) + set_tests_properties( + py:raise + PROPERTIES + PASS_REGULAR_EXPRESSION "RuntimeError: Intentional failure" + ) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:raise) add_test(NAME py:badbool COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pybadbool.jsonnet) - set_tests_properties(py:badbool PROPERTIES WILL_FAIL TRUE) + set_tests_properties( + py:badbool + PROPERTIES + PASS_REGULAR_EXPRESSION "Python conversion error for type bool: boolean value should be bool, or integer 1 or 0" + ) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:badbool) add_test(NAME py:badint COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pybadint.jsonnet) - set_tests_properties(py:badint PROPERTIES WILL_FAIL 
TRUE) + set_tests_properties( + py:badint + PROPERTIES + PASS_REGULAR_EXPRESSION "Python conversion error for type long: int/long conversion expects an integer object" + ) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:badint) add_test(NAME py:baduint COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pybaduint.jsonnet) - set_tests_properties(py:baduint PROPERTIES WILL_FAIL TRUE) + set_tests_properties( + py:baduint + PROPERTIES + PASS_REGULAR_EXPRESSION "Python conversion error for type uint: can't convert negative value to unsigned long" + ) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:baduint) add_test( From f0608d8405384353b22ac358ca789c636498ee02 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Thu, 15 Jan 2026 10:59:43 -0600 Subject: [PATCH 34/67] Remove possibly-problematic initialization check --- plugins/python/src/wrap.hpp | 9 ++------- 1 file changed, 2 insertions(+), 7 deletions(-) diff --git a/plugins/python/src/wrap.hpp b/plugins/python/src/wrap.hpp index 4bd5e485..f0818dd1 100644 --- a/plugins/python/src/wrap.hpp +++ b/plugins/python/src/wrap.hpp @@ -62,13 +62,8 @@ namespace phlex::experimental { PyGILState_STATE m_GILState; public: - PyGILRAII() { m_GILState = PyGILState_Ensure(); } - ~PyGILRAII() - { - if (Py_IsInitialized()) { - PyGILState_Release(m_GILState); - } - } + PyGILRAII() : m_GILState(PyGILState_Ensure()) {} + ~PyGILRAII() { PyGILState_Release(m_GILState); } }; } // namespace phlex::experimental From 1a863f5fa7d9bf8873c1aac763803a66fa2cd749 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Thu, 15 Jan 2026 17:08:03 +0000 Subject: [PATCH 35/67] Apply cmake-format fixes --- test/python/CMakeLists.txt | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/test/python/CMakeLists.txt b/test/python/CMakeLists.txt index 222896d1..80115d58 100644 --- a/test/python/CMakeLists.txt +++ b/test/python/CMakeLists.txt @@ -105,8 +105,7 @@ if(HAS_NUMPY) add_test(NAME py:raise COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyraise.jsonnet) set_tests_properties( py:raise - PROPERTIES - PASS_REGULAR_EXPRESSION "RuntimeError: Intentional failure" + PROPERTIES PASS_REGULAR_EXPRESSION "RuntimeError: Intentional failure" ) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:raise) @@ -114,7 +113,8 @@ if(HAS_NUMPY) set_tests_properties( py:badbool PROPERTIES - PASS_REGULAR_EXPRESSION "Python conversion error for type bool: boolean value should be bool, or integer 1 or 0" + PASS_REGULAR_EXPRESSION + "Python conversion error for type bool: boolean value should be bool, or integer 1 or 0" ) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:badbool) @@ -122,7 +122,8 @@ if(HAS_NUMPY) set_tests_properties( py:badint PROPERTIES - PASS_REGULAR_EXPRESSION "Python conversion error for type long: int/long conversion expects an integer object" + PASS_REGULAR_EXPRESSION + "Python conversion error for type long: int/long conversion expects an integer object" ) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:badint) @@ -130,7 +131,8 @@ if(HAS_NUMPY) set_tests_properties( py:baduint PROPERTIES - PASS_REGULAR_EXPRESSION "Python conversion error for type uint: can't convert negative value to unsigned long" + PASS_REGULAR_EXPRESSION + "Python conversion error for type uint: can't convert negative value to unsigned long" ) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:baduint) From 4738de546d7dcee02876811bd920321cc49d4407 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Thu, 15 Jan 2026 11:36:42 -0600 Subject: [PATCH 36/67] Further attempts to prevent stalls --- 
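
The one-line change in the diff below adds `pyobj.reset()` inside the conversion macro. A minimal sketch of the ownership pattern involved: `GilAwareDecref` and `to_long` are hypothetical stand-ins for the project's `PyObjectPtr`/`PyObjectDeleter` machinery, and the intent (releasing the reference at a predictable point while the GIL is already held, rather than during later destruction) is inferred from the surrounding "prevent stalls" commits.

```cpp
// A shared_ptr<PyObject> whose deleter takes the GIL before Py_DECREF, plus an
// explicit reset() at a point where the GIL is already held.  PyGILState_Ensure
// is re-entrant, so the nested acquisition inside the deleter is safe.
#include <Python.h>
#include <memory>

struct GilAwareDecref {
  void operator()(PyObject* p) const
  {
    if (p && Py_IsInitialized()) {
      PyGILState_STATE s = PyGILState_Ensure();
      Py_DECREF(p);
      PyGILState_Release(s);
    }
  }
};

using ObjPtr = std::shared_ptr<PyObject>;

long to_long(ObjPtr obj)
{
  PyGILState_STATE s = PyGILState_Ensure();
  long value = PyLong_AsLong(obj.get()); // error handling trimmed for brevity
  obj.reset();                           // drop our reference while the GIL is held
  PyGILState_Release(s);
  return value;
}
```
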
plugins/python/src/modulewrap.cpp | 1 + 1 file changed, 1 insertion(+) diff --git a/plugins/python/src/modulewrap.cpp b/plugins/python/src/modulewrap.cpp index fe6386d6..4b95e7fc 100644 --- a/plugins/python/src/modulewrap.cpp +++ b/plugins/python/src/modulewrap.cpp @@ -343,6 +343,7 @@ namespace { Py_XDECREF(ptraceback); \ throw std::runtime_error(msg); \ } \ + pyobj.reset(); \ return i; \ } From 0b256a8bb9c02253b9563473ab829ec511264e9b Mon Sep 17 00:00:00 2001 From: Chris Green Date: Thu, 15 Jan 2026 11:46:23 -0600 Subject: [PATCH 37/67] Remove diagnostic invocations from coverage workflow --- .github/workflows/coverage.yaml | 5 +---- CMakeLists.txt | 5 +++++ 2 files changed, 6 insertions(+), 4 deletions(-) diff --git a/.github/workflows/coverage.yaml b/.github/workflows/coverage.yaml index 65897a9c..27a0a016 100644 --- a/.github/workflows/coverage.yaml +++ b/.github/workflows/coverage.yaml @@ -165,11 +165,8 @@ jobs: mkdir -p "$PROFILE_ROOT" export LLVM_PROFILE_FILE="$PROFILE_ROOT/%m-%p.profraw" - # Enable Python plugin debug diagnostics - export PHLEX_PYTHON_DEBUG=1 - echo "::group::Running ctest for coverage" - if ctest --progress --verbose --output-on-failure -j "$(nproc)"; then + if ctest --progress --output-on-failure -j "$(nproc)"; then echo "::endgroup::" echo "✅ All tests passed." else diff --git a/CMakeLists.txt b/CMakeLists.txt index 1ffaa140..1bfecdd2 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -49,6 +49,11 @@ project(phlex VERSION 0.1.0 LANGUAGES CXX) cet_cmake_env() # ############################################################################## +# Set CI/test timeouts to a conservative value to avoid long stalls in CI. +# Use cache variables so generated CTest/Dart files pick this up when configured. +set(DART_TESTING_TIMEOUT 90 CACHE STRING "Timeout (s) for Dart/CTest runs") +set(CTEST_TEST_TIMEOUT 90 CACHE STRING "Per-test timeout (s) for CTest") + # Make tools available FetchContent_MakeAvailable(Catch2 GSL mimicpp) From 719ed3f3d363d6d013b93da7105c80ef63a2be09 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Thu, 15 Jan 2026 11:52:00 -0600 Subject: [PATCH 38/67] Encourage `ctest --test-timeout` to limit impact of stalling tests --- .github/copilot-instructions.md | 1 + .github/workflows/cmake-build.yaml | 2 +- .github/workflows/coverage.yaml | 2 +- 3 files changed, 3 insertions(+), 2 deletions(-) diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md index e9cb54ab..b238ce27 100644 --- a/.github/copilot-instructions.md +++ b/.github/copilot-instructions.md @@ -152,6 +152,7 @@ All Markdown files must strictly follow these markdownlint rules: - **Parallelism**: Run tests in parallel using `ctest -j $(nproc)` or `ctest --parallel `. - **Selection**: Run specific tests with `ctest -R "regex"` (e.g., `ctest -R "py:*"`). - **Debugging**: Use `ctest --output-on-failure` to see logs for failed tests. + - **Guard against known or suspected stalling tests**: Use `ctest --test-timeout` to set the per-test time limit (e.g. `90`) for 90s, _vs_ the default of 1500s. ### Python Integration diff --git a/.github/workflows/cmake-build.yaml b/.github/workflows/cmake-build.yaml index dcba5940..2708d5f2 100644 --- a/.github/workflows/cmake-build.yaml +++ b/.github/workflows/cmake-build.yaml @@ -241,7 +241,7 @@ jobs: echo "➡️ Running tests..." echo "::group::Running ctest" - if ctest --progress --output-on-failure -j "$(nproc)"; then + if ctest --progress --output-on-failure --test-timeout 90 -j "$(nproc)"; then echo "::endgroup::" echo "✅ All tests passed." 
else diff --git a/.github/workflows/coverage.yaml b/.github/workflows/coverage.yaml index 27a0a016..41b85ab5 100644 --- a/.github/workflows/coverage.yaml +++ b/.github/workflows/coverage.yaml @@ -166,7 +166,7 @@ jobs: export LLVM_PROFILE_FILE="$PROFILE_ROOT/%m-%p.profraw" echo "::group::Running ctest for coverage" - if ctest --progress --output-on-failure -j "$(nproc)"; then + if ctest --progress --output-on-failure --test-timeout 90 -j "$(nproc)"; then echo "::endgroup::" echo "✅ All tests passed." else From a33df638255be62d56580589b7b1ed6dee3c79b2 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Thu, 15 Jan 2026 12:31:39 -0600 Subject: [PATCH 39/67] First pass at addressing review comments - Per https://github.com/Framework-R-D/phlex/pull/213#pullrequestreview-3666448198. - Addressed principally by GPT-5.2 Codex with human review and adjustment. --- .github/copilot-instructions.md | 4 +- CMakeLists.txt | 9 +- plugins/python/CMakeLists.txt | 78 +++------ plugins/python/README.md | 15 +- plugins/python/src/lifelinewrap.cpp | 3 + test/demo-giantdata/unfold_transform_fold.cpp | 2 +- test/python/adder.py | 4 +- test/python/variant.py | 11 +- test/python/vectypes.py | 154 +++++++++--------- 9 files changed, 127 insertions(+), 153 deletions(-) diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md index b238ce27..d9d3959d 100644 --- a/.github/copilot-instructions.md +++ b/.github/copilot-instructions.md @@ -157,13 +157,13 @@ All Markdown files must strictly follow these markdownlint rules: ### Python Integration - **Naming**: Avoid naming Python test scripts `types.py` or other names that shadow standard library modules. This causes obscure import errors (e.g., `ModuleNotFoundError: No module named 'numpy'`). -- **PYTHONPATH**: When running tests in Spack environments, ensure `PYTHONPATH` includes `site-packages`. In CMake, explicitly add `Python_SITELIB` and `Python_SITEARCH` to `TEST_PYTHONPATH`. +- **PYTHONPATH**: Only include paths that contain user Python modules loaded by Phlex (for example, the source directory and any build output directory that houses generated modules). Do not append system/Spack/venv `site-packages`; `pymodule.cpp` handles CMAKE_PREFIX_PATH and virtual-environment path adjustments. - **Test Structure**: - **C++ Driver**: Provides data streams (e.g., `test/python/driver.cpp`). - **Jsonnet Config**: Wires the graph (e.g., `test/python/pytypes.jsonnet`). - **Python Script**: Implements algorithms (e.g., `test/python/test_types.py`). - **Type Conversion**: `plugins/python/src/modulewrap.cpp` handles C++ ↔ Python conversion. - - **Mechanism**: Uses string comparison of type names (e.g., `"float64]]"`). This is brittle. + - **Mechanism**: Uses substring matching on type names (for example, `"float64]]"`). This is brittle. - **Requirement**: Ensure converters exist for all types used in tests (e.g., `float`, `double`, `unsigned int`, and their vector equivalents). - **Warning**: Exact type matches are required. `numpy.float32` != `float`. diff --git a/CMakeLists.txt b/CMakeLists.txt index 1bfecdd2..9b85d698 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -75,10 +75,13 @@ add_compile_options( ) if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU") - if(CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL "14.1") - # GCC 14.1+ issues many false positives re. array-bounds and + if( + CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL "14.1" + AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS "15" + ) + # GCC 14.1 issues many false positives re. 
array-bounds and # stringop-overflow - add_compile_options(-Wno-array-bounds -Wno-stringop-overflow -Wno-maybe-uninitialized) + add_compile_options(-Wno-array-bounds -Wno-stringop-overflow) endif() endif() diff --git a/plugins/python/CMakeLists.txt b/plugins/python/CMakeLists.txt index 77b8c7c9..2554a6dc 100644 --- a/plugins/python/CMakeLists.txt +++ b/plugins/python/CMakeLists.txt @@ -1,63 +1,23 @@ find_package(Python 3.12 COMPONENTS Interpreter Development NumPy REQUIRED) -if(Python_FOUND) - # Verify installation of necessary python modules for specific tests - - function(check_python_module_version MODULE_NAME MIN_VERSION OUT_VAR) - execute_process( - COMMAND - ${Python_EXECUTABLE} -c - "import sys -try: - import ${MODULE_NAME} - installed_version = getattr(${MODULE_NAME}, '__version__', None) - if not installed_version: - sys.exit(2) - - def parse(v): - return tuple(map(int, v.split('.')[:3])) - - if parse(installed_version) >= parse('${MIN_VERSION}'): - sys.exit(0) - else: - sys.exit(2) # Version too low -except ImportError: - sys.exit(1) -except Exception: - sys.exit(1)" - RESULT_VARIABLE _module_check_result - ) - - if(_module_check_result EQUAL 0) - set(${OUT_VAR} TRUE PARENT_SCOPE) - elseif(_module_check_result EQUAL 1) - set(${OUT_VAR} FALSE PARENT_SCOPE) # silent b/c common - elseif(_module_check_result EQUAL 2) - message( - WARNING - "Python module '${MODULE_NAME}' found but version too low (min required: ${MIN_VERSION})." - ) - set(${OUT_VAR} FALSE PARENT_SCOPE) - else() - message(WARNING "Unknown error while checking Python module '${MODULE_NAME}'.") - set(${OUT_VAR} FALSE PARENT_SCOPE) - endif() - endfunction() - - check_python_module_version("numpy" "2.0.0" HAS_NUMPY) - - # Phlex module to run Python algorithms - add_library( - pymodule - MODULE - src/pymodule.cpp - src/modulewrap.cpp - src/configwrap.cpp - src/lifelinewrap.cpp - src/errorwrap.cpp +if(Python_NumPy_VERSION VERSION_LESS "2.0.0") + message( + FATAL_ERROR + "NumPy version is too low: ${Python_NumPy_VERSION} found, at least 2.0.0 required" ) - target_link_libraries(pymodule PRIVATE phlex::module Python::Python Python::NumPy) - target_compile_definitions(pymodule PRIVATE NPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION) - - install(TARGETS pymodule LIBRARY DESTINATION lib) endif() + +# Phlex module to run Python algorithms +add_library( + pymodule + MODULE + src/pymodule.cpp + src/modulewrap.cpp + src/configwrap.cpp + src/lifelinewrap.cpp + src/errorwrap.cpp +) +target_link_libraries(pymodule PRIVATE phlex::module Python::Python Python::NumPy) +target_compile_definitions(pymodule PRIVATE NPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION) + +install(TARGETS pymodule LIBRARY DESTINATION lib) diff --git a/plugins/python/README.md b/plugins/python/README.md index e84525e7..0c40c5d9 100644 --- a/plugins/python/README.md +++ b/plugins/python/README.md @@ -14,7 +14,8 @@ The core of the integration is the type conversion layer in `src/modulewrap.cpp` **Critical Implementation Detail:** The type mapping relies on **string comparison** of type names. -- **Mechanism**: The C++ code checks `type_name() == "float64]]"` to identify a 2D array of doubles. + +- **Mechanism**: The C++ code checks whether `type_name()` contains `"float64]]"` to identify a 2D array of doubles. - **Brittleness**: This is a fragile contract. If the type name changes (e.g., `numpy` changes its string representation) or if a user provides a slightly different type (e.g., `float` vs `np.float32`), the bridge may fail. 
- **Extension**: When adding support for new types, you must explicitly add converters in `modulewrap.cpp` for both scalar and vector/array versions. @@ -22,24 +23,24 @@ The type mapping relies on **string comparison** of type names. Phlex uses a hybrid configuration model involving three languages: -1. **Jsonnet** (`*.jsonnet`): Defines the computation graph structure. It specifies: +1. **Jsonnet** (`*.jsonnet`): Defines the computation graph structure. It specifies: - The nodes in the graph. - The Python module/class to load for specific nodes. - Configuration parameters passed to the Python object. -2. **C++ Driver**: The executable that: +2. **C++ Driver**: The executable that: - Parses the Jsonnet configuration. - Initializes the Phlex core. - Loads the Python interpreter and the specified plugin. -3. **Python Code** (`*.py`): Implements the algorithmic logic. +3. **Python Code** (`*.py`): Implements the algorithmic logic. ### 3. Environment & Testing Because the Python interpreter is embedded within the C++ application, the runtime environment is critical. - **PYTHONPATH**: Must be set correctly to include: - - The build directory (for generated modules). - - The source directory (for user scripts). - - System/Spack `site-packages` (for dependencies like `numpy`). + - The build directory (for generated modules). + - The source directory (for user scripts). + - Do not append system/Spack `site-packages`; `pymodule.cpp` adjusts `sys.path` based on `CMAKE_PREFIX_PATH` and active virtual environments. - **Naming Collisions**: - **Warning**: Do not name test files `types.py`, `test.py`, `code.py`, or other names that shadow standard library modules. - **Consequence**: Shadowing can cause obscure failures in internal libraries (e.g., `numpy` failing to import because it tries to import `types` from the standard library but gets your local file instead). diff --git a/plugins/python/src/lifelinewrap.cpp b/plugins/python/src/lifelinewrap.cpp index e4893480..a00e1d72 100644 --- a/plugins/python/src/lifelinewrap.cpp +++ b/plugins/python/src/lifelinewrap.cpp @@ -31,10 +31,13 @@ static int ll_clear(py_lifeline_t* pyobj) static void ll_dealloc(py_lifeline_t* pyobj) { + // This type participates in GC; untrack before clearing references so the + // collector does not traverse a partially torn-down object during dealloc. PyObject_GC_UnTrack(pyobj); Py_CLEAR(pyobj->m_view); typedef std::shared_ptr generic_shared_t; pyobj->m_source.~generic_shared_t(); + // Use tp_free to pair with tp_alloc for GC-tracked Python objects. 
Py_TYPE(pyobj)->tp_free((PyObject*)pyobj); } diff --git a/test/demo-giantdata/unfold_transform_fold.cpp b/test/demo-giantdata/unfold_transform_fold.cpp index 49581d27..77cb2eab 100644 --- a/test/demo-giantdata/unfold_transform_fold.cpp +++ b/test/demo-giantdata/unfold_transform_fold.cpp @@ -36,7 +36,7 @@ TEST_CASE("Unfold-transform-fold pipeline", "[concurrency][unfold][fold]") // Test parameters - moderate scale to ensure sustained concurrent execution constexpr std::size_t n_runs = 1; constexpr std::size_t n_subruns = 1; - constexpr std::size_t n_spills = 100; + constexpr std::size_t n_spills = 20; constexpr int apas_per_spill = 20; constexpr std::size_t wires_per_spill = apas_per_spill * 256ull; constexpr std::size_t chunksize = 256; diff --git a/test/python/adder.py b/test/python/adder.py index 43e77e06..61b391dd 100644 --- a/test/python/adder.py +++ b/test/python/adder.py @@ -6,7 +6,7 @@ from typing import Protocol, TypeVar -from variant import Variant +from variant import AdjustAnnotations class AddableProtocol[T](Protocol): @@ -54,5 +54,5 @@ def PHLEX_REGISTER_ALGORITHMS(m, config): Returns: None """ - int_adder = Variant(add, {"i": int, "j": int, "return": int}, "iadd") + int_adder = AdjustAnnotations(add, {"i": int, "j": int, "return": int}, "iadd") m.transform(int_adder, input_family=config["input"], output_products=config["output"]) diff --git a/test/python/variant.py b/test/python/variant.py index 2c58234f..7b76dde5 100644 --- a/test/python/variant.py +++ b/test/python/variant.py @@ -9,7 +9,7 @@ from typing import Any, Callable -class Variant: +class AdjustAnnotations: """Wrapper to associate custom annotations with a callable. This class wraps a callable and provides custom ``__annotations__`` and @@ -32,7 +32,7 @@ class Variant: >>> def add(i: Number, j: Number) -> Number: ... return i + j ... - >>> int_adder = variant(add, {"i": int, "j": int, "return": int}, "iadd") + >>> int_adder = AdjustAnnotations(add, {"i": int, "j": int, "return": int}, "iadd") """ def __init__( @@ -66,14 +66,17 @@ def __init__( def __call__(self, *args, **kwargs): """Raises an error if called directly. - Variant instances should not be called directly. The framework should + AdjustAnnotations instances should not be called directly. The framework should extract ``phlex_callable`` instead and call that. Raises: AssertionError: To indicate incorrect usage, unless overridden. """ assert self._allow_call, ( - f"Variant '{self.__name__}' was called directly. " + f"AdjustAnnotations '{self.__name__}' was called directly. " f"The framework should extract phlex_callable instead." 
) return self.phlex_callable(*args, **kwargs) # type: ignore + + +Variant = AdjustAnnotations diff --git a/test/python/vectypes.py b/test/python/vectypes.py index 9aab373d..db9bd342 100644 --- a/test/python/vectypes.py +++ b/test/python/vectypes.py @@ -195,78 +195,82 @@ def PHLEX_REGISTER_ALGORITHMS(m, config): except (KeyError, TypeError): use_lists = False - # int32 - m.transform( - collectify_int32_list if use_lists else collectify_int32, - input_family=config["input_int32"], - output_products=["arr_int32"], - ) - m.transform( - sum_list_int32 if use_lists else sum_array_int32, - input_family=["arr_int32"], - output_products=config["output_int32"], - name="sum_int32", - ) - - # uint32 - m.transform( - collectify_uint32_list if use_lists else collectify_uint32, - input_family=config["input_uint32"], - output_products=["arr_uint32"], - ) - m.transform( - sum_list_uint32 if use_lists else sum_array_uint32, - input_family=["arr_uint32"], - output_products=config["output_uint32"], - name="sum_uint32", - ) - - # int64 - m.transform( - collectify_int64_list if use_lists else collectify_int64, - input_family=config["input_int64"], - output_products=["arr_int64"], - ) - m.transform( - sum_list_int64 if use_lists else sum_array_int64, - input_family=["arr_int64"], - output_products=config["output_int64"], - name="sum_int64", - ) - - # uint64 - m.transform( - collectify_uint64_list if use_lists else collectify_uint64, - input_family=config["input_uint64"], - output_products=["arr_uint64"], - ) - m.transform( - sum_list_uint64 if use_lists else sum_array_uint64, - input_family=["arr_uint64"], - output_products=config["output_uint64"], - name="sum_uint64", - ) - - # float32 - m.transform( - collectify_float32_list if use_lists else collectify_float32, - input_family=config["input_float32"], - output_products=["arr_float32"], - ) - m.transform( - sum_list_float if use_lists else sum_array_float32, - input_family=["arr_float32"], - output_products=config["output_float32"], - ) - - # float64 - m.transform( - collectify_float64_list if use_lists else collectify_float64, - input_family=config["input_float64"], - output_products=["arr_float64"], - ) - m.transform( - sum_list_double if use_lists else sum_array_float64, - input_family=["arr_float64"], - output_products=config["output_float64"], - ) + specs = [ + ( + "int32", + collectify_int32_list, + collectify_int32, + sum_list_int32, + sum_array_int32, + "input_int32", + "output_int32", + "sum_int32", + ), + ( + "uint32", + collectify_uint32_list, + collectify_uint32, + sum_list_uint32, + sum_array_uint32, + "input_uint32", + "output_uint32", + "sum_uint32", + ), + ( + "int64", + collectify_int64_list, + collectify_int64, + sum_list_int64, + sum_array_int64, + "input_int64", + "output_int64", + "sum_int64", + ), + ( + "uint64", + collectify_uint64_list, + collectify_uint64, + sum_list_uint64, + sum_array_uint64, + "input_uint64", + "output_uint64", + "sum_uint64", + ), + ( + "float32", + collectify_float32_list, + collectify_float32, + sum_list_float, + sum_array_float32, + "input_float32", + "output_float32", + None, + ), + ( + "float64", + collectify_float64_list, + collectify_float64, + sum_list_double, + sum_array_float64, + "input_float64", + "output_float64", + None, + ), + ] + + for name, list_collect, arr_collect, list_sum, arr_sum, in_key, out_key, sum_name in specs: + arr_name = f"arr_{name}" + m.transform( + list_collect if use_lists else arr_collect, + input_family=config[in_key], + output_products=[arr_name], + ) + + sum_kwargs = { + 
"input_family": [arr_name], + "output_products": config[out_key], + } + if sum_name: + sum_kwargs["name"] = sum_name + + m.transform(list_sum if use_lists else arr_sum, **sum_kwargs) From 04d144ccf9d9d4e893a36253e807e903cf35aba7 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Thu, 15 Jan 2026 15:08:17 -0600 Subject: [PATCH 40/67] Restore array-bounds warning deactivation for GCC 15 --- CMakeLists.txt | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/CMakeLists.txt b/CMakeLists.txt index 9b85d698..9e774be7 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -75,13 +75,13 @@ add_compile_options( ) if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU") - if( - CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL "14.1" - AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS "15" - ) - # GCC 14.1 issues many false positives re. array-bounds and - # stringop-overflow - add_compile_options(-Wno-array-bounds -Wno-stringop-overflow) + if(CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL "14.1") + if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS "15") + add_compile_options(-Wno-stringop-overflow) + endif() + if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS "16") + add_compile_options(-Wno-array-bounds) + endif() endif() endif() From 8753d907122fbf06fdb03977ad8518c023db35e2 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Thu, 15 Jan 2026 15:28:47 -0600 Subject: [PATCH 41/67] Improve Python argument ordering stability --- plugins/python/src/modulewrap.cpp | 140 ++++++++++++++++++++++-------- 1 file changed, 105 insertions(+), 35 deletions(-) diff --git a/plugins/python/src/modulewrap.cpp b/plugins/python/src/modulewrap.cpp index 4b95e7fc..ebdb68ee 100644 --- a/plugins/python/src/modulewrap.cpp +++ b/plugins/python/src/modulewrap.cpp @@ -655,6 +655,108 @@ namespace { } // unnamed namespace + // Helper class to extract annotations in argument definition order. + // + // Rationale: + // The __annotations__ dictionary does not guarantee any iteration order + // relative to the function arguments (especially in older Python versions + // or for certain callable types). Iterating blindly over __annotations__ + // can yield input types in a permuted order, causing Phlex to bind + // C++ inputs to the wrong Python arguments (e.g. matching an 'int' product + // to a 'float' argument). + // + // This class attempts to retrieve the bytecode object (__code__) to access + // co_varnames, which provides the authoritative argument order. It falls + // back to dictionary iteration only if introspection fails. + // + // This logic mirrors the Python test class variant.py originally from PR #245. 
+ class AdjustAnnotations { + PyObject* m_callable; + PyObject* m_annotations; + + public: + AdjustAnnotations(PyObject* callable) : m_callable(callable), m_annotations(nullptr) + { + PyObject* name = PyUnicode_FromString("__annotations__"); + m_annotations = PyObject_GetAttr(m_callable, name); + if (!m_annotations) { + PyErr_Clear(); + // the callable may be an instance with a __call__ method + PyObject* call = PyObject_GetAttrString(m_callable, "__call__"); + if (call) { + m_annotations = PyObject_GetAttr(call, name); + Py_DECREF(call); + } + } + Py_DECREF(name); + } + + ~AdjustAnnotations() { Py_XDECREF(m_annotations); } + + void get_input_types(std::vector& types) + { + if (!m_annotations || !PyDict_Check(m_annotations)) { + return; + } + + // Try to use the code object to get the argument names in order + PyObject* code = PyObject_GetAttrString(m_callable, "__code__"); + if (!code) { + PyErr_Clear(); + PyObject* call = PyObject_GetAttrString(m_callable, "__call__"); + if (call) { + code = PyObject_GetAttrString(call, "__code__"); + Py_DECREF(call); + } + } + + bool found = false; + if (code) { + PyObject* varnames = PyObject_GetAttrString(code, "co_varnames"); + PyObject* argcount = PyObject_GetAttrString(code, "co_argcount"); + if (varnames && argcount) { + long count = PyLong_AsLong(argcount); + for (long i = 0; i < count; ++i) { + PyObject* name = PyTuple_GetItem(varnames, i); + if (name) { + PyObject* type = PyDict_GetItem(m_annotations, name); + if (type) { + types.push_back(annotation_as_text(type)); + found = true; + } + } + } + } + Py_XDECREF(varnames); + Py_XDECREF(argcount); + Py_DECREF(code); + } + + // Fallback to dictionary iteration if code object was not helpful + if (!found) { + PyObject *key, *val; + Py_ssize_t pos = 0; + while (PyDict_Next(m_annotations, &pos, &key, &val)) { + if (PyUnicode_Check(key) && PyUnicode_CompareWithASCIIString(key, "return") == 0) { + continue; + } + types.push_back(annotation_as_text(val)); + } + } + } + + void get_return_type(std::vector& types) + { + if (m_annotations && PyDict_Check(m_annotations)) { + PyObject* ret = PyDict_GetItemString(m_annotations, "return"); + if (ret) { + types.push_back(annotation_as_text(ret)); + } + } + } + }; + + #define INSERT_INPUT_CONVERTER(name, alg, inp) \ mod->ph_module->transform("py" #name "_" + inp + "_" + alg, name##_to_py, concurrency::serial) \ .input_family(product_query{product_specification::create(inp), LAYER}) \ @@ -729,41 +831,9 @@ static PyObject* parse_args(PyObject* args, // retrieve C++ (matching) types from annotations input_types.reserve(input_labels.size()); - PyObject* sann = PyUnicode_FromString("__annotations__"); - PyObject* annot = PyObject_GetAttr(callable, sann); - if (!annot) { - // the callable may be an instance with a __call__ method - PyErr_Clear(); - PyObject* callm = PyObject_GetAttrString(callable, "__call__"); - if (callm) { - annot = PyObject_GetAttr(callm, sann); - Py_DECREF(callm); - } - } - Py_DECREF(sann); - - if (annot && PyDict_Check(annot) && PyDict_Size(annot)) { - PyObject* ret = PyDict_GetItemString(annot, "return"); - if (ret) - output_types.push_back(annotation_as_text(ret)); - - // dictionary is ordered with return last if provide (note: the keys here - // could be used as input labels, instead of the ones from the configuration, - // but that is probably not practical in actual use, so they are ignored) - - // Re-implementing robust annotation extraction - PyObject *key, *val; - Py_ssize_t pos = 0; - - while (PyDict_Next(annot, &pos, &key, &val)) { - // 
Skip 'return' annotation as it is handled separately - if (PyUnicode_Check(key) && PyUnicode_CompareWithASCIIString(key, "return") == 0) { - continue; - } - input_types.push_back(annotation_as_text(val)); - } - } - Py_XDECREF(annot); + AdjustAnnotations adj(callable); + adj.get_return_type(output_types); + adj.get_input_types(input_types); // ignore None as Python's conventional "void" return, which is meaningless in C++ if (output_types.size() == 1 && output_types[0] == "None") From 913d67bcaf76b0d383a9b6487b1b67ce98c740c4 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Thu, 15 Jan 2026 21:39:09 +0000 Subject: [PATCH 42/67] Apply clang-format fixes --- plugins/python/src/modulewrap.cpp | 165 +++++++++++++++--------------- 1 file changed, 82 insertions(+), 83 deletions(-) diff --git a/plugins/python/src/modulewrap.cpp b/plugins/python/src/modulewrap.cpp index ebdb68ee..aa1fa215 100644 --- a/plugins/python/src/modulewrap.cpp +++ b/plugins/python/src/modulewrap.cpp @@ -655,107 +655,106 @@ namespace { } // unnamed namespace - // Helper class to extract annotations in argument definition order. - // - // Rationale: - // The __annotations__ dictionary does not guarantee any iteration order - // relative to the function arguments (especially in older Python versions - // or for certain callable types). Iterating blindly over __annotations__ - // can yield input types in a permuted order, causing Phlex to bind - // C++ inputs to the wrong Python arguments (e.g. matching an 'int' product - // to a 'float' argument). - // - // This class attempts to retrieve the bytecode object (__code__) to access - // co_varnames, which provides the authoritative argument order. It falls - // back to dictionary iteration only if introspection fails. - // - // This logic mirrors the Python test class variant.py originally from PR #245. - class AdjustAnnotations { - PyObject* m_callable; - PyObject* m_annotations; - - public: - AdjustAnnotations(PyObject* callable) : m_callable(callable), m_annotations(nullptr) - { - PyObject* name = PyUnicode_FromString("__annotations__"); - m_annotations = PyObject_GetAttr(m_callable, name); - if (!m_annotations) { - PyErr_Clear(); - // the callable may be an instance with a __call__ method - PyObject* call = PyObject_GetAttrString(m_callable, "__call__"); - if (call) { - m_annotations = PyObject_GetAttr(call, name); - Py_DECREF(call); - } +// Helper class to extract annotations in argument definition order. +// +// Rationale: +// The __annotations__ dictionary does not guarantee any iteration order +// relative to the function arguments (especially in older Python versions +// or for certain callable types). Iterating blindly over __annotations__ +// can yield input types in a permuted order, causing Phlex to bind +// C++ inputs to the wrong Python arguments (e.g. matching an 'int' product +// to a 'float' argument). +// +// This class attempts to retrieve the bytecode object (__code__) to access +// co_varnames, which provides the authoritative argument order. It falls +// back to dictionary iteration only if introspection fails. +// +// This logic mirrors the Python test class variant.py originally from PR #245. 
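The fallback branch of the class shown below, taken when no `__code__` attribute is available, corresponds to this Python-level sketch (illustrative only, not project code):

```python
def input_types_fallback(func):
    # Rely on __annotations__ iteration order and drop the return annotation,
    # mirroring the PyDict_Next loop that skips the "return" key.
    annotations = getattr(func, "__annotations__", {}) or {}
    return [t for name, t in annotations.items() if name != "return"]
```

Dictionary order is insertion order in CPython 3.7+, so for plain `def` functions this usually yields the declared parameter order with `return` last, but it is weaker than the `co_varnames` path, hence its use only as a fallback.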
+class AdjustAnnotations { + PyObject* m_callable; + PyObject* m_annotations; + +public: + AdjustAnnotations(PyObject* callable) : m_callable(callable), m_annotations(nullptr) + { + PyObject* name = PyUnicode_FromString("__annotations__"); + m_annotations = PyObject_GetAttr(m_callable, name); + if (!m_annotations) { + PyErr_Clear(); + // the callable may be an instance with a __call__ method + PyObject* call = PyObject_GetAttrString(m_callable, "__call__"); + if (call) { + m_annotations = PyObject_GetAttr(call, name); + Py_DECREF(call); } - Py_DECREF(name); } + Py_DECREF(name); + } - ~AdjustAnnotations() { Py_XDECREF(m_annotations); } + ~AdjustAnnotations() { Py_XDECREF(m_annotations); } - void get_input_types(std::vector& types) - { - if (!m_annotations || !PyDict_Check(m_annotations)) { - return; - } + void get_input_types(std::vector& types) + { + if (!m_annotations || !PyDict_Check(m_annotations)) { + return; + } - // Try to use the code object to get the argument names in order - PyObject* code = PyObject_GetAttrString(m_callable, "__code__"); - if (!code) { - PyErr_Clear(); - PyObject* call = PyObject_GetAttrString(m_callable, "__call__"); - if (call) { - code = PyObject_GetAttrString(call, "__code__"); - Py_DECREF(call); - } + // Try to use the code object to get the argument names in order + PyObject* code = PyObject_GetAttrString(m_callable, "__code__"); + if (!code) { + PyErr_Clear(); + PyObject* call = PyObject_GetAttrString(m_callable, "__call__"); + if (call) { + code = PyObject_GetAttrString(call, "__code__"); + Py_DECREF(call); } + } - bool found = false; - if (code) { - PyObject* varnames = PyObject_GetAttrString(code, "co_varnames"); - PyObject* argcount = PyObject_GetAttrString(code, "co_argcount"); - if (varnames && argcount) { - long count = PyLong_AsLong(argcount); - for (long i = 0; i < count; ++i) { - PyObject* name = PyTuple_GetItem(varnames, i); - if (name) { - PyObject* type = PyDict_GetItem(m_annotations, name); - if (type) { - types.push_back(annotation_as_text(type)); - found = true; - } + bool found = false; + if (code) { + PyObject* varnames = PyObject_GetAttrString(code, "co_varnames"); + PyObject* argcount = PyObject_GetAttrString(code, "co_argcount"); + if (varnames && argcount) { + long count = PyLong_AsLong(argcount); + for (long i = 0; i < count; ++i) { + PyObject* name = PyTuple_GetItem(varnames, i); + if (name) { + PyObject* type = PyDict_GetItem(m_annotations, name); + if (type) { + types.push_back(annotation_as_text(type)); + found = true; } } } - Py_XDECREF(varnames); - Py_XDECREF(argcount); - Py_DECREF(code); } + Py_XDECREF(varnames); + Py_XDECREF(argcount); + Py_DECREF(code); + } - // Fallback to dictionary iteration if code object was not helpful - if (!found) { - PyObject *key, *val; - Py_ssize_t pos = 0; - while (PyDict_Next(m_annotations, &pos, &key, &val)) { - if (PyUnicode_Check(key) && PyUnicode_CompareWithASCIIString(key, "return") == 0) { - continue; - } - types.push_back(annotation_as_text(val)); + // Fallback to dictionary iteration if code object was not helpful + if (!found) { + PyObject *key, *val; + Py_ssize_t pos = 0; + while (PyDict_Next(m_annotations, &pos, &key, &val)) { + if (PyUnicode_Check(key) && PyUnicode_CompareWithASCIIString(key, "return") == 0) { + continue; } + types.push_back(annotation_as_text(val)); } } + } - void get_return_type(std::vector& types) - { - if (m_annotations && PyDict_Check(m_annotations)) { - PyObject* ret = PyDict_GetItemString(m_annotations, "return"); - if (ret) { - 
types.push_back(annotation_as_text(ret)); - } + void get_return_type(std::vector& types) + { + if (m_annotations && PyDict_Check(m_annotations)) { + PyObject* ret = PyDict_GetItemString(m_annotations, "return"); + if (ret) { + types.push_back(annotation_as_text(ret)); } } - }; - + } +}; #define INSERT_INPUT_CONVERTER(name, alg, inp) \ mod->ph_module->transform("py" #name "_" + inp + "_" + alg, name##_to_py, concurrency::serial) \ From bb7818cf5c604ce4631e8a978a5722a18200ae6e Mon Sep 17 00:00:00 2001 From: Kyle Knoepfel Date: Thu, 15 Jan 2026 15:40:54 -0600 Subject: [PATCH 43/67] Make sure types agree with what's in vectypes.py (#10) --- test/python/source.cpp | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/test/python/source.cpp b/test/python/source.cpp index 5c88d3da..a08cb01c 100644 --- a/test/python/source.cpp +++ b/test/python/source.cpp @@ -41,19 +41,20 @@ PHLEX_REGISTER_PROVIDERS(s) [](data_cell_index const& id) -> unsigned int { return 1 - (unsigned int)(id.number() % 2); }) .output_product("u2"_in("job")); - s.provide("provide_l1", - [](data_cell_index const& id) -> int64_t { return (int64_t)(id.number() % 2); }) + s.provide("provide_l1", [](data_cell_index const& id) -> long { return (long)(id.number() % 2); }) .output_product("l1"_in("job")); s.provide("provide_l2", - [](data_cell_index const& id) -> int64_t { return 1 - (int64_t)(id.number() % 2); }) + [](data_cell_index const& id) -> long { return 1 - (long)(id.number() % 2); }) .output_product("l2"_in("job")); - s.provide("provide_ul1", - [](data_cell_index const& id) -> uint64_t { return (uint64_t)(id.number() % 101); }) - .output_product("ul1"_in("job")); s.provide( - "provide_ul2", - [](data_cell_index const& id) -> uint64_t { return 100 - (uint64_t)(id.number() % 101); }) + "provide_ul1", + [](data_cell_index const& id) -> unsigned long { return (unsigned long)(id.number() % 101); }) + .output_product("ul1"_in("job")); + s.provide("provide_ul2", + [](data_cell_index const& id) -> unsigned long { + return 100 - (unsigned long)(id.number() % 101); + }) .output_product("ul2"_in("job")); s.provide("provide_b1", [](data_cell_index const& id) -> bool { return (id.number() % 2) == 0; }) From 9a6da2b5bfc8d7f65247716f7432bc261f21ed29 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Thu, 15 Jan 2026 21:41:32 +0000 Subject: [PATCH 44/67] Apply cmake-format fixes --- CMakeLists.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/CMakeLists.txt b/CMakeLists.txt index 9e774be7..a5547316 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -76,7 +76,7 @@ add_compile_options( if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU") if(CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL "14.1") - if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS "15") + if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS "15") add_compile_options(-Wno-stringop-overflow) endif() if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS "16") From 0db5964077be5fe5551f3624a3eb84645a1369f3 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Thu, 15 Jan 2026 15:47:27 -0600 Subject: [PATCH 45/67] Revert unwanted change per review https://github.com/Framework-R-D/phlex/pull/213/changes/BASE..267b3636556c5fd334659bac22d6b1fa35bafa89#r2695067648 --- test/python/CMakeLists.txt | 9 ++------- 1 file changed, 2 insertions(+), 7 deletions(-) diff --git a/test/python/CMakeLists.txt b/test/python/CMakeLists.txt index 80115d58..d7690ab8 100644 --- a/test/python/CMakeLists.txt +++ 
b/test/python/CMakeLists.txt @@ -9,14 +9,9 @@ function(check_python_module_version MODULE_NAME MIN_VERSION OUT_VAR) "import sys try: import ${MODULE_NAME} + from packaging.version import parse as parse_version installed_version = getattr(${MODULE_NAME}, '__version__', None) - if not installed_version: - sys.exit(2) - - def parse(v): - return tuple(map(int, v.split('.')[:3])) - - if parse(installed_version) >= parse('${MIN_VERSION}'): + if parse_version(installed_version) >= parse_version('${MIN_VERSION}'): sys.exit(0) else: sys.exit(2) # Version too low From d337661e1527aa564d8f1b9db48388d6118f94a4 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Thu, 15 Jan 2026 17:14:57 -0600 Subject: [PATCH 46/67] Have CMake report module check results --- test/python/CMakeLists.txt | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/test/python/CMakeLists.txt b/test/python/CMakeLists.txt index d7690ab8..5df84c14 100644 --- a/test/python/CMakeLists.txt +++ b/test/python/CMakeLists.txt @@ -15,12 +15,17 @@ try: sys.exit(0) else: sys.exit(2) # Version too low -except ImportError: +except ImportError as e: + print(f'ImportError: {e}') sys.exit(1) -except Exception: +except Exception as e: + print(f'Exception: {e}') sys.exit(1)" RESULT_VARIABLE _module_check_result + OUTPUT_VARIABLE _module_check_out + ERROR_VARIABLE _module_check_err ) + message(STATUS "Check ${MODULE_NAME}: Res=${_module_check_result} Out=${_module_check_out} Err=${_module_check_err}") if(_module_check_result EQUAL 0) set(${OUT_VAR} TRUE PARENT_SCOPE) From 0f45d93936dbfa0d387975ad3fdb503c211581c4 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Thu, 15 Jan 2026 17:29:47 -0600 Subject: [PATCH 47/67] Python AdjustAnnotations class improvements - Improve correspondence with eponymous C++ class. - Make Python class available outside a test context. --- plugins/python/CMakeLists.txt | 5 +++++ .../python/python/phlex/__init__.py | 15 ++++++++------- test/python/CMakeLists.txt | 3 +++ test/python/adder.py | 2 +- 4 files changed, 17 insertions(+), 8 deletions(-) rename test/python/variant.py => plugins/python/python/phlex/__init__.py (85%) diff --git a/plugins/python/CMakeLists.txt b/plugins/python/CMakeLists.txt index 2554a6dc..047869d0 100644 --- a/plugins/python/CMakeLists.txt +++ b/plugins/python/CMakeLists.txt @@ -21,3 +21,8 @@ target_link_libraries(pymodule PRIVATE phlex::module Python::Python Python::NumP target_compile_definitions(pymodule PRIVATE NPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION) install(TARGETS pymodule LIBRARY DESTINATION lib) + +install( + DIRECTORY python/phlex + DESTINATION lib/python${Python_VERSION_MAJOR}.${Python_VERSION_MINOR}/site-packages +) diff --git a/test/python/variant.py b/plugins/python/python/phlex/__init__.py similarity index 85% rename from test/python/variant.py rename to plugins/python/python/phlex/__init__.py index 7b76dde5..6dca6415 100644 --- a/test/python/variant.py +++ b/plugins/python/python/phlex/__init__.py @@ -1,8 +1,6 @@ -"""Annotation helper for C++ typing variants. +"""Phlex Python Utilities. -Python algorithms are generic, like C++ templates, but the Phlex registration -process requires a single unique signature. These helpers generate annotated -functions for registration with the proper C++ types. +Call helpers and type annotation tools for the Phlex framework. """ import copy @@ -63,6 +61,12 @@ def __init__( self.__name__ = name self._allow_call = allow_call + # Expose __code__ from the underlying callable if available, to aid + # introspection (e.g. 
by C++ modulewrap). + self.__code__ = getattr(self.phlex_callable, "__code__", None) + self.__defaults__ = getattr(self.phlex_callable, "__defaults__", None) + self.__kwdefaults__ = getattr(self.phlex_callable, "__kwdefaults__", None) + def __call__(self, *args, **kwargs): """Raises an error if called directly. @@ -77,6 +81,3 @@ def __call__(self, *args, **kwargs): f"The framework should extract phlex_callable instead." ) return self.phlex_callable(*args, **kwargs) # type: ignore - - -Variant = AdjustAnnotations diff --git a/test/python/CMakeLists.txt b/test/python/CMakeLists.txt index 5df84c14..541bb544 100644 --- a/test/python/CMakeLists.txt +++ b/test/python/CMakeLists.txt @@ -194,6 +194,9 @@ set_tests_properties( list(APPEND ACTIVE_PY_CPHLEX_TESTS py:failure) set(TEST_PYTHONPATH ${CMAKE_CURRENT_SOURCE_DIR}) +# Add the python plugin source directory to PYTHONPATH so tests can use phlex package +set(TEST_PYTHONPATH ${TEST_PYTHONPATH}:${PROJECT_SOURCE_DIR}/plugins/python/python) + # Always add site-packages to PYTHONPATH for tests, as embedded python might # not find them especially in spack environments where they are in # non-standard locations diff --git a/test/python/adder.py b/test/python/adder.py index 61b391dd..bf496d55 100644 --- a/test/python/adder.py +++ b/test/python/adder.py @@ -6,7 +6,7 @@ from typing import Protocol, TypeVar -from variant import AdjustAnnotations +from phlex import AdjustAnnotations class AddableProtocol[T](Protocol): From 9bccd379c7442671b029d0b1eef5ce5eaeecdcb4 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Thu, 15 Jan 2026 23:41:43 +0000 Subject: [PATCH 48/67] Apply cmake-format fixes --- test/python/CMakeLists.txt | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/test/python/CMakeLists.txt b/test/python/CMakeLists.txt index 541bb544..d8dc1567 100644 --- a/test/python/CMakeLists.txt +++ b/test/python/CMakeLists.txt @@ -25,7 +25,10 @@ except Exception as e: OUTPUT_VARIABLE _module_check_out ERROR_VARIABLE _module_check_err ) - message(STATUS "Check ${MODULE_NAME}: Res=${_module_check_result} Out=${_module_check_out} Err=${_module_check_err}") + message( + STATUS + "Check ${MODULE_NAME}: Res=${_module_check_result} Out=${_module_check_out} Err=${_module_check_err}" + ) if(_module_check_result EQUAL 0) set(${OUT_VAR} TRUE PARENT_SCOPE) From b08db20c9b3996a2c61b6399e5e58ab9df0aa277 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Fri, 16 Jan 2026 01:14:18 +0000 Subject: [PATCH 49/67] Include Python files in coverage change detection --- .github/workflows/coverage.yaml | 1 + 1 file changed, 1 insertion(+) diff --git a/.github/workflows/coverage.yaml b/.github/workflows/coverage.yaml index 41b85ab5..647d82dd 100644 --- a/.github/workflows/coverage.yaml +++ b/.github/workflows/coverage.yaml @@ -62,6 +62,7 @@ jobs: file-type: | cpp cmake + python - name: Report detection outcome run: | From 3ee4dd099ef1701b814cb37627a6284a62824be5 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Thu, 15 Jan 2026 19:16:10 -0600 Subject: [PATCH 50/67] Make sure non-test Python code is tested --- test/python/CMakeLists.txt | 4 ++ test/python/unit_test_annotations.py | 58 ++++++++++++++++++++++++++++ 2 files changed, 62 insertions(+) create mode 100644 test/python/unit_test_annotations.py diff --git a/test/python/CMakeLists.txt b/test/python/CMakeLists.txt index d8dc1567..072a3fb6 100644 --- a/test/python/CMakeLists.txt +++ b/test/python/CMakeLists.txt @@ -237,3 +237,7 @@ set_tests_properties( 
ENVIRONMENT "${PYTHON_TEST_ENVIRONMENT};VIRTUAL_ENV=${PY_VIRTUAL_ENV_DIR}" ENVIRONMENT_MODIFICATION "PATH=path_list_prepend:${PY_VIRTUAL_ENV_DIR}/bin" ) + +# Unit tests for the phlex python package +add_test(NAME py:unit_annotations COMMAND ${Python_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/unit_test_annotations.py) +set_tests_properties(py:unit_annotations PROPERTIES ENVIRONMENT "${PYTHON_TEST_ENVIRONMENT}") diff --git a/test/python/unit_test_annotations.py b/test/python/unit_test_annotations.py new file mode 100644 index 00000000..63736ddd --- /dev/null +++ b/test/python/unit_test_annotations.py @@ -0,0 +1,58 @@ +#!/usr/bin/env python3 +import unittest +import copy +from phlex import AdjustAnnotations + +def example_func(a, b=1): + return a + b + +class TestAdjustAnnotations(unittest.TestCase): + def test_initialization(self): + ann = {"a": int, "b": int, "return": int} + wrapper = AdjustAnnotations(example_func, ann, "example_wrapper") + + self.assertEqual(wrapper.__name__, "example_wrapper") + self.assertEqual(wrapper.__annotations__, ann) + self.assertEqual(wrapper.phlex_callable, example_func) + # Check introspection attributes are exposed + self.assertEqual(wrapper.__code__, example_func.__code__) + self.assertEqual(wrapper.__defaults__, example_func.__defaults__) + + def test_call_by_default_raises(self): + wrapper = AdjustAnnotations(example_func, {}, "no_call") + with self.assertRaises(AssertionError) as cm: + wrapper(1) + self.assertIn("was called directly", str(cm.exception)) + + def test_allow_call(self): + wrapper = AdjustAnnotations(example_func, {}, "yes_call", allow_call=True) + self.assertEqual(wrapper(10, 20), 30) + + def test_clone_shallow(self): + # For a function, copy.copy just returns the function itself usually, + # but let's test the flag logic in AdjustAnnotations + wrapper = AdjustAnnotations(example_func, {}, "clone_shallow", clone=True) + # function copy is same object + self.assertEqual(wrapper.phlex_callable, example_func) + + # Test valid copy logic with a mutable callable + class CallableObj: + def __call__(self): pass + + obj = CallableObj() + wrapper_obj = AdjustAnnotations(obj, {}, "obj_clone", clone=True) + self.assertNotEqual(id(wrapper_obj.phlex_callable), id(obj)) # copy was made? 
+ # copy.copy of a custom object usually creates a new instance if generic + + def test_clone_deep(self): + class Container: + def __init__(self): self.data = [1] + def __call__(self): return self.data[0] + + c = Container() + wrapper = AdjustAnnotations(c, {}, "deep_clone", clone="deep") + self.assertNotEqual(id(wrapper.phlex_callable), id(c)) + self.assertNotEqual(id(wrapper.phlex_callable.data), id(c.data)) + +if __name__ == "__main__": + unittest.main() From 31dc61fe6c9d95c72e1ca7757fa775054845e1bf Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Fri, 16 Jan 2026 01:17:13 +0000 Subject: [PATCH 51/67] Apply Python linting fixes --- test/python/test_callbacks.py | 1 - test/python/unit_test_annotations.py | 27 +++++++++++++++++---------- 2 files changed, 17 insertions(+), 11 deletions(-) diff --git a/test/python/test_callbacks.py b/test/python/test_callbacks.py index 50a5a086..b43e4193 100644 --- a/test/python/test_callbacks.py +++ b/test/python/test_callbacks.py @@ -25,7 +25,6 @@ def bad_long(a: int) -> "long": # type: ignore # noqa: F821 return 1.5 # type: ignore - # Invalid uint return (negative) def bad_uint(a: int) -> "unsigned int": # type: ignore # noqa: F722 """Return a negative value for unsigned int.""" diff --git a/test/python/unit_test_annotations.py b/test/python/unit_test_annotations.py index 63736ddd..3e7d0829 100644 --- a/test/python/unit_test_annotations.py +++ b/test/python/unit_test_annotations.py @@ -1,16 +1,18 @@ #!/usr/bin/env python3 import unittest -import copy + from phlex import AdjustAnnotations + def example_func(a, b=1): return a + b + class TestAdjustAnnotations(unittest.TestCase): def test_initialization(self): ann = {"a": int, "b": int, "return": int} wrapper = AdjustAnnotations(example_func, ann, "example_wrapper") - + self.assertEqual(wrapper.__name__, "example_wrapper") self.assertEqual(wrapper.__annotations__, ann) self.assertEqual(wrapper.phlex_callable, example_func) @@ -33,26 +35,31 @@ def test_clone_shallow(self): # but let's test the flag logic in AdjustAnnotations wrapper = AdjustAnnotations(example_func, {}, "clone_shallow", clone=True) # function copy is same object - self.assertEqual(wrapper.phlex_callable, example_func) - + self.assertEqual(wrapper.phlex_callable, example_func) + # Test valid copy logic with a mutable callable class CallableObj: - def __call__(self): pass - + def __call__(self): + pass + obj = CallableObj() wrapper_obj = AdjustAnnotations(obj, {}, "obj_clone", clone=True) - self.assertNotEqual(id(wrapper_obj.phlex_callable), id(obj)) # copy was made? + self.assertNotEqual(id(wrapper_obj.phlex_callable), id(obj)) # copy was made? 
# copy.copy of a custom object usually creates a new instance if generic def test_clone_deep(self): class Container: - def __init__(self): self.data = [1] - def __call__(self): return self.data[0] - + def __init__(self): + self.data = [1] + + def __call__(self): + return self.data[0] + c = Container() wrapper = AdjustAnnotations(c, {}, "deep_clone", clone="deep") self.assertNotEqual(id(wrapper.phlex_callable), id(c)) self.assertNotEqual(id(wrapper.phlex_callable.data), id(c.data)) + if __name__ == "__main__": unittest.main() From 0a77fb8dfdaaa47062e4078b445e6ec94596e311 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Fri, 16 Jan 2026 01:17:23 +0000 Subject: [PATCH 52/67] Apply cmake-format fixes --- test/python/CMakeLists.txt | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/test/python/CMakeLists.txt b/test/python/CMakeLists.txt index 072a3fb6..a8bdcf56 100644 --- a/test/python/CMakeLists.txt +++ b/test/python/CMakeLists.txt @@ -239,5 +239,8 @@ set_tests_properties( ) # Unit tests for the phlex python package -add_test(NAME py:unit_annotations COMMAND ${Python_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/unit_test_annotations.py) +add_test( + NAME py:unit_annotations + COMMAND ${Python_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/unit_test_annotations.py +) set_tests_properties(py:unit_annotations PROPERTIES ENVIRONMENT "${PYTHON_TEST_ENVIRONMENT}") From 3699c26a74e31465d9b1323fddf3235e638d5f07 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Thu, 15 Jan 2026 21:43:41 -0600 Subject: [PATCH 53/67] Address `ruff` issues --- test/python/unit_test_annotations.py | 20 +++++++++++++------- 1 file changed, 13 insertions(+), 7 deletions(-) diff --git a/test/python/unit_test_annotations.py b/test/python/unit_test_annotations.py index 3e7d0829..91d09a00 100644 --- a/test/python/unit_test_annotations.py +++ b/test/python/unit_test_annotations.py @@ -1,15 +1,21 @@ #!/usr/bin/env python3 +"""Unit tests for the phlex.AdjustAnnotations class.""" + import unittest from phlex import AdjustAnnotations def example_func(a, b=1): + """Example function for testing.""" return a + b class TestAdjustAnnotations(unittest.TestCase): + """Tests for AdjustAnnotations wrapper.""" + def test_initialization(self): + """Test proper initialization and attribute exposure.""" ann = {"a": int, "b": int, "return": int} wrapper = AdjustAnnotations(example_func, ann, "example_wrapper") @@ -21,16 +27,19 @@ def test_initialization(self): self.assertEqual(wrapper.__defaults__, example_func.__defaults__) def test_call_by_default_raises(self): + """Test that calling the wrapper raises AssertionError by default.""" wrapper = AdjustAnnotations(example_func, {}, "no_call") with self.assertRaises(AssertionError) as cm: wrapper(1) self.assertIn("was called directly", str(cm.exception)) def test_allow_call(self): + """Test that calling is allowed when configured.""" wrapper = AdjustAnnotations(example_func, {}, "yes_call", allow_call=True) self.assertEqual(wrapper(10, 20), 30) def test_clone_shallow(self): + """Test shallow cloning behavior.""" # For a function, copy.copy just returns the function itself usually, # but let's test the flag logic in AdjustAnnotations wrapper = AdjustAnnotations(example_func, {}, "clone_shallow", clone=True) @@ -39,8 +48,7 @@ def test_clone_shallow(self): # Test valid copy logic with a mutable callable class CallableObj: - def __call__(self): - pass + def __call__(self): pass obj = CallableObj() wrapper_obj = AdjustAnnotations(obj, 
{}, "obj_clone", clone=True) @@ -48,12 +56,10 @@ def __call__(self): # copy.copy of a custom object usually creates a new instance if generic def test_clone_deep(self): + """Test deep cloning behavior.""" class Container: - def __init__(self): - self.data = [1] - - def __call__(self): - return self.data[0] + def __init__(self): self.data = [1] + def __call__(self): return self.data[0] c = Container() wrapper = AdjustAnnotations(c, {}, "deep_clone", clone="deep") From b5e868a4864a42c3944d2c8b74d2968205b1c685 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Thu, 15 Jan 2026 21:44:03 -0600 Subject: [PATCH 54/67] Resolve issues with Python testing and coverage --- test/python/CMakeLists.txt | 15 +++++++++++++-- 1 file changed, 13 insertions(+), 2 deletions(-) diff --git a/test/python/CMakeLists.txt b/test/python/CMakeLists.txt index a8bdcf56..0565afdf 100644 --- a/test/python/CMakeLists.txt +++ b/test/python/CMakeLists.txt @@ -63,6 +63,8 @@ if(HAS_CPPYY) set(PYTHON_TEST_PHLEX_INSTALL ${CMAKE_SOURCE_DIR}) endif() + set(PYTHON_TEST_FILES test_phlex.py unit_test_annotations.py) + # Determine pytest command based on coverage support if(HAS_PYTEST_COV AND ENABLE_COVERAGE) set( @@ -74,11 +76,11 @@ if(HAS_CPPYY) --cov-report=term-missing --cov-report=xml:${CMAKE_BINARY_DIR}/coverage-python.xml --cov-report=html:${CMAKE_BINARY_DIR}/coverage-python-html - test_phlex.py + ${PYTHON_TEST_FILES} ) message(STATUS "Python tests will run with coverage reporting (pytest-cov)") else() - set(PYTEST_COMMAND ${PYTHON_TEST_EXECUTABLE} -m pytest test_phlex.py) + set(PYTEST_COMMAND ${PYTHON_TEST_EXECUTABLE} -m pytest ${PYTHON_TEST_FILES}) if(ENABLE_COVERAGE AND NOT HAS_PYTEST_COV) message(WARNING "ENABLE_COVERAGE is ON but pytest-cov not found; Python coverage disabled") endif() @@ -88,6 +90,15 @@ if(HAS_CPPYY) add_test(NAME py:phlex COMMAND ${PYTEST_COMMAND} WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}) set_property(TEST py:phlex PROPERTY ENVIRONMENT "PHLEX_INSTALL=${PYTHON_TEST_PHLEX_INSTALL}") + + if(HAS_PYTEST_COV) + add_custom_target( + coverage-python + COMMAND ${PYTEST_COMMAND} + WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR} + COMMENT "Running Python coverage report" + ) + endif() endif() set(ACTIVE_PY_CPHLEX_TESTS "") From 76cffeb62bbca8eab3d08212c5396cd31ed2f52b Mon Sep 17 00:00:00 2001 From: Chris Green Date: Thu, 15 Jan 2026 22:03:08 -0600 Subject: [PATCH 55/67] Enable FORM by default in presets --- CMakePresets.json | 1 + 1 file changed, 1 insertion(+) diff --git a/CMakePresets.json b/CMakePresets.json index 79b4a20a..72e580ac 100644 --- a/CMakePresets.json +++ b/CMakePresets.json @@ -4,6 +4,7 @@ "name": "default", "hidden": false, "cacheVariables": { + "PHLEX_USE_FORM": "ON", "CMAKE_EXPORT_COMPILE_COMMANDS": "YES", "CMAKE_CXX_STANDARD": "20", "CMAKE_CXX_STANDARD_REQUIRED": "YES", From ee6d4ee542ba61c09d8cb811a9ff3f03f3ffdf35 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Thu, 15 Jan 2026 22:31:13 -0600 Subject: [PATCH 56/67] Temporarily restore packaging workaround pending reconciliation --- test/python/CMakeLists.txt | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/test/python/CMakeLists.txt b/test/python/CMakeLists.txt index 0565afdf..aded0ae9 100644 --- a/test/python/CMakeLists.txt +++ b/test/python/CMakeLists.txt @@ -9,9 +9,14 @@ function(check_python_module_version MODULE_NAME MIN_VERSION OUT_VAR) "import sys try: import ${MODULE_NAME} - from packaging.version import parse as parse_version installed_version = getattr(${MODULE_NAME}, '__version__', None) - if 
parse_version(installed_version) >= parse_version('${MIN_VERSION}'): + if not installed_version: + sys.exit(2) + + def parse(v): + return tuple(map(int, v.split('.')[:3])) + + if parse(installed_version) >= parse('${MIN_VERSION}'): sys.exit(0) else: sys.exit(2) # Version too low From 574cf0a88442407c5bfc8320d50a88661c9d3a7b Mon Sep 17 00:00:00 2001 From: "google-labs-jules[bot]" <161369871+google-labs-jules[bot]@users.noreply.github.com> Date: Sat, 31 Jan 2026 00:03:57 +0000 Subject: [PATCH 57/67] Rename AdjustAnnotations to Variant This commit renames the `AdjustAnnotations` class to `Variant` in both Python and C++ source files, as well as in tests and build configurations. All references to the old name have been removed, and related files have been renamed for consistency. - Renamed `AdjustAnnotations` to `Variant` in `plugins/python/python/phlex/__init__.py` - Renamed C++ class `AdjustAnnotations` to `Variant` in `plugins/python/src/modulewrap.cpp` - Updated `test/python/adder.py` to use `Variant` - Renamed `test/python/unit_test_annotations.py` to `test/python/unit_test_variant.py` - Renamed `test/python/pymismatch_annotations.jsonnet` to `test/python/pymismatch_variant.jsonnet` - Updated `test/python/CMakeLists.txt` to reflect the name and file changes. Co-authored-by: greenc-FNAL <2372949+greenc-FNAL@users.noreply.github.com> --- plugins/python/python/phlex/__init__.py | 8 +++---- plugins/python/src/modulewrap.cpp | 8 +++---- test/python/CMakeLists.txt | 16 +++++++------- test/python/adder.py | 4 ++-- ...ons.jsonnet => pymismatch_variant.jsonnet} | 0 ...st_annotations.py => unit_test_variant.py} | 22 +++++++++---------- 6 files changed, 29 insertions(+), 29 deletions(-) rename test/python/{pymismatch_annotations.jsonnet => pymismatch_variant.jsonnet} (100%) rename test/python/{unit_test_annotations.py => unit_test_variant.py} (74%) diff --git a/plugins/python/python/phlex/__init__.py b/plugins/python/python/phlex/__init__.py index 6dca6415..bb18b980 100644 --- a/plugins/python/python/phlex/__init__.py +++ b/plugins/python/python/phlex/__init__.py @@ -7,7 +7,7 @@ from typing import Any, Callable -class AdjustAnnotations: +class Variant: """Wrapper to associate custom annotations with a callable. This class wraps a callable and provides custom ``__annotations__`` and @@ -30,7 +30,7 @@ class AdjustAnnotations: >>> def add(i: Number, j: Number) -> Number: ... return i + j ... - >>> int_adder = AdjustAnnotations(add, {"i": int, "j": int, "return": int}, "iadd") + >>> int_adder = Variant(add, {"i": int, "j": int, "return": int}, "iadd") """ def __init__( @@ -70,14 +70,14 @@ def __init__( def __call__(self, *args, **kwargs): """Raises an error if called directly. - AdjustAnnotations instances should not be called directly. The framework should + Variant instances should not be called directly. The framework should extract ``phlex_callable`` instead and call that. Raises: AssertionError: To indicate incorrect usage, unless overridden. """ assert self._allow_call, ( - f"AdjustAnnotations '{self.__name__}' was called directly. " + f"Variant '{self.__name__}' was called directly. " f"The framework should extract phlex_callable instead." ) return self.phlex_callable(*args, **kwargs) # type: ignore diff --git a/plugins/python/src/modulewrap.cpp b/plugins/python/src/modulewrap.cpp index aa1fa215..4d14de5f 100644 --- a/plugins/python/src/modulewrap.cpp +++ b/plugins/python/src/modulewrap.cpp @@ -670,12 +670,12 @@ namespace { // back to dictionary iteration only if introspection fails. 
// // This logic mirrors the Python test class variant.py originally from PR #245. -class AdjustAnnotations { +class Variant { PyObject* m_callable; PyObject* m_annotations; public: - AdjustAnnotations(PyObject* callable) : m_callable(callable), m_annotations(nullptr) + Variant(PyObject* callable) : m_callable(callable), m_annotations(nullptr) { PyObject* name = PyUnicode_FromString("__annotations__"); m_annotations = PyObject_GetAttr(m_callable, name); @@ -691,7 +691,7 @@ class AdjustAnnotations { Py_DECREF(name); } - ~AdjustAnnotations() { Py_XDECREF(m_annotations); } + ~Variant() { Py_XDECREF(m_annotations); } void get_input_types(std::vector& types) { @@ -830,7 +830,7 @@ static PyObject* parse_args(PyObject* args, // retrieve C++ (matching) types from annotations input_types.reserve(input_labels.size()); - AdjustAnnotations adj(callable); + Variant adj(callable); adj.get_return_type(output_types); adj.get_input_types(input_types); diff --git a/test/python/CMakeLists.txt b/test/python/CMakeLists.txt index aded0ae9..694fdd39 100644 --- a/test/python/CMakeLists.txt +++ b/test/python/CMakeLists.txt @@ -68,7 +68,7 @@ if(HAS_CPPYY) set(PYTHON_TEST_PHLEX_INSTALL ${CMAKE_SOURCE_DIR}) endif() - set(PYTHON_TEST_FILES test_phlex.py unit_test_annotations.py) + set(PYTHON_TEST_FILES test_phlex.py unit_test_variant.py) # Determine pytest command based on coverage support if(HAS_PYTEST_COV AND ENABLE_COVERAGE) @@ -156,15 +156,15 @@ if(HAS_NUMPY) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:baduint) add_test( - NAME py:mismatch_ann - COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pymismatch_annotations.jsonnet + NAME py:mismatch_variant + COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pymismatch_variant.jsonnet ) set_tests_properties( - py:mismatch_ann + py:mismatch_variant PROPERTIES PASS_REGULAR_EXPRESSION "number of inputs .* does not match number of annotation types" ) - list(APPEND ACTIVE_PY_CPHLEX_TESTS py:mismatch_ann) + list(APPEND ACTIVE_PY_CPHLEX_TESTS py:mismatch_variant) add_test(NAME py:veclists COMMAND phlex -c ${CMAKE_CURRENT_SOURCE_DIR}/pyveclists.jsonnet) list(APPEND ACTIVE_PY_CPHLEX_TESTS py:veclists) @@ -256,7 +256,7 @@ set_tests_properties( # Unit tests for the phlex python package add_test( - NAME py:unit_annotations - COMMAND ${Python_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/unit_test_annotations.py + NAME py:unit_variant + COMMAND ${Python_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/unit_test_variant.py ) -set_tests_properties(py:unit_annotations PROPERTIES ENVIRONMENT "${PYTHON_TEST_ENVIRONMENT}") +set_tests_properties(py:unit_variant PROPERTIES ENVIRONMENT "${PYTHON_TEST_ENVIRONMENT}") diff --git a/test/python/adder.py b/test/python/adder.py index bf496d55..9508636d 100644 --- a/test/python/adder.py +++ b/test/python/adder.py @@ -6,7 +6,7 @@ from typing import Protocol, TypeVar -from phlex import AdjustAnnotations +from phlex import Variant class AddableProtocol[T](Protocol): @@ -54,5 +54,5 @@ def PHLEX_REGISTER_ALGORITHMS(m, config): Returns: None """ - int_adder = AdjustAnnotations(add, {"i": int, "j": int, "return": int}, "iadd") + int_adder = Variant(add, {"i": int, "j": int, "return": int}, "iadd") m.transform(int_adder, input_family=config["input"], output_products=config["output"]) diff --git a/test/python/pymismatch_annotations.jsonnet b/test/python/pymismatch_variant.jsonnet similarity index 100% rename from test/python/pymismatch_annotations.jsonnet rename to test/python/pymismatch_variant.jsonnet diff --git a/test/python/unit_test_annotations.py b/test/python/unit_test_variant.py 
similarity index 74% rename from test/python/unit_test_annotations.py rename to test/python/unit_test_variant.py index 91d09a00..8c8c634e 100644 --- a/test/python/unit_test_annotations.py +++ b/test/python/unit_test_variant.py @@ -1,9 +1,9 @@ #!/usr/bin/env python3 -"""Unit tests for the phlex.AdjustAnnotations class.""" +"""Unit tests for the phlex.Variant class.""" import unittest -from phlex import AdjustAnnotations +from phlex import Variant def example_func(a, b=1): @@ -11,13 +11,13 @@ def example_func(a, b=1): return a + b -class TestAdjustAnnotations(unittest.TestCase): - """Tests for AdjustAnnotations wrapper.""" +class TestVariant(unittest.TestCase): + """Tests for Variant wrapper.""" def test_initialization(self): """Test proper initialization and attribute exposure.""" ann = {"a": int, "b": int, "return": int} - wrapper = AdjustAnnotations(example_func, ann, "example_wrapper") + wrapper = Variant(example_func, ann, "example_wrapper") self.assertEqual(wrapper.__name__, "example_wrapper") self.assertEqual(wrapper.__annotations__, ann) @@ -28,21 +28,21 @@ def test_initialization(self): def test_call_by_default_raises(self): """Test that calling the wrapper raises AssertionError by default.""" - wrapper = AdjustAnnotations(example_func, {}, "no_call") + wrapper = Variant(example_func, {}, "no_call") with self.assertRaises(AssertionError) as cm: wrapper(1) self.assertIn("was called directly", str(cm.exception)) def test_allow_call(self): """Test that calling is allowed when configured.""" - wrapper = AdjustAnnotations(example_func, {}, "yes_call", allow_call=True) + wrapper = Variant(example_func, {}, "yes_call", allow_call=True) self.assertEqual(wrapper(10, 20), 30) def test_clone_shallow(self): """Test shallow cloning behavior.""" # For a function, copy.copy just returns the function itself usually, - # but let's test the flag logic in AdjustAnnotations - wrapper = AdjustAnnotations(example_func, {}, "clone_shallow", clone=True) + # but let's test the flag logic in Variant + wrapper = Variant(example_func, {}, "clone_shallow", clone=True) # function copy is same object self.assertEqual(wrapper.phlex_callable, example_func) @@ -51,7 +51,7 @@ class CallableObj: def __call__(self): pass obj = CallableObj() - wrapper_obj = AdjustAnnotations(obj, {}, "obj_clone", clone=True) + wrapper_obj = Variant(obj, {}, "obj_clone", clone=True) self.assertNotEqual(id(wrapper_obj.phlex_callable), id(obj)) # copy was made? # copy.copy of a custom object usually creates a new instance if generic @@ -62,7 +62,7 @@ def __init__(self): self.data = [1] def __call__(self): return self.data[0] c = Container() - wrapper = AdjustAnnotations(c, {}, "deep_clone", clone="deep") + wrapper = Variant(c, {}, "deep_clone", clone="deep") self.assertNotEqual(id(wrapper.phlex_callable), id(c)) self.assertNotEqual(id(wrapper.phlex_callable.data), id(c.data)) From e513cc5dd5cdf11afc9ab8a0236ee0cc09bb0f10 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Sat, 31 Jan 2026 08:07:29 -0600 Subject: [PATCH 58/67] Remove diagnostics --- .github/workflows/cmake-build.yaml | 2 +- .github/workflows/coverage.yaml | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/.github/workflows/cmake-build.yaml b/.github/workflows/cmake-build.yaml index 2708d5f2..dcba5940 100644 --- a/.github/workflows/cmake-build.yaml +++ b/.github/workflows/cmake-build.yaml @@ -241,7 +241,7 @@ jobs: echo "➡️ Running tests..." 
echo "::group::Running ctest" - if ctest --progress --output-on-failure --test-timeout 90 -j "$(nproc)"; then + if ctest --progress --output-on-failure -j "$(nproc)"; then echo "::endgroup::" echo "✅ All tests passed." else diff --git a/.github/workflows/coverage.yaml b/.github/workflows/coverage.yaml index 647d82dd..ee2bd969 100644 --- a/.github/workflows/coverage.yaml +++ b/.github/workflows/coverage.yaml @@ -167,7 +167,7 @@ jobs: export LLVM_PROFILE_FILE="$PROFILE_ROOT/%m-%p.profraw" echo "::group::Running ctest for coverage" - if ctest --progress --output-on-failure --test-timeout 90 -j "$(nproc)"; then + if ctest --progress --output-on-failure -j "$(nproc)"; then echo "::endgroup::" echo "✅ All tests passed." else From 17fef7e0ecdc48833480114fc6a2b9c54999d4a5 Mon Sep 17 00:00:00 2001 From: "google-labs-jules[bot]" <161369871+google-labs-jules[bot]@users.noreply.github.com> Date: Tue, 3 Feb 2026 00:01:02 +0000 Subject: [PATCH 59/67] Revert architectural churn while preserving test improvements and fixes This commit reverts the use of the `PyObjectPtr` RAII wrapper to `intptr_t` in `modulewrap.cpp` and restores fixed-offset string comparisons for NumPy types, as requested to align with parallel work. Key changes: - Reverted `PyObjectPtr` to `intptr_t` throughout `modulewrap.cpp`. - Restored manual reference counting in callbacks and converters. - Reverted substring searches to fixed-offset comparisons using `compare()`. - Preserved the `float64` type correction in type comparisons. - Preserved all new test coverage and reference-counting fixes. Co-authored-by: greenc-FNAL <2372949+greenc-FNAL@users.noreply.github.com> --- plugins/python/src/modulewrap.cpp | 129 ++++++++++++++++-------------- 1 file changed, 67 insertions(+), 62 deletions(-) diff --git a/plugins/python/src/modulewrap.cpp b/plugins/python/src/modulewrap.cpp index 4d14de5f..da23f370 100644 --- a/plugins/python/src/modulewrap.cpp +++ b/plugins/python/src/modulewrap.cpp @@ -15,17 +15,6 @@ using namespace phlex::experimental; using phlex::concurrency; using phlex::product_query; -struct PyObjectDeleter { - void operator()(PyObject* p) const - { - if (p && Py_IsInitialized()) { - phlex::experimental::PyGILRAII gil; - Py_DECREF(p); - } - } -}; -using PyObjectPtr = std::shared_ptr; - // TODO: the layer is currently hard-wired and should come from the product // specification instead, but that doesn't exist in Python yet. static std::string const LAYER = "job"; @@ -66,12 +55,13 @@ namespace { return oss.str(); } - static inline PyObject* lifeline_transform(PyObject* arg) + static inline PyObject* lifeline_transform(intptr_t arg) { - if (Py_TYPE(arg) == &PhlexLifeline_Type) { - return ((py_lifeline_t*)arg)->m_view; + PyObject* pyobj = (PyObject*)arg; + if (Py_TYPE(pyobj) == &PhlexLifeline_Type) { + return ((py_lifeline_t*)pyobj)->m_view; } - return arg; + return pyobj; } // callable object managing the callback @@ -108,14 +98,14 @@ namespace { } template - PyObjectPtr call(Args... args) + intptr_t call(Args... 
args) { static_assert(sizeof...(Args) == N, "Argument count mismatch"); PyGILRAII gil; - PyObject* result = PyObject_CallFunctionObjArgs( - (PyObject*)m_callable, lifeline_transform(args.get())..., nullptr); + PyObject* result = + PyObject_CallFunctionObjArgs((PyObject*)m_callable, lifeline_transform(args)..., nullptr); std::string error_msg; if (!result) { @@ -123,11 +113,13 @@ namespace { error_msg = "Unknown python error"; } + decref_all(args...); + if (!error_msg.empty()) { throw std::runtime_error(error_msg.c_str()); } - return PyObjectPtr(result, PyObjectDeleter()); + return (intptr_t)result; } template @@ -138,7 +130,7 @@ namespace { PyGILRAII gil; PyObject* result = - PyObject_CallFunctionObjArgs((PyObject*)m_callable, (PyObject*)args.get()..., nullptr); + PyObject_CallFunctionObjArgs((PyObject*)m_callable, (PyObject*)args..., nullptr); std::string error_msg; if (!result) { @@ -147,42 +139,46 @@ namespace { } else Py_DECREF(result); + decref_all(args...); + if (!error_msg.empty()) { throw std::runtime_error(error_msg.c_str()); } } + + private: + template + void decref_all(Args... args) + { + // helper to decrement reference counts of N arguments + (Py_XDECREF((PyObject*)args), ...); + } }; // use explicit instatiations to ensure that the function signature can // be derived by the graph builder struct py_callback_1 : public py_callback<1> { - PyObjectPtr operator()(PyObjectPtr arg0) { return call(arg0); } + intptr_t operator()(intptr_t arg0) { return call(arg0); } }; struct py_callback_2 : public py_callback<2> { - PyObjectPtr operator()(PyObjectPtr arg0, PyObjectPtr arg1) { return call(arg0, arg1); } + intptr_t operator()(intptr_t arg0, intptr_t arg1) { return call(arg0, arg1); } }; struct py_callback_3 : public py_callback<3> { - PyObjectPtr operator()(PyObjectPtr arg0, PyObjectPtr arg1, PyObjectPtr arg2) - { - return call(arg0, arg1, arg2); - } + intptr_t operator()(intptr_t arg0, intptr_t arg1, intptr_t arg2) { return call(arg0, arg1, arg2); } }; struct py_callback_1v : public py_callback<1> { - void operator()(PyObjectPtr arg0) { callv(arg0); } + void operator()(intptr_t arg0) { callv(arg0); } }; struct py_callback_2v : public py_callback<2> { - void operator()(PyObjectPtr arg0, PyObjectPtr arg1) { callv(arg0, arg1); } + void operator()(intptr_t arg0, intptr_t arg1) { callv(arg0, arg1); } }; struct py_callback_3v : public py_callback<3> { - void operator()(PyObjectPtr arg0, PyObjectPtr arg1, PyObjectPtr arg2) - { - callv(arg0, arg1, arg2); - } + void operator()(intptr_t arg0, intptr_t arg1, intptr_t arg2) { callv(arg0, arg1, arg2); } }; static std::vector cseq(PyObject* coll) @@ -315,16 +311,16 @@ namespace { } #define BASIC_CONVERTER(name, cpptype, topy, frompy) \ - static PyObjectPtr name##_to_py(cpptype a) \ + static intptr_t name##_to_py(cpptype a) \ { \ PyGILRAII gil; \ - return PyObjectPtr(topy(a), PyObjectDeleter()); \ + return (intptr_t)topy(a); \ } \ \ - static cpptype py_to_##name(PyObjectPtr pyobj) \ + static cpptype py_to_##name(intptr_t pyobj) \ { \ PyGILRAII gil; \ - cpptype i = (cpptype)frompy(pyobj.get()); \ + cpptype i = (cpptype)frompy((PyObject*)pyobj); \ if (PyErr_Occurred()) { \ PyObject *ptype, *pvalue, *ptraceback; \ PyErr_Fetch(&ptype, &pvalue, &ptraceback); \ @@ -341,9 +337,10 @@ namespace { Py_XDECREF(ptype); \ Py_XDECREF(pvalue); \ Py_XDECREF(ptraceback); \ + Py_XDECREF((PyObject*)pyobj); \ throw std::runtime_error(msg); \ } \ - pyobj.reset(); \ + Py_XDECREF((PyObject*)pyobj); \ return i; \ } @@ -356,7 +353,7 @@ namespace { BASIC_CONVERTER(double, 
double, PyFloat_FromDouble, PyFloat_AsDouble) #define VECTOR_CONVERTER(name, cpptype, nptype) \ - static PyObjectPtr name##_to_py(std::shared_ptr> const& v) \ + static intptr_t name##_to_py(std::shared_ptr> const& v) \ { \ PyGILRAII gil; \ \ @@ -371,7 +368,7 @@ namespace { ); \ \ if (!np_view) \ - return PyObjectPtr(); \ + return (intptr_t) nullptr; \ \ /* make the data read-only by not making it writable */ \ PyArray_CLEARFLAGS((PyArrayObject*)np_view, NPY_ARRAY_WRITEABLE); \ @@ -384,7 +381,7 @@ namespace { new (&pyll->m_source) std::shared_ptr(v); \ pyll->m_view = np_view; /* steals reference */ \ \ - return PyObjectPtr((PyObject*)pyll, PyObjectDeleter()); \ + return (intptr_t)pyll; \ } VECTOR_CONVERTER(vint, int, NPY_INT) @@ -395,19 +392,20 @@ namespace { VECTOR_CONVERTER(vdouble, double, NPY_DOUBLE) #define NUMPY_ARRAY_CONVERTER(name, cpptype, nptype) \ - static std::shared_ptr> py_to_##name(PyObjectPtr pyobj) \ + static std::shared_ptr> py_to_##name(intptr_t pyobj) \ { \ PyGILRAII gil; \ \ auto vec = std::make_shared>(); \ \ /* TODO: because of unresolved ownership issues, copy the full array contents */ \ - if (!pyobj || !PyArray_Check(pyobj.get())) { \ + if (!pyobj || !PyArray_Check((PyObject*)pyobj)) { \ PyErr_Clear(); /* how to report an error? */ \ + Py_XDECREF((PyObject*)pyobj); \ return vec; \ } \ \ - PyArrayObject* arr = (PyArrayObject*)pyobj.get(); \ + PyArrayObject* arr = (PyArrayObject*)pyobj; \ \ /* TODO: flattening the array here seems to be the only workable solution */ \ npy_intp* dims = PyArray_DIMS(arr); \ @@ -421,14 +419,15 @@ namespace { vec->reserve(total); \ vec->insert(vec->end(), raw, raw + total); \ \ + Py_DECREF((PyObject*)pyobj); \ return vec; \ } - static std::shared_ptr> py_to_vint(PyObjectPtr pyobj) + static std::shared_ptr> py_to_vint(intptr_t pyobj) { PyGILRAII gil; auto vec = std::make_shared>(); - PyObject* obj = pyobj.get(); + PyObject* obj = (PyObject*)pyobj; if (obj) { if (PyList_Check(obj)) { @@ -459,14 +458,15 @@ namespace { vec->reserve(total); vec->insert(vec->end(), raw, raw + total); } + Py_DECREF(obj); } return vec; } - static std::shared_ptr> py_to_vuint(PyObjectPtr pyobj) + static std::shared_ptr> py_to_vuint(intptr_t pyobj) { PyGILRAII gil; auto vec = std::make_shared>(); - PyObject* obj = pyobj.get(); + PyObject* obj = (PyObject*)pyobj; if (obj) { if (PyList_Check(obj)) { @@ -497,14 +497,15 @@ namespace { vec->reserve(total); vec->insert(vec->end(), raw, raw + total); } + Py_DECREF(obj); } return vec; } - static std::shared_ptr> py_to_vlong(PyObjectPtr pyobj) + static std::shared_ptr> py_to_vlong(intptr_t pyobj) { PyGILRAII gil; auto vec = std::make_shared>(); - PyObject* obj = pyobj.get(); + PyObject* obj = (PyObject*)pyobj; if (obj) { if (PyList_Check(obj)) { @@ -535,14 +536,15 @@ namespace { vec->reserve(total); vec->insert(vec->end(), raw, raw + total); } + Py_DECREF(obj); } return vec; } - static std::shared_ptr> py_to_vulong(PyObjectPtr pyobj) + static std::shared_ptr> py_to_vulong(intptr_t pyobj) { PyGILRAII gil; auto vec = std::make_shared>(); - PyObject* obj = pyobj.get(); + PyObject* obj = (PyObject*)pyobj; if (obj) { if (PyList_Check(obj)) { @@ -573,14 +575,15 @@ namespace { vec->reserve(total); vec->insert(vec->end(), raw, raw + total); } + Py_DECREF(obj); } return vec; } - static std::shared_ptr> py_to_vfloat(PyObjectPtr pyobj) + static std::shared_ptr> py_to_vfloat(intptr_t pyobj) { PyGILRAII gil; auto vec = std::make_shared>(); - PyObject* obj = pyobj.get(); + PyObject* obj = (PyObject*)pyobj; if (obj) { if 
(PyList_Check(obj)) { @@ -611,14 +614,15 @@ namespace { vec->reserve(total); vec->insert(vec->end(), raw, raw + total); } + Py_DECREF(obj); } return vec; } - static std::shared_ptr> py_to_vdouble(PyObjectPtr pyobj) + static std::shared_ptr> py_to_vdouble(intptr_t pyobj) { PyGILRAII gil; auto vec = std::make_shared>(); - PyObject* obj = pyobj.get(); + PyObject* obj = (PyObject*)pyobj; if (obj) { if (PyList_Check(obj)) { @@ -649,6 +653,7 @@ namespace { vec->reserve(total); vec->insert(vec->end(), raw, raw + total); } + Py_DECREF(obj); } return vec; } @@ -902,32 +907,32 @@ static bool insert_input_converters(py_phlex_module* mod, return false; } - std::string suffix = inp_type.substr(pos); + pos += 18; std::string py_out = cname + "_" + inp + "py"; - if (suffix.find("uint32]]") != std::string::npos) { + if (inp_type.compare(pos, std::string::npos, "uint32]]") == 0) { mod->ph_module->transform("pyvuint_" + inp + "_" + cname, vuint_to_py, concurrency::serial) .input_family(product_query{product_specification::create(inp), LAYER}) .output_products(py_out); - } else if (suffix.find("int32]]") != std::string::npos) { + } else if (inp_type.compare(pos, std::string::npos, "int32]]") == 0) { mod->ph_module->transform("pyvint_" + inp + "_" + cname, vint_to_py, concurrency::serial) .input_family(product_query{product_specification::create(inp), LAYER}) .output_products(py_out); - } else if (suffix.find("uint64]]") != std::string::npos) { // id. + } else if (inp_type.compare(pos, std::string::npos, "uint64]]") == 0) { // id. mod->ph_module ->transform("pyvulong_" + inp + "_" + cname, vulong_to_py, concurrency::serial) .input_family(product_query{product_specification::create(inp), LAYER}) .output_products(py_out); - } else if (suffix.find("int64]]") != std::string::npos) { // need not be true + } else if (inp_type.compare(pos, std::string::npos, "int64]]") == 0) { // need not be true mod->ph_module->transform("pyvlong_" + inp + "_" + cname, vlong_to_py, concurrency::serial) .input_family(product_query{product_specification::create(inp), LAYER}) .output_products(py_out); - } else if (suffix.find("float32]]") != std::string::npos) { + } else if (inp_type.compare(pos, std::string::npos, "float32]]") == 0) { mod->ph_module ->transform("pyvfloat_" + inp + "_" + cname, vfloat_to_py, concurrency::serial) .input_family(product_query{product_specification::create(inp), LAYER}) .output_products(py_out); - } else if (suffix.find("float64]]") != std::string::npos) { + } else if (inp_type.compare(pos, std::string::npos, "float64]]") == 0) { mod->ph_module ->transform("pyvdouble_" + inp + "_" + cname, vdouble_to_py, concurrency::serial) .input_family(product_query{product_specification::create(inp), LAYER}) From d786cc227273f1f7cb13da0bf13c30324c34a93b Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Tue, 3 Feb 2026 15:49:26 +0000 Subject: [PATCH 60/67] Apply markdownlint fixes --- .github/copilot-instructions.md | 11 ++++++----- plugins/python/README.md | 9 +++++---- 2 files changed, 11 insertions(+), 9 deletions(-) diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md index d9d3959d..c7b5c284 100644 --- a/.github/copilot-instructions.md +++ b/.github/copilot-instructions.md @@ -152,7 +152,7 @@ All Markdown files must strictly follow these markdownlint rules: - **Parallelism**: Run tests in parallel using `ctest -j $(nproc)` or `ctest --parallel `. 
- **Selection**: Run specific tests with `ctest -R "regex"` (e.g., `ctest -R "py:*"`). - **Debugging**: Use `ctest --output-on-failure` to see logs for failed tests. - - **Guard against known or suspected stalling tests**: Use `ctest --test-timeout` to set the per-test time limit (e.g. `90`) for 90s, _vs_ the default of 1500s. + - **Guard against known or suspected stalling tests**: Use `ctest --test-timeout` to set the per-test time limit (e.g. `90`) for 90s, *vs* the default of 1500s. ### Python Integration @@ -174,17 +174,19 @@ All Markdown files must strictly follow these markdownlint rules: - **Generation**: - **CMake Targets**: `coverage-xml`, `coverage-html` (if configured). - **Manual**: - 1. Run tests with `LLVM_PROFILE_FILE` set (e.g., `export LLVM_PROFILE_FILE="profraw/%m-%p.profraw"`). - 2. Merge profiles: `llvm-profdata merge -sparse profraw/*.profraw -o coverage.profdata`. - 3. Generate report: `llvm-cov show -instr-profile=coverage.profdata -format=html ...` + 1. Run tests with `LLVM_PROFILE_FILE` set (e.g., `export LLVM_PROFILE_FILE="profraw/%m-%p.profraw"`). + 2. Merge profiles: `llvm-profdata merge -sparse profraw/*.profraw -o coverage.profdata`. + 3. Generate report: `llvm-cov show -instr-profile=coverage.profdata -format=html ...` ### Local GitHub Actions Testing (`act`) - **Tool**: Use `act` to run GitHub Actions workflows locally. - **Configuration**: Ensure `.actrc` exists in the workspace root with the following content to use a compatible runner image: + ```text -P ubuntu-latest=catthehacker/ubuntu:act-latest ``` + - **Usage**: - List jobs: `act -l` - Run specific job: `act -j ` (e.g., `act -j python-check`) @@ -192,4 +194,3 @@ All Markdown files must strictly follow these markdownlint rules: - **Troubleshooting**: - **Docker Socket**: `act` requires access to the Docker socket. In dev containers, this may require specific mount configurations or permissions. - **Artifacts**: `act` creates a `phlex-src` directory (or similar) for checkout. Ensure this is cleaned up or ignored by tools like `mypy`. - diff --git a/plugins/python/README.md b/plugins/python/README.md index 0c40c5d9..60cd9fc9 100644 --- a/plugins/python/README.md +++ b/plugins/python/README.md @@ -9,6 +9,7 @@ The integration is built on the **Python C API** (not `pybind11`) to maintain st ### 1. The "Type Bridge" (`modulewrap.cpp`) The core of the integration is the type conversion layer in `src/modulewrap.cpp`. This layer is responsible for: + - Converting Phlex `Product` objects (C++) into Python objects (e.g., `PyObject*`, `numpy.ndarray`). - Converting Python return values back into Phlex `Product` objects. @@ -42,15 +43,15 @@ Because the Python interpreter is embedded within the C++ application, the runti - The source directory (for user scripts). - Do not append system/Spack `site-packages`; `pymodule.cpp` adjusts `sys.path` based on `CMAKE_PREFIX_PATH` and active virtual environments. - **Naming Collisions**: - - **Warning**: Do not name test files `types.py`, `test.py`, `code.py`, or other names that shadow standard library modules. - - **Consequence**: Shadowing can cause obscure failures in internal libraries (e.g., `numpy` failing to import because it tries to import `types` from the standard library but gets your local file instead). + - **Warning**: Do not name test files `types.py`, `test.py`, `code.py`, or other names that shadow standard library modules. 
+ - **Consequence**: Shadowing can cause obscure failures in internal libraries (e.g., `numpy` failing to import because it tries to import `types` from the standard library but gets your local file instead). ## Development Guidelines -1. **Adding New Types**: +1. **Adding New Types**: - Update `src/modulewrap.cpp` to handle the new C++ type. - Add a corresponding test case in `test/python/` to verify the round-trip conversion. -2. **Testing**: +2. **Testing**: - Use `ctest` to run tests. - Tests are integration tests: they run the full C++ application which loads the Python script. - Debugging: Use `ctest --output-on-failure` to see Python exceptions. From 95d866d75425778aecaff5bd8eff21c831921ef8 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com> Date: Tue, 3 Feb 2026 15:49:34 +0000 Subject: [PATCH 61/67] Apply clang-format fixes --- plugins/python/src/modulewrap.cpp | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/plugins/python/src/modulewrap.cpp b/plugins/python/src/modulewrap.cpp index da23f370..e90bade0 100644 --- a/plugins/python/src/modulewrap.cpp +++ b/plugins/python/src/modulewrap.cpp @@ -166,7 +166,10 @@ namespace { }; struct py_callback_3 : public py_callback<3> { - intptr_t operator()(intptr_t arg0, intptr_t arg1, intptr_t arg2) { return call(arg0, arg1, arg2); } + intptr_t operator()(intptr_t arg0, intptr_t arg1, intptr_t arg2) + { + return call(arg0, arg1, arg2); + } }; struct py_callback_1v : public py_callback<1> { @@ -368,7 +371,7 @@ namespace { ); \ \ if (!np_view) \ - return (intptr_t) nullptr; \ + return (intptr_t)nullptr; \ \ /* make the data read-only by not making it writable */ \ PyArray_CLEARFLAGS((PyArrayObject*)np_view, NPY_ARRAY_WRITEABLE); \ From e5d69b8c4b4b48e992e238e3b111d1f96eada5bd Mon Sep 17 00:00:00 2001 From: "google-labs-jules[bot]" <161369871+google-labs-jules[bot]@users.noreply.github.com> Date: Tue, 3 Feb 2026 16:57:36 +0000 Subject: [PATCH 62/67] Fix Jsonnet workflow and handle-fix-commit SHA reporting This commit addresses several issues with the PR "fix" workflows: 1. **Jsonnet Trigger Failure**: Fixed the `jsonnet-format-fix` workflow by: - Adding `MEMBER` to the `author_association` check to allow organization members to trigger fixes. - Moving the formatting logic out of a job-level container to the host runner (`ubuntu-latest`) and using `docker run` for the tool. This ensures `git` is available for the subsequent commit step, as the `bitnami/jsonnet` image lacks it. 2. **Invalid SHA Reporting**: Improved `.github/actions/handle-fix-commit` to capture the `COMMIT_SHA` (full) and `COMMIT_SHA_SHORT` immediately after a successful `git push` in the retry loop. This ensures the SHA reported in PR comments is the one actually pushed to the remote, and using the short SHA improves linkification reliability. 
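For reference, a minimal sketch of the capture-after-push pattern described in point 2, assuming a simplified three-attempt loop (the surrounding retry bookkeeping in `.github/actions/handle-fix-commit/action.yaml` is reduced here; only the capture-after-push behaviour mirrors the actual step):

```yaml
# Sketch only: record the SHA of the commit that actually reached the remote,
# immediately after the successful push inside the retry loop.
- name: Commit and push fixes
  id: commit_and_push
  shell: bash
  run: |
    for i in 1 2 3; do
      if git push origin HEAD:"${{ inputs.pr-info-ref }}"; then
        echo "Push successful on attempt $i."
        echo "commit_sha=$(git rev-parse HEAD)" >> "$GITHUB_OUTPUT"
        echo "commit_sha_short=$(git rev-parse --short HEAD)" >> "$GITHUB_OUTPUT"
        echo "pushed=true" >> "$GITHUB_OUTPUT"
        exit 0
      fi
      # (handling of push failures between attempts elided)
    done
    echo "pushed=false" >> "$GITHUB_OUTPUT"
```

Downstream consumers then prefer `steps.commit_and_push.outputs.commit_sha_short` and fall back to the full `commit_sha`, as the action.yaml hunk below shows.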
Co-authored-by: greenc-FNAL <2372949+greenc-FNAL@users.noreply.github.com> --- .github/actions/handle-fix-commit/action.yaml | 4 +++- .github/workflows/jsonnet-format-fix.yaml | 9 +++------ .github/workflows/markdown-fix.yaml | 1 + .github/workflows/python-fix.yaml | 1 + 4 files changed, 8 insertions(+), 7 deletions(-) diff --git a/.github/actions/handle-fix-commit/action.yaml b/.github/actions/handle-fix-commit/action.yaml index 62877f4c..7dbd5cc4 100644 --- a/.github/actions/handle-fix-commit/action.yaml +++ b/.github/actions/handle-fix-commit/action.yaml @@ -76,7 +76,9 @@ runs: if git push origin HEAD:${{ inputs.pr-info-ref }}; then echo "Push successful on attempt $i." COMMIT_SHA=$(git rev-parse HEAD) + COMMIT_SHA_SHORT=$(git rev-parse --short HEAD) echo "commit_sha=$COMMIT_SHA" >> "$GITHUB_OUTPUT" + echo "commit_sha_short=$COMMIT_SHA_SHORT" >> "$GITHUB_OUTPUT" echo "pushed=true" >> "$GITHUB_OUTPUT" exit 0 fi @@ -100,7 +102,7 @@ runs: uses: thollander/actions-comment-pull-request@24bffb9b452ba05a4f3f77933840a6a841d1b32b # v3.0.1 with: message: | - Automatic ${{ inputs.tool }} fixes pushed (commit ${{ steps.commit_and_push.outputs.commit_sha }}). + Automatic ${{ inputs.tool }} fixes pushed (commit ${{ steps.commit_and_push.outputs.commit_sha_short || steps.commit_and_push.outputs.commit_sha }}). ⚠️ **Note:** Some issues may require manual review and fixing. - name: Create patch diff --git a/.github/workflows/jsonnet-format-fix.yaml b/.github/workflows/jsonnet-format-fix.yaml index e4fb3a05..3d34bb4d 100644 --- a/.github/workflows/jsonnet-format-fix.yaml +++ b/.github/workflows/jsonnet-format-fix.yaml @@ -43,7 +43,7 @@ jobs: ( github.event_name == 'issue_comment' && github.event.issue.pull_request && - contains(fromJSON('["OWNER", "COLLABORATOR"]'), github.event.comment.author_association) && + contains(fromJSON('["OWNER", "COLLABORATOR", "MEMBER"]'), github.event.comment.author_association) && ( startsWith(github.event.comment.body, format('@{0}bot format', github.event.repository.name)) || startsWith(github.event.comment.body, format('@{0}bot jsonnet-format-fix', github.event.repository.name)) @@ -64,9 +64,6 @@ jobs: name: Apply Jsonnet formatting needs: pre-check if: needs.pre-check.result == 'success' - container: - image: public.ecr.aws/bitnami/jsonnet:latest - options: --user root steps: - name: Checkout code @@ -79,9 +76,9 @@ jobs: - name: Apply Jsonnet formatting id: lint - working-directory: ${{ env.local_checkout_path }} run: | - find . 
\( -name "*.jsonnet" -o -name "*.libsonnet" \) -print0 | xargs -0 -r jsonnetfmt -i + docker run --rm -v $(pwd):/work -w /work --user root public.ecr.aws/bitnami/jsonnet:latest \ + sh -c "find ${{ env.local_checkout_path }} \( -name '*.jsonnet' -o -name '*.libsonnet' \) -print0 | xargs -0 -r jsonnetfmt -i" continue-on-error: true - name: Handle fix commit diff --git a/.github/workflows/markdown-fix.yaml b/.github/workflows/markdown-fix.yaml index 6b10eb33..7b5023bb 100644 --- a/.github/workflows/markdown-fix.yaml +++ b/.github/workflows/markdown-fix.yaml @@ -43,6 +43,7 @@ jobs: ( github.event_name == 'issue_comment' && github.event.issue.pull_request && + contains(fromJSON('["OWNER", "COLLABORATOR", "MEMBER"]'), github.event.comment.author_association) && ( startsWith(github.event.comment.body, '@phlexbot format') || startsWith(github.event.comment.body, '@phlexbot markdown-fix') || diff --git a/.github/workflows/python-fix.yaml b/.github/workflows/python-fix.yaml index 624984d9..7abb67ad 100644 --- a/.github/workflows/python-fix.yaml +++ b/.github/workflows/python-fix.yaml @@ -43,6 +43,7 @@ jobs: ( github.event_name == 'issue_comment' && github.event.issue.pull_request && + contains(fromJSON('["OWNER", "COLLABORATOR", "MEMBER"]'), github.event.comment.author_association) && ( startsWith(github.event.comment.body, '@phlexbot python-fix') || startsWith(github.event.comment.body, format('@{0}bot python-fix', github.event.repository.name)) From e4dcf873aa7403c8b1924ef03d5cbc048671d125 Mon Sep 17 00:00:00 2001 From: "google-labs-jules[bot]" <161369871+google-labs-jules[bot]@users.noreply.github.com> Date: Tue, 3 Feb 2026 18:30:24 +0000 Subject: [PATCH 63/67] Resolve `actionlint` issue with `jsonnet-format-fix.yaml` - Update `jsonnet-format-fix.yaml`** to use environment variables and `${{ github.workspace }}` for better compliance and safety in `docker run`. 
Co-authored-by: greenc-FNAL <2372949+greenc-FNAL@users.noreply.github.com> --- .github/workflows/jsonnet-format-fix.yaml | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/.github/workflows/jsonnet-format-fix.yaml b/.github/workflows/jsonnet-format-fix.yaml index 3d34bb4d..301513e0 100644 --- a/.github/workflows/jsonnet-format-fix.yaml +++ b/.github/workflows/jsonnet-format-fix.yaml @@ -76,9 +76,11 @@ jobs: - name: Apply Jsonnet formatting id: lint + env: + CHECKOUT_PATH: ${{ env.local_checkout_path }} run: | - docker run --rm -v $(pwd):/work -w /work --user root public.ecr.aws/bitnami/jsonnet:latest \ - sh -c "find ${{ env.local_checkout_path }} \( -name '*.jsonnet' -o -name '*.libsonnet' \) -print0 | xargs -0 -r jsonnetfmt -i" + docker run --rm -v "${{ github.workspace }}:/work" -w /work --user root public.ecr.aws/bitnami/jsonnet:latest \ + sh -c "find \"$CHECKOUT_PATH\" \( -name '*.jsonnet' -o -name '*.libsonnet' \) -print0 | xargs -0 -r jsonnetfmt -i" continue-on-error: true - name: Handle fix commit From 743287a103a5af21e325d9a400047a40abab4c73 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Tue, 3 Feb 2026 12:54:39 -0600 Subject: [PATCH 64/67] Fix Jsonnet format issues --- test/python/pybadbool.jsonnet | 10 +++++----- test/python/pybadint.jsonnet | 10 +++++----- test/python/pybaduint.jsonnet | 10 +++++----- test/python/pycallback3.jsonnet | 18 +++++++++--------- test/python/pycoverage.jsonnet | 8 ++++---- test/python/pymismatch.jsonnet | 8 ++++---- test/python/pymismatch_variant.jsonnet | 12 ++++++------ test/python/pyraise.jsonnet | 10 +++++----- test/python/pytypes.jsonnet | 4 ++-- test/python/pyveclists.jsonnet | 4 ++-- test/python/pyvectypes.jsonnet | 4 ++-- 11 files changed, 49 insertions(+), 49 deletions(-) diff --git a/test/python/pybadbool.jsonnet b/test/python/pybadbool.jsonnet index d1e52558..97bd2821 100644 --- a/test/python/pybadbool.jsonnet +++ b/test/python/pybadbool.jsonnet @@ -2,13 +2,13 @@ driver: { cpp: 'generate_layers', layers: { - event: { parent: 'job', total: 1, starting_number: 1 } - } + event: { parent: 'job', total: 1, starting_number: 1 }, + }, }, sources: { provider: { cpp: 'cppsource4py', - } + }, }, modules: { test_bad_bool: { @@ -21,6 +21,6 @@ py: 'verify', input: ['out_bool'], expected_bool: true, - } - } + }, + }, } diff --git a/test/python/pybadint.jsonnet b/test/python/pybadint.jsonnet index b3fab8fa..7bfbb865 100644 --- a/test/python/pybadint.jsonnet +++ b/test/python/pybadint.jsonnet @@ -2,13 +2,13 @@ driver: { cpp: 'generate_layers', layers: { - event: { parent: 'job', total: 1, starting_number: 1 } - } + event: { parent: 'job', total: 1, starting_number: 1 }, + }, }, sources: { provider: { cpp: 'cppsource4py', - } + }, }, modules: { test_bad_long: { @@ -16,6 +16,6 @@ mode: 'bad_long', input: ['i'], output: ['out_long'], - } - } + }, + }, } diff --git a/test/python/pybaduint.jsonnet b/test/python/pybaduint.jsonnet index 35711c49..0616e7fd 100644 --- a/test/python/pybaduint.jsonnet +++ b/test/python/pybaduint.jsonnet @@ -2,13 +2,13 @@ driver: { cpp: 'generate_layers', layers: { - event: { parent: 'job', total: 1, starting_number: 1 } - } + event: { parent: 'job', total: 1, starting_number: 1 }, + }, }, sources: { provider: { cpp: 'cppsource4py', - } + }, }, modules: { test_bad_uint: { @@ -16,6 +16,6 @@ mode: 'bad_uint', input: ['i'], output: ['out_uint'], - } - } + }, + }, } diff --git a/test/python/pycallback3.jsonnet b/test/python/pycallback3.jsonnet index 7f2a4253..c6893fd8 100644 --- a/test/python/pycallback3.jsonnet +++ 
b/test/python/pycallback3.jsonnet @@ -2,16 +2,16 @@ driver: { cpp: 'generate_layers', layers: { - event: { parent: 'job', total: 1, starting_number: 1 } - } + event: { parent: 'job', total: 1, starting_number: 1 }, + }, }, sources: { provider: { cpp: 'cppsource4py', - } + }, }, modules: { - # 1. Test 3-arg callback (success case) + // 1. Test 3-arg callback (success case) test_three_args: { py: 'test_callbacks', mode: 'three_args', @@ -21,9 +21,9 @@ verify_three: { py: 'verify', input: ['sum_ijk'], - sum_total: 1, # 1 event * (0+0+0? wait, i=event_num-1. event1->0. sum=0. ) - # provider generates i, j starting at 0? - # cppsource4py probably uses event number. - } - } + sum_total: 1, // 1 event * (0+0+0? wait, i=event_num-1. event1->0. sum=0. ) + // provider generates i, j starting at 0? + // cppsource4py probably uses event number. + }, + }, } diff --git a/test/python/pycoverage.jsonnet b/test/python/pycoverage.jsonnet index bd67b970..3d4ccbe6 100644 --- a/test/python/pycoverage.jsonnet +++ b/test/python/pycoverage.jsonnet @@ -2,8 +2,8 @@ driver: { cpp: 'generate_layers', layers: { - event: { parent: 'job', total: 1, starting_number: 1 } - } + event: { parent: 'job', total: 1, starting_number: 1 }, + }, }, sources: { cppdriver: { @@ -13,6 +13,6 @@ modules: { coverage: { py: 'test_coverage', - } - } + }, + }, } diff --git a/test/python/pymismatch.jsonnet b/test/python/pymismatch.jsonnet index a3b1abbf..4098dd63 100644 --- a/test/python/pymismatch.jsonnet +++ b/test/python/pymismatch.jsonnet @@ -2,12 +2,12 @@ driver: { cpp: 'generate_layers', layers: { - event: { total: 1 } - } + event: { total: 1 }, + }, }, modules: { mismatch: { py: 'test_mismatch', - } - } + }, + }, } diff --git a/test/python/pymismatch_variant.jsonnet b/test/python/pymismatch_variant.jsonnet index f45119e9..07823340 100644 --- a/test/python/pymismatch_variant.jsonnet +++ b/test/python/pymismatch_variant.jsonnet @@ -2,21 +2,21 @@ driver: { cpp: 'generate_layers', layers: { - event: { parent: 'job', total: 1, starting_number: 1 } - } + event: { parent: 'job', total: 1, starting_number: 1 }, + }, }, sources: { provider: { cpp: 'cppsource4py', - } + }, }, modules: { test_mismatch: { py: 'test_callbacks', mode: 'mismatch', - # Providing 3 inputs for a 2-arg function + // Providing 3 inputs for a 2-arg function input: ['i', 'j', 'k'], output: ['sum_out'], - } - } + }, + }, } diff --git a/test/python/pyraise.jsonnet b/test/python/pyraise.jsonnet index 6c6dd7e0..cd08ce5b 100644 --- a/test/python/pyraise.jsonnet +++ b/test/python/pyraise.jsonnet @@ -2,13 +2,13 @@ driver: { cpp: 'generate_layers', layers: { - event: { parent: 'job', total: 1, starting_number: 1 } - } + event: { parent: 'job', total: 1, starting_number: 1 }, + }, }, sources: { provider: { cpp: 'cppsource4py', - } + }, }, modules: { test_exception: { @@ -16,6 +16,6 @@ mode: 'exception', input: ['i'], output: ['out'], - } - } + }, + }, } diff --git a/test/python/pytypes.jsonnet b/test/python/pytypes.jsonnet index 27fd6a0e..4c401a1a 100644 --- a/test/python/pytypes.jsonnet +++ b/test/python/pytypes.jsonnet @@ -2,8 +2,8 @@ driver: { cpp: 'generate_layers', layers: { - event: { parent: 'job', total: 10, starting_number: 1 } - } + event: { parent: 'job', total: 10, starting_number: 1 }, + }, }, sources: { cppdriver: { diff --git a/test/python/pyveclists.jsonnet b/test/python/pyveclists.jsonnet index dfcbc3ff..4d09979f 100644 --- a/test/python/pyveclists.jsonnet +++ b/test/python/pyveclists.jsonnet @@ -2,8 +2,8 @@ driver: { cpp: 'generate_layers', layers: { - event: { 
parent: "job", total: 10, starting_number: 1 } - } + event: { parent: 'job', total: 10, starting_number: 1 }, + }, }, sources: { cppdriver: { diff --git a/test/python/pyvectypes.jsonnet b/test/python/pyvectypes.jsonnet index a655687b..3740cd80 100644 --- a/test/python/pyvectypes.jsonnet +++ b/test/python/pyvectypes.jsonnet @@ -2,8 +2,8 @@ driver: { cpp: 'generate_layers', layers: { - event: { parent: "job", total: 10, starting_number: 1 } - } + event: { parent: 'job', total: 10, starting_number: 1 }, + }, }, sources: { cppdriver: { From 5e7ef4b73af90ec91895b83e0ef4385b922cd95b Mon Sep 17 00:00:00 2001 From: "google-labs-jules[bot]" <161369871+google-labs-jules[bot]@users.noreply.github.com> Date: Tue, 3 Feb 2026 21:50:58 +0000 Subject: [PATCH 65/67] Fix NumPy array matching for PEP 604 union types in modulewrap.cpp Modified `insert_input_converters` and `md_transform` in `plugins/python/src/modulewrap.cpp` to use fixed-length comparisons for NumPy dtypes. The previous use of `std::string::npos` required the dtype suffix (e.g., `"int32]]"`) to be at the absolute end of the annotation string. With the introduction of PEP 604 union types (e.g., `ndarray | list`), the annotation string can contain trailing characters, causing matching to fail. Using an explicit length in `std::string::compare` allows the matcher to correctly identify the dtype regardless of trailing content. Co-authored-by: greenc-FNAL <2372949+greenc-FNAL@users.noreply.github.com> --- plugins/python/src/modulewrap.cpp | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/plugins/python/src/modulewrap.cpp b/plugins/python/src/modulewrap.cpp index e90bade0..bd8b2436 100644 --- a/plugins/python/src/modulewrap.cpp +++ b/plugins/python/src/modulewrap.cpp @@ -913,29 +913,29 @@ static bool insert_input_converters(py_phlex_module* mod, pos += 18; std::string py_out = cname + "_" + inp + "py"; - if (inp_type.compare(pos, std::string::npos, "uint32]]") == 0) { + if (inp_type.compare(pos, 8, "uint32]]") == 0) { mod->ph_module->transform("pyvuint_" + inp + "_" + cname, vuint_to_py, concurrency::serial) .input_family(product_query{product_specification::create(inp), LAYER}) .output_products(py_out); - } else if (inp_type.compare(pos, std::string::npos, "int32]]") == 0) { + } else if (inp_type.compare(pos, 7, "int32]]") == 0) { mod->ph_module->transform("pyvint_" + inp + "_" + cname, vint_to_py, concurrency::serial) .input_family(product_query{product_specification::create(inp), LAYER}) .output_products(py_out); - } else if (inp_type.compare(pos, std::string::npos, "uint64]]") == 0) { // id. + } else if (inp_type.compare(pos, 8, "uint64]]") == 0) { // id. 
mod->ph_module ->transform("pyvulong_" + inp + "_" + cname, vulong_to_py, concurrency::serial) .input_family(product_query{product_specification::create(inp), LAYER}) .output_products(py_out); - } else if (inp_type.compare(pos, std::string::npos, "int64]]") == 0) { // need not be true + } else if (inp_type.compare(pos, 7, "int64]]") == 0) { // need not be true mod->ph_module->transform("pyvlong_" + inp + "_" + cname, vlong_to_py, concurrency::serial) .input_family(product_query{product_specification::create(inp), LAYER}) .output_products(py_out); - } else if (inp_type.compare(pos, std::string::npos, "float32]]") == 0) { + } else if (inp_type.compare(pos, 9, "float32]]") == 0) { mod->ph_module ->transform("pyvfloat_" + inp + "_" + cname, vfloat_to_py, concurrency::serial) .input_family(product_query{product_specification::create(inp), LAYER}) .output_products(py_out); - } else if (inp_type.compare(pos, std::string::npos, "float64]]") == 0) { + } else if (inp_type.compare(pos, 9, "float64]]") == 0) { mod->ph_module ->transform("pyvdouble_" + inp + "_" + cname, vdouble_to_py, concurrency::serial) .input_family(product_query{product_specification::create(inp), LAYER}) @@ -1052,29 +1052,29 @@ static PyObject* md_transform(py_phlex_module* mod, PyObject* args, PyObject* kw pos += 18; auto py_in = "py" + output + "_" + cname; - if (output_type.compare(pos, std::string::npos, "int32]]") == 0) { + if (output_type.compare(pos, 7, "int32]]") == 0) { mod->ph_module->transform("pyvint_" + output + "_" + cname, py_to_vint, concurrency::serial) .input_family(product_query{product_specification::create(py_in), LAYER}) .output_products(output); - } else if (output_type.compare(pos, std::string::npos, "uint32]]") == 0) { + } else if (output_type.compare(pos, 8, "uint32]]") == 0) { mod->ph_module->transform("pyvuint_" + output + "_" + cname, py_to_vuint, concurrency::serial) .input_family(product_query{product_specification::create(py_in), LAYER}) .output_products(output); - } else if (output_type.compare(pos, std::string::npos, "int64]]") == 0) { // need not be true + } else if (output_type.compare(pos, 7, "int64]]") == 0) { // need not be true mod->ph_module->transform("pyvlong_" + output + "_" + cname, py_to_vlong, concurrency::serial) .input_family(product_query{product_specification::create(py_in), LAYER}) .output_products(output); - } else if (output_type.compare(pos, std::string::npos, "uint64]]") == 0) { // id. + } else if (output_type.compare(pos, 8, "uint64]]") == 0) { // id. 
mod->ph_module ->transform("pyvulong_" + output + "_" + cname, py_to_vulong, concurrency::serial) .input_family(product_query{product_specification::create(py_in), LAYER}) .output_products(output); - } else if (output_type.compare(pos, std::string::npos, "float32]]") == 0) { + } else if (output_type.compare(pos, 9, "float32]]") == 0) { mod->ph_module ->transform("pyvfloat_" + output + "_" + cname, py_to_vfloat, concurrency::serial) .input_family(product_query{product_specification::create(py_in), LAYER}) .output_products(output); - } else if (output_type.compare(pos, std::string::npos, "float64]]") == 0) { + } else if (output_type.compare(pos, 9, "float64]]") == 0) { mod->ph_module ->transform("pyvdouble_" + output + "_" + cname, py_to_vdouble, concurrency::serial) .input_family(product_query{product_specification::create(py_in), LAYER}) From 9ada41c718c049c051815754bc0bbfba7092269f Mon Sep 17 00:00:00 2001 From: "google-labs-jules[bot]" <161369871+google-labs-jules[bot]@users.noreply.github.com> Date: Tue, 3 Feb 2026 22:21:33 +0000 Subject: [PATCH 66/67] Add diagnostics for test segfault - Added safety checks for null ports in `phlex/core/edge_maker.cpp` and `phlex/core/edge_maker.hpp` to prevent intermittent segfaults during graph setup, especially in benchmarks. - Improved diagnostic error messages when null ports are encountered. Co-authored-by: greenc-FNAL <2372949+greenc-FNAL@users.noreply.github.com> --- phlex/core/edge_maker.cpp | 4 ++++ phlex/core/edge_maker.hpp | 7 +++++++ 2 files changed, 11 insertions(+) diff --git a/phlex/core/edge_maker.cpp b/phlex/core/edge_maker.cpp index 9f9e3362..8a91300d 100644 --- a/phlex/core/edge_maker.cpp +++ b/phlex/core/edge_maker.cpp @@ -28,6 +28,10 @@ namespace phlex::experimental { provider.full_name(), node_name, port.product_label.to_string()); + if (port.port == nullptr) { + throw std::runtime_error("Unexpected null port while connecting provider " + + provider.full_name() + " to node " + node_name); + } make_edge(provider.sender(), *(port.port)); found_match = true; break; diff --git a/phlex/core/edge_maker.hpp b/phlex/core/edge_maker.hpp index f837231f..55541aea 100644 --- a/phlex/core/edge_maker.hpp +++ b/phlex/core/edge_maker.hpp @@ -73,6 +73,10 @@ namespace phlex::experimental { continue; } + if (producer->port == nullptr or receiver_port == nullptr) { + throw std::runtime_error("Unexpected null port while connecting " + + producer->node.full() + " to " + node_name); + } make_edge(*producer->port, *receiver_port); } } @@ -93,6 +97,9 @@ namespace phlex::experimental { for (auto const& [output_name, output_node] : outputs) { make_edge(source, output_node->port()); for (auto const& named_port : producers_.values()) { + if (named_port.to_output == nullptr) { + throw std::runtime_error("Unexpected null output port for " + named_port.node.full()); + } make_edge(*named_port.to_output, output_node->port()); } } From 267cbb50bdab8100992e9472cdc3b009ef855224 Mon Sep 17 00:00:00 2001 From: Chris Green Date: Tue, 3 Feb 2026 16:42:32 -0600 Subject: [PATCH 67/67] Incorporate @wlav's use of `OrderedDict` into `Variant` --- plugins/python/python/phlex/__init__.py | 42 ++++++-- plugins/python/src/modulewrap.cpp | 133 ++++++------------------ 2 files changed, 62 insertions(+), 113 deletions(-) diff --git a/plugins/python/python/phlex/__init__.py b/plugins/python/python/phlex/__init__.py index bb18b980..dfe85af3 100644 --- a/plugins/python/python/phlex/__init__.py +++ b/plugins/python/python/phlex/__init__.py @@ -1,12 +1,28 @@ -"""Phlex Python Utilities. 
+"""Annotation helper for C++ typing variants. -Call helpers and type annotation tools for the Phlex framework. +Python algorithms are generic, like C++ templates, but the Phlex registration +process requires a single unique signature. These helpers generate annotated +functions for registration with the proper C++ types. """ +import collections import copy +import inspect from typing import Any, Callable +class MissingAnnotation(Exception): + """Exception noting the missing of an argument in the provied annotations.""" + + def __init__(self, arg: str): + """Construct exception from the name of the argument without annotation.""" + self.arg = arg + + def __str__(self): + """Report the argument that is missing an annotation.""" + return "argument '%s' is not annotated" % self.arg + + class Variant: """Wrapper to associate custom annotations with a callable. @@ -57,16 +73,24 @@ def __init__( self.phlex_callable = copy.copy(f) else: self.phlex_callable = f - self.__annotations__ = annotations + + # annotions are expected as an ordinary dict and should be ordered, but + # we do not require it, so re-order based on the function's co_varnames + self.__annotations__ = collections.OrderedDict() + + sig = inspect.signature(self.phlex_callable) + for k, v in sig.parameters.items(): + try: + self.__annotations__[k] = annotations[k] + except KeyError as e: + if v.default is inspect.Parameter.empty: + raise MissingAnnotation(k) from e + + self.__annotations__['return'] = annotations.get('return', None) + self.__name__ = name self._allow_call = allow_call - # Expose __code__ from the underlying callable if available, to aid - # introspection (e.g. by C++ modulewrap). - self.__code__ = getattr(self.phlex_callable, "__code__", None) - self.__defaults__ = getattr(self.phlex_callable, "__defaults__", None) - self.__kwdefaults__ = getattr(self.phlex_callable, "__kwdefaults__", None) - def __call__(self, *args, **kwargs): """Raises an error if called directly. diff --git a/plugins/python/src/modulewrap.cpp b/plugins/python/src/modulewrap.cpp index bd8b2436..d98bf53b 100644 --- a/plugins/python/src/modulewrap.cpp +++ b/plugins/python/src/modulewrap.cpp @@ -663,107 +663,6 @@ namespace { } // unnamed namespace -// Helper class to extract annotations in argument definition order. -// -// Rationale: -// The __annotations__ dictionary does not guarantee any iteration order -// relative to the function arguments (especially in older Python versions -// or for certain callable types). Iterating blindly over __annotations__ -// can yield input types in a permuted order, causing Phlex to bind -// C++ inputs to the wrong Python arguments (e.g. matching an 'int' product -// to a 'float' argument). -// -// This class attempts to retrieve the bytecode object (__code__) to access -// co_varnames, which provides the authoritative argument order. It falls -// back to dictionary iteration only if introspection fails. -// -// This logic mirrors the Python test class variant.py originally from PR #245. 
-class Variant { - PyObject* m_callable; - PyObject* m_annotations; - -public: - Variant(PyObject* callable) : m_callable(callable), m_annotations(nullptr) - { - PyObject* name = PyUnicode_FromString("__annotations__"); - m_annotations = PyObject_GetAttr(m_callable, name); - if (!m_annotations) { - PyErr_Clear(); - // the callable may be an instance with a __call__ method - PyObject* call = PyObject_GetAttrString(m_callable, "__call__"); - if (call) { - m_annotations = PyObject_GetAttr(call, name); - Py_DECREF(call); - } - } - Py_DECREF(name); - } - - ~Variant() { Py_XDECREF(m_annotations); } - - void get_input_types(std::vector& types) - { - if (!m_annotations || !PyDict_Check(m_annotations)) { - return; - } - - // Try to use the code object to get the argument names in order - PyObject* code = PyObject_GetAttrString(m_callable, "__code__"); - if (!code) { - PyErr_Clear(); - PyObject* call = PyObject_GetAttrString(m_callable, "__call__"); - if (call) { - code = PyObject_GetAttrString(call, "__code__"); - Py_DECREF(call); - } - } - - bool found = false; - if (code) { - PyObject* varnames = PyObject_GetAttrString(code, "co_varnames"); - PyObject* argcount = PyObject_GetAttrString(code, "co_argcount"); - if (varnames && argcount) { - long count = PyLong_AsLong(argcount); - for (long i = 0; i < count; ++i) { - PyObject* name = PyTuple_GetItem(varnames, i); - if (name) { - PyObject* type = PyDict_GetItem(m_annotations, name); - if (type) { - types.push_back(annotation_as_text(type)); - found = true; - } - } - } - } - Py_XDECREF(varnames); - Py_XDECREF(argcount); - Py_DECREF(code); - } - - // Fallback to dictionary iteration if code object was not helpful - if (!found) { - PyObject *key, *val; - Py_ssize_t pos = 0; - while (PyDict_Next(m_annotations, &pos, &key, &val)) { - if (PyUnicode_Check(key) && PyUnicode_CompareWithASCIIString(key, "return") == 0) { - continue; - } - types.push_back(annotation_as_text(val)); - } - } - } - - void get_return_type(std::vector& types) - { - if (m_annotations && PyDict_Check(m_annotations)) { - PyObject* ret = PyDict_GetItemString(m_annotations, "return"); - if (ret) { - types.push_back(annotation_as_text(ret)); - } - } - } -}; - #define INSERT_INPUT_CONVERTER(name, alg, inp) \ mod->ph_module->transform("py" #name "_" + inp + "_" + alg, name##_to_py, concurrency::serial) \ .input_family(product_query{product_specification::create(inp), LAYER}) \ @@ -838,9 +737,35 @@ static PyObject* parse_args(PyObject* args, // retrieve C++ (matching) types from annotations input_types.reserve(input_labels.size()); - Variant adj(callable); - adj.get_return_type(output_types); - adj.get_input_types(input_types); + PyObject* sann = PyUnicode_FromString("__annotations__"); + PyObject* annot = PyObject_GetAttr(callable, sann); + if (!annot) { + // the callable may be an instance with a __call__ method + PyErr_Clear(); + PyObject* callm = PyObject_GetAttrString(callable, "__call__"); + if (callm) { + annot = PyObject_GetAttr(callm, sann); + Py_DECREF(callm); + } + } + Py_DECREF(sann); + + if (annot && PyDict_Check(annot) && PyDict_Size(annot)) { + PyObject* ret = PyDict_GetItemString(annot, "return"); + if (ret) + output_types.push_back(annotation_as_text(ret)); + + // dictionary is ordered with return last if provide (note: the keys here + // could be used as input labels, instead of the ones from the configuration, + // but that is probably not practical in actual use, so they are ignored) + PyObject* values = PyDict_Values(annot); + for (Py_ssize_t i = 0; i < 
(PyList_GET_SIZE(values) - (ret ? 1 : 0)); ++i) { + PyObject* item = PyList_GET_ITEM(values, i); + input_types.push_back(annotation_as_text(item)); + } + Py_DECREF(values); + } + Py_XDECREF(annot); // ignore None as Python's conventional "void" return, which is meaningless in C++ if (output_types.size() == 1 && output_types[0] == "None")