
feat: API expansion and quantics Rust parity #22

Closed
shinaoka wants to merge 17 commits into main from feat/api-expansion

Conversation

Member

shinaoka commented Mar 31, 2026

Summary

This PR extends the existing Julia API expansion with quantics Rust parity on top of the merged tensor4all-rs C API work.

In addition to the earlier API cleanup and ComplexF64 work already on this branch, this update adds:

  • top-level DiscretizedGrid, InherentDiscreteGrid, and localdimensions exports
  • unfolding=:grouped support in QuanticsGrids
  • Julia C bindings for multivariable, affine, and binary-op quantics transforms
  • high-level QuanticsTransform wrappers for those operators
  • set_input_space!, set_output_space!, and set_iospaces! so LinearOperator application works with Julia-created MPS objects
  • focused regression tests for the new quantics grid/transform surface
  • a fix for the stale ComplexF64 tensor ABI in the Julia wrapper so complex tensor creation/data round-trip now matches the current Rust C API
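For orientation, `localdimensions` on a quantics grid is just the per-site dimension vector; for a binary quantics grid with R bits per variable and d variables, that is d·R twos. A self-contained illustration of the shape (not the exported package function — the name and signature here are hypothetical):

```julia
# Illustrative only: a binary quantics grid with R bits per variable and
# d variables has R*d sites, each of local dimension 2.
illustrative_localdims(R::Int, d::Int) = fill(2, R * d)

@assert illustrative_localdims(4, 1) == [2, 2, 2, 2]
@assert length(illustrative_localdims(10, 3)) == 30
```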

Existing branch scope

This branch also contains the broader API expansion that was already open on PR #22:

  • 1-indexed everywhere: SimpleTT evaluate, sitetensor now 1-indexed. TreeTCI pivots/evaluate now 1-indexed.
  • Function renames: site_dims → sitedims, link_dims → linkdims, site_tensor → sitetensor, local_dimensions → localdimensions
  • QuanticsTCI returns tuple: quanticscrossinterpolate now returns (qtci, ranks, errors)
  • QuanticsTCI signature: first arg is ::Type{V}
  • SimpleTT additions: ComplexF64 support, arithmetic, scale!, reverse, fulltensor, compress!, site-tensor constructors, and TreeTN conversions
  • TreeTCI cleanup: 1-indexed user API and naming alignment
  • TCI extension: SimpleTensorTrain ↔ TensorCrossInterpolation.TensorTrain

Why

ReFrequenTT needs the quantics backend surface more than it needs a revived TensorCI compatibility layer. The missing migration blockers were multivariable transforms, affine embeddings, mixed boundary conditions, and a clean path from Julia MPS objects into the quantics operator apply path.

The ComplexF64 failures turned out to be a separate wrapper bug: Tensor4all.jl was still calling the old split real/imag ABI for dense complex tensors, while the current Rust C API uses a single interleaved buffer. This PR aligns the Julia wrapper with the Rust ABI for both construction and extraction.
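The interleaved layout is easy to picture in plain Julia: `reinterpret` maps a single `[re, im, re, im, …]` Float64 buffer onto `ComplexF64` values without copying. This is a generic sketch of the ABI shape, not the wrapper's actual code:

```julia
# A single interleaved Float64 buffer, as the current Rust C API expects:
buf = Float64[1.0, 2.0, 3.0, 4.0]   # re1, im1, re2, im2

# View the same memory as complex values (no copy):
z = reinterpret(ComplexF64, buf)
@assert collect(z) == [1.0 + 2.0im, 3.0 + 4.0im]

# Going the other way, a complex array flattens back to interleaved doubles:
back = reinterpret(Float64, collect(z))
@assert collect(back) == buf
```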

Validation

Passed:

  • TENSOR4ALL_RS_PATH=/home/shinaoka/tensor4all/tensor4all-rs/.worktrees/feat-quanticstransform-capi julia --startup-file=no --project=/home/shinaoka/tensor4all/Tensor4all.jl deps/build.jl
  • julia --startup-file=no --project=/home/shinaoka/tensor4all/Tensor4all.jl -e 'using Test, Tensor4all; const T4AIndex=Tensor4all.Index; const T4ATensor=Tensor4all.Tensor; include("/home/shinaoka/tensor4all/Tensor4all.jl/test/test_tensor.jl")'
  • julia --startup-file=no --project=/home/shinaoka/tensor4all/Tensor4all.jl -e 'using Test, Tensor4all, Tensor4all.SimpleTT, Tensor4all.TreeTN; include("/home/shinaoka/tensor4all/Tensor4all.jl/test/test_conversions.jl")'
  • julia --startup-file=no --project=/home/shinaoka/tensor4all/Tensor4all.jl -e 'using Test, Tensor4all, Tensor4all.TreeTN; const T4AIndex=Tensor4all.Index; include("/home/shinaoka/tensor4all/Tensor4all.jl/test/test_treetn.jl")'
  • julia --startup-file=no --project=/home/shinaoka/tensor4all/Tensor4all.jl -e 'include("/home/shinaoka/tensor4all/Tensor4all.jl/test/test_quanticsgrids.jl")'
  • julia --startup-file=no --project=/home/shinaoka/tensor4all/Tensor4all.jl -e 'include("/home/shinaoka/tensor4all/Tensor4all.jl/test/test_quanticstransform.jl")'
  • T4A_SKIP_HDF5_TESTS=1 julia --startup-file=no --project=/home/shinaoka/tensor4all/Tensor4all.jl /home/shinaoka/tensor4all/Tensor4all.jl/test/runtests.jl

Current suite status in this environment (with T4A_SKIP_HDF5_TESTS=1):

  • 443 passed
  • 0 failed
  • 0 broken

shinaoka and others added 16 commits March 31, 2026 20:48
…CI c64 C API functions

Add ~55 new ccall wrappers to C_API.jl for the expanded tensor4all-rs C API:
- SimpleTT f64: compress, partial_sum, from_site_tensors, add, scale, dot, reverse, fulltensor
- SimpleTT c64: full set (lifecycle, constructors, accessors, operations)
- QtciOptions: lifecycle (default, release, clone) and all setters
- QuanticsTCI f64: clone, max_bond_error, max_rank
- QuanticsTCI c64: full set (lifecycle, accessors, operations, interpolation)
- Update crossinterpolate_f64/discrete_f64 signatures with options, initial_pivots, output params

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
… new operations

Complete rewrite of SimpleTT.jl:
- Type parameter now Union{Float64, ComplexF64} (was T<:Real, Float64 only)
- Type dispatch via _api(T, :name) helper for f64/c64 C API calls
- All user-facing indices are 1-based (internal C API calls subtract 1)
- Renamed functions: site_dims->sitedims, link_dims->linkdims, site_tensor->sitetensor
- New operations: +, -, scalar *, dot, scale!, reverse, fulltensor, from_site_tensors
- New functions: compress!, partial_sum (both supporting f64 and c64)
- ComplexF64 data handled via reinterpret of interleaved doubles

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
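The `_api(T, :name)` dispatch idea above can be sketched in a few lines: the element type selects the f64 or c64 C-symbol prefix. The prefixes below are illustrative, not the actual tensor4all-rs symbol names:

```julia
# Hypothetical sketch of type-to-symbol dispatch for f64/c64 C API calls.
# The "t4a_tt_*" prefixes are made up for illustration.
_prefix(::Type{Float64}) = "t4a_tt_f64_"
_prefix(::Type{ComplexF64}) = "t4a_tt_c64_"
_api(::Type{T}, name::Symbol) where {T} = Symbol(_prefix(T), name)

@assert _api(Float64, :len) == :t4a_tt_f64_len
@assert _api(ComplexF64, :len) == :t4a_tt_c64_len
```

With this helper, every wrapper function can be written once, generic in T, instead of being duplicated per element type.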
…, overloads

- QuanticsTensorCI2{V} is now type-parameterized (V = Float64 or ComplexF64)
- quanticscrossinterpolate returns (qtci, ranks, errors) tuple
- First argument is ::Type{V} to match Pure Julia API
- Build ephemeral QtciOptions handle from kwargs, release after call
- Support initial_pivots, out_ranks, out_errors, out_n_iters
- Add maxbonderror(qtci) and maxrank(qtci) accessors
- ComplexF64 callback trampolines write [re, im] to result buffer
- evaluate, sum, integral for c64 return ComplexF64
- to_tensor_train for c64 returns SimpleTensorTrain{ComplexF64}
- Rename link_dims to linkdims
- Add overloads: size tuple (discrete), Array (from dense)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
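The "[re, im] to result buffer" convention can be sketched as follows. This is an illustrative trampoline, not the wrapper's actual code: the C side hands the callback a raw two-slot Float64 buffer, and the Julia side evaluates the user function and writes the real and imaginary parts into it:

```julia
# Illustrative c64 callback trampoline: evaluate the user function and
# write [re, im] into a preallocated Float64 result buffer.
function c64_trampoline!(result::Vector{Float64}, f, x)
    z = ComplexF64(f(x))
    result[1] = real(z)
    result[2] = imag(z)
    return nothing
end

buf = zeros(Float64, 2)
c64_trampoline!(buf, x -> exp(im * x), 0.0)
@assert buf == [1.0, 0.0]
```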
Cover Float64 and ComplexF64 variants:
- continuous grid (DiscretizedGrid)
- discrete domain (size tuple)
- from Array
- evaluate, sum, integral, to_tensor_train
- maxbonderror, maxrank, linkdims
- callable interface
- kwargs/options passthrough

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- All user-facing APIs now use 1-indexed (Julia convention)
- crossinterpolate2: initialpivots default is [ones(Int, n)]; pivots
  are converted to 0-indexed before passing to C API
- evaluate(ttn, indices): subtracts 1 from all indices before C API call
- evaluate(ttn, batch::Matrix): subtracts 1 from all values
- Callback trampolines: add 1 to batch values from C API before calling
  user function, so user always sees 1-indexed values
- Rename bond_dims -> bonddims, max_bond_error -> maxbonderror,
  max_rank -> maxrank, max_sample_value -> maxsamplevalue
- Replace TensorCI module include with TreeTCI in Tensor4all.jl
- Add comprehensive test/test_treetci.jl

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
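The boundary convention described above reduces to a pair of shifts: Julia-facing indices are 1-based, the C API is 0-based, and callbacks shift back so user code only ever sees 1-based values. A minimal self-contained sketch (illustrative helper names, not the package code):

```julia
# Illustrative index-convention helpers for the Julia <-> C API boundary.
to_c(indices) = indices .- 1      # applied before calling into the C API
from_c(indices) = indices .+ 1    # applied inside callback trampolines

@assert to_c([1, 1, 2]) == [0, 0, 1]
@assert from_c([0, 0, 1]) == [1, 1, 2]
@assert from_c(to_c([3, 7])) == [3, 7]   # round trip is the identity
```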
Add MPS(tt::SimpleTensorTrain) and SimpleTensorTrain(mps::TreeTensorNetwork{Int})
constructors in TreeTN.jl for converting between the two representations.
Includes tests for round-trip conversions (both directions) with Float64 and ComplexF64.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add weak dependency on TensorCrossInterpolation and create
Tensor4allTCIExt extension providing bidirectional conversion between
TensorCrossInterpolation.TensorTrain and SimpleTensorTrain.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…t convention

Define linkdims and compress! as generic functions in Tensor4all namespace so
submodules (SimpleTT, TreeTN, QuanticsTCI) extend the same function instead of
creating separate ones that collide when multiple submodules are loaded together.
Export rank and compress! from SimpleTT.  Fix ComplexF64 dot product to follow
Julia convention (sesquilinear, conjugating the first argument) by adding a conj
implementation for SimpleTensorTrain and using it before calling the Rust bilinear
dot.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
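The dot-product fix above can be demonstrated with plain vectors: the Rust backend computes a bilinear dot (no conjugation), so conjugating the first argument before the bilinear call recovers Julia's sesquilinear convention. The `bilinear_dot` below is a stand-in for the Rust call, not the actual FFI:

```julia
using LinearAlgebra

# Stand-in for the Rust bilinear dot (no conjugation):
bilinear_dot(a, b) = sum(a .* b)
# The fix: conjugate the first argument, then call the bilinear dot.
sesquilinear_dot(a, b) = bilinear_dot(conj.(a), b)

a = ComplexF64[1 + 2im, 3 - 1im]
b = ComplexF64[2 - 1im, 1 + 1im]
@assert sesquilinear_dot(a, b) == dot(a, b)   # matches LinearAlgebra.dot
@assert bilinear_dot(a, b) != dot(a, b)       # the unconjugated dot differs
```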
- Define evaluate, maxbonderror, maxrank as generic functions in
  Tensor4all module (alongside linkdims and compress!) so submodules
  share a single dispatch table instead of creating module-local
  functions that can't be found across module boundaries.
- Change QuanticsTCI.sum to extend Base.sum (matching SimpleTT pattern)
  so that sum(qtci) dispatches correctly without explicit import.
- Update SimpleTT to import and re-export evaluate from parent module.
- Update test imports to use Tensor4all-level generics.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ailures

QuanticsTCI.evaluate expects grid indices (1-indexed, one per original
dimension), not quantics indices. Changed tests to use origcoord_to_grididx
instead of origcoord_to_quantics. Skip ComplexF64 round-trip conversion
test since the C API does not yet support complex tensor creation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
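The grid-index/quantics-index distinction comes down to the standard binary encoding: one 1-based grid index on a 2^R grid versus R per-site digits in {1, 2}. A self-contained sketch of that encoding (illustrative; the package's own digit ordering and conventions may differ):

```julia
# Illustrative quantics encoding: a 1-based grid index i on a 2^R grid
# maps to R binary digits (most significant first), shifted to {1, 2}.
grididx_to_quantics(i::Int, R::Int) = reverse(digits(i - 1, base=2, pad=R)) .+ 1
quantics_to_grididx(q) = foldl((acc, b) -> 2acc + (b - 1), q; init=0) + 1

@assert grididx_to_quantics(1, 3) == [1, 1, 1]
@assert grididx_to_quantics(6, 3) == [2, 1, 2]   # 6 - 1 = 5 = 0b101
@assert quantics_to_grididx(grididx_to_quantics(6, 3)) == 6
```

QuanticsTCI.evaluate takes the left-hand form (one grid index per original dimension), which is why the tests switched to origcoord_to_grididx.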
shinaoka changed the title from "feat: API expansion — ComplexF64, 1-indexed, naming alignment, new operations" to "feat: API expansion and quantics Rust parity" on Apr 6, 2026
Member Author

shinaoka commented Apr 6, 2026

This PR is superseded by #25.

feat/api-expansion drifted too far from current main and now conflicts heavily with already-merged work, so I rebuilt the actionable fix on a clean branch from main instead.

Please review and merge #25. That replacement PR contains the narrow fix for the ComplexF64 SimpleTT site-tensor regression plus the corresponding round-trip regression test.

Member Author

shinaoka commented Apr 6, 2026

Superseded by #25.

shinaoka closed this Apr 6, 2026
auto-merge was automatically disabled April 6, 2026 08:20


