
Conversation

@akaptano (Contributor) commented Oct 23, 2025

Hello! I am attempting to get my old current voxels branch (corresponding to this paper) merged into main. I have updated it with the current version of main, fixed a number of issues, and will continue to improve the code coverage and documentation. I will reach out when this is ready for a look-through. Thanks!

akaptano and others added 30 commits January 10, 2023 10:08
… do some tests to see if it is working, like checking that projection onto C * alpha actually produces a divergence-free current. Using sparse matrices for now, which seems required for such a large matrix. Also noticed that for nhat = xhat, only five of the linear basis functions phi_i are picked out, and for nhat = yhat or zhat, only four of the linear basis functions are picked out.
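For reference, here is a minimal sketch (with a random placeholder matrix, not the actual flux-constraint matrix from this branch) of the kind of check described above: project a trial coefficient vector alpha onto the null space of a sparse constraint matrix C and verify that C @ alpha is numerically zero afterwards, i.e. that the projected current is divergence-free in the discrete sense.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(0)

# Placeholder sparse constraint matrix C (n_constraints x n_dofs); in the real
# problem its rows would encode the divergence-free conditions on alpha.
n_con, n_dof = 50, 200
C = sp.random(n_con, n_dof, density=0.05, random_state=0, format="csr")
alpha = rng.standard_normal(n_dof)

# Project alpha onto the null space of C:
#   alpha_proj = alpha - C^T (C C^T)^{-1} C alpha
alpha_proj = alpha - C.T @ spsolve((C @ C.T).tocsc(), C @ alpha)

# The constraint residual should now be at round-off level.
print(np.linalg.norm(C @ alpha_proj))
```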
…ration actually looked pretty good a day or two ago, and now it looks bad, so I'm not sure what went wrong. I think it was only working well when generating a coil surface with the same symmetries as the plasma surface, since the function was using the values of phi from the plasma surface (which only span 1 / (2 nfp) of the torus). Regardless, the function was overly complicated, and it should be easier to debug and generate a full coil surface now.
…accidentally made xyz_inner = xyz_outer so that the surfaces were the same -- naturally, this was the biggest problem. Still looks a little funny, but I suspect this is just the low resolution. Also moved the flux_constraint_matrix calculation to Python in order to use scipy sparse matrices, which seems to be going okay. Using sparse matrices appears to be necessary here no matter what because the constraint matrix is so large. Getting close to working on the Tikhonov version of the problem without constraints now.
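As a generic illustration of the sparse-matrix point (the entries below are placeholders, not the actual flux constraints), the usual scipy pattern is to collect (row, column, value) triplets, build the matrix in COO format, and convert to CSR for fast products, since a dense version of a matrix this size would not fit in memory.

```python
import scipy.sparse as sp

N = 10_000          # placeholder number of cells / unknowns
rows, cols, vals = [], [], []
for i in range(N - 1):
    # Hypothetical stencil: each row couples a cell to its +x neighbor.
    rows += [i, i]
    cols += [i, i + 1]
    vals += [1.0, -1.0]

flux_constraint_matrix = sp.coo_matrix((vals, (rows, cols)), shape=(N, N)).tocsr()
print(flux_constraint_matrix.nnz, flux_constraint_matrix.shape)
```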
…out the constraints). The solver is not working yet, so maybe there is an issue with the matrix definitions (seems quite likely). Setting up the projection operator, even using the scipy sparse matrices, is taking a long time, although it only has to be done once, so I am trying to figure out exactly how long it takes.
…d by finding an error in the B_matrix setup where the integration points were not being accessed correctly (same with Phi). Now implemented a WindingVolumeField so that I can start plotting the Bnormal solution after optimization, making sure that the optimization is doing what it is intended to do.
…tter. Full optimization and plotting are now running, although the optimized Bnormal fields look quite wrong, so there are presumably fixes to make in the matrix definitions.
…king. The least squares calculation of Bnormal actually appears to be correct, as far as I can tell, but for some reason the Bnormal computed with the WindingVolumeField object is off. The calculation is just Biot-Savart, so I am not sure what is going wrong here.
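A sanity check of the kind described above can be done with any simsopt MagneticField: evaluate the field on the plasma-surface quadrature points, dot it with the unit normals, and compare against the Bnormal coming out of the least squares matrices. The sketch below is only an illustration: it uses a simple ToroidalField as a stand-in for the current-voxel field and assumes the standard simsopt Surface/MagneticField interface.

```python
import numpy as np
from simsopt.geo import SurfaceRZFourier
from simsopt.field import ToroidalField

# Plasma boundary (default quadrature points) and a stand-in field.
surf = SurfaceRZFourier(nfp=2, mpol=2, ntor=2)
surf.set_rc(0, 0, 1.0)
surf.set_rc(1, 0, 0.3)
surf.set_zs(1, 0, 0.3)
field = ToroidalField(R0=1.0, B0=1.0)

# Evaluate B on the surface quadrature points and take B . n.
xyz = surf.gamma()                          # (nphi, ntheta, 3)
field.set_points(xyz.reshape(-1, 3))
B = field.B().reshape(xyz.shape)
Bnormal = np.sum(B * surf.unitnormal(), axis=2)
print(np.max(np.abs(Bnormal)))
```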
…or the geo factor calculation. Now both the geo factor and the B field at the end are computed with a new winding_field_Bext function that computes the geometric factor without the nhat term. The optimization is now running well with the Bnormal L2 term and a Tikhonov term. The next step is to drop the coils and start using the Itarget term.
…ger necessary, got a curve working that traces out the plasma boundary, and wrote a QA example script. Next step is probably to impose the symmetries in the current voxels before attempting the constraints.
… made a small change to make the example more realistic.
…work the same as in the permanent magnet case. Everything seems to work and be consistent, but we are probably computing the B field incorrectly, because the intracell integration is trickier here. Added an NCSX example as a robust check on the symmetries.
…the permanent magnets do with the symmetries. Talked to Matt today, and what I'm doing (rotating the entire grid) works for all stellarators, but for nfp not equal to 2 or 4, it generates empty gaps where the symmetric parts of the coil surface should have been stitched together. I think the upshot is that in these cases we should make sure to only impose the stellarator symmetry. For nfp = 2, the code should already be working to impose both symmetries correctly, without flux constraint issues (from the empty gaps), while for nfp = 4 we need to be more careful. The next step is to fix the constraint matrix to obey the symmetries and correctly stitch together the surfaces (e.g. for nfp = 2) at phi = 0 and phi = pi / 2. While getting the constraint matrix finally working, it makes the most sense to use the nfp = 2 QA example to avoid the edge cases for now.
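For concreteness, "rotating the entire grid" here means something like the following numpy sketch (my illustration, not the branch's code): replicate the points from one field period by discrete rotations of 2*pi*k/nfp about the z axis, optionally composed with the stellarator-symmetry reflection (x, y, z) -> (x, -y, -z).

```python
import numpy as np

def replicate_grid(xyz, nfp, stellsym=True):
    """Replicate (N, 3) grid points around the torus.

    Rotations are about the z axis by 2*pi*k/nfp; the stellarator-symmetric
    copy reflects (x, y, z) -> (x, -y, -z).
    """
    base = [xyz]
    if stellsym:
        base.append(xyz * np.array([1.0, -1.0, -1.0]))
    copies = []
    for k in range(nfp):
        phi = 2 * np.pi * k / nfp
        R = np.array([[np.cos(phi), -np.sin(phi), 0.0],
                      [np.sin(phi),  np.cos(phi), 0.0],
                      [0.0,          0.0,         1.0]])
        copies += [pts @ R.T for pts in base]
    return np.concatenate(copies, axis=0)

# e.g. 100 points in one half period of an nfp = 2 device -> 400 points total.
pts = np.random.default_rng(0).uniform(size=(100, 3))
print(replicate_grid(pts, nfp=2).shape)
```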
…h the coil symmetries (using the QA example). Debugged divJ and found a crucial error where Phi basis function 3 was incorrect ([x, 0, 0] instead of [z, 0, 0], so it had finite divergence). Fixed a whole host of errors with the flux jump constraints, and implemented the strategy for the constraints suggested by Matt. Still occasionally having some issues getting a nonsingular constraint matrix; the next step is eventually to get some tests working that make sure the constraints are actually getting the flux jumps to match as desired.
…of the indexing to get the flux matching right. However, I clearly still don't have the right number of constraints, since the number of constraints should always be greater than 3N. To see this, note that if we did things naively and made 6 constraints for every cell, we would have 6N constraints. Avoiding double counting gets you down to 3N constraints, but there are more than this because some cells don't have an adjacent cell in some directions.
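This count can be checked directly on a small uniform grid, assuming (as I read the description above) that every cell face carries one condition, with faces shared by two cells counted once:

```python
def count_flux_constraints(nx, ny, nz):
    """Flux constraints on a uniform nx x ny x nz voxel grid: all 6*N cell
    faces carry a condition, but interior faces are shared by two cells and
    counted once, so the total is 6*N - (interior faces) > 3*N."""
    N = nx * ny * nz
    interior = (nx - 1) * ny * nz + nx * (ny - 1) * nz + nx * ny * (nz - 1)
    return 6 * N - interior

# A 4 x 4 x 4 grid: N = 64, so 3N = 192, but there are 240 constraints.
print(count_flux_constraints(4, 4, 4))
```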
…an issue where minus ones were not assigned to all the possible locations (so that the for loop in Python knows exactly where adjacent cells are missing), and for a 2D uniform grid it appears to be working correctly to generate the appropriate number of constraints. However, the inverse calculation is still complaining.
…th all the L2 loss terms and the flux jumps working. A few things still to do: (1) investigate the mysterious blow up in the optimization every once in a while (seems to be an issue with projected gradient descent with acceleration, and sometimes ameliorated with large nx?); (2) plot and investigate the singular values of the flux matrix -- Matt made a plausible argument for why it may not be full rank, and this is an issue for the projection matrix calculation; (3) implement the stitching together of the cells at the boundary of the half-period surface, so we allow for currents that can cross this boundary; and (4) as the number of cells gets big, may need to introduce the sparse scipy matrices and retime the calculation.
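For item (1), a generic FISTA-style accelerated projected gradient loop for min_x ||A x - b||^2 + lam ||x||^2 over a convex set looks like the sketch below (my own illustration, not this branch's solver); the occasional blow up mentioned above is a known failure mode of accelerated schemes when the step size exceeds 1/L or the momentum is never restarted.

```python
import numpy as np

def accelerated_projected_gradient(A, b, lam, project, n_iter=500):
    """FISTA-style accelerated projected gradient for
    min_x ||A x - b||^2 + lam ||x||^2, with x constrained via `project`."""
    # Lipschitz constant of the gradient 2 A^T (A x - b) + 2 lam x.
    L = 2.0 * (np.linalg.norm(A, ord=2) ** 2 + lam)
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = 2.0 * (A.T @ (A @ y - b)) + 2.0 * lam * y
        x_new = project(y - step * grad)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t**2))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Toy usage with a nonnegativity constraint as the projection.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((60, 40)), rng.standard_normal(60)
x = accelerated_projected_gradient(A, b, 1e-3, lambda z: np.maximum(z, 0.0))
```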
…rickier than I thought at first. Made the grid uniformly spaced between -xmax and xmax, and the same for the other directions. Now trying to compile all the indices of the correct cells for flux matching. For instance, the cells at phi = pi / 2 should match flux with their so-called adjacent cells in the +x direction, but these so-called adjacent cells should actually have a z location that is flipped, so that the flux gets matched when the rest of the coil surface is made.
…tion now to use the full coil surface without assumptions about the symmetries of the coils. Seems to generate different final configurations than with the flux stitching constraints -- in particular, the stitching tends to give very strong currents near where things are being stitched, so I need to debug this. Also need to debug the initialization of the coil positions, since the distance between the places where the coil surface is stitched (i.e. the dx of the cell between those two points) is different from the dx of the original grid. Also, at low resolution the coil solutions can be wonky, but this seems to be because many cells have no immediate neighbors, so the current is forced to take some strange path. Presumably things should smooth out with larger grids, so it is probably a good idea at this point to see if I can speed up anything in the code (although if I can figure out the stitching, that would speed things up dramatically, since then I can just use the 1 / (2 nfp) part of the coil surface).
…an adjacent cell direction if two of the three distances (in the x, y, and z directions) are zero, and the third one exactly matches the grid spacing. This only breaks on certain grids, so I missed it at first.
…dxx <= dx does not work if they are equal; this is a classic floating point error), and hopefully that is finally working right on any grid setup. Sped up a number of the calculations, such as using B instead of Bext to compute the B field on the plasma surface. The speed bottlenecks are now (1) the A matrix setup, and barring SIMD, I am not sure how to optimize this further, but it only has to be computed once at the beginning, and (2) the accelerated prox gradient, which I am not sure yet how to speed up, unless I can start holding some of the matrices like ATA in memory. Should explore speedups on both in the next few days.
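The adjacency test from the previous two commits, written with explicit floating point tolerances (a generic sketch, not the branch's implementation): two cells are face-adjacent when exactly one center-to-center offset matches the grid spacing and the other two are numerically zero, and np.isclose avoids the dxx <= dx equality trap.

```python
import numpy as np

def is_adjacent(c1, c2, spacing, rtol=1e-8, atol=1e-12):
    """True if cell centers c1, c2 are face-adjacent on a uniform grid."""
    d = np.abs(np.asarray(c2, dtype=float) - np.asarray(c1, dtype=float))
    matches_spacing = np.isclose(d, spacing, rtol=rtol, atol=atol)
    is_zero = np.isclose(d, 0.0, rtol=rtol, atol=atol)
    return np.count_nonzero(matches_spacing) == 1 and np.count_nonzero(is_zero) == 2

# 0.3 - 0.2 is not exactly 0.1 in floating point, but it is "close":
print(is_adjacent([0.2, 0.0, 0.0], [0.3, 0.0, 0.0], 0.1))  # True
print(0.3 - 0.2 == 0.1)                                    # False
```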
…hich seems to mildly speed things up (a factor of 2 or so). Tried coding up accelerated prox gradient descent in C++ but am not seeing any speedup.
…e svd, which inevitably gets slow as the matrices get larger. Instead, there is a nice upper bound estimate for the largest singular value (which appears very tight for some reason) that can be computed much more quickly. Then tried to get the C++/Eigen version of the algorithm working better, but didn't get the sparse Pdot solve part working, and even the non-Pdot part is somewhat suspect. One reason (discovered later, when I tried sticking to the Python but using sopp.matmult for fast matrix-vector products) appears to be much faster convergence of the algorithm when I use Eigen matrix-vector products than when I use numpy. I have no clue why this would be -- perhaps floating point or roundoff errors are tracked much better in Eigen. Should ask Matt about this, but the upshot is that the algorithm is converging faster.
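For context on the singular value point: the accelerated gradient step size only needs an upper bound on the largest singular value, not the full SVD. I do not know which estimate the branch uses, but one cheap and always-valid bound is sigma_max(A) <= sqrt(||A||_1 * ||A||_inf), computable from row and column sums alone:

```python
import numpy as np

def sigma_max_upper_bound(A):
    """Upper bound on the largest singular value:
    sigma_max(A) <= sqrt(||A||_1 * ||A||_inf)."""
    norm_1 = np.abs(A).sum(axis=0).max()    # maximum column sum
    norm_inf = np.abs(A).sum(axis=1).max()  # maximum row sum
    return np.sqrt(norm_1 * norm_inf)

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 300))
print(sigma_max_upper_bound(A), np.linalg.norm(A, ord=2))  # bound >= exact value
```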
…rstand various bugs or pseudobugs. Tried harder to get the C++ version of the prox algorithm working, but this still appears quite challenging. Cannot just replace the matrix-vector product with a sparse matrix-vector product from Eigen in the Python loop, since each call to the Eigen wrapper ends up copying over the whole sparse array, slowing things down dramatically. This is apparently because pybind11 does not support references for SparseMatrix Eigen objects. So the only hope is to do the full C++ calculation in one function, so that the sparse matrix only needs to be copied over a single time. However, the C++ calculation does not work quite right. There still seems to be an issue with the way the Eigen products work, and it converges much more slowly than the Python version calling matmult.
…ed grids. Started an axisymmetric tokamak surface example, per Matt's suggestion. Trying to get Poincare plots working, but these have always been finicky and need quite high resolution.
…call the Poincare plots on it in the postprocessing script, but I am having issues, so I am going to merge in the main branch before continuing on.
…ally working as desired. Still haven't been able to figure out the Poincare plots.
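For reference, the usual simsopt field-line-tracing recipe for Poincare plots looks roughly like the sketch below, using compute_fieldlines and plot_poincare_data from simsopt.field.tracing; the field is a stand-in, and the tmax, tolerance, and number of starting points are placeholders that, as noted above, typically need to be pushed quite high before the plots look clean.

```python
import numpy as np
from simsopt.field import ToroidalField
from simsopt.field.tracing import compute_fieldlines, plot_poincare_data

# Stand-in field; the branch would use the optimized current-voxel field here.
field = ToroidalField(R0=1.0, B0=1.0)

nfp = 2
R0 = np.linspace(1.05, 1.25, 8)      # field-line starting radii at Z = 0
Z0 = np.zeros_like(R0)
phis = [(i / 4) * (2 * np.pi / nfp) for i in range(4)]   # cross sections

fieldlines_tys, fieldlines_phi_hits = compute_fieldlines(
    field, R0, Z0, tmax=2000, tol=1e-10, phis=phis)
plot_poincare_data(fieldlines_phi_hits, phis, "poincare.png")
```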
…md not available (that only shows up on the CI).
@akaptano self-assigned this Oct 23, 2025
codecov bot commented Oct 23, 2025

Codecov Report

❌ Patch coverage is 98.21628% with 16 lines in your changes missing coverage. Please review.
✅ Project coverage is 92.96%. Comparing base (8bd454c) to head (38e22f5).
⚠️ Report is 10 commits behind head on master.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/simsopt/field/magneticfieldclasses.py | 85.71% | 9 Missing ⚠️ |
| .../simsopt/util/permanent_magnet_helper_functions.py | 96.29% | 4 Missing ⚠️ |
| src/simsopt/geo/current_voxels_grid.py | 99.66% | 2 Missing ⚠️ |
| src/simsopt/solve/current_voxels_optimization.py | 99.24% | 1 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #559      +/-   ##
==========================================
+ Coverage   92.56%   92.96%   +0.39%     
==========================================
  Files          83       85       +2     
  Lines       16229    17132     +903     
==========================================
+ Hits        15023    15927     +904     
+ Misses       1206     1205       -1     
| Flag | Coverage Δ |
|---|---|
| unittests | 92.96% <98.21%> (+0.39%) ⬆️ |

Flags with carried forward coverage won't be shown.


…ellarator and I am not sure why. I remember having some issues with this but thought I had resolved them.