change default linear solver #35
Conversation
Nice! :) The one that is supposed to be even faster is this one: https://docs.sciml.ai/LinearSolve/stable/solvers/solvers/#LinearSolve.RFLUFactorization
  min_stepsize = 1e-2,
  verbose = false,
- linear_solve_algorithm = KrylovJL_GMRES(),
+ linear_solve_algorithm = UMFPACKFactorization(),
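For context, this is roughly what passing either algorithm to LinearSolve.jl directly looks like; the sparse system below is purely illustrative and not taken from this package:

```julia
using LinearAlgebra, LinearSolve, SparseArrays

# Illustrative sparse system (not this package's Jacobian).
A = sprandn(100, 100, 0.05) + 10I
b = randn(100)

# The changed default amounts to choosing the factorization explicitly:
sol_direct = solve(LinearProblem(A, b), UMFPACKFactorization())
sol_krylov = solve(LinearProblem(A, b), KrylovJL_GMRES())
# Both return the solution vector in `sol.u`.
```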
Nice! The only reason I was a bit reluctant to use that solver is that I don't fully understand this note:
https://github.com/SciML/LinearSolve.jl/blob/02671f612ee581eb729645dd661b4039e93cadbd/src/factorization.jl#L771-L783
By default, the SparseArrays.jl are implemented for efficiency by caching the symbolic factorization. I.e., if set_A is used, it is expected that the new A has the same sparsity pattern as the previous A. If this algorithm is to be used in a context where that assumption does not hold, set reuse_symbolic=false.
If the initial numerical realization of A has some spurious zeros, things may go wrong here. Ideally, we would tell it the structural zeros from the symbolic analysis, which we know anyway.
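If the sparsity pattern could ever change between solves, the fallback described in that note would presumably be passed through the same keyword touched in the diff above; this is an assumption about how the option composes, not something exercised in this PR:

```julia
# Slower, but safe if the sparsity pattern of A changes between solves.
linear_solve_algorithm = UMFPACKFactorization(; reuse_symbolic = false)
```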
I'll poke around on the Julia Slack to see if we can exploit the known structure more efficiently with LinearSolve.jl. It should be even faster then :)
Actually, one hacky way of doing this would be to move the linsolve object into the solver struct and call set_A once with a dummy matrix that is non-zero everywhere we know there to be non-zeros from the symbolic Jacobian. This would also remove the re-construction of the linsolve cache for each solve.
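A minimal sketch of that idea, assuming LinearSolve.jl's caching interface (`init`/`solve!`, with `cache.A = ...` on current versions or `set_A` on older ones); the 4×4 pattern and all names are illustrative, not this package's actual symbolic Jacobian:

```julia
using LinearSolve, SparseArrays

# Illustrative sparsity pattern standing in for the symbolic Jacobian.
n = 4
rows = [1, 1, 2, 2, 3, 3, 4, 4]
cols = [1, 2, 1, 2, 3, 4, 3, 4]

# Dummy matrix that is non-zero at every structural non-zero, used only to
# build the cache once (this is what would live in the solver struct).
A_dummy = sparse(rows, cols, collect(1.0:8.0), n, n)
linsolve = init(LinearProblem(A_dummy, zeros(n)), UMFPACKFactorization())

# Each subsequent solve swaps in new numeric values with the same pattern,
# reusing the cached symbolic factorization instead of rebuilding the cache.
A_new = sparse(rows, cols, randn(8), n, n)
linsolve.A = A_new        # `set_A(linsolve, A_new)` on older LinearSolve versions
linsolve.b = randn(n)
sol = solve!(linsolve)
```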
I'd be OK with that. I have meetings for the rest of the day, but if you have the bandwidth to give that a quick try, go for it. Otherwise I might be able to get to it tomorrow morning.
I think we should already be on the safe side because I initialize the result_buffer with explicit zeros in the right places (a construction sketch follows after the matrix). For example, for the README toy example we would get:
6×6 SparseArrays.SparseMatrixCSC{Float64, Int64} with 14 stored entries:
0.0 0.0 0.0 ⋅ ⋅ ⋅
0.0 0.0 ⋅ 0.0 ⋅ ⋅
0.0 ⋅ ⋅ ⋅ 0.0 ⋅
⋅ 0.0 ⋅ ⋅ ⋅ 0.0
⋅ ⋅ 0.0 ⋅ 0.0 ⋅
⋅ ⋅ ⋅ 0.0 ⋅ 0.0

If the structural analysis in UMFPACKFactorization does not convert them back into structural zeros, this should be safe as is.
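For completeness, a sketch of how a buffer with exactly those 14 explicit stored zeros could be built; the `result_buffer` name is taken from the comment above, and whether the package constructs it precisely this way is an assumption:

```julia
using SparseArrays

# Row/column indices of the 14 structural non-zeros in the toy example above.
rows = [1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6]
cols = [1, 2, 3, 1, 2, 4, 1, 5, 2, 6, 3, 5, 4, 6]

# `sparse` keeps explicitly provided zeros as stored entries (it does not call
# `dropzeros`), so UMFPACK's symbolic analysis sees the full pattern even
# before any numeric values are filled in.
result_buffer = sparse(rows, cols, zeros(14), 6, 6)
```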
Ok! I'll go ahead and merge this in for now and we can make more changes later. Will also do a version bump.
Oh nice!! Love it ;)
Co-authored-by: Lasse Peters <lasse.peters@mailbox.org>
I just tested locally,
Now we eat PATH for lunch...