
Conversation

@dfridovi
Member

@dfridovi dfridovi commented Jan 6, 2025

Now we eat PATH for lunch...

julia> data = SolverBenchmarks.benchmark(SolverBenchmarks.TrajectoryGameBenchmark(); num_samples=10, ip_kwargs=(;)); SolverBenchmarks.summary_statistics(data)
[ Info: Generating random problems...
[ Info: Generating IP MCP...
[ Info: Generating PATH MCP...
[ Info: Warming up IP solver...
[ Info: Warming up PATH solver...
Solving IP MCPs... 100%|█████████████████████████████████| Time: 0:00:00
Solving PATH MCPs... 100%|███████████████████████████████| Time: 0:00:01
[ Info: IP runtime is 11.98294130370404 % that of PATH.
(ip = (success_rate = 0.9, μ = 0.009862157555555556, σ = 0.0032738100112760103), path = (success_rate = 0.7, μ = 0.08230164285714285, σ = 0.04308178206496435))
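For readers unfamiliar with what `summary_statistics` reports, here is a minimal Python sketch (fabricated runtimes and hypothetical field names, mirroring the tuple printed above) of the same per-solver summary:

```python
import statistics

# Hypothetical sketch (made-up numbers, not from this PR) of the per-solver
# summary printed above: success rate plus mean/stddev of solve times.
runtimes = [0.009, 0.011, 0.010, 0.009, 0.012]  # seconds, fabricated
successes = [True, True, False, True, True]

summary = {
    "success_rate": sum(successes) / len(successes),
    "mu": statistics.mean(runtimes),
    "sigma": statistics.stdev(runtimes),
}
print(summary)
```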

@dfridovi dfridovi requested a review from lassepe January 6, 2025 17:06
@lassepe
Collaborator

lassepe commented Jan 6, 2025

Nice! :) The one that is supposed to be even faster is this one: https://docs.sciml.ai/LinearSolve/stable/solvers/solvers/#LinearSolve.RFLUFactorization

min_stepsize = 1e-2,
verbose = false,
linear_solve_algorithm = KrylovJL_GMRES(),
linear_solve_algorithm = UMFPACKFactorization(),
Collaborator

Nice! The only reason I was a little reluctant to use that solver is that I don't fully understand this note:
https://github.com/SciML/LinearSolve.jl/blob/02671f612ee581eb729645dd661b4039e93cadbd/src/factorization.jl#L771-L783

By default, the SparseArrays.jl are implemented for efficiency by caching the symbolic factorization. I.e., if set_A is used, it is expected that the new A has the same sparsity pattern as the previous A. If this algorithm is to be used in a context where that assumption does not hold, set reuse_symbolic=false.

If the initial numerical realization of A has some spurious zeros, then things may go wrong here. Ideally, we would tell it the structural zeros from the symbolic analysis, which we know anyway.
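To illustrate the concern, a Python/scipy sketch (not this project's Julia code): a cached symbolic factorization depends on the sparsity pattern, explicit zeros are part of that pattern, and dropping them silently changes it:

```python
import numpy as np
from scipy import sparse

# Sketch of the concern in Python/scipy (the project itself is Julia):
# explicit zeros are stored entries and belong to the sparsity pattern
# that a cached symbolic factorization relies on.
rows = np.array([0, 0, 1, 1])
cols = np.array([0, 1, 0, 1])
vals = np.array([2.0, 0.0, 0.0, 2.0])  # two explicit zeros
A = sparse.csc_matrix((vals, (rows, cols)), shape=(2, 2))
nnz_with_zeros = A.nnz  # explicit zeros count as stored entries

A.eliminate_zeros()  # the kind of step that silently shrinks the pattern
nnz_without_zeros = A.nnz  # pattern changed; a cached factorization is now stale
```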

Collaborator

I'll poke around on the Julia slack to see if we can exploit the known structure more efficiently with LinearSolve.jl. It should be even faster then :)

Collaborator

Actually, one hacky way of doing this would be to move the linsolve object into the solver struct and call set_A once with a dummy matrix that is non-zero everywhere we know there to be non-zeros from the symbolic Jacobian. This would also avoid re-constructing the linsolve cache for each solve.
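The dummy-matrix idea can be sketched in Python/scipy (hypothetical; the actual implementation would reuse LinearSolve.jl's linsolve cache): allocate the full symbolic pattern once with placeholder nonzeros, then overwrite only the numeric values in place so the pattern never changes:

```python
import numpy as np
from scipy import sparse

# Sketch in Python/scipy (hypothetical; the real code would keep the
# LinearSolve.jl cache in the solver struct): fix the pattern once with
# placeholder values, then update numeric values in place per solve.
rows = np.array([0, 0, 1])
cols = np.array([0, 1, 1])
dummy = sparse.csc_matrix((np.ones(3), (rows, cols)), shape=(2, 2))

# per solve: write new values without touching the pattern
dummy.data[:] = [4.0, 0.0, 3.0]  # an entry may become numerically zero
# dummy.nnz is still 3: the zero remains a stored entry of the pattern
```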

Member Author

I'd be ok with that. I have meetings for the rest of the day, but if you have bandwidth to give that a quick try, go for it. Otherwise I might be able to get to it tomorrow morning.

Collaborator

@lassepe lassepe Jan 6, 2025

I think we should already be on the safe side, because I initialize the result_buffer with explicit zeros in the right places. For example, for the README toy example we would get:

6×6 SparseArrays.SparseMatrixCSC{Float64, Int64} with 14 stored entries:
 0.0  0.0  0.0   ⋅    ⋅    ⋅
 0.0  0.0   ⋅   0.0   ⋅    ⋅
 0.0   ⋅    ⋅    ⋅   0.0   ⋅
  ⋅   0.0   ⋅    ⋅    ⋅   0.0
  ⋅    ⋅   0.0   ⋅   0.0   ⋅
  ⋅    ⋅    ⋅   0.0   ⋅   0.0

If the structure analysis in UMFPACKFactorization does not convert them back into structural zeros, this should be safe as is.

Member Author

Ok! I'll go ahead and merge this in for now and we can make more changes later. Will also do a version bump.

@dfridovi
Member Author

dfridovi commented Jan 6, 2025 via email

Co-authored-by: Lasse Peters <lasse.peters@mailbox.org>
@lassepe
Collaborator

lassepe commented Jan 6, 2025

I just tested locally; LinearSolve.RFLUFactorization has a bug that we need to get fixed upstream. Let's run with UMFPACKFactorization for now.

@dfridovi dfridovi merged commit 54721f8 into main Jan 6, 2025
2 of 3 checks passed
@dfridovi dfridovi deleted the dfridovi/change-default-linear-solver branch January 6, 2025 19:35
@dfridovi dfridovi mentioned this pull request Jan 6, 2025