Differentiating a simple function that calls KrylovKit.eigsolve(Symmetric(sparse(...)), x0, 1, :LR) fails in Mooncake.prepare_gradient_cache(...). The forward pass succeeds, but reverse-mode preparation throws a MooncakeRuleCompilationError. This is the root blocker behind GraphNeuralNetworks.ChebConv under Mooncake. Here is a minimal working example to replicate the issue:
using LinearAlgebra
using Mooncake
using SparseArrays
using KrylovKit
function leading_eigenvalue(vals::Vector{Float32})
    # Build the 3x3 sparse upper triangle; Symmetric supplies the lower half.
    A = sparse([1, 1, 2, 2, 3], [1, 2, 2, 3, 3], vals, 3, 3)
    x0 = Float32[1, 0, 0]
    # Request the single eigenvalue with the largest real part (:LR).
    return KrylovKit.eigsolve(Symmetric(A), x0, 1, :LR)[1][1]
end
vals = Float32[2, -1, 2, -1, 2]
println("forward value = ", leading_eigenvalue(vals))
# Mooncake fails here with a MooncakeRuleCompilationError.
cache = Mooncake.prepare_gradient_cache(leading_eigenvalue, vals)
Mooncake.value_and_gradient!!(cache, leading_eigenvalue, vals)
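For reference, the same leading eigenvalue can be computed without KrylovKit by densifying the matrix and calling LinearAlgebra.eigen. This is only a comparison sketch (the name leading_eigenvalue_dense is made up here, and whether Mooncake can differentiate this dense path is not something this issue verifies); it just confirms the forward value and isolates the failure to the KrylovKit.eigsolve call:

```julia
using LinearAlgebra
using SparseArrays

# Dense reference computation (hypothetical helper, for comparison only).
# A real symmetric matrix has real eigenvalues, and eigen on a Symmetric
# returns them sorted in ascending order, so the last entry is the :LR target.
function leading_eigenvalue_dense(vals::Vector{Float32})
    A = sparse([1, 1, 2, 2, 3], [1, 2, 2, 3, 3], vals, 3, 3)
    return eigen(Symmetric(Matrix(A))).values[end]
end

vals = Float32[2, -1, 2, -1, 2]
println("dense value = ", leading_eigenvalue_dense(vals))
```

For these inputs the matrix is the tridiagonal [2 -1 0; -1 2 -1; 0 -1 2], whose largest eigenvalue is 2 + sqrt(2) ≈ 3.4142, so the dense value should match the KrylovKit forward value above.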