Description
Hi!
The recent Kimina-Prover paper mentions:
Lean Server. In our reinforcement learning and evaluation pipelines, we integrate the Numina Lean Server (Numina 2025) as the verification backend to provide real-time feedback for generated proof attempts. Built upon Lean FRO’s LeanREPL (Lean FRO 2023), the Numina Lean Server employs an LRU-based caching mechanism that reuses preloaded environments based on import headers, significantly reducing initialization overhead. Furthermore, it supports extensive parallelization across multiple CPUs by managing multiple Lean REPL processes concurrently. These innovations result in a 10× speedup in verification throughput, achieving up to 100 iterations per second on machines equipped with 64 CPU cores and 512 GB RAM.
Would you have any advice on what they might mean by Lean REPL preloaded environments (based on imports)?
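If it helps make the question concrete: my understanding is that within a single REPL process you can already send the imports once, get back an environment index, and then reuse that index for every subsequent command, so only the first request pays the import cost. A rough sketch of what I mean (Python; the project path and the exact blank-line-delimited JSON framing are my assumptions, not something I've verified against the REPL source):

```python
import json
import subprocess

class LeanREPL:
    """Minimal wrapper around one `lake exe repl` process."""

    def __init__(self, project_dir: str):
        # Assumes `project_dir` is a Lean project (with Mathlib as a dependency)
        # where `lake exe repl` is available.
        self.proc = subprocess.Popen(
            ["lake", "exe", "repl"],
            cwd=project_dir,
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            text=True,
        )

    def send(self, command: dict) -> dict:
        # One JSON object per request, terminated by a blank line; the reply is
        # read until the next blank line (my reading of the REPL's I/O format).
        self.proc.stdin.write(json.dumps(command) + "\n\n")
        self.proc.stdin.flush()
        lines = []
        while True:
            line = self.proc.stdout.readline()
            if line.strip() == "":
                break
            lines.append(line)
        return json.loads("".join(lines))

# Pay the import cost once per process...
repl = LeanREPL("path/to/project")  # hypothetical project path
env = repl.send({"cmd": "import Mathlib"})["env"]

# ...then every proof attempt reuses the preloaded environment.
print(repl.send({"cmd": "theorem t : 2 + 2 = 4 := by norm_num", "env": env}))
```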
In effect, what would be the fastest way to initialize Lean REPL process instances, given a known set of imports?
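Part of the answer might be to avoid re-initializing at all for headers that have already been seen. My guess at the paper's "LRU-based caching mechanism that reuses preloaded environments based on import headers" is simply a cache keyed by the import block, layered on the sketch above (again, purely my interpretation of what they mean):

```python
from collections import OrderedDict

class EnvCache:
    """LRU map from an import header (e.g. "import Mathlib") to an env index."""

    def __init__(self, repl: LeanREPL, max_entries: int = 8):
        self.repl = repl
        self.max_entries = max_entries
        self._cache: "OrderedDict[str, int]" = OrderedDict()

    def env_for(self, header: str) -> int:
        if header in self._cache:
            self._cache.move_to_end(header)  # mark as most recently used
            return self._cache[header]
        env = self.repl.send({"cmd": header})["env"]  # import cost paid once per header
        self._cache[header] = env
        if len(self._cache) > self.max_entries:
            # Drops only our handle; I don't know whether the REPL itself
            # can release the memory of an evicted environment.
            self._cache.popitem(last=False)
        return env

cache = EnvCache(repl)
env = cache.env_for("import Mathlib")  # slow the first time, instant afterwards
```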
E.g. is it possible to run some kind of forkserver and simply clone the process once it has finished loading the imports? Or is there another way of snapshotting the Lean REPL state and quickly loading / resetting to it?
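On the snapshotting side: if I read the REPL README correctly, there are pickleTo / unpickleEnvFrom commands that write an environment to an .olean file and load it back into another (possibly freshly started) REPL process, which sounds like exactly that kind of snapshot/restore. Something like the following, though I haven't measured whether restoring actually beats re-running the imports:

```python
# Snapshot the warmed-up environment to disk once.
repl.send({"pickleTo": "mathlib_env.olean", "env": env})

# In a freshly started worker, restore it instead of re-elaborating the imports.
worker = LeanREPL("path/to/project")
restored = worker.send({"unpickleEnvFrom": "mathlib_env.olean"})["env"]
print(worker.send({"cmd": "theorem t : 2 + 2 = 4 := by norm_num", "env": restored}))
```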
For Lean-in-the-loop proof generators it's very important to have a lightning-fast Lean REPL server interface that can be deployed on a big machine and manage Lean REPL workers as efficiently as possible (e.g. workers should start super-fast, ideally drawn from a pre-warmed pool of Lean REPL processes).
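Concretely, I'm imagining something like the pool below: start N workers up front, warm each one (or unpickle a snapshot) exactly once, and then hand idle workers to the proof generator. Is that roughly how the Numina Lean Server manages its workers, or is there a cheaper trick? (Worker count and header here are placeholders; each worker holds its own copy of the imported environment, so RAM is the real limit on pool size.)

```python
from concurrent.futures import ThreadPoolExecutor
from queue import Queue

class WarmPool:
    """Pre-warmed pool of REPL workers, each with the header already imported."""

    def __init__(self, project_dir: str, n_workers: int, header: str = "import Mathlib"):
        self.idle: Queue = Queue()
        # Warm all workers in parallel; each pays the import cost exactly once.
        with ThreadPoolExecutor(max_workers=n_workers) as warmup:
            for repl, env in warmup.map(lambda _: self._warm(project_dir, header),
                                        range(n_workers)):
                self.idle.put((repl, env))

    @staticmethod
    def _warm(project_dir: str, header: str):
        repl = LeanREPL(project_dir)
        env = repl.send({"cmd": header})["env"]
        return repl, env

    def check(self, proof_attempt: str) -> dict:
        repl, env = self.idle.get()  # block until a worker is free
        try:
            return repl.send({"cmd": proof_attempt, "env": env})
        finally:
            self.idle.put((repl, env))  # hand the worker back

pool = WarmPool("path/to/project", n_workers=64)
print(pool.check("theorem t : 2 + 2 = 4 := by norm_num"))
```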