Add MVV-LVA capture ordering, check extensions, and delta pruning#50
Conversation
Three search improvements that compound together:

1. **MVV-LVA capture ordering in main search**: Replace the random shuffle of captures with Most Valuable Victim - Least Valuable Attacker sorting. Uses fast integer lookups instead of PST-based evaluation. Promotions are also prioritized.
2. **Check extensions**: When a move puts the opponent in check, extend the search by 1 ply. This resolves tactical check sequences that would otherwise be cut off by the horizon effect.
3. **Delta pruning in quiescence**: If the stand-pat score plus the maximum possible capture gain (queen value + margin) is still below alpha, prune the entire quiescence node.

Benchmark at depth 4:

- Nodes: 4,760,507 → 3,382,748 (−29.0%)
- NPS: 22,634 → 24,444 (+8.0%)
- Total time: 210.32s → 138.38s (−34.2%)
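The MVV-LVA idea in item 1 can be sketched as follows. This is a minimal illustration using python-chess; the piece values and function names here (`MVV_LVA_VALUES`, `mvv_lva_key`, `order_captures`) are assumptions for the example, not moonfish's actual `_MVV_LVA_VALUES` table or API.

```python
# Hedged sketch of MVV-LVA capture ordering with python-chess.
# Piece values and names are illustrative, not moonfish's actual code.
import chess

# Simple integer values indexed by piece type (chess.PAWN .. chess.KING).
MVV_LVA_VALUES = {
    chess.PAWN: 100,
    chess.KNIGHT: 320,
    chess.BISHOP: 330,
    chess.ROOK: 500,
    chess.QUEEN: 900,
    chess.KING: 20000,
}

def mvv_lva_key(board: chess.Board, move: chess.Move) -> int:
    """Higher score = searched earlier. Promotions outrank plain captures."""
    score = 0
    if move.promotion:
        score += MVV_LVA_VALUES[move.promotion]
    if board.is_capture(move):
        victim = board.piece_type_at(move.to_square)
        # En-passant captures land on an empty square; the victim is a pawn.
        victim_value = MVV_LVA_VALUES[victim] if victim else MVV_LVA_VALUES[chess.PAWN]
        attacker = board.piece_type_at(move.from_square)
        # Weight the victim heavily so "pawn takes queen" beats "queen takes pawn".
        score += 10 * victim_value - MVV_LVA_VALUES[attacker]
    return score

def order_captures(board: chess.Board) -> list:
    """Return the legal captures, best-looking capture first."""
    captures = [m for m in board.legal_moves if board.is_capture(m)]
    return sorted(captures, key=lambda m: mvv_lva_key(board, m), reverse=True)
```

With two ways to win the same queen, the pawn capture (cheapest attacker) sorts ahead of the knight capture, which is exactly the ordering that maximizes early beta cutoffs.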
**Benchmarks**

The following benchmarks are available for this PR. Post a comment with the command to trigger a benchmark run.
**Greptile Summary**

This PR adds three search improvements to the alpha-beta engine: MVV-LVA capture ordering, check extensions, and delta pruning in quiescence search.
Confidence Score: 2/5
| Filename | Overview |
|---|---|
| moonfish/engines/alpha_beta.py | Adds delta pruning (correct) and check extensions (missing depth/recursion guard for perpetual check positions). |
| moonfish/move_ordering.py | Adds MVV-LVA capture ordering with correct scoring logic; unused MG_PIECE_VALUES import. |
Flowchart

```mermaid
flowchart TD
    A[negamax called with depth d] --> B[organize_moves with MVV-LVA ordering]
    B --> C{For each move}
    C --> D[board.push move]
    D --> E{board.is_check?}
    E -->|Yes| F["extension = 1\n(depth stays at d)"]
    E -->|No| G["extension = 0\n(depth decreases to d-1)"]
    F --> H["negamax(depth - 1 + 1 = d)"]
    G --> I["negamax(depth - 1)"]
    H --> J{depth <= 0?}
    I --> J
    J -->|Yes| K[quiescence_search]
    J -->|No| C
    K --> L{stand_pat + DELTA_MARGIN < alpha?}
    L -->|Yes| M[Delta prune: return alpha]
    L -->|No| N[Search tactical moves]
    N --> O[Return best score]
```
Last reviewed commit: a572196
```diff
+ extension = 1 if board.is_check() else 0
  board_score = -self.negamax(
      board,
-     depth - 1,
+     depth - 1 + extension,
```
Unbounded check extensions risk stack overflow
When a move gives check, the extension keeps depth at the same level (depth - 1 + 1 = depth). In positions with perpetual or long check sequences, this causes the search to never reduce depth, potentially recursing until Python's stack limit is hit.
There is no maximum-depth guard in negamax, and the repetition detection (is_repetition) only exists in quiescence search — not here. While the transposition table might catch some repeated positions, its cache key includes (alpha, beta) which vary across different branches, so there's no guarantee of termination.
A common safeguard is to limit total extensions (e.g., track cumulative ply from root and cap extensions once a maximum ply is reached):
```python
# Example: cap total search depth at 2x the nominal depth
extension = 1 if board.is_check() and (current_ply < 2 * self.config.negamax_depth) else 0
```
Alternatively, adding board.is_repetition(2) as a draw-return early in negamax (similar to what quiescence already does) would prevent infinite check cycles.
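Both safeguards together might look like the sketch below. This is a hedged illustration, not moonfish's actual implementation: the `ply` and `nominal_depth` parameters, the `MAX_EXTENSION_FACTOR` constant, and the toy `evaluate` are all hypothetical additions standing in for the engine's real signatures and quiescence search.

```python
# Hedged sketch: check extensions capped by cumulative ply, plus an early
# repetition check, so perpetual-check lines cannot recurse forever.
import chess

MAX_EXTENSION_FACTOR = 2  # cap total search depth at 2x the nominal depth

def evaluate(board: chess.Board) -> int:
    # Trivial material count from the side to move's perspective (illustrative
    # stand-in for a real quiescence search at the leaves).
    values = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
              chess.ROOK: 5, chess.QUEEN: 9}
    score = sum((len(board.pieces(pt, chess.WHITE)) -
                 len(board.pieces(pt, chess.BLACK))) * v
                for pt, v in values.items())
    return score if board.turn == chess.WHITE else -score

def negamax(board: chess.Board, depth: int, alpha: float, beta: float,
            ply: int, nominal_depth: int) -> float:
    # Early repetition check prevents infinite check cycles (the quiescence
    # search in this PR already does something similar).
    if board.is_repetition(2):
        return 0
    if depth <= 0:
        return evaluate(board)

    best = -float("inf")
    for move in board.legal_moves:
        board.push(move)
        # Extend checks only while cumulative ply is under the cap.
        gives_check = board.is_check()
        extension = 1 if gives_check and ply < MAX_EXTENSION_FACTOR * nominal_depth else 0
        score = -negamax(board, depth - 1 + extension, -beta, -alpha,
                         ply + 1, nominal_depth)
        board.pop()
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break
    if best == -float("inf"):
        # No legal moves: checkmate (bad for side to move) or stalemate draw.
        return -10000 + ply if board.is_check() else 0
    return best
```

The cap guarantees termination: once `ply` reaches `2 * nominal_depth`, every move reduces `depth`, so the recursion bottoms out even in forced check sequences.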
```diff
- from moonfish.psqt import evaluate_capture, evaluate_piece, get_phase
+ from moonfish.psqt import (
+     MG_PIECE_VALUES,
```
Unused import MG_PIECE_VALUES
MG_PIECE_VALUES is imported but never used in this file — the new MVV-LVA logic uses _MVV_LVA_VALUES instead.
Summary

Three search improvements that compound together:

1. **MVV-LVA capture ordering in main search**: Replace the random shuffle of captures with Most Valuable Victim - Least Valuable Attacker sorting using fast integer lookups. Promotions are also prioritized.
2. **Check extensions**: When a move puts the opponent in check, extend the search by 1 ply to resolve tactical check sequences that would otherwise be cut off by the horizon effect.
3. **Delta pruning in quiescence**: If the stand-pat score plus the maximum possible capture gain (queen value + margin) is still below alpha, prune the entire quiescence node.
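The delta-pruning condition from item 3 can be sketched as a guard at the top of a quiescence search. This is an illustrative version, assuming python-chess and made-up constants (`QUEEN_VALUE`, `DELTA_MARGIN`); moonfish's actual quiescence function and values may differ.

```python
# Hedged sketch of delta pruning in a fail-hard quiescence search.
# Constants and the evaluate callable are illustrative assumptions.
import chess

QUEEN_VALUE = 900   # maximum material swing from a single capture
DELTA_MARGIN = 200  # safety margin for positional gains

def quiescence(board: chess.Board, alpha: float, beta: float, evaluate) -> float:
    stand_pat = evaluate(board)
    if stand_pat >= beta:
        return beta
    # Delta pruning: if even winning a queen plus a margin cannot raise the
    # score above alpha, the whole node is hopeless, so prune it outright.
    if stand_pat + QUEEN_VALUE + DELTA_MARGIN < alpha:
        return alpha
    alpha = max(alpha, stand_pat)
    for move in board.legal_moves:
        if not board.is_capture(move):
            continue  # quiescence only explores tactical (capture) moves
        board.push(move)
        score = -quiescence(board, -beta, -alpha, evaluate)
        board.pop()
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```

Because the prune happens before any move generation work, it cheaply skips entire quiescence subtrees in positions the search has already refuted, which matches the node-count reduction reported in the benchmarks.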
Benchmark results (depth 4)
Notable per-position improvements:
Some positions show increased nodes due to check extensions — this is intentional, as the engine now searches deeper to find tactical solutions involving checks.
Local Stockfish Benchmark
Settings: 20 games, Stockfish skill 3, 10s/move, no opening book.
Use `/run-stockfish-benchmark` for CI validation with opening book and longer time control.

Test plan

- /run-nps-benchmark
- /run-stockfish-benchmark