Add killer move heuristic for move ordering#55

Open
luccabb wants to merge 1 commit into master from improve/killer-moves

Conversation


@luccabb luccabb commented Feb 16, 2026

Summary

  • Killer moves: Tracks 2 non-capture moves per ply that caused beta cutoffs. These "killer moves" are searched right after captures but before other quiet moves, significantly improving move ordering.
  • Move ordering: captures → killer moves → remaining quiet moves
  • Ply tracking: Added ply parameter to negamax for proper killer table indexing

The killer move heuristic is one of the most effective chess engine improvements because quiet moves that cause a cutoff in one position often cause cutoffs in sibling positions at the same depth.
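
The storage scheme can be sketched in a few lines. This is an illustration only: plain strings stand in for chess.Move objects, and the helper names here are placeholders (the PR's actual helper is _store_killer in moonfish/engines/alpha_beta.py).

```python
# Illustrative sketch of a two-slot killer-move table, following the
# [[None, None]]-per-ply layout described in this PR. Strings stand in
# for real move objects.

def make_killer_table(depth, margin=20):
    """One [None, None] slot pair per ply, sized past the nominal depth."""
    return [[None, None] for _ in range(depth + margin)]

def store_killer(killers, ply, move):
    """Record a quiet move that caused a beta cutoff at this ply.

    Slot 0 holds the most recent killer; the previous slot-0 move is
    shifted to slot 1. Re-storing the slot-0 move is a no-op, so the
    two slots never hold duplicates.
    """
    slot = killers[ply]
    if slot[0] != move:
        slot[1] = slot[0]
        slot[0] = move

killers = make_killer_table(depth=4)
store_killer(killers, ply=2, move="e7e5")
store_killer(killers, ply=2, move="g8f6")
store_killer(killers, ply=2, move="g8f6")  # duplicate: no-op
# killers[2] is now ["g8f6", "e7e5"]
```

Keeping slot 0 as the most recent killer means the move most likely to cut off again in a sibling node is tried first.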

Benchmark (depth 4, 48 positions)

| Metric | Master | This PR | Change |
| --- | --- | --- | --- |
| Nodes | 4,760,507 | 2,880,079 | −39.5% |
| NPS | 22,634 | 24,985 | +10.4% |
| Time | 210.32s | 115.27s | −45.2% |

Local Stockfish Benchmark

Settings: 20 games, Stockfish skill 3, 10s/move, no opening book.

| | W | L | D | Win Rate |
| --- | --- | --- | --- | --- |
| Master (baseline) | 19 | 1 | 0 | 95% |
| This PR | 17 | 1 | 2 | 90% |

Use /run-stockfish-benchmark for CI validation with an opening book and a longer time control.

Test plan

  • Existing tests pass
  • /run-nps-benchmark for CI validation
  • /run-stockfish-benchmark for strength validation

Track 2 non-capture moves per ply that caused beta cutoffs (killer
moves). These are tried after captures but before other quiet moves,
improving move ordering and increasing cutoff rates.

Benchmark (depth 4, 48 positions):
- Nodes: 4,760,507 → 2,880,079 (−39.5%)
- NPS: 22,634 → 24,985 (+10.4%)
- Time: 210.32s → 115.27s (−45.2%)

luccabb commented Feb 16, 2026

/run-nps-benchmark

@github-actions

Benchmarks

The following benchmarks are available for this PR:

| Command | Description |
| --- | --- |
| /run-nps-benchmark | NPS speed benchmark (depth 5, 48 positions) |
| /run-stockfish-benchmark | Stockfish strength benchmark (300 games) |

Post a comment with the command to trigger a benchmark run.


greptile-apps bot commented Feb 16, 2026

Greptile Summary

Adds the killer move heuristic to the alpha-beta search engine, a well-known chess engine optimization that tracks non-capture moves causing beta cutoffs and prioritizes them in sibling nodes at the same ply. The implementation stores up to 2 killer moves per ply in a table initialized at the root, passes it through recursive negamax calls, and integrates them into organize_moves with the ordering: captures → killer moves → quiet moves.

  • Killer move storage and deduplication logic in _store_killer is correct
  • Move ordering in organize_moves correctly separates killers from quiet moves and validates legality
  • NUM_KILLERS constant is defined but unused — the killer table width is hardcoded instead
  • Redundant legality check in organize_moves: legal_moves_set is constructed even though legal killers were already identified during the main loop
  • Parallel engines (LazySMP, Layer1ParallelAlphaBeta, Layer2ParallelAlphaBeta) call negamax without passing killers, so they don't benefit from this optimization — this is harmless since killers defaults to None, but may be worth extending in the future
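
The captures → killers → quiet ordering can be sketched as follows. This is an illustration, not the PR's organize_moves: the move list, the capture predicate, and the function name are simplified stand-ins for board.legal_moves, board.is_capture, and the real signature.

```python
import random

# Illustrative sketch of the move ordering this PR adds: captures first,
# then legal quiet killer moves, then the remaining quiet moves.

def order_moves(legal_moves, is_capture, killers=None):
    captures, quiet = [], []
    killer_set = set(k for k in (killers or []) if k is not None)
    killer_moves = []
    for move in legal_moves:
        if is_capture(move):
            captures.append(move)
        elif move in killer_set:
            # Seen in the legal-move loop, so it is both legal and quiet.
            killer_moves.append(move)
        else:
            quiet.append(move)
    random.shuffle(captures)
    random.shuffle(quiet)
    return captures + killer_moves + quiet
```

Collecting killers inside the single pass over legal moves also avoids the redundant legal_moves_set pass flagged in the review below; a further refinement would be to try the slot-0 killer before the slot-1 killer.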

Confidence Score: 4/5

  • This PR is safe to merge — the killer move heuristic is correctly implemented with proper bounds checking and no breaking changes to existing behavior.
  • Score of 4 reflects a well-implemented, standard chess engine optimization with correct logic and no regressions. Minor deductions for an unused constant and a redundant legality check that slightly impacts hot-path performance.
  • No files require special attention — both changes are straightforward and low-risk.

Important Files Changed

| Filename | Overview |
| --- | --- |
| moonfish/engines/alpha_beta.py | Adds the killer move heuristic with ply tracking and killer table initialization. The NUM_KILLERS constant is defined but unused. Core logic is correct: killer moves are stored on beta cutoff for non-captures and properly passed through recursive calls. |
| moonfish/move_ordering.py | Integrates killer moves into move ordering (captures → killers → quiet). Implementation is correct but has a redundant legality check: it iterates legal moves, then separately builds legal_moves_set to verify killers. |

Sequence Diagram

```mermaid
sequenceDiagram
    participant SM as search_move
    participant NM as negamax (ply=0)
    participant NM1 as negamax (ply=1)
    participant OM as organize_moves
    participant KT as killers table

    SM->>KT: Initialize [[None,None]] × (depth+20)
    SM->>NM: negamax(board, depth, killers)
    NM->>KT: Read killers[ply=0]
    NM->>OM: organize_moves(board, killers=ply_killers)
    OM-->>NM: captures + killer_moves + quiet_moves
    loop For each move
        NM->>NM1: negamax(board, depth-1, ply=1, killers)
        NM1->>KT: Read killers[ply=1]
        NM1->>OM: organize_moves(board, killers=ply_killers)
        OM-->>NM1: ordered moves
        NM1-->>NM: score
        alt Beta cutoff on quiet move
            NM->>KT: _store_killer(killers, ply=0, move)
            Note over KT: Shift slot[1]=slot[0], slot[0]=move
        end
    end
    NM-->>SM: best_score, best_move
```

Last reviewed commit: ba49e0a

@greptile-apps greptile-apps bot left a comment

2 files reviewed, 2 comments


```python
NULL_MOVE = Move.null()

# Number of killer moves to store per ply
NUM_KILLERS = 2
```

Unused constant NUM_KILLERS

NUM_KILLERS is defined here but never referenced. The killer table initialization on line 347 hardcodes [None, None], and _store_killer uses literal indices [0] and [1]. Consider either using this constant in both places or removing it to avoid confusion.

Suggested change
NUM_KILLERS = 2

Comment on lines 34 to +53
```python
for move in board.legal_moves:
    if board.is_capture(move):
        captures.append(move)
    elif move in killer_set:
        # Skip killers from non_captures; we'll insert them after captures
        continue
    else:
        non_captures.append(move)

random.shuffle(captures)
random.shuffle(non_captures)
return captures + non_captures

# Insert legal killer moves between captures and quiet moves
killer_moves = []
if killers is not None:
    legal_moves_set = set(board.legal_moves)
    for k in killers:
        if k is not None and k in legal_moves_set and not board.is_capture(k):
            killer_moves.append(k)
```


Redundant legality check for killer moves

Killer moves are filtered from non_captures at line 37-39 using killer_set, but then a second pass at lines 48-52 re-validates them against legal_moves_set. Since the first loop already iterates all legal moves, any killer move that wasn't seen in the loop is guaranteed to be illegal (or a capture). You could track "seen legal killers" in the first loop and avoid constructing legal_moves_set entirely:

Suggested change

Current:

```python
for move in board.legal_moves:
    if board.is_capture(move):
        captures.append(move)
    elif move in killer_set:
        # Skip killers from non_captures; we'll insert them after captures
        continue
    else:
        non_captures.append(move)
random.shuffle(captures)
random.shuffle(non_captures)
return captures + non_captures
# Insert legal killer moves between captures and quiet moves
killer_moves = []
if killers is not None:
    legal_moves_set = set(board.legal_moves)
    for k in killers:
        if k is not None and k in legal_moves_set and not board.is_capture(k):
            killer_moves.append(k)
```

Suggested:

```python
for move in board.legal_moves:
    if board.is_capture(move):
        captures.append(move)
    elif move in killer_set:
        # Skip killers from non_captures; we'll insert them after captures
        killer_set.discard(move)  # mark as legal
        continue
    else:
        non_captures.append(move)
random.shuffle(captures)
random.shuffle(non_captures)
# Insert legal killer moves between captures and quiet moves
# Only include killers that were discarded above (i.e., legal and non-capture)
killer_moves = []
if killers is not None:
    for k in killers:
        if k is not None and k not in killer_set and not board.is_capture(k):
            killer_moves.append(k)
```

Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!

@github-actions

⚡ NPS Benchmark Results

| Metric | Value |
| --- | --- |
| Depth | 5 |
| Positions | 48 |
| Total nodes | 12,825,136 |
| Total time | 1,205.41s |
| Nodes/second | 10,639 |

Node count is the primary signal — it's deterministic and catches search behavior changes. If the node count changes, the PR changed search behavior. NPS is informational only (CI runner performance varies).

Per-position breakdown
Position  1/48: nodes=32467      time=2.08s  nps=15640
Position  2/48: nodes=686662     time=67.86s  nps=10119
Position  3/48: nodes=9128       time=0.51s  nps=17877
Position  4/48: nodes=522075     time=51.94s  nps=10051
Position  5/48: nodes=131484     time=15.04s  nps=8740
Position  6/48: nodes=267867     time=26.19s  nps=10227
Position  7/48: nodes=279323     time=30.39s  nps=9190
Position  8/48: nodes=190561     time=13.64s  nps=13971
Position  9/48: nodes=577303     time=44.23s  nps=13053
Position 10/48: nodes=374828     time=29.15s  nps=12858
Position 11/48: nodes=287193     time=31.60s  nps=9088
Position 12/48: nodes=441974     time=42.63s  nps=10367
Position 13/48: nodes=426241     time=39.91s  nps=10680
Position 14/48: nodes=325794     time=28.99s  nps=11237
Position 15/48: nodes=233055     time=19.82s  nps=11759
Position 16/48: nodes=317316     time=27.64s  nps=11482
Position 17/48: nodes=9994       time=0.63s  nps=15983
Position 18/48: nodes=4714       time=0.21s  nps=22973
Position 19/48: nodes=22054      time=1.43s  nps=15465
Position 20/48: nodes=52116      time=3.33s  nps=15636
Position 21/48: nodes=9744       time=0.45s  nps=21588
Position 22/48: nodes=517        time=0.03s  nps=20108
Position 23/48: nodes=3282       time=0.18s  nps=18431
Position 24/48: nodes=16676      time=1.16s  nps=14354
Position 25/48: nodes=12272      time=0.73s  nps=16778
Position 26/48: nodes=39717      time=3.19s  nps=12454
Position 27/48: nodes=65555      time=4.25s  nps=15419
Position 28/48: nodes=197486     time=15.81s  nps=12492
Position 29/48: nodes=102842     time=8.16s  nps=12601
Position 30/48: nodes=1474       time=0.09s  nps=16774
Position 31/48: nodes=593565     time=52.06s  nps=11402
Position 32/48: nodes=640652     time=53.80s  nps=11908
Position 33/48: nodes=703595     time=89.79s  nps=7835
Position 34/48: nodes=811750     time=91.43s  nps=8878
Position 35/48: nodes=431989     time=35.40s  nps=12204
Position 36/48: nodes=2256261    time=219.77s  nps=10266
Position 37/48: nodes=1319027    time=118.39s  nps=11141
Position 38/48: nodes=7133       time=0.27s  nps=26041
Position 39/48: nodes=1856       time=0.06s  nps=30337
Position 40/48: nodes=13035      time=0.20s  nps=64368
Position 41/48: nodes=37817      time=2.27s  nps=16667
Position 42/48: nodes=25572      time=2.48s  nps=10324
Position 43/48: nodes=6562       time=0.33s  nps=19616
Position 44/48: nodes=49982      time=3.78s  nps=13221
Position 45/48: nodes=95662      time=8.62s  nps=11100
Position 46/48: nodes=188964     time=15.51s  nps=12180
Position 47/48: nodes=0          time=0.00s  nps=0  (terminal)
Position 48/48: nodes=0          time=0.00s  nps=0  (terminal)
