
Add networkx-full-port: optimize full NetworkX clone for speed #1

Open
AllenGrahamHart wants to merge 2 commits into harbor-framework:main from AllenGrahamHart:networkx-full-port-pr

Conversation


AllenGrahamHart (Collaborator) commented Mar 5, 2026

Task: Fast NetworkX Full Port

Optimize a complete clone of NetworkX for speed while maintaining correctness.

The agent starts with fast_networkx — a full clone of NetworkX 3.4.2 (~500 algorithms, ~566 Python files) with all internal references renamed. The goal: make it as fast as possible using any technique (Rust extensions, Cython, algorithm improvements, better data structures, SIMD, parallelism) while passing the NetworkX test suite.

Why this task matters

A successful port would produce something genuinely valuable: a fully backwards-compatible drop-in replacement for NetworkX that is dramatically faster. This isn't a toy benchmark — it's real-world optimization of one of the most widely used Python libraries (>30M monthly downloads). The output is directly usable by the scientific computing community.

Evaluation

  • Correctness gate (95%): 3,368 tests from the NetworkX test suite (algorithms + generators), copied at verification time from a freshly-installed pinned NetworkX 3.4.2 with imports renamed to fast_networkx. The agent cannot see or tamper with the test source.
  • Speed score: Geometric mean relative time across a hidden suite of benchmark algorithms. Lower is better. Each benchmark uses interleaved timing with GC control and median aggregation.

The reward is currently binary (1 if correctness passes, 0 otherwise). The speed metric is reported for ranking.
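The timing and aggregation scheme described above can be sketched as follows. This is a minimal stdlib-only illustration, not the task's actual harness: the function names (`interleaved_reltime`, `geo_mean`) and the repeat count are assumptions for the example.

```python
import gc
import math
import statistics
import time

def interleaved_reltime(fn_baseline, fn_candidate, repeats=7):
    """Time two callables back to back ("interleaved") with GC disabled
    during each measurement, and return the candidate's median runtime
    relative to the baseline's. Interleaving keeps both implementations
    exposed to the same machine noise."""
    base, cand = [], []
    for _ in range(repeats):
        gc.collect()
        gc.disable()
        try:
            t0 = time.perf_counter()
            fn_baseline()
            t1 = time.perf_counter()
            fn_candidate()
            t2 = time.perf_counter()
        finally:
            gc.enable()
        base.append(t1 - t0)
        cand.append(t2 - t1)
    return statistics.median(cand) / statistics.median(base)

def geo_mean(reltimes):
    # Geometric mean of per-benchmark relative times; lower is better.
    return math.exp(sum(math.log(x) for x in reltimes) / len(reltimes))
```

The geometric mean is the natural aggregate here because relative times are ratios: a 2x slowdown on one benchmark exactly cancels a 2x speedup on another.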

Current state (MVP)

  • Infrastructure is complete and tested — oracle solution passes (3,369/3,369 tests, reward = 1.0)
  • 1 speed benchmark implemented (VF2++ isomorphism, N=300 random regular graphs)
  • Correctness assertion ensures the speed benchmark algorithm produces correct results
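A correctness assertion of the kind described might look like the sketch below. The helper name and its dict/scalar handling are hypothetical; the point is only that the optimized clone's answer must match vanilla NetworkX on the benchmark input before its timing counts.

```python
import math

def assert_same_result(reference_fn, optimized_fn, *args, tol=1e-9):
    """Hypothetical correctness gate for a speed benchmark: run both
    implementations on the same input and require matching results,
    with a float tolerance for dict-valued outputs (e.g. centralities)."""
    ref, opt = reference_fn(*args), optimized_fn(*args)
    if isinstance(ref, dict):
        assert ref.keys() == opt.keys(), "result keys differ"
        for k in ref:
            assert math.isclose(ref[k], opt[k], rel_tol=tol, abs_tol=tol), k
    else:
        assert ref == opt, "results differ"
```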

Remaining work

  1. Reward hacking protection: Modify a subset of the hidden tests to use values (seeds, node IDs, weights) that differ from the originals in NetworkX, so agents cannot memorize expected outputs from the published test suite
  2. Algorithm coverage: Expand speed benchmarks from 1 to 10-25 algorithms (Dijkstra, PageRank, betweenness centrality, community detection, MST, shortest paths, etc.), with graph sizes tuned so that NetworkX takes seconds to minutes per call
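One way to implement item 1 is to derive the hidden suite's parameters deterministically from the published ones plus a verifier-side secret. The function below is a sketch under that assumption; `salt`, the perturbation ranges, and the parameter names are all hypothetical.

```python
import random

def perturb_params(seed, n_nodes, weights, salt=0x5EED):
    """Hypothetical sketch: derive altered test parameters so that
    outputs memorized from the public NetworkX tests no longer match.
    `salt` stands in for a verifier-side secret."""
    rng = random.Random(seed ^ salt)
    return (
        rng.randrange(2**32),           # new RNG seed for graph generation
        n_nodes + rng.randrange(1, 8),  # slightly different graph size
        [round(w * rng.uniform(0.5, 1.5), 6) for w in weights],  # rescaled edge weights
    )
```

Because the derivation is deterministic, the hidden tests stay reproducible across verifier runs while remaining unpredictable to the agent.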

Estimated agent effort

In a related mini-port task (VF2++ only), Claude achieved a 100x speedup in ~30 minutes by rewriting the algorithm in Rust via PyO3. NetworkX contains ~500 algorithms. Extrapolating, a comprehensive optimization effort could take ~250 hours of agent work — making this a true long-horizon challenge (agent timeout is set to 1,000 hours).

Test plan

  • Oracle solution passes: 3,369/3,369 tests, reward = 1.0
  • Speed benchmark reports ~1.0x relative time for unmodified baseline
  • Verifier force-reinstalls pinned networkx==3.4.2 (tamper-proof)
  • Correctness assertion on benchmark algorithm results
  • Expand speed benchmarks to multiple algorithms
  • Harden tests against reward hacking (modify test values)
  • Add speed threshold to reward

🤖 Generated with Claude Code

MVP infrastructure for a long-horizon challenge where agents optimize a
complete clone of NetworkX 3.4.2 (renamed to fast_networkx) for speed
while maintaining correctness against the networkx test suite.

- 3,369 correctness tests (95% pass rate gate)
- VF2++ speed benchmark with correctness assertion
- Verifier force-reinstalls pinned networkx==3.4.2 (tamper-proof)
- 1000-hour agent timeout for long-horizon optimization

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
AllenGrahamHart pushed a commit that referenced this pull request Mar 6, 2026
* Add Harbor task format resources from terminal-bench

This populates the benchmark template with:

- PR template for task contribution checklist
- 7 GitHub Actions workflows for automated QA checks
- 7 bash validation scripts
- Hello-world example task in Harbor format
- Comprehensive README with documentation

Resources adapted from harbor-framework/terminal-bench#1383.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@AllenGrahamHart (Collaborator, Author) commented:

Speed Benchmarks Expanded: 1 → 10 Algorithms

Expanded the speed benchmark suite from a single VF2++ benchmark to 10 diverse algorithms, covering 3 graph types and 8 distinct computational patterns.

Algorithms Added

| # | Algorithm | Graph type | Pattern | nx time (s) |
|---|-----------|------------|---------|-------------|
| 1 | `vf2pp_is_isomorphic` | Graph | Backtracking search | 0.27 |
| 2 | `betweenness_centrality` | Graph | All-pairs BFS | 7.22 |
| 3 | `all_pairs_dijkstra_path_length` | Graph (weighted) | n×Dijkstra | 8.70 |
| 4 | `find_cliques` | Graph | Bron-Kerbosch | 9.22 |
| 5 | `greedy_modularity_communities` | Graph | Greedy optimization | 11.40 |
| 6 | `minimum_spanning_tree` | Graph (weighted) | Kruskal + union-find | 4.97 |
| 7 | `pagerank` | DiGraph | Power iteration | 1.91 |
| 8 | `maximum_flow_value` | DiGraph | Edmonds-Karp | 4.37 |
| 9 | `average_clustering` | Graph | Triangle counting | 7.27 |
| 10 | `all_pairs_bellman_ford_path_length` | MultiDiGraph | n×Bellman-Ford | 4.97 |

Oracle Verification

Correctness:  3383 passed, 0 failed (pass rate 1.0)
Reward:       1.0
Geo-mean reltime: 0.990 (≈1.0 as expected for unmodified clone)
Geo-std reltime:  1.018
Total runtime:    15:14 (well within 1-hour verifier timeout)

Files Changed

  • tasks/networkx-full-port/tests/test_speed.py — 9 new benchmark functions + test_speed_all entry point
  • tasks/networkx-full-port/tests/test_correctness_benchmark.py — 3 new correctness tests (greedy_modularity, all_pairs_dijkstra, all_pairs_bellman_ford)

Design Notes

  • Graph sizes calibrated via binary search so each algorithm takes 5–15s on vanilla networkx in the Docker container (1 CPU, 4GB RAM)
  • All benchmarks use interleaved nx/fnx timing with GC control and warmup runs
  • pagerank uses tol=1e-12 to ensure meaningful runtime despite scipy acceleration
  • find_cliques uses sum(1 for ...) instead of list() to avoid memory-heavy clique storage
  • Deterministic graph construction via sorted edge iteration for weight assignment
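The size calibration mentioned in the first note can be sketched as a standard binary search over the node count, assuming runtime grows monotonically with graph size (true for all the algorithms above). The function name and bounds are illustrative, not the task's actual tooling.

```python
def calibrate_size(time_at, lo=100, hi=100_000, window=(5.0, 15.0)):
    """Binary-search a node count n whose benchmark runtime time_at(n)
    lands inside `window` (seconds). Assumes time_at is monotonically
    increasing in n. `time_at` would time one call of the benchmarked
    algorithm on an n-node graph."""
    low, high = window
    while lo <= hi:
        mid = (lo + hi) // 2
        t = time_at(mid)
        if t < low:
            lo = mid + 1      # too fast: grow the graph
        elif t > high:
            hi = mid - 1      # too slow: shrink the graph
        else:
            return mid        # runtime inside the target window
    raise ValueError("no size in range hits the target window")
```

In practice each probe is expensive (it runs the real algorithm), but the search needs only O(log(hi - lo)) probes, so calibration stays cheap relative to the benchmarks themselves.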

Add 9 new benchmark functions to test_speed.py covering:
- betweenness_centrality (all-pairs BFS)
- all_pairs_dijkstra_path_length (n x Dijkstra)
- find_cliques (Bron-Kerbosch backtracking)
- greedy_modularity_communities (greedy optimization)
- minimum_spanning_tree (Kruskal sort + union-find)
- pagerank (power iteration on DiGraph)
- maximum_flow_value (Edmonds-Karp augmenting paths)
- average_clustering (triangle counting)
- all_pairs_bellman_ford_path_length (n x Bellman-Ford on MultiDiGraph)

Each benchmark calibrated to ~5-15s on vanilla networkx, uses interleaved
nx/fnx timing with GC control, and includes correctness assertions.

Also adds 3 new correctness tests to test_correctness_benchmark.py for
greedy_modularity_communities, all_pairs_dijkstra_path_length, and
all_pairs_bellman_ford_path_length.

Oracle verified: 3383 passed, 0 failed, reward=1.0, geo_mean_reltime=0.990,
total runtime 15:14 (well within 1-hour verifier timeout).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>