Conversation
Code Review
This is an excellent and substantial pull request that adds support for discrete optimization problems to the lonpy library. The changes are well-structured, with a clear separation between continuous and discrete modules. The introduction of a ProblemInstance abstract base class is a great design choice that will make the library more extensible. The documentation and tests have been thoroughly updated to cover the new features. I have two suggestions for improvement: one regarding the handling of infeasible solutions in the Knapsack problem to improve search performance, and another to document a small breaking change in the configuration options.
```diff
     n_iterations=1000,       # Iterations per run
     step_size=0.05,          # Perturbation size
-    step_mode="per",         # "per" (percentage) or "fix" (fixed)
+    step_mode="percentage",  # "percentage" (of domain) or "fixed"
```
The change of step_mode's value from "per" to "percentage" is a good improvement for clarity. However, this is a breaking change for users of the previous version. It would be beneficial to mention it in the pull request description or in a release notes/changelog entry so users know they need to update their configurations.
```python
if total_weight > self.capacity:
    return 0.0
```
Returning 0.0 for infeasible solutions can be problematic. If a feasible solution (like an empty knapsack) also has a fitness of 0.0, the search cannot distinguish between them. This creates a large plateau of infeasible solutions with no gradient to guide the search back to feasibility.
Consider applying a penalty proportional to the degree of infeasibility. For this maximization problem, you could return a negative value based on how much the capacity is exceeded. This provides a gradient that can help the search algorithm escape the infeasible region.
```diff
 if total_weight > self.capacity:
-    return 0.0
+    return -1.0 * (total_weight - self.capacity)
```
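To make the penalty idea concrete, here is a self-contained sketch of a penalized `evaluate`. The attribute names (`weights`, `values`, `capacity`) and the standalone class are illustrative assumptions, not necessarily lonpy's actual Knapsack implementation:

```python
from dataclasses import dataclass


@dataclass
class PenalizedKnapsack:
    """Illustrative 0/1 knapsack with a penalty for infeasible solutions."""

    weights: list[float]
    values: list[float]
    capacity: float

    def evaluate(self, solution: list[int]) -> float:
        total_weight = sum(w for w, bit in zip(self.weights, solution) if bit)
        total_value = sum(v for v, bit in zip(self.values, solution) if bit)
        if total_weight > self.capacity:
            # Negative value proportional to the capacity violation: the
            # further over capacity, the worse the fitness, which gives the
            # search a gradient back toward the feasible region.
            return -1.0 * (total_weight - self.capacity)
        return total_value
```

With this scheme an empty knapsack (fitness 0.0) is always preferred over any infeasible solution, and among infeasible solutions the one closest to feasibility scores highest.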
Pull request overview
This PR adds comprehensive support for discrete optimization problems to lonpy, significantly expanding the library's capabilities beyond continuous optimization.
Purpose: Add Local Optima Network (LON) construction and analysis for combinatorial/discrete optimization problems using Iterated Local Search (ILS) sampling.
Key changes:
- Implements discrete solution representations (bitstrings and permutations) with neighborhood operators (flip and swap)
- Adds ILS sampler for discrete LON construction with configurable hill climbing strategies
- Introduces built-in discrete problems: OneMax, Knapsack, NumberPartitioning, and NKLandscape
- Adds MLON (Monotonic LON) as an intermediate representation between LON and CMLON
- Extends visualization to support MLON and edge coloring by type (improving/equal/worsening)
- Refactors continuous sampling to use private methods and improved RNG handling
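To illustrate the kind of machinery the first two items describe, here is a minimal sketch of a bit-flip neighborhood and a first-improvement hill climber. The function names and signatures are illustrative only, not lonpy's actual API:

```python
from collections.abc import Callable, Iterator


def flip_neighbors(solution: list[int]) -> Iterator[list[int]]:
    """Yield all bitstrings at Hamming distance 1 (one bit flipped)."""
    for i in range(len(solution)):
        neighbor = solution.copy()
        neighbor[i] = 1 - neighbor[i]
        yield neighbor


def hill_climb_first(
    solution: list[int], fitness: Callable[[list[int]], float]
) -> tuple[list[int], float]:
    """First-improvement hill climbing (maximization): accept the first
    strictly better neighbor and restart the scan from there."""
    current, current_fit = solution, fitness(solution)
    improved = True
    while improved:
        improved = False
        for neighbor in flip_neighbors(current):
            f = fitness(neighbor)
            if f > current_fit:
                current, current_fit = neighbor, f
                improved = True
                break
    return current, current_fit
```

On OneMax (fitness = number of ones) this climbs from any bitstring to the all-ones optimum; a best-improvement variant would instead scan the full neighborhood before moving.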
Reviewed changes
Copilot reviewed 26 out of 30 changed files in this pull request and generated 3 comments.
Show a summary per file
| File | Description |
|---|---|
| `src/lonpy/discrete/solution.py` | Solution class for discrete problems with bitstring/permutation support |
| `src/lonpy/discrete/neighborhoods.py` | FlipNeighborhood and SwapNeighborhood operators for local search |
| `src/lonpy/discrete/local_search.py` | Hill climbing implementation with first/best improvement strategies |
| `src/lonpy/discrete/sampling.py` | ILSSampler for discrete LON construction via Iterated Local Search |
| `src/lonpy/problems/base.py` | Abstract ProblemInstance base class for optimization problems |
| `src/lonpy/problems/discrete.py` | Built-in discrete problems (OneMax, Knapsack, NumberPartitioning, NKLandscape) |
| `src/lonpy/lon.py` | Adds MLON class and classify_edges method for edge type classification |
| `src/lonpy/visualization.py` | Extends visualization to support MLON and edge coloring by type |
| `src/lonpy/continuous/sampling.py` | Refactors to use private methods and improved RNG handling |
| `src/lonpy/__init__.py` | Updates exports to include discrete optimization components |
| `tests/test_solution.py` | Comprehensive tests for Solution class |
| `tests/test_neighborhoods.py` | Tests for neighborhood operators |
| `tests/test_lon.py` | Tests for LON/MLON/CMLON construction and metrics |
| `tests/test_discrete_problems.py` | Tests for discrete problem implementations |
| `docs/` | Documentation updates for discrete optimization throughout user guide and API reference |
| `uv.lock` | Moves ioh dependency from dev-dependencies to optional dev extras |
```python
# Or load from file
problem = Knapsack.from_file("instance.txt")
```
The documentation mentions Knapsack.from_file("instance.txt") as a method to load knapsack instances from files, but this method doesn't exist in the Knapsack class implementation. Either implement the from_file class method, or remove this line from the documentation.
```diff
-# Or load from file
-problem = Knapsack.from_file("instance.txt")
```
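If the maintainers prefer to implement the method rather than remove the documentation, a minimal sketch could look like the following. The file format (capacity on the first line, then one `weight value` pair per line) and the stand-in class are assumptions for illustration; the PR does not specify either:

```python
from dataclasses import dataclass


@dataclass
class Knapsack:
    """Minimal stand-in for the real Knapsack class (illustrative only)."""

    weights: list[float]
    values: list[float]
    capacity: float

    @classmethod
    def from_file(cls, path: str) -> "Knapsack":
        # Hypothetical format: first line = capacity,
        # each following line = "<weight> <value>".
        with open(path) as f:
            rows = [line.split() for line in f if line.strip()]
        capacity = float(rows[0][0])
        weights = [float(r[0]) for r in rows[1:]]
        values = [float(r[1]) for r in rows[1:]]
        return cls(weights=weights, values=values, capacity=capacity)
```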
```python
from lonpy.problems import (
    Knapsack,
    NumberPartitioning,
    OneMax,
    ProblemInstance,
)
```
NKLandscape is implemented in discrete.py and exported in `problems/__init__.py`, but is not exported in the main `lonpy/__init__.py`. This makes it inaccessible via the public API (e.g., `from lonpy import NKLandscape` will fail). Either add NKLandscape to the exports in `src/lonpy/__init__.py`, or remove it from the problems module if it's not ready for public use.
```python
class NKLandscape(ProblemInstance):
    """
    NK Landscape problem: tunable rugged fitness landscape.

    The NK model is a problem-independent model for constructing multimodal
    landscapes that can gradually be tuned from smooth to rugged:
    - N: number of binary genes in the genotype (string length)
    - K: number of genes that influence a particular gene (0 <= K <= N-1)

    By increasing K from 0 to N-1, landscapes can be tuned from smooth to rugged.

    This implementation uses the adjacent neighborhood model, where the K variables
    forming the context of gene s_i are the K variables closest to s_i in a total
    ordering (s_1, s_2, ..., s_n) using periodic boundaries.

    The fitness is the average of N contribution functions, where each
    contribution f_i depends on bit i and its K adjacent neighbors.

    Attributes:
        n: Length of the bitstring (number of genes).
        k: Epistasis parameter (number of neighboring genes influencing each gene).
        seed: Random seed for reproducible instance generation.
        contributions: Lookup tables for each bit's contribution.

    Example:
        >>> problem = NKLandscape(n=18, k=4, seed=42)
        >>> solution = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
        >>> fitness = problem.evaluate(solution)
    """

    n: int = 18
    k: int = 4
    seed: int = 1
    contributions: list[dict[tuple[int, ...], float]] = field(default_factory=list, repr=False)

    def __post_init__(self) -> None:
        """Generate NK landscape instance if not already initialized."""
        if self.k < 0 or self.k >= self.n:
            raise ValueError(f"k must be in [0, n-1], got k={self.k}, n={self.n}")

        if not self.contributions:
            self._generate_instance()

    def _generate_instance(self) -> None:
        """Generate contribution tables for adjacent neighborhood model."""
        rng = np.random.default_rng(self.seed)

        self.contributions = []

        for _ in range(self.n):
            # Create contribution lookup table
            # Each entry maps (bit_i, neighbor_0, ..., neighbor_k-1) -> contribution
            n_entries = 2 ** (self.k + 1)  # 2^(K+1) possible combinations
            contribution_table: dict[tuple[int, ...], float] = {}

            for entry_idx in range(n_entries):
                # Convert entry index to bit pattern
                bits = tuple((entry_idx >> b) & 1 for b in range(self.k + 1))
                contribution_table[bits] = float(rng.random())

            self.contributions.append(contribution_table)

    def _get_adjacent_neighbors(self, i: int) -> list[int]:
        """
        Get K adjacent neighbors for gene i using periodic boundaries.

        Args:
            i: Index of the gene.

        Returns:
            List of K neighbor indices.
        """
        return [(i + j + 1) % self.n for j in range(self.k)]

    def maximize(self) -> bool:
        """NK Landscape is a maximization problem."""
        return True

    def evaluate(self, solution: list[int]) -> float:
        """
        Evaluate fitness as average of all contribution functions.

        Args:
            solution: Bitstring as list of 0/1 integers.

        Returns:
            Average contribution (value in [0, 1]).
        """
        total = 0.0

        for i in range(self.n):
            # Get the relevant bits: bit i and its K adjacent neighbors
            neighbors = self._get_adjacent_neighbors(i)
            key_bits = [solution[i]] + [solution[j] for j in neighbors]
            key = tuple(key_bits)
            total += self.contributions[i][key]

        return total / self.n

    def supports_delta_evaluation(self) -> bool:
        """NK Landscape does not support efficient delta evaluation."""
        return False
```
NKLandscape is not mentioned or documented anywhere else in the codebase (no tests, no examples in docs), but is defined and exported from the problems module. This inconsistency suggests it may not be ready for release. Consider either: (1) adding tests and documentation for NKLandscape, or (2) removing it from this PR to add in a future release when it's fully integrated.
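If option (1) is chosen, the missing tests could start from property checks like these. To keep the sketch self-contained it uses a compact reimplementation of the adjacent-neighborhood NK model described in this PR; in the repository the tests would import lonpy's class instead:

```python
import numpy as np


class NKLandscape:
    """Compact stand-in mirroring the adjacent-neighborhood NK model above."""

    def __init__(self, n: int = 18, k: int = 4, seed: int = 1) -> None:
        if k < 0 or k >= n:
            raise ValueError(f"k must be in [0, n-1], got k={k}, n={n}")
        self.n, self.k = n, k
        rng = np.random.default_rng(seed)
        self.contributions = []
        for _ in range(n):
            # One lookup table per gene: (bit_i, neighbors...) -> U[0, 1] value
            table = {}
            for idx in range(2 ** (k + 1)):
                bits = tuple((idx >> b) & 1 for b in range(k + 1))
                table[bits] = float(rng.random())
            self.contributions.append(table)

    def evaluate(self, solution: list[int]) -> float:
        total = 0.0
        for i in range(self.n):
            neighbors = [(i + j + 1) % self.n for j in range(self.k)]
            key = tuple([solution[i]] + [solution[j] for j in neighbors])
            total += self.contributions[i][key]
        return total / self.n
```

Useful properties to assert: the same seed yields the same landscape, fitness stays in [0, 1] (an average of uniform contributions), and invalid `k` raises `ValueError`.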
No description provided.