diff --git a/CHANGELOG.md b/CHANGELOG.md index 49a30906..5f2d3b90 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -5,6 +5,71 @@ All notable changes to PGS will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). Dates are *YYYY-MM-DD*. +## **2.2** *(2026-xx-xx)* + +### Added +#### Classes +* **`PGS_Polygonisation`** — generates simple polygonisations of point sets. + +#### Methods +* `fixBrokenFaces()` to `PGS_Meshing`. Repairs broken faces in near-coverage linework using endpoint-only snapping, then polygonises the result. +* `polygonize()` to `PGS_Processing`. Finds polygonal faces from the given shape's linework. +* `softCells()` to `PGS_Tiling`. Generates a softened (curved) version of a tiling using the SoftCells edge-bending algorithm. +* A new mesh-coloring algorithm, `DBLAC`, to `PGS_Coloring`. Fast, with good chromaticity; it replaces `RLF` as the recommended algorithm. +* `smoothGaussianNormalised()` to `PGS_Processing`. Applies normalised Gaussian smoothing to all geometries in a shape, intended to be more consistent across child shapes of different sizes. +* `normalisedErosion()` to `PGS_Morphology`. Erodes a shape by a normalised amount (scaled to shape size). +* `refine()` to `PGS_Triangulation`. Refines an existing triangulation using Ruppert's Delaunay refinement algorithm. +* `arapDeform()` to `PGS_Morphology`. Applies As-Rigid-As-Possible (ARAP) shape deformation using point handles. +* `regularise()` to `PGS_Morphology`. Straightens the contour of a shape by snapping edges toward a small set of principal directions. +* New method signature for `PGS_Conversion.toWKT()` that accepts a precision parameter to control the number of decimal places written. +* `smoothBezierFit()` to `PGS_Morphology`. Smoothes a shape by fitting Bezier curves to its vertices. +* `powerDiagram()` to `PGS_Voronoi`. Generates a Power Voronoi Diagram for a set of weighted sites. +* `manhattanVoronoi()` to `PGS_Voronoi`. Generates a Manhattan Voronoi Diagram for a set of sites and a bounding box. +* `intersectionPoints(shape)` to `PGS_Processing`. Computes all self-intersection points of the linework contained within a single shape. +* `intersections()` to `PGS_SegmentSet`. Computes all intersection points among the supplied edges. +* `squareGrid()` to `PGS_Tiling`. Divides the plane into a simple axis-aligned grid using square cells. +* `aztecDiamond()` to `PGS_Tiling`. Produces a random domino tiling of the Aztec diamond of a given order. +* `perpendicularPathSegments()` to `PGS_SegmentSet`. Extracts perpendicular segments along each linear component of a shape, with each segment centered on the path/outline. +* `dilationMorph()` to `PGS_Morphology`. Morphs between two shapes using a Hausdorff-distance based method. +* `voronoiMorph()` to `PGS_Morphology`. Morphs between two shapes using a Voronoi-based method. +* `isolinesFromFunction()` to `PGS_Contour`. Extracts contour lines (isolines) from a user-defined 2D “height map” over a rectangular region. +* `kCenters()` to `PGS_PointSet`. Selects k points from the input to act as centers that are typically well distributed over the input space. +* `extractBoundary()` to `PGS_Processing`. Extracts the topological boundary of the given shape. +* `weaveSegments()` to `PGS_SegmentSet`. Creates a fabric-like layout of horizontal and vertical segments. +* `auxeticTiling()` to `PGS_Tiling`. Builds a tiling of interlocking cells that form an auxetic structure.
+ +### Changes +* `PGS_Conversion.fromPShape()` now disambiguates closed paths using the PShape’s `kind`: closed shapes with `kind == POLYGON` convert to JTS `Polygon`, while closed shapes with `kind == PATH` convert to a (closed) JTS `LineString` (previously closed paths were generally treated as polygonal). +* `PGS_Conversion.toPShape()` now encodes polygon-vs-line semantics by setting the output PShape’s `kind` appropriately (`POLYGON` for JTS polygonal geometries; `PATH` for JTS lineal geometries), so closed linework no longer becomes ambiguous on round-trip. +* These methods in `PGS_Meshing` are more performant and robust: `urquhartFaces()`, `gabrielFaces()`, `spannerFaces()`, `relativeNeighborFaces()`, `edgeCollapseQuadrangulation()`, `centroidQuadrangulation()`. +* Reimplemented `PGS_Processing.convexPartition()` using the optimal *Keil & Snoeyink* partitioning algorithm. +* Reimplemented `PGS_PointSet.findShortestTour()` TSP algorithm. Much faster (~50x) on larger inputs. +* `PGS_Meshing.fixBreaks()` now uses a JTS implementation under the hood. The method's prior `angleTolerance` arg has been removed as it's no longer necessary. +* A PShape's original `.name` is now included in the `PRESERVE_STYLE` routines. +* All methods in `PGS_Morphology` now support GROUP shapes (where it makes sense to). +* `PGS_Conversion.toWKT()` now writes coordinates in float precision by default (previously 2 decimal places). +* Reimplemented `PGS_Morphology.interpolate()` using a more advanced approach with better quality (though interpolations can still self-intersect). +* Renamed `shapeIntersection(a, b)` in `PGS_Processing` to `intersectionPoints()`. +* Reimplemented `PGS_Morphology.distanceField()` with a better quality approach, and added an additional method signature that accepts a 'pole' parameter to compute the distance field with respect to a specific point. +* `largestEmptyCircles()`, `maximumInscribedPack()` and `obstaclePack()` are slightly faster. + +### Fixed +* `PGS_Optimisation.closestPoint()` now returns the nearest location on the shape's **boundary** for queries inside a polygonal shape (previously returned the query point itself). +* `GENETIC` mesh-coloring algorithm now always works (and has been improved too). +* `PGS_Morphology.reducePrecision()` now supports GROUP shapes without collapsing them. +* `PGS_Construction.createSuperRandomPolygon()` no longer produces holes when `holes` is set to `false`. +* `PGS_Contour.chordalAxis()` can no longer return polygonal output. +* `PGS_Voronoi.compoundVoronoi()` now uses the given bounds (previously ignored). +* `PGS_Processing.generateRandomPoints()` can no longer produce different outputs for the same seed on polygons with holes. +* The `toGraph()` and `fromGraph()` methods in `PGS_Conversion` now correctly handle shapes with holes. + +### Removed +* `polygonizeLines()` from `PGS_Processing`, in favour of `polygonize(PShape)`. +* `unionMeshWithoutHoles()` from `PGS_ShapeBoolean`. Previously deprecated in favour of the more general `unionMesh()`. +* `fromGeoJSON()` and `toGeoJSON()` from `PGS_Conversion`. +* The `COARSE` mesh coloring algorithm, since it can color adjacent faces the same colour. +* `lineSegmentsIntersection()` from `PGS_ShapeBoolean` in favour of `intersectionPoints(PShape)`. + ## **2.1** *(2025-10-04)* ### Added @@ -46,6 +111,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 * `pruneRandomRemoveN()` to `PGS_PointSet`. Randomly removes exactly N points from a list of points.
* `pruneRandomToN()` to `PGS_PointSet`. Randomly prunes a list of points to exactly N points. * `convexMaximumInscribedCircle()` to `PGS_Optimisation`. Computes the largest inscribed circle of a convex polygon (faster and exact). +* `dissolve()` to `PGS_Processing`. Dissolves the linear components of a shape into a set of unique maximal-length lines ### Changes * Optimised `PGS_CirclePacking.tangencyPack()`. It's now around 1.5-2x faster and has higher precision. @@ -65,7 +131,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 * `PGS_Processing.pointsOnExterior()` methods now return points on all elements of a shape, not just the perimeter of the first polygon. * `PGS_Processing.segmentsOnExterior()` now return segments on all elements of a shape, not just the perimeter of the first polygon. * These methods in `PGS_Morphology` now process any and all polygon/line elements in a shape: `chaikinCut()`, `smoothGaussian()`, `simplifyDCE()`, `simplifyHobby()`, `smoothEllipticFourier()`, `round()`. -* `dissolve()` to `PGS_Processing`. Dissolves the linear components of a shape into a set of unique maximal-length lines ### Fixed * `PGS_Morphology.rounding()` no longer gives invalid results. diff --git a/README.md b/README.md index 2750eb4f..77f47256 100644 --- a/README.md +++ b/README.md @@ -32,6 +32,8 @@ Library functionality is split over the following classes: * Solve geometric optimisation problems, such as finding the maximum inscribed circle, or the closest vertex to a coordinate. * `PGS_PointSet` * Generates sets of 2D points having a variety of different distributions and constraints. +* `PGS_Polygonisation` + * Generates simple polygonisations of point sets. * `PGS_Processing` * Methods that process a shape in some way: partition, slice, clean, etc. * `PGS_SegmentSet` @@ -246,6 +248,13 @@ Much of the functionality (but by no means all) is demonstrated below: A contour map based on a distance field of a shape + + Isolines from function + + + + + ## *Morphology* @@ -319,9 +328,24 @@ Much of the functionality (but by no means all) is demonstrated below: Pinch Warp + ARAP Deform + Regularise + Bezier-fit Smoothing + + + + + + + Dilation Morph + Voronoi Morph + + + + @@ -473,13 +497,16 @@ Much of the functionality (but by no means all) is demonstrated below: Poisson Delaunay Triangulation + Refinement + Delaunay triangulation of shapes where steiner points generated by poisson disk sampling are inserted. + Ruppert angle refinement. @@ -497,15 +524,23 @@ Much of the functionality (but by no means all) is demonstrated below: - Centroidal Relaxation + Manhattan Voronoi + Power Diagram Multiplicatively Weighted Voronoi Farthest-Point Voronoi - + + + + Centroidal Relaxation + + + + ## *Meshing* @@ -887,9 +922,11 @@ Much of the functionality (but by no means all) is demonstrated below: Distance Prune + k Centers + @@ -909,11 +946,15 @@ Much of the functionality (but by no means all) is demonstrated below: Parallel + Perpendicular Path Segments Polygon Interior Segments + weave Segments + + @@ -933,32 +974,91 @@ Much of the functionality (but by no means all) is demonstrated below: - Islamic Tiling + Square Grid Doyle Spiral Hexagon Tiling - + + Islamic Tiling Penrose Tiling Square-Triangle Tiling Annular Bricks - Slice Division + + - + Slice Division Arc Division + Soft Cells + Aztec Diamond + + + + + + Auxetic Tiling + + + + + +
+

Polygonisation

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Max AreaMin AreaMin Perimeter (TSP)Hilbert
maxAreaminAreaminPerimeterhilbert
HorizontalVerticalCircularAngular
horizontalverticalcircularangular
Onion
onion
+
\ No newline at end of file diff --git a/pom.xml b/pom.xml index d111c439..3056363b 100644 --- a/pom.xml +++ b/pom.xml @@ -4,20 +4,22 @@ 4.0.0 micycle PGS - 2.1 + 2.2-SNAPSHOT Processing Geometry Suite Geometric algorithms for Processing + https://github.com/micycle1/PGS micycle1_PTS micycle https://sonarcloud.io - ${user.home}\Documents\Processing\libraries\PGS\library + ${user.home}/Documents/Processing/libraries/GeometrySuiteForProcessing/library 17 UTF-8 UTF-8 PGS 3.6.1 + 5.11.4 @@ -46,6 +48,9 @@ maven-javadoc-plugin 3.12.0 + UTF-8 + UTF-8 + UTF-8 false micycle.pgs micycle.pgs.color,micycle.pgs.commons @@ -148,7 +153,7 @@ false true - PGS + GeometrySuiteForProcessing ${processing.library.dir} @@ -211,9 +216,15 @@ org.processing core - 4.4.7 + 4.5.0 provided true + + + commons-logging + commons-logging + + org.locationtech.jts @@ -231,15 +242,14 @@ 1.5.2 - com.github.gwlucastrig + com.github.micycle1 Tinfour - 44b8d26e15 + 8357a9ba6c com.github.micycle1 JMedialAxis 5207bec2f2 - true org.locationtech.jts @@ -260,20 +270,15 @@ org.junit.jupiter junit-jupiter-api - [5.10,) + ${junit.version} test org.junit.jupiter junit-jupiter-engine - [5.10,) + ${junit.version} test - - com.github.micycle1 - balaban-intersection - 1.0.0 - com.github.micycle1 UniformNoise @@ -306,27 +311,11 @@ - - it.unimi.dsi - fastutil - 8.5.12 - net.jafama jafama 2.3.2 - - com.github.openjump-gis - topology-extension - 9c1d788f8d - - - org.locationtech.jts - jts-core - - - com.github.scoutant polyline-decoder @@ -335,7 +324,21 @@ it.unimi.dsi dsiutils - 2.7.2 + 2.7.4 + + + ch.qos.logback + logback-classic + + + ch.qos.logback + logback-core + + + commons-logging + commons-logging + + com.github.micycle1 @@ -366,11 +369,27 @@ com.github.micycle1 SRPG + 1.0.1 + + + it.unimi.dsi + fastutil + + + + + com.github.micycle1 + geoblitz + 0.9.5 + + + com.github.micycle1 + malleo 1.0 com.github.micycle1 - GeoBlitz + quickhull3d 0.9 diff --git a/resources/contour/distanceField.png b/resources/contour/distanceField.png index c5d09d37..12169bb7 100644 Binary files a/resources/contour/distanceField.png and b/resources/contour/distanceField.png differ diff --git a/resources/contour/isolinesFromFunction1.gif b/resources/contour/isolinesFromFunction1.gif new file mode 100644 index 00000000..e0265fc2 Binary files /dev/null and b/resources/contour/isolinesFromFunction1.gif differ diff --git a/resources/contour/isolinesFromFunction2.gif b/resources/contour/isolinesFromFunction2.gif new file mode 100644 index 00000000..ef19cb64 Binary files /dev/null and b/resources/contour/isolinesFromFunction2.gif differ diff --git a/resources/morphology/arapMorph.gif b/resources/morphology/arapMorph.gif new file mode 100644 index 00000000..0569abd2 Binary files /dev/null and b/resources/morphology/arapMorph.gif differ diff --git a/resources/morphology/dilationMorph.gif b/resources/morphology/dilationMorph.gif new file mode 100644 index 00000000..88f92e88 Binary files /dev/null and b/resources/morphology/dilationMorph.gif differ diff --git a/resources/morphology/regularise.gif b/resources/morphology/regularise.gif new file mode 100644 index 00000000..845d05bd Binary files /dev/null and b/resources/morphology/regularise.gif differ diff --git a/resources/morphology/smoothBezierFit.gif b/resources/morphology/smoothBezierFit.gif new file mode 100644 index 00000000..e1960d2c Binary files /dev/null and b/resources/morphology/smoothBezierFit.gif differ diff --git a/resources/morphology/voronoiMorph.gif b/resources/morphology/voronoiMorph.gif new file mode 100644 
index 00000000..eb2115f2 Binary files /dev/null and b/resources/morphology/voronoiMorph.gif differ diff --git a/resources/point_set/kCenters.gif b/resources/point_set/kCenters.gif new file mode 100644 index 00000000..ed2d2a4d Binary files /dev/null and b/resources/point_set/kCenters.gif differ diff --git a/resources/polygonisation/angular.png b/resources/polygonisation/angular.png new file mode 100644 index 00000000..67296517 Binary files /dev/null and b/resources/polygonisation/angular.png differ diff --git a/resources/polygonisation/circular.png b/resources/polygonisation/circular.png new file mode 100644 index 00000000..d5f4c28e Binary files /dev/null and b/resources/polygonisation/circular.png differ diff --git a/resources/polygonisation/hilbert.png b/resources/polygonisation/hilbert.png new file mode 100644 index 00000000..3a97e442 Binary files /dev/null and b/resources/polygonisation/hilbert.png differ diff --git a/resources/polygonisation/horizontal.png b/resources/polygonisation/horizontal.png new file mode 100644 index 00000000..88de024d Binary files /dev/null and b/resources/polygonisation/horizontal.png differ diff --git a/resources/polygonisation/maxArea.png b/resources/polygonisation/maxArea.png new file mode 100644 index 00000000..8b72f986 Binary files /dev/null and b/resources/polygonisation/maxArea.png differ diff --git a/resources/polygonisation/minArea.png b/resources/polygonisation/minArea.png new file mode 100644 index 00000000..c1dd77aa Binary files /dev/null and b/resources/polygonisation/minArea.png differ diff --git a/resources/polygonisation/minPerimeter.png b/resources/polygonisation/minPerimeter.png new file mode 100644 index 00000000..7c0a03ac Binary files /dev/null and b/resources/polygonisation/minPerimeter.png differ diff --git a/resources/polygonisation/onion.png b/resources/polygonisation/onion.png new file mode 100644 index 00000000..8d890b79 Binary files /dev/null and b/resources/polygonisation/onion.png differ diff --git a/resources/polygonisation/vertical.png b/resources/polygonisation/vertical.png new file mode 100644 index 00000000..f6a29343 Binary files /dev/null and b/resources/polygonisation/vertical.png differ diff --git a/resources/segment_set/perpendicularPathSegments.gif b/resources/segment_set/perpendicularPathSegments.gif new file mode 100644 index 00000000..6e1024ad Binary files /dev/null and b/resources/segment_set/perpendicularPathSegments.gif differ diff --git a/resources/segment_set/weaveSegments.gif b/resources/segment_set/weaveSegments.gif new file mode 100644 index 00000000..b86d1932 Binary files /dev/null and b/resources/segment_set/weaveSegments.gif differ diff --git a/resources/tiling/auxetic1.png b/resources/tiling/auxetic1.png new file mode 100644 index 00000000..f022786f Binary files /dev/null and b/resources/tiling/auxetic1.png differ diff --git a/resources/tiling/auxetic2.png b/resources/tiling/auxetic2.png new file mode 100644 index 00000000..7337b0ed Binary files /dev/null and b/resources/tiling/auxetic2.png differ diff --git a/resources/tiling/aztecDiamond.png b/resources/tiling/aztecDiamond.png new file mode 100644 index 00000000..73c0f517 Binary files /dev/null and b/resources/tiling/aztecDiamond.png differ diff --git a/resources/tiling/grid.png b/resources/tiling/grid.png new file mode 100644 index 00000000..53cddf1a Binary files /dev/null and b/resources/tiling/grid.png differ diff --git a/resources/tiling/softCells.gif b/resources/tiling/softCells.gif new file mode 100644 index 00000000..08d83925 Binary files /dev/null 
and b/resources/tiling/softCells.gif differ diff --git a/resources/triangulation/refine.gif b/resources/triangulation/refine.gif new file mode 100644 index 00000000..9ecc9c25 Binary files /dev/null and b/resources/triangulation/refine.gif differ diff --git a/resources/voronoi/manhattenVoronoi.gif b/resources/voronoi/manhattenVoronoi.gif new file mode 100644 index 00000000..001d0e1c Binary files /dev/null and b/resources/voronoi/manhattenVoronoi.gif differ diff --git a/resources/voronoi/powerDiagram.gif b/resources/voronoi/powerDiagram.gif new file mode 100644 index 00000000..54114ba5 Binary files /dev/null and b/resources/voronoi/powerDiagram.gif differ diff --git a/src/main/java/micycle/pgs/PGS.java b/src/main/java/micycle/pgs/PGS.java index b7e1d6d1..6f339c2f 100644 --- a/src/main/java/micycle/pgs/PGS.java +++ b/src/main/java/micycle/pgs/PGS.java @@ -10,7 +10,7 @@ import java.util.Arrays; import java.util.Collection; import java.util.Collections; -import java.util.HashMap; +import java.util.Comparator; import java.util.HashSet; import java.util.Iterator; import java.util.List; @@ -24,20 +24,24 @@ import org.locationtech.jts.geom.Coordinate; import org.locationtech.jts.geom.CoordinateList; import org.locationtech.jts.geom.Geometry; +import org.locationtech.jts.geom.GeometryCollection; import org.locationtech.jts.geom.GeometryFactory; import org.locationtech.jts.geom.GeometryFilter; import org.locationtech.jts.geom.LineString; import org.locationtech.jts.geom.LinearRing; +import org.locationtech.jts.geom.MultiLineString; +import org.locationtech.jts.geom.MultiPoint; import org.locationtech.jts.geom.MultiPolygon; import org.locationtech.jts.geom.Point; import org.locationtech.jts.geom.Polygon; import org.locationtech.jts.geom.PrecisionModel; +import org.locationtech.jts.geom.util.GeometryTransformer; import org.locationtech.jts.noding.NodedSegmentString; import org.locationtech.jts.noding.Noder; import org.locationtech.jts.noding.SegmentString; import org.locationtech.jts.noding.snapround.SnapRoundingNoder; -import org.locationtech.jts.operation.linemerge.LineMerger; import org.locationtech.jts.operation.polygonize.Polygonizer; +import org.locationtech.jts.operation.union.UnaryUnionOp; import org.tinspin.index.IndexConfig; import org.tinspin.index.kdtree.KDTree; @@ -59,9 +63,15 @@ final class PGS { static final int SHAPE_SAMPLES = 80; /** - * PGS global geometry factory (uses 32 bit float precision). + * Precision model that's suitable to guarantee conformity under float + * coordinates. */ - public static final GeometryFactory GEOM_FACTORY = new GeometryFactory(new PrecisionModel(PrecisionModel.FLOATING_SINGLE)); + public static final PrecisionModel PM = new PrecisionModel(1024); // grid = 1/1024 == Math.ulp(1e4f) + + /** + * PGS global geometry factory. + */ + public static final GeometryFactory GEOM_FACTORY = new GeometryFactory(PM); private PGS() { } @@ -300,8 +310,8 @@ static final PShape polygonizeNodedEdges(Collection edges) { * Polygonizes a set of edges using JTS Polygonizer (occasionally * FastPolygonizer is not robust enough). * - * @param edges a collection of NODED (i.e. non intersecting / must onlymeet at - * their endpoints) edges. The collection can containduplicates. + * @param edges a collection of NODED (i.e. non intersecting / must only meet at + * their endpoints) edges. The collection can contain duplicates. 
* @return a GROUP PShape, where each child shape represents a polygon face * formed by the given edges */ @@ -309,7 +319,7 @@ static final PShape polygonizeNodedEdges(Collection edges) { private static final PShape polygonizeEdgesRobust(Collection edges) { final Set edgeSet = new HashSet<>(edges); final Polygonizer polygonizer = new Polygonizer(); -// polygonizer.setCheckRingsValid(false); + // polygonizer.setCheckRingsValid(false); edgeSet.forEach(ss -> { /* * NOTE: If the same LineString is added more than once to the polygonizer, the @@ -323,6 +333,83 @@ private static final PShape polygonizeEdgesRobust(Collection edges) { return PGS_Conversion.toPShape(polygonizer.getPolygons()); } + /** + * Post-processes polygons produced by JTS + * {@link org.locationtech.jts.operation.polygonize.Polygonizer Polygonizer} so + * that nested rings are interpreted as holes of their enclosing polygon. + *

+ * This is necessary because {@code Polygonizer} returns all bounded + * faces implied by the input linework. For example, when a ring lies inside + * another ring, {@code Polygonizer} will typically produce both the enclosing + * polygon-with-hole and the inner “hole face” as a standalone polygon. + * This method removes those hole faces by classifying faces by nesting depth + * (odd depth = hole, even depth = filled). + *
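+ * For example, linework forming three concentric rings polygonizes into faces of nesting depth 0, 1 and 2: the depth-1 face is dropped (it persists only as the hole of the outermost polygon), while the depth-2 face is kept as an island inside that hole.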

+ * + * @param polygonizerFaces polygons returned by {@code Polygonizer}. + * @param dissolve if {@code true}, unions the kept faces into a + * dissolved geometry; if {@code false}, returns a + * {@link GeometryCollection} of the kept faces. + * @return a geometry containing only the “filled” faces, with holes inferred + * from nesting. + */ + static Geometry dropHolePolygons(List polygonizerFaces, boolean dissolve) { + // method could be optimised, but shouldn't need used much + if (polygonizerFaces == null || polygonizerFaces.isEmpty()) { + return new GeometryFactory().createGeometryCollection(); + } + + record Face(Polygon face, Polygon shellOnly, double area) { + } + + // Build faces with "shell-only" geometry (exterior ring only) + List faces = new ArrayList<>(polygonizerFaces.size()); + for (Polygon p : polygonizerFaces) { + if (p == null || p.isEmpty()) { + continue; + } + LinearRing shell = p.getExteriorRing(); + Polygon shellOnly = GEOM_FACTORY.createPolygon(shell, null); + faces.add(new Face(p, shellOnly, shellOnly.getArea())); + } + + // Sort by increasing area to find smallest containing parent efficiently (still + // O(n^2)) + faces.sort(Comparator.comparingDouble(Face::area)); + + int n = faces.size(); + int[] parent = new int[n]; + Arrays.fill(parent, -1); + + // Parent of i = smallest-area shell that covers a point inside i's shell + for (int i = 0; i < n; i++) { + Point testPt = faces.get(i).shellOnly().getInteriorPoint(); + for (int j = i + 1; j < n; j++) { + if (faces.get(j).shellOnly().covers(testPt)) { + parent[i] = j; + break; + } + } + } + + // Keep even-depth faces (filled), drop odd-depth faces (holes) + List kept = new ArrayList<>(); + for (int i = 0; i < n; i++) { + int depth = 0; + for (int p = parent[i]; p != -1; p = parent[p]) { + depth++; + } + if ((depth & 1) == 0) { + kept.add(faces.get(i).face()); + } + } + + if (!dissolve) { + return GEOM_FACTORY.createGeometryCollection(kept.toArray(Geometry[]::new)); + } + return kept.isEmpty() ? GEOM_FACTORY.createGeometryCollection() : UnaryUnionOp.union(kept); + } + /** * Computes a robust noding for a collection of SegmentStrings. * @@ -335,11 +422,11 @@ static final Collection nodeSegmentStrings(Collection HashSet makeHashSet(int expectedSize) { return new HashSet<>((int) ((expectedSize) / 0.75 + 1)); } - /** - * Computes an ordered list of vertices that make up the boundary - * of a polygon from an unordered collection of edges. The - * underlying approach is around ~10x faster than JTS .buffer(0) and ~3x faster - * than {@link LineMerger}. - *

- * For now, this method does not properly support multi-shapes, nor unclosed - * edge collections (that form unclosed linestrings). - *

- * Notably, unlike {@link LineMerger} this approach does not merge successive - * boundary segments that together form a straight line into a single longer - * segment. - * - * @param edges unordered/random collection of edges (containing no duplicates), - * that together constitute the boundary of a single polygon / a - * closed ring - * @return list of sequential vertices belonging to the polygon that follow some - * constant winding (may wind clockwise or anti-clockwise). Note: this - * vertex list is not closed (having same start and end vertex) by - * default! - */ - static List fromEdges(Collection edges) { - // NOTE same as org.locationtech.jts.operation.linemerge.LineSequencer ? - // map of vertex to the 2 edges that share it - final HashMap> vertexEdges = new HashMap<>((int) ((edges.size()) / 0.75 + 1)); - - /* - * Build up map of vertex->edge to later find edges sharing a given vertex in - * O(1). When the input is valid (edges form a closed loop) every vertex is - * shared by 2 edges. - */ - for (PEdge e : edges) { - if (vertexEdges.containsKey(e.a)) { - vertexEdges.get(e.a).add(e); - } else { - HashSet h = new HashSet<>(); - h.add(e); - vertexEdges.put(e.a, h); - } - if (vertexEdges.containsKey(e.b)) { - vertexEdges.get(e.b).add(e); - } else { - HashSet h = new HashSet<>(); - h.add(e); - vertexEdges.put(e.b, h); - } - } - - List vertices = new ArrayList<>(edges.size() + 1); // boundary vertices - - // begin by choosing a random edge - final PEdge startingEdge = edges.iterator().next(); - vertices.add(startingEdge.a); - vertices.add(startingEdge.b); - vertexEdges.get(startingEdge.a).remove(startingEdge); - vertexEdges.get(startingEdge.b).remove(startingEdge); - - while (vertices.size() < edges.size()) { - final PVector lastVertex = vertices.get(vertices.size() - 1); - Set connectedEdges = vertexEdges.get(lastVertex); - - if (connectedEdges.isEmpty()) { - /* - * This will be hit if the input is malformed (contains multiple disjoint shapes - * for example), and break when the first loop is closed. On valid inputs the - * while loop will break before this statement can be hit. - */ - break; - } - - final PEdge nextEdge = connectedEdges.iterator().next(); - if (nextEdge.a.equals(lastVertex)) { - vertices.add(nextEdge.b); - vertexEdges.get(nextEdge.b).remove(nextEdge); - } else { - vertices.add(nextEdge.a); - vertexEdges.get(nextEdge.a).remove(nextEdge); - } - connectedEdges.remove(nextEdge); // remove this edge from vertex mapping - if (connectedEdges.isEmpty()) { - vertexEdges.remove(lastVertex); // have used both edges connected to this vertex -- now remove! - } - } - - return vertices; - } - static SimpleWeightedGraph makeCompleteGraph(List points) { SimpleWeightedGraph graph = new SimpleWeightedGraph<>(PEdge.class); @@ -463,14 +463,10 @@ static SimpleWeightedGraph makeCompleteGraph(List point * list. Other geometry types contained within the input geometry are ignored. 
*/ static List extractPolygons(Geometry g) { - List polygons = new ArrayList<>(g.getNumGeometries()); + List polygons = new ArrayList<>(); g.apply((GeometryFilter) geom -> { if (geom instanceof Polygon) { polygons.add((Polygon) geom); - } else if (geom instanceof MultiPolygon) { - for (int i = 0; i < geom.getNumGeometries(); i++) { - polygons.add((Polygon) geom.getGeometryN(i)); - } } }); return polygons; @@ -564,9 +560,9 @@ public LinearRingIterator(Geometry g) { ArrayList rings = new ArrayList<>(g.getNumGeometries()); for (int i = 0; i < g.getNumGeometries(); i++) { Polygon poly = (Polygon) g.getGeometryN(i); -// if (poly.getNumPoints() == 0) { -// continue; -// } + // if (poly.getNumPoints() == 0) { + // continue; + // } rings.add(poly.getExteriorRing()); for (int j = 0; j < poly.getNumInteriorRing(); j++) { rings.add(poly.getInteriorRingN(j)); @@ -607,174 +603,113 @@ public void remove() { } /** - * Apply a transformation to every lineal element in a PShape, preserving - * geometry structure and polygon/hole relationships, and return a non-null - * result. + * Apply a transformation to every lineal element in a {@code PShape}, + * preserving geometry structure and polygon/hole relationships, and return a + * non-null result. * *

- * The geometry encoded by {@code shape} (via {@code fromPShape}) is traversed, + * The geometry encoded by {@code shape} (via {@code fromPShape}) is traversed * and {@code function} is applied to each lineal component: {@code LineString} - * and {@code LinearRing}. The function may return a replacement + * and polygon rings ({@code LinearRing}, passed to the function as a + * {@code LineString}). The function may return a replacement * {@code LineString}, or {@code null} to drop that element. * - *

- * Structure preservation: + *

Structure preservation

*
    - *
  • GeometryCollection / MultiPolygon / MultiLineString: + *
  • GeometryCollection / MultiPolygon / MultiLineString *
      - *
    • Children are processed recursively; original grouping and order are - * preserved.
    • - *
    • Children for which the function yields {@code null} (or become empty) are - * filtered out before assembling the result.
    • - *
    • A GROUP {@code PShape} is always returned (it may be empty if nothing - * survives).
    • + *
    • Children are processed in index order; the relative order of surviving + * children is preserved.
    • + *
    • Children for which the function yields {@code null} (or that become + * empty) are omitted from the result.
    • + *
    • If the input encodes a multi/collection geometry, the returned + * {@code PShape} is always of kind {@code GROUP} (it may be empty if nothing + * survives), even if only a single child remains after filtering.
    • *
    *
  • - *
  • Polygon / LinearRing: + * + *
  • Polygon *
      - *
    • Rings are visited shell-first (exterior, then holes), preserving the - * exterior–hole relations.
    • - *
    • If the exterior becomes {@code null} or invalid, the entire polygon is + *
    • Rings are visited shell-first (exterior, then holes in interior-ring + * index order), preserving exterior–hole relationships.
    • + *
    • If the exterior ring is dropped or becomes invalid, the entire polygon is * dropped.
    • - *
    • Holes that become {@code null} or invalid are omitted; remaining holes - * retain order.
    • - *
    • Ring orientation is enforced: exterior is CW; holes are CCW.
    • + *
    • Holes that are dropped or become invalid are omitted; remaining holes + * retain their original order.
    • + *
    • Ring orientation is enforced: exterior is clockwise (CW); holes are + * counter-clockwise (CCW).
    • + *
    + *
  • + * + *
  • LinearRing + *
      + *
    • If a {@code LinearRing} is encountered outside a polygon, it is treated + * as an exterior ring for closure/orientation rules.
    • *
    *
  • *
* - *

- * Additional behavior: + *

Additional behavior

*
    - *
  • Non-closed outputs are closed when possible (if at least two points + *
  • Non-closed ring outputs are closed when possible (if at least two points * exist).
  • *
  • Rings must have at least 4 coordinates (including repeated first/last) * after closing; otherwise they are dropped.
  • - *
  • LineString elements return the transformed line or are dropped if - * {@code function} returns {@code null}.
  • - *
  • Unsupported geometry types yield an empty {@code PShape}.
  • + *
  • {@code LineString} elements return the transformed line, or are dropped + * if {@code function} returns {@code null}.
  • + *
  • Unsupported geometry types are ignored (dropped). If the root geometry is + * unsupported, an empty {@code PShape} is returned.
  • *
  • No full topology validation is performed; run JTS validators if * needed.
  • *
* - *

- * Return contract: + *

Return contract

*
    *
  • This method never returns {@code null}. If no geometry survives, an empty - * {@code PShape} is returned.
  • + * {@code PShape} is returned (for multi/collection inputs, an empty + * {@code GROUP} {@code PShape}). *
* - * @param shape input PShape encoding geometries to transform (must be - * convertible via {@code fromPShape}) - * @param function a UnaryOperator that receives each {@code LineString} (linear - * rings are passed as {@code LineString}) and returns a - * modified {@code LineString}, or {@code null} to drop the - * element - * @return a non-null {@code PShape} representing the transformed geometry; for - * multi/geometries a GROUP {@code PShape} is returned and may be empty - * when no children survive + * @param shape input {@code PShape} encoding geometries to transform (must + * be convertible via {@code fromPShape}) + * @param function operator applied to each {@code LineString}; polygon rings + * are passed as {@code LineString}. Returning {@code null} + * drops that element. + * @return a non-null {@code PShape} representing the transformed geometry * @since 2.1 */ - static PShape applyToLinealGeometries(PShape shape, UnaryOperator function) { - Geometry g = fromPShape(shape); - final var data = g.getUserData(); // probably styling - switch (g.getGeometryType()) { - case Geometry.TYPENAME_GEOMETRYCOLLECTION : - case Geometry.TYPENAME_MULTIPOLYGON : - case Geometry.TYPENAME_MULTILINESTRING : { - PShape group = new PShape(GROUP); - for (int i = 0; i < g.getNumGeometries(); i++) { - PShape child = applyToLinealGeometries(toPShape(g.getGeometryN(i)), function); - if (!isEmptyShape(child)) { - group.addChild(child); - } - } - // Always return a group, possibly empty - return group; - } - case Geometry.TYPENAME_LINEARRING : - case Geometry.TYPENAME_POLYGON : { - // Preserve exterior-hole relations; allow function to return null (skip) - LinearRing[] rings = new LinearRingIterator(g).getLinearRings(); - List processed = new ArrayList<>(rings.length); - for (int i = 0; i < rings.length; i++) { - LinearRing ring = rings[i]; - LineString out = function.apply(ring); - final boolean isHole = i > 0; - - if (out == null) { - // If the exterior is removed, drop the whole polygon -> empty shape - if (!isHole) { - return new PShape(); - } else { - // skip this hole - continue; - } - } + static PShape applyToLinealGeometries(PShape shape, UnaryOperator fn) { + final Geometry in = fromPShape(shape); - Coordinate[] coords = out.getCoordinates(); - - // Ensure closed; if not, close automatically when possible. 
- if (!out.isClosed()) { - if (coords.length >= 2) { - Coordinate[] closedCoords = Arrays.copyOf(coords, out.getNumPoints() + 1); - closedCoords[closedCoords.length - 1] = closedCoords[0]; // close the ring - coords = closedCoords; - } else { - // Too short to form a ring; skip this ring - if (!isHole) { - return new PShape(); - } else { - continue; - } - } - } + if (in instanceof Point || in instanceof MultiPoint) { + return new PShape(); + } - // Need at least 4 coordinates for a valid closed ring (including repeated - // first) - if (coords.length >= 4) { - // as createPolygon() doesn't check ring orientation - final boolean ccw = Orientation.isCCWArea(coords); - if (isHole && !ccw) { - ArrayUtils.reverse(coords); // make hole CCW - } else if (!isHole && ccw) { - ArrayUtils.reverse(coords); // make exterior CW - } - processed.add(GEOM_FACTORY.createLinearRing(coords)); - } else { - if (!isHole) { - return new PShape(); - } - // skip hole otherwise - } - } + final boolean rootIsMultiPolygon = in instanceof MultiPolygon; + final boolean rootIsMultiLineString = in instanceof MultiLineString; + final boolean rootIsGeomCollection = (in instanceof GeometryCollection) && !rootIsMultiPolygon && !rootIsMultiLineString && !(in instanceof MultiPoint); - if (processed.isEmpty()) { - return new PShape(); - } + final Object rootUserData = in.getUserData(); - LinearRing exterior = processed.get(0); - LinearRing[] holes = (processed.size() > 1) ? processed.subList(1, processed.size()).toArray(new LinearRing[0]) : null; + Geometry out = new PGS_Transformer(fn).transform(in); - var polygon = GEOM_FACTORY.createPolygon(exterior, holes); - polygon.setUserData(data); - return toPShape(polygon); - } - case Geometry.TYPENAME_LINESTRING : { - LineString l = (LineString) g; - LineString out = function.apply(l); - if (out == null) { - return new PShape(); - } - out.setUserData(data); - var line = toPShape(out); - line.setFill(false); - return line; - } - default : - // Return an empty PShape to indicate "ignored / not processed" - return new PShape(); + // Never return null; match empty policies + if (out == null || out.isEmpty()) { + return new PShape(PConstants.GROUP); + } + + // Preserve "GROUP-ness" for multi/collection roots even if only one child + // survives + if (rootIsMultiPolygon && out instanceof Polygon p) { + out = GEOM_FACTORY.createMultiPolygon(new Polygon[] { p }); + } else if (rootIsMultiLineString && out instanceof LineString ls && !(out instanceof MultiLineString)) { + out = GEOM_FACTORY.createMultiLineString(new LineString[] { ls }); + } else if (rootIsGeomCollection && !(out instanceof GeometryCollection)) { + out = GEOM_FACTORY.createGeometryCollection(new Geometry[] { out }); } + + out.setUserData(rootUserData); + return toPShape(out); } static boolean isEmptyShape(PShape s) { @@ -790,4 +725,128 @@ static boolean isEmptyShape(PShape s) { return true; } + private static class PGS_Transformer extends GeometryTransformer { + + private final UnaryOperator fn; + + PGS_Transformer(UnaryOperator fn) { + this.fn = fn; + } + + @Override + protected Geometry transformPolygon(Polygon p, Geometry parent) { + // Own the polygon traversal order: shell first, then holes by index. 
+ LinearRing shell = processRing(p.getExteriorRing(), false); + if (shell == null) { + return null; // drop whole polygon + } + + List holes = new ArrayList<>(p.getNumInteriorRing()); + for (int i = 0; i < p.getNumInteriorRing(); i++) { + LinearRing h = processRing(p.getInteriorRingN(i), true); + if (h != null) { + holes.add(h); + } + } + + Polygon out = GEOM_FACTORY.createPolygon(shell, holes.toArray(LinearRing[]::new)); + out.setUserData(p.getUserData()); + return out; + } + + @Override + protected Geometry transformLinearRing(LinearRing ring, Geometry parent) { + // Standalone rings: treat as exterior policy (CW) + LinearRing out = processRing(ring, false); + if (out != null) { + out.setUserData(ring.getUserData()); + } + return out; + } + + @Override + protected Geometry transformLineString(LineString ls, Geometry parent) { + // Note: GeometryTransformer may route rings here too; ensure we handle them as + // rings. + if (ls instanceof LinearRing r) { + return transformLinearRing(r, parent); + } + + LineString res = fn.apply(ls); + if (res == null || res.isEmpty()) { + return null; + } + + LineString out = GEOM_FACTORY.createLineString(res.getCoordinateSequence()); + out.setUserData(ls.getUserData()); + return out; + } + + @Override + protected Geometry transformGeometryCollection(GeometryCollection gc, Geometry parent) { + // Preserve order; filter null/empty; preserve container type + List kept = new ArrayList<>(gc.getNumGeometries()); + for (int i = 0; i < gc.getNumGeometries(); i++) { + Geometry t = transform(gc.getGeometryN(i)); + if (t != null && !t.isEmpty()) { + kept.add(t); + } + } + + if (gc instanceof MultiPolygon) { + List polys = new ArrayList<>(); + for (Geometry g : kept) { + if (g instanceof Polygon p) { + polys.add(p); + } + } + return GEOM_FACTORY.createMultiPolygon(polys.toArray(Polygon[]::new)); + } + + if (gc instanceof MultiLineString) { + List lines = new ArrayList<>(); + for (Geometry g : kept) { + if (g instanceof LineString ls) { + lines.add(ls); + } + } + return GEOM_FACTORY.createMultiLineString(lines.toArray(LineString[]::new)); + } + + return GEOM_FACTORY.createGeometryCollection(kept.toArray(Geometry[]::new)); + } + + private LinearRing processRing(LinearRing ring, boolean isHole) { + // Apply fn to ring (passed as LineString) + LineString res = fn.apply(ring); + if (res == null || res.isEmpty()) { + return null; + } + + Coordinate[] coords = res.getCoordinates(); + + // Ensure closed when possible + if (coords.length >= 2 && !coords[0].equals2D(coords[coords.length - 1])) { + coords = Arrays.copyOf(coords, coords.length + 1); + coords[coords.length - 1] = coords[0]; + } + + // Need at least 4 coordinates for a valid ring + if (coords.length < 4) { + return null; + } + + // Enforce orientation: exterior CW, holes CCW + boolean ccw = Orientation.isCCWArea(coords); + if (isHole && !ccw) { + ArrayUtils.reverse(coords); + } + if (!isHole && ccw) { + ArrayUtils.reverse(coords); + } + + return GEOM_FACTORY.createLinearRing(coords); + } + } + } diff --git a/src/main/java/micycle/pgs/PGS_CirclePacking.java b/src/main/java/micycle/pgs/PGS_CirclePacking.java index b67c7bab..d0912725 100644 --- a/src/main/java/micycle/pgs/PGS_CirclePacking.java +++ b/src/main/java/micycle/pgs/PGS_CirclePacking.java @@ -28,7 +28,6 @@ import micycle.pgs.commons.FrontChainPacker; import micycle.pgs.commons.LargestEmptyCircles; import micycle.pgs.commons.RepulsionCirclePack; -import micycle.pgs.commons.ShapeRandomPointSampler; import micycle.pgs.commons.TangencyPack; import 
processing.core.PShape; import processing.core.PVector; @@ -84,7 +83,7 @@ public static List obstaclePack(PShape shape, Collection point areaCoverRatio = Math.min(areaCoverRatio, 1 - (1e-3)); final Geometry geometry = fromPShape(shape); final Geometry obstacles = fromPShape(PGS_Conversion.toPointsPShape(pointObstacles)); - LargestEmptyCircles lec = new LargestEmptyCircles(obstacles, geometry, areaCoverRatio > 0.95 ? 0.5 : 1); + var lec = new LargestEmptyCircles(geometry, obstacles, areaCoverRatio > 0.95 ? 0.5 : 1); final double shapeArea = geometry.getArea(); double circlesArea = 0; @@ -322,7 +321,7 @@ public static List frontChainPack(PShape shape, double radiusMin, doubl */ public static List maximumInscribedPack(PShape shape, int n, double tolerance) { tolerance = Math.max(0.01, tolerance); - LargestEmptyCircles mics = new LargestEmptyCircles(fromPShape(shape), null, tolerance); + LargestEmptyCircles mics = new LargestEmptyCircles(fromPShape(shape), tolerance); final List out = new ArrayList<>(); for (int i = 0; i < n; i++) { @@ -352,7 +351,7 @@ public static List maximumInscribedPack(PShape shape, int n, double tol public static List maximumInscribedPack(PShape shape, double minRadius, double tolerance) { tolerance = Math.max(0.01, tolerance); minRadius = Math.max(0.01, minRadius); - LargestEmptyCircles mics = new LargestEmptyCircles(fromPShape(shape), null, tolerance); + LargestEmptyCircles mics = new LargestEmptyCircles(fromPShape(shape), tolerance); final List out = new ArrayList<>(); double[] currentLEC; diff --git a/src/main/java/micycle/pgs/PGS_Coloring.java b/src/main/java/micycle/pgs/PGS_Coloring.java index c44b1312..6860de42 100644 --- a/src/main/java/micycle/pgs/PGS_Coloring.java +++ b/src/main/java/micycle/pgs/PGS_Coloring.java @@ -4,7 +4,6 @@ import java.util.Map; import java.util.concurrent.ThreadLocalRandom; -import org.jgrapht.alg.color.ColorRefinementAlgorithm; import org.jgrapht.alg.color.LargestDegreeFirstColoring; import org.jgrapht.alg.color.RandomGreedyColoring; import org.jgrapht.alg.color.SaturationDegreeColoring; @@ -16,6 +15,7 @@ import it.unimi.dsi.util.XoRoShiRo128PlusRandom; import micycle.pgs.color.ColorUtils; import micycle.pgs.color.Colors; +import micycle.pgs.commons.DBLACColoring; import micycle.pgs.commons.GeneticColoring; import micycle.pgs.commons.RLFColoring; import processing.core.PShape; @@ -43,7 +43,7 @@ * @since 1.2.0 */ public final class PGS_Coloring { - + public static long SEED = 1337; private PGS_Coloring() { @@ -82,11 +82,7 @@ public enum ColoringAlgorithm { */ DSATUR, /** - * Finds the coarsest coloring of a graph. - */ - COARSE, - /** - * Recursive largest-first coloring (recommended). + * Recursive largest-first coloring. */ RLF, /** @@ -101,7 +97,22 @@ public enum ColoringAlgorithm { * specifically targets a chromaticity of 4 (falls back to 5 if no solution is * found). */ - GENETIC + GENETIC, + /** + * Degree-Based Largest Adjacency Count coloring. + * + *

+ * Fast with good chromaticity (recommended). + * + *

+ * Repeatedly selects an uncolored vertex that maximizes LAC(v) = + * number of already-colored neighbors. Ties are broken by larger static degree, + * then by the shuffled index. Each selected vertex is colored using first-fit + * (smallest feasible color). + * + * @since 2.2 + */ + DBLAC, } /** @@ -133,6 +144,25 @@ public static Map colorMesh(Collection shapes, Coloring return coloring.getColors(); } + /** + * Computes a coloring of the given mesh shape using the default coloring + * algorithm ({@link ColoringAlgorithm#DBLAC DBLAC}) and applies the provided + * palette to its faces. + *
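+ * A minimal usage sketch (illustrative: {@code mesh} is assumed to be any GROUP shape whose children form a conforming mesh; the palette values are arbitrary):
+ * <pre>{@code
+ * int[] palette = new int[] { 0xFFE74C3C, 0xFF3498DB, 0xFF2ECC71, 0xFFF1C40F };
+ * PGS_Coloring.colorMesh(mesh, palette); // DBLAC coloring; given enough palette entries, no two adjacent faces share a color
+ * }</pre>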

+ * This method mutates the fill colour of the input {@code meshShape} by setting + * the fill of each child face {@link PShape}. If the computed number of + * required colors exceeds the palette length, a warning is emitted and palette + * colors are reused (assigned modulo the palette length). + * + * @param meshShape a GROUP {@link PShape} whose children constitute the + * faces of a conforming mesh + * @param colorPalette the colors with which to color the mesh + * @return the input {@code meshShape} (whose faces have now been colored) + * @see #colorMesh(PShape, ColoringAlgorithm, int[]) + */ + public static PShape colorMesh(PShape meshShape, int[] colorPalette) { + return colorMesh(meshShape, ColoringAlgorithm.DBLAC, colorPalette); + } + /** * Computes a coloring of the given mesh shape and colors its faces using the * colors provided. This method mutates the fill colour of the input shape. @@ -146,8 +176,8 @@ public static Map colorMesh(Collection shapes, Coloring public static PShape colorMesh(PShape shape, ColoringAlgorithm coloringAlgorithm, int[] colorPalette) { final Coloring coloring = findColoring(shape, coloringAlgorithm); if (coloring.getNumberColors() > colorPalette.length) { - System.err.format("WARNING: Number of mesh colors (%s) exceeds those provided in palette (%s)%s", coloring.getNumberColors(), - colorPalette.length, System.lineSeparator()); + System.err.format("WARNING: Number of mesh colors (%s) exceeds those provided in palette (%s)%s", coloring.getNumberColors(), colorPalette.length, + System.lineSeparator()); } coloring.getColors().forEach((face, color) -> { int c = colorPalette[color % colorPalette.length]; // NOTE use modulo to avoid OOB exception @@ -250,11 +280,11 @@ private static Coloring findColoring(Collection shapes, Coloring case DSATUR : coloring = new SaturationDegreeColoring<>(graph).getColoring(); break; - case COARSE : - coloring = new ColorRefinementAlgorithm<>(graph).getColoring(); - break; case GENETIC : - coloring = new GeneticColoring<>(graph).getColoring(); + coloring = new GeneticColoring<>(graph, SEED).getColoring(); + break; + case DBLAC : + coloring = new DBLACColoring<>(graph, SEED).getColoring(); break; case RLF_BRUTE_FORCE_4COLOR : int iterations = 0; diff --git a/src/main/java/micycle/pgs/PGS_Construction.java b/src/main/java/micycle/pgs/PGS_Construction.java index 0299f0a4..5dc82311 100644 --- a/src/main/java/micycle/pgs/PGS_Construction.java +++ b/src/main/java/micycle/pgs/PGS_Construction.java @@ -40,7 +40,7 @@ import micycle.spacefillingcurves.SierpinskiTenSteps; import micycle.spacefillingcurves.SierpinskiThreeSteps; import micycle.spacefillingcurves.SpaceFillingCurve; -import micycle.srpg.SRPolygonGenerator; +import com.github.micycle1.srpg.SRPolygonGenerator; import net.jafama.FastMath; import processing.core.PConstants; import processing.core.PShape; @@ -128,9 +128,10 @@ public static PShape createRandomPolygonExact(int n, double width, double height * @param centerX centre point X * @param centerY centre point Y * @param width polygon width + * @return a PShape representing a regular polygon * @since 2.0 */ - public static PShape createRegularPolyon(int n, double centerX, double centerY, double width) { + public static PShape createRegularPolygon(int n, double centerX, double centerY, double width) { final GeometricShapeFactory shapeFactory = new GeometricShapeFactory(); shapeFactory.setNumPoints(n); shapeFactory.setCentre(new Coordinate(centerX, centerY)); @@ -234,7 +235,7 @@ public static PShape createSuperShape(double centerX, double centerY, double rad r = Math.pow(t1 + t2, 1 / n1); if (Math.abs(r) != 0) { r *=
radius; // multiply r (0...1) by (max) radius -// r = radius/r; + // r = radius/r; shape.vertex((float) (centerX + r * FastMath.cos(angle)), (float) (centerY + r * FastMath.sin(angle))); } @@ -376,7 +377,7 @@ public static PShape createArbelos(double centerX, double centerY, double radius * @param outerRadius The outer radius of the star * @param roundness A roundness value between 0.0 and 1.0, for the inner and * outer corners of the star. - * @return The star shape + * @return The star shape as a PShape */ public static PShape createStar(double centerX, double centerY, int numRays, double innerRadius, double outerRadius, double roundness) { roundness = Math.max(Math.min(1, roundness), 0); @@ -418,8 +419,8 @@ public static PShape createStar(double centerX, double centerY, int numRays, dou */ public static PShape createBlobbie(double centerX, double centerY, double maxWidth, double a, double b, double c, double d) { // http://paulbourke.net/geometry/blobbie/ - final double cirumference = 2 * Math.PI * maxWidth / 2; - final int samples = (int) (cirumference / 2); // 1 point every 2 distance + final double circumference = 2 * Math.PI * maxWidth / 2; + final int samples = (int) (circumference / 2); // 1 point every 2 distance double dt = Math.PI * 2 / samples; final CoordinateList blobbieCoords = new CoordinateList(); @@ -472,7 +473,7 @@ public static PShape createHeart(final double centerX, final double centerY, fin PShape heart = new PShape(PShape.PATH); heart.setFill(true); heart.setFill(Colors.WHITE); - heart.beginShape(); + heart.beginShape(PConstants.POLYGON); final double length = 6.3855 * width; // Arc length of parametric curve from wolfram alpha final int points = (int) length / 2; // sample every 2 units along curve (roughly) @@ -554,8 +555,8 @@ public static PShape createGear(final double centerX, final double centerY, fina curve.setFill(Colors.WHITE); curve.beginShape(); - final double cirumference = 2 * Math.PI * radius; - final int samples = (int) (cirumference / 5); // 1 point every 5 distance + final double circumference = 2 * Math.PI * radius; + final int samples = (int) (circumference / 5); // 1 point every 5 distance final double angleInc = Math.PI * 2 / samples; double angle = 0; @@ -642,8 +643,7 @@ public static PShape createRing(double centerX, double centerY, double outerRadi * @param generators the number of generator points for the underlying Voronoi * tessellation. Should be >5. * @param thickness thickness of sponge structure walls - * @param smoothing the cell smoothing factor which determines how rounded the - * cells are. a value of 6 is a good starting point. + * @param smoothing level of gaussian smoothing to apply to the structure * @param classes the number of classes to use for the cell merging process, * where lower results in more merging (or larger "blob-like" * shapes). 
@@ -980,7 +980,7 @@ public static PShape createSierpinskiCurve(double centerX, double centerY, doubl final PShape curve = new PShape(PShape.PATH); curve.setFill(true); curve.setFill(Colors.WHITE); - curve.beginShape(); + curve.beginShape(PConstants.POLYGON); half1.forEach(p -> curve.vertex((float) p[0], (float) p[1])); curve.endShape(PConstants.CLOSE); @@ -1271,6 +1271,7 @@ static PShape rect(int rectMode, double a, double b, double c, double d, double private static PShape rectImpl(float x1, float y1, float x2, float y2, float tl, float tr, float br, float bl) { PShape sh = new PShape(PShape.PATH); + sh.setKind(PConstants.POLYGON); sh.setFill(true); sh.setFill(Colors.WHITE); sh.beginShape(); @@ -1365,7 +1366,7 @@ static Polygon createCircle(double x, double y, double r, final double maxDeviat int nPts = (int) Math.ceil(2 * Math.PI / Math.acos(1 - maxDeviation / r)); nPts = Math.max(nPts, 21); // min of 21 points for tiny circles final int circumference = (int) (Math.PI * r * 2); - if (nPts > circumference * 2) { + if (nPts > circumference * 2 && circumference > 0) { // AT MOST 1 point every half pixel nPts = circumference * 2; } diff --git a/src/main/java/micycle/pgs/PGS_Contour.java b/src/main/java/micycle/pgs/PGS_Contour.java index d7750cc9..44ca8042 100644 --- a/src/main/java/micycle/pgs/PGS_Contour.java +++ b/src/main/java/micycle/pgs/PGS_Contour.java @@ -13,8 +13,8 @@ import java.util.Map; import java.util.Objects; import java.util.Set; +import java.util.function.DoubleBinaryOperator; import java.util.stream.Collectors; -import java.util.stream.Stream; import javax.vecmath.Point3d; @@ -38,6 +38,7 @@ import org.locationtech.jts.operation.buffer.BufferParameters; import org.locationtech.jts.operation.buffer.OffsetCurve; import org.locationtech.jts.operation.distance.IndexedFacetDistance; +import org.locationtech.jts.operation.overlayng.OverlayNG; import org.locationtech.jts.simplify.DouglasPeuckerSimplifier; import org.tinfour.common.IIncrementalTin; import org.tinfour.common.IQuadEdge; @@ -55,6 +56,7 @@ import org.twak.utils.collections.Loop; import org.twak.utils.collections.LoopL; +import com.github.micycle1.geoblitz.SegmentVoronoiIndex; import com.github.micycle1.geoblitz.YStripesPointInAreaLocator; import com.google.common.collect.Lists; @@ -63,6 +65,7 @@ import micycle.pgs.PGS.LinearRingIterator; import micycle.pgs.color.ColorUtils; import micycle.pgs.color.Colors; +import micycle.pgs.commons.MarchingSquares; import micycle.pgs.commons.PEdge; import net.jafama.FastMath; import processing.core.PConstants; @@ -70,16 +73,22 @@ import processing.core.PVector; /** - * Methods for producing different kinds of shape contours. * + * Methods for producing interior contour structures from shapes. + * *

- * Contours produced by this class are always computed within the interior of - * shapes. Contour lines and features (such as isolines, medial axes, and - * fields) are extracted as vector linework following the topology or scalar - * properties of the enclosed shape area, rather than operations that modify the - * shape boundary. + * The algorithms in this class extract derived linework—such as + * medial/chordal axes, straight skeletons, isolines, and field-derived + * curves—computed from within the interior of a polygonal {@link PShape}. These + * results describe the shape’s internal topology or scalar fields (e.g., + * distance-to-boundary), rather than directly editing the original boundary. * - * @author Michael Carleton + *

+ * Note: Outputs are typically vector linework (polylines) and may be + * returned as GROUP {@code PShape}s. Depending on geometry complexity, some + * methods may produce branching networks, multiple disjoint components, or + * degenerate segments. * + * @author Michael Carleton */ public final class PGS_Contour { @@ -140,7 +149,7 @@ public static PShape medialAxis(PShape shape, double axialThreshold, double dist *

* In its primitive form, the chordal axis is constructed by joining the * midpoints of the chords and the centroids of junction and terminal triangles - * of the delaunay trianglution of a shape. + * of the delaunay triangulation of a shape. *

* It can be considered a more useful alternative to the medial axis for * obtaining skeletons of discrete shapes. @@ -150,7 +159,6 @@ public static PShape medialAxis(PShape shape, double axialThreshold, double dist * segment (possibly >2 vertices) * @since 1.3.0 */ - @SuppressWarnings("unchecked") public static PShape chordalAxis(PShape shape) { /*- * See 'Rectification of the Chordal Axis Transform and a New Criterion for @@ -533,7 +541,7 @@ public static Map isolines(Collection points, double int * requirements of the application. Values in the * range 5 to 40 are good candidates for * investigation. - * @return a map of {isoline -> height of the isoline} + * @return a map of {isoline (path) -> height of the isoline} */ public static Map isolines(Collection points, double intervalValueSpacing, double isolineMin, double isolineMax, int smoothing) { final IncrementalTin tin = new IncrementalTin(intervalValueSpacing / 10); @@ -563,7 +571,7 @@ public static Map isolines(Collection points, double int isoline.setStroke(Colors.PINK); PVector last = new PVector(Float.NaN, Float.NaN); - isoline.beginShape(); + isoline.beginShape(PConstants.PATH); for (int i = 0; i < coords.length; i += 2) { float vx = (float) coords[i]; float vy = (float) coords[i + 1]; @@ -585,42 +593,176 @@ public static Map isolines(Collection points, double int } /** - * Generates vector contour lines representing a distance field derived from a - * shape. + * Extracts contour lines (isolines) from a user-defined 2D “height map” over a + * rectangular region. + *

+ * You provide a function {@code f(x,y)} that returns a numeric value for every + * point. This method samples that function on a regular grid over + * {@code bounds}, then traces contour lines that connect points with the same + * value (like elevation contours on a map) using the Marching Squares + * algorithm. + *
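To make the sampling-then-contouring idea above concrete, here is a minimal Processing-style sketch (illustrative only, not taken from the library; the ripple function, sample spacing and contour interval are arbitrary, and PGS 2.2 is assumed to be on the sketch path):

```java
import micycle.pgs.PGS_Contour;

void setup() {
  size(500, 500);
}

void draw() {
  background(255);
  double[] bounds = { 0, 0, width, height };
  // radial "ripple" field: distance from the sketch centre, perturbed by a sine wave
  PShape ripples = PGS_Contour.isolinesFromFunction(bounds, 5, 10,
      (x, y) -> dist((float) x, (float) y, width / 2f, height / 2f) + 15 * Math.sin(x * 0.05));
  shape(ripples); // one contour line every 10 "height" units
}
```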

+ * This is a very versatile way to turn simple math functions into computational + * patterns—ripples, bands, interference fields, cellular textures, etc.—without + * manually constructing geometry. The contour value range is determined + * automatically from the sampled minimum/maximum values. + * + * @param bounds Sampling bounds as {@code [xmin, ymin, xmax, ymax]}. + * @param sampleSpacing Grid spacing in coordinate units (smaller yields finer + * detail but is slower). 5 is sufficient for very high + * quality. + * @param contourInterval The value step between successive contour lines. + * @param valueFunction Function that returns the value at {@code (x,y)}. + * @return A map of isoline shapes to their corresponding contour (height) + * value. + * @since 2.2 + */ + public static PShape isolinesFromFunction(double[] bounds, double sampleSpacing, double contourInterval, DoubleBinaryOperator valueFunction) { + return isolinesFromFunction(bounds, sampleSpacing, contourInterval, valueFunction, Double.NaN, Double.NaN); + } + + /** + * Extracts contour lines (isolines) from a user-defined 2D “height map” over a + * rectangular region, within a specified value range. + *

+ * You provide a function {@code f(x,y)} that returns a numeric value for every + * point. This method samples that function on a regular grid over + * {@code bounds}, then traces contour lines that connect points with the same + * value (like elevation contours on a map) using the Marching Squares + * algorithm. + *

+ * This is a very versatile way to turn simple math functions into computational + * patterns. Only contour lines with values in {@code [isolineMin, isolineMax]} + * are produced. + * + * @param bounds Sampling bounds as {@code [xmin, ymin, xmax, ymax]}. + * @param sampleSpacing Grid spacing in coordinate units (smaller yields finer + * detail but is slower). 5 is sufficient for very high + * quality. + * @param contourInterval The value step between successive contour lines. + * @param valueFunction Function that returns the value at {@code (x,y)}. + * @param isolineMin Minimum contour value (inclusive). + * @param isolineMax Maximum contour value (inclusive). + * @return A map of isoline shapes to their corresponding contour (height) + * value. + * @since 2.2 + */ + public static PShape isolinesFromFunction(double[] bounds, double sampleSpacing, double contourInterval, DoubleBinaryOperator valueFunction, + double isolineMin, double isolineMax) { + var isolines = MarchingSquares.isolines(bounds, sampleSpacing, contourInterval, isolineMin, isolineMax, valueFunction).keySet(); + + var out = PGS_Conversion.flatten(isolines); + PGS_Conversion.setAllStrokeColor(out, micycle.pgs.color.Colors.PINK, 4, PConstants.SQUARE); + + return out; + } + + /** + * Extracts the zero contour (the 0-level set) from a user-defined 2D + * “height map” over a rectangular region. + *

+ * You provide a function {@code f(x,y)} that returns a numeric value for every + * point. This method samples that function on a regular grid over + * {@code bounds}, then traces the isoline where {@code f(x,y) = 0} using the + * Marching Squares algorithm. *
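For example, the zero set of a signed circle equation yields the circle outline; a minimal, hypothetical snippet (centre, radius and spacing values are illustrative):

```java
double[] bounds = { 0, 0, 500, 500 };
double cx = 250, cy = 250, r = 120;
// f(x,y) = (x-cx)^2 + (y-cy)^2 - r^2 is negative inside the circle and positive outside,
// so its zero contour is the circle itself
PShape circle = PGS_Contour.isolineZeroFromFunction(bounds, 2,
    (x, y) -> (x - cx) * (x - cx) + (y - cy) * (y - cy) - r * r);
shape(circle);
```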

- * The distance field for a shape assigns each interior point a value equal to - * the shortest Euclidean distance from that point to the shape boundary. This - * method computes a series of contour lines (isolines), where each line - * connects points with the same distance value, effectively visualizing the - * "levels" of the distance field like elevation contours on a topographic map. + * The resulting contour follows the boundary between positive and negative + * values of {@code f} (i.e., where the function crosses zero). This is useful + * for extracting implicit curves such as circles, signed-distance fields, and + * other zero-crossing patterns. * - * @param shape A polygonal shape for which to calculate the distance field - * contours. - * @param spacing The interval between successive contour lines, i.e., the - * distance value difference between each contour. - * @return A GROUP PShape. Each child of the group is a closed contour line or a - * section (partition) of a contour line, collectively forming the - * contour map. + * @param bounds Sampling bounds as {@code [xmin, ymin, xmax, ymax]}. + * @param sampleSpacing Grid spacing in coordinate units (smaller yields finer + * detail but is slower). 5 is sufficient for very high + * quality. + * @param valueFunction Function that returns the value at {@code (x,y)}. + * @return A {@link PShape} containing all extracted zero-value isoline + * polylines within {@code bounds}. + * @since 2.2 + */ + public static PShape isolineZeroFromFunction(double[] bounds, double sampleSpacing, DoubleBinaryOperator valueFunction) { + var isolines = MarchingSquares.isolineZero(bounds, sampleSpacing, valueFunction).keySet(); + + var out = PGS_Conversion.flatten(isolines); + PGS_Conversion.setAllStrokeColor(out, micycle.pgs.color.Colors.PINK, 4, PConstants.SQUARE); + + return out; + } + + /** + * Generates interior contour lines (isolines) that radiate from a shape + * “center”. + *

+ * The result resembles offset curves (inward parallels), but the underlying + * metric is not a pure boundary offset. Instead, contours are derived from a + * distance-like field that balances distance to the boundary with distance to + * an interior pole (chosen automatically), producing characteristic + * rings/levels emanating from the shape’s interior. + * + * @param shape A polygonal {@link PShape} to generate contours for. + * @param spacing The contour interval between successive lines. + * @return A {@code GROUP} {@link PShape} whose children form the contour set + * inside {@code shape}. * @since 1.3.0 + * @see #distanceField(PShape, double, PVector) */ public static PShape distanceField(PShape shape, double spacing) { - Geometry g = fromPShape(shape); - MedialAxis m = new MedialAxis(g); - - List disks = new ArrayList<>(); - double min = Double.POSITIVE_INFINITY; - double max = Double.NEGATIVE_INFINITY; - for (MedialDisk d : m.getDisks()) { - disks.add(new PVector((float) d.position.x, (float) d.position.y, (float) d.distance)); - min = Math.min(d.distance, min); - max = Math.max(d.distance, max); - } + PVector mic = new PVector(); + PGS_Optimisation.maximumInscribedCircle(shape, 1, mic); + return distanceField(shape, spacing, mic); + } - PShape out = PGS_Conversion.flatten(PGS_Contour.isolines(disks, spacing, min, max, 1).keySet()); - PShape i = PGS_ShapeBoolean.intersect(shape, out); - PGS_Conversion.disableAllFill(i); // since some shapes may be polygons - PGS_Conversion.setAllStrokeColor(i, micycle.pgs.color.Colors.PINK, 4, PConstants.SQUARE); - return i; + /** + * Generates interior contour lines (isolines) that radiate from a specified + * pole point within a polygon. + *
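A hypothetical usage of both variants; 'blob' stands for any polygonal PShape defined elsewhere in the sketch and the spacing value is arbitrary:

```java
PShape rings = PGS_Contour.distanceField(blob, 15); // pole chosen automatically (maximum inscribed circle centre)
PShape mouseRings = PGS_Contour.distanceField(blob, 15, new PVector(mouseX, mouseY)); // explicit pole
shape(rings);
```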

+ * The result is similar in spirit to inward offset curves, but governed by a + * distance-like field that blends proximity to the boundary with proximity to + * the given {@code pole}. This tends to produce characteristic “rings”/levels + * centred on {@code pole}, clipped to the shape interior. + * + * @param shape A polygonal {@link PShape} to generate contours for. + * @param spacing The contour interval between successive lines. + * @param pole The point that the contours are oriented around (need not lie + * inside {@code shape}). + * @return A {@code GROUP} {@link PShape} whose children form the contour set + * inside {@code shape}. + * @since 2.2 + */ + public static PShape distanceField(PShape shape, double spacing, PVector pole) { + final Geometry g = fromPShape(shape); + final var svi = new SegmentVoronoiIndex((Polygon) g, Math.max(spacing / 5.0, 4)); + + DoubleBinaryOperator fn = (x, y) -> { + Coordinate c = new Coordinate(x, y); + double dGeo = svi.distanceToNearestSegment(c); + double dPoint = Math.sqrt((x - pole.x) * (x - pole.x) + (y - pole.y) * (y - pole.y)); + return dGeo - dPoint; // no abs() as abs produces cusp where dGeo==dPoint + }; + + var env = g.getEnvelopeInternal(); + env.expandBy(1); + double[] bounds = { env.getMinX(), env.getMinY(), env.getMaxX(), env.getMaxY() }; + + double sampleSpacing = Math.max(spacing / 10.0, 4); // heuristic + var contourMap = isolinesFromFunction(bounds, sampleSpacing, spacing, fn); + + /* + * Experienced 'Overlay input is mixed-dimension' issue when intersecting + * geometry collection of isolines with g - so force to MultiLineString. + */ + + var contours = PGS_Conversion.getChildren(contourMap).stream().map(c -> { + var cg = fromPShape(c); + return cg.getGeometryType().equals(Geometry.TYPENAME_POLYGON) ? cg.getBoundary() : cg; + }).toArray(LineString[]::new); + + var contourStrings = PGS.GEOM_FACTORY.createMultiLineString(contours); + var out = toPShape(OverlayNG.overlay(contourStrings, g, OverlayNG.INTERSECTION)); + + PGS_Conversion.setAllStrokeColor(out, micycle.pgs.color.Colors.PINK, 4, PConstants.SQUARE); + + return out; } /** @@ -935,7 +1077,7 @@ private static PShape offsetCurves(PShape shape, OffsetStyle style, double spaci } final BufferParameters bufParams = new BufferParameters(8, BufferParameters.CAP_FLAT, style.style, BufferParameters.DEFAULT_MITRE_LIMIT); -// bufParams.setSimplifyFactor(5); // can produce "poor" yet interesting results + // bufParams.setSimplifyFactor(5); // can produce "poor" yet interesting results spacing = Math.max(1, Math.abs(spacing)); // ensure positive and >=1 spacing = outwards ? 
spacing : -spacing; @@ -1009,8 +1151,8 @@ private static double[] generateDoubleSequence(double start, double end, double * @param spacingY * @return */ - private static ArrayList generateGrid(double minX, double minY, double maxX, double maxY, double spacingX, double spacingY) { - ArrayList grid = new ArrayList<>(); + private static List generateGrid(double minX, double minY, double maxX, double maxY, double spacingX, double spacingY) { + List grid = new ArrayList<>(); double[] y = generateDoubleSequence(minY, maxY, spacingY); double[] x = generateDoubleSequence(minX, maxX, spacingX); diff --git a/src/main/java/micycle/pgs/PGS_Conversion.java b/src/main/java/micycle/pgs/PGS_Conversion.java index a0c4d055..94299b8e 100644 --- a/src/main/java/micycle/pgs/PGS_Conversion.java +++ b/src/main/java/micycle/pgs/PGS_Conversion.java @@ -59,8 +59,8 @@ import org.locationtech.jts.io.WKBWriter; import org.locationtech.jts.io.WKTReader; import org.locationtech.jts.io.WKTWriter; -import org.locationtech.jts.io.geojson.GeoJsonReader; -import org.locationtech.jts.io.geojson.GeoJsonWriter; +import org.locationtech.jts.operation.polygonize.Polygonizer; +import org.locationtech.jts.precision.GeometryPrecisionReducer; import org.locationtech.jts.util.GeometricShapeFactory; import org.scoutant.polyline.PolylineDecoder; @@ -87,6 +87,25 @@ * Though certain conversion methods are utilised internally by the library, * they have been kept public to cater to more complex user requirements. *

+ * Closed-path semantics: Processing {@code PShape}s can be closed + * without unambiguously indicating whether they represent linework (a + * closed {@code LineString}) or an areal region (a {@code Polygon}). PGS + * resolves this ambiguity using the shape's {@code kind}: + *

    + *
+ *   • A closed {@code PShape} with {@code kind == PConstants.POLYGON} is treated as polygonal and converts to a JTS {@code Polygon} (holes via contours are supported).
+ *   • A {@code PShape} with {@code kind == PConstants.PATH} is treated as lineal and converts to a JTS {@code LineString}, even if closed.
+ *   • If a shape is not closed, it is always treated as lineal and converts to a JTS {@code LineString}, regardless of {@code kind} (an unclosed {@code POLYGON} kind cannot form a valid JTS {@code Polygon}).
+ * When converting from JTS to {@code PShape}, {@link #toPShape(Geometry)} + * encodes these semantics by setting the output {@code PShape}'s {@code kind} + * to {@code POLYGON} for polygonal JTS geometries and {@code PATH} for lineal + * JTS geometries, ensuring round-trip stability. + *
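A minimal sketch of this disambiguation, mirroring the construction pattern the library itself uses elsewhere in this diff (vertex values are arbitrary; the usual PGS, JTS and Processing imports are assumed):

```java
PShape square = new PShape(PShape.PATH);
square.setKind(PConstants.POLYGON); // closed + POLYGON kind -> treated as areal
square.beginShape();
square.vertex(0, 0);
square.vertex(100, 0);
square.vertex(100, 100);
square.vertex(0, 100);
square.endShape(PConstants.CLOSE);
Geometry areal = PGS_Conversion.fromPShape(square); // JTS Polygon

square.setKind(PConstants.PATH); // same closed vertices, PATH kind -> treated as lineal
Geometry lineal = PGS_Conversion.fromPShape(square); // closed JTS LineString
```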

* Note: JTS {@code Geometries} do not provide support for bezier curves. As * such, bezier curves are linearised/divided into straight line segments during * the conversion process from {@code PShape} to JTS {@code Geometry}. @@ -96,14 +115,13 @@ * {@link #PRESERVE_STYLE} (set to true by default), and * {@link #HANDLE_MULTICONTOUR} (set to false by default). Users are encouraged * to review these flags as part of more complicated workflows with this class. - * + * * @author Michael Carleton */ public final class PGS_Conversion { /** Approximate distance between successive sample points on bezier curves */ static final float BEZIER_SAMPLE_DISTANCE = 2; - private static Field MATRIX_FIELD, PSHAPE_FILL_FIELD; /** * A boolean flag that affects whether a PShape's style (fillColor, strokeColor, * strokeWidth) is preserved during PShape->Geometry->PShape @@ -129,7 +147,41 @@ public final class PGS_Conversion { * GitHub. */ public static boolean HANDLE_MULTICONTOUR = false; + /** + * When converting JTS {@link org.locationtech.jts.geom.Geometry Geometry} to a + * Processing {@link processing.core.PShape PShape} (inside + * {@link #toPShape(Geometry) toPShape()}), this flag controls whether PGS + * performs an explicit precision reduction step before writing + * vertices as floats. + *

+ * Processing {@code PShape} vertices are stored as 32-bit floats, so a + * double->float cast always introduces rounding. If this flag + * is false (default), PGS relies on that implicit rounding only. + *

+ * If this flag is true, PGS first snap-rounds each component + * geometry to a fixed grid of 1/1024 and then casts to float. This + * can reduce the (very rare) chance that float rounding breaks tight + * topological relationships (e.g. a conforming mesh where vertices/edges must + * match exactly), at the cost of intentionally coarsening coordinates even in + * cases where the float cast alone might have preserved more detail. + *

+ * Scope: This is currently applied only when converting + * multi-geometries / collections (i.e. {@code GeometryCollection}, + * {@code MultiPolygon}, {@code MultiLineString}) containing more than one + * component geometry. These cases are more susceptible to float rounding + * causing adjacent components to no longer share identical boundary vertices + * after conversion. + *

+ * See {@link org.locationtech.jts.precision.GeometryPrecisionReducer + * GeometryPrecisionReducer} for more information. + *
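A hypothetical opt-in; 'tightMesh' stands for a multi-part JTS geometry (e.g. a conforming MultiPolygon mesh) produced elsewhere:

```java
PGS_Conversion.FLOAT_SAFE_MESH_CONVERSION = true; // snap-round each component before the float cast
PShape mesh = PGS_Conversion.toPShape(tightMesh);
PGS_Conversion.FLOAT_SAFE_MESH_CONVERSION = false; // restore the default afterwards
```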

+ * Default = false. + */ + public static boolean FLOAT_SAFE_MESH_CONVERSION = false; + + private static GeometryPrecisionReducer reducer = new GeometryPrecisionReducer(PGS.PM); + private static Field MATRIX_FIELD, PSHAPE_FILL_FIELD; static { try { MATRIX_FIELD = PShape.class.getDeclaredField("matrix"); @@ -218,40 +270,68 @@ public static PShape toPShape(final Geometry g) { } else { shape.setFamily(GROUP); for (int i = 0; i < g.getNumGeometries(); i++) { - shape.addChild(toPShape(g.getGeometryN(i))); + Geometry child = g.getGeometryN(i); + if (FLOAT_SAFE_MESH_CONVERSION) { + Geometry reducedChild = reducer.reduce(child); + reducedChild.setUserData(child.getUserData()); + child = reducedChild; + } + shape.addChild(toPShape(child)); } } break; - // TODO treat closed linestrings as unfilled & unclosed paths? - case Geometry.TYPENAME_LINEARRING : // LinearRings are closed by definition - case Geometry.TYPENAME_LINESTRING : // LineStrings may be open + case Geometry.TYPENAME_LINEARRING : { + // LinearRings are closed by definition + // treat as lineal + final LineString ring = (LineString) g; + shape.setFamily(PShape.PATH); + shape.setFill(false); + + shape.beginShape(PConstants.PATH); // encode polygonness + Coordinate[] coords = ring.getCoordinates(); + // Skip the closing coordinate (same as first) + for (int i = 0; i < coords.length - 1; i++) { + shape.vertex((float) coords[i].x, (float) coords[i].y); + } + shape.endShape(PConstants.CLOSE); + break; + } + + case Geometry.TYPENAME_LINESTRING : { + // LineStrings may be open or closed. + // always treat as linear path (no fill) final LineString l = (LineString) g; final boolean closed = l.isClosed(); + shape.setFamily(PShape.PATH); - shape.beginShape(); - Coordinate[] coords = l.getCoordinates(); - for (int i = 0; i < coords.length - (closed ? 1 : 0); i++) { + shape.setFill(false); // never fill LineStrings (even if closed) + + shape.beginShape(PConstants.PATH); // encode lineal + final Coordinate[] coords = l.getCoordinates(); + + // If closed, skip the duplicated closing vertex + final int n = coords.length - (closed ? 1 : 0); + for (int i = 0; i < n; i++) { shape.vertex((float) coords[i].x, (float) coords[i].y); } - if (closed) { // closed vertex was skipped, so close the path - shape.endShape(PConstants.CLOSE); + + if (closed) { + shape.endShape(PConstants.CLOSE); // close the path, still unfilled } else { - // shape is more akin to an unconnected line: keep as PATH shape, but don't fill - // visually shape.endShape(); - shape.setFill(false); } break; + } case Geometry.TYPENAME_POLYGON : final Polygon polygon = (Polygon) g; shape.setFamily(PShape.PATH); - shape.beginShape(); + shape.beginShape(PConstants.POLYGON); /* * Outer and inner loops are iterated up to length-1 to skip the point that * closes the JTS shape (same as the first point). */ - coords = polygon.getExteriorRing().getCoordinates(); + Coordinate[] coords = polygon.getExteriorRing().getCoordinates(); for (int i = 0; i < coords.length - 1; i++) { final Coordinate coord = coords[i]; shape.vertex((float) coord.x, (float) coord.y); @@ -367,7 +447,7 @@ public static PShape toPShape(Collection geometries) { * unsupported. 
*/ public static Geometry fromPShape(PShape shape) { - Geometry g = GEOM_FACTORY.createEmpty(2); + Geometry g; switch (shape.getFamily()) { case PConstants.GROUP : @@ -386,6 +466,8 @@ public static Geometry fromPShape(PShape shape) { case PShape.PRIMITIVE : g = fromPrimitive(shape); // (no holes) break; + default : + throw new IllegalArgumentException("Unrecognised (invalid) PShape family type: " + shape.getFamily()); } if (PRESERVE_STYLE && g != null) { @@ -469,9 +551,10 @@ private static Geometry fromCreateShape(PShape shape) { /** * Extracts the contours from a POLYGON or PATH - * PShape, represented as lists of PVector points. It extracts both the exterior - * contour (perimeter) and interior contours (holes). For such PShape types, all - * contours after the first are guaranteed to be holes. + * PShape, represented as lists of PVector points (having closing vertex). It + * extracts both the exterior contour (perimeter) and interior contours (holes). + * For such PShape types, all contours after the first are guaranteed to be + * holes. *

* Background: The PShape data structure stores all vertices in a single array, * with contour breaks designated in a separate array of vertex codes. This @@ -528,7 +611,7 @@ public static List> toContours(PShape shape) { continue; default : // VERTEX PVector v = shape.getVertex(i).copy(); - // skip consecutive duplicate vertices + // NOTE skip consecutive duplicate vertices if (lastVertex == null || !(v.x == lastVertex.x && v.y == lastVertex.y)) { rings.get(currentGroup).add(v); lastVertex = v; @@ -568,20 +651,28 @@ private static Geometry fromVertices(PShape shape) { return GEOM_FACTORY.createPoint(outerRing[0]); } else if (outerRing.length == 2) { return GEOM_FACTORY.createLineString(outerRing); - } else if (shape.isClosed()) { // closed geometry or path - if (HANDLE_MULTICONTOUR) { // handle single shapes that *may* represent multiple shapes over many contours - return fromMultiContourShape(rings, false, false); - } else { // assume all contours beyond the first represent holes - LinearRing outer = GEOM_FACTORY.createLinearRing(outerRing); // should always be valid - LinearRing[] holes = new LinearRing[rings.size() - 1]; // Create linear ring for each hole in the shape - for (int j = 1; j < rings.size(); j++) { - final Coordinate[] innerCoords = rings.get(j); - holes[j - 1] = GEOM_FACTORY.createLinearRing(innerCoords); + } else { + final boolean closed = shape.isClosed(); + final int kind = shape.getKind(); + final boolean hasHoles = contours.size() > 1; + + // POLYGON kind only matters when closed (or when holes exist) + final boolean polygonal = hasHoles || (closed && kind == PConstants.POLYGON); + + if (polygonal) { + if (HANDLE_MULTICONTOUR) { + return fromMultiContourShape(rings, false, false); + } else { + final LinearRing outer = GEOM_FACTORY.createLinearRing(outerRing); + final LinearRing[] holes = new LinearRing[rings.size() - 1]; + for (int j = 1; j < rings.size(); j++) { + holes[j - 1] = GEOM_FACTORY.createLinearRing(rings.get(j)); + } + return GEOM_FACTORY.createPolygon(outer, holes); } - return GEOM_FACTORY.createPolygon(outer, holes); + } else { + return GEOM_FACTORY.createLineString(outerRing); } - } else { // not closed - return GEOM_FACTORY.createLineString(outerRing); } } @@ -703,7 +794,7 @@ private static Geometry fromMultiContourShape(List contours, boole "PGS_Conversion Error: Shape contour #%s was identified as a hole but no existing exterior rings contained it.", j)); } } - } else { // this ring is new polygon (or explictly contour #1) + } else { // this ring is new polygon (or explicitly contour #1) ring = GEOM_FACTORY.createLinearRing(contourCoords); if (previousRingIsHole) { previousRingIsHole = false; @@ -835,7 +926,7 @@ public static final PShape toPointsPShape(PVector... vertices) { */ public static final PShape toPointsPShape(Collection points) { PShape shape = new PShape(); - shape.setFamily(PShape.GEOMETRY); + shape.setFamily(PShape.PATH); shape.setStrokeCap(PConstants.ROUND); shape.setStroke(true); shape.setStroke(micycle.pgs.color.Colors.PINK); @@ -916,17 +1007,52 @@ public static List toPVector(PShape shape) { */ public static SimpleGraph toGraph(PShape shape) { final SimpleGraph graph = new SimpleWeightedGraph<>(PEdge.class); + for (PShape child : getChildren(shape)) { - final int stride = child.getKind() == PShape.LINES ? 2 : 1; - // Handle other child shapes (e.g., faces) - for (int i = 0; i < child.getVertexCount() - (child.isClosed() ? 
0 : 1); i += stride) { + + final int kind = child.getKind(); + + // Contour-aware handling (preserves holes as separate rings) + if (kind == PConstants.POLYGON || kind == PShape.PATH) { + final List> rings = toContours(child); + + for (List ring : rings) { + final int n = ring.size(); + if (n < 2) { + continue; + } + + for (int i = 0; i < n - 1; i++) { + final PVector a = ring.get(i); + final PVector b = ring.get((i + 1)); + if (a.equals(b)) { + continue; + } + + final PEdge e = new PEdge(a, b); + graph.addVertex(a); + graph.addVertex(b); + graph.addEdge(a, b, e); + graph.setEdgeWeight(e, e.length()); + } + } + + continue; + } + + // Original behavior for LINES and other non-contour shapes + final int stride = (kind == PConstants.LINES) ? 2 : 1; + final int vc = child.getVertexCount(); + final int end = vc - (child.isClosed() ? 0 : 1); + + for (int i = 0; i < end; i += stride) { final PVector a = child.getVertex(i); - final PVector b = child.getVertex((i + 1) % child.getVertexCount()); + final PVector b = child.getVertex((i + 1) % vc); if (a.equals(b)) { continue; } - final PEdge e = new PEdge(a, b); + final PEdge e = new PEdge(a, b); graph.addVertex(a); graph.addVertex(b); graph.addEdge(a, b, e); @@ -939,15 +1065,20 @@ public static SimpleGraph toGraph(PShape shape) { /** * Converts a given SimpleGraph consisting of PVectors and PEdges into a PShape - * by polygonizing its edges. If the graph represented a shape with holes, these - * will not be preserved during the conversion. + * by polygonizing its edges. Nested rings are inferred as holes of the + * enclosing polygon, rather than returned as separate overlapping polygons. * * @param graph the graph to be converted into a PShape. * @return a PShape representing the polygonized edges of the graph. * @since 1.4.0 */ public static PShape fromGraph(SimpleGraph graph) { - return PGS.polygonizeNodedEdges(graph.edgeSet()); + final Polygonizer polygonizer = new Polygonizer(); + var edges = graph.edgeSet().stream().map(e -> PGS.createLineString(e.a, e.b)).toList(); + polygonizer.add(edges); + @SuppressWarnings("unchecked") + List polys = (List) polygonizer.getPolygons(); + return toPShape(PGS.dropHolePolygons(polys, false)); } /** @@ -1111,6 +1242,8 @@ static SimpleGraph toDualGraph(Collection meshFaces * Writes the Well-Known Text representation of a shape. The * Well-Known Text format is defined in the OGC Simple Features * Specification for SQL. + *

+ * This variant uses single-precision floating point for output coordinates. * * @param shape shape to process * @return a Geometry Tagged Text string @@ -1119,8 +1252,29 @@ static SimpleGraph toDualGraph(Collection meshFaces */ public static String toWKT(PShape shape) { WKTWriter writer = new WKTWriter(2); - writer.setPrecisionModel(new PrecisionModel(PrecisionModel.FIXED)); // 1 d.p. -// writer.setMaxCoordinatesPerLine(1); + writer.setPrecisionModel(new PrecisionModel(PrecisionModel.FLOATING_SINGLE)); + // writer.setMaxCoordinatesPerLine(1); + return writer.writeFormatted(fromPShape(shape)); + } + + /** + * Writes the Well-Known Text representation of a shape. The + * Well-Known Text format is defined in the OGC Simple Features + * Specification for SQL. + *
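A brief, hypothetical comparison of the two signatures ('shape' is any PShape):

```java
String wktFull = PGS_Conversion.toWKT(shape);     // single-precision float coordinates
String wktShort = PGS_Conversion.toWKT(shape, 3); // coarser output with fewer decimal places (parameter capped at 10)
```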

+ * The precision of output coordinates is controlled via the + * decimalPlaces parameter. A larger value will preserve more + * digits after the decimal point. + * + * @param shape shape to process + * @return a Geometry Tagged Text string + * @since 2.2 + * @see #fromWKT(String) + */ + public static String toWKT(PShape shape, int decimalPlaces) { + decimalPlaces = Math.min(decimalPlaces, 10); // 10 max + WKTWriter writer = new WKTWriter(2); + writer.setPrecisionModel(new PrecisionModel(Math.pow(10, decimalPlaces - 1))); return writer.writeFormatted(fromPShape(shape)); } @@ -1282,36 +1436,17 @@ public static PShape fromEncodedPolyline(String encodedPolyline) { coords.add(new Coordinate(x, y)); }); - return toPShape(GEOM_FACTORY.createLineString(coords.toCoordinateArray())); - } + Coordinate[] coordArray = coords.toCoordinateArray(); + boolean isClosed = coordArray.length > 1 && coordArray[0].equals2D(coordArray[coordArray.length - 1]); - /** - * Writes a shape into the string representation of its GeoJSON format. - * - * @param shape - * @return json JSON string - * @since 1.3.0 - */ - public static String toGeoJSON(PShape shape) { - final GeoJsonWriter writer = new GeoJsonWriter(1); - writer.setForceCCW(true); - return writer.write(fromPShape(shape)); - } - - /** - * Converts a GeoJSON representation of a shape into its PShape counterpart. - * - * @param json GeoJSON string - * @return PShape represented by the GeoJSON - * @since 1.3.0 - */ - public static PShape fromGeoJSON(String json) { - final GeoJsonReader reader = new GeoJsonReader(GEOM_FACTORY); - try { - return toPShape(reader.read(json)); - } catch (ParseException e) { - System.err.println("Error occurred when converting json to shape."); - return new PShape(); + /* + * NOTE Inherently ambiguous (did the closed polyline represent an areal or + * lineal geometry?). Treat closed polyline as polygon. + */ + if (isClosed && coordArray.length >= 4) { + return toPShape(GEOM_FACTORY.createPolygon(coordArray)); + } else { + return toPShape(GEOM_FACTORY.createLineString(coordArray)); } } @@ -1373,7 +1508,7 @@ public static PShape fromPVector(Collection vertices) { shape.setStroke(closed ? Colors.PINK : Colors.WHITE); shape.setStrokeWeight(2); - shape.beginShape(); + shape.beginShape(closed ? PConstants.POLYGON : PConstants.PATH); for (int i = 0; i < verticesList.size() - (closed ? 
1 : 0); i++) { PVector v = verticesList.get(i); shape.vertex(v.x, v.y); @@ -1423,7 +1558,7 @@ public static PShape fromContours(List shell, @Nullable List * A hull is the smallest enclosing shape of some nature that contains all @@ -132,7 +132,7 @@ public static PShape concaveHull(PShape shapeSet, double concavity, boolean tigh * * @param points * @param concavity a factor value between 0 and 1, specifying how concave the - * output is (where 1 is maximal concavity) + * output is (where 0 is maximal concavity) * @return * @since 1.1.0 * @see #concaveHullBFS(List, double) diff --git a/src/main/java/micycle/pgs/PGS_Meshing.java b/src/main/java/micycle/pgs/PGS_Meshing.java index 2297e9a4..a3f9e82b 100644 --- a/src/main/java/micycle/pgs/PGS_Meshing.java +++ b/src/main/java/micycle/pgs/PGS_Meshing.java @@ -2,78 +2,83 @@ import static micycle.pgs.PGS_Conversion.fromPShape; import static micycle.pgs.PGS_Conversion.getChildren; +import static micycle.pgs.PGS_Conversion.toPShape; import java.util.ArrayList; import java.util.Arrays; import java.util.Collection; +import java.util.Comparator; import java.util.HashMap; import java.util.HashSet; +import java.util.IdentityHashMap; import java.util.List; import java.util.Map; import java.util.Set; import java.util.stream.Collectors; import java.util.stream.IntStream; + import org.apache.commons.lang3.tuple.Pair; import org.apache.commons.math3.random.RandomGenerator; import org.jgrapht.alg.connectivity.ConnectivityInspector; import org.jgrapht.alg.interfaces.MatchingAlgorithm; -import org.jgrapht.alg.interfaces.VertexColoringAlgorithm.Coloring; import org.jgrapht.alg.matching.blossom.v5.KolmogorovWeightedMatching; import org.jgrapht.alg.matching.blossom.v5.KolmogorovWeightedPerfectMatching; import org.jgrapht.alg.matching.blossom.v5.ObjectiveSense; -import org.jgrapht.alg.spanning.GreedyMultiplicativeSpanner; import org.jgrapht.alg.util.NeighborCache; -import org.jgrapht.graph.AbstractBaseGraph; import org.jgrapht.graph.DefaultEdge; import org.jgrapht.graph.SimpleGraph; -import org.locationtech.jts.algorithm.Orientation; +import org.locationtech.jts.coverage.CoverageCleaner; import org.locationtech.jts.coverage.CoverageSimplifier; import org.locationtech.jts.coverage.CoverageValidator; import org.locationtech.jts.geom.Coordinate; -import org.locationtech.jts.geom.CoordinateList; import org.locationtech.jts.geom.Geometry; -import org.locationtech.jts.geom.Polygon; -import org.locationtech.jts.index.strtree.STRtree; import org.locationtech.jts.noding.SegmentString; -import org.locationtech.jts.operation.overlayng.OverlayNG; +import org.locationtech.jts.operation.polygonize.Polygonizer; import org.tinfour.common.IConstraint; import org.tinfour.common.IIncrementalTin; import org.tinfour.common.IQuadEdge; import org.tinfour.common.SimpleTriangle; import org.tinfour.common.Vertex; import org.tinfour.utils.TriangleCollector; -import org.tinspin.index.PointMap; -import org.tinspin.index.kdtree.KDTree; -import com.vividsolutions.jcs.conflate.coverage.CoverageCleaner; -import com.vividsolutions.jcs.conflate.coverage.CoverageCleaner.Parameters; -import com.vividsolutions.jump.feature.FeatureCollection; -import com.vividsolutions.jump.feature.FeatureDatasetFactory; -import com.vividsolutions.jump.feature.FeatureUtil; -import com.vividsolutions.jump.task.DummyTaskMonitor; +import com.github.micycle1.geoblitz.EndpointSnapper; import it.unimi.dsi.util.XoRoShiRo128PlusRandomGenerator; import micycle.pgs.PGS_Conversion.PShapeData; import micycle.pgs.color.Colors; import 
micycle.pgs.commons.AreaMerge; +import micycle.pgs.commons.EdgePrunedFaces; import micycle.pgs.commons.IncrementalTinDual; import micycle.pgs.commons.PEdge; import micycle.pgs.commons.PMesh; -import micycle.pgs.commons.RLFColoring; import micycle.pgs.commons.SpiralQuadrangulation; import processing.core.PConstants; import processing.core.PShape; import processing.core.PVector; /** - * Mesh generation (excluding triangulation) and processing. + * Mesh generation and mesh processing utilities (excluding triangulation). + * + *

+ * This class contains algorithms that operate on mesh-like representations + * derived from shapes (most commonly a Delaunay triangulation), producing + * alternative adjacency structures (graphs/faces), quad meshes, and cleaned or + * simplified meshes. In contrast to polygon-focused operations, these methods + * work with connectivity (vertices/edges/faces) as a first-class + * concern. + * *

- * Many of the methods within this class process an existing Delaunay - * triangulation; you may first generate such a triangulation from a shape using - * the - * {@link PGS_Triangulation#delaunayTriangulationMesh(PShape, Collection, boolean, int, boolean) - * delaunayTriangulationMesh()} method. - * + * Many methods expect an {@link IIncrementalTin} (a Delaunay TIN) or a + * {@link PShape} that encodes a mesh (often a GROUP of faces/edges). A typical + * workflow is: + *

    + *
+ *   1. Generate a triangulation via {@link PGS_Triangulation}.
+ *   2. Derive or transform connectivity (Gabriel / RNG / Urquhart / spanners / duals).
+ *   3. Optionally quadrangulate, smooth, simplify, subdivide, or repair the mesh.
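A rough, hypothetical sketch of that workflow ('myShape' is a placeholder polygonal PShape; the delaunayTriangulationMesh() arguments after the shape are placeholders for the signature referenced earlier in this class javadoc diff, so consult PGS_Triangulation for their exact meaning):

```java
IIncrementalTin tin = PGS_Triangulation.delaunayTriangulationMesh(myShape, null, true, 0, true);
PShape faces = PGS_Meshing.urquhartFaces(tin, true);       // derive faces by pruning each triangle's longest edge
PShape smoothed = PGS_Meshing.smoothMesh(faces, 10, true); // Taubin-smooth the faces, preserving the perimeter
shape(smoothed);
```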
+ * * @author Michael Carleton * @since 1.2.0 */ @@ -110,32 +115,7 @@ private PGS_Meshing() { * @see #gabrielFaces(IIncrementalTin, boolean) */ public static PShape urquhartFaces(final IIncrementalTin triangulation, final boolean preservePerimeter) { - final HashSet edges = PGS.makeHashSet(triangulation.getMaximumEdgeAllocationIndex()); - final HashSet uniqueLongestEdges = PGS.makeHashSet(triangulation.getMaximumEdgeAllocationIndex()); - - final boolean notConstrained = triangulation.getConstraints().isEmpty(); - - TriangleCollector.visitSimpleTriangles(triangulation, t -> { - final IConstraint constraint = t.getContainingRegion(); - if (notConstrained || (constraint != null && constraint.definesConstrainedRegion())) { - edges.add(t.getEdgeA().getBaseReference()); - edges.add(t.getEdgeB().getBaseReference()); - edges.add(t.getEdgeC().getBaseReference()); - final IQuadEdge longestEdge = findLongestEdge(t).getBaseReference(); - if (!preservePerimeter || (preservePerimeter && !longestEdge.isConstraintRegionBorder())) { - uniqueLongestEdges.add(longestEdge); - } - } - }); - - edges.removeAll(uniqueLongestEdges); - - final Collection meshEdges = new ArrayList<>(edges.size()); - edges.forEach(edge -> meshEdges.add(new PEdge(edge.getA().x, edge.getA().y, edge.getB().x, edge.getB().y))); - - PShape mesh = PGS.polygonizeNodedEdges(meshEdges); - - return removeHoles(mesh, triangulation); + return EdgePrunedFaces.urquhartFaces(triangulation, preservePerimeter); } /** @@ -162,42 +142,7 @@ public static PShape urquhartFaces(final IIncrementalTin triangulation, final bo * @see #urquhartFaces(IIncrementalTin, boolean) */ public static PShape gabrielFaces(final IIncrementalTin triangulation, final boolean preservePerimeter) { - final HashSet edges = new HashSet<>(); - final HashSet vertices = new HashSet<>(); - - final boolean notConstrained = triangulation.getConstraints().isEmpty(); - TriangleCollector.visitSimpleTriangles(triangulation, t -> { - final IConstraint constraint = t.getContainingRegion(); - if (notConstrained || (constraint != null && constraint.definesConstrainedRegion())) { - edges.add(t.getEdgeA().getBaseReference()); // add edge to set - edges.add(t.getEdgeB().getBaseReference()); // add edge to set - edges.add(t.getEdgeC().getBaseReference()); // add edge to set - vertices.add(t.getVertexA()); - vertices.add(t.getVertexB()); - vertices.add(t.getVertexC()); - } - }); - - final PointMap tree = KDTree.create(2); - vertices.forEach(v -> tree.insert(new double[] { v.x, v.y }, v)); - - final HashSet nonGabrielEdges = new HashSet<>(); // base references to edges that should be removed - edges.forEach(edge -> { - final double[] midpoint = midpoint(edge); - final Vertex near = tree.query1nn(midpoint).value(); - if (near != edge.getA() && near != edge.getB()) { - if (!preservePerimeter || (preservePerimeter && !edge.isConstraintRegionBorder())) { // don't remove constraint borders (holes) - nonGabrielEdges.add(edge); // base reference - } - } - }); - edges.removeAll(nonGabrielEdges); - - final Collection meshEdges = new ArrayList<>(edges.size()); - edges.forEach(edge -> meshEdges.add(new PEdge(edge.getA().x, edge.getA().y, edge.getB().x, edge.getB().y))); - - PShape mesh = PGS.polygonizeNodedEdges(meshEdges); - return removeHoles(mesh, triangulation); + return EdgePrunedFaces.gabrielFaces(triangulation, preservePerimeter); } /** @@ -217,37 +162,7 @@ public static PShape gabrielFaces(final IIncrementalTin triangulation, final boo * @since 1.3.0 */ public static PShape relativeNeighborFaces(final 
IIncrementalTin triangulation, final boolean preservePerimeter) { - SimpleGraph graph = PGS_Triangulation.toTinfourGraph(triangulation); - NeighborCache cache = new NeighborCache<>(graph); - - Set edges = new HashSet<>(graph.edgeSet()); - - /* - * If any vertex is nearer to both vertices of an edge, than the length of the - * edge, this edge does not belong in the RNG. - */ - graph.edgeSet().forEach(e -> { - double l = e.getLength(); - cache.neighborsOf(e.getA()).forEach(n -> { - if (Math.max(n.getDistance(e.getA()), n.getDistance(e.getB())) < l) { - if (!preservePerimeter || (preservePerimeter && !e.isConstraintRegionBorder())) { - edges.remove(e); - } - } - }); - cache.neighborsOf(e.getB()).forEach(n -> { - if (Math.max(n.getDistance(e.getA()), n.getDistance(e.getB())) < l) { - if (!preservePerimeter || (preservePerimeter && !e.isConstraintRegionBorder())) { - edges.remove(e); - } - } - }); - }); - - List edgesOut = edges.stream().map(PGS_Triangulation::toPEdge).collect(Collectors.toList()); - - PShape mesh = PGS.polygonizeNodedEdges(edgesOut); - return removeHoles(mesh, triangulation); + return EdgePrunedFaces.relativeNeighborFaces(triangulation, preservePerimeter); } /** @@ -265,26 +180,7 @@ public static PShape relativeNeighborFaces(final IIncrementalTin triangulation, * @since 1.3.0 */ public static PShape spannerFaces(final IIncrementalTin triangulation, int k, final boolean preservePerimeter) { - SimpleGraph graph = PGS_Triangulation.toGraph(triangulation); - if (graph.edgeSet().isEmpty()) { - return new PShape(); - } - - k = Math.max(2, k); // min(2) since k=1 returns triangulation - GreedyMultiplicativeSpanner spanner = new GreedyMultiplicativeSpanner<>(graph, k); - List spannerEdges = spanner.getSpanner().stream().collect(Collectors.toList()); - if (preservePerimeter) { - if (triangulation.getConstraints().isEmpty()) { // does not have constraints - spannerEdges.addAll(triangulation.getPerimeter().stream().map(PGS_Triangulation::toPEdge).collect(Collectors.toList())); - } else { // has constraints - spannerEdges.addAll(triangulation.getEdges().stream().filter(IQuadEdge::isConstraintRegionBorder).map(PGS_Triangulation::toPEdge) - .collect(Collectors.toList())); - } - } - - PShape mesh = PGS.polygonizeNodedEdges(spannerEdges); - - return removeHoles(mesh, triangulation); + return EdgePrunedFaces.spannerFaces(triangulation, k, preservePerimeter); } /** @@ -400,57 +296,7 @@ public static PShape splitQuadrangulation(final IIncrementalTin triangulation) { * similar approach, but faster */ public static PShape edgeCollapseQuadrangulation(final IIncrementalTin triangulation, final boolean preservePerimeter) { - /*- - * From 'Fast unstructured quadrilateral mesh generation'. - * A better coloring approach is given in 'Face coloring in unstructured CFD codes'. - * - * First partition the edges of the triangular mesh into three groups such that - * no triangle has two edges of the same color (find groups by reducing to a - * graph-coloring). - * Then obtain an all-quadrilateral mesh by removing all edges of *one* - * particular color. 
- */ - final boolean unconstrained = triangulation.getConstraints().isEmpty(); - final AbstractBaseGraph graph = new SimpleGraph<>(DefaultEdge.class); - TriangleCollector.visitSimpleTriangles(triangulation, t -> { - final IConstraint constraint = t.getContainingRegion(); - if (unconstrained || (constraint != null && constraint.definesConstrainedRegion())) { - graph.addVertex(t.getEdgeA().getBaseReference()); - graph.addVertex(t.getEdgeB().getBaseReference()); - graph.addVertex(t.getEdgeC().getBaseReference()); - - graph.addEdge(t.getEdgeA().getBaseReference(), t.getEdgeB().getBaseReference()); - graph.addEdge(t.getEdgeA().getBaseReference(), t.getEdgeC().getBaseReference()); - graph.addEdge(t.getEdgeB().getBaseReference(), t.getEdgeC().getBaseReference()); - } - }); - - Coloring coloring = new RLFColoring<>(graph, 1337).getColoring(); - - final HashSet perimeter = new HashSet<>(triangulation.getPerimeter()); - if (!unconstrained) { - perimeter.clear(); // clear, the perimeter of constrained tin is unaffected by the constraint - } - - final Collection meshEdges = new ArrayList<>(); - coloring.getColors().forEach((edge, color) -> { - /* - * "We can remove the edges of any one of the colors, however a convenient - * choice is the one that leaves the fewest number of unmerged boundary - * triangles". -- ideal, but not implemented here... - */ - // NOTE could now apply Topological optimization, as given in paper. - if ((color < 2) || (preservePerimeter && (edge.isConstraintRegionBorder() || perimeter.contains(edge)))) { - meshEdges.add(new PEdge(edge.getA().x, edge.getA().y, edge.getB().x, edge.getB().y)); - } - }); - - PShape quads = PGS.polygonizeNodedEdges(meshEdges); - if (triangulation.getConstraints().size() < 2) { // assume constraint 1 is the boundary (not a hole) - return quads; - } else { - return removeHoles(quads, triangulation); - } + return EdgePrunedFaces.edgeCollapseQuadrangulation(triangulation, preservePerimeter); } /** @@ -471,33 +317,70 @@ public static PShape edgeCollapseQuadrangulation(final IIncrementalTin triangula * @return a GROUP PShape, where each child shape is one quadrangle * @since 1.2.0 */ - public static PShape centroidQuadrangulation(final IIncrementalTin triangulation, final boolean preservePerimeter) { - final boolean unconstrained = triangulation.getConstraints().isEmpty(); - final HashSet edges = new HashSet<>(); - TriangleCollector.visitSimpleTriangles(triangulation, t -> { - final IConstraint constraint = t.getContainingRegion(); - if (unconstrained || (constraint != null && constraint.definesConstrainedRegion())) { - Vertex centroid = centroid(t); - edges.add(new PEdge(centroid.getX(), centroid.getY(), t.getVertexA().x, t.getVertexA().y)); - edges.add(new PEdge(centroid.getX(), centroid.getY(), t.getVertexB().x, t.getVertexB().y)); - edges.add(new PEdge(centroid.getX(), centroid.getY(), t.getVertexC().x, t.getVertexC().y)); + public static PShape centroidQuadrangulation(final IIncrementalTin tin, final boolean preservePerimeter) { + final boolean unconstrained = tin.getConstraints().isEmpty(); + final var gf = PGS.GEOM_FACTORY; + + // Collect accepted triangles: centroids + base-edge adjacency + final List centroids = new ArrayList<>(); + final Map edgeAdj = new IdentityHashMap<>(); + + TriangleCollector.visitSimpleTriangles(tin, t -> { + final IConstraint c = t.getContainingRegion(); + if (!(unconstrained || (c != null && c.definesConstrainedRegion()))) { + return; } + + final int tid = centroids.size(); + centroids.add(new double[] { (t.getVertexA().x 
+ t.getVertexB().x + t.getVertexC().x) / 3.0, + (t.getVertexA().y + t.getVertexB().y + t.getVertexC().y) / 3.0 }); + + addAdj(edgeAdj, t.getEdgeA().getBaseReference(), tid); + addAdj(edgeAdj, t.getEdgeB().getBaseReference(), tid); + addAdj(edgeAdj, t.getEdgeC().getBaseReference(), tid); }); - if (preservePerimeter) { - List perimeter = triangulation.getPerimeter(); - triangulation.edges().forEach(edge -> { - if (edge.isConstraintRegionBorder() || (unconstrained && perimeter.contains(edge))) { - edges.add(new PEdge(edge.getA().x, edge.getA().y, edge.getB().x, edge.getB().y)); - } - }); + if (centroids.isEmpty()) { + return new PShape(); + } + + final List out = new ArrayList<>(); + for (Map.Entry en : edgeAdj.entrySet()) { + final IQuadEdge e = en.getKey(); + final int[] inc = en.getValue(); + final Vertex A = e.getA(), B = e.getB(); + + if (inc[0] >= 0 && inc[1] >= 0) { + // Interior edge -> quad [A, c0, B, c1, A] + final double[] c0 = centroids.get(inc[0]); + final double[] c1 = centroids.get(inc[1]); + final Coordinate[] ring = new Coordinate[] { new Coordinate(A.x, A.y), new Coordinate(c0[0], c0[1]), new Coordinate(B.x, B.y), + new Coordinate(c1[0], c1[1]), new Coordinate(A.x, A.y) }; + out.add(PGS_Conversion.toPShape(gf.createPolygon(ring))); + } else if (preservePerimeter) { + // Boundary edge -> triangle [A, c, B, A] + final int tid = inc[0] >= 0 ? inc[0] : inc[1]; + final double[] c = centroids.get(tid); + final Coordinate[] tri = new Coordinate[] { new Coordinate(A.x, A.y), new Coordinate(c[0], c[1]), new Coordinate(B.x, B.y), + new Coordinate(A.x, A.y) }; + out.add(PGS_Conversion.toPShape(gf.createPolygon(tri))); + } } - final PShape quads = PGS.polygonizeNodedEdges(edges); - if (triangulation.getConstraints().size() < 2) { // assume constraint 1 is the boundary (not a hole) - return quads; + final PShape quads = PGS_Conversion.flatten(out); + return quads; + } + + private static void addAdj(Map adj, IQuadEdge base, int tid) { + int[] a = adj.get(base); + if (a == null) { + a = new int[] { -1, -1 }; + adj.put(base, a); + } + if (a[0] < 0) { + a[0] = tid; } else { - return removeHoles(quads, triangulation); + a[1] = tid; } } @@ -546,9 +429,9 @@ public static PShape matchingQuadrangulation(final IIncrementalTin triangulation Set seen = new HashSet<>(g.vertexSet()); var quads = collapsedEdges.stream().map(e -> { var t1 = g.getEdgeSource(e); - var f1 = toPShape(t1); + var f1 = triToPShape(t1); var t2 = g.getEdgeTarget(e); - var f2 = toPShape(t2); + var f2 = triToPShape(t2); seen.remove(t1); seen.remove(t2); @@ -558,80 +441,11 @@ public static PShape matchingQuadrangulation(final IIncrementalTin triangulation // include uncollapsed triangles (if any) seen.forEach(t -> { - quads.add(toPShape(t)); + quads.add(triToPShape(t)); }); - return PGS_Conversion.flatten(quads); - } - - /** - * Removes (what should be) holes from a polygonized quadrangulation. - *

- * When the polygonizer is applied to the collapsed triangles of a - * triangulation, it cannot determine which collapsed regions represent holes in - * the quadrangulation and will consequently fill them in. The subroutine below - * restores holes/topology, detecting which polygonized face(s) are original - * holes. Note the geometry of the original hole/constraint and its associated - * polygonized face are different, since quads are polygonized, not triangles - * (hence an overlap metric is used to match candidates). - * - * @param faces faces of the quadrangulation - * @param triangulation - * @return - */ - private static PShape removeHoles(PShape faces, IIncrementalTin triangulation) { - List holes = new ArrayList<>(triangulation.getConstraints()); // copy list - if (holes.size() <= 1) { - return faces; - } - holes = holes.subList(1, holes.size()); // slice off perimeter constraint (not a hole) - - STRtree tree = new STRtree(); - holes.stream().map(constraint -> constraint.getVertices()).iterator().forEachRemaining(vertices -> { - CoordinateList coords = new CoordinateList(); // coords of constraint - vertices.forEach(v -> coords.add(new Coordinate(v.x, v.y))); - coords.closeRing(); - - if (!Orientation.isCCWArea(coords.toCoordinateArray())) { // triangulation holes are CW - Polygon polygon = PGS.GEOM_FACTORY.createPolygon(coords.toCoordinateArray()); - tree.insert(polygon.getEnvelopeInternal(), polygon); - } - }); - - List nonHoles = PGS_Conversion.getChildren(faces).parallelStream().filter(quad -> { - /* - * If quad overlaps with a hole detect whether it *is* that hole via Hausdorff - * Similarity. - */ - final Geometry g = PGS_Conversion.fromPShape(quad); - - @SuppressWarnings("unchecked") - List matches = tree.query(g.getEnvelopeInternal()); - - for (Polygon m : matches) { - try { - // PGS_ShapePredicates.overlap() inlined here - Geometry overlap = OverlayNG.overlay(m, g, OverlayNG.INTERSECTION); - double a1 = g.getArea(); - double a2 = m.getArea(); - double total = a1 + a2; - double aOverlap = overlap.getArea(); - double w1 = a1 / total; - double w2 = a2 / total; - - double similarity = w1 * (aOverlap / a1) + w2 * (aOverlap / a2); - if (similarity > 0.2) { // magic constant, unsure what the best value is - return false; // is hole; keep=false - } - } catch (Exception e) { // catch occasional noded error - continue; - } - - } - return true; // is not hole; keep=true - }).collect(Collectors.toList()); - - return PGS_Conversion.flatten(nonHoles); + // sort faces so that output is structurally deterministic + return PGS_Optimisation.centroidSortFaces(PGS_Conversion.flatten(quads)); } /** @@ -764,9 +578,12 @@ public static PShape stochasticMerge(PShape mesh, int nClasses, long seed) { * @since 1.4.0 */ public static PShape smoothMesh(PShape mesh, int iterations, boolean preservePerimeter) { + // TODO smooth with enum for smoothing? PMesh m = new PMesh(mesh); for (int i = 0; i < iterations; i++) { m.smoothTaubin(0.25, -0.251, preservePerimeter); + // m.smoothHC(0.33, 0.33, 0.33, preservePerimeter); + // m.smoothCotanWeighted(preservePerimeter); } return m.getMesh(); } @@ -825,7 +642,7 @@ public static PShape smoothMesh(PShape mesh, double displacementCutoff, boolean * the original. * @param preservePerimeter whether to only simplify inner-boundaries and * leaving outer boundary edges unchanged. 
- * @return GROUP shape comprising the simplfied mesh faces + * @return GROUP shape comprising the simplified mesh faces * @since 1.4.0 */ public static PShape simplifyMesh(PShape mesh, double tolerance, boolean preservePerimeter) { @@ -856,7 +673,7 @@ public static PShape simplifyMesh(PShape mesh, double tolerance, boolean preserv * @param mesh The mesh containing faces to subdivide. * @param edgeSplitRatio The distance ratio [0...1] along each edge where the * faces are subdivided. A value of 0.5 is mid-edge - * division (recommended value for a simple subvision). + * division (recommended value for a simple subdivision). * @return A new GROUP PShape representing the subdivided mesh. * @since 1.4.0 */ @@ -981,40 +798,133 @@ static Pair, List> extractInnerEdgesAndVertices(PShape mesh return Pair.of(new ArrayList<>(inner), innerVerts); } + /** + * Convenience overload of {@link #fixBrokenFaces(PShape, double, boolean)} that + * performs endpoint-only snapping within {@code tolerance} and then polygonises + * the result ({@code polygonise = true}). + * + * @param coverage input coverage as a {@link PShape}; may include polygons and + * broken boundary lines + * @param tolerance maximum distance within which endpoints may be + * clustered/snapped + * @return a flattened {@link PShape} containing polygonal faces, and any + * remaining linework + * @see #fixBrokenFaces(PShape, double, boolean) + * @since 2.2 + */ + public static PShape fixBrokenFaces(PShape coverage, double tolerance) { + return fixBrokenFaces(coverage, tolerance, true); + } + + /** + * Repairs broken faces in near-coverage linework using endpoint-only snapping, + * then polygonises the result. + *

+ * Targets face-level defects in a line arrangement that is intended to form a + * valid coverage but doesn’t quite join exactly (e.g., near-misses, tiny + * endpoint gaps, unclosed rings). It performs endpoint-only snapping within + * {@code tolerance} and then polygonises the result. + *

+ *
    + *
+ *   • Performs endpoint-only snapping (not general vertex snapping):
+ *       • Only line endpoints move; interior vertices are not adjusted, and polygon vertices are never moved.
+ *       • Endpoints (and polygon vertices) within {@code tolerance} form transitive clusters.
+ *       • If a cluster contains any polygon vertex, endpoints snap to that vertex (polygons act as fixed anchors).
+ *       • If a cluster contains only endpoints, they snap mutually to the cluster mean, closing gaps where no valid nodes exist yet.
+ *       • Closed LineStrings are treated as polygons and ignored for snapping.
+ *   • Polygonisation:
+ *       • Builds faces from the snapped linework and returns a flattened shape containing faces, cut edges, and dangles.
+ *

+ * Outcome: This focuses on repairing faces from near-coverage linework. As a + * side effect it often yields a valid (or at least more nearly valid) coverage, but it + * is not a general cleaner for inter-face gaps/overlaps. + *

+ *

+ * For cleaning breaks between faces (gaps, overlaps, slivers, misaligned shared + * edges) in an existing coverage, use {@link #fixBreaks(PShape, double) + * fixBreaks()}. + *
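A hypothetical call, where 'linework' is a GROUP PShape of boundary lines that nearly (but not exactly) meet and the tolerance value is illustrative:

```java
PShape faces = PGS_Meshing.fixBrokenFaces(linework, 2);              // snap endpoints within 2 units, then polygonise
PShape snappedOnly = PGS_Meshing.fixBrokenFaces(linework, 2, false); // snapping only, returned as linework
```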

+ * + * @param coverage input coverage as a {@link PShape}; may include polygons and + * broken boundary lines + * @param tolerance maximum distance within which endpoints may be + * clustered/snapped + * @return a flattened {@link PShape} containing polygonal faces, and any + * remaining linework + * @see #fixBreaks(PShape, double, double) + * @since 2.2 + */ + @SuppressWarnings("unchecked") + public static PShape fixBrokenFaces(PShape coverage, double tolerance, boolean polygonise) { + var g = fromPShape(coverage); + EndpointSnapper snapper = new EndpointSnapper(tolerance); + var fixed = snapper.snapEndpoints(g, true); + + if (!polygonise) { + return toPShape(fixed); + } + + Polygonizer p = new Polygonizer(false); + p.add(fixed); + var polys = toPShape(p.getPolygons()); + var cuts = toPShape(p.getCutEdges()); + var dangles = toPShape(p.getDangles()); + + return PGS_Conversion.flatten(polys, cuts, dangles); + } + /** * Removes gaps and overlaps from meshes/polygon collections that are intended * to satisfy the following conditions: *
    - *
  • Vector-clean - edges between neighbouring polygons must either be + *
  • Vector-clean — edges between neighbouring polygons must either be * identical or intersect only at endpoints.
  • - *
  • Non-overlapping - No two polygons may overlap. Equivalently, polygons - * must be interior-disjoint.
  • + *
  • Non-overlapping — no two polygons may overlap (polygons are + * interior-disjoint).
  • *
*

- * It may not always be possible to perfectly clean the input. + * Note: This operates on breaks between faces (inter-polygon gaps, + * overlaps, slivers, and misaligned shared edges), not on “broken” faces / line + * arrangements with unclosed lines or endpoint gaps. For repairing broken faces + * via endpoint-only snapping, see {@link #fixBrokenFaces(PShape, double) + * fixBrokenFaces()}. + *

*

- * While this method is intended to be used to fix malformed coverages, it also - * can be used to snap collections of disparate polygons together. - * - * @param coverage a GROUP shape, consisting of the polygonal faces to - * clean - * @param distanceTolerance the distance below which segments and vertices are - * considered to match - * @param angleTolerance the maximum angle difference between matching - * segments, in degrees + * It may not always be possible to perfectly clean the input. While this method + * is intended for malformed coverages, it can also snap collections of + * disparate polygons together. + *
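A hypothetical call using the new two-argument signature ('coverage' is a GROUP PShape of polygonal faces; the gap width is illustrative):

```java
PShape cleaned = PGS_Meshing.fixBreaks(coverage, 0.5); // fill and merge gaps up to 0.5 units wide
shape(cleaned);
```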

+ * + * @param coverage a GROUP shape consisting of the polygonal faces to clean + * @param maxGapWidth the maximum width of the gaps that will be filled and + * merged * @return GROUP shape whose child polygons satisfy a (hopefully) valid coverage * @since 1.3.0 * @see #findBreaks(PShape) + * @see #fixBrokenFaces(PShape, double) */ - public static PShape fixBreaks(PShape coverage, double distanceTolerance, double angleTolerance) { - final List geometries = PGS_Conversion.getChildren(coverage).stream().map(PGS_Conversion::fromPShape).collect(Collectors.toList()); - final FeatureCollection features = FeatureDatasetFactory.createFromGeometry(geometries); + public static PShape fixBreaks(PShape coverage, double maxGapWidth) { + Geometry[] geomsIn = PGS_Conversion.getChildren(coverage).stream().map(f -> fromPShape(f)).filter(q -> q != null).toArray(Geometry[]::new); + + CoverageCleaner cleaner = new CoverageCleaner(geomsIn); + cleaner.setGapMaximumWidth(maxGapWidth); + cleaner.clean(); - final CoverageCleaner cc = new CoverageCleaner(features, new DummyTaskMonitor()); - cc.process(new Parameters(distanceTolerance, angleTolerance)); + var geomsOut = PGS.GEOM_FACTORY.createGeometryCollection(cleaner.getResult()); - final List cleanedGeometries = FeatureUtil.toGeometries(cc.getUpdatedFeatures().getFeatures()); - final PShape out = PGS_Conversion.toPShape(cleanedGeometries); + final PShape out = PGS_Conversion.toPShape(geomsOut); PGS_Conversion.setAllStrokeColor(out, Colors.PINK, 2); return out; } @@ -1047,6 +957,33 @@ public static PShape findContainingFace(PShape mesh, PVector position) { .orElse(null); } + /** + * Identifies disconnected groups of faces (islands) within a mesh by analysing + * face adjacency relationships. + *
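A brief, hypothetical usage ('mesh' is a GROUP PShape of faces):

```java
PShape largest = PGS_Meshing.findIslands(mesh).get(0); // islands are sorted by face count, descending
shape(largest);
```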

+ * The returned islands are sorted by the number of faces they contain in + * descending order, with the island containing the most faces appearing first. + * + * @param mesh a PShape of type GROUP representing the input mesh + * @return a list of PShape GROUP objects, each containing one connected island + * of faces, sorted from largest to smallest by face count. Returns an + * empty list if the mesh is empty or cannot be converted to a dual + * graph. + * @since 2.2 + */ + public static List findIslands(PShape mesh) { + var dual = PGS_Conversion.toDualGraph(mesh); + + if (dual == null || dual.vertexSet().isEmpty()) { + return List.of(); + } + + var inspector = new ConnectivityInspector<>(dual); + var components = inspector.connectedSets(); + + return components.stream().map(PGS_Conversion::flatten).sorted(Comparator.comparingInt(PShape::getChildCount).reversed()).toList(); + } + /** * Merges all faces in the given mesh that are smaller than a specified area * threshold into their larger neighbors, and repeats this process until no face @@ -1135,43 +1072,7 @@ private static PShape applyOriginalStyling(final PShape newMesh, final PShape ol return newMesh; } - /** - * Calculate the longest edge of a given triangle. - */ - private static IQuadEdge findLongestEdge(final SimpleTriangle t) { - if (t.getEdgeA().getLength() > t.getEdgeB().getLength()) { - if (t.getEdgeC().getLength() > t.getEdgeA().getLength()) { - return t.getEdgeC(); - } else { - return t.getEdgeA(); - } - } else { - if (t.getEdgeC().getLength() > t.getEdgeB().getLength()) { - return t.getEdgeC(); - } else { - return t.getEdgeB(); - } - } - } - - private static double[] midpoint(final IQuadEdge edge) { - final Vertex a = edge.getA(); - final Vertex b = edge.getB(); - return new double[] { (a.x + b.x) / 2d, (a.y + b.y) / 2d }; - } - - private static Vertex centroid(final SimpleTriangle t) { - final Vertex a = t.getVertexA(); - final Vertex b = t.getVertexB(); - final Vertex c = t.getVertexC(); - double x = a.x + b.x + c.x; - x /= 3; - double y = a.y + b.y + c.y; - y /= 3; - return new Vertex(x, y, 0); - } - - private static PShape toPShape(SimpleTriangle t) { + private static PShape triToPShape(SimpleTriangle t) { PVector vertexA = new PVector((float) t.getVertexA().x, (float) t.getVertexA().y); PVector vertexB = new PVector((float) t.getVertexB().x, (float) t.getVertexB().y); PVector vertexC = new PVector((float) t.getVertexC().x, (float) t.getVertexC().y); diff --git a/src/main/java/micycle/pgs/PGS_Morphology.java b/src/main/java/micycle/pgs/PGS_Morphology.java index 783e2664..74b53c33 100644 --- a/src/main/java/micycle/pgs/PGS_Morphology.java +++ b/src/main/java/micycle/pgs/PGS_Morphology.java @@ -2,9 +2,12 @@ import static micycle.pgs.PGS_Conversion.fromPShape; import static micycle.pgs.PGS_Conversion.toPShape; -import java.util.ArrayList; +import static micycle.pgs.PGS.GEOM_FACTORY; + +import java.util.Arrays; import java.util.List; import java.util.function.BiFunction; +import org.locationtech.jts.algorithm.construct.MaximumInscribedCircle; import org.locationtech.jts.densify.Densifier; import org.locationtech.jts.geom.Coordinate; import org.locationtech.jts.geom.CoordinateList; @@ -20,39 +23,53 @@ import org.locationtech.jts.linearref.LengthIndexedLine; import org.locationtech.jts.operation.buffer.BufferOp; import org.locationtech.jts.operation.buffer.BufferParameters; -import org.locationtech.jts.operation.buffer.VariableBuffer; import org.locationtech.jts.precision.GeometryPrecisionReducer; import 
org.locationtech.jts.shape.CubicBezierCurve; import org.locationtech.jts.simplify.DouglasPeuckerSimplifier; import org.locationtech.jts.simplify.TopologyPreservingSimplifier; import org.locationtech.jts.simplify.VWSimplifier; +import com.gihub.micycle1.malleo.Malleo; +import com.github.micycle1.geoblitz.FastVariableBuffer; + import micycle.pgs.PGS_Contour.OffsetStyle; import micycle.pgs.commons.ChaikinCut; +import micycle.pgs.commons.ContourRegularization; +import micycle.pgs.commons.ContourRegularization.Parameters; import micycle.pgs.commons.CornerRounding; import micycle.pgs.commons.CornerRounding.RoundingStyle; import micycle.pgs.commons.DiscreteCurveEvolution; import micycle.pgs.commons.DiscreteCurveEvolution.DCETerminationCallback; import micycle.pgs.commons.EllipticFourierDesc; +import micycle.pgs.commons.FastAtan2; import micycle.pgs.commons.GaussianLineSmoothing; +import micycle.pgs.commons.HausdorffInterpolator; import micycle.pgs.commons.LaneRiesenfeldSmoothing; -import micycle.pgs.commons.ShapeInterpolation; +import micycle.pgs.commons.NewtonThieleRingMorpher; +import micycle.pgs.commons.SchneiderBezierFitter; +import micycle.pgs.commons.VoronoiInterpolator; import micycle.uniformnoise.UniformNoise; +import net.jafama.FastMath; import processing.core.PConstants; import processing.core.PShape; import processing.core.PVector; import uk.osgb.algorithm.minkowski_sum.MinkowskiSum; /** - * Methods that affect the geometry or topology of shapes. - * - * @author Michael Carleton + * Morphological editing operations for {@link PShape} polygons. * + *

+ * This class hosts algorithms that reshape geometry, typically by + * offsetting, simplifying, smoothing, warping, or deforming outlines; often + * changing vertex count and sometimes changing topology (splitting/merging + * parts, creating/removing holes). + * + * @author Michael Carleton */ public final class PGS_Morphology { static { - MinkowskiSum.setGeometryFactory(PGS.GEOM_FACTORY); + MinkowskiSum.setGeometryFactory(GEOM_FACTORY); } private PGS_Morphology() { @@ -128,7 +145,7 @@ public static PShape buffer(PShape shape, double buffer, OffsetStyle bufferStyle * Buffers a shape with a varying buffer distance (interpolated between a start * distance and an end distance) along the shape's perimeter. * - * @param shape a single polygon or lineal shape + * @param shape a polygon, lineal shape, or GROUP containing such shapes * @param startDistance the starting buffer amount * @param endDistance the terminating buffer amount * @return a polygonal shape representing the variable buffer region (which may @@ -136,11 +153,10 @@ public static PShape buffer(PShape shape, double buffer, OffsetStyle bufferStyle * @since 1.3.0 */ public static PShape variableBuffer(PShape shape, double startDistance, double endDistance) { - Geometry g = fromPShape(shape); - if (!g.getGeometryType().equals(Geometry.TYPENAME_LINEARRING) && !g.getGeometryType().equals(Geometry.TYPENAME_LINESTRING)) { - g = ((Polygon) g).getExteriorRing(); // variable buffer applies to linestrings only - } - return toPShape(VariableBuffer.buffer(g, startDistance, endDistance)); + return PGS.applyToLinealGeometries(shape, line -> { + var buffer = (Polygon) FastVariableBuffer.buffer(line, startDistance, endDistance); + return buffer.getExteriorRing(); + }); } /** @@ -160,9 +176,10 @@ public static PShape variableBuffer(PShape shape, double startDistance, double e * } * * - * @param shape A single polygon or lineal shape + * @param shape A single polygon, lineal shape, or GROUP containing + * such shapes * @param bufferCallback A callback function that receives the vertex coordinate - * and a double representing tractional distance (0...1) + * and a double representing fractional distance (0...1) * of the vertex along the shape's boundary. The function * may use properties of the vertex, or its position, to * determine the buffer width at that point. 
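As a concrete illustration of the callback form, a sketch follows (assuming line is a lineal or polygonal PShape; the width formula is arbitrary):

// Buffer width oscillates along the boundary: 2 units at the endpoints,
// rising to roughly 10 units halfway along (fraction runs 0..1).
PShape ribbon = PGS_Morphology.variableBuffer(line, (vertex, fraction) -> {
	return 2 + 8 * Math.sin(fraction * Math.PI);
});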
@@ -172,27 +189,71 @@ public static PShape variableBuffer(PShape shape, double startDistance, double e * @since 2.0 */ public static PShape variableBuffer(PShape shape, BiFunction bufferCallback) { - final Geometry inputGeometry = fromPShape(shape); - if (!(inputGeometry instanceof Lineal || inputGeometry instanceof Polygon)) { - throw new IllegalArgumentException("The geometry must be linear or a non-multi polygonal shape."); - } - var coords = inputGeometry.getCoordinates(); - double[] bufferDistances = new double[coords.length]; - double totalLength = inputGeometry.getLength(); - double running_length = 0; - Coordinate previousCoordinate = coords[0]; - - for (int i = 1; i < coords.length; i++) { - running_length += previousCoordinate.distance(coords[i]); - double fractionalDistance = running_length / totalLength; // 0...1 - bufferDistances[i] = bufferCallback.apply(coords[i], fractionalDistance); - previousCoordinate = coords[i]; - } + return PGS.applyToLinealGeometries(shape, line -> { + final Coordinate[] coords = line.getCoordinates(); + if (coords.length == 0) { + // return an "empty buffer" geometry consistent with VariableBuffer expectations + return null; + } + + final double totalLength = line.getLength(); + final double[] bufferDistances = new double[coords.length]; + + // Guard against degenerate/zero-length lines (all points same). + if (totalLength == 0) { + final double d0 = bufferCallback.apply(coords[0], 0.0); + for (int i = 0; i < bufferDistances.length; i++) { + bufferDistances[i] = d0; + } + } else { + bufferDistances[0] = bufferCallback.apply(coords[0], 0.0); + + double runningLength = 0; + Coordinate prev = coords[0]; - bufferDistances[0] = bufferCallback.apply(coords[0], 0.0); + for (int i = 1; i < coords.length; i++) { + runningLength += prev.distance(coords[i]); + final double fractionalDistance = runningLength / totalLength; // 0..1 + bufferDistances[i] = bufferCallback.apply(coords[i], fractionalDistance); + prev = coords[i]; + } + } + + final var vb = new FastVariableBuffer(line, bufferDistances); + var buffer = (Polygon) vb.getResult(); + return buffer.getExteriorRing(); + }); + } + + /** + * Erodes (a negative buffer) a shape by a normalised amount (scaled to shape + * size). + *

+ * {@code amount} is dimensionless: {@code amount == 1} corresponds to a full + * erosion (approximately to the maximum inscribed radius), often collapsing + * polygons to empty. {@code shape} may be a {@code GROUP}; each polygonal + * element is processed independently. The sign of {@code amount} is ignored + * (always erodes). + * + * @param shape the source shape (polygonal or {@code GROUP}) + * @param amount normalised erosion amount (dimensionless) + * @return a polygonal {@code PShape} of the eroded geometry (may be empty) + * @since 2.2 + */ + public static PShape normalisedErosion(PShape shape, double amount) { + double amt = -Math.abs(amount); // force erosion + var polys = PGS.extractPolygons(fromPShape(shape)); + var buffered = polys.parallelStream().map(p -> { + var mic = new MaximumInscribedCircle(p, 0.5); + var r = mic.getRadiusLine().getLength() * (1 + 1e-3); + var buffer = amt * r; + var bufParams = createBufferParams(buffer, 0.5, OffsetStyle.ROUND, CapStyle.ROUND); + BufferOp b = new BufferOp(p, bufParams); + var out = b.getResultGeometry(buffer); + return out; + }).toList(); - VariableBuffer variableBuffer = new VariableBuffer(inputGeometry, bufferDistances); - return toPShape(variableBuffer.getResult()); + return toPShape(buffered); } /** @@ -204,7 +265,7 @@ public static PShape variableBuffer(PShape shape, BiFunction { - var coords = DiscreteCurveEvolution.process(ring, terminationCallback); - return PGS.GEOM_FACTORY.createLineString(coords); + return DiscreteCurveEvolution.process(ring, terminationCallback); }); } @@ -340,7 +400,9 @@ public static PShape simplifyDCE(PShape shape, DCETerminationCallback terminatio * * @param shape the input shape * @param relevanceThreshold the relevance threshold; only vertices with - * relevance >= the threshold will be kept + * relevance >= the threshold will be kept. 20 is a + * good starting value for generally imperceptible + * simplification. * @return the simplified PShape * @since 2.1 */ @@ -429,6 +491,13 @@ public static PShape minkDifference(PShape source, PShape subtract) { * Smoothes a shape. The smoothing algorithm inserts new vertices which are * positioned using Bezier splines. The output shape tends to be a little larger * than the input. + *

+ * Note: this method effectively constructs a Bezier curve through the existing + * vertices. As a result, if the input geometry already has very dense / closely + * spaced vertices, the smoothing may have little or no perceptual effect. This + * differs from other smoothing approaches (e.g. Gaussian) that operate at a + * spatial scale and are therefore largely invariant to vertex density. + *
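For example, on a densely sampled outline the two approaches behave quite differently (a sketch; denseShape is an assumed input and the parameter values are arbitrary):

// Bezier-based smoothing: little visible change when vertices are already dense
PShape a = PGS_Morphology.smooth(denseShape, 1);
// Gaussian smoothing operates at a spatial scale (sigma, in coordinate units),
// so it visibly rounds the outline regardless of vertex density
PShape b = PGS_Morphology.smoothGaussian(denseShape, 10);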

* * @param shape shape to smooth * @param alpha curvedness parameter (0 is linear, 1 is round, >1 is @@ -441,6 +510,47 @@ public static PShape smooth(PShape shape, double alpha) { return toPShape(curve); } + /** + * Smoothes a shape by fitting one or more cubic Bezier curve segments + * to each lineal component (polylines and polygon rings), then + * resampling the fitted Beziers to produce a new vertex sequence. + *

+ * This method uses Philip J. Schneider’s curve fitting algorithm. Unlike + * {@link #smooth(PShape, double) smooth()}, which constructs a Bezier curve + * through the existing vertices, this method approximates the input + * within a user-specified tolerance and can substantially simplify noisy or + * densely-vertexed input while producing a visually smoother result. + *

+ *

+ * The {@code maxDeviation} parameter controls how closely the fitted Bezier(s) + * must follow the original polyline/ring: smaller values preserve the original + * shape more strictly (often producing more Bezier segments and/or more output + * vertices), while larger values allow a smoother, more generalised result. + *

+ *

+ * Implementation note: the fitted Bezier segments are sampled at a fixed + * spacing (currently 2 units in the coordinate system of the input geometry) to + * create the returned JTS geometry, which is then converted back to a + * {@link PShape}. + *
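A brief sketch of the tolerance trade-off (noisyShape is an assumed input; the values are arbitrary):

// Tight tolerance: the fit closely follows the noisy outline
PShape precise = PGS_Morphology.smoothBezierFit(noisyShape, 1);
// Loose tolerance: a much smoother, more generalised outline
PShape relaxed = PGS_Morphology.smoothBezierFit(noisyShape, 20);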

+ * + * @param shape shape whose lineal geometry (LineStrings and polygon + * rings) will be Bezier-fit and resampled + * @param maxDeviation maximum allowed deviation (error tolerance) between the + * input vertices and the fitted Bezier curve(s); must be + * {@code > 0} + * @return a smoothed copy of {@code shape} produced by piecewise cubic Bezier + * fitting and resampling + * + * @since 2.2 + * @see SchneiderBezierFitter + */ + public static PShape smoothBezierFit(PShape shape, double maxDeviation) { + return PGS.applyToLinealGeometries(shape, ring -> { + return SchneiderBezierFitter.fitAndSample(ring, maxDeviation, PGS_Conversion.BEZIER_SAMPLE_DISTANCE); + }); + } + /** * Smoothes a shape by applying a gaussian filter to vertex coordinates. At * larger values, this morphs the input shape much more visually than @@ -450,12 +560,29 @@ public static PShape smooth(PShape shape, double alpha) { * @param sigma The standard deviation of the gaussian kernel. Larger values * provide more smoothing. * @return smoothed copy of the shape + * @see #smoothGaussianNormalised(PShape, double) * @see #smooth(PShape, double) */ public static PShape smoothGaussian(PShape shape, double sigma) { return PGS.applyToLinealGeometries(shape, ring -> GaussianLineSmoothing.get(ring, sigma)); } + /** + * Applies Gaussian smoothing to each lineal geometry in a {@link PShape} using + * a normalised amount in [0..1], intended to be scale-invariant across children + * of different sizes. {@code amount=0} leaves geometry unchanged; + * {@code amount=1} collapses (per geometry) using the extreme-sigma fallback. + * + * @param shape input shape + * @param amount normalised smoothing amount in [0..1] + * @return new shape with smoothed lineal components + * @see #smoothGaussian(PShape, double) + * @since 2.2 + */ + public static PShape smoothGaussianNormalised(PShape shape, double amount) { + return PGS.applyToLinealGeometries(shape, ring -> GaussianLineSmoothing.getNormalised(ring, amount)); + } + /** * Calculates the Elliptic Fourier Descriptors (EFD) of a specified shape, * yielding a simplified/smoothed shape representation based on the specified @@ -489,7 +616,7 @@ public static PShape smoothEllipticFourier(PShape shape, int descriptors) { if (ring.isClosed()) { final EllipticFourierDesc efd = new EllipticFourierDesc((LinearRing) ring, descriptorz); Coordinate[] coords = efd.createPolygon(); - return PGS.GEOM_FACTORY.createLinearRing(coords); + return GEOM_FACTORY.createLinearRing(coords); } else { return null; // open linestrings not supported } @@ -608,19 +735,30 @@ public static PShape chaikinCut(PShape shape, double ratio, int iterations) { } /** - * Distorts a polygonal shape by radially displacing its vertices along the line - * connecting each vertex with the shape's centroid, creating a warping or + * Radially warps a polygon by moving each boundary vertex inward/outward along + * the ray from the polygon centroid to that vertex, creating a warping or * perturbing effect. *

- * The shape's input vertices can optionally be densified prior to the warping - * operation. + * Optionally, the input boundary can be densified before warping by inserting + * additional vertices at a spacing of ~1 unit. This causes long edges to warp + * smoothly along their full length rather than only at the original corner + * vertices. * - * @param shape A polygonal PShape object to be distorted. - * @param magnitude The degree of the displacement, which determines the - * maximum Euclidean distance a vertex will be moved in - * relation to the shape's centroid. - * @param warpOffset An offset angle, which establishes the starting angle for - * the displacement process. + * @param shape A polygonal {@link PShape} (or GROUP of polygons) to be + * distorted. The warp is applied to each polygon ring + * independently. + * @param magnitude Controls the strength of the warp. Larger values produce + * larger radial displacements from the original boundary + * (i.e., larger inward/outward movement). A value of + * {@code 0} produces an unchanged shape. + * @param warpOffset An angular phase offset (in radians) added to each vertex's + * polar angle before sampling the noise field. Changing + * {@code warpOffset} does not change the warp magnitude; it + * rotates the noise pattern around the centroid (i.e., shifts + * where bulges/indentations occur along the boundary). This + * is useful for animation by incrementing {@code warpOffset} + * over time. The warp has a period of 2π. A typical/useful + * domain is {@code [0, 2*Math.PI)}. * @param densify A boolean parameter determining whether the shape should be * densified (by inserting additional vertices at a distance * of 1) before warping. If true, shapes with long edges will @@ -630,39 +768,69 @@ public static PShape chaikinCut(PShape shape, double ratio, int iterations) { * specified parameters. */ public static PShape radialWarp(PShape shape, double magnitude, double warpOffset, boolean densify) { - Geometry g = fromPShape(shape); - if (!g.getGeometryType().equals(Geometry.TYPENAME_POLYGON)) { - System.err.println("radialWarp() expects (single) polygon input. 
The geometry resolved to a " + g.getGeometryType()); - return shape; - } + final UniformNoise noise = new UniformNoise(1337); + + return PGS.applyToLinealGeometries(shape, line -> { - final Point point = g.getCentroid(); - final PVector c = new PVector((float) point.getX(), (float) point.getY()); + // radialWarp is defined for polygon rings; if we get an open line, just return + // it unchanged + if (!line.isClosed()) { + return line; + } + final Point centroid = line.getCentroid(); + final PVector c = new PVector((float) centroid.getX(), (float) centroid.getY()); + + Geometry working = line; + if (densify) { + final Densifier d = new Densifier(line); + d.setDistanceTolerance(1); + d.setValidate(false); + working = d.getResultGeometry(); + } - final List coords; + final Coordinate[] coords = working.getCoordinates(); + if (coords.length == 0) { + return line; + } - if (densify) { - final Densifier d = new Densifier(fromPShape(shape)); - d.setDistanceTolerance(1); - d.setValidate(false); - coords = PGS_Conversion.toPVector(toPShape(d.getResultGeometry())); - } else { - coords = PGS_Conversion.toPVector(shape); - } + // Warp all unique vertices; then explicitly re-close + final int n = coords.length; + for (int i = 0; i < n - 1; i++) { // ignore last coordinate (closure); we re-close after warping + final double x = coords[i].x; + final double y = coords[i].y; - final UniformNoise noise = new UniformNoise(1337); - coords.forEach(coord -> { - PVector heading = PVector.sub(coord, c); // vector from center to each vertex - final double angle = heading.heading() + warpOffset; - float perturbation = noise.uniformNoise(Math.cos(angle), Math.sin(angle)); - perturbation -= 0.5f; // [0...1] -> [-0.5...0.5] - perturbation *= magnitude * 2; - coord.add(heading.normalize().mult(perturbation)); // add perturbation to vertex + double dx = x - c.x; + double dy = y - c.y; + + final double len = Math.sqrt(dx * dx + dy * dy); + if (len == 0) { + continue; // vertex at centroid + } + + final double angle = FastAtan2.atan2(dy, dx) + warpOffset; + + float perturbation = noise.uniformNoise(FastMath.cos(angle), FastMath.sin(angle)); + perturbation -= 0.5f; // [0..1] -> [-0.5..0.5] + perturbation *= (float) (magnitude * 2.0); + + // normalize heading and displace + dx /= len; + dy /= len; + + coords[i].x = x + dx * perturbation; + coords[i].y = y + dy * perturbation; + } + + // ensure exact closure + coords[n - 1].x = coords[0].x; + coords[n - 1].y = coords[0].y; + + // preserve ring-ness if possible + if (line instanceof LinearRing) { + return GEOM_FACTORY.createLinearRing(coords); + } + return GEOM_FACTORY.createLineString(coords); }); - if (!coords.get(0).equals(coords.get(coords.size() - 1))) { - coords.add(coords.get(0)); - } - return PGS_Conversion.fromPVector(coords); } /** @@ -697,7 +865,7 @@ public static PShape sineWarp(PShape shape, double magnitude, double frequency, } coords.closeRing(); - Geometry out = GeometryFixer.fix(PGS.GEOM_FACTORY.createPolygon(coords.toCoordinateArray())); + Geometry out = GeometryFixer.fix(GEOM_FACTORY.createPolygon(coords.toCoordinateArray())); return PGS_Conversion.toPShape(out); } @@ -772,29 +940,42 @@ public static PShape fieldWarp(PShape shape, double magnitude, double noiseScale copy.addChild(copy); } - /* - * TODO preserveEnds arg, that scales the noise offset towards 0 for vertices - * near the end (so we don't large jump between end point and warped next - * vertex). - */ for (PShape child : copy.getChildren()) { - int offset = 0; // child.isClosed() ? 
0 : 1 - for (int i = offset; i < child.getVertexCount() - offset; i++) { + int vCount = child.getVertexCount(); + if (vCount == 0) + continue; + + // Determine if the shape is closed. + boolean isClosed = child.isClosed() || (vCount > 1 && child.getVertex(0).equals(child.getVertex(vCount - 1))); + + // If closed, we iterate up to N-1 and handle the last vertex separately to + // ensure closure. + int limit = isClosed ? vCount - 1 : vCount; + + for (int i = 0; i < limit; i++) { final PVector coord = child.getVertex(i); float dx = noise.uniformNoise(coord.x / scale, coord.y / scale + time) - 0.5f; float dy = noise.uniformNoise(coord.x / scale + (101 + time), coord.y / scale + (101 + time)) - 0.5f; child.setVertex(i, coord.x + (dx * (float) magnitude * 2), coord.y + (dy * (float) magnitude * 2)); } + + // If the shape was closed, sync the last vertex with the newly warped first + // vertex. + if (isClosed && vCount > 1) { + PVector firstV = child.getVertex(0); + child.setVertex(vCount - 1, firstV.x, firstV.y); + } } if (pointsShape) { return copy; } else { if (copy.getChildCount() == 1) { + // Fix self-intersections or invalid geometries caused by warping return toPShape(GeometryFixer.fix(fromPShape(copy.getChild(0)))); } else { - // don't apply geometryFixer to GROUP shape, since fixing a multigeometry - // appears to merge shapes. TODO apply .fix() to shapes individually + // Return group as-is (fixing individual children would be safer but requires a + // loop) return copy; } } @@ -816,32 +997,51 @@ public static PShape fieldWarp(PShape shape, double magnitude, double noiseScale * @since 2.0 */ public static PShape pinchWarp(PShape shape, PVector pinchPoint, double weight) { - List vertices = new ArrayList<>(shape.getVertexCount()); - for (int i = 0; i < shape.getVertexCount(); i++) { - PVector vertex = shape.getVertex(i).copy(); - float distance = PVector.dist(vertex, pinchPoint); - float w = (float) (weight / (distance + 1)); - PVector direction = PVector.sub(pinchPoint, vertex); - direction.mult(w); - vertex.add(direction); - vertices.add(vertex); - } - if (shape.isClosed()) { - vertices.add(vertices.get(0)); - } - return PGS_Conversion.fromPVector(vertices); + return PGS.applyToLinealGeometries(shape, line -> { + final var gf = line.getFactory(); + final var coords = line.getCoordinates(); + + if (coords.length == 0) { + return line; + } + + final boolean closed = line.isClosed(); + + for (int i = 0; i < coords.length; i++) { + // if closed, we'll re-close explicitly after warping to avoid drift + if (closed && i == coords.length - 1) { + break; + } + + final double x = coords[i].x; + final double y = coords[i].y; + + final double dx = pinchPoint.x - x; + final double dy = pinchPoint.y - y; + + final double distance = Math.sqrt(dx * dx + dy * dy); + final double w = weight / (distance + 1.0); + + coords[i].x = x + dx * w; + coords[i].y = y + dy * w; + } + + if (closed) { + coords[coords.length - 1].x = coords[0].x; + coords[coords.length - 1].y = coords[0].y; + } + + return gf.createLineString(coords); + }); } /** * Generates an intermediate shape between two shapes by interpolating between - * them. This process has many names: shape morphing / blending / averaging / - * tweening / interpolation. - *

- * The underlying technique rotates one of the shapes to minimise the total - * distance between each shape's vertices, then performs linear interpolation - * between vertices. This performs well in practice but the outcome worsens as - * shapes become more concave; more sophisticated techniques would employ some - * level of rigidity preservation. + * their exterior rings. This process has many names: shape morphing / blending + * / averaging / tweening / interpolation. + *

+ * Note the interpolated shape may self-intersect (this implementation is not + * "rigid"). * * @param from a single polygon; the shape we want to morph from * @param to a single polygon; the shape we want to morph @@ -850,53 +1050,215 @@ public static PShape pinchWarp(PShape shape, PVector pinchPoint, double weight) * @return a polygonal PShape * @since 1.2.0 * @see #interpolate(PShape, PShape, int) + * @implNote Uses {@link NewtonThieleRingMorpher} for higher-quality + * interpolation. */ public static PShape interpolate(PShape from, PShape to, double interpolationFactor) { - final Geometry fromGeom = fromPShape(from); - final Geometry toGeom = fromPShape(to); - if (toGeom.getGeometryType().equals(Geometry.TYPENAME_POLYGON) && fromGeom.getGeometryType().equals(Geometry.TYPENAME_POLYGON)) { - final ShapeInterpolation tween = new ShapeInterpolation(fromGeom, toGeom); - return toPShape(PGS.GEOM_FACTORY.createPolygon(tween.tween(interpolationFactor))); - } else { - System.err.println("interpolate() accepts holeless single polygons only (for now)."); - return from; - } + return interpolate(List.of(from, to), interpolationFactor); + } + + /** + * Generates an intermediate shape from a sequence of input shapes by + * interpolating (morphing) between their exterior rings. + *

+ * This is a generalisation of {@link #interpolate(PShape, PShape, double)} to + * more than two shapes. The interpolation follows the order of {@code shapes}. + *
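For illustration, a minimal sketch (shapeA, shapeB and shapeC are assumed single polygons):

List<PShape> keys = List.of(shapeA, shapeB, shapeC);
// One intermediate shape, halfway through the sequence
PShape tween = PGS_Morphology.interpolate(keys, 0.5);
// Or pre-compute 30 evenly spaced frames (see the frames overload below)
PShape frames = PGS_Morphology.interpolate(keys, 30);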

+ * Note the interpolated shape may self-intersect (this implementation is not + * "rigid"). + * + * @param shapes a list of single-polygon {@link PShape}s; only the + * exterior ring is used. + * @param interpolationFactor interpolation parameter in the range + * {@code [0..1]} + * @return a polygonal {@link PShape} representing the interpolated shape + * @since 2.2.0 + * @see #interpolate(PShape, PShape, double) + * @implNote Uses {@link NewtonThieleRingMorpher} for higher-quality + * interpolation. + */ + public static PShape interpolate(List shapes, double interpolationFactor) { + var rings = shapes.stream().map(s -> ((Polygon) fromPShape(s)).getExteriorRing()).toArray(LinearRing[]::new); + NewtonThieleRingMorpher m = new NewtonThieleRingMorpher(rings); + var tween = m.interpolate(interpolationFactor); + return toPShape(tween); } /** - * Generates intermediate shapes (frames) between two shapes by interpolating - * between them. This process has many names: shape morphing / blending / + * Generates intermediate shapes (frames) by interpolating (morphing) through a + * sequence of shapes. This process has many names: shape morphing / blending / * averaging / tweening / interpolation. *

- * This method is faster than calling - * {@link #interpolate(PShape, PShape, double) interpolate()} repeatedly for - * different interpolation factors. - * - * @param from a single polygon; the shape we want to morph from - * @param to a single polygon; the shape we want to morph from - * into - * @param frames the number of frames (including first and last) to generate. >= - * 2 - * @return a GROUP PShape, where each child shape is a frame from the - * interpolation - * @since 1.3.0 + * The returned frames include both endpoints: the first frame corresponds to + * {@code t = 0} (the first shape in {@code shapes}) and the last frame + * corresponds to {@code t = 1} (the last shape in {@code shapes}). Intermediate + * frames are evenly spaced in {@code [0..1]} using {@code t = i/(frames-1)}. + *

+ * This method is faster than calling {@link #interpolate(List, double)} (or + * {@link #interpolate(PShape, PShape, double)}) repeatedly for different + * interpolation factors. + * + * @param shapes a list of single-polygon {@link PShape}s, in the order they + * should be morphed through; only the exterior ring is used. + * @param frames the number of frames (including first and last) to generate; + * must be {@code >= 2} + * @return a GROUP {@link PShape} whose children are the generated frames + * @since 2.2.0 + * @see #interpolate(List, double) * @see #interpolate(PShape, PShape, double) */ - public static PShape interpolate(PShape from, PShape to, int frames) { - final Geometry fromGeom = fromPShape(from); - final Geometry toGeom = fromPShape(to); - if (toGeom.getGeometryType().equals(Geometry.TYPENAME_POLYGON) && fromGeom.getGeometryType().equals(Geometry.TYPENAME_POLYGON)) { - final ShapeInterpolation tween = new ShapeInterpolation(fromGeom, toGeom); - final float fraction = 1f / (frames - 1); - PShape out = new PShape(); - for (int i = 0; i < frames; i++) { - out.addChild(toPShape(PGS.GEOM_FACTORY.createPolygon(tween.tween(fraction * i)))); - } - return out; - } else { - System.err.println("interpolate() accepts holeless single polygons only (for now)."); - return from; + public static PShape interpolate(List shapes, int frames) { + var rings = shapes.stream().map(s -> ((Polygon) fromPShape(s)).getExteriorRing()).toArray(LinearRing[]::new); + NewtonThieleRingMorpher m = new NewtonThieleRingMorpher(rings); + + final double fraction = 1d / (frames - 1); + PShape out = new PShape(); + for (int i = 0; i < frames; i++) { + out.addChild(toPShape(GEOM_FACTORY.createPolygon(m.interpolate(fraction * i)))); } + + return out; + } + + /** + * Interpolates ("morphs") between two shapes using a Hausdorff-distance based + * dilation approach. + *

+ * The intermediate shape is computed by buffering each input by a complementary + * amount (based on the estimated Hausdorff distance between the shapes) and + * intersecting the two buffers. This provides a correspondence-free morph that + * works even when the inputs have different vertex counts, components, or + * holes. + * + * @param from the starting shape (α = 0) + * @param to the ending shape (α = 1) + * @param morphFactor the interpolation parameter α (in {@code [0,1]}) + * @return a new {@code PShape} representing the Hausdorff morph between + * {@code from} and {@code to} + * @since 2.2 + */ + public static PShape dilationMorph(PShape from, PShape to, double morphFactor) { + var gFrom = fromPShape(from); + var gTo = fromPShape(to); + var i = HausdorffInterpolator.interpolateUsingEstimatedHausdorff(gFrom, gTo, morphFactor, 1, 15); + return toPShape(i); + } + + /** + * Computes the Voronoi-based Hausdorff morph between two shapes. + *

+ * Convenience overload for + * {@link #voronoiMorph(PShape, PShape, double, double, boolean)} using default + * parameters. + *

+ * Uses {@code maxSegmentLength = 0} (no boundary densification) and + * {@code unionResult = true} (returns a cleaned area geometry). + * + * @param from the starting shape (α = 0) + * @param to the ending shape (α = 1) + * @param morphFactor the morph parameter α, in {@code [0,1]} + * @return a new {@code PShape} representing the Voronoi Hausdorff morph between + * {@code from} and {@code to} + * @see #voronoiMorph(PShape, PShape, double, double, boolean) + * @since 2.2 + */ + public static PShape voronoiMorph(PShape from, PShape to, double morphFactor) { + return voronoiMorph(from, to, morphFactor, 0, true); + } + + /** + * Interpolates ("morphs") between two shapes using a Voronoi partition + * approach. + *

+ * The non-overlapping parts of each input are partitioned by Voronoi cells + * induced by sampled boundary sites of the other shape; each partition piece is + * then moved toward its closest site: + *

+ * <ul>
+ * <li>closest vertex: uniform scaling toward that vertex,</li>
+ * <li>closest edge: scaling perpendicular to the edge’s supporting
+ * line.</li>
+ * </ul>
+ * The result is the union of transformed pieces from {@code from} using + * fraction {@code α} and transformed pieces from {@code to} using fraction + * {@code 1-α}, plus their overlap. + *
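For example, inside a Processing draw() loop (a sketch; from and to are assumed polygonal PShapes):

// Oscillate the morph factor within [0, 1] over time
float t = 0.5f * (1 + sin(frameCount * 0.02f));
shape(PGS_Morphology.voronoiMorph(from, to, t));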

+ * This method supports polygons with holes and groups with disconnected + * components, and does not require any explicit correspondence between the + * inputs. + * + * @param from the starting shape (α = 0) + * @param to the ending shape (α = 1) + * @param morphFactor the morph parameter α, in {@code [0,1]} + * @param maxSegmentLength maximum segment length used to densify boundaries + * when sampling Voronoi sites; {@code <= 0} disables + * densification + * @param unionResult if {@code true}, unions the result into a clean area + * geometry (slower); if {@code false}, returns a + * combined multi/collection geometry (faster) that may + * retain overlaps/seams + * @return a new {@code PShape} representing the Voronoi-partition morph between + * {@code from} and {@code to} + * @see #voronoiMorph(PShape, PShape, double) + * @since 2.2 + */ + public static PShape voronoiMorph(PShape from, PShape to, double morphFactor, double maxSegmentLength, boolean unionResult) { + var gFrom = fromPShape(from); + var gTo = fromPShape(to); + var pvp = VoronoiInterpolator.prepareVoronoiPartition(gFrom, gTo, maxSegmentLength, 0); + var g = VoronoiInterpolator.interpolateVoronoi(pvp, morphFactor, unionResult); + return toPShape(g); + } + + /** + * As-rigid-as-possible (ARAP) 2D deformation of a polygon {@link PShape} using + * point handles. + *

+ * <p>
+ * <b>Handle semantics</b>
+ * <ul>
+ * <li>{@code handles} are points in the rest (original) shape's
+ * coordinate space.</li>
+ * <li>{@code handleTargets} are the desired positions for those same handles in
+ * the deformed shape.</li>
+ * <li>Both lists must have the same size and matching order (i.e., index
+ * {@code i} in {@code handles} maps to index {@code i} in
+ * {@code handleTargets}).</li>
+ * <li>ARAP typically requires at least 2 handles for a stable solve.</li>
+ * </ul>
+ *
+ * <p>
+ * <b>Performance notes</b>
+ * <p>
+ * This method rebuilds and refines a triangulation on every call. For + * interactive dragging (re-solving every frame), prefer using {@link Malleo} + * directly: build the triangulation and call + * {@link Malleo#prepareHandles(List)} once, then repeatedly call + * {@link Malleo#solve(Malleo.CompiledHandles, List)} with updated targets. + * + *
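For a one-shot deformation the convenience method is sufficient, as in the sketch below (restShape is an assumed polygon; coordinates are arbitrary and the handle lists are assumed to hold PVectors in rest-space):

List<PVector> handles = List.of(new PVector(100, 100), new PVector(300, 100));
List<PVector> targets = List.of(new PVector(100, 100), new PVector(320, 160));
PShape deformed = PGS_Morphology.arapDeform(restShape, handles, targets);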

+ * <p>
+ * <b>Output</b>
+ * <p>
+ * Returns the deformed polygon boundary. The result may self-intersect + * depending on handle motion and mesh quality. + * + * @param shape the rest shape to deform (expected to be a single + * polygon {@code PShape}) + * @param handles handle locations in rest-space + * @param handleTargets target locations for each handle, in the same order as + * {@code handles} + * @return a new {@code PShape} representing the deformed shape + * @since 2.2 + */ + public static PShape arapDeform(PShape shape, List handles, List handleTargets) { + var t = PGS_Triangulation.delaunayTriangulationMesh(shape); + PGS_Triangulation.refine(t, 15, 50); // refine + var g = PGS_Triangulation.toGeometry(t); + + Malleo m = new Malleo(g); + var mHandles = Arrays.asList(PGS.toCoords(handles)); + var mTargets = Arrays.asList(PGS.toCoords(handleTargets)); + + var compiledHandles = m.prepareHandles(mHandles); + + var deformed = m.solve(compiledHandles, mTargets); + + return toPShape(deformed); } /** @@ -912,7 +1274,63 @@ public static PShape interpolate(PShape from, PShape to, int frames) { * @since 1.3.0 */ public static PShape reducePrecision(PShape shape, double precision) { - return toPShape(GeometryPrecisionReducer.reduce(fromPShape(shape), new PrecisionModel(-Math.max(Math.abs(precision), 1e-10)))); + var pm = new PrecisionModel(-Math.max(Math.abs(precision), 1e-10)); + if (shape.getFamily() == PShape.GROUP) { + // pointwise preserves polygon faces (doesn't merge) + return toPShape(GeometryPrecisionReducer.reducePointwise(fromPShape(shape), pm)); + } else { + return toPShape(GeometryPrecisionReducer.reduce(fromPShape(shape), pm)); + } + } + + /** + * Regularises (straightens) the contour of a lineal {@link PShape} by snapping + * edges toward a small set of principal directions and simplifying the result. + * The prinicipal direction is derived from the shape's longest edge. + * + * @param shape a lineal {@code PShape} to regularise (or a group containing + * lineal children) + * @param maxOffset maximum allowed offset. Used to constrain how far the + * regularised contour may deviate from the input; must be + * >= 0 + * @return a new {@code PShape} whose linework has been regularised + * @see #regularise(PShape, double, double) + * @since 2.2 + */ + public static PShape regularise(PShape shape, double maxOffset) { + var params = Parameters.builder().maximumOffset(maxOffset); + return PGS.applyToLinealGeometries(shape, l -> { + return ContourRegularization.regularize(l, params.build()); + }); + } + + /** + * Regularises (straightens) the contour of a lineal {@link PShape} by snapping + * edges toward principal directions and simplifying the result. + *

+ * This overload lets you provide an explicit principal axis + * orientation (in degrees). Edges are snapped to be parallel to that axis + * or to its orthogonal (axis + 90°), subject to the {@code maxOffset} + * constraint. + * + * @param shape a lineal {@code PShape} to regularize (or a group + * containing lineal children) + * @param maxOffset maximum allowed offset used to constrain how far the + * regularised contour may deviate from the input; must + * be >= 0 + * @param axisOrientation principal axis direction, in degrees, expected in the + * range {@code [0,180)} (values outside this range are + * normalised) + * @return a new {@code PShape} whose linework has been regularised + * @see #regularise(PShape, double) + * @since 2.2 + */ + public static PShape regularise(PShape shape, double maxOffset, double axisOrientation) { + var d = new ContourRegularization.UserDefinedDirections(5, axisOrientation); + var params = Parameters.builder().maximumOffset(maxOffset).directions(d); + return PGS.applyToLinealGeometries(shape, l -> { + return ContourRegularization.regularize(l, params.build()); + }); } /** diff --git a/src/main/java/micycle/pgs/PGS_Optimisation.java b/src/main/java/micycle/pgs/PGS_Optimisation.java index d20d4adf..e4e2b0b8 100644 --- a/src/main/java/micycle/pgs/PGS_Optimisation.java +++ b/src/main/java/micycle/pgs/PGS_Optimisation.java @@ -16,6 +16,7 @@ import org.apache.commons.lang3.tuple.Triple; import org.locationtech.jts.algorithm.MinimumAreaRectangle; import org.locationtech.jts.algorithm.MinimumBoundingCircle; +import org.locationtech.jts.algorithm.MinimumBoundingTriangle; import org.locationtech.jts.algorithm.MinimumDiameter; import org.locationtech.jts.algorithm.construct.LargestEmptyCircle; import org.locationtech.jts.algorithm.construct.MaximumInscribedCircle; @@ -27,6 +28,7 @@ import org.locationtech.jts.geom.Location; import org.locationtech.jts.geom.Point; import org.locationtech.jts.geom.Polygon; +import org.locationtech.jts.geom.Polygonal; import org.locationtech.jts.operation.distance.DistanceOp; import org.locationtech.jts.simplify.DouglasPeuckerSimplifier; import org.locationtech.jts.util.GeometricShapeFactory; @@ -46,10 +48,10 @@ import micycle.pgs.commons.MaximumInscribedRectangle; import micycle.pgs.commons.MaximumInscribedTriangle; import micycle.pgs.commons.MinimumBoundingEllipse; -import micycle.pgs.commons.MinimumBoundingTriangle; import micycle.pgs.commons.Nullable; import micycle.pgs.commons.SpiralIterator; import micycle.pgs.commons.VisibilityPolygon; +import processing.core.PConstants; import processing.core.PShape; import processing.core.PVector; import whitegreen.dalsoo.DalsooPack; @@ -234,7 +236,8 @@ public static PShape maximumInscribedTriangle(PShape shape) { /** * Finds the rectangle with a maximum area whose sides are parallel to the - * x-axis and y-axis ("axis-aligned"), contained/insribed within a convex shape. + * x-axis and y-axis ("axis-aligned"), contained/inscribed within a convex + * shape. *

* This method computes the MIR for convex shapes only; if a concave shape is * passed in, the resulting rectangle will be computed based on its convex hull. @@ -507,7 +510,7 @@ public static PShape minimumBoundingEllipse(PShape shape, double errorTolerance) final PShape ellipse = new PShape(PShape.PATH); ellipse.setFill(true); ellipse.setFill(Colors.WHITE); - ellipse.beginShape(); + ellipse.beginShape(PConstants.POLYGON); for (double[] eEoord : eEoords) { ellipse.vertex((float) eEoord[0], (float) eEoord[1]); } @@ -522,7 +525,7 @@ public static PShape minimumBoundingEllipse(PShape shape, double errorTolerance) * @param shape */ public static PShape minimumBoundingTriangle(PShape shape) { - MinimumBoundingTriangle mbt = new MinimumBoundingTriangle(fromPShape(shape)); + var mbt = new MinimumBoundingTriangle(fromPShape(shape)); return toPShape(mbt.getTriangle()); } @@ -686,8 +689,9 @@ public static PShape largestEmptyCircle(PShape obstacles, @Nullable PShape bound */ public static List largestEmptyCircles(PShape obstacles, @Nullable PShape boundary, int n, double tolerance) { tolerance = Math.max(0.01, tolerance); - LargestEmptyCircles lecs = new LargestEmptyCircles(obstacles == null ? null : fromPShape(obstacles), boundary == null ? null : fromPShape(boundary), - tolerance); + var boundaryG = boundary == null ? null : fromPShape(boundary); + var obstaclesG = obstacles == null ? null : fromPShape(obstacles); + var lecs = new LargestEmptyCircles(boundaryG, obstaclesG, tolerance); final List out = new ArrayList<>(); for (int i = 0; i < n; i++) { @@ -875,10 +879,10 @@ public static PVector closestVertex(PShape shape, PVector queryPoint) { if (vertices.isEmpty()) { return null; } - float minDistSq = Float.POSITIVE_INFINITY; + double minDistSq = Double.POSITIVE_INFINITY; PVector closest = null; for (PVector v : vertices) { - float distSq = PVector.dist(v, queryPoint); + double distSq = PGS.distanceSq(v, queryPoint); if (distSq < minDistSq) { minDistSq = distSq; closest = v; @@ -910,6 +914,9 @@ public static PVector closestVertex(PShape shape, PVector queryPoint) { */ public static PVector closestPoint(PShape shape, PVector point) { Geometry g = fromPShape(shape); + if (g instanceof Polygonal) { + g = g.getBoundary(); + } Coordinate coord = DistanceOp.nearestPoints(g, PGS.pointFromPVector(point))[0]; return new PVector((float) coord.x, (float) coord.y); } @@ -956,9 +963,9 @@ public static PVector closestPoint(Collection points, PVector point) { */ public static List closestPoints(PShape shape, PVector point) { Geometry g = fromPShape(shape); - ArrayList points = new ArrayList<>(); + List points = new ArrayList<>(); for (int i = 0; i < g.getNumGeometries(); i++) { - final Coordinate coord = DistanceOp.nearestPoints(g.getGeometryN(i), PGS.pointFromPVector(point))[0]; + final Coordinate coord = DistanceOp.nearestPoints(g.getGeometryN(i).getBoundary(), PGS.pointFromPVector(point))[0]; points.add(PGS.toPVector(coord)); } return points; @@ -1272,22 +1279,30 @@ public static PVector solveApollonius(PVector c1, PVector c2, PVector c3, int s1 } /** - * Computes a visibility polygon / isovist, the area visible from a given point - * in a space, considering occlusions caused by obstacles. In this case, - * obstacles comprise the line segments of input shape. + * Computes the visibility polygon (isovist): the region visible from a given + * viewpoint, with occlusions caused by the edges of the supplied shape. 
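For example, inside a Processing draw() loop (a sketch; obstacles is an assumed PShape of occluding geometry):

// Region visible from the mouse position, occluded by the obstacles' edges
PShape isovist = PGS_Optimisation.visibilityPolygon(obstacles, new PVector(mouseX, mouseY));
shape(isovist);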
* - * @param obstacles shape representing obstacles, which may have any manner of - * polygon and line geometries. - * @param viewPoint view point from which to compute visibility. If the input if - * polygonal, the viewpoint may lie outside the polygon. - * @return a polygonal shape representing the visibility polygon. + * @param obstacles a PShape whose edges serve as occluding obstacles; may + * contain polygons and/or lines. + * @param viewPoint the viewpoint from which visibility is computed. If the + * input if polygonal, the viewpoint may lie outside the + * polygon. + * @return a polygon representing the visible region from {@code viewPoint} * @since 1.4.0 * @see #visibilityPolygon(PShape, Collection) */ public static PShape visibilityPolygon(PShape obstacles, PVector viewPoint) { + var g = fromPShape(obstacles); + var p = PGS.pointFromPVector(viewPoint); + VisibilityPolygon vp = new VisibilityPolygon(); - vp.addGeometry(fromPShape(obstacles)); - return toPShape(vp.getIsovist(PGS.coordFromPVector(viewPoint), true)); + vp.addGeometry(g); + + /* + * Skip adding envelope only when viewpoint is in a polygon. + */ + var isovist = vp.getIsovist(p.getCoordinate(), (g instanceof Polygonal) ? !g.contains(p) : true); + return toPShape(isovist); } /** diff --git a/src/main/java/micycle/pgs/PGS_PointSet.java b/src/main/java/micycle/pgs/PGS_PointSet.java index 1df51711..5f91a07c 100644 --- a/src/main/java/micycle/pgs/PGS_PointSet.java +++ b/src/main/java/micycle/pgs/PGS_PointSet.java @@ -4,6 +4,7 @@ import java.util.Collection; import java.util.Collections; import java.util.List; +import java.util.Random; import java.util.SplittableRandom; import java.util.stream.Collectors; import java.util.stream.IntStream; @@ -30,6 +31,7 @@ import it.unimi.dsi.util.XoRoShiRo128PlusRandom; import it.unimi.dsi.util.XoRoShiRo128PlusRandomGenerator; import micycle.pgs.commons.GeometricMedian; +import micycle.pgs.commons.GonHeuristic; import micycle.pgs.commons.GreedyTSP; import micycle.pgs.commons.PEdge; import micycle.pgs.commons.PoissonDistributionJRUS; @@ -393,7 +395,7 @@ public static List random(double xMin, double yMin, double xMax, double * point set is centered around the given center, given by mean coordinates. * * @param centerX x coordinate of the center/mean of the point set - * @param centerY x coordinate of the center/mean of the point set + * @param centerY y coordinate of the center/mean of the point set * @param sd standard deviation, which specifies how much the values can * vary from the mean. 68% of point samples have a value within * one standard deviation of the mean; three standard deviations @@ -411,7 +413,7 @@ public static List gaussian(double centerX, double centerY, double sd, * by mean coordinates. * * @param centerX x coordinate of the center/mean of the point set - * @param centerY x coordinate of the center/mean of the point set + * @param centerY y coordinate of the center/mean of the point set * @param sd standard deviation, which specifies how much the values can * vary from the mean. 68% of point samples have a value within * one standard deviation of the mean; three standard deviations @@ -547,7 +549,7 @@ public static List hexagon(double centerX, double centerY, int length, * (annulus). 
* * @param centerX x coordinate of the center/mean of the ring - * @param centerY x coordinate of the center/mean of the ring + * @param centerY y coordinate of the center/mean of the ring * @param innerRadius radius of the ring's hole * @param outerRadius outer radius of the ring * @param maxAngle sweep angle of the ring (in radians). Can be negative @@ -1084,7 +1086,7 @@ public static List sobolLDS(double xMin, double yMin, double xMax, doub * @return a LINES PShape * @since 1.3.0 */ - public static PShape minimumSpanningTree(List points) { + public static PShape minimumSpanningTree(Collection points) { /* * The Euclidean minimum spanning tree in a plane is a subgraph of the Delaunay * triangulation. @@ -1099,10 +1101,6 @@ public static PShape minimumSpanningTree(List points) { * Computes an approximate Traveling Salesman path for the set of points * provided. Utilises a heuristic based TSP solver, followed by 2-opt heuristic * improvements for further tour optimisation. - *

- * Note {@link PGS_Hull#concaveHullBFS(List, double) concaveHullBFS()} produces - * a similar result (somewhat longer tours, i.e. 10%) but is much more - * performant. * * @param points the list of points for which to compute the approximate * shortest tour @@ -1111,11 +1109,50 @@ public static PShape minimumSpanningTree(List points) { * starting point). * @since 2.0 */ - public static PShape findShortestTour(List points) { + public static PShape findShortestTour(Collection points) { var tour = new GreedyTSP<>(points, (a, b) -> a.dist(b)); return PGS_Conversion.fromPVector(tour.getTour()); } + /** + * Selects {@code k} points from {@code points} to act as centers that are + * typically well distributed over the input set (i.e., each new center tends to + * be chosen from the currently “largest uncovered” / most distant region + * relative to the centers selected so far). + * + * @param points the input points; must be non-empty and contain at least + * {@code k} points + * @param k the number of centers to return; must be {@code >= 1} + * @return a list containing {@code k} points chosen as centers (subset of + * {@code points}) + * @throws IllegalArgumentException if {@code k <= 0}, {@code points} is empty, + * or {@code points.size() < k} + * @since 2.2 + */ + public static List kCenters(Collection points, int k) { + return kCenters(points, k, System.nanoTime()); + } + + /** + * Selects {@code k} points from {@code points} to act as centers that are + * typically well distributed over the input set (i.e., each new center tends to + * be chosen from the currently “largest uncovered” / most distant region + * relative to the centers selected so far). + * + * @param points the input points; must be non-empty and contain at least + * {@code k} points + * @param k the number of centers to return; must be {@code >= 1} + * @param seed random seed used for deterministic center selection + * @return a list containing {@code k} points chosen as centers (subset of + * {@code points}) + * @since 2.2 + */ + public static List kCenters(Collection points, int k, long seed) { + GonHeuristic gh = new GonHeuristic<>(new Random(seed)); + var centers = gh.getCenters(points, k, (a, b) -> PGS.distanceSq(a, b)); + return centers; + } + /** * Applies random weights within a specified range to a list of points. The * weights are assigned to the z-coordinate of each point using a random number @@ -1153,11 +1190,12 @@ public static List applyRandomWeights(List points, double minW * with a random weight assigned to its z-coordinate * @since 2.0 */ - public static List applyRandomWeights(List points, double minWeight, double maxWeight, long seed) { + public static List applyRandomWeights(List points, final double minWeight, final double maxWeight, final long seed) { final SplittableRandom random = new SplittableRandom(seed); return points.stream().map(p -> { p = p.copy(); - p.z = (float) random.nextDouble(minWeight, maxWeight); + var w = minWeight == maxWeight ? 
minWeight : random.nextDouble(minWeight, maxWeight); + p.z = (float) w; return p; }).collect(Collectors.toList()); } diff --git a/src/main/java/micycle/pgs/PGS_Polygonisation.java b/src/main/java/micycle/pgs/PGS_Polygonisation.java new file mode 100644 index 00000000..60b3b4c9 --- /dev/null +++ b/src/main/java/micycle/pgs/PGS_Polygonisation.java @@ -0,0 +1,569 @@ +package micycle.pgs; + +import static micycle.pgs.PGS_Conversion.toPShape; + +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.Comparator; +import java.util.IdentityHashMap; +import java.util.List; + +import micycle.pgs.commons.AreaOptimalPolygonizer; +import micycle.pgs.commons.AreaOptimalPolygonizer.AreaObjective; +import micycle.pgs.commons.Uncrossing2Opt; +import net.jafama.FastMath; +import processing.core.PShape; +import processing.core.PVector; + +/** + * Generates simple polygonisations of point sets. + *

+ * A polygonisation is a simple polygon whose vertex set is exactly the given + * point set, i.e. a non-self-intersecting Hamiltonian cycle through all points. + * Different algorithms may produce different polygonisations of the same point + * set. + *
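For illustration, a minimal Processing sketch comparing two polygonisations of the same point set (point generation is shown inline rather than assuming any particular helper):

List<PVector> points = new ArrayList<>();
for (int i = 0; i < 50; i++) {
	points.add(new PVector(random(500), random(500)));
}
PShape thin = PGS_Polygonisation.minArea(points); // area-minimising polygonisation
PShape fat = PGS_Polygonisation.maxArea(points); // area-maximising polygonisation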

+ * Polygonisations are distinct from geometric hulls: hulls may select a + * subset of extreme points to form an enclosing boundary, whereas + * polygonisations must use all points as vertices. + * + * @author Michael Carleton + * @since 2.2 + */ +public class PGS_Polygonisation { + + /** + * Produces a simple polygonisation that attempts to minimise the polygon area + * while using every point in the supplied set as a vertex. + * + * @param points the input point set (must not be {@code null}) containing >2 + * points. + * @return a new {@link processing.core.PShape PShape} representing a simple + * polygon that polygonises the input points and (attempts to) minimise + * area. + * @see {@link #maxArea(Collection)} + * @since 2.2 + */ + public static PShape minArea(Collection points) { + var coords = points.stream().map(p -> PGS.coordFromPVector(p)).toList(); + var g = AreaOptimalPolygonizer.polygonize(coords, AreaObjective.MINIMIZE); + return toPShape(g); + } + + /** + * Produces a simple polygonisation that attempts to maximise the polygon area + * while using every point in the supplied set as a vertex. + * + * @param points the input point set (must not be {@code null}) containing >2 + * points. + * @return a new {@link processing.core.PShape PShape} representing a simple + * polygon that polygonises the input points and (attempts to) maximise + * area. + * @see #minArea(Collection) + * @since 2.2 + */ + public static PShape maxArea(Collection points) { + var coords = points.stream().map(p -> PGS.coordFromPVector(p)).toList(); + var g = AreaOptimalPolygonizer.polygonize(coords, AreaObjective.MAXIMIZE); + return toPShape(g); + } + + /** + * Computes a polygonisation that approximates a shortest closed tour visiting + * every point exactly once (a Hamiltonian cycle with small perimeter). + *

+ * This method is effectively a TSP-style polygonisation: it returns a simple + * polygon whose total edge length is minimised (or approximated by the + * underlying shortest-tour routine). + * + * @param points the input point set (must not be {@code null}). If the set + * contains fewer than three distinct points an appropriate + * degenerate {@link processing.core.PShape PShape} containing the + * input points will be returned. + * @return a new {@link processing.core.PShape PShape} representing a simple + * polygon that attempts to minimise perimeter. + * @since 2.2 + */ + public static PShape minPerimeter(Collection points) { + return PGS_PointSet.findShortestTour(points); + } + + /** + * Builds a polygonisation by scanning points horizontally (primary sort by Y, + * secondary by X) and then removing edge crossings via a 2-opt (segment + * reversal) uncrossing routine. + *

+ * The produced polygon has "horizontal scanline" characteristics. + * + * @param points the input point set (may be {@code null}). If {@code null} an + * empty {@link processing.core.PShape PShape} is returned. If the + * set contains fewer than three distinct points a degenerate + * {@link processing.core.PShape PShape} containing the input + * points is returned. + * @return a new {@link processing.core.PShape PShape} representing a simple + * polygon constructed by horizontal scan and uncrossing. + * @since 2.2 + */ + public static PShape horizontal(Collection points) { + return scanAndResolve(points, true); + } + + /** + * Builds a polygonisation by scanning points vertically (primary sort by X, + * secondary by Y) and then removing edge crossings via a 2-opt (segment + * reversal) uncrossing routine. + *

+ * The produced polygon has "vertical scanline" characteristics. + * + * @param points the input point set (may be {@code null}). If {@code null} an + * empty {@link processing.core.PShape PShape} is returned. If the + * set contains fewer than three distinct points a degenerate + * {@link processing.core.PShape PShape} containing the input + * points is returned. + * @return a new {@link processing.core.PShape PShape} representing a simple + * polygon constructed by vertical scan and uncrossing. + * @since 2.2 + */ + public static PShape vertical(Collection points) { + return scanAndResolve(points, false); + } + + /** + * Produces a polygonisation by ordering points according to a Hilbert curve + * (space-filling curve) ordering, then applying a local uncrossing (2-opt) pass + * to remove any segment intersections. + *

+ * Hilbert ordering tends to preserve locality and thus often produces visually + * compact, low-crossing initial orderings which the uncrossing step refines + * into a simple polygon. + * + * @param points the input point set (must not be {@code null}). If the set + * contains fewer than three distinct points an appropriate + * degenerate {@link processing.core.PShape PShape} containing the + * input points will be returned. + * @return a new {@link processing.core.PShape PShape} representing a simple + * polygon obtained from Hilbert ordering + uncrossing. + * @since 2.2 + */ + public static PShape hilbert(Collection points) { + var seq = PGS_PointSet.hilbertSort(new ArrayList(points)); + Uncrossing2Opt.uncross(seq); + return toPolygon(seq); + } + + /** + * Builds a polygonisation by grouping points into concentric "rings" around the + * centroid, ordering points within each ring by polar angle, and stitching the + * rings together into a single sequence. + *

+ * This is a heuristic polygonisation: it favors "circular" or banded structures + * (concentric/clustered layouts) and often produces visually compact, + * low-crossing initial orders that the uncrossing step refines into a simple + * polygon. + * + * @param points the input point set (may be {@code null}). If {@code null} an + * empty {@link processing.core.PShape PShape} is returned. If the + * set contains fewer than three distinct points a degenerate + * {@link processing.core.PShape PShape} containing the input + * points is returned. + * @return a new {@link processing.core.PShape PShape} representing a simple + * polygon constructed by concentric ring (circular) ordering and + * subsequent uncrossing. + * @since 2.2 + */ + public static PShape circular(Collection points) { + if (points == null) { + return new PShape(); + } + final int n = points.size(); + if (n < 3) + return PGS_Conversion.fromPVector(new ArrayList<>(points)); + + // center = centroid + double cx = 0, cy = 0; + for (PVector p : points) { + cx += p.x; + cy += p.y; + } + cx /= n; + cy /= n; + + List info = new ArrayList<>(n); + for (PVector p : points) { + double dx = p.x - cx, dy = p.y - cy; + info.add(new Info(p, Math.sqrt(dx * dx + dy * dy), FastMath.atan2(dy, dx))); + } + + // sort by radius and split into rings (equal-size quantiles) + Collections.sort(info, (a, b) -> Double.compare(a.r, b.r)); + int numRings = Math.max(1, (int) Math.round(Math.sqrt(n))); // heuristic + List> rings = new ArrayList<>(numRings); + for (int i = 0; i < numRings; i++) + rings.add(new ArrayList<>()); + + for (int i = 0; i < n; i++) { + int bucket = (int) ((long) i * numRings / n); // maps 0..n-1 into 0..numRings-1 + rings.get(bucket).add(info.get(i)); + } + + // sort each ring by angle + for (List ring : rings) { + Collections.sort(ring, (a, b) -> Double.compare(a.theta, b.theta)); + } + + // concatenate rings, aligning each ring to the nearest start point and + // alternating direction + List seq = new ArrayList<>(n); + boolean forward = true; + for (List ring : rings) { + if (ring.isEmpty()) + continue; + if (seq.isEmpty()) { + // first ring: optionally start at smallest theta, and maybe reverse for parity + if (!forward) + Collections.reverse(ring); + for (Info it : ring) + seq.add(it.p); + } else { + // find index in ring nearest to last appended point + PVector last = seq.get(seq.size() - 1); + int start = 0; + double best = Double.POSITIVE_INFINITY; + for (int k = 0; k < ring.size(); k++) { + double dx = last.x - ring.get(k).p.x; + double dy = last.y - ring.get(k).p.y; + double d2 = dx * dx + dy * dy; + if (d2 < best) { + best = d2; + start = k; + } + } + // append ring starting at start, in forward or reverse direction + if (forward) { + for (int k = 0; k < ring.size(); k++) { + seq.add(ring.get((start + k) % ring.size()).p); + } + } else { + for (int k = 0; k < ring.size(); k++) { + int idx = (start - k) % ring.size(); + if (idx < 0) + idx += ring.size(); + seq.add(ring.get(idx).p); + } + } + } + forward = !forward; + } + + Uncrossing2Opt.uncross(seq); + + return toPolygon(seq); + } + + /** + * Generates a polygonisation by angular (radial) sorting: points are sorted by + * angle around the centroid, tie-broken by distance from the centroid, and then + * a 2-opt uncrossing pass is applied. + *

+ * The angular sort usually gives a star-shaped output. + * + * @param points the input point set (may be {@code null}). If {@code null} an + * empty {@link processing.core.PShape PShape} is returned. If the + * set contains fewer than three distinct points a degenerate + * {@link processing.core.PShape PShape} containing the input + * points is returned. + * @return a new {@link processing.core.PShape PShape} representing a simple + * polygon constructed by radial sorting and uncrossing. + * @since 2.2 + */ + public static PShape angular(Collection points) { + if (points == null) { + return new PShape(); + } + final int n = points.size(); + if (n == 0) { + return PGS_Conversion.fromPVector(points); + } + if (n < 3) { + return PGS_Conversion.fromPVector(new ArrayList<>(points)); + } + + // compute centroid as center for angular sort + double cx = 0.0, cy = 0.0; + for (PVector p : points) { + cx += p.x; + cy += p.y; + } + cx /= n; + cy /= n; + + List info = new ArrayList<>(n); + for (PVector p : points) { + double dx = p.x - cx; + double dy = p.y - cy; + double theta = FastMath.atan2(dy, dx); + double r = Math.sqrt(dx * dx + dy * dy); + info.add(new Info(p, r, theta)); + } + + // sort by angle, tie-break by radius (closer first) + Collections.sort(info, (a, b) -> { + int c = Double.compare(a.theta, b.theta); + if (c != 0) + return c; + return Double.compare(a.r, b.r); + }); + + List seq = new ArrayList<>(n); + for (Info it : info) { + seq.add(it.p); + } + + Uncrossing2Opt.uncross(seq); + + return toPolygon(seq); + } + + /** + * Constructs a polygonisation using the "onion" (convex-layers) strategy: + * repeatedly peel convex hull layers (outermost first), stitch hull layers into + * a single cyclic order (alternating directions for continuity), insert any + * leftover points with a cheapest-insertion heuristic, and finally apply a + * 2-opt uncrossing pass. + *

+ * This approach tends to respect global convex structure and produces + * spiral-like polygonisations that use all points as vertices. + * + * @param points the input point set (may be {@code null}). If {@code null} an + * empty {@link processing.core.PShape PShape} is returned. If the + * set contains fewer than three distinct points a degenerate + * {@link processing.core.PShape PShape} containing the input + * points is returned. + * @return a new {@link processing.core.PShape PShape} representing a simple + * polygon constructed by convex-layer peeling, stitching and + * uncrossing. + * @since 2.2 + */ + public static PShape onion(Collection points) { + if (points == null) + return new PShape(); + int n0 = points.size(); + if (n0 < 3) + return PGS_Conversion.fromPVector(new ArrayList<>(points)); + + // mutable working set + List remaining = new ArrayList<>(points); + + // peel convex hull layers + List> layers = new ArrayList<>(); + while (remaining.size() >= 3) { + List hull = convexHullMonotoneChain(remaining); + if (hull.size() < 3) + break; // degenerate (collinear etc.) + layers.add(hull); + + // remove hull points (identity-based) + var onHull = Collections.newSetFromMap(new IdentityHashMap()); + onHull.addAll(hull); + remaining.removeIf(onHull::contains); + } + + // stitch layers into one cyclic order (spiral-ish) + List seq = new ArrayList<>(points.size()); + boolean forward = true; + + for (List layer : layers) { + if (layer.isEmpty()) + continue; + + if (seq.isEmpty()) { + if (!forward) + java.util.Collections.reverse(layer); + seq.addAll(layer); + } else { + PVector last = seq.get(seq.size() - 1); + List rotated = rotateToNearest(layer, last); + if (!forward) + Collections.reverse(rotated); + seq.addAll(rotated); + } + forward = !forward; + } + + // if anything left (0,1,2 points or collinear residue), insert cheaply + for (PVector p : remaining) { + insertCheapest(seq, p); + } + + Uncrossing2Opt.uncross(seq); + + return toPolygon(seq); + } + + /** + * Generic scan-based polygonisation. If primaryIsY is true, points are sorted + * primarily by Y then X (horizontal scanlines). Otherwise sorted primarily by X + * then Y (vertical scanlines). After sorting, a 2-opt style crossing removal is + * applied by iteratively reversing segments that cause segment intersections. + * + * @param points the input point set + * @param primaryIsY whether to sort primarily by Y (true) or X (false) + * @return a simple polygon PShape + */ + private static PShape scanAndResolve(Collection points, boolean primaryIsY) { + // defensive handling + if (points == null) { + return new PShape(); + } + + final int n = points.size(); + if (n == 0) { + return PGS_Conversion.fromPVector(points); + } + if (n < 3) { + // trivial: nothing to polygonise + return PGS_Conversion.fromPVector(new ArrayList<>(points)); + } + + // make a mutable copy + List seq = new ArrayList<>(points); + + // comparator depending on primary axis + Comparator cmp = primaryIsY ? Comparator.comparingDouble((PVector p) -> p.y).thenComparingDouble(p -> p.x) + : Comparator.comparingDouble((PVector p) -> p.x).thenComparingDouble(p -> p.y); + + Collections.sort(seq, cmp); + Uncrossing2Opt.uncross(seq); + + return toPolygon(seq); + } + + /** + * Computes the convex hull of a point set using the Monotone Chain algorithm. 
+ * + * @param pts list of points + * @return a list of points representing the convex hull in CCW order + */ + private static List convexHullMonotoneChain(List pts) { + // returns CCW hull without repeating the first point + int n = pts.size(); + if (n < 3) + return new ArrayList<>(); + + // sort by x then y + List p = new ArrayList<>(pts); + p.sort((a, b) -> { + int cx = Float.compare(a.x, b.x); + if (cx != 0) + return cx; + return Float.compare(a.y, b.y); + }); + + List lower = new ArrayList<>(); + for (PVector v : p) { + while (lower.size() >= 2 && cross(lower.get(lower.size() - 2), lower.get(lower.size() - 1), v) <= 0) { + lower.remove(lower.size() - 1); + } + lower.add(v); + } + + List upper = new ArrayList<>(); + for (int i = p.size() - 1; i >= 0; i--) { + PVector v = p.get(i); + while (upper.size() >= 2 && cross(upper.get(upper.size() - 2), upper.get(upper.size() - 1), v) <= 0) { + upper.remove(upper.size() - 1); + } + upper.add(v); + } + + // remove last of each (it's the start of the other list) + lower.remove(lower.size() - 1); + upper.remove(upper.size() - 1); + + List hull = new ArrayList<>(lower.size() + upper.size()); + hull.addAll(lower); + hull.addAll(upper); + return hull; + } + + private static float cross(PVector o, PVector a, PVector b) { + return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x); + } + + private static List rotateToNearest(List ring, PVector target) { + int m = ring.size(); + if (m == 0) + return new ArrayList<>(); + + int best = 0; + double bestD2 = Double.POSITIVE_INFINITY; + for (int i = 0; i < m; i++) { + PVector p = ring.get(i); + double dx = p.x - target.x; + double dy = p.y - target.y; + double d2 = dx * dx + dy * dy; + if (d2 < bestD2) { + bestD2 = d2; + best = i; + } + } + + List out = new ArrayList<>(m); + for (int k = 0; k < m; k++) + out.add(ring.get((best + k) % m)); + return out; + } + + /** + * Inserts a point into a cycle at the position that minimizes the increase in + * total length (cheapest insertion heuristic). + * + * @param cycle the current polygon cycle (modified in place) + * @param p the point to insert + */ + private static void insertCheapest(List cycle, PVector p) { + int n = cycle.size(); + if (n == 0) { + cycle.add(p); + return; + } + if (n == 1) { + cycle.add(p); + return; + } + + int bestIdx = 0; + double bestDelta = Double.POSITIVE_INFINITY; + + for (int i = 0; i < n; i++) { + PVector a = cycle.get(i); + PVector b = cycle.get((i + 1) % n); + double delta = dist(a, p) + dist(p, b) - dist(a, b); + if (delta < bestDelta) { + bestDelta = delta; + bestIdx = i + 1; + } + } + cycle.add(bestIdx, p); + } + + private static double dist(PVector a, PVector b) { + double dx = a.x - b.x, dy = a.y - b.y; + return Math.sqrt(dx * dx + dy * dy); + } + + private static record Info(PVector p, double r, double theta) { + } + + /** + * Converts a list of vertices into a closed polygon PShape. 
+ */ + private static PShape toPolygon(List points) { + if (!points.get(0).equals(points.get(points.size() - 1))) { + points.add(points.get(0)); // close + } + return PGS_Conversion.fromPVector(points); + } + +} diff --git a/src/main/java/micycle/pgs/PGS_Processing.java b/src/main/java/micycle/pgs/PGS_Processing.java index 0042648c..563e0c8d 100644 --- a/src/main/java/micycle/pgs/PGS_Processing.java +++ b/src/main/java/micycle/pgs/PGS_Processing.java @@ -32,7 +32,9 @@ import org.apache.commons.math3.ml.distance.EuclideanDistance; import org.locationtech.jts.algorithm.Angle; import org.locationtech.jts.algorithm.Area; +import org.locationtech.jts.algorithm.LineIntersector; import org.locationtech.jts.algorithm.Orientation; +import org.locationtech.jts.algorithm.RobustLineIntersector; import org.locationtech.jts.algorithm.hull.ConcaveHullOfPolygons; import org.locationtech.jts.algorithm.locate.IndexedPointInAreaLocator; import org.locationtech.jts.densify.Densifier; @@ -51,10 +53,12 @@ import org.locationtech.jts.geom.prep.PreparedGeometryFactory; import org.locationtech.jts.geom.util.GeometryFixer; import org.locationtech.jts.geom.util.LineStringExtracter; +import org.locationtech.jts.geom.util.LinearComponentExtracter; import org.locationtech.jts.geom.util.PolygonExtracter; import org.locationtech.jts.linearref.LengthIndexedLine; +import org.locationtech.jts.noding.BasicSegmentString; +import org.locationtech.jts.noding.MCIndexNoder; import org.locationtech.jts.noding.MCIndexSegmentSetMutualIntersector; -import org.locationtech.jts.noding.NodedSegmentString; import org.locationtech.jts.noding.Noder; import org.locationtech.jts.noding.SegmentIntersectionDetector; import org.locationtech.jts.noding.SegmentIntersector; @@ -75,12 +79,9 @@ import com.github.micycle1.geoblitz.YStripesPointInAreaLocator; import it.unimi.dsi.util.XoRoShiRo128PlusRandomGenerator; -import micycle.balaban.BalabanSolver; -import micycle.balaban.Point; -import micycle.balaban.Segment; import micycle.pgs.color.ColorUtils; import micycle.pgs.color.Colors; -import micycle.pgs.commons.PolygonDecomposition; +import micycle.pgs.commons.KeilSnoeyinkConvexPartitioner; import micycle.pgs.commons.SeededRandomPointsInGridBuilder; import micycle.pgs.commons.ShapeRandomPointSampler; import micycle.trapmap.TrapMap; @@ -89,10 +90,17 @@ import processing.core.PVector; /** - * Methods that process shape geometry: partitioning, slicing, cleaning, etc. + * Shape-processing utilities for {@link PShape} geometry. + * + *

+ * This class groups “workflow” operations that operate on shapes + * rather than primarily reshaping them: sampling and traversal, + * validation/repair, cleaning and filtering, intersection helpers, and + * partitioning/slicing/splitting into multiple parts. Methods often return + * derived shapes (or shape collections) suitable for downstream steps such as + * meshing, tiling, coloring, or boolean operations. * * @author Michael Carleton - * */ public final class PGS_Processing { @@ -138,6 +146,8 @@ public static PShape densify(PShape shape, double distanceTolerance) { * point away from the shape (outwards); negative * values offset the point inwards towards its * interior. + * @return A {@link PVector} located on the exterior of {@code shape} at the + * requested perimeter position and offset. * @see #pointsOnExterior(PShape, int, double) */ public static PVector pointOnExterior(PShape shape, double perimeterPosition, double offsetDistance) { @@ -164,6 +174,8 @@ public static PVector pointOnExterior(PShape shape, double perimeterPosition, do * point away from the shape (outwards); negative * values offset the point inwards towards its * interior. + * @return A {@link PVector} located at the specified distance along the + * exterior perimeter, offset by {@code offsetDistance}. * @since 1.4.0 */ public static PVector pointOnExteriorByDistance(PShape shape, double perimeterDistance, double offsetDistance) { @@ -433,9 +445,11 @@ private static IndexedLengthIndexedLine makeIndexedLine(PShape shape) { * @since 1.2.0 */ public static PShape extractPerimeter(PShape shape, double from, double to) { - from = floatMod(from, 1); - if (to != 1) { // so that value of 1 is not moduloed to equal 0 - to = floatMod(to, 1); + if (!isWhole(from)) { + from = floatMod(from, 1.0); + } + if (!isWhole(to)) { + to = floatMod(to, 1.0); } Geometry g = fromPShape(shape); if (!g.getGeometryType().equals(Geometry.TYPENAME_LINEARRING) && !g.getGeometryType().equals(Geometry.TYPENAME_LINESTRING)) { @@ -451,26 +465,8 @@ public static PShape extractPerimeter(PShape shape, double from, double to) { .createLineString(Stream.concat(Arrays.stream(l1.getCoordinates()), Arrays.stream(l2.getCoordinates())).toArray(Coordinate[]::new))); } - /* - * The PGS toPShape() method treats a closed linestring as polygonal (having a - * fill), which occurs when from==0 and to==1. We don't want the output to be - * filled in, so build the PATH shape here without closing it. - */ LineString string = (LineString) l.extractLine(length * from, length * to); - PShape perimeter = new PShape(); - perimeter.setFamily(PShape.PATH); - perimeter.setStroke(true); - perimeter.setStroke(micycle.pgs.color.Colors.PINK); - perimeter.setStrokeWeight(4); - - perimeter.beginShape(); - Coordinate[] coords = string.getCoordinates(); - for (Coordinate coord : coords) { - perimeter.vertex((float) coord.x, (float) coord.y); - } - perimeter.endShape(); - - return perimeter; + return toPShape(string); } /** @@ -518,25 +514,146 @@ public static double tangentAngle(PShape shape, double perimeterRatio) { } /** - * Computes all points of intersection between the linework of two - * shapes. + * Computes all self-intersection points of the linework contained within a + * single shape. + * *
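+ * <p>
+ * Illustrative usage (assumes a possibly self-intersecting sketch-scope
+ * {@code shape}):
+ * <pre>{@code
+ * List<PVector> crossings = PGS_Processing.intersectionPoints(shape);
+ * for (PVector p : crossings) {
+ *     circle(p.x, p.y, 5); // mark each self-intersection
+ * }
+ * }</pre>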

- * NOTE: This method shouldn't be confused with + * This is equivalent to finding all intersection points formed by pairwise + * intersections of the shape's lineal components (exterior rings, interior + * rings/holes and standalone LineStrings). + *

+ * Note endpoint-endpoint intersections ("touches") are not included. + * + * @param a the input shape whose linework will be tested for self-intersections + * @return a List containing self-intersection points; empty if none + * are found + * @since 2.2 + */ + public static List intersectionPoints(PShape a) { + Geometry g = fromPShape(a); + + @SuppressWarnings("unchecked") + List strings = LinearComponentExtracter.getLines(g); + + final Collection segmentStringsA = new ArrayList<>(strings.size()); + + for (LineString ls : strings) { + Coordinate[] c = ls.getCoordinates(); + if (c.length < 2) { + continue; + } + + // ignore closed + int n = c.length; + if (n >= 2 && c[0].equals2D(c[n - 1])) { + n--; // drop duplicated closing coord + } + + // emit one SegmentString per segment + for (int i = 0; i < n - 1; i++) { + Coordinate p0 = new Coordinate(c[i]); + Coordinate p1 = new Coordinate(c[i + 1]); + if (!p0.equals2D(p1)) { + segmentStringsA.add(new BasicSegmentString(new Coordinate[] { p0, p1 }, null)); + } + } + } + + return intersections(segmentStringsA, false); + } + + /** + * Computes all intersection points between the linework (edges/boundaries) of + * two shapes. + *

+ * This method operates on the extracted linework of the provided PShapes: that + * includes polygon exteriors, polygon holes, and standalone paths (and any + * lineal children of GROUP shapes). It does not compute the geometric + * intersection area of filled polygons — see * {@link micycle.pgs.PGS_ShapeBoolean#intersect(PShape, PShape) - * PGS_ShapeBoolean.intersect()}, which finds the shape made by the intersecting - * shape areas. + * PGS_ShapeBoolean.intersect()} for area-based intersection results. * - * @param a one shape - * @param b another shape - * @return list of all intersecting points (as PVectors) + * @param a one input shape (polygons, lines or groups containing them) + * @param b the other input shape (polygons, lines or groups containing them) + * @return a List containing the intersection points between the + * linework of {@code a} and {@code b}. Returns an empty list if no + * intersections are found. */ - public static List shapeIntersection(PShape a, PShape b) { + public static List intersectionPoints(PShape a, PShape b) { final Collection segmentStringsA = SegmentStringUtil.extractSegmentStrings(fromPShape(a)); final Collection segmentStringsB = SegmentStringUtil.extractSegmentStrings(fromPShape(b)); return intersections(segmentStringsA, segmentStringsB); } + static List intersections(Collection segments, boolean countEndpointTouches) { + @SuppressWarnings("unchecked") + final Collection segStrings = (Collection) segments; + + final Set hits = new HashSet<>(); + final RobustLineIntersector li = new RobustLineIntersector(); + + final SegmentIntersector intersector = (e0, i0, e1, i1) -> { + // Skip identical segment + if (e0 == e1 && i0 == i1) + return; + + // For self-comparisons, avoid double-reporting, and (optionally) skip adjacent + // segments + if (e0 == e1) { + if (i1 <= i0) + return; // process each pair once + + if (!countEndpointTouches) { + final int nSegs = e0.size() - 1; // number of segments in this SegmentString + + // Adjacent by index, including wrap-around (last segment adjacent to first) + final boolean adjacent = Math.abs(i0 - i1) == 1 || (i0 == 0 && i1 == nSegs - 1) || (i1 == 0 && i0 == nSegs - 1); + + if (adjacent) + return; + } + } + + final Coordinate p0 = e0.getCoordinate(i0); + final Coordinate p1 = e0.getCoordinate(i0 + 1); + final Coordinate q0 = e1.getCoordinate(i1); + final Coordinate q1 = e1.getCoordinate(i1 + 1); + + li.computeIntersection(p0, p1, q0, q1); + if (!li.hasIntersection()) + return; + + final boolean collinear = li.getIntersectionNum() == LineIntersector.COLLINEAR; + + for (int k = 0; k < li.getIntersectionNum(); k++) { + final Coordinate ip = li.getIntersection(k); + + if (!countEndpointTouches) { + // Keep "proper" crossings (interior-interior) and collinear overlaps. + // Otherwise drop intersections that occur at any segment endpoint (touches). 
+ if (!li.isProper() && !collinear) { + if (ip.equals2D(p0) || ip.equals2D(p1) || ip.equals2D(q0) || ip.equals2D(q1)) { + continue; + } + } + } + + hits.add(new Coordinate(ip)); + } + }; + + MCIndexNoder noder = new MCIndexNoder(); + noder.setSegmentIntersector(intersector); + noder.computeNodes(segStrings); + + final List out = new ArrayList<>(hits.size()); + for (Coordinate c : hits) { + out.add(new PVector((float) c.x, (float) c.y)); + } + return out; + } + static List intersections(Collection segmentStringsA, Collection segmentStringsB) { final Collection larger, smaller; if (segmentStringsA.size() > segmentStringsB.size()) { @@ -554,58 +671,15 @@ static List intersections(Collection segmentStringsA, Collection // checks if two segments actually intersect final SegmentIntersectionDetector sid = new SegmentIntersectionDetector(); - mci.process(smaller, new SegmentIntersector() { - @Override - public void processIntersections(SegmentString e0, int segIndex0, SegmentString e1, int segIndex1) { - sid.processIntersections(e0, segIndex0, e1, segIndex1); - if (sid.hasIntersection()) { - points.add(new PVector((float) sid.getIntersection().x, (float) sid.getIntersection().y)); - } - } - - @Override - public boolean isDone() { - return false; + mci.process(smaller, (e0, segIndex0, e1, segIndex1) -> { + sid.processIntersections(e0, segIndex0, e1, segIndex1); + if (sid.hasIntersection()) { + points.add(new PVector((float) sid.getIntersection().x, (float) sid.getIntersection().y)); } }); return new ArrayList<>(points); } - /** - * Computes all points of intersection between segments in a set of line - * segments. The input set is first processed to remove degenerate segments - * (does not mutate the input). - * - * @param lineSegments a list of PVectors where each pair (couplet) of PVectors - * represent the start and end point of one line segment - * @return A list of PVectors each representing the intersection point of a - * segment pair - */ - public static List lineSegmentsIntersection(List lineSegments) { - final List intersections = new ArrayList<>(); - if (lineSegments.size() % 2 != 0) { - System.err.println( - "The input to lineSegmentsIntersection() contained an odd number of line segment vertices. The method expects successive pairs of vertices"); - return intersections; - } - - Collection segments = new ArrayList<>(); - for (int i = 0; i < lineSegments.size(); i += 2) { // iterate pairwise - final PVector p1 = lineSegments.get(i); - final PVector p2 = lineSegments.get(i + 1); - segments.add(new Segment(p1.x, p1.y, p2.x, p2.y)); - } - - final BalabanSolver balabanSolver = new BalabanSolver((a, b) -> { - final Point pX = a.getIntersection(b); - intersections.add(new PVector((float) pX.x, (float) pX.y)); - }); - segments.removeAll(balabanSolver.findDegenerateSegments(segments)); - balabanSolver.computeIntersections(segments); - - return intersections; - } - /** * Generates N random points that lie within the shape region. *

@@ -618,6 +692,8 @@ public static List lineSegmentsIntersection(List lineSegments) * * @param shape defines the region in which random points are generated * @param points number of points to generate within the shape region + * @return a list of {@link PVector} points randomly sampled inside + * {@code shape} * @see #generateRandomPoints(PShape, int, long) * @see #generateRandomGridPoints(PShape, int, boolean, double) */ @@ -888,32 +964,63 @@ public static PShape extractHoles(PShape shape) { } /** - * Finds the polygonal faces formed by a set of intersecting line segments. - * - * @param lineSegmentVertices a list of PVectors where each pair (couplet) of - * PVectors represent the start and end point of one - * line segment - * @return a GROUP PShape where each child shape is a face / enclosed area - * formed between intersecting lines - * @since 1.1.2 + * Extracts the topological boundary of the given shape. + * + *

+ * For a polygonal (area) {@code PShape}, the boundary is its perimeter: the + * outer outline plus the outlines of any holes. The returned shape encodes this + * as one or more unfilled {@link PShape#PATH PATH} shapes (closed where + * appropriate). + * + *

+ * For non-area shapes, the boundary may be empty or may reduce to point-like + * elements (for example, the boundary of an open path consists of its end + * vertices). + * + *

+ * This is useful because some operations have different semantics depending on + * whether the input is encoded as an area ({@code kind == POLYGON}) or as a + * stroke/path ({@code kind == PATH}). For example, buffering a {@code POLYGON} + * expands/contracts an area, whereas buffering a {@code PATH} produces a + * stroked “tube” around the linework. Extracting the boundary provides a + * consistent way to convert an area into its outline representation prior to + * such operations. + * + *
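+ * <p>
+ * A minimal sketch of that workflow (assumes a polygonal sketch-scope
+ * {@code shape}, and {@code PGS_Morphology.buffer()} for the follow-up step):
+ * <pre>{@code
+ * PShape outline = PGS_Processing.extractBoundary(shape); // area -> PATH outline
+ * PShape tube = PGS_Morphology.buffer(outline, 10);       // buffers the linework, not the area
+ * }</pre>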

+ * Note: the returned {@code PShape} may be a {@link PConstants#GROUP} if the + * boundary contains multiple disjoint components. + * + * @param shape the input shape whose boundary is to be returned + * @return a {@code PShape} representing the boundary of {@code shape} + * @since 2.2 */ - public static PShape polygonizeLines(List lineSegmentVertices) { - // TODO constructor for LINES PShape - if (lineSegmentVertices.size() % 2 != 0) { - System.err.println("The input to polygonizeLines() contained an odd number of vertices. The method expects successive pairs of vertices."); - return new PShape(); - } - - final List segmentStrings = new ArrayList<>(lineSegmentVertices.size() / 2); - for (int i = 0; i < lineSegmentVertices.size(); i += 2) { - final PVector v1 = lineSegmentVertices.get(i); - final PVector v2 = lineSegmentVertices.get(i + 1); - if (!v1.equals(v2)) { - segmentStrings.add(new NodedSegmentString(new Coordinate[] { PGS.coordFromPVector(v1), PGS.coordFromPVector(v2) }, null)); - } - } + public static PShape extractBoundary(PShape shape) { + return toPShape(fromPShape(shape).getBoundary()); + } - return PGS.polygonizeSegments(segmentStrings, true); + /** + * Finds polygonal faces from the given shape's linework. + *
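+ * <p>
+ * Illustrative usage (assumes a sketch-scope {@code linework} shape made of
+ * intersecting lines, drawn in a Processing sketch):
+ * <pre>{@code
+ * PShape faces = PGS_Processing.polygonize(linework);
+ * for (PShape face : faces.getChildren()) {
+ *     face.setFill(color(random(255), random(255), random(255))); // color each enclosed face
+ * }
+ * shape(faces);
+ * }</pre>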

+ * This method extracts linework from the supplied PShape (including existing + * polygon edges and standalone line primitives), nodes intersections, and + * polygonizes the resulting segment network. Only closed polygonal faces + * (enclosed areas) are returned. Open edges, dangling line segments + * ("dangles"), and isolated lines that do not form a closed ring are ignored + * and dropped — the result contains faces only. + * + * The returned PShape is a GROUP whose children are PShapes representing each + * detected face. + * + * @param shape a PShape whose linework (edges) will be used to find polygonal + * faces; can include existing polygons or line primitives + * @return a GROUP PShape containing only the polygonal faces discovered from + * the input linework; dangles and non-enclosed edges are not included + * @since 2.2 + */ + public static PShape polygonize(PShape shape) { + var g = fromPShape(shape); + var segs = SegmentStringUtil.extractNodedSegmentStrings(g); + return PGS.polygonizeSegments(segs, true); } /** @@ -1101,21 +1208,35 @@ public static PShape centroidSplit(PShape shape, int n, double offset) { } /** - * Partitions shape(s) into convex (simple) polygons. + * Partitions the provided shape into convex, simple polygonal pieces. + *

+ * This implementation uses the optimal Keil & Snoeyink dynamic-programming + * approach, which minimises the number of added diagonals and thus the number + * of convex pieces. + *

+ * The input may be a single polygon PShape or a GROUP PShape containing + * multiple polygon children. Each polygon child is partitioned independently; + * the method returns a GROUP PShape whose children are the convex pieces. If + * the partition produces exactly one child, that single child PShape is + * returned (rather than a GROUP). + *
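+ * <p>
+ * For example (assumes a concave sketch-scope {@code shape}):
+ * <pre>{@code
+ * PShape pieces = PGS_Processing.convexPartition(shape); // GROUP of convex pieces
+ * shape(pieces);
+ * }</pre>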

+ * Polygons with interior holes are supported — holes are bridged to produce + * simple polygons prior to partitioning. * - * @param shape the shape to partition. can be a single polygon or a GROUP of - * polygons - * @return a GROUP PShape, where each child shape is some convex partition of - * the original shape + * @param shape a non-null PShape representing a polygon or a GROUP of polygons + * @return a GROUP PShape whose children are convex, simple polygon partitions + * of the input; if only one partition piece results, that child PShape + * is returned directly + * @implNote Implementation changed in v2.2 from Bayazit algorithm to Keil & + * Snoeyink (optimal). */ public static PShape convexPartition(PShape shape) { - // algorithm described in https://mpen.ca/406/bayazit final Geometry g = fromPShape(shape); final PShape polyPartitions = new PShape(PConstants.GROUP); @SuppressWarnings("unchecked") final List polygons = PolygonExtracter.getPolygons(g); - polygons.forEach(p -> polyPartitions.addChild(toPShape(PolygonDecomposition.decompose(p)))); + polygons.forEach(p -> polyPartitions.addChild(toPShape(KeilSnoeyinkConvexPartitioner.convexPartition(p)))); if (polyPartitions.getChildCount() == 1) { return polyPartitions.getChild(0); @@ -1128,7 +1249,7 @@ public static PShape convexPartition(PShape shape) { * Randomly partitions a shape into N approximately equal-area polygonal cells. * * @param shape a polygonal (non-group, no holes) shape to partition - * @param parts number of roughly equal area partitons to create + * @param parts number of roughly equal area partitions to create * @return a GROUP PShape, whose child shapes are partitions of the original * @since 1.3.0 */ @@ -1141,7 +1262,7 @@ public static PShape equalPartition(final PShape shape, final int parts) { * equal-area polygonal cells. 
* * @param shape a polygonal (non-group, no holes) shape to partition - * @param parts number of roughly equal area partitons to create + * @param parts number of roughly equal area partitions to create * @param seed number used to initialize the underlying pseudorandom number * generator * @return a GROUP PShape, whose child shapes are partitions of the original @@ -1660,4 +1781,8 @@ private static double floatMod(double x, double y) { return (x - Math.floor(x / y) * y); } + private static boolean isWhole(double v) { + return Double.isFinite(v) && Math.abs(v - Math.rint(v)) < 1e-12; + } + } diff --git a/src/main/java/micycle/pgs/PGS_SegmentSet.java b/src/main/java/micycle/pgs/PGS_SegmentSet.java index fb3059f1..821e9dcb 100644 --- a/src/main/java/micycle/pgs/PGS_SegmentSet.java +++ b/src/main/java/micycle/pgs/PGS_SegmentSet.java @@ -14,6 +14,7 @@ import org.jgrapht.alg.matching.blossom.v5.KolmogorovWeightedMatching; import org.jgrapht.alg.matching.blossom.v5.KolmogorovWeightedPerfectMatching; import org.jgrapht.alg.matching.blossom.v5.ObjectiveSense; +import org.locationtech.jts.algorithm.Orientation; import org.locationtech.jts.algorithm.RobustLineIntersector; import org.locationtech.jts.algorithm.locate.IndexedPointInAreaLocator; import org.locationtech.jts.dissolve.LineDissolver; @@ -31,6 +32,8 @@ import org.locationtech.jts.noding.SegmentStringUtil; import org.tinfour.common.IIncrementalTin; +import com.github.micycle1.geoblitz.IndexedLengthIndexedLine; + import micycle.pgs.color.Colors; import micycle.pgs.commons.FastAtan2; import micycle.pgs.commons.Nullable; @@ -350,6 +353,406 @@ public static List parallelSegments(double centerX, double centerY, doubl return edges; } + /** + * Function that supplies the perpendicular segment length at each sampled + * position along a shape component (optionally varying by location, phase, or + * normal angle). + */ + @FunctionalInterface + public interface SegmentLengthFn { + /** + * @param x sampled boundary point x + * @param y sampled boundary point y + * @param posFrac fractional position along the current component in [0,1) + * (includes {@code startOffset} phase) + * @param angleRad angle of the (outward) unit normal in radians at the sample. + * (Tangent angle is {@code angleRad - PI/2}). + * @return desired segment length L. If {@code <= 0}, the segment is skipped. + */ + double length(double x, double y, double posFrac, double angleRad); + } + + /** + * Extracts perpendicular segments along each linear component of {@code shape}, + * with each segment centered on the path/outline. + * + *

+ * This method samples positions along each linear component (each path or + * polygon boundary) at approximately {@code interSegmentDistance} spacing. At + * every sampled position it builds a length {@code L} segment that is + * perpendicular to the local direction and centered on the sampled point (i.e. + * it extends {@code L/2} to each side). + * + *
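+ * <p>
+ * For example, tick marks along an outline (assumes a sketch-scope
+ * {@code shape}, and this class' {@code toPShape()} PEdge-to-LINES conversion):
+ * <pre>{@code
+ * List<PEdge> ticks = PGS_SegmentSet.perpendicularPathSegments(shape, 15, 20, 0);
+ * shape(PGS_SegmentSet.toPShape(ticks));
+ * }</pre>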

+ * {@code startOffset} is wrapped into {@code [0,1)} and acts like a phase along + * each component. For closed boundaries, increasing {@code startOffset} + * advances sampling counterclockwise and decreasing it advances clockwise. For + * open paths, it advances forward/backward along the path direction. + * + * @param shape the input {@link PShape} containing rings or line + * strings + * @param interSegmentDistance spacing between successive segments along each + * component (arc-length units) + * @param L length of each perpendicular segment (must be + * > 0) + * @param startOffset fractional phase along each component (0..1); + * values outside this range are wrapped + * @return a list of {@link PEdge} segments from every linear component; empty + * if none produce segments + * @since 2.2 + */ + public static List perpendicularPathSegments(PShape shape, double interSegmentDistance, double L, double startOffset) { + return perpendicularPathSegments(shape, interSegmentDistance, (x, y, t, a) -> L, startOffset); + } + + /** + * Extracts perpendicular segments along each linear component of {@code shape}, + * with each segment centered on the path/outline, using a user-supplied + * function to vary segment length. + * + *

+ * This method samples positions along each linear component (each path or + * polygon boundary) at approximately {@code interSegmentDistance} spacing. At + * every sampled position it builds a length {@code L} segment that is + * perpendicular to the local direction and centered on the sampled point (i.e. + * it extends {@code L/2} to each side). + * + *
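+ * <p>
+ * Illustrative sketch using a lambda to vary length along the outline (assumes
+ * a sketch-scope {@code shape}, and {@code toPShape()} for display):
+ * <pre>{@code
+ * List<PEdge> ticks = PGS_SegmentSet.perpendicularPathSegments(shape, 10,
+ *         (x, y, posFrac, angle) -> 5 + 25 * posFrac, // segments grow with position
+ *         0);
+ * shape(PGS_SegmentSet.toPShape(ticks));
+ * }</pre>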

+ * {@code startOffset} is wrapped into {@code [0,1)} and acts like a phase along + * each component. For closed boundaries, increasing {@code startOffset} + * advances sampling counterclockwise and decreasing it advances clockwise. For + * open paths, it advances forward/backward along the path direction. + * + * @param shape the input {@link PShape} containing rings or line + * strings + * @param interSegmentDistance spacing between successive segments along each + * component (arc-length units) + * @param lengthFn function that returns a per-sample segment length + * @param startOffset fractional phase along each component (0..1); + * values outside this range are wrapped + * @return a list of {@link PEdge} segments; empty if none produce segments + * @since 2.2 + */ + public static List perpendicularPathSegments(PShape shape, double interSegmentDistance, SegmentLengthFn lengthFn, double startOffset) { + List edges = new ArrayList<>(); + if (interSegmentDistance <= 0 || lengthFn == null) { + return edges; + } + + final double startNorm = ((startOffset % 1.0) + 1.0) % 1.0; + + PGS.applyToLinealGeometries(shape, line -> { + + final boolean closed = line.isClosed(); + + // Only meaningful for closed components; keeps "outward" consistent. + if (closed && Orientation.isCCW(line.getCoordinates())) { + line = line.reverse(); + } + + final IndexedLengthIndexedLine l = new IndexedLengthIndexedLine(line); + final double end = l.getEndIndex(); + if (end <= 0) { + return line; + } + + final int count = Math.max(1, (int) Math.round(end / interSegmentDistance)); + final double increment = 1.0 / count; + + // ε chosen automatically; inline clamp + final double eps = Math.max(end * 1e-4, Math.min(interSegmentDistance * 0.35, end * 0.01)); + + for (int i = 0; i < count; i++) { + final double posFrac = (startNorm + i * increment) % 1.0; + final double idx = posFrac * end; + + final double idxM = closed ? wrapIndex(idx - eps, end) : Math.max(0.0, Math.min(end, idx - eps)); + + final double idxP = closed ? wrapIndex(idx + eps, end) : Math.max(0.0, Math.min(end, idx + eps)); + + final Coordinate pm = l.extractPoint(idxM); + final Coordinate pp = l.extractPoint(idxP); + final Coordinate pc = l.extractPoint(idx); + + final double tx = pp.x - pm.x; + final double ty = pp.y - pm.y; + final double tlen = Math.sqrt(tx * tx + ty * ty); + if (tlen == 0) { + continue; + } + + // Left-hand unit normal (-ty, tx). For normalized CW rings, this points + // outward. + final double nx = -ty / tlen; + final double ny = tx / tlen; + final double normalAngle = FastAtan2.atan2(ny, nx); + + final double L = lengthFn.length(pc.x, pc.y, posFrac, normalAngle); + if (!(L > 0)) { + continue; + } + + final double half = L * 0.5; + + PVector a = new PVector((float) (pc.x - nx * half), (float) (pc.y - ny * half)); + PVector b = new PVector((float) (pc.x + nx * half), (float) (pc.y + ny * half)); + edges.add(new PEdge(a, b)); + } + + return line; + }); + + return edges; + } + + /** + * Creates a fabric-like layout of horizontal and vertical segments on a regular + * cell grid, controlled by the A (horizontal run), B (vertical run), and C (row + * shift) weave parameters. + * + *

+ * The pattern is built on a rectangular grid of cells (size {@code cellSize}). + * Each cell is assigned one of two states (“horizontal on top” or “vertical on + * top”) so the grid reads like a simple over/under weaving diagram. Contiguous + * same-state cells in a row become one horizontal segment; contiguous + * same-state cells in a column become one vertical segment. + *

+ * + *
    + *
+ * • A - how many cells in a row the horizontal strand stays on top.
+ *   Increasing A lengthens horizontal runs (longer horizontal elements).
+ * • B - how many cells in a row the vertical strand stays on top.
+ *   Increasing B lengthens vertical runs (longer vertical elements).
+ * • C - how far each successive row is shifted (a phase offset). Changing C
+ *   slides the pattern row-by-row and can change where segments meet or form
+ *   longer/shorter junctions. (C is applied modulo A + B.)
+ * + *

+ * Many traditional weaves are expressible with A–B–C (e.g., plain weave 1–1–1, + * twill 2–2–1). Patterns where the shift and period are “coprime” (gcd(A+B, + * C)=1) tend to produce a single connected repeating motif (“hang together”); + * if not, the repeat unit can be larger or the motif can repeat in bands. + *
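+ * <p>
+ * For example, a 2-2-1 twill over a 500x500 region with 20-unit cells
+ * (display via this class' {@code toPShape()} conversion is an assumption):
+ * <pre>{@code
+ * List<PEdge> twill = PGS_SegmentSet.weaveSegments(500, 500, 20, 2, 2, 1);
+ * shape(PGS_SegmentSet.toPShape(twill));
+ * }</pre>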

+ * + * @param width domain width + * @param height domain height + * @param cellSize size of a grid cell (world units) + * @param A consecutive weft-visible (horizontal on-top) cells per + * period; controls horizontal run length (A >= 1) + * @param B consecutive warp-visible (vertical on-top) cells per period; + * controls vertical run length (B >= 1) + * @param C row-to-row offset (phase shift) in cells (C >= 0) + * @return list of {@link PEdge} segments representing the weaves + * @throws IllegalArgumentException if cellSize <= 0, A <= 0, or B <= 0 + * @since 2.2 + */ + public static List weaveSegments(final double width, final double height, final double cellSize, final int A, final int B, final int C) { + return weaveSegments(width, height, cellSize, A, B, C, false, 1, true); + } + + /** + * Creates a fabric-like layout of horizontal and vertical segments on a regular + * cell grid, controlled by the A (horizontal run), B (vertical run), and C (row + * shift) weave parameters. + * + *

+ * The pattern is built on a rectangular grid of cells (size {@code cellSize}). + * Each cell is assigned one of two states (“horizontal on top” or “vertical on + * top”) so the grid reads like a simple over/under weaving diagram. Contiguous + * same-state cells in a row become one horizontal segment; contiguous + * same-state cells in a column become one vertical segment. + *

+ * + *
    + *
+ * • A - how many cells in a row the horizontal strand stays on top.
+ *   Increasing A lengthens horizontal runs (longer horizontal elements).
+ * • B - how many cells in a row the vertical strand stays on top.
+ *   Increasing B lengthens vertical runs (longer vertical elements).
+ * • C - how far each successive row is shifted (a phase offset). Changing C
+ *   slides the pattern row-by-row and can change where segments meet or form
+ *   longer/shorter junctions. (C is applied modulo A + B.)
+ * + *

+ * Many traditional weaves are expressible with A–B–C (e.g., plain weave 1–1–1, + * twill 2–2–1). Patterns where the shift and period are “coprime” (gcd(A+B, + * C)=1) tend to produce a single connected repeating motif (“hang together”); + * if not, the repeat unit can be larger or the motif can repeat in bands. + *

+ * + * + *

+ * <b>Endpoint placement and edge behavior</b>

+ *

+ * The {@code cellFraction} value controls how segment endpoints sit inside the + * terminal cell of each run: 0.5 places endpoints at cell centers (short + * segments), values closer to 1 move endpoints toward cell edges (longer + * segments). If {@code extendSingletonsToEdge} is true, single-cell runs that + * touch the outer domain boundary are extended to the grid edge so they are not + * rendered as tiny isolated dots at the border. + *

+ * + * @param width domain width + * @param height domain height + * @param cellSize size of a grid cell (world units) + * @param A consecutive weft-visible (horizontal on-top) + * cells per period; controls horizontal run + * length (A >= 1) + * @param B consecutive warp-visible (vertical on-top) + * cells per period; controls vertical run length + * (B >= 1) + * @param C row-to-row offset (phase shift) in cells (C >= + * 0) + * @param swapColors if true, swap roles of weft/warp (horizontal ↔ + * vertical) + * @param cellFraction endpoint position inside the run end cells + * (0.5..1.0 typical) + * @param extendSingletonsToEdge if true, extend single-cell runs on the domain + * boundary to the edge + * @return list of {@link PEdge} segments representing the weaves + * @since 2.2 + */ + private static List weaveSegments(final double width, final double height, final double cellSize, final int A, final int B, final int C, + final boolean swapColors, final double cellFraction, final boolean extendSingletonsToEdge) { + /* + * Implements 'ABC-Auxetics: An Implicit Design Approach for Negative Poisson’s + * Ratio Materials' + */ + if (cellSize <= 0) { + throw new IllegalArgumentException("cellSize must be > 0"); + } + if (A <= 0 || B <= 0) { + throw new IllegalArgumentException("A and B must be > 0"); + } + if (!(cellFraction > 0.0)) { + throw new IllegalArgumentException("cellFraction must be > 0"); + } + + final int P = A + B; + + final double f = Math.max(0.0, Math.min(1.0, cellFraction)); + + // Fit an integer grid inside the domain; center it. + final int cols = Math.max(1, (int) Math.floor(width / cellSize)); + final int rows = Math.max(1, (int) Math.floor(height / cellSize)); + final double gridW = cols * cellSize; + final double gridH = rows * cellSize; + final double dx = (width - gridW) * 0.5; + final double dy = (height - gridH) * 0.5; + + // World-space bounds of the actual grid (may be inset if dx/dy != 0) + final double gridLeft = dx; + final double gridRight = dx + gridW; + final double gridBottom = dy; + final double gridTop = dy + gridH; + + // Build 2-color matrix: true = weft (horizontal), false = warp (vertical) + final boolean[][] weft = new boolean[rows][cols]; + for (int r = 0; r < rows; r++) { + final int shift = Math.floorMod(r * C, P); + for (int c = 0; c < cols; c++) { + final int idx = Math.floorMod(c + shift, P); + boolean isWeft = idx < A; + if (swapColors) { + isWeft = !isWeft; + } + weft[r][c] = isWeft; + } + } + + final List segs = new ArrayList<>(); + + // Horizontal segments from contiguous WEFT runs per row + for (int r = 0; r < rows; r++) { + int c = 0; + while (c < cols) { + if (!weft[r][c]) { + c++; + continue; + } + + final int start = c; + while (c + 1 < cols && weft[r][c + 1]) { + c++; + } + final int end = c; + + final int runLen = end - start + 1; + + final double y = dy + (r + 0.5) * cellSize; + + // Endpoints controlled by cellFraction: + // left endpoint is inside first cell at (1-f), right endpoint inside last cell + // at f. + double x0 = dx + (start + (1.0 - f)) * cellSize; + double x1 = dx + (end + f) * cellSize; + + // Special case: single-cell run on boundary becomes half-cell from center to + // boundary. 
+ if (extendSingletonsToEdge && runLen == 1) { + final double cx = dx + (start + 0.5) * cellSize; + if (start == 0) { + x0 = gridLeft; + x1 = cx; + } else if (end == cols - 1) { + x0 = cx; + x1 = gridRight; + } + } + + // Clamp to grid bounds (keeps segments from bleeding into margins due to f) + x0 = Math.max(gridLeft, Math.min(gridRight, x0)); + x1 = Math.max(gridLeft, Math.min(gridRight, x1)); + + segs.add(new PEdge(x0, y, x1, y)); + c++; + } + } + + // Vertical segments from contiguous WARP runs per column + for (int c = 0; c < cols; c++) { + int r = 0; + while (r < rows) { + if (weft[r][c]) { + r++; + continue; + } // warp = !weft + + final int start = r; + while (r + 1 < rows && !weft[r + 1][c]) { + r++; + } + final int end = r; + + final int runLen = end - start + 1; + + final double x = dx + (c + 0.5) * cellSize; + + double y0 = dy + (start + (1.0 - f)) * cellSize; + double y1 = dy + (end + f) * cellSize; + + if (extendSingletonsToEdge && runLen == 1) { + final double cy = dy + (start + 0.5) * cellSize; + if (start == 0) { + y0 = gridBottom; + y1 = cy; + } else if (end == rows - 1) { + y0 = cy; + y1 = gridTop; + } + } + + y0 = Math.max(gridBottom, Math.min(gridTop, y0)); + y1 = Math.max(gridBottom, Math.min(gridTop, y1)); + + segs.add(new PEdge(x, y0, x, y1)); + r++; + } + } + + return segs; + } + /** * Converts a collection of {@link micycle.pgs.commons.PEdge PEdges} into a * LINES shape. @@ -413,6 +816,23 @@ public static PShape dissolve(Collection segments) { return PGS_Conversion.toPShape(dissolved); } + /** + * + * Computes all intersection points among the supplied edges. + *
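+ * <p>
+ * Illustrative usage (assumes a sketch-scope {@code List<PEdge> edges}):
+ * <pre>{@code
+ * List<PVector> crossings = PGS_SegmentSet.intersections(edges);
+ * crossings.forEach(p -> circle(p.x, p.y, 5)); // mark each crossing
+ * }</pre>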

+ * Each PEdge in {@code edges} is treated as a line segment and intersections + * are computed pairwise between segment interiors. Endpoint-endpoint "touches" + * are not included. + * + * @param edges collection of PEdge objects to test for intersections + * @return a List containing intersection points; empty if none are + * found + * @since 2.2 + */ + public static List intersections(Collection edges) { + return PGS_Processing.intersections(fromPEdges(edges), false); + } + /** * Computes all intersection points between two collections of line segments. *

@@ -658,4 +1078,12 @@ private static boolean ccw(Coordinate A, Coordinate B, Coordinate C) { private static boolean intersect(Coordinate A, Coordinate B, Coordinate C, Coordinate D) { return ccw(A, C, D) != ccw(B, C, D) && ccw(A, B, C) != ccw(A, B, D); } + + private static double wrapIndex(double idx, double end) { + double r = idx % end; + if (r < 0) { + r += end; + } + return r; + } } diff --git a/src/main/java/micycle/pgs/PGS_ShapeBoolean.java b/src/main/java/micycle/pgs/PGS_ShapeBoolean.java index f0119799..1b196c46 100644 --- a/src/main/java/micycle/pgs/PGS_ShapeBoolean.java +++ b/src/main/java/micycle/pgs/PGS_ShapeBoolean.java @@ -5,20 +5,16 @@ import java.util.Arrays; import java.util.Collection; -import java.util.HashMap; import java.util.List; -import java.util.Map; import java.util.Objects; import java.util.stream.Collectors; import org.locationtech.jts.geom.Envelope; import org.locationtech.jts.geom.Geometry; import org.locationtech.jts.geom.Polygon; -import org.locationtech.jts.geom.PrecisionModel; import org.locationtech.jts.geom.prep.PreparedGeometry; import org.locationtech.jts.geom.prep.PreparedGeometryFactory; import org.locationtech.jts.geom.util.GeometryFixer; import org.locationtech.jts.geom.util.LinearComponentExtracter; -import org.locationtech.jts.noding.NodedSegmentString; import org.locationtech.jts.noding.Noder; import org.locationtech.jts.noding.SegmentString; import org.locationtech.jts.noding.SegmentStringDissolver; @@ -33,7 +29,6 @@ import micycle.pgs.commons.FastOverlapRegions; import micycle.pgs.commons.Nullable; -import micycle.pgs.commons.PEdge; import processing.core.PConstants; import processing.core.PShape; import processing.core.PVector; @@ -74,6 +69,29 @@ public static PShape intersect(final PShape a, final PShape b) { return toPShape(result); } + /** + * Calculates the intersection of all provided shapes, producing a new shape + * representing the area shared by every input. + *
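+ * <p>
+ * For example (assumes three overlapping sketch-scope shapes {@code a},
+ * {@code b} and {@code c}):
+ * <pre>{@code
+ * PShape common = PGS_ShapeBoolean.intersect(a, b, c); // area shared by all three
+ * shape(common);
+ * }</pre>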

+ * This is equivalent to {@code shapes[0] ∩ shapes[1] ∩ ...}. + * + * @param shapes the shapes to intersect (must contain at least 2 shapes) + * @return a new shape representing the intersection of all inputs; retains the + * style of {@code shapes[0]} + * @throws IllegalArgumentException if fewer than 2 shapes are provided + * @since 2.2 + */ + public static PShape intersect(final PShape... shapes) { + if (shapes == null || shapes.length < 2) { + throw new IllegalArgumentException("intersect requires at least 2 shapes"); + } + PShape out = shapes[0]; + for (int i = 1; i < shapes.length; i++) { + out = intersect(out, shapes[i]); + } + return out; + } + /** * Performs an intersection operation between a mesh-like shape (a polygonal * coverage) and a polygonal area, while preserving the individual features of @@ -195,7 +213,7 @@ public static PShape union(PShape... shapes) { *

* * @param a The first input geometry as a {@link PShape}. - * @param b b The second input geometry as a {@link PShape}, or {@code null} to + * @param b The second input geometry as a {@link PShape}, or {@code null} to * use only {@code a}'s linework. * @return A new {@link PShape} representing the polygonal faces created by the * union of the input geometries' linework. Returns {@code null} if the @@ -209,7 +227,7 @@ public static PShape unionLines(PShape a, @Nullable PShape b) { var lB = LinearComponentExtracter.getGeometry(bG); Polygonizer polygonizer = new Polygonizer(false); - polygonizer.add(OverlayNG.overlay(lA, lB, OverlayOp.UNION, new PrecisionModel(-1e-3))); + polygonizer.add(OverlayNG.overlay(lA, lB, OverlayOp.UNION, PGS.PM)); return toPShape(polygonizer.getGeometry()); } @@ -259,7 +277,7 @@ public static PShape unionLines(Collection shapes) { d.dissolve(totalSegs); var dissolvedSegs = d.getDissolved(); - Noder noder = new SnapRoundingNoder(new PrecisionModel(-5e-3)); + Noder noder = new SnapRoundingNoder(PGS.PM); noder.computeNodes(dissolvedSegs); var nodedSegs = noder.getNodedSubstrings(); @@ -314,58 +332,6 @@ private static PShape unionMeshWithHoles(final PShape mesh) { } } - /** - * Unifies a collection of mesh shapes without handling holes, providing a more - * faster approach than {@link #unionMesh(PShape)} if the input is known to have - * no holes. - *

- * This method calculates the set of unique edges belonging to the mesh, which - * is equivalent to the boundary, assuming a mesh without holes. It then - * determines a sequential/winding order for the vertices of the boundary. - *

- * Note: This method does not account for meshes with holes. - * - * @param mesh A collection of shapes representing a mesh. - * @return A new PShape representing the union of the mesh shapes. - * @deprecated This method is deprecated due to the lack of support for meshes - * with holes. - */ - @Deprecated - public static PShape unionMeshWithoutHoles(final Collection mesh) { - Map edges = new HashMap<>(); - - final List allEdges; - - /* - * Compute set of unique edges belonging to the mesh (this set is equivalent to - * the boundary, assuming a holeless mesh). - */ - for (PShape child : mesh) { - for (int i = 0; i < child.getVertexCount(); i++) { - final PVector a = child.getVertex(i); - final PVector b = child.getVertex((i + 1) % child.getVertexCount()); - if (!a.equals(b)) { - PEdge edge = new PEdge(a, b); - edges.merge(edge, 1, Integer::sum); - } - } - } - - allEdges = edges.entrySet().stream().filter(e -> e.getValue() == 1).map(e -> e.getKey()).collect(Collectors.toList()); - - /* - * Now find a sequential/winding order for the vertices of the boundary. The - * vertices output fromEdges() is not closed, so close it afterwards (assumes - * the input to unionMesh() was indeed closed and valid). - */ - final List orderedVertices = PGS.fromEdges(allEdges); - if (!orderedVertices.get(0).equals(orderedVertices.get(orderedVertices.size() - 1))) { - orderedVertices.add(orderedVertices.get(0)); // close vertex list for fromPVector() - } - - return PGS_Conversion.fromPVector(orderedVertices); - } - /** * Finds all regions covered by at least two input shapes. *

    @@ -415,6 +381,26 @@ public static PShape subtract(final PShape a, final PShape b) { return toPShape(result); } + /** + * Subtracts multiple shapes from a base shape and returns the resulting shape. + * This is equivalent to subtracting the union of {@code shapes} from {@code a}: + * + *
+	 * <pre>{@code
+	 * subtract(a, s1, s2, s3) == subtract(a, union(s1, s2, s3))
+	 * }</pre>
    + * + * @param base the {@code PShape} from which all subsequent shapes will be + * subtracted + * @param shapes zero or more {@code PShape}s to subtract from {@code a} + * @return a new {@code PShape} representing {@code a \ (s1 ∪ s2 ∪ ...)}; the + * returned shape has the style of {@code a} + * @since 2.2 + * @see #subtract(PShape, PShape) + */ + public static PShape subtract(final PShape base, PShape... shapes) { + return subtract(base, union(Arrays.asList(shapes))); + } + /** * Subtracts holes from the shell, without geometric * processing. diff --git a/src/main/java/micycle/pgs/PGS_ShapePredicates.java b/src/main/java/micycle/pgs/PGS_ShapePredicates.java index 12cd5529..75b76bf4 100644 --- a/src/main/java/micycle/pgs/PGS_ShapePredicates.java +++ b/src/main/java/micycle/pgs/PGS_ShapePredicates.java @@ -35,16 +35,24 @@ import micycle.pgs.commons.EllipticFourierDesc; import micycle.pgs.commons.GeometricMedian; -import micycle.trapmap.TrapMap; import processing.core.PConstants; import processing.core.PShape; import processing.core.PVector; /** - * Various shape metrics, predicates and descriptors. - * - * @author Michael Carleton + * Shape analysis utilities: metrics, predicates, and descriptive measurements + * for {@link PShape}s. + * + *

    + * This class provides read-only queries over geometry, including spatial + * relationships (containment, intersection, distance), scalar measurements + * (area, perimeter/length, diameter, width/height), and higher-level + * descriptors (circularity, elongation, convexity, similarity). It also + * includes validity and equality predicates commonly used to sanity-check + * shapes before downstream operations such as booleans, buffering, meshing, or + * tiling. * + * @author Michael Carleton */ public final class PGS_ShapePredicates { @@ -80,7 +88,7 @@ public static boolean containsPoint(PShape shape, PVector point) { /** * Determines whether a shape contains every point from a list of points. It is - * faster to use method rather than than calling + * faster to use this method rather than calling * {@link #containsPoint(PShape, PVector) containsPoint()} repeatedly. Any * points that lie on the boundary of the shape are considered to be contained. * @@ -112,7 +120,7 @@ public static boolean containsAllPoints(PShape shape, Collection points */ public static List containsPoints(PShape shape, Collection points) { final PointOnGeometryLocator pointLocator = new YStripesPointInAreaLocator(fromPShape(shape)); - ArrayList bools = new ArrayList<>(points.size()); + List bools = new ArrayList<>(points.size()); for (PVector p : points) { bools.add(pointLocator.locate(new Coordinate(p.x, p.y)) != Location.EXTERIOR); } @@ -146,9 +154,6 @@ public static List findContainedPoints(PShape shape, Collection - * This method locates the containing shape in log(n) time (after some - * pre-processing overhead). * * @param groupShape a GROUP shape * @param point the query point @@ -157,30 +162,7 @@ public static List findContainedPoints(PShape shape, Collection * Note: If two Polygons have matching vertices, but one is arranged clockwise - * while the other is counter-clockwise, then then this method will return - * false. + * while the other is counter-clockwise, then this method will return false. 
* * @param a shape a * @param b shape b diff --git a/src/main/java/micycle/pgs/PGS_Tiling.java b/src/main/java/micycle/pgs/PGS_Tiling.java index ea8e3807..38205791 100644 --- a/src/main/java/micycle/pgs/PGS_Tiling.java +++ b/src/main/java/micycle/pgs/PGS_Tiling.java @@ -1,10 +1,12 @@ package micycle.pgs; +import static micycle.pgs.PGS_Conversion.toPShape; import static micycle.pgs.PGS.GEOM_FACTORY; import java.util.ArrayList; import java.util.Collections; import java.util.List; +import java.util.Random; import java.util.SplittableRandom; import org.locationtech.jts.geom.Coordinate; @@ -16,11 +18,14 @@ import org.locationtech.jts.operation.union.UnaryUnionOp; import micycle.pgs.color.Colors; +import micycle.pgs.commons.AztecDiamond; import micycle.pgs.commons.DoyleSpiral; import micycle.pgs.commons.HatchTiling; import micycle.pgs.commons.PEdge; import micycle.pgs.commons.PenroseTiling; import micycle.pgs.commons.RectangularSubdivision; +import micycle.pgs.commons.SoftCells; +import micycle.pgs.commons.SoftCells.TangentMode; import micycle.pgs.commons.SquareTriangleTiling; import micycle.pgs.commons.TriangleSubdivision; import processing.core.PConstants; @@ -60,7 +65,7 @@ private PGS_Tiling() { * @param height height of the quad subdivision plane * @param maxDepth maximum number of subdivisions (recursion depth) * @return a GROUP PShape, where each child shape is a face of the subdivision - * @see #rectSubdivision(double, double, int, long) seeded rectSubdivsion() + * @see #rectSubdivision(double, double, int, long) seeded rectSubdivision() */ public static PShape rectSubdivision(final double width, final double height, final int maxDepth) { return rectSubdivision(width, height, maxDepth, System.nanoTime()); @@ -74,7 +79,7 @@ public static PShape rectSubdivision(final double width, final double height, fi * @param maxDepth maximum number of subdivisions (recursion depth) * @param seed the random seed * @return a GROUP PShape, where each child shape is a face of the subdivision - * @see #rectSubdivision(double, double, int) non-seeded rectSubdivsion() + * @see #rectSubdivision(double, double, int) non-seeded rectSubdivision() */ public static PShape rectSubdivision(final double width, final double height, int maxDepth, final long seed) { maxDepth++; // so that given depth==0 returns non-divided square @@ -93,7 +98,7 @@ public static PShape rectSubdivision(final double width, final double height, in * @param maxDepth maximum number of subdivisions (recursion depth) * @return a GROUP PShape, where each child shape is a face of the subdivision * @see #triangleSubdivision(double, double, int, long) seeded - * triangleSubdivsion() + * triangleSubdivision() */ public static PShape triangleSubdivision(final double width, final double height, final int maxDepth) { return triangleSubdivision(width, height, maxDepth, System.nanoTime()); @@ -163,6 +168,56 @@ public static PShape quadSubdivision(final double width, final double height, fi return divisions; } + /** + * Divides the plane into a simple axis-aligned grid using square cells. + *

+ * Grid lines are placed every {@code cellSize} units in X and Y. If + * {@code width} or {@code height} is not an exact multiple of {@code cellSize}, + * the final column/row is a narrower “remainder” band of cells. + *
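+ * A rough usage sketch (assumes a Processing sketch with micycle.pgs.*
+ * imported; values are illustrative):
+ * {@code
+ * // 500x300 plane with 40px cells; the right/bottom edges get 20px remainder bands
+ * PShape grid = PGS_Tiling.squareGrid(500, 300, 40);
+ * shape(grid);
+ * }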

    + * + * @param width the width of the plane + * @param height the height of the plane + * @param cellSize the desired square cell size (must be > 0) + * @return a GROUP PShape containing the grid cells + * @since 2.2 + */ + public static PShape squareGrid(final double width, final double height, final double cellSize) { + if (cellSize <= 0) { + throw new IllegalArgumentException("cellSize must be > 0"); + } + + final List cuts = new ArrayList<>(); + final double x = 0, y = 0; + + // boundary + final PVector A = new PVector((float) x, (float) y); + final PVector B = new PVector((float) (x + width), (float) y); + final PVector C = new PVector((float) (x + width), (float) (y + height)); + final PVector D = new PVector((float) x, (float) (y + height)); + + cuts.add(new PEdge(A, B)); + cuts.add(new PEdge(B, C)); + cuts.add(new PEdge(C, D)); + cuts.add(new PEdge(D, A)); + + // vertical grid lines + for (double xx = x + cellSize; xx < x + width; xx += cellSize) { + final PVector p1 = new PVector((float) xx, (float) y); + final PVector p2 = new PVector((float) xx, (float) (y + height)); + cuts.add(new PEdge(p1, p2)); + } + + // horizontal grid lines + for (double yy = y + cellSize; yy < y + height; yy += cellSize) { + final PVector p1 = new PVector((float) x, (float) yy); + final PVector p2 = new PVector((float) (x + width), (float) yy); + cuts.add(new PEdge(p1, p2)); + } + + return PGS.polygonizeEdges(cuts); + } + /** * Randomly subdivides the plane into equal-width strips having varying lengths. * @@ -184,11 +239,14 @@ public static PShape hatchSubdivision(final double width, final double height, f /** * Divides the plane into randomly “sliced” polygonal regions. *

    - * {@code slices} random cuts are generated across the plane (dimensions w×h, at - * (0,0)). Each cut connects a random point on one side of the plane to a random - * point on another side. If {@code forceOpposite} is true, each cut always - * connects opposite sides; otherwise the two sides are chosen at random (but - * never the same side). + * {@code slices} is the number of random interior cuts (line segments) + * to add across the plane (i.e., the number of cuts, not the + * number of resulting regions/pieces). These {@code slices} cuts are generated + * over a w×h rectangle at (0,0); each cut connects a random point on one side + * of the rectangle to a random point on another side. If {@code forceOpposite} + * is true, each cut always connects opposite sides; otherwise the two sides are + * chosen at random (but never the same side). The final number of polygonal + * regions depends on how the cuts intersect and partition the rectangle. *

    *

    * In practice: @@ -396,16 +454,17 @@ public static PShape hexTiling(final double width, final double height, final do public static PShape islamicTiling(final double width, final double height, final double w, final double h) { // adapted from https://openprocessing.org/sketch/320133 final double[] vector = { -w, 0, w, -h, w, 0, -w, h }; - final ArrayList segments = new ArrayList<>(); + var s = PGS.prepareLinesPShape(null, null, null); for (int x = 0; x < width; x += w * 2) { for (int y = 0; y < height; y += h * 2) { for (int i = 0; i <= vector.length; i++) { - segments.add(new PVector((float) (vector[i % vector.length] + x + w), (float) (vector[(i + 6) % vector.length] + y + h))); - segments.add(new PVector((float) (vector[(i + 1) % vector.length] + x + w), (float) (vector[(i + 1 + 6) % vector.length] + y + h))); + s.vertex((float) (vector[i % vector.length] + x + w), (float) (vector[(i + 6) % vector.length] + y + h)); + s.vertex((float) (vector[(i + 1) % vector.length] + x + w), (float) (vector[(i + 1 + 6) % vector.length] + y + h)); } } } - return PGS_Processing.polygonizeLines(segments); + s.endShape(); + return PGS_Processing.polygonize(s); } /** @@ -452,6 +511,80 @@ public static PShape squareTriangleTiling(final double width, final double heigh return stt.getTiling(seed); } + /** + * Builds a tiling of interlocking cells that form an auxetic structure. + * + *

+ * An auxetic structure tends to widen when stretched (it can exhibit a + * negative Poisson’s ratio). This method builds a fabric/weave-like + * layout of horizontal and vertical segments, then runs a Voronoi-like + * construction on those line segments. The resulting cell boundaries are made + * of straight and gently curved pieces, producing an interlocking tiling (one + * that would have auxetic properties if realised physically). + *

    + * + *

    The A–B–C parameters

    + *

+ * The A-B-C parameters control the underlying segment generation. Each row + * follows A cells with the horizontal (weft) thread on top, then B cells with + * the vertical (warp) thread on top. Each successive row is shifted right by + * C cells, wrapping around modulo the period {@code A + B}; see the usage + * sketch after the feature list below. + *

    + *

    What features appear, and when

    + *

    + * The Voronoi construction produces a small set of recurring unit-cell types. + * Below is a short, non-technical guide to what those features look like and + * the simple conditions that cause them to appear (using C' = C mod (A+B)): + *

    + * + *
      + *
    • Quad - four-armed vertex (a small “X-like” quad). Appears when the + * row-shift aligns with a run boundary: typically when {@code A == C'} or + * {@code B == C'}.
+ *
    • Tri-adjacent - the most common cell: two curved (parabolic) arcs + * meeting at a vertex plus a short straight edge. This cell shows up in nearly + * every weave except when the shift exactly matches a run length: it is absent + * if {@code A == C'} or {@code B == C'}.
+ *
    • Tri-across - a rarer symmetric three-armed cell formed by two + * mirrored parabolas and a straight ray across the vertex. It typically + * requires both runs to be at least length 2 and the shift to fall inside the + * interior of the repeat: occurs when {@code A > 1}, {@code B > 1}, and + * {@code 1 < C' < A + B - 2}.
+ *
    • Straight - long straight edges (horizontal or vertical) with a + * relatively small offset. These happen when the shift produces a significant + * mismatch with a run length, e.g. when {@code |C' - A| > 1} (horizontal + * straight) or {@code |C' - B| > 1} (vertical straight). Note that straight + * elements by themselves are not auxetic; changing how many straight elements + * occur can change the mechanical character of the cell but does not trivially + * predict Poisson’s ratio.
+ *
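+ * A rough usage sketch (parameter values are illustrative; assumes a Processing
+ * sketch with micycle.pgs.* imported):
+ * {@code
+ * // a 2-over / 1-under weave, shifted right by 1 cell per row, on a 25px grid
+ * PShape cells = PGS_Tiling.auxeticTiling(600, 600, 25, 2, 1, 1);
+ * shape(cells);
+ * }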
    + * + * @param width domain width + * @param height domain height + * @param cellSize size of a grid cell (world units), must be > 0 + * @param A number of consecutive cells where the horizontal + * (weft) thread is on top (A >= 1) + * @param B number of consecutive cells where the vertical (warp) + * thread is on top (B >= 1) + * @param C per-row horizontal shift (in cells), applied modulo + * {@code A + B} + * @return a {@link PShape} containing the Voronoi-derived cell boundaries (a + * weave-based auxetic lattice) + * @since 2.2 + * @see PGS_SegmentSet#weaveSegments(double, double, double, int, int, int) + * weaveSegments() + * @throws IllegalArgumentException if {@code cellSize <= 0} or {@code A <= 0} + * or {@code B <= 0} + */ + public static PShape auxeticTiling(final double width, final double height, final double cellSize, final int A, final int B, final int C) { + var segs = PGS_SegmentSet.weaveSegments(width, height, cellSize, A, B, C); + var shape = PGS_SegmentSet.toPShape(segs); + return PGS_Voronoi.compoundVoronoi(shape); + } + /** * Generates a geometric arrangement composed of annular-sector bricks arranged * in concentric circular rings. Rings progressively expand from the inside out @@ -516,6 +649,96 @@ public static PShape annularBricks(final int nRings, final double cx, final doub return PGS_Conversion.flatten(bricks); } + /** + * Produces a random domino tiling of the Aztec diamond of the given + * {@code order}. + * + *

    + * The generated arrangement is positioned so that {@code (originX, originY)} is + * the center of the Aztec diamond. The tiling is generated on a unit + * grid and scaled by {@code cellSize}. + *

    + * + *

    + * Each child shape is one domino. The child {@link PShape#getName() name} + * encodes an integer “class” identifying one of four standard domino types + * (horizontal/vertical orientation and checkerboard parity). + *
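+ * A rough usage sketch (values are illustrative; the fill colouring is just one
+ * way to use the per-domino class name):
+ * {@code
+ * PShape dominoes = PGS_Tiling.aztecDiamond(250, 250, 8, 12, 42); // order 8, 12px cells
+ * for (PShape d : dominoes.getChildren()) {
+ *   int dominoClass = Integer.parseInt(d.getName()); // one of four domino classes
+ *   d.setFill(color(40 + 50 * dominoClass)); // shade by class, using the sketch's color()
+ * }
+ * shape(dominoes);
+ * }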

    + * + * @param originX x-coordinate of the center of the generated tiling. + * @param originY y-coordinate of the center of the generated tiling. + * @param order Aztec diamond order {@code n}; must be {@code >= 1}. + * @param cellSize width/height of underlying grid cells; must be {@code > 0}. + * @param seed seed used to initialise the RNG for reproducible tilings. + * @return a flattened {@link PShape} whose child faces are axis-aligned domino + * rectangles tiling the Aztec diamond; each child’s {@code name} + * encodes one of four domino classes. + * @since 2.2 + */ + public static PShape aztecDiamond(double originX, double originY, int order, double cellSize, long seed) { + AztecDiamond a = new AztecDiamond(order, GEOM_FACTORY, new Random(seed)); + var polys = a.toMultiPolygon(cellSize, originX, originY); + var out = PGS.extractPolygons(polys).stream().map(poly -> { + var s = toPShape(poly); + int id = (int) poly.getUserData(); + s.setName(String.valueOf(id)); + return s; + }).toList(); + return PGS_Conversion.flatten(out); + } + + /** + * Generates a softened (curved) version of a tiling using the SoftCells + * edge-bending algorithm. + * + *

+ * The input mesh's straight edges are softened into smooth, Bezier-like curves + * according to the supplied parameters. The resulting shape preserves the mesh + * topology (combinatorial adjacency) while altering the geometry to produce the + * characteristic "soft cell" appearance. A short usage sketch follows the + * tangent-mode notes below. + *

    + * + *

    + * The implementation samples random directions once per vertex (not per edge) + * when a stochastic tangent mode is selected. The {@code seed} only influences + * the following TangentMode values: + *

    + *
      + *
    • {@code RANDOM} - a random unit direction (one angle) is chosen once per + * vertex;
+ *
    • {@code RANDOM_DIAGONAL} - one of the two diagonal directions (diag1 or + * diag2) is chosen once per vertex;
+ *
    • {@code RANDOM_60DEG} - one of three 60° directions is selected once per + * vertex.
+ *
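+ * A rough usage sketch (assumes {@code TangentMode} is imported from
+ * micycle.pgs.commons.SoftCells; values are illustrative):
+ * {@code
+ * PShape grid = PGS_Tiling.squareGrid(500, 500, 50);
+ * PShape soft = PGS_Tiling.softCells(grid, 0.6, TangentMode.RANDOM, 1337);
+ * shape(soft);
+ * }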
+ * + * @param mesh the input PShape representing the base tiling to be softened; + * must not be null. The input is not modified — a new PShape is + * returned. + * @param ratio a floating-point control for the amount of softening/edge + * bending. Typical usage treats this as a normalised factor + * (commonly in the [0,1] range) where smaller values produce + * subtler curvature and larger values produce stronger softening + * (values much larger than this may lead to face self-intersection). + * @param mode the TangentMode that selects how half-tangents / edge directions + * are chosen and aligned during the edge-bending process; see + * TangentMode for available modes and behaviour. + * @param seed random seed used to initialise the RNG. The seed only affects + * the stochastic tangent modes listed above; using the same seed + * with the same input mesh and parameters yields deterministic, + * repeatable output. + * @return a new PShape containing the softened tessellation (curved/soft cells) + * corresponding to the input mesh and parameters. + * @since 2.2 + */ + public static PShape softCells(PShape mesh, double ratio, TangentMode mode, long seed) { + SoftCells sc = new SoftCells(seed); + mesh = PGS_Optimisation.hilbertSortFaces(mesh); + var cells = sc.generate(mesh, mode, (float) ratio); + cells = PGS_Conversion.setAllStrokeColor(cells, Colors.PINK, 2); + return cells; + } + /** * Generates a hexagon shape. * diff --git a/src/main/java/micycle/pgs/PGS_Transformation.java index 60ac37e2..5640d818 100644 --- a/src/main/java/micycle/pgs/PGS_Transformation.java +++ b/src/main/java/micycle/pgs/PGS_Transformation.java @@ -16,15 +16,17 @@ import processing.core.PVector; /** - * Various geometric and affine transformations for PShapes that affect vertex - * coordinates. + * Geometric (mostly affine) transformations for {@link PShape}s that explicitly + * modify vertex coordinates. + * *

    - * Notably, these transformation methods affect the vertex coordinates of - * PShapes, unlike Processing's transform methods that affect the affine matrix - * of shapes only (and thereby leave vertex coordinates in-tact). - * - * @author Michael Carleton + * These methods bake transforms into the geometry: vertices are + * rewritten in-place (conceptually), and the returned {@code PShape} contains + * the transformed coordinates. This differs from Processing’s + * {@code translate()/rotate()/scale()} which modify a {@code PShape}'s internal + * vertex transform matrix without necessarily changing stored vertex positions. * + * @author Michael Carleton */ public final class PGS_Transformation { @@ -35,11 +37,15 @@ private PGS_Transformation() { * Scales the dimensions of the shape by a scaling factor relative to its * centroid. * - * @param shape + * @param shape the PShape to scale * @param scale X and Y axis scale factor + * @return A new copy of {@code shape} scaled relative to its centroid. */ public static PShape scale(PShape shape, double scale) { Geometry g = fromPShape(shape); + if (g.isEmpty()) { + return shape; + } Coordinate c = g.getCentroid().getCoordinate(); AffineTransformation t = AffineTransformation.scaleInstance(scale, scale, c.x, c.y); return toPShape(t.transform(g)); @@ -48,12 +54,16 @@ public static PShape scale(PShape shape, double scale) { /** * Scales the shape relative to its centroid. * - * @param shape + * @param shape the PShape to scale * @param scaleX X-axis scale factor * @param scaleY Y-axis scale factor + * @return A new copy of {@code shape} scaled relative to its centroid. */ public static PShape scale(PShape shape, double scaleX, double scaleY) { Geometry g = fromPShape(shape); + if (g.isEmpty()) { + return shape; + } Point c = g.getCentroid(); AffineTransformation t = AffineTransformation.scaleInstance(scaleX, scaleY, c.getX(), c.getY()); return toPShape(t.transform(g)); @@ -63,6 +73,7 @@ public static PShape scale(PShape shape, double scaleX, double scaleY) { * Scale a shape around a point. * * @since 2.0 + * @return A new copy of {@code shape} scaled around {@code point}. */ public static PShape scale(PShape shape, double scaleX, double scaleY, PVector point) { Geometry g = fromPShape(shape); @@ -74,6 +85,7 @@ public static PShape scale(PShape shape, double scaleX, double scaleY, PVector p * Scale a shape around a point. * * @since 2.0 + * @return A new copy of {@code shape} scaled around the supplied point. */ public static PShape scale(PShape shape, double scale, double x, double y) { Geometry g = fromPShape(shape); @@ -87,6 +99,7 @@ public static PShape scale(PShape shape, double scale, double x, double y) { * @param shape * @param scale scale factor * @since 1.3.0 + * @return A new copy of {@code shape} scaled relative to the origin. 
*/ public static PShape originScale(PShape shape, double scale) { Geometry g = fromPShape(shape); @@ -106,6 +119,9 @@ public static PShape originScale(PShape shape, double scale) { */ public static PShape scaleArea(PShape shape, double scale) { Geometry geometry = fromPShape(shape); + if (geometry.isEmpty()) { + return shape; + } double scalingFactor = Math.sqrt(scale); Coordinate c = geometry.getCentroid().getCoordinate(); AffineTransformation t = AffineTransformation.scaleInstance(scalingFactor, scalingFactor, c.x, c.y); @@ -122,6 +138,9 @@ public static PShape scaleArea(PShape shape, double scale) { */ public static PShape scaleAreaTo(PShape shape, double targetArea) { Geometry geometry = fromPShape(shape); + if (geometry.isEmpty()) { + return shape; + } double area = geometry.getArea(); double scalingFactor = Math.sqrt(targetArea / area); Coordinate c = geometry.getCentroid().getCoordinate(); @@ -133,7 +152,7 @@ public static PShape scaleAreaTo(PShape shape, double targetArea) { * Resizes a shape (based on its envelope) to the given dimensions, relative to * its centroid. * - * @param shape + * @param shape the PShape to resize * @param targetWidth width of the output copy * @param targetHeight height of the output copy * @return resized copy of input shape @@ -142,6 +161,9 @@ public static PShape resize(PShape shape, double targetWidth, double targetHeigh targetWidth = Math.max(targetWidth, 0.001); targetHeight = Math.max(targetHeight, 0.001); Geometry geometry = fromPShape(shape); + if (geometry.isEmpty()) { + return shape; + } Envelope e = geometry.getEnvelopeInternal(); Point c = geometry.getCentroid(); @@ -165,11 +187,16 @@ public static PShape resizeByWidth(PShape shape, double targetWidth) { targetWidth = Math.max(targetWidth, 1e-5); Geometry geometry = fromPShape(shape); + if (geometry.isEmpty()) { + return shape; + } Envelope e = geometry.getEnvelopeInternal(); Point c = geometry.getCentroid(); AffineTransformation t = AffineTransformation.scaleInstance(targetWidth / e.getWidth(), targetWidth / e.getWidth(), c.getX(), c.getY()); - return toPShape(t.transform(geometry)); + var result = t.transform(geometry); + result.setUserData(geometry.getUserData()); // preserve shape style (if any) + return toPShape(result); } /** @@ -188,11 +215,16 @@ public static PShape resizeByHeight(PShape shape, double targetHeight) { targetHeight = Math.max(targetHeight, 1e-5); Geometry geometry = fromPShape(shape); + if (geometry.isEmpty()) { + return shape; + } Envelope e = geometry.getEnvelopeInternal(); Point c = geometry.getCentroid(); AffineTransformation t = AffineTransformation.scaleInstance(targetHeight / e.getHeight(), targetHeight / e.getHeight(), c.getX(), c.getY()); - return toPShape(t.transform(geometry)); + var result = t.transform(geometry); + result.setUserData(geometry.getUserData()); // preserve shape style (if any) + return toPShape(result); } /** @@ -290,7 +322,7 @@ public static PShape translateTo(PShape shape, double x, double y) { */ public static PShape translateCentroidTo(PShape shape, double x, double y) { Geometry g = fromPShape(shape); - if (g.getNumPoints() == 0) { + if (g.isEmpty()) { return shape; } Point c = g.getCentroid(); @@ -317,7 +349,7 @@ public static PShape translateCentroidTo(PShape shape, double x, double y) { */ public static PShape translateEnvelopeTo(PShape shape, double x, double y) { Geometry g = fromPShape(shape); - if (g.getNumPoints() == 0) { + if (g.isEmpty()) { return shape; } Point c = g.getEnvelope().getCentroid(); @@ -397,7 +429,7 @@ public 
static PShape homotheticTransformation(PShape shape, PVector center, doub Coordinate[] hole_coord = geom.getInteriorRingN(j).getCoordinates(); Coordinate[] hole_coord_ = new Coordinate[hole_coord.length]; for (int i = 0; i < hole_coord.length; i++) { - hole_coord_[i] = new Coordinate(center.x + scaleY * (hole_coord[i].x - center.x), center.y + scaleY * (hole_coord[i].y - center.y)); + hole_coord_[i] = new Coordinate(center.x + scaleX * (hole_coord[i].x - center.x), center.y + scaleY * (hole_coord[i].y - center.y)); } holes[j] = geom.getFactory().createLinearRing(hole_coord_); } @@ -513,6 +545,7 @@ private static double[] getProcrustesParams(PShape shapeToAlign, PShape referenc * @param shape the shape to tranform/rotate * @param point rotation point * @param angle the rotation angle, in radians + * @return A new copy of {@code shape} rotated around {@code point}. * @see #rotateAroundCenter(PShape, double) */ public static PShape rotate(PShape shape, PVector point, double angle) { @@ -526,11 +559,14 @@ public static PShape rotate(PShape shape, PVector point, double angle) { * * @param shape * @param angle the rotation angle, in radians - * @return + * @return A new copy of {@code shape} rotated around its centroid. * @see #rotate(PShape, PVector, double) */ public static PShape rotateAroundCenter(PShape shape, double angle) { Geometry g = fromPShape(shape); + if (g.isEmpty()) { + return shape; + } Point center = g.getCentroid(); AffineTransformation t = AffineTransformation.rotationInstance(angle, center.getX(), center.getY()); return toPShape(t.transform(g)); @@ -539,9 +575,14 @@ public static PShape rotateAroundCenter(PShape shape, double angle) { /** * Flips the shape horizontally based on its centre point (mirror over the * x-axis passing through its centroid). + * + * @return A new {@code PShape} mirrored horizontally across its centroid. */ public static PShape flipHorizontal(PShape shape) { Geometry g = fromPShape(shape); + if (g.isEmpty()) { + return shape; + } Point c = g.getCentroid(); AffineTransformation t = AffineTransformation.reflectionInstance(-1, c.getY(), 1, c.getY()); return toPShape(t.transform(g)); @@ -552,7 +593,8 @@ public static PShape flipHorizontal(PShape shape) { * * @param shape * @param y y-coordinate of horizontal reflection line - * @return + * @return A new {@code PShape} mirrored across the horizontal line at + * {@code y}. */ public static PShape flipHorizontal(PShape shape, double y) { AffineTransformation t = AffineTransformation.reflectionInstance(-1, y, 1, y); @@ -562,9 +604,14 @@ public static PShape flipHorizontal(PShape shape, double y) { /** * Flips the shape vertically based on its centre point (mirror over the y-axis * passing through its centroid). + * + * @return A new {@code PShape} mirrored vertically across its centroid. */ public static PShape flipVertical(PShape shape) { Geometry g = fromPShape(shape); + if (g.isEmpty()) { + return shape; + } Point c = g.getCentroid(); AffineTransformation t = AffineTransformation.reflectionInstance(c.getX(), -1, c.getX(), 1); return toPShape(t.transform(g)); @@ -575,7 +622,7 @@ public static PShape flipVertical(PShape shape) { * * @param shape * @param x x-coordinate of vertical reflection line - * @return + * @return A new {@code PShape} mirrored across the vertical line at {@code x}. 
*/ public static PShape flipVertical(PShape shape, double x) { AffineTransformation t = AffineTransformation.reflectionInstance(x, -1, x, 1); diff --git a/src/main/java/micycle/pgs/PGS_Triangulation.java b/src/main/java/micycle/pgs/PGS_Triangulation.java index 2c4213f2..ee2939be 100644 --- a/src/main/java/micycle/pgs/PGS_Triangulation.java +++ b/src/main/java/micycle/pgs/PGS_Triangulation.java @@ -21,6 +21,8 @@ import org.locationtech.jts.geom.Geometry; import org.locationtech.jts.geom.LinearRing; import org.locationtech.jts.geom.Location; +import org.locationtech.jts.geom.MultiPolygon; +import org.locationtech.jts.geom.Polygon; import org.locationtech.jts.geom.Polygonal; import org.locationtech.jts.triangulate.polygon.PolygonTriangulator; import org.tinfour.common.IConstraint; @@ -30,6 +32,7 @@ import org.tinfour.common.SimpleTriangle; import org.tinfour.common.Vertex; import org.tinfour.edge.QuadEdge; +import org.tinfour.refinement.RuppertRefiner; import org.tinfour.standard.IncrementalTin; import org.tinfour.utils.HilbertSort; import org.tinfour.utils.TriangleCollector; @@ -43,10 +46,32 @@ import processing.core.PVector; /** - * Delaunay and earcut triangulation of shapes and point sets. - * - * @author Michael Carleton + * Triangulation utilities for 2D {@link PShape} polygons and point sets. + * + *

    + * This class provides: + *

      + *
    • Delaunay triangulation of point sets (and optional polygonal + * constraints),
+ *
    • Refinement of an existing Delaunay TIN (adding Steiner points to + * improve triangle quality),
+ *
    • Earcut triangulation for fast polygon-to-triangles + * conversion,
+ *
    • and helpers to convert triangulations to {@link PShape}, JTS + * {@link Geometry}, or graphs.
+ *
    + * + *

    Delaunay vs. Earcut (when to use which)

    + *
      + *
    • Earcut triangulates a polygon (including holes) into triangles + * that exactly cover the polygon interior. It does not attempt to optimise + * triangle quality.
+ *
    • Delaunay triangulates a set of points to maximise the minimum + * angle (in the unconstrained case), producing generally “well-shaped” + * triangles. When used with a boundary shape, results are typically + * clipped/filtered to the shape and may be optionally refined.
    • * + * @author Michael Carleton */ public final class PGS_Triangulation { @@ -414,6 +439,58 @@ private static IIncrementalTin delaunayTriangulationMesh(@Nullable PShape shape, return tin; } + /** + * Refines an existing triangulation using Ruppert's Delaunay refinement + * algorithm. + *

      + * Refinement inserts additional Steiner points in order to improve triangle + * quality, primarily by eliminating "skinny" triangles whose minimum internal + * angle is below {@code minAngleDeg}. The provided {@link IIncrementalTin} is + * modified in place. + *
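+ * A minimal sketch of the intended workflow (assumes the library's
+ * delaunayTriangulationMesh() entry point and some polygonal PShape {@code s}):
+ * {@code
+ * IIncrementalTin tin = PGS_Triangulation.delaunayTriangulationMesh(s);
+ * PGS_Triangulation.refine(tin, 25); // insert Steiner points until the minimum angle is ~25 degrees
+ * PShape triangles = PGS_Triangulation.toPShape(tin);
+ * }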

      + * Typical values for {@code minAngleDeg} are in the range 20–33 degrees. Larger + * values produce more regular triangles but may significantly increase the + * number of inserted points (and runtime). Extremely large values may not be + * achievable for constrained triangulations. + * + * @param triangulation the triangulation to refine (modified in place); must + * not be {@code null} + * @param minAngleDeg the minimum allowed triangle angle, in degrees + * @since 2.2 + */ + public static void refine(IIncrementalTin triangulation, double minAngleDeg) { + if (minAngleDeg <= 0) { + return; // no-op + } + RuppertRefiner refiner = new RuppertRefiner(triangulation, minAngleDeg); + refiner.refine(); + } + + /** + * Refines an existing triangulation using Ruppert's Delaunay refinement + * algorithm, while also enforcing a minimum triangle area threshold. + *

      + * Refinement inserts additional Steiner points to improve triangle quality by + * removing triangles with angles below {@code minAngleDeg}. The + * {@code minTriangleArea} parameter acts as a stop condition to avoid + * over-refining very small triangles. The provided {@link IIncrementalTin} is + * modified in place. + * + * @param triangulation the triangulation to refine (modified in place); must + * not be {@code null} + * @param minAngleDeg the minimum allowed triangle angle, in degrees + * @param minTriangleArea triangles with area less than or equal to this value + * will not be further refined + * @since 2.2 + */ + public static void refine(IIncrementalTin triangulation, double minAngleDeg, double minTriangleArea) { + if (minAngleDeg <= 0) { + return; // no-op + } + RuppertRefiner refiner = new RuppertRefiner(triangulation, minAngleDeg, minTriangleArea); + refiner.refine(); + } + /** * Creates a Delaunay triangulation of the shape where additional steiner * points, populated by poisson sampling, are included. @@ -534,6 +611,43 @@ public static PShape toPShape(IIncrementalTin triangulation) { return out; } + /** + * Converts a triangulated mesh object to a JTS MultiPolygon where each triangle + * is represented as a separate Polygon. + * + * @param triangulation the IIncrementalTin object to convert + * @param gf geometry factory to use + * @return a MultiPolygon containing one Polygon per triangle + * @since 2.2 + */ + static MultiPolygon toGeometry(final IIncrementalTin triangulation) { + final List triangles = new ArrayList<>(); + + final Consumer triangleVertexConsumer = t -> { + // triangle ring must be closed: p0, p1, p2, p0 + final Coordinate c0 = new Coordinate(t[0].x, t[0].y); + final Coordinate c1 = new Coordinate(t[1].x, t[1].y); + final Coordinate c2 = new Coordinate(t[2].x, t[2].y); + + final Coordinate[] coords = new Coordinate[] { c0, c1, c2, c0 }; + + final Polygon poly = PGS.GEOM_FACTORY.createPolygon(coords); + + // Skip degenerate triangles (zero area / invalid) + if (!poly.isEmpty() && poly.isValid() && poly.getArea() > 0) { + triangles.add(poly); + } + }; + + if (!triangulation.getConstraints().isEmpty()) { + TriangleCollector.visitTrianglesConstrained(triangulation, triangleVertexConsumer); + } else { + TriangleCollector.visitTriangles(triangulation, triangleVertexConsumer); + } + + return PGS.GEOM_FACTORY.createMultiPolygon(triangles.toArray(new Polygon[0])); + } + /** * Finds the graph equivalent to a triangulation. Graph vertices are * triangulation vertices; graph edges are triangulation edges. @@ -542,7 +656,6 @@ public static PShape toPShape(IIncrementalTin triangulation) { * weights are their euclidean length of their triangulation equivalent. * * @param triangulation triangulation mesh - * @return * @since 1.3.0 * @see #toTinfourGraph(IIncrementalTin) * @see #toDualGraph(IIncrementalTin) @@ -551,9 +664,6 @@ public static SimpleGraph toGraph(IIncrementalTin triangulation) final SimpleGraph graph = new SimpleWeightedGraph<>(PEdge.class); final boolean notConstrained = triangulation.getConstraints().isEmpty(); triangulation.edges().forEach(e -> { -// if (isEdgeOnPerimeter(e)) { -// return; // skip to next triangle -// } if (notConstrained || e.isConstraintRegionMember()) { final IQuadEdge base = e.getBaseReference(); PVector a = toPVector(base.getA()); @@ -572,11 +682,10 @@ public static SimpleGraph toGraph(IIncrementalTin triangulation) * Finds the graph equivalent to a triangulation. 
Graph vertices are * triangulation vertices; graph edges are triangulation edges. *

      - * The output is an undirected weighted graph of Tinfour primtives; edge weights - * are their euclidean length of their triangulation equivalent. + * The output is an undirected weighted graph of Tinfour primitives; edge + * weights are their euclidean length of their triangulation equivalent. * * @param triangulation triangulation mesh - * @return * @since 1.3.0 * @see #toGraph(IIncrementalTin) * @see #toDualGraph(IIncrementalTin) @@ -585,9 +694,6 @@ public static SimpleGraph toTinfourGraph(IIncrementalTin tria final SimpleGraph graph = new SimpleWeightedGraph<>(IQuadEdge.class); final boolean notConstrained = triangulation.getConstraints().isEmpty(); triangulation.edges().forEach(e -> { -// if (isEdgeOnPerimeter(e)) { -// return; // skip to next triangle -// } if ((notConstrained || e.isConstraintRegionMember())) { final IQuadEdge base = e.getBaseReference(); graph.addVertex(base.getA()); @@ -676,26 +782,6 @@ static PEdge toPEdge(final IQuadEdge e) { return new PEdge(toPVector(e.getA()), toPVector(e.getB())); } - /** - * Determines whether an edge or its dual is on the perimeter. - * - * @param edge a valid instance - * @return true if the edge is on the perimeter; otherwise, false. - */ - private static boolean isEdgeOnPerimeter(IQuadEdge edge) { - /* - * The logic here is that each edge defines one side of a triangle with vertices - * A, B, and C. Vertices A and B are the first and second vertices of the edge, - * vertex C is the opposite one. Triangles lying outside the Delaunay - * Triangulation have a "ghost" vertex for vertex C. Tinfour represents a ghost - * vertex with a null reference. So we test both the edge and its dual to see if - * their vertex C reference is null. Also note that vertex C is the second - * vertex of the forward edge from our edge of interest. Thus the C = - * edge.getForward().getB(). - */ - return edge.getForward().getB() == null || edge.getForwardFromDual().getB() == null; - } - /** * Computes the centroid/barycentre of a triangle. 
*/ diff --git a/src/main/java/micycle/pgs/PGS_Voronoi.java b/src/main/java/micycle/pgs/PGS_Voronoi.java index ba4ffd1a..651e0414 100644 --- a/src/main/java/micycle/pgs/PGS_Voronoi.java +++ b/src/main/java/micycle/pgs/PGS_Voronoi.java @@ -8,42 +8,58 @@ import java.util.Collection; import java.util.HashMap; import java.util.List; +import java.util.Objects; import java.util.stream.Collectors; +import org.locationtech.jts.coverage.CoverageUnion; import org.locationtech.jts.densify.Densifier; import org.locationtech.jts.geom.Coordinate; import org.locationtech.jts.geom.Envelope; import org.locationtech.jts.geom.Geometry; +import org.locationtech.jts.geom.GeometryCollection; import org.locationtech.jts.geom.Polygon; import org.locationtech.jts.geom.Polygonal; +import org.locationtech.jts.geom.TopologyException; +import org.locationtech.jts.geom.util.GeometryFixer; import org.locationtech.jts.operation.overlay.snap.GeometrySnapper; import org.locationtech.jts.operation.overlayng.OverlayNG; import org.locationtech.jts.operation.relateng.RelateNG; import org.tinfour.common.IQuadEdge; import org.tinfour.common.Vertex; -import org.tinfour.standard.IncrementalTin; import org.tinfour.utils.HilbertSort; import org.tinfour.voronoi.BoundedVoronoiBuildOptions; import org.tinfour.voronoi.BoundedVoronoiDiagram; import org.tinfour.voronoi.ThiessenPolygon; +import com.github.micycle1.geoblitz.HilbertParallelPolygonUnion; +import com.github.quickhull3d.PowerDiagram2D; +import com.github.quickhull3d.PowerDiagram2D.Rect; + import micycle.pgs.color.Colors; import micycle.pgs.commons.FarthestPointVoronoi; +import micycle.pgs.commons.ManhattanVoronoi; import micycle.pgs.commons.MultiplicativelyWeightedVoronoi; import micycle.pgs.commons.Nullable; import micycle.pgs.commons.PEdge; -import processing.core.PConstants; import processing.core.PShape; import processing.core.PVector; /** - * Voronoi Diagrams of shapes and point sets. Supports polygonal constraining - * and relaxation to generate centroidal Voronoi. - * - * @author Michael Carleton + * Voronoi diagram utilities for 2D point sets and {@link PShape} polygons. + * + *

      + * This class generates several variants of Voronoi diagrams, including: + * standard (unweighted) diagrams, additively/multiplicatively weighted + * diagrams, farthest-point Voronoi, and polygon-constrained (“inner”) Voronoi. + * + *

      Centroidal Voronoi (relaxation)

      + *

      + * Several {@code innerVoronoi(...)} overloads support Lloyd-style relaxation by + * repeatedly rebuilding the diagram and moving sites toward cell centroids, + * producing centroidal Voronoi tessellations (CVTs) inside a boundary polygon. * + * @author Michael Carleton */ -@SuppressWarnings("squid:S3776") public final class PGS_Voronoi { private PGS_Voronoi() { @@ -403,66 +419,28 @@ public static PShape compoundVoronoi(PShape shape, double[] bounds) { Geometry g = fromPShape(shape); Geometry densified = Densifier.densify(g, 2); - List vertices = new ArrayList<>(); - final List> segmentVertexGroups = new ArrayList<>(); - - for (int i = 0; i < densified.getNumGeometries(); i++) { - Geometry geometry = densified.getGeometryN(i); - List featureVertices; - switch (geometry.getGeometryType()) { - case Geometry.TYPENAME_LINEARRING : - case Geometry.TYPENAME_POLYGON : - case Geometry.TYPENAME_LINESTRING : - case Geometry.TYPENAME_POINT : - featureVertices = toVertex(geometry.getCoordinates()); - if (!featureVertices.isEmpty()) { - segmentVertexGroups.add(featureVertices); - vertices.addAll(featureVertices); - } - break; - case Geometry.TYPENAME_MULTILINESTRING : - case Geometry.TYPENAME_MULTIPOINT : - case Geometry.TYPENAME_MULTIPOLYGON : // nested multi polygon - for (int j = 0; j < geometry.getNumGeometries(); j++) { - featureVertices = toVertex(geometry.getGeometryN(j).getCoordinates()); - if (!featureVertices.isEmpty()) { - segmentVertexGroups.add(featureVertices); - vertices.addAll(featureVertices); - } - } - break; - default : - break; - } - } + List vertices = new ArrayList<>(Math.max(16, densified.getNumPoints())); + List> segmentVertexGroups = new ArrayList<>(Math.max(16, densified.getNumGeometries())); + + collectVertexGroups(densified, segmentVertexGroups, vertices); if (vertices.size() > 2500) { HilbertSort hs = new HilbertSort(); hs.sort(vertices); } - final IncrementalTin tin = new IncrementalTin(2); - tin.add(vertices, null); // initial triangulation - if (!tin.isBootstrapped()) { - return new PShape(); // shape probably empty - } final BoundedVoronoiBuildOptions options = new BoundedVoronoiBuildOptions(); - final double x, y, w, h; + final Rectangle2D boundsRect; if (bounds == null) { - final Envelope envelope = g.getEnvelopeInternal(); - x = envelope.getMinX(); - y = envelope.getMinY(); - w = envelope.getMaxX() - envelope.getMinX(); - h = envelope.getMaxY() - envelope.getMinY(); + final Envelope e = g.getEnvelopeInternal(); + boundsRect = new Rectangle2D.Double(e.getMinX(), e.getMinY(), e.getWidth(), e.getHeight()); } else { - x = bounds[0]; - y = bounds[1]; - w = bounds[2] - bounds[0]; - h = bounds[3] - bounds[1]; + boundsRect = new Rectangle2D.Double(bounds[0], bounds[1], bounds[2] - bounds[0], bounds[3] - bounds[1]); } - options.setBounds(new Rectangle2D.Double(x, y, w, h)); + options.setBounds(boundsRect); + options.enableAutomaticColorAssignment(false); - final BoundedVoronoiDiagram voronoi = new BoundedVoronoiDiagram(tin); + final BoundedVoronoiDiagram voronoi = new BoundedVoronoiDiagram(vertices, options); // Map densified vertices to the voronoi cell they define. final HashMap vertexCellMap = new HashMap<>(); @@ -473,27 +451,28 @@ public static PShape compoundVoronoi(PShape shape, double[] bounds) { * vertices by their source geometry and then union/dissolve the cells belonging * to each vertex group. 
*/ - final List faces = segmentVertexGroups.parallelStream().map(vertexGroup -> { - PShape cellSegments = new PShape(PConstants.GROUP); + final List faces = segmentVertexGroups.parallelStream().map(vertexGroup -> { + var cells = new ArrayList(vertexGroup.size()); vertexGroup.forEach(segmentVertex -> { ThiessenPolygon thiessenCell = vertexCellMap.get(segmentVertex); if (thiessenCell != null) { // null if degenerate input - PShape cellSegment = new PShape(PShape.PATH); - cellSegment.beginShape(); - for (IQuadEdge e : thiessenCell.getEdges()) { - cellSegment.vertex((float) e.getA().x, (float) e.getA().y); - } - cellSegment.endShape(PConstants.CLOSE); - cellSegments.addChild(cellSegment); + cells.add(toPolygon(thiessenCell)); } }); - return PGS_ShapeBoolean.unionMesh(cellSegments); - }).collect(Collectors.toList()); - PShape voronoiCells = PGS_Conversion.flatten(faces); + try { + return CoverageUnion.union(cells.toArray(new Geometry[0])); + } catch (TopologyException e) { + var gf = cells.get(0).getFactory(); + var valid = GeometryFixer.fix(gf.createGeometryCollection(cells.toArray(new Geometry[0]))); + return HilbertParallelPolygonUnion.union(valid); + } + + }).toList(); + + PShape voronoiCells = toPShape(faces); PGS_Conversion.setAllFillColor(voronoiCells, Colors.WHITE); PGS_Conversion.setAllStrokeColor(voronoiCells, Colors.PINK, 2); - return voronoiCells; } @@ -508,16 +487,17 @@ public static PShape compoundVoronoi(PShape shape, double[] bounds) { * generator points. This results in characteristically curved cell boundaries, * unlike the straight line boundaries seen in standard Voronoi diagrams. * - * @param sites A list of PVectors, each representing one site: - * (.x, .y) represent the coordinate and - * .z represents weight. - * @param bounds an array of the form [minX, minY, maxX, maxY] representing the - * bounds of the diagram. The boundary must cover all points. + * @param weightedSites A list of PVectors, each representing one site: + * (.x, .y) represent the coordinate and + * .z represents weight. + * @param bounds an array of the form [minX, minY, maxX, maxY] + * representing the bounds of the diagram. The boundary + * must cover all points. * @return a GROUP PShape, where each child shape is a Voronoi cell * @since 2.0 */ - public static PShape multiplicativelyWeightedVoronoi(Collection sites, double[] bounds) { - return multiplicativelyWeightedVoronoi(sites, bounds, false); + public static PShape multiplicativelyWeightedVoronoi(Collection weightedSites, double[] bounds) { + return multiplicativelyWeightedVoronoi(weightedSites, bounds, false); } /** @@ -531,7 +511,7 @@ public static PShape multiplicativelyWeightedVoronoi(Collection sites, * generator points. This results in characteristically curved cell boundaries, * unlike the straight line boundaries seen in standard Voronoi diagrams. * - * @param sites A list of PVectors, each representing one site: + * @param weightedSites A list of PVectors, each representing one site: * (.x, .y) represent the coordinate and * .z represents weight. 
* @param bounds an array of the form [minX, minY, maxX, maxY] @@ -543,8 +523,8 @@ public static PShape multiplicativelyWeightedVoronoi(Collection sites, * @return a GROUP PShape, where each child shape is a Voronoi cell * @since 2.0 */ - public static PShape multiplicativelyWeightedVoronoi(Collection sites, double[] bounds, boolean forceConforming) { - var faces = MultiplicativelyWeightedVoronoi.getMWVFromPVectors(sites.stream().toList(), bounds); + public static PShape multiplicativelyWeightedVoronoi(Collection weightedSites, double[] bounds, boolean forceConforming) { + var faces = MultiplicativelyWeightedVoronoi.getMWVFromPVectors(weightedSites.stream().toList(), bounds); Geometry geoms = PGS.GEOM_FACTORY.createGeometryCollection(faces.toArray(new Geometry[] {})); if (forceConforming) { geoms = GeometrySnapper.snapToSelf(geoms, 1e-6, true); // slow @@ -628,6 +608,126 @@ public static PShape farthestPointVoronoi(Collection sites, double[] bo return toPShape(fpvd.getDiagram()); } + /** + * Computes a power diagram (a.k.a. Laguerre–Voronoi diagram) for + * a set of weighted sites, with no clipping bounds. + *

      + * Each site is given as a {@link PVector} where {@code (.x, .y)} is the site + * location and {@code .z} is its weight. + *

      Intuition

      A power diagram is the weighted analogue of a standard + * Voronoi diagram, but it still produces straight-edged (polygonal) + * cells. Increasing a site's weight can allow it to “win” territory even + * when it is farther away in ordinary Euclidean distance. + *

      + * Unlike an additively-weighted Voronoi diagram (Apollonius diagram), + * which typically yields curved boundaries, power diagrams use power + * distance (squared distance with a weight offset), which keeps boundaries + * linear. + * + * @param weightedSites collection of sites encoded as PVectors: + * {@code (.x, .y)} = position, {@code .z} = weight + * @return a GROUP {@link PShape} whose children are the (closed) polygonal + * cells of the power diagram; empty/degenerate cells are omitted + * @see #powerDiagram(Collection, double[]) + * @since 2.2 + */ + public static PShape powerDiagram(Collection weightedSites) { + return powerDiagram(weightedSites, null); + } + + /** + * Computes a power diagram (a.k.a. Laguerre–Voronoi diagram) for + * a set of weighted sites. + *

      + * Each site is given as a {@link PVector} where {@code (.x, .y)} is the site + * location and {@code .z} is its weight. + *

      Intuition

      A power diagram is the weighted analogue of a standard + * Voronoi diagram, but it still produces straight-edged (polygonal) + * cells. Conceptually, each site has an associated “strength” (its weight) + * that offsets distance: a site with a larger weight can “win” territory even + * if it is farther away in ordinary Euclidean terms. Power cells may be empty + * (i.e. fewer cells than sites) and may not contain the site. + *

      + * Unlike an additively-weighted Voronoi diagram (a.k.a. Apollonius + * diagram), where distance is modified by subtracting a radius/weight + * and cell boundaries are typically curved (circular arcs), the power + * diagram uses power distance (squared distance with a weight offset), + * which keeps boundaries linear and cells convex. + *
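+ * A hedged usage sketch (site positions and weights are arbitrary illustrative
+ * values; note the deliberately large weight contrast):
+ * {@code
+ * var sites = List.of(
+ *     new PVector(100, 100, 10),   // x, y, weight
+ *     new PVector(300, 150, 900),  // much heavier site claims more area
+ *     new PVector(220, 340, 10));
+ * PShape cells = PGS_Voronoi.powerDiagram(sites, new double[] { 0, 0, 500, 500 });
+ * shape(cells);
+ * }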

      + * Note: in practice, weights often need to differ substantially in magnitude + * (roughly on the order of ~100×) before the effect is visually obvious. + * + * @param weightedSites collection of sites encoded as PVectors: + * {@code (.x, .y)} = position, {@code .z} = weight + * @param bounds optional clipping bounds as + * {@code [minX, minY, maxX, maxY]}. If {@code null}, the + * diagram is left unclipped. + * @return a GROUP {@link PShape} whose children are the (closed) polygonal + * cells of the power diagram; empty/degenerate cells are omitted + * @since 2.2 + * @see #powerDiagram(Collection) + */ + public static PShape powerDiagram(Collection weightedSites, @Nullable double[] bounds) { + var sites = weightedSites.stream().map(z -> new PowerDiagram2D.Site(z.x, z.y, z.z)).toList(); + final Rect r = bounds == null ? null : new Rect(bounds[0], bounds[1], bounds[2], bounds[3]); + var cells = PowerDiagram2D.computeCells(sites, r); + var faces = cells.stream().map(cell -> { + if (cell.isEmpty()) { + return null; + } + var points = cell.stream().map(q -> new PVector((float) q.x(), (float) q.y())).collect(Collectors.toList()); + if (!points.get(0).equals(points.get(points.size() - 1))) { + points.add(points.get(0)); // unclosed by default - close + } + return PGS_Conversion.fromPVector(points); + }).filter(Objects::nonNull).toList(); + + return PGS_Conversion.flatten(faces); + } + + /** + * Computes a Manhattan (L1) Voronoi diagram for a set of sites, + * optionally clipped to an axis-aligned bounding box. + *

      + * In a Manhattan Voronoi diagram, distance is measured using the L1 + * (a.k.a. “city-block” or “taxicab”) metric. Each output cell contains the + * points for which a given site is the nearest site under this metric + * (ties may occur along cell boundaries). + *
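+ * For example, the L1 distance between (0, 0) and (3, 4) is |3 - 0| + |4 - 0| = 7,
+ * whereas the Euclidean (L2) distance between the same points is 5.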

      + * If {@code bounds} is {@code null}, clipping bounds are computed automatically + * from the input sites using their axis-aligned envelope (i.e. the min/max of + * {@code x} and {@code y}). Note that this envelope is often a tight fit; if + * you want visible “infinite” outer cells, pass an expanded bounding box. + *
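+ * A rough usage sketch (assumes the library's PGS_PointSet.random() helper for
+ * generating test sites; values are illustrative):
+ * {@code
+ * var sites = PGS_PointSet.random(0, 0, 500, 500, 50);
+ * // pad the clip rect beyond the sites' envelope so the outer cells stay visible
+ * PShape cells = PGS_Voronoi.manhattanVoronoi(sites, new double[] { -50, -50, 550, 550 });
+ * shape(cells);
+ * }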

      + * Compared to a standard (Euclidean/L2) Voronoi diagram, Manhattan Voronoi + * cells tend to align with the coordinate axes and produce characteristic + * 45°/axis- aligned edges. + * + * @param sites collection of {@link PVector} sites (only {@code x} and + * {@code y} are used) + * @param bounds optional clipping bounds as {@code [minX, minY, maxX, maxY]} + * defining the axis-aligned rectangle to which the diagram is + * restricted. If {@code null}, bounds are derived from the sites' + * envelope. + * @return a {@link PShape} representing the (optionally clipped) Manhattan + * Voronoi cells (a GROUP shape whose children are polygonal regions) + * @since 2.2 + */ + public static PShape manhattanVoronoi(Collection sites, @Nullable double[] bounds) { + var coords = sites.stream().map(PGS::coordFromPVector).toList(); + Envelope e; + if (bounds == null) { + var mp = PGS.GEOM_FACTORY.createMultiPointFromCoords(coords.toArray(Coordinate[]::new)); + e = mp.getEnvelopeInternal(); + } else { + e = new Envelope(bounds[0], bounds[2], bounds[1], bounds[3]); + } + var vSites = ManhattanVoronoi.generate(coords, e, false); + + var cells = vSites.stream().map(s -> s.toPolygon(PGS.GEOM_FACTORY)).toList(); + return toPShape(cells); + } + static Polygon toPolygon(ThiessenPolygon polygon) { Coordinate[] coords = new Coordinate[polygon.getEdges().size() + 1]; int i = 0; @@ -662,4 +762,28 @@ private static List toVertex(Coordinate[] coords) { } return vertices; } + + /** + * Collects coordinate sets into groups, handling nested GeometryCollections + * uniformly. + */ + private static void collectVertexGroups(Geometry geom, List> groups, List allVertices) { + if (geom == null || geom.isEmpty()) + return; + + // GeometryCollection covers MultiPoint/MultiLineString/MultiPolygon and more. + if (geom instanceof GeometryCollection gc) { + for (int i = 0; i < gc.getNumGeometries(); i++) { + collectVertexGroups(gc.getGeometryN(i), groups, allVertices); + } + return; + } + + // For Polygon/LineString/LinearRing/Point etc. + List featureVertices = toVertex(geom.getCoordinates()); + if (!featureVertices.isEmpty()) { + groups.add(featureVertices); + allVertices.addAll(featureVertices); + } + } } diff --git a/src/main/java/micycle/pgs/color/Colors.java b/src/main/java/micycle/pgs/color/Colors.java index 661e6363..d744141e 100644 --- a/src/main/java/micycle/pgs/color/Colors.java +++ b/src/main/java/micycle/pgs/color/Colors.java @@ -1,5 +1,13 @@ package micycle.pgs.color; +/** + * Provides a collection of standard and library-specific color constants as + * ARGB integers. These colors are used for default styling of generated + * geometries. + * + * @author Michael Carleton + * + */ public final class Colors { /** ColorUtils (0, 0, 0) */ diff --git a/src/main/java/micycle/pgs/color/Palette.java b/src/main/java/micycle/pgs/color/Palette.java index 4ce9fd9c..73a3fc74 100644 --- a/src/main/java/micycle/pgs/color/Palette.java +++ b/src/main/java/micycle/pgs/color/Palette.java @@ -81,41 +81,82 @@ private Palette(String[] value) { length = value.length; } + /** + * Returns the hex string representation of the colors in this palette. + * + * @return an array of hex color strings (e.g., "#ffe03d") + */ public String[] stringValue() { return value; } + /** + * Returns the hex string representation of the colors in this palette, rotated + * by the specified amount. 
+ * + * @param rotation the number of positions to rotate the palette (can be + * negative) + * @return an array of rotated hex color strings + */ public String[] stringValue(int rotation) { return rotate(stringValue(), rotation); } + /** + * Returns the integer representation of the colors in this palette. + * + * @return an array of ARGB color integers + */ public int[] intValue() { return intValue; } + /** + * Returns the number of colors in this palette. + * + * @return the palette length + */ public int length() { return length; } + /** + * Returns the integer representation of the colors in this palette, rotated by + * the specified amount. + * + * @param rotation the number of positions to rotate the palette (can be + * negative) + * @return an array of rotated color integers + */ public int[] intValue(int rotation) { return rotate(intValue, rotation); } + /** + * Returns the last color in this palette. + * + * @return the ARGB color integer of the last element + */ public int getLastColor() { return intValue[intValue.length - 1]; } /** * Gets the color closest to the given fraction along the palette. + * + * @param fraction the relative position along the palette [0, 1] + * @return the ARGB color integer at the nearest step */ public int get(double fraction) { return intValue[(int) Math.round(fraction * (intValue.length - 1))]; } /** - * Gets the color at the given index, modulo-ready. + * Gets the color at the given index, modulo-ready (supports negative indices by + * wrapping around). * - * @param index + * @param index the index of the color to retrieve + * @return the ARGB color integer at the specified index */ public int get(int index) { int length = intValue.length; @@ -123,14 +164,45 @@ public int get(int index) { return intValue[positiveIndex % length]; } + /** + * Gets the color at the given index from the palette rotated by + * {@code rotation}, modulo-ready (supports negative index and/or rotation). + * + * Equivalent to: {@code rotate(intValue(), rotation)[index]} but without + * copying. + */ + public int get(int index, int rotation) { + int len = intValue.length; + int idx = Math.floorMod(index, len); + int rot = Math.floorMod(rotation, len); + return intValue[(idx + rot) % len]; + } + + /** + * Returns the palette corresponding to the given ID (ordinal index). + * + * @param id the zero-based index of the palette + * @return the Palette instance associated with the ID + */ public static Palette getPalette(int id) { return Palette.values()[id % Palette.values().length]; } + /** + * Returns a random palette from the selection. + * + * @return a randomly chosen Palette + */ public static Palette getRandomPalette() { return Palette.values()[(int) (Math.random() * Palette.values().length)]; } + /** + * Returns a random palette that contains at least a specified number of colors. 
+ * + * @param minColors the minimum required number of colors in the palette + * @return a randomly chosen Palette meeting the size requirement + */ public static Palette getRandomPalette(int minColors) { Palette palette = null; while (palette == null || palette.intValue.length < minColors) { diff --git a/src/main/java/micycle/pgs/commons/AreaOptimalPolygonizer.java b/src/main/java/micycle/pgs/commons/AreaOptimalPolygonizer.java new file mode 100644 index 00000000..d943707c --- /dev/null +++ b/src/main/java/micycle/pgs/commons/AreaOptimalPolygonizer.java @@ -0,0 +1,698 @@ +package micycle.pgs.commons; + +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.Comparator; +import java.util.HashSet; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.Set; + +import org.locationtech.jts.algorithm.LineIntersector; +import org.locationtech.jts.algorithm.Orientation; +import org.locationtech.jts.algorithm.RobustLineIntersector; +import org.locationtech.jts.geom.Coordinate; +import org.locationtech.jts.geom.GeometryFactory; +import org.locationtech.jts.geom.LinearRing; +import org.locationtech.jts.geom.Polygon; + +/** + *

      + * Computes approximate area-optimal (minimum- and maximum-area) polygonizations + * of a given set of 2D points. The approach combines a triangle-based + * constructive heuristic with a triangle-swap local-search procedure. + *

      + * + *

      + * Based on Natanael Ramos, Raí C. de Jesus, Pedro J. de Rezende, Cid C. de + * Souza, and Fábio L. Usberti. "Triangle-Based Heuristics for Area Optimal + * Polygonizations". + *
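+ *
+ * <p>
+ * A minimal usage sketch (illustrative only; the sample coordinates below are
+ * arbitrary):
+ *
+ * <pre>{@code
+ * List<Coordinate> points = List.of(
+ *     new Coordinate(0, 0), new Coordinate(10, 2), new Coordinate(6, 9),
+ *     new Coordinate(2, 7), new Coordinate(8, 5));
+ * Polygon maxPoly = AreaOptimalPolygonizer.polygonize(points, AreaObjective.MAXIMIZE);
+ * Polygon minPoly = AreaOptimalPolygonizer.polygonize(points, AreaObjective.MINIMIZE);
+ * }</pre>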

      + * + * @author Michael Carleton + */ +public final class AreaOptimalPolygonizer { + + public enum AreaObjective { + MAXIMIZE, MINIMIZE + } + + static LineIntersector INTERSECTOR = new RobustLineIntersector(); + + /** + * Area-optimal-heuristic polygonization using default settings. + * + *

      + * Constructs an approximate area-optimal polygon visiting all distinct input + * points using the proximity heuristic. Duplicates and nulls in + * {@code inputPoints} are ignored. The pivot is chosen automatically (the first + * unique point in iteration order). Triangle-swap local-search is applied by + * default and a new {@link GeometryFactory} is used to construct the resulting + * polygon. + *

      + * + * @param inputPoints list of points (duplicates allowed; duplicates are + * removed, preserving iteration order) + * @param objective area objective (MAXIMIZE or MINIMIZE) + * @return a simple, non-degenerate {@link Polygon} visiting the given points + */ + public static Polygon polygonize(List inputPoints, AreaObjective objective) { + return polygonize(inputPoints, null, objective, true, new GeometryFactory()); + } + + /** + *

      + * Constructs an approximate area-optimal polygon visiting all distinct input + * points using a heuristic. Optionally, the polygon may be refined by applying + * the triangle-swap local-search (SLS) procedure. + *

      + * + *

      + * Duplicates and nulls in inputPoints are ignored. If pivot is null, the first + * unique point (in input iteration order) is used as the pivot. The returned + * polygon is oriented CCW and is guaranteed to be simple and non-degenerate + * (otherwise an exception is thrown). + *
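+ *
+ * <p>
+ * Sketch of the full signature (values shown are illustrative; here the
+ * triangle-swap post-optimisation is skipped):
+ *
+ * <pre>{@code
+ * Collection<Coordinate> points = List.of(
+ *     new Coordinate(0, 0), new Coordinate(9, 1), new Coordinate(7, 8), new Coordinate(1, 6));
+ * Polygon p = AreaOptimalPolygonizer.polygonize(points, new Coordinate(0, 0),
+ *     AreaObjective.MINIMIZE, false, new GeometryFactory());
+ * }</pre>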

      + * + * @param inputPoints list of input points (duplicates allowed; + * duplicates are removed preserving order) + * @param pivot optional pivot; if null, the first unique + * point is used + * @param objective area objective (MAXIMIZE or MINIMIZE) + * @param useTriangleSwapLocalSearch if true, apply triangle-swap local-search + * as a post-optimization step + * @param gf geometry factory used to construct the + * resulting polygon + * @return a simple, non-degenerate Polygon visiting the given points + */ + public static Polygon polygonize(Collection inputPoints, Coordinate pivot, AreaObjective objective, boolean postOptimize, GeometryFactory gf) { + Objects.requireNonNull(inputPoints, "inputPoints"); + Objects.requireNonNull(objective, "objective"); + Objects.requireNonNull(gf, "gf"); + + List pts = dedupePreserveOrder(inputPoints); + if (pts.size() < 3) { + throw new IllegalArgumentException("Need at least 3 distinct points."); + } + + Coordinate piv = (pivot != null) ? pivot : pts.get(0); + + // Sort by non-decreasing distance to pivot + List sorted = new ArrayList<>(pts); + sorted.sort(Comparator.comparingDouble((Coordinate c) -> c.distanceSq(piv)).thenComparingDouble(c -> c.x).thenComparingDouble(c -> c.y)); + + // Build initial non-degenerate polygon (handles initial collinearity) + List ring = buildInitialRingHandlingCollinear(sorted); + + // Track used points + Set used = new HashSet<>(ring.size() * 2); + for (Coordinate c : ring) { + used.add(c); + } + + // Reusable candidate buffers (avoid allocations per point) + final int maxN = pts.size(); + final int[] candEdge = new int[maxN]; + final double[] candScore = new double[maxN]; + + // Insert remaining points + for (Coordinate p : sorted) { + if (used.contains(p)) { + continue; + } + + final int m = ring.size(); + + // 1) Score all candidate edges (cheap) + int c = 0; + for (int i = 0; i < m; i++) { + Coordinate q = ring.get(i); + Coordinate r = ring.get((i + 1) % m); + + double area2 = triangleArea2(p, q, r); + if (area2 == 0.0) { + continue; // degenerate triangle + } + + candEdge[c] = i; + candScore[c] = area2; + c++; + } + + // 2) Sort candidates by objective score (best-first) + sortCandidatesByScore(candEdge, candScore, c, objective); + + // 3) Test candidates in that order until first valid (expensive part) + int bestEdge = -1; + for (int k = 0; k < c; k++) { + int i = candEdge[k]; + if (isInsertionNonCrossingFast(ring, i, p)) { + bestEdge = i; + break; + } + } + + if (bestEdge < 0) { + throw new IllegalStateException("No valid insertion edge found for point " + p); + } + + ring.add(bestEdge + 1, new Coordinate(p)); + used.add(p); + } + + // Ensure CCW (JTS doesn't require, but usually preferred) + if (!isCCW(ring)) { + Collections.reverse(ring); + } + + Polygon poly = toPolygon(ring, gf); + if (!poly.isValid() || poly.getArea() == 0.0) { + throw new IllegalStateException("Constructed polygon is invalid/degenerate. Try a different pivot."); + } + + if (postOptimize) { + return TriangleSwapLocalSearch.improveWithSLS(poly, ring, objective, gf); + } else { + return poly; + } + } + + /** + * Returns true if (s1-s2) intersects (e1-e2) in a way that is NOT allowed. The + * only allowed intersection point is 'allowedEndpoint' (exactly). 
+ */ + private static boolean segmentsIntersectDisallowed(Coordinate s1, Coordinate s2, Coordinate e1, Coordinate e2, Coordinate allowedEndpoint) { + INTERSECTOR.computeIntersection(s1, s2, e1, e2); + if (!INTERSECTOR.hasIntersection()) { + return false; + } + + // Proper intersection (interior-interior crossing) is not allowed + if (INTERSECTOR.isProper()) { + return true; + } + + // Non-proper intersection: must be exactly at allowedEndpoint (and only that) + int n = INTERSECTOR.getIntersectionNum(); + if (n == LineIntersector.COLLINEAR) { + return true; + } + for (int i = 0; i < n; i++) { + Coordinate ip = INTERSECTOR.getIntersection(i); + if (!ip.equals2D(allowedEndpoint)) { + return true; + } + } + return false; + } + + private static boolean isInsertionNonCrossingFast(List ring, int edgeIndex, Coordinate p) { + int m = ring.size(); + Coordinate q = ring.get(edgeIndex); + Coordinate r = ring.get((edgeIndex + 1) % m); + + // Adjacent edges can only meet at endpoints; skip them to avoid needless tests + int prevEdge = (edgeIndex - 1 + m) % m; // (... -> q) + int nextEdge = (edgeIndex + 1) % m; // (r -> ...) + + // Check (q,p) against all edges except replaced edge (edgeIndex) and prevEdge + // Check (p,r) against all edges except replaced edge (edgeIndex) and nextEdge + if (segmentHitsAnyEdge(ring, q, p, edgeIndex, prevEdge, q) || segmentHitsAnyEdge(ring, p, r, edgeIndex, nextEdge, r)) { + return false; + } + + return true; + } + + private static boolean segmentHitsAnyEdge(List ring, Coordinate s1, Coordinate s2, int skipEdge1, int skipEdge2, Coordinate allowedEndpoint) { + int m = ring.size(); + + // Segment AABB + double sMinX = Math.min(s1.x, s2.x), sMaxX = Math.max(s1.x, s2.x); + double sMinY = Math.min(s1.y, s2.y), sMaxY = Math.max(s1.y, s2.y); + + for (int j = 0; j < m; j++) { + if (j == skipEdge1 || j == skipEdge2) { + continue; + } + + Coordinate a = ring.get(j); + Coordinate b = ring.get((j + 1) % m); + + // Edge AABB and quick reject + double eMinX = Math.min(a.x, b.x), eMaxX = Math.max(a.x, b.x); + double eMinY = Math.min(a.y, b.y), eMaxY = Math.max(a.y, b.y); + + if (sMaxX < eMinX || eMaxX < sMinX || sMaxY < eMinY || eMaxY < sMinY) { + continue; + } + + if (segmentsIntersectDisallowed(s1, s2, a, b, allowedEndpoint)) { + return true; + } + } + return false; + } + + private static void sortCandidatesByScore(int[] edge, double[] score, int n, AreaObjective obj) { + boolean desc = (obj == AreaObjective.MAXIMIZE); + if (n > 1) { + quicksort(edge, score, 0, n - 1, desc); + } + } + + /** + * Paper suggests handling initial collinear prefix; this builds a + * non-degenerate starting polygon. 
+ */ + private static List buildInitialRingHandlingCollinear(List sorted) { + if (sorted.size() < 3) { + throw new IllegalArgumentException(); + } + + Coordinate a = sorted.get(0); + Coordinate b = sorted.get(1); + + int idx = 2; + while (idx < sorted.size() && index(a, b, sorted.get(idx)) == Orientation.COLLINEAR) { + idx++; + } + if (idx >= sorted.size()) { + throw new IllegalArgumentException("All points appear collinear; no simple polygonization exists."); + } + + // Collinear prefix: sorted[0..idx-1], next non-collinear point is r = + // sorted[idx] + List col = new ArrayList<>(sorted.subList(0, idx)); + Coordinate r = sorted.get(idx); + + // Pick extremes along the line to be endpoints p,q + CollinearOrderComparator order = lineOrder(a, b); + col.sort(order); + + Coordinate p = col.get(0); + Coordinate q = col.get(col.size() - 1); + + // Start with triangle p-q-r (ensure CCW later) + List ring = new ArrayList<>(); + ring.add(new Coordinate(p)); + + // Insert intermediate collinear points between p and q + for (int i = 1; i < col.size() - 1; i++) { + ring.add(new Coordinate(col.get(i))); + } + + ring.add(new Coordinate(q)); + ring.add(new Coordinate(r)); + return ring; + } + + private static Polygon toPolygon(List ring, GeometryFactory gf) { + Coordinate[] coords = new Coordinate[ring.size() + 1]; + for (int i = 0; i < ring.size(); i++) { + coords[i] = ring.get(i); + } + coords[coords.length - 1] = ring.get(0); // close + LinearRing shell = gf.createLinearRing(coords); + return gf.createPolygon(shell); + } + + /** Twice the triangle area (absolute cross product). */ + private static double triangleArea2(Coordinate p, Coordinate q, Coordinate r) { + return Math.abs((q.x - p.x) * (r.y - p.y) - (q.y - p.y) * (r.x - p.x)); + } + + private static boolean isCCW(List ring) { + // Signed area (shoelace). Positive => CCW. + double a = 0.0; + int n = ring.size(); + for (int i = 0; i < n; i++) { + Coordinate c1 = ring.get(i); + Coordinate c2 = ring.get((i + 1) % n); + a += (c1.x * c2.y) - (c2.x * c1.y); + } + return a > 0.0; + } + + private static boolean pointOnSegment(Coordinate p, Coordinate a, Coordinate b) { + if (index(a, b, p) != Orientation.COLLINEAR) { + return false; + } + double minX = Math.min(a.x, b.x), maxX = Math.max(a.x, b.x); + double minY = Math.min(a.y, b.y), maxY = Math.max(a.y, b.y); + return p.x >= minX && p.x <= maxX && p.y >= minY && p.y <= maxY; + } + + private static List dedupePreserveOrder(Collection pts) { + Map map = new LinkedHashMap<>(); + for (Coordinate c : pts) { + if (c == null) { + continue; + } + map.putIfAbsent(c, new Coordinate(c)); + } + return new ArrayList<>(map.values()); + } + + /** + * Orders points along a line (choose x or y depending on direction), to pick + * extremes of a collinear set. 
+ */ + private static CollinearOrderComparator lineOrder(Coordinate a, Coordinate b) { + double dx = Math.abs(b.x - a.x); + double dy = Math.abs(b.y - a.y); + if (dx >= dy) { + return new CollinearOrderComparator(true); + } else { + return new CollinearOrderComparator(false); + } + } + + private static int index(Coordinate p1, Coordinate p2, Coordinate q) { + return Orientation.index(p1, p2, q); +// double dx1 = p2.x - p1.x; +// double dy1 = p2.y - p1.y; +// double dx2 = q.x - p1.x; +// double dy2 = q.y - p1.y; +// +// double cross = dx1 * dy2 - dy1 * dx2; +// +// if (cross > 0.0) { +// return 1; // COUNTERCLOCKWISE / LEFT +// } +// if (cross < 0.0) { +// return -1; // CLOCKWISE / RIGHT +// } +// return 0; // COLLINEAR / STRAIGHT + } + + private static void quicksort(int[] edge, double[] score, int lo, int hi, boolean desc) { + int i = lo, j = hi; + int mid = (lo + hi) >>> 1; + double pivotScore = score[mid]; + int pivotEdge = edge[mid]; + + while (i <= j) { + if (desc) { + while (score[i] > pivotScore || (score[i] == pivotScore && edge[i] < pivotEdge)) { + i++; + } + while (score[j] < pivotScore || (score[j] == pivotScore && edge[j] > pivotEdge)) { + j--; + } + } else { + while (score[i] < pivotScore || (score[i] == pivotScore && edge[i] < pivotEdge)) { + i++; + } + while (score[j] > pivotScore || (score[j] == pivotScore && edge[j] > pivotEdge)) { + j--; + } + } + + if (i <= j) { + double ts = score[i]; + score[i] = score[j]; + score[j] = ts; + int te = edge[i]; + edge[i] = edge[j]; + edge[j] = te; + i++; + j--; + } + } + + if (lo < j) { + quicksort(edge, score, lo, j, desc); + } + if (i < hi) { + quicksort(edge, score, i, hi, desc); + } + } + + private static final class CollinearOrderComparator implements Comparator { + private final boolean byX; + + CollinearOrderComparator(boolean byX) { + this.byX = byX; + } + + @Override + public int compare(Coordinate o1, Coordinate o2) { + if (byX) { + int c = Double.compare(o1.x, o2.x); + if (c != 0) { + return c; + } + return Double.compare(o1.y, o2.y); + } else { + int c = Double.compare(o1.y, o2.y); + if (c != 0) { + return c; + } + return Double.compare(o1.x, o2.x); + } + } + } + + final class TriangleSwapLocalSearch { + + static Polygon improveWithSLS(Polygon start, List allPoints, AreaOptimalPolygonizer.AreaObjective objective, GeometryFactory gf) { + List ring = exteriorRingToList(start); + // Ensure CCW for reflex/convex tests + if (!isCCW(ring)) { + Collections.reverse(ring); + } + + List pts = dedupePreserveOrder(allPoints); + + boolean improved; + do { + improved = false; + int n = ring.size(); + for (int j = 0; j < n; j++) { + int jm2 = mod(j - 2, n); + int jm1 = mod(j - 1, n); + int jp1 = mod(j + 1, n); + int jp2 = mod(j + 2, n); + + if (!isReflex(ring, j) || !isConvex(ring, jm1) || !isConvex(ring, jp1)) { + continue; + } + + Coordinate a = ring.get(jm1); + Coordinate b = ring.get(j); + Coordinate c = ring.get(jp1); + + if (triangleArea2(a, b, c) == 0.0) { + continue; + } + if (!triangleEmptyWrtPoints(pts, a, b, c)) { + continue; + } + + double ins = triangleArea2(a, b, c); + + // left remove candidate: (jm2, jm1, j), swap jm1 <-> j + Coordinate l0 = ring.get(jm2), l1 = ring.get(jm1), l2 = ring.get(j); + boolean leftEmpty = triangleArea2(l0, l1, l2) != 0.0 && triangleEmptyWrtPoints(pts, l0, l1, l2); + double leftArea = triangleArea2(l0, l1, l2); + + // right remove candidate: (j, jp1, jp2), swap j <-> jp1 + Coordinate r0 = ring.get(j), r1 = ring.get(jp1), r2 = ring.get(jp2); + boolean rightEmpty = triangleArea2(r0, r1, r2) != 0.0 && 
triangleEmptyWrtPoints(pts, r0, r1, r2); + double rightArea = triangleArea2(r0, r1, r2); + + int swapI = -1, swapK = -1; // swap indices + if (objective == AreaOptimalPolygonizer.AreaObjective.MAXIMIZE) { + boolean leftOk = leftEmpty && leftArea < ins; + boolean rightOk = rightEmpty && rightArea < ins; + + if (leftOk && rightOk) { + // remove smaller triangle + if (leftArea <= rightArea) { + swapI = jm1; + swapK = j; + } else { + swapI = j; + swapK = jp1; + } + } else if (leftOk) { + swapI = jm1; + swapK = j; + } else if (rightOk) { + swapI = j; + swapK = jp1; + } + } else { // MIN_AREA + boolean leftOk = leftEmpty && leftArea > ins; + boolean rightOk = rightEmpty && rightArea > ins; + + if (leftOk && rightOk) { + // remove larger triangle + if (leftArea >= rightArea) { + swapI = jm1; + swapK = j; + } else { + swapI = j; + swapK = jp1; + } + } else if (leftOk) { + swapI = jm1; + swapK = j; + } else if (rightOk) { + swapI = j; + swapK = jp1; + } + } + + if (swapI >= 0) { + if (wouldBeSimpleAfterAdjacentSwap(ring, swapI, swapK)) { + Collections.swap(ring, swapI, swapK); + improved = true; + // ring changed; restart scan (optional but tends to behave better) + break; + } + } + } + } while (improved); + + if (!isCCW(ring)) { + Collections.reverse(ring); + } + return toPolygon(ring, gf); + } + + private static boolean wouldBeSimpleAfterAdjacentSwap(List ring, int i, int k) { + int n = ring.size(); + // ensure i and k are adjacent cyclically + if ((n < 4) || !(mod(i + 1, n) == k || mod(k + 1, n) == i)) { + return false; + } + + // normalize so k = i+1 mod n + if (mod(i + 1, n) != k) { + int tmp = i; + i = k; + k = tmp; + } + int prev = mod(i - 1, n); + int next = mod(k + 1, n); + + Coordinate vPrev = ring.get(prev); + Coordinate vi = ring.get(i); + Coordinate vk = ring.get(k); + Coordinate vNext = ring.get(next); + + // New edges after swap: + // (vPrev - vk), (vk - vi), (vi - vNext) + // Edge (vk-vi) is same as old (vi-vk) reversed, but check anyway for collinear + // overlaps. + return edgeDoesNotCrossBoundary(ring, vPrev, vk, Set.of(prev, i)) && // replaces (vPrev-vi) + edgeDoesNotCrossBoundary(ring, vk, vi, Set.of(i, k)) && // replaces (vi-vk) + edgeDoesNotCrossBoundary(ring, vi, vNext, Set.of(k, next)); // replaces (vk-vNext) + } + + /** + * Checks segment (s1-s2) does not intersect polygon boundary edges, except at + * shared endpoints. skipEdgeIndices is a set of edge start indices to ignore. + * (Edge index e means ring[e] -> ring[e+1].) + */ + private static boolean edgeDoesNotCrossBoundary(List ring, Coordinate s1, Coordinate s2, Set skipEdgeIndices) { + int n = ring.size(); + + for (int e = 0; e < n; e++) { + if (skipEdgeIndices.contains(e)) { + continue; + } + Coordinate a = ring.get(e); + Coordinate b = ring.get((e + 1) % n); + + INTERSECTOR.computeIntersection(s1, s2, a, b); + if (!INTERSECTOR.hasIntersection()) { + continue; + } + + if (INTERSECTOR.isProper()) { + return false; + } + + // only allow intersections at shared endpoints + int m = INTERSECTOR.getIntersectionNum(); + for (int t = 0; t < m; t++) { + Coordinate ip = INTERSECTOR.getIntersection(t); + boolean shared = ip.equals2D(s1) || ip.equals2D(s2) ? (ip.equals2D(a) || ip.equals2D(b)) : false; + if (!shared) { + return false; + } + } + } + return true; + } + + private static boolean triangleEmptyWrtPoints(List pts, Coordinate a, Coordinate b, Coordinate c) { + // Empty w.r.t. S: no point (other than a,b,c) inside OR on boundary. 
+ for (Coordinate p : pts) { + if (p.equals2D(a) || p.equals2D(b) || p.equals2D(c)) { + continue; + } + if (pointInTriangleOrOnEdge(p, a, b, c)) { + return false; + } + } + return true; + } + + private static boolean pointInTriangleOrOnEdge(Coordinate p, Coordinate a, Coordinate b, Coordinate c) { + int o1 = index(a, b, p); + int o2 = index(b, c, p); + int o3 = index(c, a, p); + + boolean hasPos = (o1 > 0) || (o2 > 0) || (o3 > 0); + boolean hasNeg = (o1 < 0) || (o2 < 0) || (o3 < 0); + // if not both signs, point is inside or on boundary + if (!(hasPos && hasNeg)) { + // also ensure within bounding box (handles collinear outside segment range) + double minX = Math.min(a.x, Math.min(b.x, c.x)); + double maxX = Math.max(a.x, Math.max(b.x, c.x)); + double minY = Math.min(a.y, Math.min(b.y, c.y)); + double maxY = Math.max(a.y, Math.max(b.y, c.y)); + return p.x >= minX && p.x <= maxX && p.y >= minY && p.y <= maxY; + } + return false; + } + + private static boolean isReflex(List ring, int i) { + int n = ring.size(); + Coordinate prev = ring.get(mod(i - 1, n)); + Coordinate cur = ring.get(i); + Coordinate next = ring.get(mod(i + 1, n)); + return index(prev, cur, next) < 0; // CW turn in CCW polygon + } + + private static boolean isConvex(List ring, int i) { + int n = ring.size(); + Coordinate prev = ring.get(mod(i - 1, n)); + Coordinate cur = ring.get(i); + Coordinate next = ring.get(mod(i + 1, n)); + return index(prev, cur, next) > 0; + } + + private static int mod(int x, int n) { + int r = x % n; + return (r < 0) ? r + n : r; + } + + private static List exteriorRingToList(Polygon p) { + Coordinate[] coords = p.getExteriorRing().getCoordinates(); + // coords is closed; drop last + List ring = new ArrayList<>(coords.length - 1); + for (int i = 0; i < coords.length - 1; i++) { + ring.add(new Coordinate(coords[i])); + } + return ring; + } + } +} \ No newline at end of file diff --git a/src/main/java/micycle/pgs/commons/AztecDiamond.java b/src/main/java/micycle/pgs/commons/AztecDiamond.java new file mode 100644 index 00000000..dd1b60c1 --- /dev/null +++ b/src/main/java/micycle/pgs/commons/AztecDiamond.java @@ -0,0 +1,526 @@ +package micycle.pgs.commons; + +import org.locationtech.jts.geom.*; +import java.util.*; + +/** + * Generates a random domino tiling of an Aztec diamond as JTS geometry. + * + *

      What is an Aztec diamond?

      The Aztec diamond of order n, + * denoted A(n), is a well-known region on the square grid. Informally, it is + * the “diamond-shaped” union of unit grid squares whose centers satisfy + * {@code |x| + |y| < n} (with a suitable half-integer centering). Its boundary + * looks like a rotated square with “stair-step” edges. + * + *

      + * The region A(n) contains exactly {@code 2*n*(n+1)} unit squares. A domino + * tiling of A(n) is a perfect cover of that region by {@code n*(n+1)} + * dominoes, where each domino is a 1×2 or 2×1 rectangle made of two adjacent + * unit squares, with no overlaps and no gaps. + * + *

      Why are Aztec diamonds studied?

      Aztec diamonds are a “toy model” in + * combinatorics and statistical physics: they are simple to define, but their + * random domino tilings exhibit striking large-scale patterns. In particular, a + * uniformly random tiling of a large Aztec diamond typically develops a frozen + * outer region and a disordered inner region separated by an increasingly sharp + * boundary that approaches a circle (the “arctic circle theorem”). Because the + * model is exactly solvable, Aztec diamonds are used to study randomness, phase + * transitions, and limit shapes, and they connect to other structures such as + * non-intersecting lattice paths, determinantal point processes, and + * alternating sign matrices. + * + *

      What does this library produce?

      This class implements a standard + * domino shuffling / local growth procedure that produces a + * random domino tiling of A(n). The result is exported as a JTS + * {@link MultiPolygon} where each domino is represented as a rectangle + * {@link Polygon}. + * + *

      Coordinate system

      This implementation uses an integer cell grid with + * unit cell size, and outputs rectangles in that coordinate space: + *
        + *
+ * <ul>
+ * <li>X corresponds to column index; Y corresponds to row index.</li>
+ * <li>Y increases downward (screen-like). For a typical Cartesian Y-up system,
+ * negate Y during export or apply an affine transform (see the sketch below).</li>
+ * </ul>
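+ *
+ * <p>
+ * For example, a Y-up export can be obtained with JTS's
+ * {@code AffineTransformation} (an illustrative sketch; order and seed are
+ * arbitrary):
+ *
+ * <pre>{@code
+ * MultiPolygon tiling = AztecDiamond.generate(10, 7L);
+ * Geometry yUp = AffineTransformation.scaleInstance(1, -1).transform(tiling);
+ * }</pre>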
      + * + *

      Randomness and reproducibility

      Random choices occur during the “fill + * 2×2 blocks” step. Providing a seeded {@link Random} yields reproducible + * tilings. + * + * @author Michael Carleton + */ +public final class AztecDiamond { + + /** + * Domino "orientation" as used by the shuffling dynamics. + * + *

      + * Important: this is not simply “north/south means vertical”. In this + * algorithm, the orientation determines how a domino moves during the “move + * tiles” step, and it also determines which adjacent grid cell is paired with + * its upper-left cell. + *

      + */ + public enum Orientation { + N, S, E, W + } + + private static int dr(Orientation o) { + return switch (o) { + case N -> -1; + case S -> 1; + case E, W -> 0; + }; + } + + private static int dc(Orientation o) { + return switch (o) { + case E -> 1; + case W -> -1; + case N, S -> 0; + }; + } + + private static Orientation conflict(Orientation o) { + return switch (o) { + case N -> Orientation.S; + case S -> Orientation.N; + case E -> Orientation.W; + case W -> Orientation.E; + }; + } + + private static final class Domino { + int r; // upper-left cell row, in centered coordinates + int c; // upper-left cell col, in centered coordinates + final Orientation o; + + Domino(int r, int c, Orientation o) { + this.r = r; + this.c = c; + this.o = Objects.requireNonNull(o); + } + + void step() { + r += dr(o); + c += dc(o); + } + + /** + * @return true if this domino occupies two cells stacked vertically (1×2); + * false if it occupies two cells horizontally (2×1). + */ + boolean isVerticalShape() { + // Mirrors the Python implementation: + // E/W occupy (r,c) and (r+1,c); N/S occupy (r,c) and (r,c+1) + return o == Orientation.E || o == Orientation.W; + } + } + + private final GeometryFactory gf; + private final Random rnd; + + private int order; // current order during growth + private boolean[][] mask; // inside-diamond indicator for unit cells + private Domino[][] grid; // per-cell reference to the occupying domino + private final ArrayList tiles = new ArrayList<>(); + + /** + * Constructs a generator and immediately produces a random tiling of the Aztec + * diamond A(order). + * + *

      + * The generation starts from A(1) and repeatedly applies one domino-shuffling + * "growth step" until the requested order is reached. + *

      + * + * @param order the target order n for A(n); must be {@code > 0}. + * @param gf JTS geometry factory used to create polygons; must not be null. + * @param rnd source of randomness used by the algorithm; must not be null. + * Supply a seeded instance for reproducible output. + * @throws IllegalArgumentException if {@code order <= 0}. + * @throws NullPointerException if {@code gf} or {@code rnd} is null. + */ + public AztecDiamond(int order, GeometryFactory gf, Random rnd) { + if (order <= 0) + throw new IllegalArgumentException("order must be > 0"); + this.gf = Objects.requireNonNull(gf, "gf"); + this.rnd = Objects.requireNonNull(rnd, "rnd"); + + // Start at order 1, then grow to requested order. + this.order = 1; + rebuildMaskAndGrid(); + fillTwoByTwos(); // fill A(1) + while (this.order < order) { + stepTileGeneration(); + } + } + + /** + * Convenience factory that generates a random tiling of A(order) using a + * default {@link GeometryFactory} and a deterministic seed. + * + *

      + * This is useful for one-liners and tests. + *
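+ *
+ * <p>
+ * For example (order and seed below are arbitrary):
+ *
+ * <pre>{@code
+ * MultiPolygon tiling = AztecDiamond.generate(20, 12345L);
+ * System.out.println(tiling.getNumGeometries()); // 20 * (20 + 1) = 420 dominoes
+ * }</pre>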

      + * + * @param order the target order n for A(n); must be {@code > 0}. + * @param seed seed for the pseudo-random number generator. + * @return a {@link MultiPolygon} where each component polygon is one domino + * rectangle. + */ + public static MultiPolygon generate(int order, long seed) { + GeometryFactory gf = new GeometryFactory(); + AztecDiamond gen = new AztecDiamond(order, gf, new Random(seed)); + return gen.toMultiPolygon(1.0, 0.0, 0.0); + } + + /** + * Exports the current tiling as a JTS {@link MultiPolygon}. + * + *

      + * The output contains one rectangle {@link Polygon} per domino. Each rectangle + * is axis-aligned in the internal grid coordinate system. + *

      + * + *

      Per-domino “color/class” stored in {@code userData}

      + *

      + * Each returned domino polygon has an {@code Integer} stored in + * {@link Geometry#getUserData()} (set via + * {@link Geometry#setUserData(Object)}). This integer encodes the domino’s + * type as used by the underlying Aztec-diamond tiling algorithm. + *

      + * + *

      + * In plain terms: the algorithm imagines that every domino belongs to one of + * four “classes”. The class says which way that domino would like to + * slide during the shuffling step (up, down, left, or right), and it also + * determines which neighboring domino it can cancel with. This is the same kind + * of 4-way labeling that is often visualized by coloring dominoes in different + * colors. + *

      + * + *

      + * Encoding (matching the original Python reference): + *

      + *
        + *
+ * <ul>
+ * <li>{@code 0} = N (would move “north” / up)</li>
+ * <li>{@code 1} = S (would move “south” / down)</li>
+ * <li>{@code 2} = E (would move “east” / right)</li>
+ * <li>{@code 3} = W (would move “west” / left)</li>
+ * </ul>
      + * + *

      + * Note: this “class” is a property of the shuffling dynamics. It is closely + * related to the common “checkerboard coloring” classification (vertical vs + * horizontal, and which checkerboard color is on the top/left half of the + * domino), but the exact correspondence depends on the coordinate and + * checkerboard conventions used. + *
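+ *
+ * <p>
+ * Sketch of reading the per-domino class (variable names and arguments are
+ * illustrative):
+ *
+ * <pre>{@code
+ * AztecDiamond gen = new AztecDiamond(8, new GeometryFactory(), new Random(1));
+ * MultiPolygon tiling = gen.toMultiPolygon(10, 0, 0);
+ * for (int i = 0; i < tiling.getNumGeometries(); i++) {
+ *     int cls = (Integer) tiling.getGeometryN(i).getUserData(); // 0=N, 1=S, 2=E, 3=W
+ * }
+ * }</pre>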

      + * + * @param cellSize scale factor applied to each unit grid cell (e.g., 10.0 makes + * each cell 10×10). + * @param originX translation applied to exported X coordinates. + * @param originY translation applied to exported Y coordinates. + * @return a {@link MultiPolygon} consisting of {@code n*(n+1)} domino polygons + * for order n. + */ + public MultiPolygon toMultiPolygon(double cellSize, double originX, double originY) { + Polygon[] polys = new Polygon[tiles.size()]; + int i = 0; + for (Domino d : tiles) { + polys[i++] = dominoPolygon(d, cellSize, originX, originY); + } + return gf.createMultiPolygon(polys); + } + + /** + * Performs one full domino-shuffling growth step: + *
        + *
+ * <ol>
+ * <li>increase the order (A(k) → A(k+1))</li>
+ * <li>remove ("annihilate") adjacent dominoes that would move into each
+ * other</li>
+ * <li>move remaining dominoes one unit in their orientation direction</li>
+ * <li>fill all newly created 2×2 holes randomly with two dominoes</li>
+ * </ol>
      + * + *

      + * This method is internal; the constructor repeatedly calls it until the target + * order is reached. + *

      + */ + private void stepTileGeneration() { + increaseOrder(); + cancelOpposingMovers(); + moveTiles(); + fillTwoByTwos(); + } + + /** + * Rebuilds the Aztec diamond mask and empties the occupancy grid for the + * current {@link #order}. + * + *

      + * The mask marks which unit cells belong to A(order). We use the standard + * characterization in terms of cell centers around the point + * ({@code order-0.5}, {@code order-0.5}). + *

      + */ + private void rebuildMaskAndGrid() { + int size = 2 * order; + mask = new boolean[size][size]; + grid = new Domino[size][size]; + + // Aztec diamond membership test on unit cells using cell centers. + double center = order - 0.5; + for (int r = 0; r < size; r++) { + for (int c = 0; c < size; c++) { + double x = r - center; + double y = c - center; + mask[r][c] = (Math.abs(x) + Math.abs(y) < order); + } + } + } + + /** + * Increases the order by one and embeds the previous occupancy grid into the + * new one. + * + *

      + * This corresponds to padding the old 2n×2n grid into the center of the new + * 2(n+1)×2(n+1) grid. It mirrors the Python operation + * {@code new[1:-1,1:-1] = old}. + *

      + */ + private void increaseOrder() { + int oldOrder = order; + Domino[][] oldGrid = grid; + + order = oldOrder + 1; + rebuildMaskAndGrid(); + + int oldSize = 2 * oldOrder; + for (int r = 0; r < oldSize; r++) { + System.arraycopy(oldGrid[r], 0, grid[r + 1], 1, oldSize); + } + } + + /** + * Cancels pairs of dominoes that would move into each other (opposing movers). + * + *

      + * In domino shuffling, dominoes carry a "movement direction". If a domino would + * move into a neighboring cell that is occupied by a domino moving in the exact + * opposite direction, the pair is removed. This creates holes that are later + * re-filled in 2×2 blocks. + *

      + */ + private void cancelOpposingMovers() { + HashSet removed = new HashSet<>(); + + int size = 2 * order; + for (int r = 0; r < size; r++) { + for (int c = 0; c < size; c++) { + if (!mask[r][c]) + continue; + Domino d = grid[r][c]; + if (d == null || removed.contains(d)) + continue; + + int r2 = r + dr(d.o); + int c2 = c + dc(d.o); + if (r2 < 0 || r2 >= size || c2 < 0 || c2 >= size) + continue; + + Domino d2 = grid[r2][c2]; + if (d2 == null || removed.contains(d2)) + continue; + + if (d2.o == conflict(d.o)) { + removed.add(d); + removed.add(d2); + clearDominoFromGrid(d); + clearDominoFromGrid(d2); + } + } + } + + if (!removed.isEmpty()) { + tiles.removeIf(removed::contains); + } + } + + /** + * Removes references to a domino from the current occupancy grid. + * + * @param d the domino to clear. + */ + private void clearDominoFromGrid(Domino d) { + int br = d.r + order; + int bc = d.c + order; + if (0 <= br && br < grid.length && 0 <= bc && bc < grid.length && grid[br][bc] == d) { + grid[br][bc] = null; + } + + if (d.isVerticalShape()) { + int r2 = br + 1, c2 = bc; + if (0 <= r2 && r2 < grid.length && grid[r2][c2] == d) + grid[r2][c2] = null; + } else { + int r2 = br, c2 = bc + 1; + if (0 <= c2 && c2 < grid.length && grid[r2][c2] == d) + grid[r2][c2] = null; + } + } + + /** + * Moves every domino one unit in its movement direction and rebuilds the + * occupancy grid. + * + *

      + * This is the "shuffling" part: after cancellations, remaining dominoes are + * shifted. The algorithm guarantees that after cancellation, these shifts can + * be applied without creating overlaps inside the diamond mask. + *

      + */ + private void moveTiles() { + Domino[][] newGrid = new Domino[2 * order][2 * order]; + + for (Domino d : tiles) { + d.step(); + putDominoInGrid(d, newGrid); + } + + grid = newGrid; + } + + /** + * Places a domino into an occupancy grid by marking its two unit cells. + * + * @param d domino to place. + * @param g target grid. + */ + private void putDominoInGrid(Domino d, Domino[][] g) { + int br = d.r + order; + int bc = d.c + order; + g[br][bc] = d; + + if (d.isVerticalShape()) { + g[br + 1][bc] = d; + } else { + g[br][bc + 1] = d; + } + } + + /** + * Fills all empty cells (holes) inside the Aztec diamond mask by repeatedly + * selecting an empty cell and filling the surrounding 2×2 block with two + * dominoes in one of two random patterns. + * + *

      + * This is the only randomized step in the growth iteration. Each 2×2 hole is + * filled either as two vertical dominoes side-by-side or as two horizontal + * dominoes stacked. + *

      + */ + private void fillTwoByTwos() { + int size = 2 * order; + + while (true) { + int rr = -1, cc = -1; + + // Find any empty cell inside the mask. + outer: for (int r = 0; r < size; r++) { + for (int c = 0; c < size; c++) { + if (mask[r][c] && grid[r][c] == null) { + rr = r; + cc = c; + break outer; + } + } + } + if (rr < 0) + break; + + // The shuffling algorithm ensures holes come in 2x2 blocks. Keep a safety + // check. + if (rr + 1 >= size || cc + 1 >= size) { + throw new IllegalStateException("Unexpected hole too close to boundary at (" + rr + "," + cc + ")"); + } + + if (rnd.nextBoolean()) { + // Two vertical-shape dominoes side-by-side + Domino a = new Domino(rr - order, cc - order, Orientation.W); + Domino b = new Domino(rr - order, (cc - order) + 1, Orientation.E); + + tiles.add(a); + tiles.add(b); + + putDominoInGrid(a, grid); + putDominoInGrid(b, grid); + } else { + // Two horizontal-shape dominoes stacked + Domino a = new Domino(rr - order, cc - order, Orientation.N); + Domino b = new Domino((rr - order) + 1, cc - order, Orientation.S); + + tiles.add(a); + tiles.add(b); + + putDominoInGrid(a, grid); + putDominoInGrid(b, grid); + } + } + } + + private static int classCode(Orientation o) { + // Match Python: ORIENTATIONS = N, S, E, W = range(4) + return switch (o) { + case N -> 0; + case S -> 1; + case E -> 2; + case W -> 3; + }; + } + + /** + * Converts one domino into a rectangle polygon. + * + * @param d domino to export. + * @param cellSize scale factor for unit cells. + * @param originX translation in X. + * @param originY translation in Y. + * @return an axis-aligned rectangle polygon covering the domino area. + */ + private Polygon dominoPolygon(Domino d, double cellSize, double originX, double originY) { + boolean vertical = d.isVerticalShape(); + int wCells = vertical ? 1 : 2; + int hCells = vertical ? 2 : 1; + + double x1 = originX + d.c * cellSize; + double y1 = originY + d.r * cellSize; + double x2 = x1 + wCells * cellSize; + double y2 = y1 + hCells * cellSize; + + Coordinate[] ring = new Coordinate[] { new Coordinate(x1, y1), new Coordinate(x2, y1), new Coordinate(x2, y2), new Coordinate(x1, y2), + new Coordinate(x1, y1) }; + + Polygon p = gf.createPolygon(ring); + + // Encode the “color/class” as an integer in userData: + p.setUserData(classCode(d.o)); // Integer 0..3 + + return p; + } + + /** + * Example usage: prints WKT for a random tiling. 
+ * + * @param args ignored + */ + public static void main(String[] args) { + int n = 20; + MultiPolygon mp = AztecDiamond.generate(n, 12345L); + System.out.println("Dominoes: " + mp.getNumGeometries()); // should be n*(n+1) + System.out.println(mp); + } +} \ No newline at end of file diff --git a/src/main/java/micycle/pgs/commons/PolygonDecomposition.java b/src/main/java/micycle/pgs/commons/BayazitConvexPartitioner.java similarity index 98% rename from src/main/java/micycle/pgs/commons/PolygonDecomposition.java rename to src/main/java/micycle/pgs/commons/BayazitConvexPartitioner.java index 8be9b243..ada6489b 100644 --- a/src/main/java/micycle/pgs/commons/PolygonDecomposition.java +++ b/src/main/java/micycle/pgs/commons/BayazitConvexPartitioner.java @@ -22,12 +22,15 @@ * @author William Bittle * @author Refactored for JTS by Michael Carleton * @see Mark Bayazits Algorithm + * @deprecated */ -public class PolygonDecomposition { +public class BayazitConvexPartitioner { + + // algorithm described in https://mpen.ca/406/bayazit private static final GeometryFactory GEOM_FACTORY = new GeometryFactory(new PrecisionModel(PrecisionModel.FLOATING_SINGLE)); - private PolygonDecomposition() { + private BayazitConvexPartitioner() { } public static List decompose(Polygon polygon) { diff --git a/src/main/java/micycle/pgs/commons/ContourRegularization.java b/src/main/java/micycle/pgs/commons/ContourRegularization.java new file mode 100644 index 00000000..77427df7 --- /dev/null +++ b/src/main/java/micycle/pgs/commons/ContourRegularization.java @@ -0,0 +1,1278 @@ +package micycle.pgs.commons; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.List; +import java.util.Objects; + +import org.locationtech.jts.algorithm.CGAlgorithmsDD; +import org.locationtech.jts.geom.Coordinate; +import org.locationtech.jts.geom.Geometry; +import org.locationtech.jts.geom.GeometryFactory; +import org.locationtech.jts.geom.LineSegment; +import org.locationtech.jts.geom.LineString; +import org.locationtech.jts.geom.LinearRing; +import org.locationtech.jts.geom.Polygon; + +/** + * Contour regularization for JTS geometries. + * + *

      Regularities reinforced

      + *
        + *
+ * <ul>
+ * <li>Parallelism (by snapping edge orientations via the direction model)</li>
+ * <li>Orthogonality (optional insertion step)</li>
+ * <li>Collinearity (merge consecutive collinear edges)</li>
+ * </ul>
      + * + *

      Notes / limitations

      + *
        + *
+ * <ul>
+ * <li>No guarantee of topological validity (self-intersections can occur).</li>
+ * <li>Best suited to man-made / rectilinear-ish shapes.</li>
+ * </ul>
      + * + *

      + * Conceptually inspired by CGAL Shape Regularization package. + *
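+ *
+ * <p>
+ * Minimal usage sketch (the input ring below is an arbitrary, slightly skewed
+ * quadrilateral):
+ *
+ * <pre>{@code
+ * GeometryFactory gf = new GeometryFactory();
+ * Polygon rough = gf.createPolygon(new Coordinate[] { new Coordinate(0, 0),
+ *     new Coordinate(10, 0.4), new Coordinate(10.3, 8), new Coordinate(-0.2, 7.8),
+ *     new Coordinate(0, 0) });
+ * Polygon regular = ContourRegularization.regularize(rough, Parameters.defaults());
+ * }</pre>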

      + * + * @author Michael Carleton + */ +public final class ContourRegularization { + + private ContourRegularization() { + } + + /** + * Parameters controlling detection/merge thresholds and optional steps. + * + *

      + * Defaults are chosen to be similar in spirit to CGAL defaults, but are not + * numerically identical. + *
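+ *
+ * <p>
+ * Sketch of overriding selected thresholds via the builder (the values shown
+ * are arbitrary):
+ *
+ * <pre>{@code
+ * Parameters params = Parameters.builder()
+ *     .parallelAngleThresholdDeg(8)
+ *     .maximumOffset(1.0)
+ *     .insertOrthogonalWhenParallel(false)
+ *     .build();
+ * }</pre>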

      + */ + public static final class Parameters { + /** + * Angle threshold (degrees) used when testing "near parallel" between + * consecutive edges. Typical values: 3..10. + */ + public final double parallelAngleThresholdDeg; + + /** + * Maximum orthogonal distance between two consecutive parallel edges to + * consider them collinear (mergeable). Units are coordinate units. + */ + public final double maximumOffset; + + /** + * Edges shorter than this length are dropped before optimization. + */ + public final double minEdgeLength; + + /** + * If true, inserts an orthogonal edge when two consecutive edges are parallel. + */ + public final boolean insertOrthogonalWhenParallel; + + /** + * If true, for open contours, output keeps the original first and last vertex + * (XY). If false, endpoints can move due to rotation / reconnection. + */ + public final boolean preserveOpenEndpoints; + + public final ContourDirections directions; + + private Parameters(Builder b) { + this.parallelAngleThresholdDeg = b.parallelAngleThresholdDeg; + this.maximumOffset = b.maximumOffset; + this.minEdgeLength = b.minEdgeLength; + this.insertOrthogonalWhenParallel = b.insertOrthogonalWhenParallel; + this.preserveOpenEndpoints = b.preserveOpenEndpoints; + this.directions = b.directions; + } + + public static Parameters defaults() { + return builder().build(); + } + + public static Builder builder() { + return new Builder(); + } + + public static final class Builder { + private double parallelAngleThresholdDeg = 5.0; + private double maximumOffset = 0.5; + private double minEdgeLength = 1e-9; + private boolean insertOrthogonalWhenParallel = true; + private boolean preserveOpenEndpoints = true; + private ContourDirections directions = new LongestEdgeDirections(); + + public Builder parallelAngleThresholdDeg(double deg) { + this.parallelAngleThresholdDeg = deg; + return this; + } + + public Builder maximumOffset(double v) { + this.maximumOffset = v; + return this; + } + + public Builder minEdgeLength(double v) { + this.minEdgeLength = v; + return this; + } + + public Builder insertOrthogonalWhenParallel(boolean v) { + this.insertOrthogonalWhenParallel = v; + return this; + } + + public Builder preserveOpenEndpoints(boolean v) { + this.preserveOpenEndpoints = v; + return this; + } + + public Parameters build() { + return new Parameters(this); + } + + public Builder directions(ContourDirections directions) { + this.directions = directions; + return this; + } + } + } + + public interface ContourDirections { + + /** + * Creates an initialized direction model for a specific contour. + * + *

      + * Recommended contract: return a NEW instance (do not mutate and return + * {@code this}), so that a single {@link Parameters} instance can be reused + * safely across calls/threads. + *

      + * + * @param coordinates ordered coordinates (for closed rings may include + * duplicate last==first) + * @param closed whether to treat the contour as closed + * @return initialized direction model for this contour + */ + ContourDirections init(Coordinate[] coordinates, boolean closed); + + /** + * Orients an edge (segment) in-place toward the principal direction assigned to + * that edge. + * + * @param edgeIndex edge index in [0..numEdges-1] + * @param segment segment to mutate + */ + void orient(int edgeIndex, LineSegment segment); + } + + /** + * Default direction model: uses the orientation of the longest edge as the + * principal direction. + */ + public static final class LongestEdgeDirections implements ContourDirections { + + private final double refOrientationDeg; // only meaningful after init() + private final boolean initialized; + + /** Prototype constructor (no contour yet). */ + public LongestEdgeDirections() { + this.refOrientationDeg = 0.0; + this.initialized = false; + } + + private LongestEdgeDirections(double refOrientationDeg) { + this.refOrientationDeg = refOrientationDeg; + this.initialized = true; + } + + @Override + public ContourDirections init(Coordinate[] coordinates, boolean closed) { + Coordinate[] pts = closed ? sanitizeClosed(coordinates) : copy(coordinates); + double ref = computeLongestEdgeOrientationDeg(pts, closed); + return new LongestEdgeDirections(ref); + } + + @Override + public void orient(int edgeIndex, LineSegment segment) { + if (!initialized) + throw new IllegalStateException("Not initialized; call init() first"); + double segOri = Geometry2D.orientationDeg(segment); + double rot = Geometry2D.mod90AngleDifferenceDeg(segOri, refOrientationDeg); + Geometry2D.rotateSegmentCCWAroundMidpointInPlace(segment, rot); + } + + // same helper as before (use your existing) + private static double computeLongestEdgeOrientationDeg(Coordinate[] pts, boolean closed) { + double bestLen2 = -1.0, bestOri = 0.0; + int n = pts.length, limit = closed ? n : (n - 1); + for (int i = 0; i < limit; i++) { + Coordinate a = pts[i]; + Coordinate b = pts[closed ? (i + 1) % n : (i + 1)]; + double dx = b.x - a.x, dy = b.y - a.y; + double len2 = dx * dx + dy * dy; + if (len2 > bestLen2) { + bestLen2 = len2; + bestOri = Geometry2D.orientationDeg(new LineSegment(a, b)); + } + } + return bestOri; + } + } + + /** + * Principal-direction model that automatically infers one or more dominant + * directions from the contour. + * + *

      Purpose

      + *

      + * This strategy estimates a small set of dominant axes (principal directions) + * from the contour itself, then assigns each edge to one of these axes, + * enabling subsequent snapping (rotation) of edges to the inferred structure. + *

      + * + *

      Algorithm (CGAL-inspired, simplified)

      + *
        + *
+ * <ol>
+ * <li>Compute all edge orientations and lengths.</li>
+ * <li>Mark edges shorter than {@link Options#minimumLength} as invalid for
+ * seeding axes.</li>
+ * <li>Sort edges by length descending.</li>
+ * <li>Iteratively pick the next longest unused valid edge as a new axis
+ * (seed).</li>
+ * <li>Assign other unused valid edges to that axis if they are near-parallel or
+ * near-orthogonal (within {@link Options#maximumAngleDeg}).</li>
+ * <li>Edges not assigned during axis discovery are filled in by propagating the
+ * nearest assigned axis along the contour ("unify along contour"), ensuring
+ * every edge has an axis index.</li>
+ * <li>If {@link Options#adjustDirections} is enabled, each discovered axis is
+ * slightly rotated by the average residual (mod-90) error of the edges assigned
+ * to it, yielding a better fit.</li>
+ * </ol>
      + * + *

      How this differs from {@link UserDefinedDirections}

      + *
        + *
+ * <ul>
+ * <li>{@code MultipleDirections} discovers axes from the contour data.</li>
+ * <li>{@code UserDefinedDirections} uses axes supplied by the user.</li>
+ * </ul>
      + * + *

      Behaviour with many edges

      + *

      + * If the contour has 100 edges, this strategy will typically infer a small + * number of axes (often 1–4, depending on shape and thresholds). Each of the + * 100 edges is assigned to one inferred axis and then snapped to it during + * {@link #orient(int, LineSegment)}. + *

      + * + *

      Fallback

      + *

      + * If fewer than two meaningful axes are discovered (e.g., the contour is mostly + * one-directional), the strategy falls back to a single axis based on the + * longest edge and assigns all edges to it. + *
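+ *
+ * <p>
+ * Usage sketch (option values are illustrative; {@code inputShape} stands for
+ * the geometry to regularize):
+ *
+ * <pre>{@code
+ * ContourDirections dirs = new MultipleDirections(
+ *     MultipleDirections.Options.builder().maximumAngleDeg(10).minimumLength(2).build());
+ * Parameters params = Parameters.builder().directions(dirs).build();
+ * Geometry regular = ContourRegularization.regularize(inputShape, params);
+ * }</pre>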

      + */ + public static final class MultipleDirections implements ContourDirections { + + /** + * Configuration for {@link MultipleDirections}. + */ + public static final class Options { + /** + * Maximum angle deviation (degrees) used during axis discovery. + * + *

      + * An edge is considered compatible with an axis if it is either: + *

        + *
+ * <ul>
+ * <li>near-parallel: angle ≤ maximumAngleDeg</li>
+ * <li>near-orthogonal: angle ≥ 90 - maximumAngleDeg</li>
+ * </ul>
      + *

      + */ + public final double maximumAngleDeg; + /** + * Minimum edge length required for an edge to be used as an axis seed and to + * participate in axis estimation. Shorter edges are treated as weak/noisy + * evidence. + */ + public final double minimumLength; + /** + * If true, each inferred axis is readjusted by the average residual angle of + * its assigned edges, improving fit to the input geometry. + */ + public final boolean adjustDirections; + + private Options(Builder b) { + this.maximumAngleDeg = b.maximumAngleDeg; + this.minimumLength = b.minimumLength; + this.adjustDirections = b.adjustDirections; + } + + public static Options defaults() { + return builder().build(); + } + + public static Builder builder() { + return new Builder(); + } + + public static final class Builder { + private double maximumAngleDeg = 10.0; + private double minimumLength = 3.0; + private boolean adjustDirections = true; + + public Builder maximumAngleDeg(double v) { + this.maximumAngleDeg = v; + return this; + } + + public Builder minimumLength(double v) { + this.minimumLength = v; + return this; + } + + public Builder adjustDirections(boolean v) { + this.adjustDirections = v; + return this; + } + + public Options build() { + return new Options(this); + } + } + } + + private final Options opt; // prototype options + + // initialized state: + private final double[] axesDeg; + private final int[] assigned; + private final boolean initialized; + + /** + * Creates a prototype multiple-direction model using the provided options. + * + *

      + * The actual axes and edge assignments are computed during + * {@link #init(Coordinate[], boolean)}. + *

      + * + * @param opt axis discovery and adjustment options + */ + public MultipleDirections(Options opt) { + this.opt = opt; + this.axesDeg = null; + this.assigned = null; + this.initialized = false; + } + + private MultipleDirections(Options opt, double[] axesDeg, int[] assigned) { + this.opt = opt; + this.axesDeg = axesDeg; + this.assigned = assigned; + this.initialized = true; + } + + @Override + public ContourDirections init(Coordinate[] coordinates, boolean closed) { + Coordinate[] pts = closed ? sanitizeClosed(coordinates) : copy(coordinates); + int edgeCount = closed ? pts.length : Math.max(0, pts.length - 1); + + double[] edgeOri = new double[edgeCount]; + double[] edgeLen = new double[edgeCount]; + boolean[] valid = new boolean[edgeCount]; + + for (int e = 0; e < edgeCount; e++) { + LineSegment s = edgeSegment(pts, closed, e); + edgeOri[e] = Geometry2D.orientationDeg(s); + edgeLen[e] = s.getLength(); + valid[e] = edgeLen[e] >= opt.minimumLength; + } + + Integer[] idx = new Integer[edgeCount]; + for (int i = 0; i < edgeCount; i++) + idx[i] = i; + Arrays.sort(idx, (a, b) -> Double.compare(edgeLen[b], edgeLen[a])); + + int[] assigned = new int[edgeCount]; + Arrays.fill(assigned, -1); + boolean[] used = new boolean[edgeCount]; + + ArrayList axes = new ArrayList<>(); + int groupIndex = 0; + + while (true) { + int seed = -1; + for (int e : idx) { + if (!used[e] && valid[e]) { + seed = e; + break; + } + } + if (seed == -1) + break; + + double axis = edgeOri[seed]; + axes.add(axis); + assigned[seed] = groupIndex; + used[seed] = true; + + for (int e = 0; e < edgeCount; e++) { + if (e == seed || used[e] || !valid[e]) + continue; + if (satisfiesAxisCondition(edgeOri[seed], edgeOri[e], opt.maximumAngleDeg)) { + assigned[e] = groupIndex; + used[e] = true; + } + } + groupIndex++; + } + + if (axes.size() <= 1) { + // fallback to longest + int longest = 0; + for (int e = 1; e < edgeCount; e++) + if (edgeLen[e] > edgeLen[longest]) + longest = e; + double[] ax = new double[] { normalize180(edgeOri[longest]) }; + int[] as = new int[edgeCount]; + Arrays.fill(as, 0); + return new MultipleDirections(opt, ax, as); + } + + unifyAndCorrectAssignments(assigned, closed); + + double[] axesDeg = new double[axes.size()]; + for (int i = 0; i < axesDeg.length; i++) + axesDeg[i] = normalize180(axes.get(i)); + + if (opt.adjustDirections) { + double[] sum = new double[axesDeg.length]; + double[] cnt = new double[axesDeg.length]; + + for (int e = 0; e < edgeCount; e++) { + if (!valid[e]) + continue; + int a = assigned[e]; + double resid = Geometry2D.mod90AngleDifferenceDeg(edgeOri[e], axesDeg[a]); + sum[a] += resid; + cnt[a] += 1.0; + } + for (int a = 0; a < axesDeg.length; a++) { + if (cnt[a] == 0.0) + continue; + axesDeg[a] = normalize180(axesDeg[a] + sum[a] / cnt[a]); + } + } + + return new MultipleDirections(opt, axesDeg, assigned); + } + + @Override + public void orient(int edgeIndex, LineSegment segment) { + if (!initialized) + throw new IllegalStateException("Not initialized; call init() first"); + int a = assigned[edgeIndex]; + double segOri = Geometry2D.orientationDeg(segment); + double rot = Geometry2D.mod90AngleDifferenceDeg(segOri, axesDeg[a]); + Geometry2D.rotateSegmentCCWAroundMidpointInPlace(segment, rot); + } + + private static boolean satisfiesAxisCondition(double refOriDeg, double segOriDeg, double maxAngleDeg) { + double a = angle0to90(refOriDeg, segOriDeg); + return a <= maxAngleDeg || a >= (90.0 - maxAngleDeg); + } + } + + /** + * Principal-direction model where the user supplies one or more 
desired axis + * directions. + * + *

      How axes are interpreted

      + *

      + * The {@code axesDeg} varargs defines a set of global principal axes + * (orientations) in degrees. Each value represents an axis direction normalized + * into the range {@code [0,180)}. For each axis, the model also implicitly + * accepts its orthogonal direction (axis + 90°), matching the CGAL concept of + * "parallel or orthogonal" fits. + *

      + * + *

      Edge assignment (important)

      + *

      + * Let the contour have {@code E} edges (segments), and you provide {@code M} + * axes in {@code axesDeg}. This does not mean you must provide {@code E} + * axes. Instead, each edge chooses an axis by classification: + *

      + * + *
        + *
+ * <ol>
+ * <li>Compute the edge orientation in {@code [0,180)}.</li>
+ * <li>Scan axes in the order provided and select the first axis for
+ * which the edge is either:
+ * <ul>
+ * <li>near-parallel to the axis (angle ≤ {@code maxSnapAngleDeg}), or</li>
+ * <li>near-orthogonal to the axis (angle ≥ {@code 90 - maxSnapAngleDeg}).</li>
+ * </ul>
+ * </li>
+ * <li>If no axis matches, the edge is initially unassigned; unassigned edges
+ * are then filled in by propagating the nearest assigned axis along the contour
+ * (a "unify along contour" pass), so that every edge ends up assigned.</li>
+ * </ol>
      + * + *

      + * Example: If you pass two axes {@code axesDeg = {0, 45}} and the + * contour has 100 edges: each of the 100 edges will be assigned to axis 0° or + * 45° according to the test above. No per-edge axis array is required. + *
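+ *
+ * <p>
+ * That example as code (a sketch; the 5° snap tolerance is arbitrary and
+ * {@code contour} stands for the geometry to regularize):
+ *
+ * <pre>{@code
+ * ContourDirections axes = new UserDefinedDirections(5.0, 0, 45);
+ * Parameters params = Parameters.builder().directions(axes).build();
+ * Geometry snapped = ContourRegularization.regularize(contour, params);
+ * }</pre>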

      + * + *

      Orientation step

      + *

      + * When {@link #orient(int, LineSegment)} is called for an edge, the edge is + * rotated around its midpoint by a signed "mod-90" difference so it becomes + * exactly aligned with the chosen axis (or its orthogonal), i.e., it snaps to + * the nearest of {axis, axis+90}. + *

      + * + *

      Notes

      + *
        + *
+ * <ul>
+ * <li>If {@code axesDeg} is empty, this strategy cannot classify edges
+ * meaningfully. The implementation should either behave as a no-op or fall back
+ * to a default axis (implementation-defined).</li>
+ * <li>Because axes are checked in order, earlier axes have priority when an
+ * edge could match multiple axes.</li>
+ * </ul>
      + */ + public static final class UserDefinedDirections implements ContourDirections { + + private final double maxSnapAngleDeg; + private final double[] axesDeg; // prototype options + + // initialized state: + private final int[] assigned; // per edge + private final boolean initialized; + + /** + * Creates a prototype user-defined axis model. + * + * @param maxSnapAngleDeg maximum angular deviation (degrees) for an edge to be + * considered parallel or orthogonal to an axis. Typical + * values: 3..10. + * @param axesDeg one or more axis orientations in degrees. Each edge of + * the contour is classified against these axes (see + * class Javadoc). You do not need to supply one + * axis per edge. + */ + public UserDefinedDirections(double maxSnapAngleDeg, double... axesDeg) { + this.maxSnapAngleDeg = maxSnapAngleDeg; + this.axesDeg = new double[axesDeg.length]; + for (int i = 0; i < axesDeg.length; i++) + this.axesDeg[i] = normalize180(axesDeg[i]); + + this.assigned = null; + this.initialized = false; + } + + private UserDefinedDirections(double maxSnapAngleDeg, double[] axesDeg, int[] assigned) { + this.maxSnapAngleDeg = maxSnapAngleDeg; + this.axesDeg = axesDeg; + this.assigned = assigned; + this.initialized = true; + } + + @Override + public ContourDirections init(Coordinate[] coordinates, boolean closed) { + Coordinate[] pts = closed ? sanitizeClosed(coordinates) : copy(coordinates); + int edgeCount = closed ? pts.length : Math.max(0, pts.length - 1); + + int[] assigned = new int[edgeCount]; + Arrays.fill(assigned, -1); + + for (int e = 0; e < edgeCount; e++) { + LineSegment seg = edgeSegment(pts, closed, e); + double segOri = Geometry2D.orientationDeg(seg); + + for (int d = 0; d < axesDeg.length; d++) { + if (satisfiesAxisCondition(segOri, axesDeg[d], maxSnapAngleDeg)) { + assigned[e] = d; + break; + } + } + } + + if (axesDeg.length == 0 || allUnassigned(assigned)) { + Arrays.fill(assigned, 0); // fallback + } else { + unifyAndCorrectAssignments(assigned, closed); + } + + // axesDeg is already normalized; safe to share (immutable) + return new UserDefinedDirections(maxSnapAngleDeg, axesDeg, assigned); + } + + @Override + public void orient(int edgeIndex, LineSegment segment) { + if (!initialized) + throw new IllegalStateException("Not initialized; call init() first"); + int d = assigned[edgeIndex]; + double segOri = Geometry2D.orientationDeg(segment); + double rot = Geometry2D.mod90AngleDifferenceDeg(segOri, axesDeg[d]); + Geometry2D.rotateSegmentCCWAroundMidpointInPlace(segment, rot); + } + + private static boolean satisfiesAxisCondition(double segOriDeg, double axisOriDeg, double maxAngleDeg) { + double a = angle0to90(segOriDeg, axisOriDeg); + return a <= maxAngleDeg || a >= (90.0 - maxAngleDeg); + } + + private static boolean allUnassigned(int[] a) { + for (int v : a) + if (v != -1) + return false; + return true; + } + } + + /** + * Regularizes a geometry using {@link Parameters#defaults()} and + * {@link LongestEdgeDirections}. + * + * @param input {@link LineString}, {@link LinearRing}, or {@link Polygon} + * @return a new geometry of the same runtime type + */ + public static Geometry regularize(Geometry input) { + return regularize(input, Parameters.defaults()); + } + + /** + * Regularizes a geometry. + * + *

+ * <p>
+ * Supported types:
+ * <ul>
+ * <li>{@link LineString} (open or closed)</li>
+ * <li>{@link LinearRing}</li>
+ * <li>{@link Polygon} (shell + holes each regularized independently)</li>
+ * </ul>
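+ * <p>
+ * Usage sketch (input construction elided; only overloads defined in this class
+ * are used):
+ *
+ * <pre>
+ * // defaults: Parameters.defaults() with LongestEdgeDirections
+ * Geometry squared = regularize(inputPolygon);
+ * // or with an explicit configuration
+ * Geometry squared2 = regularize(inputPolygon, Parameters.defaults());
+ * </pre>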

      + * + * @param input geometry to regularize + * @param params configuration + * @return a new geometry + */ + public static Geometry regularize(Geometry input, Parameters params) { + Objects.requireNonNull(input, "input"); + Objects.requireNonNull(params, "params"); + + if (input instanceof Polygon p) { + return regularize(p, params); + } + if (input instanceof LinearRing r) { + return regularize(r, params); + } + if (input instanceof LineString ls) { + return regularize(ls, params); + } + throw new IllegalArgumentException("Unsupported geometry type: " + input.getGeometryType()); + } + + /** + * Regularize a {@link LineString}. If the LineString is closed, it is treated + * as a ring. + */ + public static LineString regularize(LineString line, Parameters params) { + Objects.requireNonNull(line, "line"); + GeometryFactory gf = line.getFactory(); + + boolean closed = line.isClosed(); + Coordinate[] coords = line.getCoordinates(); + Coordinate[] out = regularizeCoordinates(coords, closed, params); + + LineString result = gf.createLineString(out); + result.setUserData(line.getUserData()); + return result; + } + + /** + * Regularize a {@link LinearRing}. Always treated as closed. + */ + public static LinearRing regularize(LinearRing ring, Parameters params) { + Objects.requireNonNull(ring, "ring"); + GeometryFactory gf = ring.getFactory(); + + Coordinate[] coords = ring.getCoordinates(); + Coordinate[] out = regularizeCoordinates(coords, true, params); + + LinearRing result = gf.createLinearRing(out); + result.setUserData(ring.getUserData()); + return result; + } + + /** + * Regularize a {@link Polygon}. Shell and each hole ring are processed + * independently. + */ + public static Polygon regularize(Polygon poly, Parameters params) { + Objects.requireNonNull(poly, "poly"); + GeometryFactory gf = poly.getFactory(); + + LinearRing shell = poly.getExteriorRing(); + LinearRing shellR = regularize(shell, params); + + LinearRing[] holesR = null; + int nh = poly.getNumInteriorRing(); + if (nh > 0) { + holesR = new LinearRing[nh]; + for (int i = 0; i < nh; i++) { + holesR[i] = regularize(poly.getInteriorRingN(i), params); + } + } + + Polygon result = gf.createPolygon(shellR, holesR); + result.setUserData(poly.getUserData()); + return result; + } + + /** + * Lowest-level entry point: regularize an ordered coordinate sequence. + * + * @param coordinates input vertices; for closed input may be with or without + * duplicate last==first + * @param closed treat as ring if true; as open polyline if false + * @param directions direction model (principal directions snapping) + * @param params configuration + * @return regularized coordinates; if closed, guaranteed last==first + */ + public static Coordinate[] regularizeCoordinates(Coordinate[] coordinates, boolean closed, Parameters params) { + Objects.requireNonNull(coordinates, "coordinates"); + Objects.requireNonNull(params, "params"); + + if (coordinates.length < (closed ? 4 : 2)) { + // allow "already has closing point" case; sanitize below + return copy(coordinates); + } + + Coordinate[] pts = closed ? sanitizeClosed(coordinates) : copy(coordinates); + if (pts.length < (closed ? 3 : 2)) { + return closed ? ensureClosed(copy(pts)) : copy(pts); + } + + // Keep originals if requested (open endpoints). + final Coordinate openStart = (!closed && params.preserveOpenEndpoints) ? new Coordinate(pts[0]) : null; + final Coordinate openEnd = (!closed && params.preserveOpenEndpoints) ? 
new Coordinate(pts[pts.length - 1]) : null; + + // 1) Build segments + List segs = buildSegments(pts, closed); + + // 2) Rotate toward principal directions + ContourDirections dirs = params.directions.init(coordinates, closed); + for (int i = 0; i < segs.size(); i++) { + dirs.orient(i, segs.get(i)); + } + + // 3) Remove tiny edges + segs = removeShort(segs, params.minEdgeLength); + + // CGAL-like early exits + if (closed && segs.size() < 4) { + return ensureClosed(copy(pts)); + } + if (!closed && segs.size() < 1) { + return copy(pts); + } + + // 4) Merge consecutive collinear + segs = mergeConsecutiveCollinear(segs, closed, params.parallelAngleThresholdDeg, params.maximumOffset); + + if (closed && segs.size() < 4) { + return ensureClosed(copy(pts)); + } + if (!closed && segs.size() < 1) { + return copy(pts); + } + + // 5) Insert orth edges if needed + if (params.insertOrthogonalWhenParallel) { + segs = insertOrthogonalEdgesWhenParallel(segs, closed, params.parallelAngleThresholdDeg); + } + + if (closed && segs.size() < 4) { + return ensureClosed(copy(pts)); + } + if (!closed && segs.size() < 1) { + return copy(pts); + } + + // 6) Reconnect by intersections + Coordinate[] out = reconnectToCoordinates(segs, closed); + + if (!closed && params.preserveOpenEndpoints) { + out[0] = openStart; + out[out.length - 1] = openEnd; + } + return out; + } + + private static List buildSegments(Coordinate[] pts, boolean closed) { + int n = pts.length; + int count = closed ? n : (n - 1); + ArrayList segs = new ArrayList<>(count); + for (int i = 0; i < count; i++) { + Coordinate a = pts[i]; + Coordinate b = pts[closed ? (i + 1) % n : (i + 1)]; + segs.add(new LineSegment(new Coordinate(a), new Coordinate(b))); + } + return segs; + } + + private static List removeShort(List segs, double minLen) { + double min2 = minLen * minLen; + ArrayList out = new ArrayList<>(segs.size()); + for (LineSegment s : segs) { + double dx = s.p1.x - s.p0.x; + double dy = s.p1.y - s.p0.y; + if (dx * dx + dy * dy > min2) { + out.add(s); + } + } + return out; + } + + private static List mergeConsecutiveCollinear(List segs, boolean closed, double angleThresholdDeg, double maxOffset) { + if (segs.isEmpty()) { + return segs; + } + + List> groups = new ArrayList<>(); + List curr = new ArrayList<>(); + curr.add(segs.get(0)); + + for (int i = 1; i < segs.size(); i++) { + LineSegment ref = curr.get(0); + LineSegment next = segs.get(i); + + if (Geometry2D.areParallel(ref, next, angleThresholdDeg) && Geometry2D.isCollinearEnough(ref, next, maxOffset)) { + curr.add(next); + } else { + groups.add(curr); + curr = new ArrayList<>(); + curr.add(next); + } + } + groups.add(curr); + + // closed wrap merge + if (closed && groups.size() >= 2) { + List first = groups.get(0); + List last = groups.get(groups.size() - 1); + LineSegment a = last.get(0); + LineSegment b = first.get(0); + if (Geometry2D.areParallel(a, b, angleThresholdDeg) && Geometry2D.isCollinearEnough(a, b, maxOffset)) { + ArrayList merged = new ArrayList<>(last.size() + first.size()); + merged.addAll(last); + merged.addAll(first); + groups.set(0, merged); + groups.remove(groups.size() - 1); + } + } + List out = new ArrayList<>(groups.size()); + for (List g : groups) { + out.add(Geometry2D.mergeCollinearGroup(g)); + } + return out; + } + + private static List insertOrthogonalEdgesWhenParallel(List segs, boolean closed, double angleThresholdDeg) { + int n = segs.size(); + ArrayList out = new ArrayList<>(n * 2); + + for (int i = 0; i < n; i++) { + LineSegment si = segs.get(i); + out.add(si); 
+ + int j = closed ? (i + 1) % n : i + 1; + if (!closed && j >= n) { + break; + } + + LineSegment sj = segs.get(j); + if (Geometry2D.areParallel(si, sj, angleThresholdDeg)) { + out.add(Geometry2D.createAverageOrth(si, sj)); + } + } + return out; + } + + private static Coordinate[] reconnectToCoordinates(List segs, boolean closed) { + int n = segs.size(); + if (closed) { + Coordinate[] out = new Coordinate[n + 1]; + for (int i = 0; i < n; i++) { + int im = (i + n - 1) % n; + out[i] = Geometry2D.infiniteLineIntersectionOrFallback(segs.get(im), segs.get(i), segs.get(i).p0); + } + out[n] = new Coordinate(out[0]); + return out; + } else { + Coordinate[] out = new Coordinate[n + 1]; + out[0] = new Coordinate(segs.get(0).p0); + for (int i = 1; i < n; i++) { + out[i] = Geometry2D.infiniteLineIntersectionOrFallback(segs.get(i - 1), segs.get(i), segs.get(i).p0); + } + out[n] = new Coordinate(segs.get(n - 1).p1); + return out; + } + } + + private static Coordinate[] sanitizeClosed(Coordinate[] input) { + Coordinate[] c = copy(input); + if (c.length >= 2 && c[0].equals2D(c[c.length - 1])) { + return Arrays.copyOf(c, c.length - 1); + } + return c; + } + + private static Coordinate[] ensureClosed(Coordinate[] ptsNoDupLast) { + if (ptsNoDupLast.length == 0) { + return ptsNoDupLast; + } + Coordinate[] out = Arrays.copyOf(ptsNoDupLast, ptsNoDupLast.length + 1); + out[out.length - 1] = new Coordinate(out[0]); + return out; + } + + private static Coordinate[] copy(Coordinate[] in) { + Coordinate[] out = new Coordinate[in.length]; + for (int i = 0; i < in.length; i++) { + out[i] = new Coordinate(in[i]); + } + return out; + } + + private static double normalize180(double ang) { + ang %= 180.0; + if (ang < 0) + ang += 180.0; + return ang; + } + + /** Absolute angle between two orientations mapped into [0,90]. */ + private static double angle0to90(double aDeg, double bDeg) { + double diff = Math.abs(aDeg - bDeg); + diff = Math.min(diff, 180.0 - diff); // now in [0,90] + return diff; + } + + private static LineSegment edgeSegment(Coordinate[] pts, boolean closed, int edgeIndex) { + int n = pts.length; + int i = edgeIndex; + int j = closed ? (i + 1) % n : (i + 1); + return new LineSegment(pts[i], pts[j]); + } + + /** + * CGAL-like "unify along contour" then "correct directions" pass. Operates + * in-place on {@code assigned}, where -1 indicates unassigned. + */ + private static void unifyAndCorrectAssignments(int[] assigned, boolean closed) { + if (assigned.length == 0) + return; + + if (closed) { + unifyClosed(assigned); + correctClosed(assigned); + } else { + unifyOpen(assigned); + correctOpen(assigned); + } + } + + private static void unifyClosed(int[] assigned) { + int n = assigned.length; + for (int i = 0; i < n; i++) { + if (assigned[i] != -1) + continue; + + int im = (i + n - 1) % n; + int ip = (i + 1) % n; + + boolean stop = false; + int steps = 0; + while (!stop && steps < n) { + if (assigned[im] != -1) { + assigned[i] = assigned[im]; + break; + } + if (assigned[ip] != -1) { + assigned[i] = assigned[ip]; + break; + } + + im = (im + n - 1) % n; + ip = (ip + 1) % n; + if (im == i || ip == i) + stop = true; + steps++; + } + if (assigned[i] == -1) + assigned[i] = 0; + } + } + + private static void correctClosed(int[] assigned) { + int n = assigned.length; + int[] clean = new int[n]; + for (int i = 0; i < n; i++) { + int im = (i + n - 1) % n; + int ip = (i + 1) % n; + int dm = assigned[im]; + int di = assigned[i]; + int dp = assigned[ip]; + clean[i] = (dm != -1 && dm == dp && di != dm) ? 
dm : di; + } + System.arraycopy(clean, 0, assigned, 0, n); + } + + private static void unifyOpen(int[] assigned) { + int n = assigned.length; + for (int i = 0; i < n; i++) { + if (assigned[i] != -1) + continue; + + int im = (i > 0) ? i - 1 : -1; + int ip = (i < n - 1) ? i + 1 : -1; + + boolean stop = false; + int steps = 0; + while (steps < n) { + if (im != -1 && assigned[im] != -1) { + assigned[i] = assigned[im]; + break; + } + if (ip != -1 && assigned[ip] != -1) { + assigned[i] = assigned[ip]; + break; + } + + if (stop) + break; + if (im > 0) + im--; + if (ip != -1 && ip < n - 1) + ip++; + + if (im == 0 || ip == n - 1) + stop = true; + steps++; + } + if (assigned[i] == -1) + assigned[i] = 0; + } + } + + private static void correctOpen(int[] assigned) { + int n = assigned.length; + if (n == 1) + return; + + int[] clean = new int[n]; + + // first + clean[0] = (assigned[0] != assigned[1]) ? assigned[1] : assigned[0]; + + // middle + for (int i = 1; i < n - 1; i++) { + int dm = assigned[i - 1]; + int di = assigned[i]; + int dp = assigned[i + 1]; + clean[i] = (dm != -1 && dm == dp && di != dm) ? dm : di; + } + + // last + clean[n - 1] = (assigned[n - 1] != assigned[n - 2]) ? assigned[n - 2] : assigned[n - 1]; + + System.arraycopy(clean, 0, assigned, 0, n); + } + + static final class Geometry2D { + private Geometry2D() { + } + + /** + * Orientation in degrees, normalized to [0,180) with CGAL-like sign convention. + */ + static double orientationDeg(LineSegment s) { + double dx = s.p1.x - s.p0.x; + double dy = s.p1.y - s.p0.y; + if (dy < 0 || (dy == 0 && dx < 0)) { + dx = -dx; + dy = -dy; + } + double ang = Math.toDegrees(Math.atan2(dy, dx)); + if (ang < 0) { + ang += 180.0; + } + return ang; + } + + /** Signed mod-90 difference (CGAL internal::mod90_angle_difference_2). */ + static double mod90AngleDifferenceDeg(double angleI, double angleJ) { + double diff = angleI - angleJ; + int diff90 = (int) Math.floor(diff / 90.0); + double toLower = 90.0 * (diff90 + 0.0) - diff; + double toUpper = 90.0 * (diff90 + 1.0) - diff; + return (Math.abs(toLower) < Math.abs(toUpper)) ? toLower : toUpper; + } + + static void rotateSegmentCCWAroundMidpointInPlace(LineSegment s, double angleDeg) { + double rad = Math.toRadians(angleDeg); + double sin = Math.sin(rad); + double cos = Math.cos(rad); + + double mx = 0.5 * (s.p0.x + s.p1.x); + double my = 0.5 * (s.p0.y + s.p1.y); + + rotatePointCCWInPlace(s.p0, mx, my, cos, sin); + rotatePointCCWInPlace(s.p1, mx, my, cos, sin); + } + + private static void rotatePointCCWInPlace(Coordinate p, double cx, double cy, double cos, double sin) { + double x = p.x - cx; + double y = p.y - cy; + double rx = x * cos - y * sin; + double ry = y * cos + x * sin; + p.x = cx + rx; + p.y = cy + ry; + } + + static boolean areParallel(LineSegment a, LineSegment b, double angleThresholdDeg) { + double oa = orientationDeg(a); + double ob = orientationDeg(b); + double diff = Math.abs(oa - ob); + diff = Math.min(diff, 180.0 - diff); + diff = Math.min(diff, 90.0 - Math.abs(90.0 - diff)); // map to [0,90] + return diff <= angleThresholdDeg; + } + + /** + * CGAL-like test: distance from midpoint of s to projection on ref supporting + * line. 
+ */ + static boolean isCollinearEnough(LineSegment ref, LineSegment s, double maxOffset) { + double mx = 0.5 * (s.p0.x + s.p1.x); + double my = 0.5 * (s.p0.y + s.p1.y); + Coordinate proj = projectPointToLine(mx, my, ref.p0, ref.p1); + double dx = mx - proj.x; + double dy = my - proj.y; + return (dx * dx + dy * dy) <= maxOffset * maxOffset; + } + + static Coordinate projectPointToLine(double px, double py, Coordinate a, Coordinate b) { + double ux = b.x - a.x; + double uy = b.y - a.y; + double denom = ux * ux + uy * uy; + if (denom == 0.0) { + return new Coordinate(a); + } + double t = ((px - a.x) * ux + (py - a.y) * uy) / denom; + return new Coordinate(a.x + t * ux, a.y + t * uy); + } + + /** + * Intersection of infinite supporting lines using JTS DD arithmetic. Falls back + * to {@code fallback} if parallel/collinear/undefined. + */ + static Coordinate infiniteLineIntersectionOrFallback(LineSegment s1, LineSegment s2, Coordinate fallback) { + Coordinate p = CGAlgorithmsDD.intersection(s1.p0, s1.p1, s2.p0, s2.p1); + return (p != null) ? p : new Coordinate(fallback); + } + + /** Port of CGAL Contour_base_2::create_average_orth. */ + static LineSegment createAverageOrth(LineSegment segmentI, LineSegment segmentJ) { + Coordinate p = projectPointToLine(segmentJ.p0.x, segmentJ.p0.y, segmentI.p0, segmentI.p1); + Coordinate source = midpoint(p, segmentI.p1); + + Coordinate q = projectPointToLine(segmentI.p1.x, segmentI.p1.y, segmentJ.p0, segmentJ.p1); + Coordinate target = midpoint(q, segmentJ.p0); + + return new LineSegment(source, target); + } + + static Coordinate midpoint(Coordinate a, Coordinate b) { + return new Coordinate(0.5 * (a.x + b.x), 0.5 * (a.y + b.y)); + } + + /** + * Merge a consecutive collinear group into one best-fit segment: - choose + * longest as reference direction - compute weighted (squared length) average + * normal offset of midpoints - project all endpoints onto shifted line and take + * min/max along reference axis + */ + static LineSegment mergeCollinearGroup(List group) { + if (group.size() == 1) { + LineSegment s = group.get(0); + return new LineSegment(new Coordinate(s.p0), new Coordinate(s.p1)); + } + + LineSegment ref = group.get(0); + double best = -1; + for (LineSegment s : group) { + double len = s.getLength(); + if (len > best) { + best = len; + ref = s; + } + } + + double ux = ref.p1.x - ref.p0.x; + double uy = ref.p1.y - ref.p0.y; + double un = Math.hypot(ux, uy); + if (un == 0.0) { + return new LineSegment(new Coordinate(ref.p0), new Coordinate(ref.p1)); + } + ux /= un; + uy /= un; + + double nx = -uy; + double ny = ux; + + double r0x = 0.5 * (ref.p0.x + ref.p1.x); + double r0y = 0.5 * (ref.p0.y + ref.p1.y); + + double wsum = 0.0; + double dsum = 0.0; + for (LineSegment s : group) { + double w = s.getLength(); + w = w * w; // squared length weights + double mx = 0.5 * (s.p0.x + s.p1.x); + double my = 0.5 * (s.p0.y + s.p1.y); + double dx = mx - r0x; + double dy = my - r0y; + double signed = dx * nx + dy * ny; + wsum += w; + dsum += w * signed; + } + double d = (wsum == 0.0) ? 
0.0 : (dsum / wsum); + + double rx = r0x + d * nx; + double ry = r0y + d * ny; + + double tMin = Double.POSITIVE_INFINITY; + double tMax = Double.NEGATIVE_INFINITY; + for (LineSegment s : group) { + tMin = Math.min(tMin, projT(rx, ry, ux, uy, s.p0)); + tMax = Math.max(tMax, projT(rx, ry, ux, uy, s.p0)); + tMin = Math.min(tMin, projT(rx, ry, ux, uy, s.p1)); + tMax = Math.max(tMax, projT(rx, ry, ux, uy, s.p1)); + } + + Coordinate a = new Coordinate(rx + tMin * ux, ry + tMin * uy); + Coordinate b = new Coordinate(rx + tMax * ux, ry + tMax * uy); + return new LineSegment(a, b); + } + + private static double projT(double rx, double ry, double ux, double uy, Coordinate p) { + return (p.x - rx) * ux + (p.y - ry) * uy; + } + } +} \ No newline at end of file diff --git a/src/main/java/micycle/pgs/commons/DBLACColoring.java b/src/main/java/micycle/pgs/commons/DBLACColoring.java new file mode 100644 index 00000000..b22e6f57 --- /dev/null +++ b/src/main/java/micycle/pgs/commons/DBLACColoring.java @@ -0,0 +1,350 @@ +package micycle.pgs.commons; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.BitSet; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Random; + +import org.jgrapht.Graph; +import org.jgrapht.alg.interfaces.VertexColoringAlgorithm; + +/** + * DBLAC (Degree-Based Largest Adjacency Count) graph coloring. + *

+ * <p>
+ * DBLAC selection rule:
+ * <p>
+ * Repeatedly select an uncolored vertex v that maximises
+ *
+ * <pre>
+ * LAC(v) = number of already-colored neighbors of v
+ * </pre>
+ *
+ * Tie-break by larger static degree (original degree), then by the shuffled
+ * index.
+ * <p>
+ * Color the selected vertex using first-fit (smallest feasible color).
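+ * <p>
+ * Usage sketch (graph construction elided; a fixed seed gives deterministic
+ * tie-breaking):
+ *
+ * <pre>
+ * DBLACColoring coloring = new DBLACColoring(faceAdjacencyGraph, 1337L);
+ * int colorsUsed = coloring.getColoring().getNumberColors();
+ * </pre>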

      + * + * @author Michael Carleton + */ +public class DBLACColoring implements VertexColoringAlgorithm { + + /*- + * Implementation written for maximum performance (not readability): + * - CSR/flat adjacency (offsets + neighbors array) + * - adjacency built from edgeSet() (no NeighborCache/Set allocations) + * - max-heap with increase-key (no O(n) scans) + * - first-fit using 64-bit mask for <=64 colors, else BitSet + touched-list + */ + + private final Random rnd; + + private final List vertexList; // shuffled + private final Map vertexIndex; + private final int n; + + // CSR adjacency: neighbors in nbrs[off[v]..off[v+1]) + private final int[] off; + private final int[] nbrs; + + // static degree (unique neighbors) + private final int[] degTotal; + + // dynamic: #colored neighbors + private final int[] lac; + + // color[v] = -1 if uncolored + private final int[] color; + + // Max-heap over uncolored vertices by (lac desc, degTotal desc, index asc) + private final int[] heap; + private final int[] posInHeap; // -1 if removed + private int heapSize; + + // first-fit for many colors + private final BitSet forbidden = new BitSet(); + private int[] touched; // colors set in forbidden for current vertex + private int touchedSize; + + private Coloring cached; + + public DBLACColoring(Graph graph, long seed) { + this.rnd = new Random(seed); + + this.n = graph.vertexSet().size(); + + // Vertex indexing (shuffle for random tie-breaking) + this.vertexList = new ArrayList<>(graph.vertexSet()); + Collections.shuffle(vertexList, rnd); + + this.vertexIndex = new HashMap<>(Math.max(16, n * 2)); + for (int i = 0; i < n; i++) { + vertexIndex.put(vertexList.get(i), i); + } + + // Build CSR adjacency from edgeSet (2-pass), then deduplicate per vertex + // (stamp-based). 
+ // Pass 1: count degrees (with duplicates) + int[] degDup = new int[n]; + for (E e : graph.edgeSet()) { + int s = vertexIndex.get(graph.getEdgeSource(e)); + int t = vertexIndex.get(graph.getEdgeTarget(e)); + degDup[s]++; + degDup[t]++; + } + + int[] offDup = new int[n + 1]; + for (int i = 0; i < n; i++) { + offDup[i + 1] = offDup[i] + degDup[i]; + } + + int[] nbrsDup = new int[offDup[n]]; + int[] cur = offDup.clone(); + + // Pass 2: fill adjacency (with duplicates) + for (E e : graph.edgeSet()) { + int s = vertexIndex.get(graph.getEdgeSource(e)); + int t = vertexIndex.get(graph.getEdgeTarget(e)); + nbrsDup[cur[s]++] = t; + nbrsDup[cur[t]++] = s; + } + + // Deduplicate per vertex without sorting (O(n+m)) using stamps + int[] seen = new int[n]; + int stamp = 1; + + int[] degUniq = new int[n]; + for (int v = 0; v < n; v++) { + stamp++; + if (stamp == 0) { + Arrays.fill(seen, 0); + stamp = 1; + } // ultra-defensive + int a = offDup[v], b = offDup[v + 1]; + int cnt = 0; + for (int p = a; p < b; p++) { + int nb = nbrsDup[p]; + if (seen[nb] != stamp) { + seen[nb] = stamp; + cnt++; + } + } + degUniq[v] = cnt; + } + + int[] offUniq = new int[n + 1]; + for (int i = 0; i < n; i++) { + offUniq[i + 1] = offUniq[i] + degUniq[i]; + } + + int[] nbrsUniq = new int[offUniq[n]]; + stamp = 1; + for (int v = 0; v < n; v++) { + stamp++; + if (stamp == 0) { + Arrays.fill(seen, 0); + stamp = 1; + } + int write = offUniq[v]; + int a = offDup[v], b = offDup[v + 1]; + for (int p = a; p < b; p++) { + int nb = nbrsDup[p]; + if (seen[nb] != stamp) { + seen[nb] = stamp; + nbrsUniq[write++] = nb; + } + } + } + + this.off = offUniq; + this.nbrs = nbrsUniq; + this.degTotal = degUniq; + + int maxDeg = 0; + for (int d : degTotal) { + maxDeg = Math.max(maxDeg, d); + } + + this.lac = new int[n]; + this.color = new int[n]; + Arrays.fill(color, -1); + + // Heap init: all vertices uncolored + this.heap = new int[n]; + this.posInHeap = new int[n]; + for (int i = 0; i < n; i++) { + heap[i] = i; + posInHeap[i] = i; + } + this.heapSize = n; + for (int i = (heapSize >>> 1) - 1; i >= 0; i--) { + siftDown(i); + } + + this.touched = new int[Math.max(16, maxDeg)]; // typical touched size ~ degree + } + + public DBLACColoring(Graph graph) { + this(graph, System.nanoTime()); + } + + @Override + public Coloring getColoring() { + if (cached != null) { + return cached; + } + + if (n == 0) { + cached = new ColoringImpl<>(Collections.emptyMap(), 0); + return cached; + } + + int numColors = 0; + + while (heapSize > 0) { + int v = extractMax(); + + int c = chooseSmallestAvailableColor(v, numColors); + if (c == numColors) { + numColors++; + } + color[v] = c; + + // update LAC for uncolored neighbors and fix heap keys (increase-key) + for (int p = off[v], end = off[v + 1]; p < end; p++) { + int nb = nbrs[p]; + if (color[nb] == -1) { + lac[nb]++; + increaseKey(nb); + } + } + } + + Map colorMap = new HashMap<>(Math.max(16, n * 2)); + for (int i = 0; i < n; i++) { + colorMap.put(vertexList.get(i), color[i]); + } + + cached = new ColoringImpl<>(colorMap, numColors); + return cached; + } + + // ---- first-fit color choice ---- + + private int chooseSmallestAvailableColor(int v, int numColors) { + if (numColors <= 64) { + long forb = 0L; + for (int p = off[v], end = off[v + 1]; p < end; p++) { + int c = color[nbrs[p]]; + if (c >= 0) { + forb |= (1L << c); + } + } + long avail = ~forb; + int c = Long.numberOfTrailingZeros(avail); // 0..64 (64 means none in [0..63]) + return (c < numColors) ? 
c : numColors; + } + + // BitSet + touched list: O(deg(v)) to mark and clear; nextClearBit finds + // smallest available. + touchedSize = 0; + for (int p = off[v], end = off[v + 1]; p < end; p++) { + int c = color[nbrs[p]]; + if (c >= 0 && !forbidden.get(c)) { + forbidden.set(c); + if (touchedSize == touched.length) { + touched = Arrays.copyOf(touched, touched.length << 1); + } + touched[touchedSize++] = c; + } + } + + int chosen = forbidden.nextClearBit(0); + // clear only what we set + for (int i = 0; i < touchedSize; i++) { + forbidden.clear(touched[i]); + } + + return (chosen < numColors) ? chosen : numColors; + } + + // ---- heap (max by lac, then degree, then shuffled index) ---- + + private boolean better(int a, int b) { + int la = lac[a], lb = lac[b]; + if (la != lb) { + return la > lb; + } + + int da = degTotal[a], db = degTotal[b]; + if (da != db) { + return da > db; + } + + return a < b; // shuffled index tie-break + } + + private void swapHeap(int i, int j) { + int vi = heap[i], vj = heap[j]; + heap[i] = vj; + heap[j] = vi; + posInHeap[vi] = j; + posInHeap[vj] = i; + } + + private void siftUp(int i) { + while (i > 0) { + int p = (i - 1) >>> 1; + if (better(heap[p], heap[i])) { + break; + } + swapHeap(p, i); + i = p; + } + } + + private void siftDown(int i) { + for (;;) { + int l = (i << 1) + 1; + if (l >= heapSize) { + return; + } + int r = l + 1; + + int bestChild = (r < heapSize && better(heap[r], heap[l])) ? r : l; + if (better(heap[i], heap[bestChild])) { + return; + } + + swapHeap(i, bestChild); + i = bestChild; + } + } + + private int extractMax() { + int v = heap[0]; + posInHeap[v] = -1; + + int last = heap[--heapSize]; + if (heapSize > 0) { + heap[0] = last; + posInHeap[last] = 0; + siftDown(0); + } + return v; + } + + private void increaseKey(int v) { + int p = posInHeap[v]; + if (p >= 0) { + siftUp(p); + } + } +} \ No newline at end of file diff --git a/src/main/java/micycle/pgs/commons/DiscreteCurveEvolution.java b/src/main/java/micycle/pgs/commons/DiscreteCurveEvolution.java index e3620bf3..def83f65 100644 --- a/src/main/java/micycle/pgs/commons/DiscreteCurveEvolution.java +++ b/src/main/java/micycle/pgs/commons/DiscreteCurveEvolution.java @@ -5,6 +5,7 @@ import java.util.TreeSet; import org.locationtech.jts.geom.Coordinate; import org.locationtech.jts.geom.CoordinateList; +import org.locationtech.jts.geom.GeometryFactory; import org.locationtech.jts.geom.LineString; import net.jafama.FastMath; @@ -96,11 +97,12 @@ public interface DCETerminationCallback { * potentially reduced number of vertices that maintains the perceptual * appearance of the original curve. */ - public static Coordinate[] process(LineString lineString, DCETerminationCallback terminationCallback) { + public static LineString process(LineString lineString, DCETerminationCallback terminationCallback) { final boolean closed = lineString.isClosed(); + final GeometryFactory gf = lineString.getFactory(); Coordinate[] coords = lineString.getCoordinates(); if (coords.length == 0) { - return coords; + return lineString; } if (closed && coords.length > 2) { Coordinate[] newCoords = new Coordinate[coords.length - 1]; @@ -138,7 +140,8 @@ public static Coordinate[] process(LineString lineString, DCETerminationCallback throw new IllegalStateException( String.format("%d Kink objects were lost during the conversion from the kinks list to the kinkRelevanceTree set.", lostKinks)); } - while (kinkRelevanceTree.size() > 2) { + final int minVertices = closed ? 
3 : 2; + while (kinkRelevanceTree.size() > minVertices) { Kink candidate = kinkRelevanceTree.pollFirst(); if (terminationCallback.shouldTerminate(candidate.c, candidate.relevance, kinkRelevanceTree.size() + 1)) { kinkRelevanceTree.add(candidate); // reinsert polled element @@ -160,11 +163,12 @@ public static Coordinate[] process(LineString lineString, DCETerminationCallback do { output.add(current.c); } while ((current = current.next) != first && current != null); + if (closed) { output.closeRing(); + return gf.createLinearRing(output.toCoordinateArray()); } - - return output.toCoordinateArray(); + return gf.createLineString(output.toCoordinateArray()); } private static List createKinksWithIds(Coordinate[] coords) { diff --git a/src/main/java/micycle/pgs/commons/EdgePrunedFaces.java b/src/main/java/micycle/pgs/commons/EdgePrunedFaces.java new file mode 100644 index 00000000..ff0e2d2d --- /dev/null +++ b/src/main/java/micycle/pgs/commons/EdgePrunedFaces.java @@ -0,0 +1,591 @@ +package micycle.pgs.commons; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.HashSet; +import java.util.IdentityHashMap; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.Set; + +import org.jgrapht.alg.interfaces.VertexColoringAlgorithm.Coloring; +import org.jgrapht.alg.spanning.GreedyMultiplicativeSpanner; +import org.jgrapht.graph.AbstractBaseGraph; +import org.jgrapht.graph.DefaultEdge; +import org.jgrapht.graph.SimpleGraph; +import org.locationtech.jts.geom.Coordinate; +import org.locationtech.jts.geom.GeometryFactory; +import org.tinfour.common.IConstraint; +import org.tinfour.common.IIncrementalTin; +import org.tinfour.common.IQuadEdge; +import org.tinfour.common.Vertex; +import org.tinfour.utils.TriangleCollector; +import org.tinspin.index.PointMap; +import org.tinspin.index.kdtree.KDTree; + +import micycle.pgs.PGS_Conversion; +import micycle.pgs.PGS_Triangulation; +import processing.core.PShape; + +/** + * Utilities for extracting polygonal faces from a TIN by pruning TIN edges + * according to a rule, grouping triangles across the pruned edges, and tracing + * group boundaries. + * + *

+ * <p>
+ * Core idea
+ * <ul>
+ * <li>Start with a Delaunay TIN ({@link IIncrementalTin}).</li>
+ * <li>Drop a subset of base edges according to a rule (Urquhart, Gabriel, RNG,
+ * k-spanner, etc.).</li>
+ * <li>Merge triangles across any dropped edge (DSU / union-find).</li>
+ * <li>For each merged component, collect only its boundary half-edges (those
+ * not dropped and not shared by two triangles of the component), then sequence
+ * them to form a ring and emit a polygon.</li>
+ * </ul>
+ *
+ * <p>
+ * Benefits
+ * <ul>
+ * <li>Linear-time over triangles/edges for the TIN portion; avoids
+ * polygonization / noding / unions.</li>
+ * <li>Robust topology, since edges are taken directly from the TIN; boundaries
+ * are sequenced only.</li>
+ * <li>Pluggable rules via a simple {@code DropRule} interface.</li>
+ * <li>Consistent perimeter handling: when {@code preservePerimeter} is true,
+ * constraint/hull borders are never dropped.</li>
+ * </ul>
+ *
+ * <p>
+ * Implemented rules
+ * <ul>
+ * <li>Urquhart: drop the longest base edge of each triangle.</li>
+ * <li>Gabriel: drop uv if the midpoint of uv has a nearest vertex that is
+ * neither u nor v.</li>
+ * <li>Relative Neighborhood (RNG): drop uv if there exists w with max(d(u,w),
+ * d(v,w)) < d(u,v).</li>
+ * <li>k-Spanner: keep only edges chosen by a greedy multiplicative spanner on
+ * the TIN graph (SimpleGraph<Vertex, IQuadEdge>), drop the rest.</li>
+ * <li>Edge-collapse quadrangulation: 3-color the three edges of each triangle
+ * (via graph coloring) and drop all edges with color ≥ 2 (or equivalently,
+ * keep colors 0–1) to form mostly quadrilateral faces; honors
+ * perimeter/constraint borders when requested.</li>
+ * </ul>
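+ * <p>
+ * Usage sketch (TIN construction elided; {@code preservePerimeter = true} keeps
+ * hull/constraint borders intact):
+ *
+ * <pre>
+ * IIncrementalTin tin = ...; // Delaunay TIN of the input points
+ * PShape urquhart = EdgePrunedFaces.urquhartFaces(tin, true);
+ * PShape spanner = EdgePrunedFaces.spannerFaces(tin, 4, true);
+ * </pre>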
      + * + * @author Michael Carleton + */ +public class EdgePrunedFaces { + + // thin wrappers around the common pipelines + + public static PShape urquhartFaces(final IIncrementalTin tin, final boolean preservePerimeter) { + return facesSequencedCommon(tin, preservePerimeter, URQUHART_RULE); + } + + public static PShape gabrielFaces(final IIncrementalTin tin, final boolean preservePerimeter) { + return facesSequencedCommon(tin, preservePerimeter, GABRIEL_RULE); + } + + public static PShape relativeNeighborFaces(final IIncrementalTin tin, final boolean preservePerimeter) { + return facesSequencedCommon(tin, preservePerimeter, RELATIVE_NEIGHBOR_RULE); + } + + public static PShape spannerFaces(final IIncrementalTin tin, int k, final boolean preservePerimeter) { + return facesSequencedCommon(tin, preservePerimeter, spannerDropRule(tin, k)); + } + + public static PShape edgeCollapseQuadrangulation(final IIncrementalTin tin, final boolean preservePerimeter) { + return facesSequencedCommon(tin, preservePerimeter, edgeCollapseQuadrangulationDropRule(tin)); + } + + /** + * Common pipeline: collect mesh, mark dropped edges via rule, group via DSU, + * sequence boundaries to polygons. + * + * @param tin + * @param preservePerimeter + * @param rule + * @return + */ + private static PShape facesSequencedCommon(final IIncrementalTin tin, final boolean preservePerimeter, final DropRule rule) { + final GeometryFactory gf = new GeometryFactory(); + final boolean notConstrained = tin.getConstraints().isEmpty(); + + // Collect triangles, adjacency, and unique vertices + final List tris = new ArrayList<>(); + final Map edgeAdj = new IdentityHashMap<>(); + final Set vset = new HashSet<>(); + + TriangleCollector.visitSimpleTriangles(tin, t -> { + final IConstraint c = t.getContainingRegion(); + if (!(notConstrained || (c != null && c.definesConstrainedRegion()))) { + return; + } + + final int tid = tris.size(); + final IQuadEdge ea = t.getEdgeA(); + final IQuadEdge eb = t.getEdgeB(); + final IQuadEdge ec = t.getEdgeC(); + tris.add(new TriRec(ea, eb, ec)); + + addAdj(edgeAdj, ea.getBaseReference(), tid); + addAdj(edgeAdj, eb.getBaseReference(), tid); + addAdj(edgeAdj, ec.getBaseReference(), tid); + + vset.add(t.getVertexA()); + vset.add(t.getVertexB()); + vset.add(t.getVertexC()); + }); + + final int nTri = tris.size(); + if (nTri == 0) { + return new PShape(); + } + + final MeshCtx ctx = new MeshCtx(tris, edgeAdj, new ArrayList<>(vset)); + + // Mark dropped edges according to the rule + final Set dropped = Collections.newSetFromMap(new IdentityHashMap<>()); + rule.markDropped(ctx, preservePerimeter, dropped); + + // DSU across any dropped base edge + final DSU dsu = new DSU(nTri); + for (IQuadEdge e : dropped) { + final int[] inc = edgeAdj.get(e); + if (inc != null && inc[0] >= 0 && inc[1] >= 0) { + dsu.union(inc[0], inc[1]); + } + } + + // Group triangles + final Map> comp = new HashMap<>(); + for (int t = 0; t < nTri; t++) { + comp.computeIfAbsent(dsu.find(t), k -> new ArrayList<>()).add(t); + } + + // Build faces: collect boundary half-edges, sequence into one ring, emit + // polygon + var faces = comp.values().parallelStream().map(triIds -> { + final List boundary = new ArrayList<>(); + + for (int tid : triIds) { + final TriRec tr = tris.get(tid); + for (int i = 0; i < 3; i++) { + final IQuadEdge he = tr.e[i]; + final IQuadEdge base = he.getBaseReference(); + + // Skip dropped inner seams + if (dropped.contains(base)) { + continue; + } + + // If neighbor is in same group, edge is interior, skip + final 
int[] inc = edgeAdj.get(base); + final int nb = (inc == null) ? -1 : (inc[0] == tid ? inc[1] : inc[0]); + if (nb >= 0 && dsu.find(nb) == dsu.find(tid)) { + continue; + } + + // Boundary half-edge, interior on the left + boundary.add(he); + } + } + + if (boundary.isEmpty()) { + return null; + } + + final Coordinate[] ring = sequenceSingleLoop(boundary); + if (ring == null) { + return null; + } + + return PGS_Conversion.toPShape(gf.createPolygon(ring)); + }); + + return PGS_Conversion.flatten(faces.filter(Objects::nonNull).toList()); + } + + private static Coordinate[] sequenceSingleLoop(List edges) { + if (edges.isEmpty()) { + return null; + } + + final Map out = new IdentityHashMap<>(edges.size() * 2); + for (IQuadEdge e : edges) { + out.put(e.getA(), e); // assumes at most one outgoing per vertex on the boundary + } + + final IQuadEdge start = edges.get(0); + final Vertex startV = start.getA(); + + final List coords = new ArrayList<>(edges.size() + 1); + coords.add(new Coordinate(start.getA().x, start.getA().y)); + + IQuadEdge cur = start; + for (int i = 0; i < edges.size(); i++) { + coords.add(new Coordinate(cur.getB().x, cur.getB().y)); + if (cur.getB() == startV) { + break; // closed the loop + } + final IQuadEdge next = out.get(cur.getB()); + if (next == null) { + return null; // unexpected: dangling + } + cur = next; + } + + // Ensure closed + if (!coords.get(0).equals2D(coords.get(coords.size() - 1))) { + coords.add(new Coordinate(coords.get(0))); + } + if (coords.size() < 4) { + return null; + } + + return coords.toArray(new Coordinate[0]); + } + + private static final DropRule URQUHART_RULE = (ctx, preservePerimeter, dropped) -> { + // Urquhart rule: drop the longest base edge of each triangle + for (TriRec tr : ctx.tris) { + IQuadEdge e0 = tr.e[0].getBaseReference(); + IQuadEdge e1 = tr.e[1].getBaseReference(); + IQuadEdge e2 = tr.e[2].getBaseReference(); + + IQuadEdge max = e0; + double m2 = len2(e0); + double l1 = len2(e1); + if (l1 > m2) { + m2 = l1; + max = e1; + } + double l2 = len2(e2); + if (l2 > m2) { + max = e2; + } + + if (!preservePerimeter || !max.isConstraintRegionBorder()) { + dropped.add(max); + } + } + }; + + private static final DropRule GABRIEL_RULE = (ctx, preservePerimeter, dropped) -> { + // Gabriel rule: drop edges whose midpoint’s nearest vertex is neither endpoint + // Build KD-tree of all vertices + final PointMap tree = KDTree.create(2); + for (Vertex v : ctx.vertices) { + tree.insert(new double[] { v.x, v.y }, v); + } + + for (IQuadEdge base : ctx.edgeAdj.keySet()) { + if (preservePerimeter && base.isConstraintRegionBorder()) { + continue; + } + + final double mx = 0.5 * (base.getA().x + base.getB().x); + final double my = 0.5 * (base.getA().y + base.getB().y); + final Vertex nn = tree.query1nn(new double[] { mx, my }).value(); + + if (nn != base.getA() && nn != base.getB()) { + dropped.add(base); + } + } + }; + + private static final DropRule RELATIVE_NEIGHBOR_RULE = (ctx, preservePerimeter, dropped) -> { + // Relative Neighborhood Graph: drop uv if exists w in N(u) or N(v) with + // max(d(u,w), d(v,w)) < d(u,v) + // Build 1-ring vertex adjacency from base edges + final IdentityHashMap> nbrs = new IdentityHashMap<>(); + for (IQuadEdge base : ctx.edgeAdj.keySet()) { + final Vertex a = base.getA(); + final Vertex b = base.getB(); + nbrs.computeIfAbsent(a, k -> Collections.newSetFromMap(new IdentityHashMap<>())).add(b); + nbrs.computeIfAbsent(b, k -> Collections.newSetFromMap(new IdentityHashMap<>())).add(a); + } + + // Test each base edge against RNG 
condition, using squared distances + for (IQuadEdge base : ctx.edgeAdj.keySet()) { + if (preservePerimeter && base.isConstraintRegionBorder()) { + continue; + } + + final Vertex a = base.getA(); + final Vertex b = base.getB(); + final double l2 = dist2(a, b); + + boolean drop = false; + + // Check neighbors of a + final Set Na = nbrs.getOrDefault(a, Collections.emptySet()); + for (Vertex w : Na) { + if (w == b) { + continue; + } + if (Math.max(dist2(w, a), dist2(w, b)) < l2) { + drop = true; + break; + } + } + // If not dropped, check neighbors of b + if (!drop) { + final Set Nb = nbrs.getOrDefault(b, Collections.emptySet()); + for (Vertex w : Nb) { + if (w == a) { + continue; + } + if (Math.max(dist2(w, a), dist2(w, b)) < l2) { + drop = true; + break; + } + } + } + + if (drop) { + dropped.add(base); + } + } + }; + + private static DropRule spannerDropRule(final IIncrementalTin tin, final int kParam) { + final int k = Math.max(2, kParam); + return (ctx, preservePerimeter, dropped) -> { + final SimpleGraph g = PGS_Triangulation.toTinfourGraph(tin); + if (g.edgeSet().isEmpty()) { + return; + } + + final GreedyMultiplicativeSpanner sp = new GreedyMultiplicativeSpanner<>(g, k); + + // Build identity set of kept base refs from the spanner result + final Set kept = Collections.newSetFromMap(new IdentityHashMap<>()); + for (IQuadEdge e : sp.getSpanner()) { + kept.add(e.getBaseReference()); + } + + // Drop every base edge not in the spanner (unless perimeter preserved) + for (IQuadEdge base : ctx.edgeAdj.keySet()) { + if (preservePerimeter && base.isConstraintRegionBorder()) { + continue; + } + if (!kept.contains(base)) { + dropped.add(base); + } + } + }; + } + + private static DropRule edgeCollapseQuadrangulationDropRule(final IIncrementalTin tin) { + /*- + * From 'Fast unstructured quadrilateral mesh generation'. + * A better coloring approach is given in 'Face coloring in unstructured CFD codes'. + * + * First partition the edges of the triangular mesh into three groups such that + * no triangle has two edges of the same color (find groups by reducing to a + * graph-coloring). + * Then obtain an all-quadrilateral mesh by removing all edges of *one* + * particular color. 
+ */ + final boolean unconstrained = tin.getConstraints().isEmpty(); + + // Collect unconstrained perimeter edges (base refs) if applicable + final Set perimeterBaseRefs = Collections.newSetFromMap(new IdentityHashMap<>()); + if (unconstrained) { + for (IQuadEdge e : tin.getPerimeter()) { + perimeterBaseRefs.add(e.getBaseReference()); + } + } + + return (ctx, preservePerimeter, dropped) -> { + // Build a graph where each vertex is a base edge; connect the 3 edges of each + // triangle + final AbstractBaseGraph g = new SimpleGraph<>(DefaultEdge.class); + for (TriRec tr : ctx.tris) { + final IQuadEdge a = tr.e[0].getBaseReference(); + final IQuadEdge b = tr.e[1].getBaseReference(); + final IQuadEdge c = tr.e[2].getBaseReference(); + + g.addVertex(a); + g.addVertex(b); + g.addVertex(c); + + g.addEdge(a, b); + g.addEdge(a, c); + g.addEdge(b, c); + } + if (g.vertexSet().isEmpty()) { + return; + } + + // 3-color the "edge graph" so no triangle has two edges of the same color + final Coloring coloring = new DBLACColoring<>(g, 1337L).getColoring(); + + // Mark all edges of the chosen color as dropped, honoring perimeter + // preservation + for (Map.Entry e : coloring.getColors().entrySet()) { + final IQuadEdge base = e.getKey(); + final int color = e.getValue(); + + /* + * NOTE 4-colorings are possible, so some triangles may have two or three edges + * with color >= 2, and yield faces larger than quads once edges are dropped. + */ + if (color < 2) { // skip 0, 1, so drop 2+ + continue; + } + + if (preservePerimeter) { + // Preserve constraint borders, and unconstrained outer perimeter + if (base.isConstraintRegionBorder() || perimeterBaseRefs.contains(base)) { + continue; + } + } + + dropped.add(base); // drop edge! + } + }; + } + + private static void addAdj(Map adj, IQuadEdge base, int tid) { + int[] a = adj.get(base); + if (a == null) { + a = new int[] { -1, -1 }; + adj.put(base, a); + } + if (a[0] < 0) { + a[0] = tid; + } else { + a[1] = tid; + } + } + + private static double dist2(Vertex p, Vertex q) { + return p.getDistanceSq(q); + } + + private static double len2(IQuadEdge e) { + final double dx = e.getA().x - e.getB().x; + final double dy = e.getA().y - e.getB().y; + return dx * dx + dy * dy; + } + + /** + * Strategy for selecting TIN base edges to drop prior to face extraction. + *

+ * The pipeline will union triangles across every dropped base edge, then trace
+ * the remaining edges to form polygon boundaries. A rule inspects the immutable
+ * mesh context and adds base edges (getBaseReference()) to the provided set.
+ * <p>
+ * Typical usage idea:
+ *
+ * <pre>
      +	 * DropRule rule = (ctx, preservePerimeter, dropped) -> {
      +	 * 	for (IQuadEdge base : ctx.edgeAdj.keySet()) {
      +	 * 		if (preservePerimeter && isPerimeter(ctx, base))
      +	 * 			continue; // edge has <2 incident triangles
      +	 * 		if (shouldDropAccordingToRule(base, ctx))
      +	 * 			dropped.add(base);
      +	 * 	}
      +	 * };
+	 * </pre>
+ *
+ * <p>
+ * Notes:
+ * <ul>
+ * <li>Add base edges only (not half-edges); identity semantics apply.</li>
+ * <li>If preservePerimeter is true, do not drop perimeter/constraint
+ * borders.</li>
+ * <li>Do not mutate ctx; only populate the dropped set.</li>
+ * <li>Deterministic selection is recommended.</li>
+ * </ul>
      + */ + private interface DropRule { + /** + * Marks base edges to be removed. + * + * @param ctx immutable mesh data (triangles, base-edge adjacency, + * vertices) + * @param preservePerimeter when true, perimeter/constraint borders must be kept + * @param dropped identity set to populate with base edges to drop + */ + void markDropped(MeshCtx ctx, boolean preservePerimeter, Set dropped); + } + + // Collected mesh context from the TIN + private static final class MeshCtx { + final List tris; + final Map edgeAdj; // base edge -> up to 2 incident triangle ids + final List vertices; // unique vertices seen in accepted triangles + + MeshCtx(List tris, Map edgeAdj, List vertices) { + this.tris = tris; + this.edgeAdj = edgeAdj; + this.vertices = vertices; + } + } + + // Triangle record with oriented half-edges (triangle on left) + private static record TriRec(IQuadEdge[] e) { + public TriRec { + if (e == null) + throw new NullPointerException("e"); + if (e.length != 3) + throw new IllegalArgumentException("array must be length 3"); + e = e.clone(); // defensive copy before assignment + } + + public TriRec(IQuadEdge a, IQuadEdge b, IQuadEdge c) { + this(new IQuadEdge[] { a, b, c }); + } + + // override accessor to return a copy so callers can't mutate internal array + @Override + public IQuadEdge[] e() { + return e.clone(); + } + } + + // Simple DSU + private static final class DSU { + final int[] p, r; + + DSU(int n) { + p = new int[n]; + r = new int[n]; + for (int i = 0; i < n; i++) { + p[i] = i; + } + } + + int find(int x) { + return p[x] == x ? x : (p[x] = find(p[x])); + } + + void union(int a, int b) { + a = find(a); + b = find(b); + if (a == b) { + return; + } + if (r[a] < r[b]) { + int t = a; + a = b; + b = t; + } + p[b] = a; + if (r[a] == r[b]) { + r[a]++; + } + } + } + +} diff --git a/src/main/java/micycle/pgs/commons/GaussianLineSmoothing.java b/src/main/java/micycle/pgs/commons/GaussianLineSmoothing.java index 5b292fb1..a82078a0 100644 --- a/src/main/java/micycle/pgs/commons/GaussianLineSmoothing.java +++ b/src/main/java/micycle/pgs/commons/GaussianLineSmoothing.java @@ -3,6 +3,7 @@ import java.util.ArrayList; import java.util.List; +import org.locationtech.jts.algorithm.Area; import org.locationtech.jts.algorithm.Orientation; import org.locationtech.jts.geom.Coordinate; import org.locationtech.jts.geom.CoordinateSequence; @@ -38,9 +39,9 @@ private GaussianLineSmoothing() { * of its neighbors, weighted by a gaussian kernel. For non-closed lines, the * initial and final points are preserved. * - * @param line The input line + * @param line The input line * @param sigmaM The standard deviation of the gaussian kernel. The larger, the - * more smoothed. + * more smoothed. */ public static LineString get(LineString line, double sigmaM) { if (line == null) { @@ -171,6 +172,78 @@ public static LineString get(LineString line, double sigmaM) { return line.getFactory().createLineString(out); } + /** + * Smooths a line using Gaussian convolution with a normalised amount in [0..1]. + *
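+ * <p>
+ * Illustrative sketch (ring construction elided); the meaning of {@code amount}
+ * is detailed below:
+ *
+ * <pre>
+ * LineString softened = GaussianLineSmoothing.getNormalised(ring, 0.5);
+ * </pre>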

      + * {@code amount=0} returns a copy of the input. {@code amount=1} forces the + * same collapse behavior as the internal extreme-sigma fallback. For values in + * (0,1), the mapping is scale-aware (closed rings use a thickness-based + * characteristic length) so the perceived smoothing level is more consistent + * across different-sized shapes. + * + * @param line input line/ring + * @param amount normalised smoothing amount in [0..1] + * @return smoothed line (new geometry) + */ + public static LineString getNormalised(LineString line, double amount) { + if (line == null) { + return null; + } + + amount = Math.max(0.0, Math.min(1.0, amount)); + if (amount == 0.0) { + return (LineString) line.copy(); + } + + final double length = line.getLength(); + if (length <= 0) { + return (LineString) line.copy(); + } + + final boolean isClosed = line.isClosed(); + + // Collapse threshold used by get(...) + final double sigmaCollapse = length / 3.0; + + // Characteristic "small-sigma" scale: + // - closed ring: hydraulic radius (thickness-aware) + // - open line: fall back to a fraction of length (no area available) + double L0; + if (isClosed && line instanceof LinearRing) { + Coordinate[] coords = line.getCoordinates(); + double A = Area.ofRing(coords); + double P = length; + if (A <= 0 || P <= 0) { + return (LineString) line.copy(); + } + L0 = (2.0 * A) / P; + } else { + // reasonable default for open lines; tweak if you have a better width estimate + L0 = 0.05 * length; + } + + // Normalized mapping: sigmaM in [0..sigmaCollapse), hits sigmaCollapse only at + // s=1 + double s = amount; + double a = L0 / sigmaCollapse; + + // sigmaM = sigmaCollapse * (a*s) / ((1-s) + a*s) + double denom = (1.0 - s) + a * s; + + double sigmaM; + if (amount >= 1.0) { + sigmaM = sigmaCollapse * 1.000001; // force ">" to trigger collapse branch + } else if (denom <= 0) { + sigmaM = sigmaCollapse * 0.999999; + } else { + sigmaM = sigmaCollapse * (a * s) / denom; + // strictly keep below collapse for s<1 + sigmaM = Math.min(sigmaM, sigmaCollapse * 0.999999); + } + + return GaussianLineSmoothing.get(line, sigmaM); + } + // Stable resampling: // - samples-per-sigma fixed // - count quantized (COUNT_QUANTUM) diff --git a/src/main/java/micycle/pgs/commons/GeneticColoring.java b/src/main/java/micycle/pgs/commons/GeneticColoring.java index a58cea5e..47e01a3b 100644 --- a/src/main/java/micycle/pgs/commons/GeneticColoring.java +++ b/src/main/java/micycle/pgs/commons/GeneticColoring.java @@ -1,11 +1,10 @@ package micycle.pgs.commons; import java.util.ArrayList; +import java.util.Arrays; import java.util.Comparator; import java.util.HashMap; -import java.util.HashSet; import java.util.List; -import java.util.Map; import java.util.SplittableRandom; import org.jgrapht.Graph; @@ -14,56 +13,63 @@ import org.jgrapht.util.CollectionUtil; /** - * Finds a solution to a graph coloring using a genetic algorithm. + * Memetic (GA + local search) solver for the k-Coloring problem on a JGraphT + * graph. *

- * This class implements the technique described in Genetic Algorithm Applied
- * to the Graph Coloring Problem by Musa M. Hindi and Roman V.
- * Yampolskiy.
- *
- * The genetic algorithm process continues until it either finds a solution
- * (i.e. 0 conflicts between adjacent vertices) or the algorithm has been run
- * for the predefined number of generations.
+ * The algorithm searches for a proper coloring by minimizing the number of
+ * conflicting edges (adjacent vertices with the same color). It targets a
+ * 4-coloring first; if unsuccessful within the generation limit, it repairs the
+ * best found assignment into a proper coloring using up to 5 colors.
+ *
- * The goal of the algorithm is to improve the fitness of the population (a
- * coloring) by mating its fittest individuals to produce superior offspring
- * that offer a better solution to the problem. This process continues until a
- * terminating condition is reached which could be simply that the total number
- * of generations has been run or any other parameter like non-improvement of
- * fitness over a certain number of generations or that a solution for the
- * problem has been found.
- *
- * @author Soroush Javadi
- * @author Refactored for JGraphT by Michael Carleton
+ * Approach (concise):
+ * <ul>
+ * <li>Representation: int chromosome of length |V|; gene = color in [0,
+ * k-1].</li>
+ * <li>Fitness: number of conflicting edges (counted once per edge).</li>
+ * <li>Initialization: strong seeds via DSATUR and degree-ordered greedy, plus
+ * random individuals.</li>
+ * <li>Selection: tournament; elitism preserves the best individuals each
+ * generation.</li>
+ * <li>Crossover: uniform (favoring agreement) and occasional 2-point
+ * crossover.</li>
+ * <li>Mutation: conflict-directed; for conflicted vertices choose a valid color
+ * using a boolean mask (micro-optimized); otherwise minimize conflicts.</li>
+ * <li>Local search: steepest-descent conflict repair applied to each child
+ * (memetic step).</li>
+ * <li>Diversification: partial re-seeding on stagnation to escape local
+ * minima.</li>
+ * <li>Precomputation: integer vertex IDs and neighbor arrays for fast inner
+ * loops.</li>
+ * <li>Termination: success when fitness = 0 or after maxGenerations; fallback
+ * repair guarantees a proper coloring.</li>
+ * </ul>
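+ * <p>
+ * Usage sketch (graph construction elided; the seed makes runs repeatable):
+ *
+ * <pre>
+ * GeneticColoring coloring = new GeneticColoring(conflictGraph, 42L);
+ * var vertexColors = coloring.getColoring().getColors();
+ * </pre>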
      * - * @param the graph vertex type - * @param the graph edge type + * @param graph vertex type + * @param graph edge type */ public class GeneticColoring implements VertexColoringAlgorithm { private final int vertexCount; private final int maxGenerations; private final int populationSize; - // fitness threshold for choosing a parent selection and mutation algorithm private final int fitnessThreshold; private SplittableRandom rand; private int colorsCount; - final Map colors; + final java.util.Map colors; private final List neighborCache; - final Map vertexIds; - - /** - * Creates with a population size of 50; "the value was chosen after testing a - * number of different population sizes. The value 50 was the least value that - * produced the desired results". - * - * @param graph - */ - public GeneticColoring(Graph graph) { - this(graph, 100, 50, 4); + final java.util.Map vertexIds; + + // Precomputed degrees and degree order + private final int[] degrees; + private final int[] degreeOrder; + + public GeneticColoring(Graph graph, long seed) { + this(graph, 200, 100, 4, seed); // larger defaults to give memetic search room } - public GeneticColoring(Graph graph, int maxGenerations, int populationSize, int fitnessThreshold) { + public GeneticColoring(Graph graph, int maxGenerations, int populationSize, int fitnessThreshold, long seed) { if (graph == null || maxGenerations < 1 || populationSize < 2) { throw new IllegalArgumentException(); } @@ -72,32 +78,45 @@ public GeneticColoring(Graph graph, int maxGenerations, int populationSize this.maxGenerations = maxGenerations; this.populationSize = populationSize; this.fitnessThreshold = fitnessThreshold; - this.rand = new SplittableRandom(); // NOTE unseeded + this.rand = new SplittableRandom(seed); this.colors = CollectionUtil.newHashMapWithExpectedSize(graph.vertexSet().size()); + // Vertex IDs 0..n-1 this.vertexIds = new HashMap<>(); int i = 0; for (V v : graph.vertexSet()) { - vertexIds.put(v, i); + vertexIds.put(v, i++); } - this.neighborCache = new ArrayList<>(); - final NeighborCache neighborCacheJT = new NeighborCache<>(graph); + // Neighbor cache indexed by ID + this.neighborCache = new ArrayList<>(vertexCount); + for (int k = 0; k < vertexCount; k++) + neighborCache.add(null); + final NeighborCache nc = new NeighborCache<>(graph); for (V v : graph.vertexSet()) { - neighborCache.add(neighborCacheJT.neighborsOf(v).stream().map(vertexIds::get) // map vertex objects to Integer ID - .mapToInt(Integer::intValue) // Integer -> int - .toArray()); + int id = vertexIds.get(v); + int[] neigh = nc.neighborsOf(v).stream().map(vertexIds::get).mapToInt(Integer::intValue).toArray(); + neighborCache.set(id, neigh); + } + + // Degrees and degree-descending order for greedy seeding + this.degrees = new int[vertexCount]; + for (int v = 0; v < vertexCount; v++) + degrees[v] = neighborCache.get(v).length; + this.degreeOrder = new int[vertexCount]; + { + Integer[] idx = new Integer[vertexCount]; + for (int v = 0; v < vertexCount; v++) + idx[v] = v; + Arrays.sort(idx, (a, b) -> Integer.compare(degrees[b], degrees[a])); + for (int k = 0; k < vertexCount; k++) + degreeOrder[k] = idx[k]; } } @Override public Coloring getColoring() { - /* - * For PGS, attempt to find a 4-color (optimal) solution within the - * maxGenerations threshold. If a solution is not found, find a 5-color solution - * instead (much more attainable/faster). 
- */ if (!getSolution(4)) { getSolution(5); } @@ -121,191 +140,354 @@ private int[] neighborsOf(int v) { return neighborCache.get(v); } + private int[] greedyColoringByOrder(int[] order) { + int[] chrom = new int[vertexCount]; + Arrays.fill(chrom, -1); + boolean[] used = new boolean[colorsCount]; + int[] candidates = new int[colorsCount]; + for (int idx = 0; idx < vertexCount; idx++) { + int v = order[idx]; + Arrays.fill(used, false); + for (int u : neighborsOf(v)) { + int cu = chrom[u]; + if (cu >= 0) + used[cu] = true; + } + int n = 0; + for (int c = 0; c < colorsCount; c++) + if (!used[c]) + candidates[n++] = c; + chrom[v] = (n > 0) ? candidates[rand.nextInt(n)] : rand.nextInt(colorsCount); + } + return chrom; + } + + // DSATUR seeding (bounded to colorsCount; if blocked, choose least-conflicting + // color) + private int[] dsaturColoring() { + int[] color = new int[vertexCount]; + Arrays.fill(color, -1); + int[] sat = new int[vertexCount]; // saturation degree + int[] usedCounts = new int[vertexCount]; // temporary reuse per vertex + boolean[][] neighborColors = new boolean[vertexCount][colorsCount]; + + for (int colored = 0; colored < vertexCount; colored++) { + int v = -1, bestSat = -1, bestDeg = -1; + for (int u = 0; u < vertexCount; u++) { + if (color[u] != -1) + continue; + int s = sat[u]; + int d = degrees[u]; + if (s > bestSat || (s == bestSat && d > bestDeg)) { + bestSat = s; + bestDeg = d; + v = u; + } + } + Arrays.fill(neighborColors[v], false); + for (int w : neighborsOf(v)) + if (color[w] >= 0) + neighborColors[v][color[w]] = true; + int pick = -1, bestConf = Integer.MAX_VALUE; + for (int c = 0; c < colorsCount; c++) { + int conf = 0; + for (int w : neighborsOf(v)) + if (color[w] == c) + conf++; + if (!neighborColors[v][c]) { + pick = c; + break; + } // conflict-free available + if (conf < bestConf) { + bestConf = conf; + pick = c; + } + } + color[v] = pick; + // update saturation of neighbors + for (int w : neighborsOf(v)) { + if (color[w] == -1) { + if (!neighborColors[w][pick]) { + neighborColors[w][pick] = true; + sat[w]++; + } + } + } + } + return color; + } + + private int[] shuffledOrderFrom(int[] base) { + int[] order = base.clone(); + for (int i = order.length - 1; i > 0; i--) { + int j = rand.nextInt(i + 1); + int t = order[i]; + order[i] = order[j]; + order[j] = t; + } + return order; + } + private class Population { - private List population; // NOTE heap suitable? + private List population; private int generation = 0; + private int bestSoFar = Integer.MAX_VALUE; + private int stagnantGens = 0; + + // memetic parameters + private final int eliteCount = Math.max(2, populationSize / 5); + private final int tournamentK = 3; + private final int localSearchCap = Math.max(200, vertexCount * 10); Population() { population = new ArrayList<>(populationSize); - for (int i = 0; i < populationSize; i++) { - population.add(new Individual()); + + // Strong seeds + population.add(new Individual(dsaturColoring())); + population.add(new Individual(greedyColoringByOrder(degreeOrder))); + + int greedySeeds = Math.max(2, populationSize / 10); + for (int s = 2; s < greedySeeds; s++) { + int[] order = shuffledOrderFrom(degreeOrder); + population.add(new Individual(greedyColoringByOrder(order))); } + + while (population.size() < populationSize) + population.add(new Individual()); + sort(); + bestSoFar = bestFitness(); } - /** - * With each generation the bottom performing half of the population is removed - * and new randomly generated chromosomes are added. 
- */ public void nextGeneration() { - final int halfSize = populationSize / 2; - List children = new ArrayList<>(halfSize); - for (int i = 0; i < halfSize; i++) { - Parents parents = selectParents(); - Individual child = new Individual(parents); + // Elitism: keep top eliteCount + List next = new ArrayList<>(populationSize); + for (int i = 0; i < eliteCount; i++) + next.add(population.get(i)); + + // Fill the rest with children + while (next.size() < populationSize) { + Individual p1 = tournamentSelect(); + Individual p2 = tournamentSelect(); + Individual child = rand.nextBoolean() ? new Individual(new Parents(p1, p2)) : new Individual(twoPointCrossover(p1, p2)); child.mutate(); - children.add(child); - } - for (int i = 0; i < halfSize; i++) { - population.set(populationSize - i - 1, children.get(i)); + child.localSearch(localSearchCap); // memetic step + next.add(child); } + + population = next; sort(); generation++; + + int best = bestFitness(); + if (best < bestSoFar) { + bestSoFar = best; + stagnantGens = 0; + } else { + stagnantGens++; + if (stagnantGens >= 20 && best > 0) { + diversify(); + stagnantGens = 0; + bestSoFar = bestFitness(); + } + } } - /** - * - * @return the best/fittest color assignment - */ - public int[] bestIndividual() { - return population.get(0).chromosome; + private Individual tournamentSelect() { + Individual best = null; + for (int i = 0; i < tournamentK; i++) { + Individual cand = population.get(rand.nextInt(populationSize)); + if (best == null || cand.fitness < best.fitness) + best = cand; + } + return best; } - public int bestFitness() { - return population.get(0).fitness; + private Parents twoPointCrossover(Individual a, Individual b) { + int i = rand.nextInt(vertexCount); + int j = rand.nextInt(vertexCount); + if (i > j) { + int t = i; + i = j; + j = t; + } + int[] child = new int[vertexCount]; + System.arraycopy(a.chromosome, 0, child, 0, i); + System.arraycopy(b.chromosome, i, child, i, j - i); + System.arraycopy(a.chromosome, j, child, j, vertexCount - j); + Individual wrapped = new Individual(child); + return new Parents(wrapped, wrapped); // reuse Individual(Parents) signature via a wrapper } - public int generation() { - return generation; + private void diversify() { + // re-seed the weakest third + int start = (int) (populationSize * 0.67); + for (int i = start; i < populationSize; i++) { + if (rand.nextDouble() < 0.5) { + population.set(i, new Individual()); + } else { + if (rand.nextBoolean()) + population.set(i, new Individual(dsaturColoring())); + else + population.set(i, new Individual(greedyColoringByOrder(shuffledOrderFrom(degreeOrder)))); + } + } + sort(); } - /** - * Choosing a parent selection and mutation method depends on the state of the - * population and how close it is to finding a solution. - *

      - * If the best fitness is greater than {@code fitnessThreshold} then - * parentSelection1 and mutation1 are used. Otherwise, parentSelection2 and - * mutation2 are used. This alteration is the result of experimenting with the - * different data sets. It was observed that when the best fitness score is low - * (i.e. approaching an optimum) the usage of parent selection 2 (which copies - * the best chromosome as the new child) along with mutation2 (which randomly - * selects a color for the violating vertex) results in a solution more often - * and more quickly than using the other two respective methods. - */ - private Parents selectParents() { - return bestFitness() > fitnessThreshold ? selectParents1() : selectParents2(); + public int[] bestIndividual() { + return population.get(0).chromosome; } - private Parents selectParents1() { - Individual tempParent1, tempParent2, parent1, parent2; - tempParent1 = population.get(rand.nextInt(populationSize)); - do { - tempParent2 = population.get(rand.nextInt(populationSize)); - } while (tempParent1 == tempParent2); - parent1 = (tempParent1.fitness > tempParent2.fitness ? tempParent2 : tempParent1); - do { - tempParent1 = population.get(rand.nextInt(populationSize)); - do { - tempParent2 = population.get(rand.nextInt(populationSize)); - } while (tempParent1 == tempParent2); - parent2 = (tempParent1.fitness > tempParent2.fitness ? tempParent2 : tempParent1); - } while (parent1 == parent2); - return new Parents(parent1, parent2); + public int bestFitness() { + return population.get(0).fitness; } - private Parents selectParents2() { - return new Parents(population.get(0), population.get(1)); + public int generation() { + return generation; } private void sort() { population.sort(Comparator.comparingInt(m -> m.fitness)); } - /** - * A candidate graph coloring. - */ private class Individual { - /** - * each element of chromosome represents a color of a vertex - */ private int[] chromosome; - /** - * fitness is defined as the number of 'bad' edges, i.e., edges connecting two - * vertices with the same color - */ private int fitness; - /** - * Instantiate a random individual. - */ Individual() { chromosome = new int[vertexCount]; - for (int i = 0; i < vertexCount; i++) { + for (int i = 0; i < vertexCount; i++) chromosome[i] = rand.nextInt(colorsCount); - } scoreFitness(); } - // crossover - Individual(Parents parents) { - chromosome = new int[vertexCount]; - final int crosspoint = rand.nextInt(vertexCount); - System.arraycopy(parents.parent1.chromosome, 0, chromosome, 0, crosspoint); - System.arraycopy(parents.parent2.chromosome, crosspoint, chromosome, crosspoint, vertexCount - crosspoint); + Individual(int[] chromosome) { + this.chromosome = chromosome.clone(); scoreFitness(); } - public void mutate() { - if (bestFitness() > fitnessThreshold) { - mutate1(); - } else { - mutate2(); + // Uniform crossover favoring agreement + Individual(Parents parents) { + chromosome = new int[vertexCount]; + final int[] p1 = parents.parent1.chromosome; + final int[] p2 = parents.parent2.chromosome; + for (int i = 0; i < vertexCount; i++) { + int c1 = p1[i], c2 = p2[i]; + chromosome[i] = (c1 == c2) ? c1 : (rand.nextBoolean() ? 
c1 : c2); } + scoreFitness(); } - private void mutate1() { - for (int v = 0; v < vertexCount; v++) { - for (int w : neighborsOf(v)) { - if (chromosome[v] == chromosome[w]) { - HashSet validColors = new HashSet<>(); - for (int c = 0; c < colorsCount; c++) { - validColors.add(c); - } - for (int u : neighborsOf(v)) { - validColors.remove(chromosome[u]); + public void mutate() { + // Conflict-directed mutation + boolean[] used = new boolean[colorsCount]; + int[] candidates = new int[colorsCount]; + + // early phase vs late phase adapts intensity + int passes = (bestFitness() > fitnessThreshold) ? 2 : 1; + for (int pass = 0; pass < passes; pass++) { + for (int v = 0; v < vertexCount; v++) { + int cv = chromosome[v]; + boolean conflict = false; + for (int w : neighborsOf(v)) { + if (cv == chromosome[w]) { + conflict = true; + break; } - if (!validColors.isEmpty()) { - chromosome[v] = (int) validColors.toArray()[rand.nextInt(validColors.size())]; + } + if (!conflict) + continue; + + // try best color for v + Arrays.fill(used, false); + for (int u : neighborsOf(v)) + used[chromosome[u]] = true; + int n = 0; + for (int c = 0; c < colorsCount; c++) + if (!used[c]) + candidates[n++] = c; + if (n > 0) { + chromosome[v] = candidates[rand.nextInt(n)]; + } else { + // pick color minimizing conflicts + int bestC = cv, bestConf = Integer.MAX_VALUE; + for (int c = 0; c < colorsCount; c++) { + int conf = 0; + for (int u : neighborsOf(v)) + if (chromosome[u] == c) + conf++; + if (conf < bestConf) { + bestConf = conf; + bestC = c; + } } - break; + chromosome[v] = bestC; } } } scoreFitness(); } - private void mutate2() { - for (int v = 0; v < vertexCount; v++) { - for (int w : neighborsOf(v)) { - if (chromosome[v] == chromosome[w]) { - chromosome[v] = rand.nextInt(colorsCount); - break; + // Local search: steepest-descent conflict repair with cap + void localSearch(int cap) { + int moves = 0; + boolean improved = true; + int[] colorCounts = new int[colorsCount]; + + while (improved && moves < cap && fitness > 0) { + improved = false; + + // Build a random permutation of vertices to reduce bias + int[] order = shuffledOrderFrom(degreeOrder); + for (int idx = 0; idx < vertexCount && moves < cap; idx++) { + int v = order[idx]; + int cv = chromosome[v]; + + int currentConf = 0; + for (int w : neighborsOf(v)) + if (chromosome[w] == cv) + currentConf++; + if (currentConf == 0) + continue; + + Arrays.fill(colorCounts, 0); + for (int w : neighborsOf(v)) + colorCounts[chromosome[w]]++; + + int bestC = cv; + int bestScore = currentConf; + for (int c = 0; c < colorsCount; c++) { + int score = colorCounts[c]; + if (score < bestScore || (score == bestScore && c < bestC)) { + bestScore = score; + bestC = c; + } + } + if (bestC != cv) { + chromosome[v] = bestC; + moves++; + // Recompute fitness lazily after a batch; here recompute fully occasionally + if ((moves & 15) == 0) + scoreFitness(); + improved = true; } } + // finalize fitness recompute + scoreFitness(); } - scoreFitness(); } - /** - * A bad edge is defined as an edge connecting two vertices that have the same - * color. The number of bad edges is the fitness score for the chromosome - * (higher number is worse fitness). - */ private void scoreFitness() { int f = 0; for (int v = 0; v < vertexCount; v++) { - for (int w : neighborsOf(v)) { - if (chromosome[v] == chromosome[w]) { + int cv = chromosome[v]; + for (int w : neighborsOf(v)) + if (w > v && cv == chromosome[w]) f++; - } - } } - /* - * Divide by 2 to account for double counting. 
Doesn't really matter if we're - * simply using fitness score to sort individuals. - */ - fitness = f / 2; + fitness = f; } } diff --git a/src/main/java/micycle/pgs/commons/GonHeuristic.java b/src/main/java/micycle/pgs/commons/GonHeuristic.java new file mode 100644 index 00000000..5ca1b82a --- /dev/null +++ b/src/main/java/micycle/pgs/commons/GonHeuristic.java @@ -0,0 +1,125 @@ +package micycle.pgs.commons; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; +import java.util.List; +import java.util.Objects; +import java.util.Random; +import java.util.function.ToDoubleBiFunction; + +/** + * Gonzalez (Gon) heuristic for the k-center problem on an arbitrary vertex + * collection with a distance function. + *

      + * Chooses the first center uniformly at random, then repeatedly adds the + * farthest vertex from the current center set (ties -> lowest index in the + * input iteration order). + *
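A hedged usage sketch for the heuristic (hypothetical driver code, not part of this diff). It assumes the vertex type parameter is declared on `getCenters` (if it is on the class instead, the type argument moves to the constructor) and uses plain Euclidean distance between JTS `Coordinate`s:

```java
import java.util.List;
import java.util.Random;

import org.locationtech.jts.geom.Coordinate;

import micycle.pgs.commons.GonHeuristic;

public class GonHeuristicDemo {

    public static void main(String[] args) {
        List<Coordinate> sites = List.of(
                new Coordinate(0, 0), new Coordinate(10, 1), new Coordinate(9, 9),
                new Coordinate(1, 10), new Coordinate(5, 5), new Coordinate(20, 20));

        // Seeded RNG -> the randomly chosen first center is reproducible.
        GonHeuristic gon = new GonHeuristic(new Random(42));

        // Pick k = 3 centers, roughly well spread over the input sites.
        List<Coordinate> centers = gon.getCenters(sites, 3, (a, b) -> a.distance(b));
        centers.forEach(System.out::println);
    }
}
```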

      + * Time: O(k * |V|) distance evaluations Space: O(|V|) + */ +public final class GonHeuristic { + + private final Random rng; + + public GonHeuristic(Random rng) { + this.rng = Objects.requireNonNull(rng, "rng"); + } + + public List getCenters(Collection vertices, int k, ToDoubleBiFunction distFunc) { + Objects.requireNonNull(vertices, "vertices"); + Objects.requireNonNull(distFunc, "distFunc"); + + final int n = vertices.size(); + if (k <= 0) { + return new ArrayList<>(); + } + if (n == 0) { + throw new IllegalArgumentException("vertices must be non-empty"); + } + if (n < k) { + throw new IllegalArgumentException("number of vertices must be at least k"); + } + if (n == k) { + return new ArrayList<>(); + } + + // Indexable storage (fast scans, stable order for tie-breaking). + @SuppressWarnings("unchecked") + final V[] verts = (V[]) vertices.toArray(); + + // Bookkeeping + final boolean[] isCenter = new boolean[n]; + final double[] minDist = new double[n]; + Arrays.fill(minDist, Double.POSITIVE_INFINITY); + + // Centers by index, in selection order + final int[] centers = new int[k]; + int centerCount = 0; + + // Pick first center randomly + final int first = rng.nextInt(n); + isCenter[first] = true; + minDist[first] = 0.0; + centers[centerCount++] = first; + + // Initialize minDist to first center + final V firstV = verts[first]; + for (int i = 0; i < n; i++) { + if (isCenter[i]) { + continue; + } + final V vI = verts[i]; + final double d = distFunc.applyAsDouble(vI, firstV); + if (Double.isNaN(d)) { + throw new IllegalArgumentException("distFunc returned NaN for (" + vI + ", " + firstV + ")"); + } + minDist[i] = d; + } + + // Main loop + while (centerCount < k) { + // Find farthest non-center (ties -> lowest index) + int farthest = -1; + double maxD = -1.0; + for (int i = 0; i < n; i++) { + if (isCenter[i]) { + continue; + } + final double d = minDist[i]; + if (d > maxD || (d == maxD && i < farthest)) { + maxD = d; + farthest = i; + } + } + + // Add farthest as new center + isCenter[farthest] = true; + minDist[farthest] = 0.0; + centers[centerCount++] = farthest; + + // Update minDist using only the newly added center + final V newC = verts[farthest]; + for (int i = 0; i < n; i++) { + if (isCenter[i]) { + continue; + } + final V vI = verts[i]; + final double d = distFunc.applyAsDouble(vI, newC); + if (Double.isNaN(d)) { + throw new IllegalArgumentException("distFunc returned NaN for (" + vI + ", " + newC + ")"); + } + if (d < minDist[i]) { + minDist[i] = d; + } + } + } + + final List result = new ArrayList<>(k); + for (int i = 0; i < k; i++) { + final V v = verts[centers[i]]; + result.add(v); + } + return result; + } +} \ No newline at end of file diff --git a/src/main/java/micycle/pgs/commons/GreedyTSP.java b/src/main/java/micycle/pgs/commons/GreedyTSP.java index be0303bd..a5b2cec5 100644 --- a/src/main/java/micycle/pgs/commons/GreedyTSP.java +++ b/src/main/java/micycle/pgs/commons/GreedyTSP.java @@ -1,302 +1,540 @@ package micycle.pgs.commons; +import java.util.ArrayList; import java.util.Arrays; -import java.util.BitSet; +import java.util.Collection; import java.util.List; import java.util.function.ToDoubleBiFunction; -import java.util.stream.Collectors; /** - * A high-performance implementation of the Traveling Salesman Problem (TSP) - * using a greedy construction heuristic followed by 2-opt local search - * improvement. - * - *

      Algorithm Overview

      + * Heuristic Traveling Salesman Problem (TSP) tour builder for a complete, + * weighted graph. *

      - * This implementation uses a two-phase approach: + * Given a list of vertices and a distance function, this class computes a fast, + * high-quality approximate Hamiltonian cycle (tour) that visits every + * vertex exactly once and returns to the start. The solution is not guaranteed + * to be optimal. *
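To make that contract concrete, here is a small, hypothetical call site (driver code only; it assumes the class-level vertex type parameter implied by the `@param` tag and uses Euclidean distances between JTS `Coordinate`s):

```java
import java.util.List;

import org.locationtech.jts.geom.Coordinate;

import micycle.pgs.commons.GreedyTSP;

public class GreedyTspDemo {

    public static void main(String[] args) {
        List<Coordinate> cities = List.of(
                new Coordinate(0, 0), new Coordinate(3, 8), new Coordinate(7, 2),
                new Coordinate(9, 9), new Coordinate(4, 4), new Coordinate(1, 6));

        // Complete graph over the cities, with symmetric Euclidean edge weights.
        GreedyTSP<Coordinate> tsp = new GreedyTSP<>(cities, (a, b) -> a.distance(b));

        // Closed tour: cities.size() + 1 entries; first == last == cities.get(0).
        List<Coordinate> tour = tsp.getTour();
        tour.forEach(System.out::println);
    }
}
```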

      - *
        - *
1. Greedy Construction: Builds an initial tour by - * repeatedly selecting the shortest available edge that doesn't violate TSP - * constraints (no cycles except the final one, maximum degree 2 per - * vertex)
- *
2. 2-opt Improvement: Iteratively improves the tour by - * swapping edges until no further improvement is possible
- *
      * - * @param the type of vertices in the graph. Can be any type for which - * distances can be computed. + *

      Input / Graph Model

      + *
        + *
      • The input is treated as a complete graph over the provided vertices.
+ *
• Distances are assumed to be symmetric. This implementation precomputes a + * symmetric distance table by evaluating {@code distFunc} only for + * {@code i < j} and mirroring the result. If {@code distFunc} is asymmetric, + * the effective distance used will be {@code d(i,j)=d(j,i)=distFunc(i,j)} for + * {@code i < j}.
      • {@code distFunc} should be deterministic, side-effect free, and should + * not return NaN. Returning NaN or extreme values may lead to poor tours or + * undefined behavior.
+ *
      + * + *

      Output

      + *
        + *
      • {@link #getTour()} returns a closed tour: a + * {@code List} of length {@code n+1} where the first vertex is repeated at + * the end.
+ *
      • The returned tour is anchored to vertex index {@code 0} (the first input + * vertex) as the start/end point.
+ *
      * + *

      Determinism and Thread Safety

      + *
        + *
      • For a fixed vertex order and deterministic {@code distFunc}, the produced + * tour is deterministic.
+ *
      • Instances are effectively immutable after construction. Concurrent calls + * to {@link #getTour()} are safe provided {@code distFunc} itself is + * thread-safe and has no side effects.
+ *
      + * + * @param vertex type * @author Michael Carleton */ -public class GreedyTSP { +public final class GreedyTSP { + + // Tuning knobs (good defaults) + private static final int CANDIDATES_K = 24; // 16..40 common; higher => better, slower + private static final int RESTARTS_SMALL_N = 8; // more restarts => better, slower + private static final int RESTARTS_LARGE_N = 4; + private static final double EPS = 1e-12; - private final List vertices; + private final Object[] verts; private final ToDoubleBiFunction distFunc; - private final double[][] allDist; - public GreedyTSP(List vertices, ToDoubleBiFunction distFunc) { + private final int n; + private final double[] dist; // flat n*n + private final int[] rowBase; + + // Candidate lists: cand[i*k + t] = t-th nearest neighbor of i (sorted by + // distance) + private final int k; + private final int[] cand; + + public GreedyTSP(Collection vertices, ToDoubleBiFunction distFunc) { if (vertices == null || vertices.isEmpty()) { throw new IllegalArgumentException("Vertex list must not be null or empty"); } - this.vertices = List.copyOf(vertices); + this.verts = vertices.toArray(); this.distFunc = distFunc; - this.allDist = initDistanceTable(); + this.n = verts.length; + + this.rowBase = new int[n]; + for (int i = 0; i < n; i++) { + rowBase[i] = i * n; + } + + this.dist = new double[n * n]; + initDistanceTable(); + + this.k = Math.min(CANDIDATES_K, Math.max(0, n - 1)); + this.cand = (k == 0) ? new int[0] : buildCandidateLists(k); } - /** - * Build the full symmetric distance matrix. - */ - private double[][] initDistanceTable() { - int n = vertices.size(); - double[][] d = new double[n][n]; + @SuppressWarnings("unchecked") + private void initDistanceTable() { for (int i = 0; i < n; i++) { - d[i][i] = 0; + final int ri = rowBase[i]; + dist[ri + i] = 0.0; + final V vi = (V) verts[i]; for (int j = i + 1; j < n; j++) { - double dij = distFunc.applyAsDouble(vertices.get(i), vertices.get(j)); - d[i][j] = dij; - d[j][i] = dij; + final double dij = distFunc.applyAsDouble(vi, (V) verts[j]); + dist[ri + j] = dij; + dist[rowBase[j] + i] = dij; } } - return d; } /** - * Runs greedy construction heuristic, then improves with 2-opt, and returns a - * CLOSED tour (first vertex repeated at end). + * Returns a CLOSED tour (first vertex repeated at end). */ + @SuppressWarnings("unchecked") public List getTour() { - int n = vertices.size(); if (n == 1) { - return List.of(vertices.get(0), vertices.get(0)); + final V a = (V) verts[0]; + return List.of(a, a); } if (n == 2) { - V a = vertices.get(0), b = vertices.get(1); + final V a = (V) verts[0]; + final V b = (V) verts[1]; return List.of(a, b, a); } - // 1) Build tour using greedy edge selection - int[] tour = buildGreedyTour(); + final int restarts = (n <= 2000) ? RESTARTS_SMALL_N : RESTARTS_LARGE_N; - // 2) Improve with 2-opt - improve(tour); + final int[] bestNext = new int[n]; + double bestLen = Double.POSITIVE_INFINITY; - // 3) Map back to V - return Arrays.stream(tour).mapToObj(vertices::get).collect(Collectors.toList()); - } + // Working buffers reused across restarts (minimize allocations) + final int[] next = new int[n]; + final int[] prev = new int[n]; + final int[] order = new int[n]; - /** - * Edge record for efficient immutable edge representation. 
- */ - private record Edge(int u, int v, double weight) implements Comparable { - @Override - public int compareTo(Edge other) { - return Double.compare(this.weight, other.weight); - } - } + final int[] visitedStamp = new int[n]; + int stamp = 1; - /** - * Build tour using greedy edge selection with optimizations. - */ - private int[] buildGreedyTour() { - int n = vertices.size(); + final byte[] dlb2 = new byte[n]; + final byte[] dlbR = new byte[n]; - // Pre-allocate exact capacity - int edgeCount = n * (n - 1) / 2; - Edge[] edges = new Edge[edgeCount]; - int idx = 0; + // Deterministic seed set (keeps runs reproducible) + final int[] seeds = makeSeeds(restarts); - // Create edges array directly (avoid List overhead) - for (int i = 0; i < n; i++) { - for (int j = i + 1; j < n; j++) { - edges[idx++] = new Edge(i, j, allDist[i][j]); + for (int seed : seeds) { + if (++stamp == 0) { // extremely unlikely, but keep safe + Arrays.fill(visitedStamp, 0); + stamp = 1; } - } - Arrays.sort(edges); - // Use byte array for degrees (max degree is 2) - byte[] degree = new byte[n]; - UnionFind uf = new UnionFind(n); + buildNearestNeighborTour(seed, order, next, prev, visitedStamp, stamp); + localSearch(next, prev, dlb2, dlbR); + + final double len = tourLength(next); + if (len < bestLen) { + bestLen = len; + System.arraycopy(next, 0, bestNext, 0, n); + } + } - // Pre-allocate adjacency lists with exact capacity (2) - int[][] adj = new int[n][2]; + // Materialize as CLOSED tour starting from vertex 0 + final ArrayList out = new ArrayList<>(n + 1); + int cur = 0; for (int i = 0; i < n; i++) { - adj[i][0] = adj[i][1] = -1; + out.add((V) verts[cur]); + cur = bestNext[cur]; + } + out.add((V) verts[0]); + return out; + } + + private int[] makeSeeds(int restarts) { + final int[] seeds = new int[Math.min(restarts, n)]; + + // Always include 0 + seeds[0] = 0; + int count = 1; + if (count == seeds.length) { + return seeds; } - int edgesAdded = 0; - for (Edge e : edges) { - // Fast degree check - // Fast cycle check (skip for last edge) - if (degree[e.u] == 2 || degree[e.v] == 2 || (edgesAdded < n - 1 && uf.connected(e.u, e.v))) { - continue; + // Add farthest-from-0 (often a good diversification) + int far = 1; + double farD = dist[rowBase[0] + 1]; + for (int i = 2; i < n; i++) { + final double d = dist[rowBase[0] + i]; + if (d > farD) { + farD = d; + far = i; } + } + seeds[count++] = far; + if (count == seeds.length) { + return seeds; + } - // Add edge to adjacency (no list needed, max 2 neighbors) - adj[e.u][degree[e.u]] = e.v; - adj[e.v][degree[e.v]] = e.u; - degree[e.u]++; - degree[e.v]++; - uf.union(e.u, e.v); - - if (++edgesAdded == n) { - break; + // Add farthest-from-far + int far2 = 0; + double far2D = dist[rowBase[far] + 0]; + for (int i = 1; i < n; i++) { + final double d = dist[rowBase[far] + i]; + if (d > far2D) { + far2D = d; + far2 = i; } } + seeds[count++] = far2; + if (count == seeds.length) { + return seeds; + } - // Convert adjacency representation to tour array - return buildTourFromAdjacency(adj); + // Fill remaining deterministically (hash-like spread) + for (int i = count; i < seeds.length; i++) { + final long x = (i * 0x9E3779B97F4A7C15L); + seeds[i] = (int) Long.remainderUnsigned(x, n); + } + return seeds; } /** - * Convert adjacency array representation to tour array. Optimized to avoid list - * operations. + * Builds an NN tour; uses candidate list first, falls back to full scan if + * needed. 
*/ - private int[] buildTourFromAdjacency(int[][] adj) { - int n = vertices.size(); - int[] tour = new int[n + 1]; + private void buildNearestNeighborTour(int seed, int[] order, int[] next, int[] prev, int[] visitedStamp, int stamp) { + + order[0] = seed; + visitedStamp[seed] = stamp; + + int cur = seed; + for (int pos = 1; pos < n; pos++) { + int best = -1; + double bestD = Double.POSITIVE_INFINITY; + + // Fast attempt: search among k nearest candidates + if (k != 0) { + final int base = cur * k; + final int rc = rowBase[cur]; + for (int t = 0; t < k; t++) { + final int candNode = cand[base + t]; + if (visitedStamp[candNode] == stamp) { + continue; + } + final double d = dist[rc + candNode]; + best = candNode; + bestD = d; + break; // candidates are sorted by distance + } + } - // Use bitset for visited tracking (more cache-friendly) - BitSet visited = new BitSet(n); + // Fallback: exact NN (full scan) + if (best < 0) { + final int rc = rowBase[cur]; + for (int j = 0; j < n; j++) { + if (visitedStamp[j] == stamp) { + continue; + } + final double d = dist[rc + j]; + if (d < bestD) { + bestD = d; + best = j; + } + } + } - tour[0] = 0; - visited.set(0); - int current = 0; - int prev = -1; + order[pos] = best; + visitedStamp[best] = stamp; + cur = best; + } - // Follow the path (each vertex has exactly 2 neighbors) - for (int i = 1; i < n; i++) { - int next = adj[current][0]; - if (next == prev || visited.get(next)) { - next = adj[current][1]; + // Convert order[] into next/prev cycle + for (int i = 0; i < n - 1; i++) { + final int a = order[i]; + final int b = order[i + 1]; + next[a] = b; + prev[b] = a; + } + final int first = order[0]; + final int last = order[n - 1]; + next[last] = first; + prev[first] = last; + } + + private void localSearch(int[] next, int[] prev, byte[] dlb2, byte[] dlbR) { + Arrays.fill(dlb2, (byte) 0); + Arrays.fill(dlbR, (byte) 0); + + boolean improved; + do { + improved = false; + + // 2-opt phase + boolean changed2; + do { + changed2 = false; + for (int a = 0; a < n; a++) { + if (dlb2[a] != 0) { + continue; + } + if (tryTwoOptAt(a, next, prev, dlb2)) { + changed2 = true; + improved = true; + } else { + dlb2[a] = 1; + } + } + } while (changed2); + + // Relocation phase (Or-opt-1) + boolean changedR; + do { + changedR = false; + for (int x = 0; x < n; x++) { + if (dlbR[x] != 0) { + continue; + } + if (tryRelocateAt(x, next, prev, dlbR)) { + changedR = true; + improved = true; + } else { + dlbR[x] = 1; + } + } + } while (changedR); + + // After relocation, allow 2-opt bits to re-activate a bit + if (improved) { + Arrays.fill(dlb2, (byte) 0); } - tour[i] = next; - visited.set(next); - prev = current; - current = next; + } while (improved); + } + + private boolean tryTwoOptAt(int a, int[] next, int[] prev, byte[] dlb) { + final int b = next[a]; + final int ra = rowBase[a]; + final int rb = rowBase[b]; + + final double dab = dist[ra + b]; + + // Search c among candidate neighbors of a (sorted nearest-first) + if (k == 0) { + return false; } + final int base = a * k; - tour[n] = 0; // close the tour - return tour; + for (int t = 0; t < k; t++) { + final int c = cand[base + t]; + if (c == a || c == b) { + continue; + } + + final int d = next[c]; + if (d == a || d == b) + { + continue; // edges share an endpoint -> invalid 2-opt + } + + final double delta = (dist[ra + c] + dist[rb + d]) - (dab + dist[rowBase[c] + d]); + if (delta < -EPS) { + twoOptSwap(a, b, c, d, next, prev); + clearDlbAround(dlb, a, b, c, d, next, prev); + return true; + } + } + return false; } /** - * 
Optimized Union-Find with path compression and union by rank. + * 2-opt: remove (a,b) and (c,d), add (a,c) and (b,d) reversing segment [b..c]. */ - private static class UnionFind { - private final int[] parent; - private final byte[] rank; // rank never exceeds log(n) - - UnionFind(int n) { - parent = new int[n]; - rank = new byte[n]; - for (int i = 0; i < n; i++) { - parent[i] = i; + private static void twoOptSwap(int a, int b, int c, int d, int[] next, int[] prev) { + // Reverse pointers along the path from b to c following next[] + int x = b; + while (true) { + final int nx = next[x]; + final int px = prev[x]; + next[x] = px; + prev[x] = nx; + if (x == c) { + break; } + x = nx; } - int find(int x) { - int root = x; - // Find root - while (parent[root] != root) { - root = parent[root]; - } - // Path compression - while (x != root) { - int next = parent[x]; - parent[x] = root; - x = next; - } - return root; + // Reconnect endpoints + next[a] = c; + prev[c] = a; + + next[b] = d; + prev[d] = b; + } + + /** + * Or-opt-1: remove node x and insert it after a (between a and b=next[a]). + */ + private boolean tryRelocateAt(int x, int[] next, int[] prev, byte[] dlb) { + final int p = prev[x]; + final int q = next[x]; + + // If x is the only node? not possible here, but keep structure safe + if (p == x || q == x) { + return false; } - boolean connected(int x, int y) { - return find(x) == find(y); + final int rx = rowBase[x]; + final int rp = rowBase[p]; + + final double dpx = dist[rp + x]; + final double dxq = dist[rx + q]; + final double dpq = dist[rp + q]; + + if (k == 0) { + return false; } + final int base = x * k; + + for (int t = 0; t < k; t++) { + final int a = cand[base + t]; + if (a == x || a == p) { + continue; + } - void union(int x, int y) { - int px = find(x); - int py = find(y); - if (px == py) { - return; + final int b = next[a]; + if (b == x || b == q) + { + continue; // would reinsert into same place / adjacent issues } - // Union by rank - if (rank[px] < rank[py]) { - parent[px] = py; - } else if (rank[px] > rank[py]) { - parent[py] = px; - } else { - parent[py] = px; - rank[px]++; + // delta = (p,q) + (a,x) + (x,b) - (p,x) - (x,q) - (a,b) + final double delta = dpq + dist[rowBase[a] + x] + dist[rx + b] - dpx - dxq - dist[rowBase[a] + b]; + + if (delta < -EPS) { + // remove x + next[p] = q; + prev[q] = p; + + // insert x after a + next[a] = x; + prev[x] = a; + next[x] = b; + prev[b] = x; + + clearDlbAround(dlb, x, p, q, a, next, prev); + return true; } } + + return false; + } + + private static void clearDlbAround(byte[] dlb, int a, int b, int c, int d, int[] next, int[] prev) { + // Clear a few affected nodes + their immediate neighbors (cheap and effective) + clear(dlb, a); + clear(dlb, b); + clear(dlb, c); + clear(dlb, d); + clear(dlb, next[a]); + clear(dlb, prev[a]); + clear(dlb, next[b]); + clear(dlb, prev[b]); + clear(dlb, next[c]); + clear(dlb, prev[c]); + clear(dlb, next[d]); + clear(dlb, prev[d]); + } + + private static void clear(byte[] dlb, int i) { + if (i >= 0 && i < dlb.length) { + dlb[i] = 0; + } + } + + private double tourLength(int[] next) { + double sum = 0.0; + int cur = 0; + for (int i = 0; i < n; i++) { + final int nx = next[cur]; + sum += dist[rowBase[cur] + nx]; + cur = nx; + } + return sum; } /** - * Improve tour with 2-opt. Optimized with early termination and better cache - * patterns. + * Build k-nearest candidate lists for each node (sorted nearest-first). 
*/ - private void improve(int[] tour) { - int N = tour.length - 1; - double minImprovement = 1e-9; - int stallCount = 0; - int maxStalls = 3; // stop after 3 rounds with tiny improvements + private int[] buildCandidateLists(int k) { + final int[] out = new int[n * k]; + final int[] bestIdx = new int[k]; + final double[] bestD = new double[k]; - while (true) { - double bestDelta = 0; - int bestI = -1, bestJ = -1; - - // Cache-friendly iteration pattern - for (int i = 0; i < N - 2; i++) { - int ci = tour[i], ci1 = tour[i + 1]; - double currentEdge = allDist[ci][ci1]; - - // Start j from i+2 to avoid adjacent edges - for (int j = i + 2; j < N; j++) { - int cj = tour[j], cj1 = tour[j + 1]; - - // Quick calculation with early exit - double newEdges = allDist[ci][cj] + allDist[ci1][cj1]; - double oldEdges = currentEdge + allDist[cj][cj1]; - double delta = newEdges - oldEdges; - - if (delta < bestDelta) { - bestDelta = delta; - bestI = i; - bestJ = j; + for (int i = 0; i < n; i++) { + Arrays.fill(bestIdx, -1); + Arrays.fill(bestD, Double.POSITIVE_INFINITY); + + int maxPos = 0; + double maxVal = Double.POSITIVE_INFINITY; + + final int ri = rowBase[i]; + for (int j = 0; j < n; j++) { + if (j == i) { + continue; + } + final double d = dist[ri + j]; + if (d < maxVal) { + bestD[maxPos] = d; + bestIdx[maxPos] = j; + + // recompute current worst + maxPos = 0; + maxVal = bestD[0]; + for (int t = 1; t < k; t++) { + final double v = bestD[t]; + if (v > maxVal) { + maxVal = v; + maxPos = t; + } } } } - if (bestDelta < -minImprovement) { - // Apply the improvement - reverse(tour, bestI + 1, bestJ); - stallCount = 0; - } else if (bestDelta < 0) { - // Very small improvement - reverse(tour, bestI + 1, bestJ); - if (++stallCount >= maxStalls) { - break; + // sort bestIdx by bestD (small k => insertion sort is fine) + for (int a = 1; a < k; a++) { + final double kd = bestD[a]; + final int ki = bestIdx[a]; + int b = a - 1; + while (b >= 0 && bestD[b] > kd) { + bestD[b + 1] = bestD[b]; + bestIdx[b + 1] = bestIdx[b]; + b--; } - } else { - // No improvement found - break; + bestD[b + 1] = kd; + bestIdx[b + 1] = ki; } - } - } - /** - * Optimized in-place reverse using XOR swap for primitives. - */ - private void reverse(int[] tour, int from, int to) { - while (from < to) { - // XOR swap (avoids temp variable) - tour[from] ^= tour[to]; - tour[to] ^= tour[from]; - tour[from] ^= tour[to]; - from++; - to--; + final int base = i * k; + for (int t = 0; t < k; t++) { + out[base + t] = bestIdx[t]; + } } + + return out; } } \ No newline at end of file diff --git a/src/main/java/micycle/pgs/commons/HausdorffInterpolator.java b/src/main/java/micycle/pgs/commons/HausdorffInterpolator.java new file mode 100644 index 00000000..7a8a0529 --- /dev/null +++ b/src/main/java/micycle/pgs/commons/HausdorffInterpolator.java @@ -0,0 +1,142 @@ +package micycle.pgs.commons; + +import java.util.Objects; + +import org.locationtech.jts.algorithm.distance.DiscreteHausdorffDistance; +import org.locationtech.jts.geom.Geometry; +import org.locationtech.jts.geom.util.GeometryFixer; +import org.locationtech.jts.operation.buffer.BufferOp; +import org.locationtech.jts.operation.buffer.BufferParameters; + +import com.github.micycle1.geoblitz.ProHausdorffDistance; + +/** + * Interpolates ("morphs") between two planar shapes using a Hausdorff-distance + * construction. + *

      + * Intuition: Treat {@code A} and {@code B} as two ink blots. To obtain + * an in-between shape, inflate {@code A} a little and inflate {@code B} a + * little (by complementary amounts), then keep only the region where these two + * inflated blots overlap. When {@code α=0} the overlap is essentially + * {@code A}; when {@code α=1} it is essentially {@code B}; intermediate + * {@code α} values yield intermediate shapes. + *
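A hypothetical end-to-end sketch of the static methods defined further down in this file, morphing between two discs (the demo class and toy geometries are assumptions; the method calls themselves come from this file):

```java
import org.locationtech.jts.geom.Coordinate;
import org.locationtech.jts.geom.Geometry;
import org.locationtech.jts.geom.GeometryFactory;

import micycle.pgs.commons.HausdorffInterpolator;

public class HausdorffMorphDemo {

    public static void main(String[] args) {
        GeometryFactory gf = new GeometryFactory();
        // Two "ink blots": discs of different sizes at different positions.
        Geometry a = gf.createPoint(new Coordinate(0, 0)).buffer(40);
        Geometry b = gf.createPoint(new Coordinate(120, 30)).buffer(25);

        // Estimate d_H(A, B) by sampling the boundaries roughly every 2 units.
        double d = HausdorffInterpolator.estimateHausdorffDistance(a, b, 2);

        // Sample a few intermediate shapes between A (alpha=0) and B (alpha=1).
        for (double alpha = 0; alpha <= 1; alpha += 0.25) {
            Geometry s = HausdorffInterpolator.interpolate(a, b, alpha, d, 8);
            System.out.println("alpha=" + alpha + " -> area=" + s.getArea());
        }
    }
}
```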

      + * More formally, computes {@code S_α(A,B) = (A ⊕ D_{α d}) ∩ (B ⊕ D_{(1-α) d})}, + * where {@code d} is the (undirected) Hausdorff distance between {@code A} and + * {@code B}, {@code ⊕} is the Minkowski sum, and {@code D_r} is a disk of + * radius {@code r}. When {@code d} is the true Hausdorff distance, this is the + * "maximal" Hausdorff morph from:
      + * Marc van Kreveld, Tillmann Miltzow, Tim Ophelders, Willem Sonke, Jordi L. + * Vermeulen, Between Shapes, Using the Hausdorff Distance. + * + * @author Michael Carleton + */ +public final class HausdorffInterpolator { + + private HausdorffInterpolator() { + } + + /** + * Interpolates between two shapes using the Hausdorff morph: {@code S_α(A,B) = + * (A ⊕ D_{α d}) ∩ (B ⊕ D_{(1-α) d})}. + *

      + * If {@code d} is the true Hausdorff distance, then the resulting shape is the + * maximal set that is at Hausdorff distance {@code α d} from {@code A} and + * {@code (1-α) d} from {@code B}. + * + * @param a input geometry {@code A} (typically an area + * geometry) + * @param b input geometry {@code B} (typically an area + * geometry) + * @param alpha interpolation parameter in {@code [0,1]} + * ({@code 0 -> A}, {@code 1 -> B}) + * @param hausdorffDistance {@code d} in coordinate units; if {@code 0}, returns + * {@code A} (fixed) + * @param quadSegs number of quadrant segments used to approximate + * round buffers (higher is smoother/slower) + * @return the interpolated geometry {@code S_α(A,B)} (fixed for robustness) + * @throws NullPointerException if {@code a} or {@code b} is {@code null} + * @throws IllegalArgumentException if {@code alpha} is not in {@code [0,1]} or + * {@code hausdorffDistance < 0} + */ + public static Geometry interpolate(Geometry a, Geometry b, double alpha, double hausdorffDistance, int quadSegs) { + Objects.requireNonNull(a, "a"); + Objects.requireNonNull(b, "b"); + if (alpha < 0.0 || alpha > 1.0) { + throw new IllegalArgumentException("alpha must be in [0,1]"); + } + if (hausdorffDistance < 0.0) { + throw new IllegalArgumentException("hausdorffDistance must be >= 0"); + } + + // Normalise trivial cases + if (alpha == 0.0) { + return a; + } + if (alpha == 1.0) { + return b; + } + if (hausdorffDistance == 0.0) { + return a; // A and B coincide (or distance not meaningful) + } + Geometry aFix = GeometryFixer.fix(a); + Geometry bFix = GeometryFixer.fix(b); + + double rA = alpha * hausdorffDistance; + double rB = (1.0 - alpha) * hausdorffDistance; + + BufferParameters bp = new BufferParameters(); + bp.setQuadrantSegments(quadSegs); + bp.setEndCapStyle(BufferParameters.CAP_ROUND); + bp.setJoinStyle(BufferParameters.JOIN_ROUND); + + Geometry aOff = BufferOp.bufferOp(aFix, rA, bp); + Geometry bOff = BufferOp.bufferOp(bFix, rB, bp); + + Geometry s = aOff.intersection(bOff); + + return s; + } + + /** + * Convenience overload that first estimates {@code d = d_H(A,B)} + * (approximately) using {@link DiscreteHausdorffDistance}, then calls + * {@link #interpolate(Geometry, Geometry, double, double, int)}. + *

      + * The estimate is sampling-based: smaller {@code densifyFraction} increases + * sampling density along edges (slower, typically closer to the true Hausdorff + * distance). + * + * @param a input geometry {@code A} + * @param b input geometry {@code B} + * @param alpha interpolation parameter in {@code [0,1]} + * @param maxSegmentLength if > 0, densifies geometry segments so consecutive + * sample points are at most this far apart (in geometry + * units). If <= 0, uses only the geometry's existing + * vertices. + * @param quadSegs buffer roundness accuracy (quadrant segments) + * @return the interpolated geometry using the estimated Hausdorff distance + * @throws IllegalArgumentException if {@code densifyFraction} is not in + * {@code (0,1]} + */ + public static Geometry interpolateUsingEstimatedHausdorff(Geometry a, Geometry b, double alpha, double maxSegmentLength, int quadSegs) { + double d = estimateHausdorffDistance(a, b, maxSegmentLength); + return interpolate(a, b, alpha, d, quadSegs); + } + + /** + * Estimates the (undirected) Hausdorff distance between {@code a} and {@code b} + * using JTS {@link DiscreteHausdorffDistance}. + * + * @param a input geometry {@code A} + * @param b input geometry {@code B} + * @param maxSegmentLength if > 0, densifies geometry segments so consecutive + * sample points are at most this far apart (in geometry + * units). If <= 0, uses only the geometry's existing + * vertices. + * @return an approximation of {@code d_H(A,B)} in coordinate units + */ + public static double estimateHausdorffDistance(Geometry a, Geometry b, double maxSegmentLength) { + return ProHausdorffDistance.distance(a, b, 0.1, maxSegmentLength); // undirected discrete Hausdorff approximation + } +} \ No newline at end of file diff --git a/src/main/java/micycle/pgs/commons/KeilSnoeyinkConvexPartitioner.java b/src/main/java/micycle/pgs/commons/KeilSnoeyinkConvexPartitioner.java new file mode 100644 index 00000000..21b9b621 --- /dev/null +++ b/src/main/java/micycle/pgs/commons/KeilSnoeyinkConvexPartitioner.java @@ -0,0 +1,965 @@ +package micycle.pgs.commons; + +import java.util.ArrayDeque; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.List; +import java.util.NoSuchElementException; +import java.util.Objects; + +import org.locationtech.jts.algorithm.Orientation; +import org.locationtech.jts.geom.Coordinate; +import org.locationtech.jts.geom.GeometryFactory; +import org.locationtech.jts.geom.Polygon; +import org.locationtech.jts.geom.PrecisionModel; + +/** + * Keil & Snoeyink optimal convex partitioner. + * + *

      + * Computes an optimal convex partition of a polygon (possibly with holes) using + * the dynamic-programming algorithm originally exposed as ConvexPartition_OPT + * in the polypartition library. The algorithm minimizes the number of added + * diagonals (hence the number of convex pieces) and reconstructs an optimal + * partition of the input. + *

      + * Complexity: worst-case O(n^3) time. + *

      + * + * @author Michael Carleton + */ +public final class KeilSnoeyinkConvexPartitioner { + + private KeilSnoeyinkConvexPartitioner() { + } + + /** + * Computes an optimal convex partition of the given polygon. + * + *

      + * This is the public entry point for the Keil & Snoeyink + * dynamic-programming partition. Holes in the input polygon are bridged to + * produce simple polygons prior to the DP on each simple polygon. + *
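A minimal, hypothetical call-site sketch for this entry point (the WKT polygon and demo class are illustrative; `convexPartition(Polygon)` and its list of hole-free polygons come from this file):

```java
import java.util.List;

import org.locationtech.jts.geom.Polygon;
import org.locationtech.jts.io.WKTReader;

import micycle.pgs.commons.KeilSnoeyinkConvexPartitioner;

public class ConvexPartitionDemo {

    public static void main(String[] args) throws Exception {
        // A simple non-convex "L" shape with one reflex vertex at (4, 4).
        Polygon l = (Polygon) new WKTReader()
                .read("POLYGON ((0 0, 10 0, 10 4, 4 4, 4 10, 0 10, 0 0))");

        List<Polygon> parts = KeilSnoeyinkConvexPartitioner.convexPartition(l);
        // An optimal partition needs only one diagonal here, giving 2 convex pieces.
        System.out.println("pieces: " + parts.size());
        parts.forEach(System.out::println);
    }
}
```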

      + * + * @param input non-null Polygon (may contain holes) + * @return a non-null List of Polygons. Each returned polygon is simple (no + * holes), oriented CCW, and the union of the returned polygons equals + * the input polygon. + */ + public static List convexPartition(Polygon input) { + Objects.requireNonNull(input, "input"); + if (input.isEmpty()) { + return List.of(); + } + + if (!input.isValid()) { + throw new IllegalArgumentException("Input Polygon is not geometrically valid."); + } + + GeometryFactory gf = input.getFactory(); + PrecisionModel pm = gf.getPrecisionModel(); + + // Remove holes by bridging (same as original C++ library). + List> simplePolys = removeHoles(input, pm); + + List out = new ArrayList<>(); + for (List polyPts : simplePolys) { + List pts = cleanupOpenRing(polyPts); + pts = removeConsecutiveDuplicates(pts); + if (pts.size() < 3) { + continue; + } + + pts = ensureCCW(pts); + + if (isConvexWeakly(pts)) { + out.add(toJTSPolygon(pts, gf, pm)); + continue; + } + + List> parts = convexPartitionSimple(pts); + for (List part : parts) { + List p = cleanupOpenRing(part); + p = removeConsecutiveDuplicates(p); + if (p.size() < 3) { + continue; + } + p = ensureCCW(p); + out.add(toJTSPolygon(p, gf, pm)); + } + } + return out; + } + + private static List> convexPartitionSimple(List ptsCCW) { + final int n = ptsCCW.size(); + if (n < 3) { + throw new IllegalArgumentException("Polygon has < 3 vertices"); + } + if (n == 3) { + return List.of(List.copyOf(ptsCCW)); + } + + // vertices[] + final V[] vertices = new V[n]; + for (int i = 0; i < n; i++) { + vertices[i] = new V(ptsCCW.get(i)); + } + for (int i = 0; i < n; i++) { + vertices[i].prev = (i == 0) ? (n - 1) : (i - 1); + vertices[i].next = (i == n - 1) ? 0 : (i + 1); + } + for (int i = 1; i < n; i++) { + vertices[i].isConvex = !isReflex(vertices[vertices[i].prev].p, vertices[i].p, vertices[vertices[i].next].p); + } + vertices[0].isConvex = false; // by convention (as in C++) + + final boolean[][] visible = new boolean[n][n]; + final int[][] weight = new int[n][n]; + final DiagonalDeque[][] pairs = new DiagonalDeque[n][n]; + + // Initialize visibility and base weights. + for (int i = 0; i < n - 1; i++) { + final Coordinate p1 = vertices[i].p; + for (int j = i + 1; j < n; j++) { + visible[i][j] = true; + weight[i][j] = (j == i + 1) ? 0 : Integer.MAX_VALUE; + + if (j != i + 1) { + final Coordinate p2 = vertices[j].p; + + if (!inCone(vertices, i, p2)) { + visible[i][j] = false; + continue; + } + if (!inCone(vertices, j, p1)) { + visible[i][j] = false; + continue; + } + + for (int k = 0; k < n; k++) { + final Coordinate p3 = vertices[k].p; + final Coordinate p4 = vertices[(k == n - 1) ? 0 : (k + 1)].p; + if (intersects(p1, p2, p3, p4)) { + visible[i][j] = false; + break; + } + } + } + } + } + + // Triangles (gap=2) that are visible have weight 0 and one "pair". 
+ for (int i = 0; i < n - 2; i++) { + int j = i + 2; + if (visible[i][j]) { + weight[i][j] = 0; + pairs[i][j] = new DiagonalDeque(); + pairs[i][j].addLast(i + 1, i + 1); + } + } + visible[0][n - 1] = true; + + // DP + for (int gap = 3; gap < n; gap++) { + for (int i = 0; i < n - gap; i++) { + if (vertices[i].isConvex) { + continue; + } + int k = i + gap; + if (!visible[i][k]) { + continue; + } + + if (!vertices[k].isConvex) { + for (int j = i + 1; j < k; j++) { + typeA(i, j, k, vertices, visible, weight, pairs); + } + } else { + for (int j = i + 1; j < k - 1; j++) { + if (vertices[j].isConvex) { + continue; + } + typeA(i, j, k, vertices, visible, weight, pairs); + } + typeA(i, k - 1, k, vertices, visible, weight, pairs); + } + } + + for (int k = gap; k < n; k++) { + if (vertices[k].isConvex) { + continue; + } + int i = k - gap; + if (!vertices[i].isConvex) { + continue; + } + if (!visible[i][k]) { + continue; + } + + typeB(i, i + 1, k, vertices, visible, weight, pairs); + for (int j = i + 2; j < k; j++) { + if (vertices[j].isConvex) { + continue; + } + typeB(i, j, k, vertices, visible, weight, pairs); + } + } + } + + // Recover solution (first pass prunes pair-lists to chosen solution). + boolean ok = true; + ArrayDeque diagonals = new ArrayDeque<>(); + diagonals.addFirst(new Diagonal(0, n - 1)); + + while (!diagonals.isEmpty()) { + Diagonal d = diagonals.removeFirst(); + if (d.b - d.a <= 1) { + continue; + } + + DiagonalDeque pd = pairs[d.a][d.b]; + if (pd == null || pd.isEmpty()) { + ok = false; + break; + } + + if (!vertices[d.a].isConvex) { + Diagonal chosen = pd.peekLast(); + int j = chosen.b; + + diagonals.addFirst(new Diagonal(j, d.b)); + if (j - d.a > 1) { + if (chosen.a != chosen.b) { + DiagonalDeque pd2 = pairs[d.a][j]; + while (true) { + if (pd2 == null || pd2.isEmpty()) { + ok = false; + break; + } + Diagonal t = pd2.peekLast(); + if (chosen.a != t.a) { + pd2.removeLast(); + } else { + break; + } + } + if (!ok) { + break; + } + } + diagonals.addFirst(new Diagonal(d.a, j)); + } + } else { + Diagonal chosen = pd.peekFirst(); + int j = chosen.a; + + diagonals.addFirst(new Diagonal(d.a, j)); + if (d.b - j > 1) { + if (chosen.a != chosen.b) { + DiagonalDeque pd2 = pairs[j][d.b]; + while (true) { + if (pd2 == null || pd2.isEmpty()) { + ok = false; + break; + } + Diagonal t = pd2.peekFirst(); + if (chosen.b != t.b) { + pd2.removeFirst(); + } else { + break; + } + } + if (!ok) { + break; + } + } + diagonals.addFirst(new Diagonal(j, d.b)); + } + } + } + + if (!ok) { + throw new IllegalStateException("convexPartition failed to recover a solution."); + } + + // Recover actual polygons (second pass). 
+ List> parts = new ArrayList<>(); + diagonals.clear(); + diagonals.addFirst(new Diagonal(0, n - 1)); + + while (!diagonals.isEmpty()) { + Diagonal root = diagonals.removeFirst(); + if (root.b - root.a <= 1) { + continue; + } + + ArrayDeque diagonals2 = new ArrayDeque<>(); + diagonals2.addFirst(root); + + IntList indices = new IntList(); + indices.add(root.a); + indices.add(root.b); + + while (!diagonals2.isEmpty()) { + Diagonal d = diagonals2.removeFirst(); + if (d.b - d.a <= 1) { + continue; + } + + DiagonalDeque pd = pairs[d.a][d.b]; + + boolean ijReal = true, jkReal = true; + int j; + + if (!vertices[d.a].isConvex) { + Diagonal chosen = pd.peekLast(); + j = chosen.b; + if (chosen.a != chosen.b) { + ijReal = false; + } + } else { + Diagonal chosen = pd.peekFirst(); + j = chosen.a; + if (chosen.a != chosen.b) { + jkReal = false; + } + } + + Diagonal ij = new Diagonal(d.a, j); + if (ijReal) { + diagonals.addLast(ij); + } else { + diagonals2.addLast(ij); + } + + Diagonal jk = new Diagonal(j, d.b); + if (jkReal) { + diagonals.addLast(jk); + } else { + diagonals2.addLast(jk); + } + + indices.add(j); + } + + int[] idx = indices.toSortedArray(); + List poly = new ArrayList<>(idx.length); + for (int v : idx) { + poly.add(vertices[v].p); + } + parts.add(poly); + } + + return parts; + } + + private static void updateState(int a, int b, int w, int i, int j, boolean[][] visible, int[][] weight, DiagonalDeque[][] pairs) { + if (!visible[a][b]) { + return; + } + + int w2 = weight[a][b]; + if (w > w2) { + return; + } + + DiagonalDeque q = pairs[a][b]; + if (q == null) { + q = new DiagonalDeque(); + pairs[a][b] = q; + } + + if (w < w2) { + q.clear(); + q.addFirst(i, j); + weight[a][b] = w; + } else { + if (!q.isEmpty() && i <= q.peekFirst().a) { + return; + } + while (!q.isEmpty() && q.peekFirst().b >= j) { + q.removeFirst(); + } + q.addFirst(i, j); + } + } + + private static void typeA(int i, int j, int k, V[] vertices, boolean[][] visible, int[][] weight, DiagonalDeque[][] pairs) { + if (!visible[i][j]) { + return; + } + + int top = j; + int w = weight[i][j]; + + if (k - j > 1) { + if (!visible[j][k]) { + return; + } + w += weight[j][k] + 1; + } + + if (j - i > 1) { + DiagonalDeque q = pairs[i][j]; + + int lastPos = -1; + if (q != null && !q.isEmpty()) { + for (int pos = q.size() - 1; pos >= 0; pos--) { + int idx2 = q.get(pos).b; + if (!isReflex(vertices[idx2].p, vertices[j].p, vertices[k].p)) { + lastPos = pos; + } else { + break; + } + } + } + + if (lastPos < 0) { + w++; + } else { + int idx1 = q.get(lastPos).a; + if (isReflex(vertices[k].p, vertices[i].p, vertices[idx1].p)) { + w++; + } else { + top = idx1; + } + } + } + + updateState(i, k, w, top, j, visible, weight, pairs); + } + + private static void typeB(int i, int j, int k, V[] vertices, boolean[][] visible, int[][] weight, DiagonalDeque[][] pairs) { + if (!visible[j][k]) { + return; + } + + int top = j; + int w = weight[j][k]; + + if (j - i > 1) { + if (!visible[i][j]) { + return; + } + w += weight[i][j] + 1; + } + + if (k - j > 1) { + DiagonalDeque q = pairs[j][k]; + + if (q != null && !q.isEmpty() && !isReflex(vertices[i].p, vertices[j].p, vertices[q.peekFirst().a].p)) { + + int lastPos = 0; + for (int pos = 0; pos < q.size(); pos++) { + int idx1 = q.get(pos).a; + if (!isReflex(vertices[i].p, vertices[j].p, vertices[idx1].p)) { + lastPos = pos; + } else { + break; + } + } + int idx2 = q.get(lastPos).b; + if (isReflex(vertices[idx2].p, vertices[k].p, vertices[i].p)) { + w++; + } else { + top = idx2; + } + } else { + w++; + } + } + + 
updateState(i, k, w, j, top, visible, weight, pairs); + } + + // -------------------- Geometry primitives (ported) -------------------- + + private static final class V { + final Coordinate p; // treat as immutable (we only create our own) + int prev, next; + boolean isConvex; + + V(Coordinate p) { + this.p = p; + } + } + + private record Diagonal(int a, int b) { + } + + private static boolean isConvex(Coordinate p1, Coordinate p2, Coordinate p3) { + return Orientation.index(p1, p2, p3) == Orientation.COUNTERCLOCKWISE; + } + + private static boolean isReflex(Coordinate p1, Coordinate p2, Coordinate p3) { + return Orientation.index(p1, p2, p3) == Orientation.CLOCKWISE; + } + + private static boolean inCone(V[] vertices, int vi, Coordinate p) { + V v = vertices[vi]; + Coordinate p1 = vertices[v.prev].p; + Coordinate p2 = v.p; + Coordinate p3 = vertices[v.next].p; + return inCone(p1, p2, p3, p); + } + + private static boolean inCone(Coordinate p1, Coordinate p2, Coordinate p3, Coordinate p) { + boolean convex = isConvex(p1, p2, p3); + if (convex) { + if (!isConvex(p1, p2, p)) { + return false; + } + if (!isConvex(p2, p3, p)) { + return false; + } + return true; + } else { + if (isConvex(p1, p2, p)) { + return true; + } + if (isConvex(p2, p3, p)) { + return true; + } + return false; + } + } + + // C++ Intersects() exact port (true if segments intersect, excluding shared + // endpoints). + private static boolean intersects(Coordinate p11, Coordinate p12, Coordinate p21, Coordinate p22) { + if (p11.equals2D(p21) || p11.equals2D(p22) || p12.equals2D(p21) || p12.equals2D(p22)) { + return false; + } + + double v1ortx = p12.y - p11.y; + double v1orty = p11.x - p12.x; + + double v2ortx = p22.y - p21.y; + double v2orty = p21.x - p22.x; + + double vx, vy; + + vx = p21.x - p11.x; + vy = p21.y - p11.y; + double dot21 = vx * v1ortx + vy * v1orty; + + vx = p22.x - p11.x; + vy = p22.y - p11.y; + double dot22 = vx * v1ortx + vy * v1orty; + + vx = p11.x - p21.x; + vy = p11.y - p21.y; + double dot11 = vx * v2ortx + vy * v2orty; + + vx = p12.x - p21.x; + vy = p12.y - p21.y; + double dot12 = vx * v2ortx + vy * v2orty; + + if (dot11 * dot12 > 0) { + return false; + } + if (dot21 * dot22 > 0) { + return false; + } + return true; + } + + private static final class PolyHole { + final boolean hole; + final List pts; // open ring + + PolyHole(boolean hole, List pts) { + this.hole = hole; + this.pts = pts; + } + } + + private static List> removeHoles(Polygon input, PrecisionModel pm) { + List in = new ArrayList<>(); + + List shell = ringToCoords(input.getExteriorRing().getCoordinates(), pm); + shell = cleanupOpenRing(shell); + shell = ensureCCW(shell); + in.add(new PolyHole(false, shell)); + + for (int i = 0; i < input.getNumInteriorRing(); i++) { + List hole = ringToCoords(input.getInteriorRingN(i).getCoordinates(), pm); + hole = cleanupOpenRing(hole); + hole = ensureCW(hole); + in.add(new PolyHole(true, hole)); + } + + boolean hasHoles = false; + for (PolyHole ph : in) { + if (ph.hole) { + hasHoles = true; + break; + } + } + if (!hasHoles) { + return List.of(List.copyOf(shell)); + } + + List polys = new ArrayList<>(in); + + while (true) { + // Find the hole point with the largest x. 
+ int holeIdx = -1; + int holePointIndex = 0; + + for (int pi = 0; pi < polys.size(); pi++) { + PolyHole ph = polys.get(pi); + if (!ph.hole) { + continue; + } + if (holeIdx < 0) { + holeIdx = pi; + holePointIndex = 0; + } + for (int i = 0; i < ph.pts.size(); i++) { + if (ph.pts.get(i).x > polys.get(holeIdx).pts.get(holePointIndex).x) { + holeIdx = pi; + holePointIndex = i; + } + } + } + if (holeIdx < 0) { + break; + } + + Coordinate holePoint = polys.get(holeIdx).pts.get(holePointIndex); + + boolean pointFound = false; + int polyIdx = -1; + int polyPointIndex = -1; + Coordinate bestPolyPoint = null; + + for (int pi = 0; pi < polys.size(); pi++) { + PolyHole ph = polys.get(pi); + if (ph.hole) { + continue; + } + + int m = ph.pts.size(); + for (int i = 0; i < m; i++) { + Coordinate candidate = ph.pts.get(i); + if (candidate.x <= holePoint.x) { + continue; + } + + Coordinate prev = ph.pts.get((i + m - 1) % m); + Coordinate next = ph.pts.get((i + 1) % m); + if (!inCone(prev, candidate, next, holePoint)) { + continue; + } + + if (pointFound) { + + double d1 = holePoint.distanceSq(candidate); + double d2 = holePoint.distanceSq(bestPolyPoint); + if (d2 < d1) { + continue; + } + } + + boolean visible = true; + for (int pj = 0; pj < polys.size() && visible; pj++) { + PolyHole ph2 = polys.get(pj); + if (ph2.hole) { + continue; + } + + int mm = ph2.pts.size(); + for (int e = 0; e < mm; e++) { + Coordinate a = ph2.pts.get(e); + Coordinate b = ph2.pts.get((e + 1) % mm); + if (intersects(holePoint, candidate, a, b)) { + visible = false; + break; + } + } + } + + if (visible) { + pointFound = true; + bestPolyPoint = candidate; + polyIdx = pi; + polyPointIndex = i; + } + } + } + + if (!pointFound) { + throw new IllegalStateException("RemoveHoles failed: no visible bridge found."); + } + + PolyHole hole = polys.get(holeIdx); + PolyHole poly = polys.get(polyIdx); + + List newPts = new ArrayList<>(hole.pts.size() + poly.pts.size() + 2); + for (int i = 0; i <= polyPointIndex; i++) { + newPts.add(poly.pts.get(i)); + } + for (int i = 0; i <= hole.pts.size(); i++) { + newPts.add(hole.pts.get((i + holePointIndex) % hole.pts.size())); + } + for (int i = polyPointIndex; i < poly.pts.size(); i++) { + newPts.add(poly.pts.get(i)); + } + + int a = Math.max(holeIdx, polyIdx); + int b = Math.min(holeIdx, polyIdx); + polys.remove(a); + polys.remove(b); + polys.add(new PolyHole(false, newPts)); + } + + List> out = new ArrayList<>(); + for (PolyHole ph : polys) { + out.add(List.copyOf(ph.pts)); + } + return out; + } + + /** + * Small deque for Diagonal pairs per DP state + */ + private static final class DiagonalDeque { + private int[] a = new int[4]; + private int[] b = new int[4]; + private int head = 0; + private int size = 0; + + boolean isEmpty() { + return size == 0; + } + + int size() { + return size; + } + + void clear() { + head = 0; + size = 0; + } + + Diagonal peekFirst() { + int i = head; + return new Diagonal(a[i], b[i]); + } + + Diagonal peekLast() { + int i = idx(size - 1); + return new Diagonal(a[i], b[i]); + } + + Diagonal get(int pos) { + int i = idx(pos); + return new Diagonal(a[i], b[i]); + } + + void addFirst(int ia, int ib) { + ensureCap(size + 1); + head = (head - 1 + a.length) % a.length; + a[head] = ia; + b[head] = ib; + size++; + } + + void addLast(int ia, int ib) { + ensureCap(size + 1); + int i = idx(size); + a[i] = ia; + b[i] = ib; + size++; + } + + void removeFirst() { + if (size == 0) { + throw new NoSuchElementException(); + } + head = (head + 1) % a.length; + size--; + } + + void removeLast() 
{ + if (size == 0) { + throw new NoSuchElementException(); + } + size--; + } + + private int idx(int pos) { + return (head + pos) % a.length; + } + + private void ensureCap(int cap) { + if (cap <= a.length) { + return; + } + int newCap = Math.max(cap, a.length * 2); + int[] na = new int[newCap]; + int[] nb = new int[newCap]; + for (int i = 0; i < size; i++) { + int j = idx(i); + na[i] = a[j]; + nb[i] = b[j]; + } + a = na; + b = nb; + head = 0; + } + } + + // -------------------- Utilities / JTS interop -------------------- + + private static List ringToCoords(Coordinate[] coords, PrecisionModel pm) { + int len = coords.length; + if (len >= 2 && coords[0].equals2D(coords[len - 1])) { + len--; + } + + List out = new ArrayList<>(len); + for (int i = 0; i < len; i++) { + Coordinate c = new Coordinate(coords[i].x, coords[i].y); + pm.makePrecise(c); + if (c.x == 0.0) { + c.x = 0.0; // normalize -0.0 + } + if (c.y == 0.0) { + c.y = 0.0; + } + out.add(c); + } + return out; + } + + private static Polygon toJTSPolygon(List pts, GeometryFactory gf, PrecisionModel pm) { + List p = removeConsecutiveDuplicates(cleanupOpenRing(pts)); + if (p.size() < 3) { + throw new IllegalArgumentException("Part has < 3 vertices after cleanup"); + } + + p = ensureCCW(p); + + Coordinate[] cs = new Coordinate[p.size() + 1]; + for (int i = 0; i < p.size(); i++) { + Coordinate c = new Coordinate(p.get(i).x, p.get(i).y); + pm.makePrecise(c); + cs[i] = c; + } + cs[p.size()] = new Coordinate(cs[0]); + return gf.createPolygon(cs); + } + + private static List cleanupOpenRing(List pts) { + if (pts.isEmpty()) { + return List.of(); + } + int n = pts.size(); + if (n >= 2 && pts.get(0).equals2D(pts.get(n - 1))) { + return List.copyOf(pts.subList(0, n - 1)); + } + return List.copyOf(pts); + } + + private static List removeConsecutiveDuplicates(List pts) { + if (pts.size() < 2) { + return pts; + } + + List out = new ArrayList<>(pts.size()); + Coordinate prev = null; + for (Coordinate c : pts) { + if (prev == null || !c.equals2D(prev)) { + out.add(c); + } + prev = c; + } + if (out.size() >= 2 && out.get(0).equals2D(out.get(out.size() - 1))) { + out.remove(out.size() - 1); + } + return List.copyOf(out); + } + + private static List ensureCCW(List pts) { + if (signedArea2(pts) < 0.0) { + List rev = new ArrayList<>(pts); + Collections.reverse(rev); + return List.copyOf(rev); + } + return pts; + } + + private static List ensureCW(List pts) { + if (signedArea2(pts) > 0.0) { + List rev = new ArrayList<>(pts); + Collections.reverse(rev); + return List.copyOf(rev); + } + return pts; + } + + private static double signedArea2(List pts) { + double a2 = 0.0; + for (int i = 0, n = pts.size(); i < n; i++) { + Coordinate p = pts.get(i); + Coordinate q = pts.get((i + 1) % n); + a2 += p.x * q.y - p.y * q.x; + } + return a2; + } + + private static boolean isConvexWeakly(List pts) { + int n = pts.size(); + if (n < 3) { + return false; + } + + double a2 = signedArea2(pts); + if (a2 == 0.0) { + return false; + } + boolean ccw = a2 > 0.0; + + for (int i = 0; i < n; i++) { + Coordinate pPrev = pts.get((i + n - 1) % n); + Coordinate p = pts.get(i); + Coordinate pNext = pts.get((i + 1) % n); + + int o = Orientation.index(pPrev, p, pNext); + if (ccw) { + if (o == Orientation.CLOCKWISE) { + return false; + } + } else { + if (o == Orientation.COUNTERCLOCKWISE) { + return false; + } + } + } + return true; + } + + private static final class IntList { + private int[] a = new int[8]; + private int size = 0; + + void add(int v) { + if (size == a.length) { + a = 
Arrays.copyOf(a, a.length * 2); + } + a[size++] = v; + } + + int[] toSortedArray() { + int[] r = Arrays.copyOf(a, size); + Arrays.sort(r); + return r; + } + } +} \ No newline at end of file diff --git a/src/main/java/micycle/pgs/commons/LargestEmptyCircles.java b/src/main/java/micycle/pgs/commons/LargestEmptyCircles.java index 8a713e59..51cf7fa2 100644 --- a/src/main/java/micycle/pgs/commons/LargestEmptyCircles.java +++ b/src/main/java/micycle/pgs/commons/LargestEmptyCircles.java @@ -4,295 +4,293 @@ import java.util.ArrayList; import java.util.Arrays; import java.util.Collection; +import java.util.Deque; import java.util.List; -import org.locationtech.jts.algorithm.locate.PointOnGeometryLocator; import org.locationtech.jts.geom.Coordinate; import org.locationtech.jts.geom.Envelope; import org.locationtech.jts.geom.Geometry; -import org.locationtech.jts.geom.GeometryFactory; -import org.locationtech.jts.geom.Location; import org.locationtech.jts.geom.Point; import org.locationtech.jts.geom.Polygonal; -import org.locationtech.jts.operation.distance.IndexedFacetDistance; -import com.github.micycle1.geoblitz.YStripesPointInAreaLocator; +import com.github.micycle1.geoblitz.PointDistanceIndex; /** - * Adapts {@link org.locationtech.jts.algorithm.construct.LargestEmptyCircle - * LargestEmptyCircle}, allowing for repeated calls to find the N largest empty - * circles in an optimised manner. + * Computes a sequence of largest empty circles (LECs) whose centers + * are constrained to lie within a polygonal {@code boundary} (holes respected). *

      - * In this adaptation circle circumferences are constrained to lie within the - * boundary (originally only circle center points must lie within the boundary). - * This adaption also supports polygonal obstacles; if a boundary is provided, - * circles will not lie within the polygonal obstacles. - * - * @author Martin Davis - * @author Michael Carleton + * The "emptiness" constraint is defined against: + *

+ * <ul>
+ * <li>the boundary rings (outer shell and holes), and</li>
+ * <li>optional {@code obstacles}.</li>
+ * </ul>
+ *
+ * <p>
+ * <b>Obstacles</b>
+ * <p>
+ * The optional {@code obstacles} geometry may contain a mixture of:
+ * <ul>
+ * <li>Polygonal components: treated as excluded regions (additional
+ * holes). They affect the sign of the distance (points inside an obstacle
+ * polygon are considered outside the feasible region).</li>
+ * <li>Linear components (lineal): contribute to the distance target
+ * (circles must not cross them).</li>
+ * <li>Puntal components (pointal): contribute to the distance target
+ * (circles must not cover them).</li>
+ * </ul>
+ *
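+ * <p>
+ * A minimal usage sketch (the square boundary and single point obstacle
+ * below are illustrative values only):
+ * <pre>{@code
+ * GeometryFactory gf = new GeometryFactory();
+ * Geometry boundary = gf.toGeometry(new Envelope(0, 100, 0, 100)); // region circles must lie in
+ * Geometry obstacles = gf.createPoint(new Coordinate(50, 50));     // a point the circles must not cover
+ * LargestEmptyCircles lec = new LargestEmptyCircles(boundary, obstacles, 0.5);
+ * double[][] circles = lec.findLECs(3); // each row is {centerX, centerY, radius}
+ * }</pre>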

+ * <p>
+ * <b>Iteration and reuse</b>
+ * <p>
      Repeated calls to {@link #findNextLEC()} reuse + * and refine a cached set of candidate cells, making successive extractions + * faster than recomputing from scratch. */ public class LargestEmptyCircles { - private final Geometry obstacles; - private final Geometry boundary; + private final Geometry boundary; // polygonal + private final Geometry obstacles; // nullable; may be any Geometry private final double tolerance; - private final GeometryFactory factory; - private PointOnGeometryLocator obstaclesPointLocator; // when obstacles are polygonal - private PointOnGeometryLocator boundsPointLocator; - private IndexedFacetDistance obstacleDistance; - private IndexedFacetDistance boundaryDistance; + private PointDistanceIndex boundaryDistance; + private Envelope gridEnv; private Cell farthestCell; - private ArrayDeque cellStack = new ArrayDeque<>(); - private List nextIterCells = new ArrayList<>(); - private List circles = new ArrayList<>(); + private final Deque cellStack = new ArrayDeque<>(); + private final List nextIterCells = new ArrayList<>(4096); + + // Primitive circle store (x,y,r) + private double[] cx = new double[64]; + private double[] cy = new double[64]; + private double[] cr = new double[64]; + private int circleCount = 0; + + private final Coordinate tmp = new Coordinate(); // scratch /** - * Constructs a new Largest Empty Circles (LEC) instance, ensuring that the - * circles are interior-disjoint to a set of obstacle geometries and (optional) - * contained within a polygonal boundary. - *

- * <ul>
- * <li>If the provided boundary is null and obstacles are linear/pointal, the
- * convex hull of the obstacles is used as the boundary.</li>
- * <li>If the provided boundary is null and obstacles are polygonal, the
- * obstacles themselves form the boundary (effectively function as a "Maximum
- * Inscribed Circles" algorithm.).</li>
- * </ul>
    • - * - * @param obstacles geometry representing the obstacles; if null, the boundary - * is used instead - * @param boundary a polygonal geometry (may be null) - * @param tolerance a positive distance tolerance for computing the circle - * center point - * @throws IllegalArgumentException if the obstacles geometry or the tolerance - * is non-positive + * Creates an instance constrained only by a polygonal boundary. + * + * @param boundary polygonal constraint region (shell and holes are respected) + * @param tolerance accuracy tolerance (> 0) + * @throws IllegalArgumentException if {@code boundary} is null/empty, + * non-polygonal, or if {@code tolerance <= 0} + */ + public LargestEmptyCircles(Geometry boundary, double tolerance) { + this(boundary, null, tolerance); + } + + /** + * Creates an instance constrained by a polygonal boundary and optional + * obstacles. + * + * @param boundary polygonal constraint region (shell and holes are respected) + * @param obstacles optional constraints geometry (may be any {@link Geometry}): + *
+ * <ul>
+ * <li>polygonal components are excluded regions (additional holes)</li>
+ * <li>lineal components contribute distance constraints</li>
+ * <li>puntal components contribute distance constraints</li>
+ * </ul>
      + * May be {@code null} or empty. + * @param tolerance accuracy tolerance (> 0). Smaller values increase work and + * accuracy. + * @throws IllegalArgumentException if {@code boundary} is null/empty, + * non-polygonal, or if {@code tolerance <= 0} */ - public LargestEmptyCircles(Geometry obstacles, Geometry boundary, double tolerance) { - if (obstacles == null || obstacles.isEmpty()) { - throw new IllegalArgumentException("Obstacles geometry is null or empty."); + public LargestEmptyCircles(Geometry boundary, Geometry obstacles, double tolerance) { + if (boundary == null || boundary.isEmpty()) { + throw new IllegalArgumentException("Boundary geometry is null or empty."); } - if (boundary != null && !(boundary instanceof Polygonal)) { - throw new IllegalArgumentException("A non-null boundary must be polygonal."); + if (!(boundary instanceof Polygonal)) { + throw new IllegalArgumentException("Boundary must be polygonal."); } if (tolerance <= 0) { throw new IllegalArgumentException("Accuracy tolerance is non-positive: " + tolerance); } - this.tolerance = tolerance; - - if (obstacles instanceof Polygonal && boundary != null) { - obstaclesPointLocator = new YStripesPointInAreaLocator(obstacles); - } - if (boundary == null || boundary.isEmpty()) { - if (obstacles instanceof Polygonal) { - boundary = obstacles; - } else { - /* - * If no boundary given, use convex hull of obstacles as boundary. - */ - boundary = obstacles.convexHull(); - } - } - - this.obstacles = obstacles; this.boundary = boundary; - this.factory = obstacles.getFactory(); - - /* - * Combine, in case the nearest obstacle is farther away than the nearest - * boundary. (it's faster to make one call on a larger index than 2 separate - * calls to each index). - */ - final Geometry distGeom = obstacles.getFactory().createGeometryCollection(new Geometry[] { obstacles, boundary }); - obstacleDistance = new IndexedFacetDistance(distGeom); -// obstacleDistance = new IndexedFacetDistance(obstacles); + this.obstacles = obstacles; + this.tolerance = tolerance; } private void initBoundary() { gridEnv = boundary.getEnvelopeInternal(); - if (boundary.getDimension() >= 2) { - boundsPointLocator = new YStripesPointInAreaLocator(boundary); - boundaryDistance = new IndexedFacetDistance(boundary); - } + + // Index distance to boundary rings AND obstacle linework. + // Sign is determined by boundary (modified by polygonal obstacles). + boundaryDistance = new PointDistanceIndex(boundary, obstacles); + createInitialGrid(gridEnv, cellStack); } /** - * Computes the signed distance from a point to the constraints (obstacles and - * boundary). Points outside the boundary polygon are assigned a negative - * distance. Their containing cells will be last in the priority queue (but will - * still end up being tested since they may be refined). - * - * @param p the point to compute the distance for - * @return the signed distance to the constraints (negative indicates outside - * the boundary) + * Computes signed distance to the constraint set at the given coordinate. + *

+ * The returned value is:
+ * <ul>
+ * <li>positive if the point lies inside the feasible region (inside
+ * {@code boundary} and outside any polygonal obstacles),</li>
+ * <li>negative if the point lies outside the feasible region,</li>
+ * </ul>
      + * and the magnitude is the distance to the nearest constraining feature, which + * includes: boundary rings, obstacle lineal components, and obstacle puntal + * components. + * + * @param x x-ordinate + * @param y y-ordinate + * @return signed distance to constraints */ - private double distanceToConstraints(Point p) { - Coordinate c = p.getCoordinate(); - boolean isOutsideBounds = boundsPointLocator.locate(c) == Location.EXTERIOR; - if (isOutsideBounds) { - double boundaryDist = boundaryDistance.distance(p); - return -boundaryDist; - } - /* - * If obstacles are polygonal, ensure circles do not lie within their interior. - * Only applies when the given boundary is not null. - */ - double dist = obstacleDistance.distance(p); - if (obstaclesPointLocator != null && (obstaclesPointLocator.locate(c) == Location.INTERIOR)) { - dist = -dist; - } - return dist; - } - private double distanceToConstraints(double x, double y) { - Coordinate coord = new Coordinate(x, y); - Point pt = factory.createPoint(coord); - return distanceToConstraints(pt); + tmp.x = x; + tmp.y = y; + return boundaryDistance.distance(tmp); // signed distance } + /** + * Computes the next {@code n} largest empty circles by repeatedly calling + * {@link #findNextLEC()}. + * + * @param n number of circles to compute + * @return array of circles as {@code [x, y, r]} triples (length {@code n}) + */ public double[][] findLECs(int n) { - double[][] lecs = new double[n][3]; + double[][] out = new double[n][3]; for (int i = 0; i < n; i++) { - lecs[i] = findNextLEC(); + out[i] = findNextLEC(); } - return lecs; + return out; } + /** + * Finds the next largest empty circle and caches it internally. + *

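+ * <p>
+ * A minimal sketch of iterative use (the square boundary is an illustrative
+ * placeholder):
+ * <pre>{@code
+ * Geometry boundary = new GeometryFactory().toGeometry(new Envelope(0, 100, 0, 100));
+ * LargestEmptyCircles lec = new LargestEmptyCircles(boundary, 0.5);
+ * double[] first = lec.findNextLEC();  // {x, y, r} of the largest empty circle
+ * double[] second = lec.findNextLEC(); // the next largest, reusing cached candidate cells
+ * }</pre>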
      + * On the first call, initialises the search structure. On subsequent calls, + * reuses candidate cells and updates them against the most recently found + * circle to avoid recomputing from scratch. + * + * @return the next circle as {@code [x, y, r]} where {@code (x,y)} is the + * center and {@code r} is the radius (signed distance at the selected + * center; typically {@code r > 0}) + */ public double[] findNextLEC() { - double farthestD; + if (gridEnv == null) { // first iteration initBoundary(); - // pick best seed from initial grid instead of only centroid - farthestCell = createCentroidCell(obstacles); + + farthestCell = createCentroidCell(boundary); farthestD = farthestCell.getDistance(); for (Cell c : cellStack) { - if (c.getDistance() > farthestD) { + double d = c.getDistance(); + if (d > farthestD) { + farthestD = d; farthestCell = c; - farthestD = c.getDistance(); } } } else { - // Update remaining candidates with the newly-placed circle - nextIterCells.forEach(c -> c.updateDistance(circles.get(circles.size() - 1))); - cellStack = new ArrayDeque<>(nextIterCells); + // update remaining candidates with newest circle only + final double lastX = cx[circleCount - 1]; + final double lastY = cy[circleCount - 1]; + final double lastR = cr[circleCount - 1]; + + for (Cell nextIterCell : nextIterCells) { + nextIterCell.updateDistance(lastX, lastY, lastR); + } + + cellStack.clear(); + cellStack.addAll(nextIterCells); nextIterCells.clear(); - // Seed best for this iteration from what we already have farthestD = Double.NEGATIVE_INFINITY; for (Cell c : cellStack) { - if (c.getDistance() > farthestD) { + double d = c.getDistance(); + if (d > farthestD) { + farthestD = d; farthestCell = c; - farthestD = c.getDistance(); } } } // Branch-and-bound while (!cellStack.isEmpty()) { - // LIFO pop for DFS-like behavior - Cell cell = cellStack.removeLast(); + Cell cell = cellStack.removeLast(); // DFS-like - if (cell.getDistance() > farthestD) { + double d = cell.getDistance(); + if (d > farthestD) { + farthestD = d; farthestCell = cell; - farthestD = farthestCell.getDistance(); } - /* - * If this cell may contain a better approximation to the center of the empty - * circle, then refine it (partition into subcells which are added into the - * queue for further processing). Otherwise the cell is pruned (not investigated - * further), since no point in it can be further than the current farthest - * distance. - */ - if (!cell.isFullyOutside()) { - /* - * The cell is outside, but overlaps the boundary so it may contain a point - * which should be checked. This is only the case if the potential overlap - * distance is larger than the tolerance. - */ - if (cell.isOutside()) { - boolean isOverlapSignificant = cell.getMaxDistance() > tolerance; - if (isOverlapSignificant) { - enqueueChildren(cell); - } + + if (cell.isFullyOutside()) { + continue; + } + + if (cell.isOutside()) { + if (cell.getMaxDistance() > tolerance) { + enqueueChildren(cell, farthestD); + } + } else { + if (cell.getMaxDistance() - farthestD > tolerance) { + enqueueChildren(cell, farthestD); } else { - /* - * Cell is inside the boundary. It may contain the center if the maximum - * possible distance is greater than the current distance (up to tolerance). 
- */ - double potentialIncrease = cell.getMaxDistance() - farthestD; - if (potentialIncrease > tolerance) { - enqueueChildren(cell); - } else { - nextIterCells.add(cell); - } + nextIterCells.add(cell); } } } - final Cell lecCell = farthestCell; - final double r = lecCell.distance; - final double[] circle = new double[] { lecCell.getX(), lecCell.getY(), r }; - circles.add(circle); + double x = farthestCell.getX(); + double y = farthestCell.getY(); + double r = farthestCell.getDistance(); - return circle; + addCircle(x, y, r); + return new double[] { x, y, r }; } - private void enqueueChildren(final Cell cell) { - final double h2 = cell.getHSide() / 2; - final double parentDist = cell.getDistance(); - final double farthestD = (farthestCell != null) ? farthestCell.getDistance() : Double.NEGATIVE_INFINITY; + private void addCircle(double x, double y, double r) { + if (circleCount == cx.length) { + int n = cx.length << 1; + cx = Arrays.copyOf(cx, n); + cy = Arrays.copyOf(cy, n); + cr = Arrays.copyOf(cr, n); + } + cx[circleCount] = x; + cy[circleCount] = y; + cr[circleCount] = r; + circleCount++; + } - // The max potential increase from parent's center to any point in a child cell - // is dist(parent_center, child_corner) = sqrt((h2)^2 + (h2)^2) = h2*sqrt(2) - // The max distance in a child cell is at its corner, which is h2*sqrt(2) from - // its center. - // So, an upper bound on a child's maxDist is parentDist + 2 * h2 * SQRT2 - double maxChildPotential = parentDist + 2 * h2 * Math.sqrt(2); + private void enqueueChildren(final Cell cell, final double farthestD) { + final double h2 = cell.getHSide() / 2.0; + // optimistic bound for any child of this cell + final double maxChildPotential = cell.getDistance() + 2.0 * h2 * Cell.SQRT2; if (maxChildPotential <= farthestD + tolerance) { - // Even the most optimistic estimate for any child of this cell - // won't beat the current best. So we don't need to subdivide. - // We might still need to keep this cell for the next iteration. nextIterCells.add(cell); return; } - Cell c1 = createCellIfPromising(cell.x - h2, cell.y - h2, h2, farthestD); - Cell c2 = createCellIfPromising(cell.x + h2, cell.y - h2, h2, farthestD); - Cell c3 = createCellIfPromising(cell.x - h2, cell.y + h2, h2, farthestD); - Cell c4 = createCellIfPromising(cell.x + h2, cell.y + h2, h2, farthestD); - - Cell[] kids = new Cell[] { c1, c2, c3, c4 }; - Arrays.sort(kids, (a, b) -> { - if (a == null && b == null) - return 0; - if (a == null) - return -1; // nulls go first - if (b == null) - return 1; - return Double.compare(a.getMaxDistance(), b.getMaxDistance()); - }); - - for (Cell k : kids) { - if (k != null) { - cellStack.addLast(k); - } + // Create 4 kids, push all (no sorting) + Cell c1 = createCellIfUseful(cell.x - h2, cell.y - h2, h2, farthestD); + Cell c2 = createCellIfUseful(cell.x + h2, cell.y - h2, h2, farthestD); + Cell c3 = createCellIfUseful(cell.x - h2, cell.y + h2, h2, farthestD); + Cell c4 = createCellIfUseful(cell.x + h2, cell.y + h2, h2, farthestD); + + if (c1 != null) { + cellStack.addLast(c1); + } + if (c2 != null) { + cellStack.addLast(c2); + } + if (c3 != null) { + cellStack.addLast(c3); + } + if (c4 != null) { + cellStack.addLast(c4); } } - // Helper method to create a cell only if it's worth investigating - private Cell createCellIfPromising(final double x, final double y, final double h, double farthestD) { - // We can't use the Lipschitz bound here because we don't know the parent's - // distance, - // but we can check the cell after creation before adding it. 
- // The main pruning is the check in enqueueChildren. This is a secondary check. + private Cell createCellIfUseful(final double x, final double y, final double h, final double farthestD) { Cell c = createCell(x, y, h); + if (c.getMaxDistance() > farthestD + tolerance) { return c; } - // If not promising, but might be useful for the next LEC search, add it there. + if (!c.isFullyOutside()) { nextIterCells.add(c); } @@ -300,13 +298,9 @@ private Cell createCellIfPromising(final double x, final double y, final double } private void createInitialGrid(Envelope env, Collection target) { - double minX = env.getMinX(); - double maxX = env.getMaxX(); - double minY = env.getMinY(); - double maxY = env.getMaxY(); - double width = env.getWidth(); - double height = env.getHeight(); - double cellSize = Math.min(width, height); + double minX = env.getMinX(), maxX = env.getMaxX(); + double minY = env.getMinY(), maxY = env.getMaxY(); + double cellSize = Math.min(env.getWidth(), env.getHeight()); double hSize = cellSize / 2.0; for (double x = minX; x < maxX; x += cellSize) { @@ -318,72 +312,67 @@ private void createInitialGrid(Envelope env, Collection target) { private Cell createCell(final double x, final double y, final double h) { Cell c = new Cell(x, y, h, distanceToConstraints(x, y)); - c.updateDistance(circles); + c.updateDistanceAll(cx, cy, cr, circleCount); return c; } private Cell createCentroidCell(Geometry geom) { Point p = geom.getCentroid(); - return new Cell(p.getX(), p.getY(), 0, distanceToConstraints(p)); + Cell c = new Cell(p.getX(), p.getY(), 0, distanceToConstraints(p.getX(), p.getY())); + c.updateDistanceAll(cx, cy, cr, circleCount); + return c; } - private static class Cell implements Comparable { + private static final class Cell { + static final double SQRT2 = Math.sqrt(2); - private static final double SQRT2 = 1.4142135623730951; - - private double x; - private double y; - private double hSide; - private double distance; + private final double x, y, hSide; + private double distance; // signed private double maxDist; - Cell(double x, double y, double hSide, double distanceToConstraints) { + Cell(double x, double y, double hSide, double dist) { this.x = x; this.y = y; this.hSide = hSide; - distance = distanceToConstraints; - this.maxDist = distance + hSide * SQRT2; + this.distance = dist; + this.maxDist = dist + hSide * SQRT2; } - // CHANGED: sqrt-free early-rejection for circle; only take sqrt if it might - // improve - public void updateDistance(double[] c) { - final double dx = x - c[0]; - final double dy = y - c[1]; + void updateDistance(double cX, double cY, double cR) { + final double dx = x - cX; + final double dy = y - cY; final double dsq = dx * dx + dy * dy; - final double r = c[2]; double D = distance; - double t = D + r; // improvement only possible if sqrt(dsq) < D + r + double t = D + cR; if (t > 0) { double tsq = t * t; if (dsq < tsq) { - double d = Math.sqrt(dsq) - r; // signed (negative when inside) + double d = Math.sqrt(dsq) - cR; if (d < D) { distance = d; - maxDist = distance + hSide * SQRT2; + maxDist = d + hSide * SQRT2; } } } } - // CHANGED: sqrt-free early-rejection loop over all circles - public void updateDistance(List circles) { + void updateDistanceAll(double[] cx, double[] cy, double[] cr, int n) { double D = distance; - for (double[] c : circles) { - final double r = c[2]; + for (int i = 0; i < n; i++) { + final double r = cr[i]; double t = D + r; if (t <= 0) { - // No circle can improve when D <= -r for this circle continue; } - final double dx = x - 
c[0]; - final double dy = y - c[1]; + + final double dx = x - cx[i]; + final double dy = y - cy[i]; final double dsq = dx * dx + dy * dy; - final double tsq = t * t; + final double tsq = t * t; if (dsq < tsq) { - double d = Math.sqrt(dsq) - r; // signed (negative when inside) + double d = Math.sqrt(dsq) - r; if (d < D) { D = d; } @@ -391,42 +380,36 @@ public void updateDistance(List circles) { } if (D < distance) { distance = D; - maxDist = distance + hSide * SQRT2; + maxDist = D + hSide * SQRT2; } } - public boolean isFullyOutside() { - return getMaxDistance() < 0; + boolean isFullyOutside() { + return maxDist < 0; } - public boolean isOutside() { + boolean isOutside() { return distance < 0; } - public double getMaxDistance() { + double getMaxDistance() { return maxDist; } - public double getDistance() { + double getDistance() { return distance; } - public double getHSide() { + double getHSide() { return hSide; } - public double getX() { + double getX() { return x; } - public double getY() { + double getY() { return y; } - - @Override - public int compareTo(Cell o) { - return (int) (o.maxDist - this.maxDist); - } } - -} +} \ No newline at end of file diff --git a/src/main/java/micycle/pgs/commons/ManhattanVoronoi.java b/src/main/java/micycle/pgs/commons/ManhattanVoronoi.java new file mode 100644 index 00000000..c01c6897 --- /dev/null +++ b/src/main/java/micycle/pgs/commons/ManhattanVoronoi.java @@ -0,0 +1,1080 @@ +package micycle.pgs.commons; + +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.Comparator; +import java.util.IdentityHashMap; +import java.util.List; +import java.util.Set; +import java.util.stream.Collectors; + +import org.locationtech.jts.algorithm.LineIntersector; +import org.locationtech.jts.algorithm.RobustLineIntersector; +import org.locationtech.jts.geom.Coordinate; +import org.locationtech.jts.geom.Envelope; +import org.locationtech.jts.geom.GeometryFactory; +import org.locationtech.jts.geom.Polygon; + +/** + * Computes Manhattan (L1) Voronoi diagrams clipped to an axis-aligned + * rectangular bounding box. + *

      + * This implementation is a Java port of the original JavaScript reference + * implementation ({@code JDragovich/manhattan-voronoi}) and is intended to + * preserve its behavior and structure for correctness and parity. + *

      + * The underlying approach follows the divide-and-conquer scheme described by: + *

+ * Lee, Der-Tsai, and C. K. Wong. "Voronoi Diagrams in L_1 (L_∞)
+ * Metrics with 2-Dimensional Storage Applications." SIAM Journal on
+ * Computing 9, no. 1 (1980): 200–211.
      + * + *

+ * <p>
+ * <b>High-level algorithm (graph construction)</b>
+ * <ol>
+ * <li>Preprocess (optional): nudge input sites slightly to avoid the
+ * degenerate "square bisector" case (|dx| = |dy|).</li>
+ * <li>Sort sites by x-coordinate (breaking ties by y).</li>
+ * <li>Recursive split: recursively divide the sorted sites into left and
+ * right subsets; for base cases, compute and attach the L1 bisector for a pair
+ * of sites.</li>
+ * <li>Merge step:
+ * <ol>
+ * <li>Choose an initial cross-subset bisector between a candidate site from the
+ * left subset and its nearest neighbor in the right subset.</li>
+ * <li>Walk the merge chain upward and downward: repeatedly intersect the
+ * current merge bisector with existing bisectors, trim bisectors at
+ * intersection points, and "hop" to adjacent sites to continue the merge.</li>
+ * <li>Orphan handling: detect and remove bisectors that become trapped
+ * by the newly formed merge boundary; add replacement merge bisectors when
+ * needed.</li>
+ * <li>Attach all merge bisectors to their incident sites and remove bisectors
+ * invalidated by trapping.</li>
+ * </ol>
+ * </li>
+ * <li>Post-process per site: derive an ordered list of boundary vertices
+ * for the clipped cell (and neighbor sites) from the site's incident
+ * bisectors.</li>
+ * </ol>
+ *

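+ * <p>
+ * A minimal usage sketch (the site coordinates and bounds are illustrative,
+ * and {@code generate} is assumed to return a {@code List<Site>}):
+ * <pre>{@code
+ * List<Coordinate> sites = List.of(new Coordinate(10, 20), new Coordinate(70, 35), new Coordinate(40, 80));
+ * List<ManhattanVoronoi.Site> cells = ManhattanVoronoi.generate(sites, new Envelope(0, 100, 0, 100));
+ * GeometryFactory gf = new GeometryFactory();
+ * for (ManhattanVoronoi.Site cell : cells) {
+ *     Polygon p = cell.toPolygon(gf); // clipped L1 cell around cell.site
+ * }
+ * }</pre>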
      + * Note: This class produces a topological graph (sites with incident + * bisectors and neighbor relations) and a set of boundary vertices for each + * cell. + * + * @author Original javascript implementation by Joe Dragovich + * @author Java port by Michael Carleton + */ +public final class ManhattanVoronoi { + + static final LineIntersector li = new RobustLineIntersector(); + + private final double minX; + private final double minY; + private final double maxX; + private final double maxY; + + private ManhattanVoronoi(Envelope bounds) { + if (bounds == null) { + throw new IllegalArgumentException("Bounds must not be null."); + } + this.minX = bounds.getMinX(); + this.minY = bounds.getMinY(); + this.maxX = bounds.getMaxX(); + this.maxY = bounds.getMaxY(); + } + + public static final class Site { + public final Coordinate site; // the actual point + public List bisectors = new ArrayList<>(); + + // outputs filled by generate + public List polygonPoints = new ArrayList<>(); + public List neighbors = new ArrayList<>(); + + public Site(Coordinate site) { + this.site = site; + } + + /** + * Builds a JTS Polygon from this site's polygonPoints. + * + * @param gf geometry factory + */ + public Polygon toPolygon(GeometryFactory gf) { + if (polygonPoints == null || polygonPoints.isEmpty()) { + return gf.createPolygon(); // empty + } + + List pts = new ArrayList<>(polygonPoints.size()); + for (int i = 0; i < polygonPoints.size(); i++) { + Coordinate c = polygonPoints.get(i); + pts.add(c); + } + + // Need at least 3 vertices + if (pts.size() < 3) { + return gf.createPolygon(); + } + + if (pts.size() < 3) { + return gf.createPolygon(); + } + + // Close ring + pts.add(new Coordinate(pts.get(0))); + + return gf.createPolygon(pts.toArray(Coordinate[]::new)); + } + + @Override + public String toString() { + return "Site{" + "site=" + fmt(site) + '}'; + } + } + + public static final class Bisector { + public final Site[] sites = new Site[2]; + public boolean up; // same meaning as JS + public List points = new ArrayList<>(); + public List intersections = new ArrayList<>(); + public boolean compound = false; + public int mergeLine = 0; + + public Bisector(Site a, Site b) { + sites[0] = a; + sites[1] = b; + } + + public boolean sitesContainBoth(Site a, Site b) { + return (sites[0] == a || sites[1] == a) && (sites[0] == b || sites[1] == b); + } + + @Override + public String toString() { + return "Bisector{" + "sites=" + sites[0] + "," + sites[1] + ", up=" + up + ", points=" + + points.stream().map(ManhattanVoronoi::fmt).collect(Collectors.toList()) + '}'; + } + } + + public static List generate(Collection sitePoints, double width, double height) { + return generate(sitePoints, new Envelope(0, width, 0, height)); + } + + public static List generate(Collection sitePoints, double width, double height, boolean nudgeData) { + return generate(sitePoints, new Envelope(0, width, 0, height), nudgeData); + } + + public static List generate(Collection sitePoints, Envelope bounds) { + return generate(sitePoints, bounds, true); + } + + public static List generate(Collection sitePoints, Envelope bounds, boolean nudgeData) { + return new ManhattanVoronoi(bounds).generate(sitePoints, nudgeData); + } + + private List generate(Collection sitePoints, boolean nudgeData) { + final int n = sitePoints.size(); + + List points = sitePoints.stream().map(Coordinate::copy).collect(Collectors.toList()); + + if (nudgeData) { + cleanData(points); + } + + // Sort points by x then y + points.sort((a, b) -> { + int cx = 
Double.compare(a.x, b.x); + return (cx != 0) ? cx : Double.compare(a.y, b.y); + }); + + // Create sites into an array (faster for in-place range recursion) + Site[] sites = new Site[n]; + for (int i = 0; i < n; i++) { + sites[i] = new Site(points.get(i)); + } + + // Build graph in-place (no BiFunction, no list slicing/copies) + recursiveSplit(sites, 0, n); + + List graph = new ArrayList<>(n); + Collections.addAll(graph, sites); + + postProcessSites(graph); + return graph; + } + + private void postProcessSites(List graph) { + // Pre-create corners once + final Coordinate[] corners = new Coordinate[] { new Coordinate(minX, minY), new Coordinate(maxX, minY), new Coordinate(maxX, maxY), + new Coordinate(minX, maxY) }; + + graph.parallelStream().forEach(site -> { + buildPolygonPointsByChaining(site); + injectCornersIfNeeded(site, corners); + + if (!site.polygonPoints.isEmpty()) { + // Sort around site + site.polygonPoints.sort((p1, p2) -> Double.compare(angle(site.site, p1), angle(site.site, p2))); + } +// computeNeighbors(site); // NOTE + }); + } + + private void recursiveSplit(Site[] sites, int from, int to) { + final int size = to - from; + + if (size > 2) { + final int half = (size - (size % 2)) / 2; + final int splitPoint = from + half; + + // recurse + recursiveSplit(sites, from, splitPoint); + recursiveSplit(sites, splitPoint, to); + + // working sites + final Site lLast = sites[splitPoint - 1]; + + // Find nearest neighbor in the right half WITHOUT sorting the entire R + Site nearest = sites[splitPoint]; + double best = distance(lLast.site, nearest.site); + for (int i = splitPoint + 1; i < to; i++) { + Site s = sites[i]; + double d = distance(lLast.site, s.site); + if (d < best) { + best = d; + nearest = s; + } + } + + StartingInfo startingInfo = determineStartingBisector(lLast, nearest, null); + + Bisector initialBisector = startingInfo.startingBisector; + Site initialR = startingInfo.nearestNeighbor; + Site initialL = startingInfo.w; + + // Single merge list; walkMergeLine appends as needed + List mergeArray = new ArrayList<>(); + mergeArray.add(initialBisector); + + walkMergeLine(initialR, initialL, initialBisector, new Coordinate(maxX, maxY), true, null, mergeArray); + walkMergeLine(initialR, initialL, initialBisector, new Coordinate(minX, minY), false, null, mergeArray); + + // attach merge bisectors + for (int i = 0; i < mergeArray.size(); i++) { + Bisector bisector = mergeArray.get(i); + + bisector.mergeLine = size; + + bisector.sites[0].bisectors = clearOutOrphans(bisector.sites[0], bisector.sites[1]); + bisector.sites[1].bisectors = clearOutOrphans(bisector.sites[1], bisector.sites[0]); + + bisector.sites[0].bisectors.add(bisector); + bisector.sites[1].bisectors.add(bisector); + } + + } else if (size == 2) { + Bisector bisector = findL1Bisector(sites[from], sites[from + 1]); + sites[from].bisectors.add(bisector); + sites[from + 1].bisectors.add(bisector); + } else { + // size 0/1: nothing to do + } + } + + private void walkMergeLine(Site currentR, Site currentL, Bisector currentBisector, Coordinate currentCropPoint, boolean goUp, Bisector crossedBorder, + List mergeArray) { + while (true) { + + // ensure bisector matches current sites; if not, create and trim + if (!currentBisector.sitesContainBoth(currentR, currentL)) { + currentBisector = findL1Bisector(currentR, currentL); + trimBisector(currentBisector, crossedBorder, currentCropPoint); + mergeArray.add(currentBisector); + } + + List cropLArray = buildCropCandidatesForSide(currentL, currentBisector, currentR, 
currentCropPoint, goUp, crossedBorder, true); + + List cropRArray = buildCropCandidatesForSide(currentR, currentBisector, currentL, currentCropPoint, goUp, crossedBorder, false); + + CropCandidate cropL = (!cropLArray.isEmpty() && cropLArray.get(0).bisector != currentBisector) ? cropLArray.get(0) + : new CropCandidate(null, goUp ? new Coordinate(Double.POSITIVE_INFINITY, Double.POSITIVE_INFINITY) + : new Coordinate(Double.NEGATIVE_INFINITY, Double.NEGATIVE_INFINITY)); + + CropCandidate cropR = (!cropRArray.isEmpty() && cropRArray.get(0).bisector != currentBisector) ? cropRArray.get(0) + : new CropCandidate(null, goUp ? new Coordinate(Double.POSITIVE_INFINITY, Double.POSITIVE_INFINITY) + : new Coordinate(Double.NEGATIVE_INFINITY, Double.NEGATIVE_INFINITY)); + + // done? + if (cropL.bisector == null && cropR.bisector == null) { + + Bisector leftOrphan = checkForOphans(currentR, currentL, goUp); + Bisector rightOrphan = checkForOphans(currentL, currentR, goUp); + + if (leftOrphan != null) { + // Remove trapped bisector + for (Site s : leftOrphan.sites) { + s.bisectors.removeIf(b -> b == leftOrphan); + } + + Site hopTo = findHopTo(leftOrphan, currentL); + currentR = findCorrectW(currentR, hopTo); + + Bisector newMergeBisector = findL1Bisector(hopTo, currentR); + mergeArray.add(newMergeBisector); + + // continue with updated state + currentL = hopTo; + currentBisector = newMergeBisector; + continue; + + } else if (rightOrphan != null) { + for (Site s : rightOrphan.sites) { + s.bisectors.removeIf(b -> b == rightOrphan); + } + + Site hopTo = findHopTo(rightOrphan, currentR); + currentL = findCorrectW(currentL, hopTo); + + Bisector newMergeBisector = findL1Bisector(hopTo, currentL); + mergeArray.add(newMergeBisector); + + // continue with updated state + currentR = hopTo; + currentBisector = newMergeBisector; + continue; + } + + return; // finished + } + + FirstBorderCross first = determineFirstBorderCross(cropR, cropL, currentCropPoint); + + if (first == FirstBorderCross.RIGHT) { + trimBisector(cropR.bisector, currentBisector, cropR.point); + trimBisector(currentBisector, cropR.bisector, cropR.point); + currentBisector.intersections.add(cropR.point); + + crossedBorder = cropR.bisector; + currentR = findOtherSite(cropR.bisector, currentR); + currentCropPoint = cropR.point; + + } else if (first == FirstBorderCross.LEFT) { + trimBisector(cropL.bisector, currentBisector, cropL.point); + trimBisector(currentBisector, cropL.bisector, cropL.point); + currentBisector.intersections.add(cropL.point); + + crossedBorder = cropL.bisector; + currentL = findOtherSite(cropL.bisector, currentL); + currentCropPoint = cropL.point; + + } else { // BOTH + trimBisector(cropR.bisector, currentBisector, cropR.point); + trimBisector(currentBisector, cropR.bisector, cropR.point); + currentBisector.intersections.add(cropR.point); + + crossedBorder = cropR.bisector; + currentR = findOtherSite(cropR.bisector, currentR); + currentCropPoint = cropR.point; + + trimBisector(cropL.bisector, currentBisector, cropL.point); + trimBisector(currentBisector, cropL.bisector, cropL.point); + currentBisector.intersections.add(cropL.point); + + crossedBorder = cropL.bisector; + currentL = findOtherSite(cropL.bisector, currentL); + currentCropPoint = cropL.point; + } + } + } + + private StartingInfo determineStartingBisector(Site w, Site nearestNeighbor, Coordinate lastIntersect) { + while (true) { + if (lastIntersect == null) { + lastIntersect = w.site; + } + + // horizontal ray to the right boundary + final Coordinate z = new 
Coordinate(maxX, w.site.y); + + IntersectionHit hit = null; + for (Bisector b : nearestNeighbor.bisectors) { + Coordinate p = segmentBisectorIntersection(w.site, z, b); + if (p != null) { + hit = new IntersectionHit(p, b); + break; + } + } + + if (hit != null && distance(w.site, hit.point) > distance(nearestNeighbor.site, hit.point)) { + Bisector startingBisector = findL1Bisector(w, nearestNeighbor); + return new StartingInfo(startingBisector, w, nearestNeighbor, hit.point); + + } else if (hit != null && distance(w.site, hit.point) < distance(nearestNeighbor.site, hit.point) && hit.point.x > lastIntersect.x) { + + nearestNeighbor = findOtherSite(hit.bisector, nearestNeighbor); + lastIntersect = hit.point; + continue; + + } else { + w = findCorrectW(w, nearestNeighbor); + Bisector startingBisector = findL1Bisector(w, nearestNeighbor); + return new StartingInfo(startingBisector, w, nearestNeighbor, hit != null ? hit.point : w.site); + } + } + } + + private static Coordinate segmentBisectorIntersection(Coordinate s0, Coordinate s1, Bisector b) { + List pts = b.points; + for (int i = 0; i < pts.size() - 1; i++) { + Coordinate p0 = pts.get(i); + Coordinate p1 = pts.get(i + 1); + Coordinate ip = segmentIntersection(s0, s1, p0, p1); + if (ip != null) { + return ip; + } + } + return null; + } + + private Site findCorrectW(Site w, Site nearestNeighbor) { + while (true) { + Bisector startingBisector = findL1Bisector(w, nearestNeighbor); + + Site bestHop = null; + double bestDist = Double.POSITIVE_INFINITY; + + // find closest hopTo that traps the starting bisector + List wb = w.bisectors; + for (int i = 0; i < wb.size(); i++) { + Bisector b = wb.get(i); + Site hopTo = findHopTo(b, w); + + if (isBisectorTrapped(hopTo, startingBisector)) { + double d = distance(hopTo.site, nearestNeighbor.site); + if (d < bestDist) { + bestDist = d; + bestHop = hopTo; + } + } + } + + if (bestHop == null) { + return w; + } + + w = bestHop; + } + } + + private Bisector checkForOphans(Site trapper, Site trapped, boolean goUp) { + Bisector bestBisector = null; + double bestExtreme = goUp ? 
Double.NEGATIVE_INFINITY : Double.POSITIVE_INFINITY; + + List tb = trapped.bisectors; + for (int i = 0; i < tb.size(); i++) { + Bisector b = tb.get(i); + + Site hopTo = findHopTo(b, trapped); + boolean directionOk = (goUp == (hopTo.site.y < trapped.site.y)); + if (!directionOk) { + continue; + } + + if (!isBisectorTrapped(trapper, b)) { + continue; + } + + Bisector mergeLine = findL1Bisector(hopTo, trapper); + double extreme = getExtremePoint(mergeLine, goUp); + + if (goUp) { + if (extreme > bestExtreme) { + bestExtreme = extreme; + bestBisector = b; + } + } else { + if (extreme < bestExtreme) { + bestExtreme = extreme; + bestBisector = b; + } + } + } + + return bestBisector; + } + + private List buildCropCandidatesForSide(Site sideSite, Bisector currentBisector, Site otherMergeSite, Coordinate currentCropPoint, + boolean goUp, Bisector crossedBorder, boolean isLeftSide) { + // Build candidate list + precompute hopTo for each candidate + List candidates = new ArrayList<>(); + List hopTos = new ArrayList<>(); + + List bis = sideSite.bisectors; + for (int i = 0; i < bis.size(); i++) { + Bisector b = bis.get(i); + + Coordinate p = bisectorIntersection(currentBisector, b); + if (p == null) { + continue; + } + + Site hopTo = findHopTo(b, sideSite); + boolean upward = isNewBisectorUpward(hopTo, sideSite, otherMergeSite, goUp); + + boolean sameCropAndSameBorder = samePoint(p, currentCropPoint) && b == crossedBorder; + if ((goUp == upward) && !sameCropAndSameBorder) { + candidates.add(new CropCandidate(b, p)); + hopTos.add(hopTo); + } + } + + // Sort stage (matches original JS directionality) + candidates.sort((a, b) -> { + if (isLeftSide) { + Site hopToB = findHopTo(b.bisector, sideSite); + Site hopToA = findHopTo(a.bisector, sideSite); + return Double.compare(angle(sideSite.site, hopToB.site), angle(sideSite.site, hopToA.site)); + } else { + Site hopToA = findHopTo(a.bisector, sideSite); + Site hopToB = findHopTo(b.bisector, sideSite); + return Double.compare(angle(sideSite.site, hopToA.site), angle(sideSite.site, hopToB.site)); + } + }); + + // Rebuild hopTos in the same order as candidates (since we sorted candidates) + hopTos.clear(); + for (int i = 0; i < candidates.size(); i++) { + hopTos.add(findHopTo(candidates.get(i).bisector, sideSite)); + } + + // JS-like "every(...)" filter (still O(n^2), but fewer repeated calls) + List filtered = new ArrayList<>(); + for (int i = 0; i < candidates.size(); i++) { + CropCandidate e = candidates.get(i); + Site hopTo = hopTos.get(i); + + Bisector newMergeLine = findL1Bisector(otherMergeSite, hopTo); + trimBisector(newMergeLine, e.bisector, e.point); + + boolean ok = true; + for (int j = 0; j < candidates.size(); j++) { + Site hopToD = hopTos.get(j); + if (hopToD == hopTo) { + continue; + } + if (isBisectorTrapped(hopToD, newMergeLine)) { + ok = false; + break; + } + } + + if (ok) { + filtered.add(e); + } + } + + return filtered; + } + + private void buildPolygonPointsByChaining(Site site) { + + if (site.bisectors.isEmpty()) { + site.polygonPoints = new ArrayList<>(); + return; + } + + Bisector startBisector = findStartBisectorOnEdge(site); + if (startBisector == null) { + startBisector = site.bisectors.get(0); + } + + // used set: identity semantics + Set used = Collections.newSetFromMap(new IdentityHashMap<>()); + used.add(startBisector); + + List polygon = new ArrayList<>(startBisector.points.size()); + polygon.addAll(startBisector.points); + + // reverse if last point is on edge (matches JS) + if (!polygon.isEmpty() && 
isPointOnEdge(polygon.get(polygon.size() - 1))) { + Collections.reverse(polygon); + } + + // chain remaining bisectors + while (used.size() < site.bisectors.size()) { + Coordinate last = polygon.get(polygon.size() - 1); + + Bisector next = null; + double bestDist = Double.POSITIVE_INFINITY; + + for (Bisector candidate : site.bisectors) { + if (used.contains(candidate)) { + continue; + } + + Coordinate a = candidate.points.get(0); + Coordinate b = candidate.points.get(candidate.points.size() - 1); + double candDist = Math.min(distance(last, a), distance(last, b)); + + if (candDist < bestDist) { + bestDist = candDist; + next = candidate; + } + } + + if (next == null) { + break; // JS assumes exists; keep safe + } + used.add(next); + + // append points, reversing if needed + List nextPts = new ArrayList<>(next.points); + if (!nextPts.isEmpty() && samePoint(nextPts.get(nextPts.size() - 1), last)) { + Collections.reverse(nextPts); + } + polygon.addAll(nextPts); + } + + site.polygonPoints = polygon; + } + + private Bisector findStartBisectorOnEdge(Site site) { + for (Bisector b : site.bisectors) { + for (Coordinate element : b.points) { + if (isPointOnEdge(element)) { + return b; + } + } + } + return null; + } + + private void injectCornersIfNeeded(Site site, Coordinate[] corners) { + if (site.polygonPoints.isEmpty()) { + return; + } + + Coordinate first = site.polygonPoints.get(0); + Coordinate last = site.polygonPoints.get(site.polygonPoints.size() - 1); + + if (!(isPointOnEdge(first) && isPointOnEdge(last) && !arePointsOnSameEdge(first, last))) { + return; + } + + List filteredCorners = new ArrayList<>(4); + + for (Coordinate corner : corners) { + boolean ok = true; + for (Bisector b : site.bisectors) { + // you already have this helper + if (segmentIntersectsBisector(corner, site.site, b)) { + ok = false; + break; + } + } + + if (ok) { + filteredCorners.add(new Coordinate(corner)); + } + } + + if (!filteredCorners.isEmpty()) { + site.polygonPoints.addAll(filteredCorners); + } + } + + private static void computeNeighbors(Site site) { + if (site.bisectors.isEmpty()) { + site.neighbors = new ArrayList<>(); + return; + } + + List neigh = new ArrayList<>(site.bisectors.size()); + for (int bi = 0; bi < site.bisectors.size(); bi++) { + neigh.add(findHopTo(site.bisectors.get(bi), site)); + } + site.neighbors = neigh; + } + + private static boolean segmentIntersectsBisector(Coordinate s0, Coordinate s1, Bisector b) { + // Intersect the segment (s0->s1) with every segment of the bisector polyline + List pts = b.points; + for (int i = 0; i < pts.size() - 1; i++) { + Coordinate p0 = pts.get(i); + Coordinate p1 = pts.get(i + 1); + if (segmentIntersection(s0, s1, p0, p1) != null) { + return true; + } + } + return false; + } + + /** + * Nudge points to hopefully eliminate square bisectors. Mutates the given + * coordinates, matching JS behavior. + */ + public static List cleanData(List data) { + final double epsOffset = 1e-10; + for (int i = 0; i < data.size(); i++) { + Coordinate e = data.get(i); + for (int j = 0; j < data.size(); j++) { + Coordinate d = data.get(j); + if (i != j && Math.abs(d.x - e.x) == Math.abs(d.y - e.y)) { + d.x += epsOffset; + d.y -= epsOffset; + } + } + } + return data; + } + + private static Site findOtherSite(Bisector bisector, Site current) { + return bisector.sites[0] == current ? 
bisector.sites[1] : bisector.sites[0]; + } + + private static double angle(Coordinate p1, Coordinate p2) { + double ang = FastAtan2.atan2(p2.y - p1.y, p2.x - p1.x); + if (ang < 0) { + ang = Math.PI + Math.PI + ang; + } + return ang; + } + + private enum FirstBorderCross { + RIGHT, LEFT, BOTH + } + + private static FirstBorderCross determineFirstBorderCross(CropCandidate cropR, CropCandidate cropL, Coordinate currentCropPoint) { + double dr = Math.abs(cropR.point.y - currentCropPoint.y); + double dl = Math.abs(cropL.point.y - currentCropPoint.y); + + if (dr == dl) { + return FirstBorderCross.BOTH; + } + return (dr < dl) ? FirstBorderCross.RIGHT : FirstBorderCross.LEFT; + } + + private record StartingInfo(Bisector startingBisector, Site w, Site nearestNeighbor, Coordinate startingIntersection) { + } + + private record IntersectionHit(Coordinate point, Bisector bisector) { + } + + /** + * Hyperoptimized findL1Bisector - eliminates allocations and redundant + * operations. + */ + private Bisector findL1Bisector(Site P1, Site P2) { + final double p1x = P1.site.x; + final double p1y = P1.site.y; + final double p2x = P2.site.x; + final double p2y = P2.site.y; + + final double xDistance = p1x - p2x; + final double yDistance = p1y - p2y; + + if (p1x == p2x && p1y == p2y) { + throw new IllegalArgumentException("Duplicate point: Points " + P1 + " and " + P2 + " are duplicates."); + } + + final double absX = Math.abs(xDistance); + final double absY = Math.abs(yDistance); + + // Square bisector check + if (absX == absY) { + throw new IllegalArgumentException("Square bisector: Points " + P1 + " and " + P2 + + " are points on a square (their vertical distance equals horizontal distance). Consider nudgeData."); + } + + Bisector bisector = new Bisector(P1, P2); + + // Fast path: vertical line (xDistance == 0) + if (absX == 0) { + final double midY = (p1y + p2y) * 0.5; + bisector.up = false; + bisector.points = List.of(new Coordinate(minX, midY), new Coordinate(maxX, midY)); + return bisector; + } + + // Fast path: horizontal line (yDistance == 0) + if (absY == 0) { + final double midX = (p1x + p2x) * 0.5; + bisector.up = true; + bisector.points = List.of(new Coordinate(midX, minY), new Coordinate(midX, maxY)); + return bisector; + } + + // Pre-compute midpoint (used in both branches) + final double midX = (p1x + p2x) * 0.5; + final double midY = (p1y + p2y) * 0.5; + + // Determine slope + final double slope = (yDistance * xDistance > 0) ? 
-1.0 : 1.0; + final double intercept = midY - midX * slope; + + // Branch based on dominant direction + if (absX >= absY) { + // Horizontal-dominant (up = true) + bisector.up = true; + + // Compute vertices directly in sorted order + final double v1x = (p1y - intercept) * slope; // slope is ±1, so division becomes multiplication + final double v2x = (p2y - intercept) * slope; + + // Determine order based on y-coordinates + if (p1y < p2y) { + // p1y is lower, so v1 comes first + bisector.points = List.of(new Coordinate(v1x, minY), new Coordinate(v1x, p1y), new Coordinate(v2x, p2y), new Coordinate(v2x, maxY)); + } else { + // p2y is lower, so v2 comes first + bisector.points = List.of(new Coordinate(v2x, minY), new Coordinate(v2x, p2y), new Coordinate(v1x, p1y), new Coordinate(v1x, maxY)); + } + + } else { + // Vertical-dominant (up = false) + bisector.up = false; + + // Compute vertices directly in sorted order + final double v1y = p1x * slope + intercept; + final double v2y = p2x * slope + intercept; + + // Determine order based on x-coordinates + if (p1x < p2x) { + // p1x is leftmost, so v1 comes first + bisector.points = List.of(new Coordinate(minX, v1y), new Coordinate(p1x, v1y), new Coordinate(p2x, v2y), new Coordinate(maxX, v2y)); + } else { + // p2x is leftmost, so v2 comes first + bisector.points = List.of(new Coordinate(minX, v2y), new Coordinate(p2x, v2y), new Coordinate(p1x, v1y), new Coordinate(maxX, v1y)); + } + } + + return bisector; + } + + private static List clearOutOrphans(Site orphanage, Site trapPoint) { + orphanage.bisectors.removeIf(b -> isBisectorTrapped(trapPoint, b)); + return orphanage.bisectors; + } + + private static Site findHopTo(Bisector bisector, Site hopFrom) { + return bisector.sites[0] == hopFrom ? bisector.sites[1] : bisector.sites[0]; + } + + /** Manhattan (L1) distance. */ + public static double distance(final Coordinate p1, final Coordinate p2) { + return Math.abs(p1.x - p2.x) + Math.abs(p1.y - p2.y); + } + + private static boolean isBisectorTrapped(Site trapPoint, Bisector bisector) { + for (Coordinate point : bisector.points) { + double dTrap = distance(trapPoint.site, point); + if (!(dTrap <= distance(bisector.sites[0].site, point) && dTrap <= distance(bisector.sites[1].site, point))) { + return false; + } + } + return true; + } + + private static double getExtremePoint(Bisector bisector, boolean goUp) { + double acc = goUp ? Double.NEGATIVE_INFINITY : Double.POSITIVE_INFINITY; + for (Coordinate e : bisector.points) { + acc = goUp ? 
Math.max(e.y, acc) : Math.min(e.y, acc); + } + return acc; + } + + private static void trimBisector(Bisector target, Bisector intersector, Coordinate intersection) { + if (intersector == null) { + return; + } + + // Find the "polygon site" (the intersector site not in target.sites) + Site polygonSite = null; + Site t0 = target.sites[0]; + Site t1 = target.sites[1]; + + Site i0 = intersector.sites[0]; + Site i1 = intersector.sites[1]; + + if (i0 != t0 && i0 != t1) { + polygonSite = i0; + } else if (i1 != t0 && i1 != t1) { + polygonSite = i1; + } else { + return; + } + + final Coordinate poly = polygonSite.site; + + List src = target.points; + List newPoints = new ArrayList<>(src.size() + 1); + + final Coordinate s0 = t0.site; + final Coordinate s1 = t1.site; + + for (int i = 0; i < src.size(); i++) { + Coordinate p = src.get(i); + + // keep point if it's closer to both target sites than to polygonSite + if (distance(p, s0) < distance(p, poly) && distance(p, s1) < distance(p, poly)) { + newPoints.add(p); + } + } + + newPoints.add(new Coordinate(intersection.x, intersection.y)); + + if (target.up) { + newPoints.sort(Comparator.comparingDouble(c -> c.y)); + } else { + newPoints.sort(Comparator.comparingDouble(c -> c.x)); + } + + target.points = newPoints; + } + + private record CropCandidate(Bisector bisector, Coordinate point) { + } + + private static boolean isNewBisectorUpward(Site hopTo, Site hopFrom, Site site, boolean goUpUnused) { + double slope = (hopTo.site.y - site.site.y) / (hopTo.site.x - site.site.x); + double intercept = hopTo.site.y - (slope * hopTo.site.x); + + if (Double.isInfinite(slope)) { + return site.site.y > hopTo.site.y; + } + + return hopFrom.site.y > (slope * hopFrom.site.x) + intercept; + } + + private static Coordinate bisectorIntersection(Bisector b1, Bisector b2) { + if (b1 == b2) { + return null; + } + + final List pts1 = b1.points; + final List pts2 = b2.points; + final int size1 = pts1.size(); + final int size2 = pts2.size(); + + // Early exit for empty bisectors + if (size1 < 2 || size2 < 2) { + return null; + } + + // Bounding box check + final Coordinate p1_0 = pts1.get(0); + final Coordinate p1_last = pts1.get(size1 - 1); + final Coordinate p2_0 = pts2.get(0); + final Coordinate p2_last = pts2.get(size2 - 1); + + final double b1_minX = Math.min(p1_0.x, p1_last.x); + final double b1_maxX = Math.max(p1_0.x, p1_last.x); + final double b1_minY = Math.min(p1_0.y, p1_last.y); + final double b1_maxY = Math.max(p1_0.y, p1_last.y); + + final double b2_minX = Math.min(p2_0.x, p2_last.x); + final double b2_maxX = Math.max(p2_0.x, p2_last.x); + final double b2_minY = Math.min(p2_0.y, p2_last.y); + final double b2_maxY = Math.max(p2_0.y, p2_last.y); + + // No overlap = no intersection + if (b1_maxX < b2_minX || b2_maxX < b1_minX || b1_maxY < b2_minY || b2_maxY < b1_minY) { + return null; + } + + // Cache segment count (avoid repeated size() - 1) + final int seg1Count = size1 - 1; + final int seg2Count = size2 - 1; + + // Cache-friendly iteration with segment bounding box tests + for (int i = 0; i < seg1Count; i++) { + final Coordinate a0 = pts1.get(i); + final Coordinate a1 = pts1.get(i + 1); + + // Segment 1 bounds (extract to locals for cache efficiency) + final double a_minX = Math.min(a0.x, a1.x); + final double a_maxX = Math.max(a0.x, a1.x); + final double a_minY = Math.min(a0.y, a1.y); + final double a_maxY = Math.max(a0.y, a1.y); + + for (int j = 0; j < seg2Count; j++) { + final Coordinate b0 = pts2.get(j); + final Coordinate bb1 = pts2.get(j + 1); + + // Cheap 
segment bounding box test (eliminates 90% of intersection checks) + final double b_minX = Math.min(b0.x, bb1.x); + final double b_maxX = Math.max(b0.x, bb1.x); + + if (a_maxX < b_minX || b_maxX < a_minX) { + continue; + } + + final double b_minY = Math.min(b0.y, bb1.y); + final double b_maxY = Math.max(b0.y, bb1.y); + + if (a_maxY < b_minY || b_maxY < a_minY) { + continue; + } + + // Bounding boxes overlap - now do the expensive intersection test + Coordinate intersect = segmentIntersection(a0, a1, b0, bb1); + if (intersect != null) { + return intersect; // Early termination on first intersection + } + } + } + + return null; + } + + /** + * Segment intersection ported from JS segementIntersection. - denom == 0 => + * null (JS returned null for parallel/collinear) - if no intersection within + * segment bounds => null (JS returned false) + */ + private static Coordinate segmentIntersection(final Coordinate l10, final Coordinate l11, final Coordinate l20, final Coordinate l21) { + final double denom = (l21.y - l20.y) * (l11.x - l10.x) - (l21.x - l20.x) * (l11.y - l10.y); + if (denom == 0) { + return null; + } + + final double ua = ((l21.x - l20.x) * (l10.y - l20.y) - (l21.y - l20.y) * (l10.x - l20.x)) / denom; + final double ub = ((l11.x - l10.x) * (l10.y - l20.y) - (l11.y - l10.y) * (l10.x - l20.x)) / denom; + + if (!(ua >= 0 && ua <= 1 && ub >= 0 && ub <= 1)) { + return null; + } + + return new Coordinate(l10.x + ua * (l11.x - l10.x), l10.y + ua * (l11.y - l10.y)); + } + + private static Coordinate segmentIntersectionRobust(Coordinate a0, Coordinate a1, Coordinate b0, Coordinate b1) { + li.computeIntersection(a0, a1, b0, b1); + + if (!li.hasIntersection()) { + return null; + } + if (li.getIntersectionNum() != 1) { + return null; + } + + Coordinate ip = li.getIntersection(0); + return new Coordinate(ip.x, ip.y); + } + + private static boolean samePoint(final Coordinate p1, final Coordinate p2) { + return p1.x == p2.x && p1.y == p2.y; + } + + private boolean isPointOnEdge(final Coordinate p) { + return p.x == minX || p.x == maxX || p.y == minY || p.y == maxY; + } + + private boolean arePointsOnSameEdge(final Coordinate p1, final Coordinate p2) { + return (p1.x == p2.x && (p1.x == minX || p1.x == maxX)) || (p1.y == p2.y && (p1.y == minY || p1.y == maxY)); + } + + private static String fmt(final Coordinate c) { + return "[" + c.x + "," + c.y + "]"; + } +} \ No newline at end of file diff --git a/src/main/java/micycle/pgs/commons/MarchingSquares.java b/src/main/java/micycle/pgs/commons/MarchingSquares.java new file mode 100644 index 00000000..f3ee2dbc --- /dev/null +++ b/src/main/java/micycle/pgs/commons/MarchingSquares.java @@ -0,0 +1,821 @@ +package micycle.pgs.commons; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; +import java.util.function.DoubleBinaryOperator; +import java.util.stream.Collectors; +import java.util.stream.IntStream; + +import processing.core.PConstants; +import processing.core.PShape; + +/** + * Fast isolines from a regular grid using Marching Squares + contour tracing. + */ +public final class MarchingSquares { + + private MarchingSquares() { + } + + /** + * Builds isolines by sampling a regular grid over a rectangle and evaluating a + * user-provided height function z=f(x,y) at each sample. 
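+ * <p>
+ * A minimal sketch (the bounds, spacings and height function below are
+ * illustrative; the result is assumed to map each contour {@code PShape} to
+ * its level):
+ * <pre>{@code
+ * double[] bounds = { 0, 0, 500, 500 }; // xmin, ymin, xmax, ymax
+ * Map<PShape, Float> contours = MarchingSquares.isolines(bounds, 2.0, 10.0,
+ *     (x, y) -> 50 * Math.sin(x * 0.02) * Math.cos(y * 0.02));
+ * }</pre>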
+ * + * @param bounds [xmin, ymin, xmax, ymax] of the sampling area + * @param sampleSpacing grid spacing in pixels (smaller = more detail, + * slower) + * @param intervalValueSpacing contour interval spacing in "height units" + * @param fn (x,y) -> height + */ + public static Map isolines(double[] bounds, double sampleSpacing, double intervalValueSpacing, DoubleBinaryOperator fn) { + return isolines(bounds, sampleSpacing, intervalValueSpacing, Double.NaN, Double.NaN, fn); + } + + /** + * Builds isolines by sampling a regular grid over a rectangle and evaluating a + * user-provided height function z=f(x,y) at each sample. + * + * @param bounds [xmin, ymin, xmax, ymax] of the sampling area + * @param sampleSpacing grid spacing in pixels (smaller = more detail, + * slower) + * @param intervalValueSpacing contour interval spacing in "height units" + * @param isolineMin minimum contour value (inclusive). Pass + * Double.NaN to auto-detect from data. + * @param isolineMax maximum contour value (inclusive). Pass + * Double.NaN to auto-detect from data. + * @param fn (x,y) -> height + */ + public static Map isolines(double[] bounds, double sampleSpacing, double intervalValueSpacing, double isolineMin, double isolineMax, + DoubleBinaryOperator fn) { + if (sampleSpacing <= 0) { + throw new IllegalArgumentException("sampleSpacing must be > 0"); + } + if (intervalValueSpacing <= 0) { + throw new IllegalArgumentException("intervalValueSpacing must be > 0"); + } + if (!Double.isNaN(isolineMax) && !Double.isNaN(isolineMin) && isolineMax < isolineMin) { + return Collections.emptyMap(); + } + + if (bounds.length < 4) { + throw new IllegalArgumentException("bounds must be double[4] {xmin, ymin, xmax, ymax}"); + } + double x = bounds[0]; + double y = bounds[1]; + double w = bounds[2] - x; + double h = bounds[3] - y; + + // Regular grid counts + final int nx = (int) Math.floor(w / sampleSpacing) + 1; + final int ny = (int) Math.floor(h / sampleSpacing) + 1; + + final float x0 = (float) x; + final float y0 = (float) y; + final float dx = (float) sampleSpacing; + final float dy = (float) sampleSpacing; + + final float[] z = new float[nx * ny]; + float minZ = Float.POSITIVE_INFINITY; + float maxZ = Float.NEGATIVE_INFINITY; + + // Row-major fill: y outer, x inner => index = iy*nx + ix + int idx = 0; + for (int iy = 0; iy < ny; iy++) { + final double yy = y + iy * sampleSpacing; + for (int ix = 0; ix < nx; ix++, idx++) { + final double xx = x + ix * sampleSpacing; + float val = (float) fn.applyAsDouble(xx, yy); + z[idx] = val; + if (val < minZ) { + minZ = val; + } + if (val > maxZ) { + maxZ = val; + } + } + } + + return isolinesRegularGrid(z, nx, ny, x0, y0, dx, dy, intervalValueSpacing, isolineMin, isolineMax, minZ, maxZ); + } + + /** + * Computes isolines for a regular grid of data. + * + * @param z row-major array of grid values + * @param nx number of points in x direction + * @param ny number of points in y direction + * @param bounds [xmin, ymin, xmax, ymax] of the sampling area + * @param intervalValueSpacing vertical distance between contour levels + */ + public static Map isolines(float[] z, int nx, int ny, double[] bounds, double intervalValueSpacing) { + return isolines(z, nx, ny, bounds, intervalValueSpacing, Double.NaN, Double.NaN); + } + + /** + * Computes isolines for a regular grid of data. 
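+ * <p>
+ * A minimal sketch of the expected row-major layout ({@code z[iy * nx + ix]});
+ * the grid size, bounds and values below are illustrative:
+ * <pre>{@code
+ * int nx = 200, ny = 150;
+ * float[] z = new float[nx * ny];
+ * for (int iy = 0; iy < ny; iy++) {
+ *     for (int ix = 0; ix < nx; ix++) {
+ *         z[iy * nx + ix] = (float) Math.hypot(ix - nx / 2.0, iy - ny / 2.0);
+ *     }
+ * }
+ * Map<PShape, Float> contours = MarchingSquares.isolines(z, nx, ny,
+ *     new double[] { 0, 0, 400, 300 }, 10, Double.NaN, Double.NaN);
+ * }</pre>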
+ * + * @param z row-major array of grid values + * @param nx number of points in x direction + * @param ny number of points in y direction + * @param bounds [xmin, ymin, xmax, ymax] of the sampling area + * @param intervalValueSpacing vertical distance between contour levels + * @param isolineMin minimum value to contour. Pass Double.NaN to + * auto-detect from data. + * @param isolineMax maximum value to contour. Pass Double.NaN to + * auto-detect from data. + */ + public static Map isolines(float[] z, int nx, int ny, double[] bounds, double intervalValueSpacing, double isolineMin, double isolineMax) { + if (bounds.length < 4) { + throw new IllegalArgumentException("bounds must be double[4] {xmin, ymin, xmax, ymax}"); + } + double x = bounds[0]; + double y = bounds[1]; + double w = bounds[2] - x; + double h = bounds[3] - y; + + final float x0 = (float) x; + final float y0 = (float) y; + final float dx = (float) (w / (nx - 1)); + final float dy = (float) (h / (ny - 1)); + + // Compute min/max for auto-ranging and optimization + float minZ = Float.POSITIVE_INFINITY, maxZ = Float.NEGATIVE_INFINITY; + for (float val : z) { + minZ = Math.min(minZ, val); + maxZ = Math.max(maxZ, val); + } + + return isolinesRegularGrid(z, nx, ny, x0, y0, dx, dy, intervalValueSpacing, isolineMin, isolineMax, minZ, maxZ); + } + + /** + * Traces only the zero-contour (fn = 0) using marching squares. Useful for + * implicit curves like Voronoi edges where you want f(x,y)=0 only. + * + * @param bounds [xmin, ymin, xmax, ymax] of the sampling area + * @param sampleSpacing grid spacing in pixels (smaller = more detail, slower) + * @param smoothing unused + * @param fn (x,y) -> value whose zero-set is traced + * @return map of PShape -> level (always 0f) + */ + public static Map isolineZero(double[] bounds, double sampleSpacing, DoubleBinaryOperator fn) { + + if (sampleSpacing <= 0) { + throw new IllegalArgumentException("sampleSpacing must be > 0"); + } + + if (bounds.length < 4) { + throw new IllegalArgumentException("bounds must be double[4] {xmin, ymin, xmax, ymax}"); + } + double x = bounds[0]; + double y = bounds[1]; + double w = bounds[2] - x; + double h = bounds[3] - y; + + // Regular grid counts (stable, no epsilon loops) + final int nx = (int) Math.floor(w / sampleSpacing) + 1; + final int ny = (int) Math.floor(h / sampleSpacing) + 1; + + if (nx < 2 || ny < 2) { + return Collections.emptyMap(); + } + + final float x0 = (float) x; + final float y0 = (float) y; + final float dx = (float) sampleSpacing; + final float dy = (float) sampleSpacing; + + final float[] z = new float[nx * ny]; + + // Row-major fill: y outer, x inner => index = iy*nx + ix + int idx = 0; + for (int iy = 0; iy < ny; iy++) { + final double yy = y + iy * sampleSpacing; + for (int ix = 0; ix < nx; ix++, idx++) { + final double xx = x + ix * sampleSpacing; + z[idx] = (float) fn.applyAsDouble(xx, yy); + } + } + + final int cellsX = nx - 1, cellsY = ny - 1; + final int cellCount = cellsX * cellsY; + + // Single level: 0 + return processLevel(0f, z, nx, ny, cellsX, cellsY, cellCount, x0, y0, dx, dy); + } + + /** + * Computes isolines for a regular grid of data. 
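For the zero-contour variant above, a minimal sketch (hypothetical caller code; element types assumed as before) tracing the implicit circle x^2 + y^2 = 100^2:

```java
import java.util.Map;

import micycle.pgs.commons.MarchingSquares;
import processing.core.PShape;

class ZeroContourDemo {
    public static void main(String[] args) {
        // The zero set of f(x, y) = x^2 + y^2 - 100^2 is a circle of radius 100.
        Map<PShape, Float> zero = MarchingSquares.isolineZero(
                new double[] { -150, -150, 150, 150 },  // bounds
                1.0,                                    // sampleSpacing
                (x, y) -> x * x + y * y - 100 * 100);   // only f(x, y) = 0 is traced
        System.out.println(zero.size() + " zero-contour path(s)");
    }
}
```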
+ * + * @param z row-major array of grid values + * @param nx number of points in x direction + * @param ny number of points in y direction + * @param x0 minimum x coordinate + * @param y0 minimum y coordinate + * @param dx sampling spacing in x + * @param dy sampling spacing in y + * @param intervalValueSpacing vertical distance between contour levels + * @param isolineMin minimum value to contour (can be NaN) + * @param isolineMax maximum value to contour (can be NaN) + * @param dataMin min value present in z + * @param dataMax max value present in z + * @return a map of PShapes to their corresponding levels + */ + private static Map isolinesRegularGrid(float[] z, int nx, int ny, float x0, float y0, float dx, float dy, double intervalValueSpacing, + double isolineMin, double isolineMax, float dataMin, float dataMax) { + + final int cellsX = nx - 1, cellsY = ny - 1; + final int cellCount = cellsX * cellsY; + + double start = Double.isNaN(isolineMin) ? dataMin : isolineMin; + double end = Double.isNaN(isolineMax) ? dataMax : isolineMax; + + final int levelCount = (int) Math.floor((end - start) / intervalValueSpacing) + 1; + + if (levelCount <= 0) { + return Collections.emptyMap(); + } + + // Use pre-computed min/max for early exit optimization + final float finalMinZ = dataMin; + final float finalMaxZ = dataMax; + + // @formatter:off + return IntStream.range(0, levelCount) + .parallel() + .mapToObj(li -> { + final float level = (float) (start + li * intervalValueSpacing); + // Early exit if level out of bounds + if (level < finalMinZ || level > finalMaxZ) { + return Collections.emptyMap(); + } + return processLevel(level, z, nx, ny, cellsX, cellsY, cellCount, x0, y0, dx, dy); + }) + .flatMap(map -> map.entrySet().stream()) + .collect(Collectors.toMap( + Map.Entry::getKey, + Map.Entry::getValue, + (v1, v2) -> v1, // merge function (shouldn't happen, but required) + () -> new LinkedHashMap<>(Math.max(16, levelCount * 8)) + )); + // @formatter:on + } + + /** + * Process a single contour level and return all shapes at that level. + */ + private static Map processLevel(float level, float[] z, int nx, int ny, int cellsX, int cellsY, int cellCount, float x0, float y0, float dx, + float dy) { + + // Thread-local arrays (each parallel stream gets its own) + final byte[] codes = new byte[cellCount]; + final byte[] amb = new byte[cellCount]; + final byte[] visited = new byte[cellCount]; + + // Build marching squares codes + buildMarchingSquaresCodes(codes, amb, z, nx, cellsX, cellsY, level); + + // Trace all contours at this level + return traceContoursAtLevel(level, codes, amb, visited, z, nx, ny, cellsX, cellsY, cellCount, x0, y0, dx, dy); + } + + /** + * Build marching squares codes for all cells at given level. + */ + private static void buildMarchingSquaresCodes(byte[] codes, byte[] amb, float[] z, int nx, int cellsX, int cellsY, float level) { + int c = 0; + for (int iy = 0; iy < cellsY; iy++) { + final int row0 = iy * nx; + final int row1 = (iy + 1) * nx; + + for (int ix = 0; ix < cellsX; ix++, c++) { + final float v0 = z[row0 + ix]; + final float v1 = z[row0 + ix + 1]; + final float v2 = z[row1 + ix + 1]; + final float v3 = z[row1 + ix]; + + int code = ((v0 > level ? 1 : 0) << 0) | ((v1 > level ? 1 : 0) << 1) | ((v2 > level ? 1 : 0) << 2) | ((v3 > level ? 1 : 0) << 3); + codes[c] = (byte) code; + + // Asymptotic decider (Nielson & Hamann) better? + if (code == 5 || code == 10) { + float center = 0.25f * (v0 + v1 + v2 + v3); + amb[c] = (byte) (center > level ? 
1 : 0); + } else { + amb[c] = 0; + } + } + } + } + + /** + * Trace all contours at a given level. + */ + private static Map traceContoursAtLevel(float level, byte[] codes, byte[] amb, byte[] visited, float[] z, int nx, int ny, int cellsX, + int cellsY, int cellCount, float x0, float y0, float dx, float dy) { + + // Collect raw polylines first + List paths = new ArrayList<>(); + + for (int cell = 0; cell < cellCount; cell++) { + int code = codes[cell] & 0xFF; + if (code == 0 || code == 15) + continue; + + int edgesMask = edgesUsedMask(code); + int unvisited = edgesMask & (~visited[cell] & 0x0F); + + while (unvisited != 0) { + int startEdge = Integer.numberOfTrailingZeros(unvisited); + + FloatPath path = traceOne(level, cell, startEdge, codes, amb, visited, z, nx, ny, x0, y0, dx, dy); + if (path != null && path.sizePairs() >= 2) { + paths.add(path); + } + + unvisited = edgesMask & (~visited[cell] & 0x0F); + } + } + + stitchPathsDirect(paths); + + Map result = new LinkedHashMap<>(paths.size() * 2); + for (FloatPath p : paths) { + p.snapClosed(1e-3f); + + PShape s = toPShape(p); + if (s.getVertexCount() > 1) { + result.put(s, level); + } + } + return result; + } + + private static void stitchPathsDirect(List paths) { + boolean merged; + do { + merged = false; + + outer: for (int i = 0; i < paths.size(); i++) { + FloatPath a = paths.get(i); + + for (int j = i + 1; j < paths.size(); j++) { + FloatPath b = paths.get(j); + + FloatPath res = tryMergeDirect(a, b); + if (res != null) { + // res is the merged path; it is either 'a' or 'b' + if (res == a) { + paths.set(i, a); + paths.remove(j); + } else { // res == b + paths.set(i, b); + paths.remove(j); + } + merged = true; + break outer; // restart scanning after any merge + } + } + } + } while (merged); + } + + private static boolean sameXY(float ax, float ay, float bx, float by) { + return ax == bx && ay == by; + } + + /** + * Try to merge two paths if any endpoints match exactly. Returns the merged + * path (either a or b) or null if no merge. + */ + private static FloatPath tryMergeDirect(FloatPath a, FloatPath b) { + float aFx = a.firstX(), aFy = a.firstY(); + float aLx = a.lastX(), aLy = a.lastY(); + float bFx = b.firstX(), bFy = b.firstY(); + float bLx = b.lastX(), bLy = b.lastY(); + + // A.end == B.start => A += B + if (sameXY(aLx, aLy, bFx, bFy)) { + a.appendPath(b, true); + return a; + } + + // A.end == B.end => reverse B, A += B + if (sameXY(aLx, aLy, bLx, bLy)) { + b.reversePairs(); + a.appendPath(b, true); + return a; + } + + // A.start == B.end => B += A (result is B) + if (sameXY(aFx, aFy, bLx, bLy)) { + b.appendPath(a, true); + return b; + } + + // A.start == B.start => reverse B, B += A (result is B) + if (sameXY(aFx, aFy, bFx, bFy)) { + b.reversePairs(); + b.appendPath(a, true); + return b; + } + + return null; + } + + // Edge ids for a cell (i,j) with corners: + // v0 = (i, j) bottom-left + // v1 = (i+1, j) bottom-right + // v2 = (i+1, j+1) top-right + // v3 = (i, j+1) top-left + // edges: + // 0 = bottom (v0-v1) + // 1 = right (v1-v2) + // 2 = top (v3-v2) + // 3 = left (v0-v3) + + private static final int E_BOTTOM = 0; + private static final int E_RIGHT = 1; + private static final int E_TOP = 2; + private static final int E_LEFT = 3; + + /** + * Returns the edge opposite to the given edge in a square cell. 
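A worked illustration of the 4-bit case index built by `buildMarchingSquaresCodes` and consumed by `edgesUsedMask`/`partnerEdge` (editorial sketch only; corner values chosen arbitrarily):

```java
// One cell at contour level 1.0, corner samples in the order used above:
// v0 = bottom-left, v1 = bottom-right, v2 = top-right, v3 = top-left.
float level = 1.0f;
float v0 = 1.2f, v1 = 0.8f, v2 = 0.4f, v3 = 1.5f;
int code = ((v0 > level ? 1 : 0) << 0)
        | ((v1 > level ? 1 : 0) << 1)
        | ((v2 > level ? 1 : 0) << 2)
        | ((v3 > level ? 1 : 0) << 3); // = 0b1001 = 9
// Case 9 is a "bottom-top" case: edgesUsedMask(9) == edgeBit(E_BOTTOM) | edgeBit(E_TOP),
// so the traced segment's endpoints come from interpolating along those two edges.
```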
+ * + * @param e edge index [0..3] + * @return opposite edge index + */ + private static int oppositeEdge(int e) { + return switch (e) { + case E_BOTTOM -> E_TOP; + case E_TOP -> E_BOTTOM; + case E_LEFT -> E_RIGHT; + case E_RIGHT -> E_LEFT; + default -> throw new IllegalArgumentException("bad edge " + e); + }; + } + + /** + * Returns a bitmask representing the given edge. + */ + private static int edgeBit(int e) { + return 1 << e; + } + + /** Which edges are intersected for a given marching squares case (0..15). */ + private static int edgesUsedMask(int code) { + return switch (code) { + case 0, 15 -> 0; + case 1, 14 -> edgeBit(E_LEFT) | edgeBit(E_BOTTOM); + case 2, 13 -> edgeBit(E_BOTTOM) | edgeBit(E_RIGHT); + case 3, 12 -> edgeBit(E_LEFT) | edgeBit(E_RIGHT); + case 4, 11 -> edgeBit(E_RIGHT) | edgeBit(E_TOP); + case 6, 9 -> edgeBit(E_BOTTOM) | edgeBit(E_TOP); + case 7, 8 -> edgeBit(E_LEFT) | edgeBit(E_TOP); + case 5, 10 -> edgeBit(E_BOTTOM) | edgeBit(E_RIGHT) | edgeBit(E_TOP) | edgeBit(E_LEFT); + default -> 0; + }; + } + + /** + * Given a cell and an entry edge that is intersected, return the exit edge (the + * other end of the segment inside that cell). For ambiguous cases 5/10 we + * consult amb[cell]. + */ + private static int partnerEdge(int code, int ambMode, int entryEdge) { + return switch (code) { + case 1, 14 -> (entryEdge == E_LEFT) ? E_BOTTOM : E_LEFT; + case 2, 13 -> (entryEdge == E_BOTTOM) ? E_RIGHT : E_BOTTOM; + case 3, 12 -> (entryEdge == E_LEFT) ? E_RIGHT : E_LEFT; + case 4, 11 -> (entryEdge == E_RIGHT) ? E_TOP : E_RIGHT; + case 6, 9 -> (entryEdge == E_BOTTOM) ? E_TOP : E_BOTTOM; + case 7, 8 -> (entryEdge == E_LEFT) ? E_TOP : E_LEFT; + + case 5 -> { + // case 5: v0 and v2 are "high" corners (diagonal) + // ambMode==1 connect high corners => pairs (0-3) and (1-2) + // ambMode==0 connect other => pairs (0-1) and (2-3) + if (ambMode == 1) { + yield switch (entryEdge) { + case E_BOTTOM -> E_LEFT; + case E_LEFT -> E_BOTTOM; + case E_RIGHT -> E_TOP; + case E_TOP -> E_RIGHT; + default -> -1; + }; + } else { + yield switch (entryEdge) { + case E_BOTTOM -> E_RIGHT; + case E_RIGHT -> E_BOTTOM; + case E_TOP -> E_LEFT; + case E_LEFT -> E_TOP; + default -> -1; + }; + } + } + + case 10 -> { + // case 10: v1 and v3 are "high" corners (other diagonal) + // ambMode==1 connect high corners => pairs (0-1) and (2-3) + // ambMode==0 connect other => pairs (0-3) and (1-2) + if (ambMode == 1) { + yield switch (entryEdge) { + case E_BOTTOM -> E_RIGHT; + case E_RIGHT -> E_BOTTOM; + case E_TOP -> E_LEFT; + case E_LEFT -> E_TOP; + default -> -1; + }; + } else { + yield switch (entryEdge) { + case E_BOTTOM -> E_LEFT; + case E_LEFT -> E_BOTTOM; + case E_RIGHT -> E_TOP; + case E_TOP -> E_RIGHT; + default -> -1; + }; + } + } + + default -> -1; + }; + } + + /** + * Trace one polyline starting from (startCell,startEdge) at a given level. + * Marks visited edges along the way. 
+ */ + private static FloatPath traceOne(float level, int startCell, int startEdge, byte[] codes, byte[] amb, byte[] visited, float[] z, int nx, int ny, float x0, + float y0, float dx, float dy) { + final int cellsX = nx - 1; + final int cellsY = ny - 1; + + int cell = startCell; + int edge = startEdge; + + FloatPath path = new FloatPath(128); + + final float[] tmp0 = new float[2]; + final float[] tmp1 = new float[2]; + + edgePoint(level, cell, edge, z, nx, x0, y0, dx, dy, tmp0); + path.add(tmp0[0], tmp0[1]); + + final int maxSteps = cellsX * cellsY * 4; + + for (int steps = 0; steps < maxSteps; steps++) { + final int code = codes[cell] & 0xFF; + final int exit = partnerEdge(code, amb[cell] & 0xFF, edge); + if (exit < 0) { + break; + } + + visited[cell] = (byte) (visited[cell] | edgeBit(edge) | edgeBit(exit)); + + edgePoint(level, cell, exit, z, nx, x0, y0, dx, dy, tmp1); + path.addIfDifferent(tmp1[0], tmp1[1]); + + final int cx = cell % cellsX; + final int cy = cell / cellsX; + + int ncx = cx, ncy = cy; + switch (exit) { + case E_BOTTOM -> ncy = cy - 1; + case E_TOP -> ncy = cy + 1; + case E_LEFT -> ncx = cx - 1; + case E_RIGHT -> ncx = cx + 1; + } + + // leaving grid => open contour + if (ncx < 0 || ncx >= cellsX || ncy < 0 || ncy >= cellsY) { + break; + } + + final int nextCell = ncy * cellsX + ncx; + final int nextEdge = oppositeEdge(exit); + + // closed loop + if (nextCell == startCell && nextEdge == startEdge) { + break; + } + + // stop if next cell doesn't contain this entry edge, or already traced there + final int nextCode = codes[nextCell] & 0xFF; + if ((edgesUsedMask(nextCode) & edgeBit(nextEdge)) == 0) { + break; + } + if ((visited[nextCell] & edgeBit(nextEdge)) != 0) { + break; + } + + cell = nextCell; + edge = nextEdge; + } + + return path.sizePairs() >= 2 ? path : null; + } + + /** Compute interpolated (x,y) point where contour level crosses a cell edge. */ + private static void edgePoint(float level, int cell, int edge, float[] z, int nx, float x0, float y0, float dx, float dy, float[] outXY) { + final int cellsX = nx - 1; + final int ix = cell % cellsX; + final int iy = cell / cellsX; + + final int row0 = iy * nx; + final int row1 = (iy + 1) * nx; + + final float v0 = z[row0 + ix]; + final float v1 = z[row0 + ix + 1]; + final float v2 = z[row1 + ix + 1]; + final float v3 = z[row1 + ix]; + + final float X0 = x0 + ix * dx; + final float X1 = X0 + dx; + final float Y0 = y0 + iy * dy; + final float Y1 = Y0 + dy; + + float t, x, y; + + switch (edge) { + case E_BOTTOM -> { + t = interp(level, v0, v1); + x = X0 + t * (X1 - X0); + y = Y0; + } + case E_RIGHT -> { + t = interp(level, v1, v2); + x = X1; + y = Y0 + t * (Y1 - Y0); + } + case E_TOP -> { + t = interp(level, v3, v2); + x = X0 + t * (X1 - X0); + y = Y1; + } + case E_LEFT -> { + t = interp(level, v0, v3); + x = X0; + y = Y0 + t * (Y1 - Y0); + } + default -> throw new IllegalArgumentException("bad edge " + edge); + } + + outXY[0] = x; + outXY[1] = y; + } + + /** + * Linear interpolation between two values to find where a target level falls. + * + * @param level target iso-value + * @param a first value + * @param b second value + * @return interpolation factor [0..1] + */ + private static float interp(float level, float a, float b) { + float d = (b - a); + if (d == 0f) { + // Deterministic choice so adjacent cells compute identical points. + // If the whole edge is exactly on the contour, pick the first endpoint. + return (level == a) ? 
0f : 0.5f; + } + return (level - a) / d; + } + + /** + * Converts a FloatPath into a Processing PShape PATH. + */ + private static PShape toPShape(FloatPath path) { + PShape s = new PShape(); + s.setFamily(PShape.PATH); + s.setStroke(true); + s.setStroke(128); + s.setStrokeWeight(2); + s.setFill(false); + + boolean closed = path.isClosed(1e-4f); + int vertexCount = closed ? path.sizePairs() - 1 : path.sizePairs(); + + s.beginShape(); + for (int i = 0; i < vertexCount; i++) { + float x = path.x(i); + float y = path.y(i); + s.vertex(x, y); + } + s.endShape(closed ? PConstants.CLOSE : PConstants.OPEN); + + return s; + } + + /** + * A lightweight list-of-floats structure for storing (x,y) coordinate pairs of + * a path. Avoids the overhead of many PVector objects. + */ + private static final class FloatPath { + private float[] data; + private int size; // number of floats used (even) + + /** + * @param initialPairsCapacity initial number of (x,y) pairs to allocate space + * for + */ + FloatPath(int initialPairsCapacity) { + data = new float[Math.max(16, initialPairsCapacity * 2)]; + size = 0; + } + + int sizePairs() { + return size >> 1; + } + + float x(int i) { + return data[i << 1]; + } + + float y(int i) { + return data[(i << 1) + 1]; + } + + void add(float x, float y) { + int ns = size + 2; + if (ns > data.length) { + data = Arrays.copyOf(data, data.length << 1); + } + data[size] = x; + data[size + 1] = y; + size = ns; + } + + void addIfDifferent(float x, float y) { + if (size >= 2) { + float lx = data[size - 2]; + float ly = data[size - 1]; + if (lx == x && ly == y) { + return; + } + } + add(x, y); + } + + boolean isClosed(float eps) { + if (sizePairs() < 3) { + return false; + } + float x0 = data[0], y0 = data[1]; + float xn = data[size - 2], yn = data[size - 1]; + return (Math.abs(x0 - xn) <= eps) && (Math.abs(y0 - yn) <= eps); + } + + float firstX() { + return data[0]; + } + + float firstY() { + return data[1]; + } + + float lastX() { + return data[size - 2]; + } + + float lastY() { + return data[size - 1]; + } + + void reversePairs() { + int n = sizePairs(); + for (int i = 0, j = n - 1; i < j; i++, j--) { + int ia = i << 1; + int ja = j << 1; + float tx = data[ia], ty = data[ia + 1]; + data[ia] = data[ja]; + data[ia + 1] = data[ja + 1]; + data[ja] = tx; + data[ja + 1] = ty; + } + } + + void appendPath(FloatPath other, boolean skipFirstPair) { + int startPair = skipFirstPair ? 
1 : 0; + int otherPairs = other.sizePairs(); + for (int i = startPair; i < otherPairs; i++) { + add(other.x(i), other.y(i)); + } + } + + void snapClosed(float eps) { + if (sizePairs() < 3) + return; + float x0 = firstX(), y0 = firstY(); + float xn = lastX(), yn = lastY(); + if (Math.abs(x0 - xn) <= eps && Math.abs(y0 - yn) <= eps) { + data[size - 2] = x0; + data[size - 1] = y0; + } + } + } +} \ No newline at end of file diff --git a/src/main/java/micycle/pgs/commons/MinimumBoundingTriangle.java b/src/main/java/micycle/pgs/commons/MinimumBoundingTriangle.java deleted file mode 100644 index 73407197..00000000 --- a/src/main/java/micycle/pgs/commons/MinimumBoundingTriangle.java +++ /dev/null @@ -1,395 +0,0 @@ -package micycle.pgs.commons; - -import static java.lang.Math.floorMod; - -import java.util.Arrays; - -import org.locationtech.jts.geom.Coordinate; -import org.locationtech.jts.geom.Geometry; -import org.locationtech.jts.geom.GeometryFactory; -import org.locationtech.jts.geom.Polygon; -import org.locationtech.jts.geom.PrecisionModel; - -/** - * Computes the Minimum Bounding Triangle (MBT) for the points in a Geometry. - * The MBT is the smallest triangle which covers all the input points (this is - * also known as the Smallest Enclosing Triangle). - *

      - * The algorithm for finding minimum area enclosing triangles is based on an - * elegant geometric characterisation initially introduced in Klee & Laskowski. - * The algorithm iterates over each edge of the convex polygon setting side C of - * the enclosing triangle to be flush with this edge. A side S is said to - * be flush with edge E if S⊇E. The authors of O’Rourke et al. - * prove that for each fixed flush side C a local minimum enclosing triangle - * exists. Moreover, the authors have shown that: - *

        - *
      • The midpoints of the enclosing triangle’s sides must touch the - * polygon.
      • - *
      • There exists a local minimum enclosing triangle with at least two sides - * flush with edges of the polygon. The third side of the triangle can be either - * flush with an edge or tangent to the polygon.
      • - *
      - * Thus, for each flush side C the algorithm will find the second flush side and - * set the third side either flush/tangent to the polygon. - *

      - * O'Rourke provides a θ(n) algorithm for finding the minimal enclosing triangle - * of a 2D convex polygon with n vertices. However, the overall - * complexity for the concave computation is O(nlog(n)) because a convex hull - * must first be computed for the input geometry. - * - * @author Python implementation - * by Charlie Marsh - * @author Java port by Michael Carleton - * - */ -public class MinimumBoundingTriangle { - - private static final GeometryFactory GEOM_FACTORY = new GeometryFactory(new PrecisionModel(PrecisionModel.FLOATING_SINGLE)); - private static final double EPSILON = 0.01; // account for floating-point errors / within-distance thresold - - private final int n; - private final Coordinate[] points; - - /** - * Creates a new instance of a Maximum Inscribed Triangle computation. - * - * @param shape an areal geometry - */ - public MinimumBoundingTriangle(Geometry shape) { - shape = shape.convexHull(); - points = Arrays.copyOfRange(shape.getCoordinates(), 0, shape.getCoordinates().length - 1); // treat coordinates as unclosed - n = points.length; - } - - /** - * Gets a geometry which represents the Minimium Bounding Triangle. - * - * @return a triangle Geometry representing the Minimum Bounding Triangle - */ - public Geometry getTriangle() { - int a = 1; - int b = 2; - - double minArea = Double.MAX_VALUE; - Polygon optimalTriangle = null; - - for (int i = 0; i < n; i++) { - TriangleForIndex tForIndex = new TriangleForIndex(i, a, b); - Polygon triangle = tForIndex.triangle; - a = tForIndex.aOut; - b = tForIndex.bOut; - if (triangle != null) { - double area = triangle.getArea(); - /* - * If the found enclosing triangle is valid/minimal and its area is less than - * the area of the optimal enclosing triangle found so far, then the optimal - * enclosing triangle is updated. - */ - if (optimalTriangle == null || area < minArea) { - optimalTriangle = triangle; - minArea = area; - } - } - } - - return optimalTriangle; - } - - /** - * Computes the minimal triangle with edge C flush to vertex c. - * - * Abstracted into class (during Java port) to better structure the many - * methods. - */ - private class TriangleForIndex { - - // return values - final int aOut, bOut; - final Polygon triangle; - - private final Side sideC; - private Side sideA, sideB; - - TriangleForIndex(int c, int a, int b) { - a = Math.max(a, c + 1) % n; - b = Math.max(b, c + 2) % n; - sideC = side(c); - - /* - * A necessary condition for finding a minimum enclosing triangle is that b is - * on the right chain and a on the left. The first step inside the loop is - * therefore to move the index b on the right chain using the onLeftChain() - * subalgorithm. - */ - while (onLeftChain(b)) { - b = (b + 1) % n; - } - - /* - * The next condition which must be fulfilled is that a and b must be critical, - * or high. The incrementLowHigh() subalgorithm advances a and b until this - * condition is fulfilled. - */ - while (dist(b, sideC) > dist(a, sideC)) { // Increment a if low, b if high - int[] ab = incrementLowHigh(a, b); - a = ab[0]; - b = ab[1]; - } - - /* - * Next, b will be advanced until [gamma(a) b] is tangent to the convex polygon - * via the tangency() subalgorithm. 
- */ - while (tangency(a, b)) { // Search for b tangency - b = (b + 1) % n; - } - - Coordinate gammaB = gamma(points[b], side(a), sideC); - if (low(b, gammaB) || dist(b, sideC) < dist((a - 1) % n, sideC)) { - sideB = side(b); - sideA = side(a); - sideB = new Side(sideC.intersection(sideB), sideA.intersection(sideB)); - - if (dist(sideB.midpoint(), sideC) < dist(floorMod(a - 1, n), sideC)) { - Coordinate gammaA = gamma(points[floorMod(a - 1, n)], sideB, sideC); - sideA = new Side(gammaA, points[floorMod(a - 1, n)]); - } - } else { - gammaB = gamma(points[b], side(a), sideC); - sideB = new Side(gammaB, points[b]); - sideA = new Side(gammaB, points[floorMod(a - 1, n)]); - } - - // Calculate final intersections - final Coordinate vertexA = sideC.intersection(sideB); - final Coordinate vertexB = sideC.intersection(sideA); - final Coordinate vertexC = sideA.intersection(sideB); - - // Check if triangle is valid local minimum - if (!isValidTriangle(vertexA, vertexB, vertexC, a, b, c)) { - triangle = null; - } else { - triangle = GEOM_FACTORY.createPolygon(new Coordinate[] { vertexA, vertexB, vertexC, vertexA }); - } - - aOut = a; - bOut = b; - } - - /** - * Computes the distance from the point (specified by its index) to the side. - */ - private double dist(int point, Side side) { - return side.distance(points[floorMod(point, points.length)]); - } - - /** - * Computes the distance from the point to the side. - */ - private double dist(Coordinate point, Side side) { - return side.distance(point); - } - - /** - * Calculate the point on the side 'on' that is twice as far from 'base' as - * 'point'. More formally, point γ(p) (Gamma) is the point on the line [a, a−1] - * such that h(γ(p))=2 × h(p), where h(p) is the distance of p from line - * determined by side C. 
- */ - private Coordinate gamma(Coordinate point, Side on, Side base) { - Coordinate intersection = on.intersection(base); - if (intersection != null) { - double dist = 2 * dist(point, base); - // Calculate differential change in distance - if (on.vertical) { - double ddist = dist(new Coordinate(intersection.x, intersection.y + 1), base); - Coordinate guess = new Coordinate(intersection.x, intersection.y + dist / ddist); - if (ccw(base.p1, base.p2, guess) != ccw(base.p1, base.p2, point)) { - guess = new Coordinate(intersection.x, intersection.y - dist / ddist); - } - return guess; - } else { - double ddist = dist(on.atX(intersection.x + 1), base); - Coordinate guess = on.atX(intersection.x + dist / ddist); - if (ccw(base.p1, base.p2, guess) != ccw(base.p1, base.p2, point)) { - guess = on.atX(intersection.x - dist / ddist); - } - return guess; - } - } - return intersection; - } - - private boolean onLeftChain(int b) { - return dist((b + 1) % n, sideC) >= dist(b, sideC); - } - - /** - * @return [a, b] tuple - */ - private int[] incrementLowHigh(int a, int b) { - Coordinate gammaA = gamma(points[a], side(a), sideC); - - if (high(b, gammaA)) { - b = (b + 1) % n; - } else { - a = (a + 1) % n; - } - return new int[] { a, b }; - } - - private boolean tangency(int a, int b) { - Coordinate gammaB = gamma(points[b], side(a), sideC); - return dist(b, sideC) >= dist((a - 1) % n, sideC) && high(b, gammaB); - } - - private boolean high(int b, Coordinate gammaB) { - // Test if two adjacent vertices are on same side of line (implies tangency) - if (ccw(gammaB, points[b], points[floorMod(b - 1, n)]) == ccw(gammaB, points[b], points[(b + 1) % n])) { - return false; - } - - // Test if Gamma and B are on same side of line from adjacent vertices - if (ccw(points[floorMod(b - 1, n)], points[(b + 1) % n], gammaB) == ccw(points[floorMod(b - 1, n)], points[(b + 1) % n], - points[b])) { - return dist(gammaB, sideC) > dist(b, sideC); - } else { - return false; - } - } - - private boolean low(int b, Coordinate gammaB) { - // Test if two adjacent vertices are on same side of line (implies tangency) - if (ccw(gammaB, points[b], points[floorMod(b - 1, n)]) == ccw(gammaB, points[b], points[(b + 1) % n])) { - return false; - } - - // Test if Gamma and B are on same side of line from adjacent vertices - if (ccw(points[floorMod(b - 1, n)], points[(b + 1) % n], gammaB) == ccw(points[floorMod(b - 1, n)], points[(b + 1) % n], - points[b])) { - return false; - } else { - return dist(gammaB, sideC) > dist(b, sideC); - } - } - - /** - * Tests whether the chain formed by A, B, and C is counter-clockwise. - */ - private boolean ccw(Coordinate a, Coordinate b, Coordinate c) { - return (b.x - a.x) * (c.y - a.y) > (b.y - a.y) * (c.x - a.x); - } - - /** - * Checks that a triangle composed of the given vertices is a valid local - * minimum (entails that all midpoints of the triangle should touch the - * polygon). 
- * - * @param vertexA Vertex A of the enclosing triangle - * @param vertexB Vertex B of the enclosing triangle - * @param vertexC Vertex C of the enclosing triangle - */ - private boolean isValidTriangle(Coordinate vertexA, Coordinate vertexB, Coordinate vertexC, int a, int b, int c) { - if (vertexA == null || vertexB == null || vertexC == null) { - return false; - } - Coordinate midpointA = midpoint(vertexC, vertexB); - Coordinate midpointB = midpoint(vertexA, vertexC); - Coordinate midpointC = midpoint(vertexA, vertexB); - return (validateMidpoint(midpointA, a) && validateMidpoint(midpointB, b) && validateMidpoint(midpointC, c)); - } - - /** - * Checks that a midpoint touches the polygon on the appropriate side. - */ - private boolean validateMidpoint(Coordinate midpoint, int index) { - Side s = side(index); - - if (s.vertical) { - if (midpoint.x != s.p1.x) { - return false; - } - double maxY = Math.max(s.p1.y, s.p2.y) + EPSILON; - double minY = Math.min(s.p1.y, s.p2.y) - EPSILON; - return (midpoint.y <= maxY && midpoint.y >= minY); - } else { - double maxX = Math.max(s.p1.x, s.p2.x) + EPSILON; - double minX = Math.min(s.p1.x, s.p2.x) - EPSILON; - // Must touch polygon - if (!(midpoint.x <= maxX && midpoint.x >= minX)) { - return false; - } - - return (s.atX(midpoint.x).distance(midpoint) < EPSILON); - } - } - - private Side side(final int i) { - return new Side(points[floorMod(i - 1, n)], points[i]); - } - - private Coordinate midpoint(Coordinate a, Coordinate b) { - return new Coordinate((a.x + b.x) / 2, (a.y + b.y) / 2); - } - } - - /** - * Helper class representing a side, or edge. - */ - private class Side { - - final Coordinate p1, p2; - final double slope, intercept; - final boolean vertical; - - Side(Coordinate p1, Coordinate p2) { - this.p1 = p1; - this.p2 = p2; - slope = (p2.y - p1.y) / (p2.x - p1.x); - intercept = p1.y - slope * p1.x; - vertical = p1.x == p2.x; - } - - private double sqrDistance(Coordinate p) { - double numerator = (p2.x - p1.x) * (p1.y - p.y) - (p1.x - p.x) * (p2.y - p1.y); - numerator *= numerator; - double denominator = (p2.x - p1.x) * (p2.x - p1.x) + (p2.y - p1.y) * (p2.y - p1.y); - return numerator / denominator; - } - - /** - * @return Returns the distance of p from the line - */ - private double distance(Coordinate p) { - return Math.sqrt(sqrDistance(p)); - } - - private Coordinate atX(double x) { - if (vertical) { - return p1; // NOTE return p1 (though incorrect) rather than null - } - return new Coordinate(x, slope * x + intercept); - } - - private Coordinate intersection(Side that) { - if (that.slope == slope) { - return null; - } - - if (vertical) { - return that.atX(p1.x); - } else if (that.vertical) { - return atX(that.p1.x); - } - - double x = (intercept - that.intercept) / (that.slope - slope); - return atX(x); - } - - private Coordinate midpoint() { - return new Coordinate((p1.x + p2.x) / 2, (p1.y + p2.y) / 2); - } - } - -} diff --git a/src/main/java/micycle/pgs/commons/MultiplicativelyWeightedVoronoi.java b/src/main/java/micycle/pgs/commons/MultiplicativelyWeightedVoronoi.java index bf372a5e..db4b1a91 100644 --- a/src/main/java/micycle/pgs/commons/MultiplicativelyWeightedVoronoi.java +++ b/src/main/java/micycle/pgs/commons/MultiplicativelyWeightedVoronoi.java @@ -253,7 +253,9 @@ private static List filterContainedCircles(List exCircleData } /** - * Vanilla implementation. Handles equal-weighted pairs. 
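For context on the dominance constraints described in the replacement javadoc just below (editorial note; the usual multiplicatively weighted distance convention d/w is assumed): each pairwise constraint is bounded by an Apollonius circle, and a site's cell is the intersection of all of them, which is what both the old and new implementations compute.

```latex
% Pairwise dominance under multiplicatively weighted distance; each constraint is
% bounded by an Apollonius circle when w_i \neq w_j (a half-plane when w_i = w_j):
\[
  d_w(p, s_i) = \frac{\lVert p - s_i \rVert}{w_i}, \qquad
  \operatorname{dom}(s_i, s_j) = \{\, p : d_w(p, s_i) \le d_w(p, s_j) \,\}, \qquad
  \operatorname{cell}(s_i) = \bigcap_{j \neq i} \operatorname{dom}(s_i, s_j).
\]
```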
+ * Builds an MWVD cell per site by intersecting its dominance constraints + * against other sites, but accelerates the O(n²) all-pairs process using an + * adaptive KNN expansion on a KD-tree. Handles equal-weighted pairs. */ private static List getMWVD(List sites, Envelope extent) { final PointMap tree = KDTree.create(2); @@ -261,45 +263,86 @@ private static List getMWVD(List sites, Envelope extent) { Point p = geometryFactory.createPoint(s); p.setUserData(s.z); tree.insert(new double[] { p.getX(), p.getY() }, p); - }); - Geometry extentGeometry = geometryFactory.toGeometry(extent); + final Geometry extentGeometry = geometryFactory.toGeometry(extent); + + // Start small, expand as needed + final int kStart = Math.min(16, Math.max(2, sites.size())); + final int kMax = sites.size(); // correctness backstop + + // Heuristic: if the furthest neighbor we considered is much farther than the + // current cell size, + // then remaining (even farther) points are unlikely to affect it. + final double stopFactor = 4.0; - /* - * NOTE optimisation. Look at first 30 (or N/3, if larger, but up to 60) - * neighbors only, since subsequent neighbors usually have negligible effect. - * This makes producing MWVD for hundreds of points a lot more feasible. - */ - final int n = Math.min(Math.max(sites.size() / 3, Math.min(sites.size(), 30)), 60); + return sites.parallelStream().map(site -> { + Geometry dominance = extentGeometry; - List polygons = sites.stream().map(site -> { - var query = tree.queryKnn(new double[] { site.x, site.y }, n); - var me = query.next().value(); // first query is always the site itself - Double w = (Double) me.getUserData(); - Geometry dominance = extentGeometry; // dominance area begins with whole plane + int k = Math.min(kStart, kMax); + int processed = 0; List> neighbors = new ArrayList<>(); - query.forEachRemaining(item -> { - neighbors.add(item); - }); - /* - * The dominance region of a site is formed by taking the boolean AND - * (intersection) of all the Apollonius circles it forms with every other site. - */ - for (var otherSite : neighbors) { - Coordinate other = otherSite.value().getCoordinate(); - Double wOther = (Double) otherSite.value().getUserData(); - Geometry localDominanceCircle = apolloniusCircle(site, other, w, wOther, extentGeometry); - dominance = dominance.intersection(localDominanceCircle); - } + while (true) { + neighbors.clear(); + + var query = tree.queryKnn(new double[] { site.x, site.y }, k); + var me = query.next().value(); // first query is always the site itself + Double w = (Double) me.getUserData(); + + query.forEachRemaining(neighbors::add); + + // Process only newly-added neighbors (when k expands) + for (int i = processed; i < neighbors.size(); i++) { + var otherSite = neighbors.get(i); + Coordinate other = otherSite.value().getCoordinate(); + Double wOther = (Double) otherSite.value().getUserData(); + + Geometry constraint = apolloniusCircle(site, other, w, wOther, extentGeometry); + + if (!dominance.getEnvelopeInternal().intersects(constraint.getEnvelopeInternal())) { + return null; + } - return dominance != null && !dominance.isEmpty() ? 
dominance : null; - }).filter(dominance -> dominance != null) // Filter out empty geoms - .toList(); + // More robust overlay than Geometry#intersection for tricky cases + dominance = OverlayNG.overlay(dominance, constraint, OverlayNG.INTERSECTION); + + if (dominance.isEmpty()) { + return null; + } + } + processed = neighbors.size(); + + if (k >= kMax || dominance.isEmpty()) { + break; + } + + // Stopping heuristic based on current cell size vs. furthest considered + // neighbor distance + MinimumBoundingCircle mbc = new MinimumBoundingCircle(dominance); + double cellR = mbc.getRadius(); + + // If the cell has collapsed to tiny, we’re done + if (cellR <= 1e-12) { + break; + } + + // KNN is returned sorted by distance, so last neighbor is the furthest in this + // batch + Coordinate furthest = neighbors.get(neighbors.size() - 1).value().getCoordinate(); + double furthestDist = site.distance(furthest); + + if (furthestDist > stopFactor * cellR) { + break; + } + + // Expand search + k = Math.min(kMax, k * 2); + } - return polygons; + return !dominance.isEmpty() ? dominance : null; + }).filter(g -> g != null && !g.isEmpty()).toList(); } private static Geometry apolloniusCircle(Coordinate s1, Coordinate s2, double w1, double w2, Geometry extentG) { diff --git a/src/main/java/micycle/pgs/commons/NewtonThieleRingMorpher.java b/src/main/java/micycle/pgs/commons/NewtonThieleRingMorpher.java new file mode 100644 index 00000000..40509125 --- /dev/null +++ b/src/main/java/micycle/pgs/commons/NewtonThieleRingMorpher.java @@ -0,0 +1,551 @@ +package micycle.pgs.commons; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.List; +import java.util.Objects; + +import org.locationtech.jts.algorithm.Orientation; +import org.locationtech.jts.geom.Coordinate; +import org.locationtech.jts.geom.GeometryFactory; +import org.locationtech.jts.geom.LinearRing; + +/** + *

+ * Morphs between two or more planar rings by evaluating a bivariate
+ * Newton–Thiele interpolation surface R(t, y) over the
+ * rings’ boundary coordinates.
+ * <p>
+ * Overview:
+ * <ol>
+ * <li>Normalize and resample each input ring to nIn unique points
+ * (CCW, uniform arc-length).</li>
+ * <li>Align cyclic rotation of all rings to the first ring to fix the
+ * y-parameter seam.</li>
+ * <li>For each boundary index j, compute Newton divided
+ * differences over time nodes t in [0,1].</li>
+ * <li>For each Newton level p, build a Thiele (rational)
+ * interpolant over y and evaluate it at the output y
+ * grid.</li>
+ * <li>Evaluate the resulting Newton polynomial at time t for each
+ * output y to form the intermediate ring.</li>
+ * </ol>
+ * <p>
+ * Notes:
+ * <ul>
+ * <li>Works best with 3+ keyframes; for 2 keyframes the time interpolation is
+ * linear.</li>
+ * <li>y ∈ [0,1) excludes the closure point to avoid duplicating
+ * the first vertex.</li>
+ * <li>Output can be retessellated to any vertex count nOut.</li>
+ * </ul>
+ * <p>
+ * Reference: B. Chen, J. Chen, “Algorithm of Shape
+ * Morphing Based on Bivariate Non-linear Interpolation,” ACM CSAE 2020.
+ * https://doi.org/10.1145/3424978.3424991
      + * + * @author Michael Carleton + */ +public final class NewtonThieleRingMorpher { + + /*- + * TODO: + * Cut shape with holes (hairline gap) to ensure each geometry has genus=1. + * Cuts are discussed in 'Polygon Vertex Set Matching Algorithm for Shapefile Tweening' + * Break-up single polygon to interpolate with a multipolygon. + * See https://github.com/veltman/openvis/blob/master/README.md + * See 'Guaranteed intersection-free polygon morphing' + * See CGAl Shape Deformation: https://doc.cgal.org/latest/Barycentric_coordinates_2/index.html#title10 + * https://homepages.inf.ed.ac.uk/tkomura/cav/presentation14_2018.pdf + * RAP C++ : https://github.com/catherinetaylor2/Shape_Interpolation/blob/master/rigid_interp.cpp + * and https://github.com/deliagander/ARAPShapeInterpolation + */ + + private final GeometryFactory gf; + + private final int m; // number of keyframes + private final int nIn; // input resampled vertices (unique, not closed) + private final int nOut; // output vertices (unique, not closed) + + private final double[] tNodes; // size m + private final double[] yIn; // size nIn, in [0,1) + private final double[] yOut; // size nOut, in [0,1) + + // Precomputed Newton coefficients evaluated at yOut: + // coeffXAtOut[p][k] = coefficient of Newton level p at output vertex k + // (parameter yOut[k]) + private final double[][] coeffXAtOut; // [m][nOut] + private final double[][] coeffYAtOut; // [m][nOut] + + /** + * Convenience constructor for N rings. + * + * @param rings the rings to interpolate (can be more than two) + */ + public NewtonThieleRingMorpher(LinearRing... rings) { + this(Arrays.asList(rings), maxUniqueVertexCount(rings), // resampleVertices + maxUniqueVertexCount(rings), // outputVertices + false // useThieleInY (can be false since nOut==nIn) + ); + } + + /** + * General constructor for multiple keyframes. 
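A hypothetical usage sketch for the class (editorial, not part of the patch; the keyframe rings here are built with JTS's `GeometricShapeFactory` purely for illustration):

```java
import org.locationtech.jts.geom.LinearRing;
import org.locationtech.jts.util.GeometricShapeFactory;

import micycle.pgs.commons.NewtonThieleRingMorpher;

class RingMorphDemo {
    public static void main(String[] args) {
        GeometricShapeFactory gsf = new GeometricShapeFactory();
        gsf.setNumPoints(64);
        gsf.setSize(100);
        LinearRing a = (LinearRing) gsf.createCircle().getExteriorRing();
        gsf.setSize(60);
        LinearRing b = (LinearRing) gsf.createRectangle().getExteriorRing();

        // The varargs constructor resamples both rings to a common vertex count.
        NewtonThieleRingMorpher morpher = new NewtonThieleRingMorpher(a, b);
        LinearRing halfway = morpher.interpolate(0.5); // intermediate ring at t = 0.5
        System.out.println(halfway.getNumPoints() + " vertices in the blended ring");
    }
}
```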
+ * + * @param keyframes rings in temporal order + * @param resampleVertices vertices to resample each ring to (unique points, no + * closure point included) + * @param outputVertices output vertex count (unique) + * @param useThieleInY if false and outputVertices==resampleVertices, skips + * Thiele and uses lattice values + */ + public NewtonThieleRingMorpher(List keyframes, int resampleVertices, int outputVertices, boolean useThieleInY) { + this.gf = keyframes.get(0).getFactory(); + Objects.requireNonNull(keyframes, "keyframes"); + if (keyframes.size() < 2) { + throw new IllegalArgumentException("Need at least 2 keyframes"); + } + if (resampleVertices < 3) { + throw new IllegalArgumentException("resampleVertices must be >= 3"); + } + if (outputVertices < 3) { + throw new IllegalArgumentException("outputVertices must be >= 3"); + } + + this.m = keyframes.size(); + this.nIn = resampleVertices; + this.nOut = outputVertices; + + this.tNodes = linspace01(m); // [0..1], inclusive endpoints + this.yIn = linspace01Open(nIn); // [0..1), no 1.0 + this.yOut = linspace01Open(nOut); // [0..1), no 1.0 + + // 1) Normalize CCW + resample all rings to nIn vertices (unique) + List rings = new ArrayList<>(m); + for (LinearRing ring : keyframes) { + LinearRing ccw = ensureCCW(ring); + Coordinate[] pts = resampleClosedRingUniform(ccw, nIn); + rings.add(pts); + } + + // 2) Align cyclic rotation of each ring to the first ring (keeps y-parameter + // seam consistent) + Coordinate[] ref = rings.get(0); + for (int i = 1; i < m; i++) { + Coordinate[] aligned = rotateToBestMatch(ref, rings.get(i)); + rings.set(i, aligned); + } + + // 3) Build X[i][j], Y[i][j] grid + double[][] X = new double[m][nIn]; + double[][] Y = new double[m][nIn]; + for (int i = 0; i < m; i++) { + Coordinate[] pts = rings.get(i); + for (int j = 0; j < nIn; j++) { + X[i][j] = pts[j].x; + Y[i][j] = pts[j].y; + } + } + + // 4) Newton divided differences along t for each boundary sample j + // coeffX[p][j] = Newton coefficient at level p for vertex j (same for Y) + double[][] coeffX = new double[m][nIn]; + double[][] coeffY = new double[m][nIn]; + + double[] tmp = new double[m]; + for (int j = 0; j < nIn; j++) { + for (int i = 0; i < m; i++) { + tmp[i] = X[i][j]; + } + double[] nx = newtonCoefficients(tNodes, tmp); + for (int p = 0; p < m; p++) { + coeffX[p][j] = nx[p]; + } + + for (int i = 0; i < m; i++) { + tmp[i] = Y[i][j]; + } + double[] ny = newtonCoefficients(tNodes, tmp); + for (int p = 0; p < m; p++) { + coeffY[p][j] = ny[p]; + } + } + + // 5) Thiele recursion along y (boundary parameter) for each Newton level p, + // then precompute those coefficients on yOut. + this.coeffXAtOut = new double[m][nOut]; + this.coeffYAtOut = new double[m][nOut]; + + boolean canSkipThiele = !useThieleInY && (nOut == nIn); + + if (canSkipThiele) { + // lattice mode: coeff at yOut[k] is just coeff[*][k] + for (int p = 0; p < m; p++) { + System.arraycopy(coeffX[p], 0, coeffXAtOut[p], 0, nOut); + System.arraycopy(coeffY[p], 0, coeffYAtOut[p], 0, nOut); + } + } else { + // full mode: build Thiele interpolators for each level p and evaluate at yOut + for (int p = 0; p < m; p++) { + Thiele1D thX = Thiele1D.fromSamples(yIn, coeffX[p]); + Thiele1D thY = Thiele1D.fromSamples(yIn, coeffY[p]); + for (int k = 0; k < nOut; k++) { + double y = yOut[k]; + coeffXAtOut[p][k] = thX.eval(y); + coeffYAtOut[p][k] = thY.eval(y); + } + } + } + } + + /** + * Build the intermediate ring at time t in [0,1]. Output is a valid closed + * LinearRing (first point repeated at end). 
+ */ + public LinearRing interpolate(double t) { + if (Double.isNaN(t) || Double.isInfinite(t)) { + throw new IllegalArgumentException("t must be finite"); + } + t = clamp01(t); + + Coordinate[] out = new Coordinate[nOut + 1]; + for (int k = 0; k < nOut; k++) { + double x = evalNewtonAtT(t, tNodes, coeffXAtOut, k); + double y = evalNewtonAtT(t, tNodes, coeffYAtOut, k); + out[k] = new Coordinate(x, y); + } + out[nOut] = new Coordinate(out[0]); // close ring + return gf.createLinearRing(out); + } + + /** + * Compute Newton interpolation coefficients a[0..m-1] from samples f[0..m-1] at + * nodes x[0..m-1]. a[] are the divided differences: P(x)=a0 + a1(x-x0) + + * a2(x-x0)(x-x1)+... + */ + private static double[] newtonCoefficients(double[] x, double[] f) { + int n = x.length; + double[] a = Arrays.copyOf(f, n); // will be overwritten into divided differences + for (int k = 1; k < n; k++) { + for (int i = n - 1; i >= k; i--) { + double den = x[i] - x[i - k]; + if (den == 0.0) { + throw new IllegalArgumentException("Duplicate x nodes in Newton interpolation"); + } + a[i] = (a[i] - a[i - 1]) / den; + } + } + // now a[k] is divided difference f[x0..xk] + return a; + } + + /** + * Evaluate Newton polynomial at time t for a fixed output vertex index k. + * coeff[p][k] are the Newton coefficients at level p for this vertex. + */ + private static double evalNewtonAtT(double t, double[] tNodes, double[][] coeff, int k) { + int n = tNodes.length; + double v = coeff[n - 1][k]; + for (int p = n - 2; p >= 0; p--) { + v = coeff[p][k] + (t - tNodes[p]) * v; + } + return v; + } + + /** + * Thiele 1D rational interpolation (continued fraction). Builds + * reciprocal-difference coefficients c[] and evaluates: R(x)=c0 + (x-x0)/(c1 + + * (x-x1)/(c2 + ...)) + */ + private static final class Thiele1D { + private static final double EPS = 1e-12; + + private final double[] x; // nodes + private final double[] c; // continued fraction coefficients + + private Thiele1D(double[] x, double[] c) { + this.x = x; + this.c = c; + } + + static Thiele1D fromSamples(double[] x, double[] f) { + if (x.length != f.length) { + throw new IllegalArgumentException("x and f length mismatch"); + } + int n = x.length; + if (n < 2) { + throw new IllegalArgumentException("Need at least 2 points for Thiele"); + } + + double[] coeff = new double[n]; + double[] r = Arrays.copyOf(f, n); // r[i] holds reciprocal differences in-place + + coeff[0] = r[0]; + for (int k = 1; k < n; k++) { + // update r[i] for i>=k using previous stage values + for (int i = n - 1; i >= k; i--) { + double denom = r[i] - r[k - 1]; + if (Math.abs(denom) < EPS) { + // guard singularity; preserve sign + denom = (denom >= 0.0) ? EPS : -EPS; + } + r[i] = (x[i] - x[k - 1]) / denom; + } + coeff[k] = r[k]; + } + return new Thiele1D(Arrays.copyOf(x, n), coeff); + } + + double eval(double xq) { + int n = c.length; + double v = c[n - 1]; + for (int k = n - 2; k >= 0; k--) { + double denom = v; + if (Math.abs(denom) < EPS) { + denom = (denom >= 0.0) ? EPS : -EPS; + } + v = c[k] + (xq - x[k]) / denom; + } + return v; + } + } + + // ------------------------- Geometry helpers ------------------------- + + private static LinearRing ensureCCW(LinearRing ring) { + Coordinate[] c = ring.getCoordinates(); + if (!Orientation.isCCW(c)) { + // reverse() returns a LineString; for a LinearRing it will still be a + // LinearRing in practice, + // but JTS types return LineString so we rebuild below if needed. 
+ Coordinate[] rev = reverseCoords(c); + // ensure it's still a proper ring coordinate array: + if (!rev[0].equals2D(rev[rev.length - 1])) { + rev = Arrays.copyOf(rev, rev.length + 1); + rev[rev.length - 1] = new Coordinate(rev[0]); + } + return ring.getFactory().createLinearRing(rev); + } + return ring; + } + + /** + * Resample a closed ring uniformly by arc length into n unique points (no + * closure point). + */ + private static Coordinate[] resampleClosedRingUniform(LinearRing ring, int n) { + Coordinate[] coords = ring.getCoordinates(); + if (coords.length < 4) { + throw new IllegalArgumentException("Ring must have at least 4 coordinates (including closure)"); + } + + // Drop closure coordinate if present + int last = coords.length - 1; + boolean closed = coords[0].equals2D(coords[last]); + Coordinate[] unique = closed ? Arrays.copyOf(coords, coords.length - 1) : Arrays.copyOf(coords, coords.length); + + unique = removeConsecutiveDuplicates(unique); + if (unique.length < 3) { + throw new IllegalArgumentException("Ring has too few unique points after cleanup"); + } + + // Compute perimeter + double perimeter = 0.0; + for (int i = 0; i < unique.length; i++) { + Coordinate a = unique[i]; + Coordinate b = unique[(i + 1) % unique.length]; + perimeter += a.distance(b); + } + if (perimeter == 0.0) { + throw new IllegalArgumentException("Degenerate ring (zero perimeter)"); + } + + // Sample n points at distances k*perimeter/n + Coordinate[] out = new Coordinate[n]; + + int seg = 0; + double segStartDist = 0.0; + double segLen = unique[0].distance(unique[1 % unique.length]); + + for (int k = 0; k < n; k++) { + double target = (k * perimeter) / n; + + while (segStartDist + segLen < target) { + segStartDist += segLen; + seg++; + Coordinate a = unique[seg % unique.length]; + Coordinate b = unique[(seg + 1) % unique.length]; + segLen = a.distance(b); + if (segLen == 0.0) { + // skip zero-length segments + continue; + } + } + + Coordinate a = unique[seg % unique.length]; + Coordinate b = unique[(seg + 1) % unique.length]; + double u = (segLen == 0.0) ? 
0.0 : (target - segStartDist) / segLen; + out[k] = lerp(a, b, u); + } + + return out; + } + + private static Coordinate[] rotateToBestMatch(Coordinate[] ref, Coordinate[] candidate) { + if (ref.length != candidate.length) { + throw new IllegalArgumentException("Length mismatch for rotation alignment"); + } + int n = ref.length; + + int bestShift = 0; + double best = Double.POSITIVE_INFINITY; + + for (int shift = 0; shift < n; shift++) { + double sse = 0.0; + for (int i = 0; i < n; i++) { + Coordinate a = ref[i]; + Coordinate b = candidate[(i + shift) % n]; + double dx = a.x - b.x; + double dy = a.y - b.y; + sse += dx * dx + dy * dy; + } + if (sse < best) { + best = sse; + bestShift = shift; + } + } + + if (bestShift == 0) { + return candidate; + } + + Coordinate[] out = new Coordinate[n]; + for (int i = 0; i < n; i++) { + out[i] = candidate[(i + bestShift) % n]; + } + return out; + } + + private static Coordinate lerp(Coordinate a, Coordinate b, double t) { + return new Coordinate(a.x + t * (b.x - a.x), a.y + t * (b.y - a.y)); + } + + private static Coordinate[] removeConsecutiveDuplicates(Coordinate[] pts) { + ArrayList out = new ArrayList<>(pts.length); + Coordinate prev = null; + for (Coordinate c : pts) { + if (prev == null || !c.equals2D(prev)) { + out.add(c); + prev = c; + } + } + // also avoid last==first in unique list + if (out.size() >= 2 && out.get(0).equals2D(out.get(out.size() - 1))) { + out.remove(out.size() - 1); + } + return out.toArray(new Coordinate[0]); + } + + private static Coordinate[] reverseCoords(Coordinate[] c) { + Coordinate[] r = new Coordinate[c.length]; + for (int i = 0; i < c.length; i++) { + r[i] = c[c.length - 1 - i]; + } + return r; + } + + private static double clamp01(double t) { + if (t <= 0.0) { + return 0.0; + } + if (t >= 1.0) { + return 1.0; + } + return t; + } + + /** + * tNodes in [0,1], inclusive endpoints: size m. + */ + private static double[] linspace01(int m) { + if (m == 1) { + return new double[] { 0.0 }; + } + double[] x = new double[m]; + for (int i = 0; i < m; i++) { + x[i] = i / (double) (m - 1); + } + return x; + } + + /** + * y nodes in [0,1), excludes 1.0 to avoid duplicating ring closure point. + */ + private static double[] linspace01Open(int n) { + double[] x = new double[n]; + for (int i = 0; i < n; i++) { + x[i] = i / (double) n; // last is (n-1)/n < 1 + } + return x; + } + + private static int maxUniqueVertexCount(LinearRing... 
rings) { + int max = 3; + if (rings == null || rings.length == 0) { + return max; + } + for (LinearRing r : rings) { + if (r == null) { + continue; + } + int c = uniqueVertexCount(r); + if (c > max) { + max = c; + } + } + return max; + } + + private static int uniqueVertexCount(LinearRing ring) { + Coordinate[] c = ring.getCoordinates(); + if (c.length == 0) { + return 0; + } + + // drop closure if present + int len = c.length; + if (len >= 2 && c[0].equals2D(c[len - 1])) { + len--; + } + + // remove consecutive duplicates (same logic as resampling pre-clean) + int count = 0; + Coordinate prev = null; + for (int i = 0; i < len; i++) { + Coordinate cur = c[i]; + if (prev == null || !cur.equals2D(prev)) { + count++; + prev = cur; + } + } + + // if last equals first after dedupe, drop it + if (count >= 2) { + // cheap check using original endpoints; good enough for sizing N + if (c[0].equals2D(c[len - 1])) { + count--; + } + } + + // ensure minimum sensible ring size + return Math.max(count, 3); + } +} \ No newline at end of file diff --git a/src/main/java/micycle/pgs/commons/SchneiderBezierFitter.java b/src/main/java/micycle/pgs/commons/SchneiderBezierFitter.java new file mode 100644 index 00000000..f3538bc6 --- /dev/null +++ b/src/main/java/micycle/pgs/commons/SchneiderBezierFitter.java @@ -0,0 +1,440 @@ +package micycle.pgs.commons; + +import java.util.ArrayList; +import java.util.List; +import java.util.Objects; + +import org.locationtech.jts.geom.Coordinate; +import org.locationtech.jts.geom.GeometryFactory; +import org.locationtech.jts.geom.LineString; +import org.locationtech.jts.math.Vector2D; + +import com.github.micycle1.betterbeziers.CubicBezier; + +/** + * Piecewise cubic Bezier curve fitting + resampling for JTS geometries. + * + *

      + * Implements Philip J. Schneider’s “An Algorithm for Automatically Fitting + * Digitized Curves” (Graphics Gems, 1990): given a polyline (digitized + * vertices) and an error tolerance, it fits one or more cubic Bezier + * segments that approximate the input, then returns a JTS + * {@link org.locationtech.jts.geom.LineString LineString} by sampling those + * Beziers at a fixed arc-length spacing. + */ + +public final class SchneiderBezierFitter { + + private static final int MAX_FIT_ITERS = 4; + + private SchneiderBezierFitter() { + } + + /** + * Fits a piecewise cubic Bezier approximation to a JTS {@link LineString} and + * returns a new {@link LineString} created by sampling the fitted curve at a + * fixed arc-length spacing. + * + *

      + * Uses the input line's {@link GeometryFactory}. If the input is closed, the + * output will be closed as well. + * + * @param line input polyline/ring (must have at least 2 + * vertices) + * @param error maximum allowed distance error used by Schneider's + * fitter + * @param interSampleDistance target distance between successive output vertices + * along the fitted curve (must be > 0) + * @return fitted-and-sampled LineString (closed if input was closed) + */ + public static LineString fitAndSample(LineString line, double error, double interSampleDistance) { + Objects.requireNonNull(line, "line"); + return fitAndSample(line, error, interSampleDistance, line.getFactory()); + } + + /** + * Fits a piecewise cubic Bezier approximation to a JTS {@link LineString} and + * returns a new {@link LineString} created by sampling the fitted curve at a + * fixed arc-length spacing. + * + *

      + * If the input is closed, the output will be closed as well. + * + * @param line input polyline/ring (must have at least 2 + * vertices) + * @param error maximum allowed distance error used by Schneider's + * fitter + * @param interSampleDistance target distance between successive output vertices + * along the fitted curve (must be > 0) + * @param gf geometry factory used to build the output + * LineString + * @return fitted-and-sampled LineString (closed if input was closed) + */ + public static LineString fitAndSample(LineString line, double error, double interSampleDistance, GeometryFactory gf) { + Objects.requireNonNull(line, "line"); + Objects.requireNonNull(gf, "gf"); + Coordinate[] coords = line.getCoordinates(); + if (coords.length < 2) { + throw new IllegalArgumentException("LineString must have at least 2 coordinates"); + } + + boolean closed = line.isClosed(); + + // If closed, drop the duplicate last vertex before fitting; we will re-close + // after sampling. + List pts = new ArrayList<>(coords.length); + for (Coordinate coord : coords) { + pts.add(coord); + } + if (closed && pts.size() > 1 && pts.get(0).equals2D(pts.get(pts.size() - 1))) { + pts.remove(pts.size() - 1); + } + + LineString smoothed = fitAndSample(pts, error, interSampleDistance, gf); + + if (closed) { + return ensureClosed(smoothed, gf); + } + return smoothed; + } + + /** + * Fits a piecewise cubic Bezier approximation to an ordered list of coordinates + * and returns a new {@link LineString} created by sampling the fitted curve at + * a fixed arc-length spacing. + * + *
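A usage sketch for the public `fitAndSample` entry points shown above (hypothetical caller code, not part of the patch):

```java
import org.locationtech.jts.geom.Coordinate;
import org.locationtech.jts.geom.GeometryFactory;
import org.locationtech.jts.geom.LineString;

import micycle.pgs.commons.SchneiderBezierFitter;

class BezierFitDemo {
    public static void main(String[] args) {
        GeometryFactory gf = new GeometryFactory();
        LineString jagged = gf.createLineString(new Coordinate[] {
                new Coordinate(0, 0), new Coordinate(10, 8), new Coordinate(20, 2),
                new Coordinate(30, 9), new Coordinate(40, 0) });
        // Fit cubic Beziers within a max error of 1.0, then resample every 0.5 units.
        LineString smooth = SchneiderBezierFitter.fitAndSample(jagged, 1.0, 0.5);
        System.out.println(smooth.getNumPoints() + " vertices after fitting and resampling");
    }
}
```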

      + * This overload does not infer “closed-ness” from the input list; if you need + * ring-closure preservation, prefer the + * {@link #fitAndSample(LineString, double, double)} overload or ensure closure + * yourself after sampling. + * + * @param points input vertices (must contain at least 2 points) + * @param error maximum allowed distance error used by Schneider's + * fitter + * @param interSampleDistance target distance between successive output vertices + * along the fitted curve (must be > 0) + * @param gf geometry factory used to build the output + * LineString + * @return fitted-and-sampled LineString + */ + private static LineString fitAndSample(List points, double error, double interSampleDistance, GeometryFactory gf) { + Objects.requireNonNull(gf, "gf"); + if (points == null || points.size() < 2) { + throw new IllegalArgumentException("The number of points must be greater than 1"); + } + if (!(interSampleDistance > 0)) { + throw new IllegalArgumentException("interSampleDistance must be > 0"); + } + + List pts = new ArrayList<>(points.size()); + for (Coordinate c : points) { + if (c == null) { + throw new IllegalArgumentException("points contains null Coordinate"); + } + pts.add(new Vector2D(c.x, c.y)); + } + + Vector2D leftTangent = computeLeftTangent(pts); + Vector2D rightTangent = computeRightTangent(pts); + + MultiBezierCurve fitted = fitCurves(0, pts.size() - 1, error, new MultiBezierCurve(), leftTangent, rightTangent, pts); + + List out = new ArrayList<>(); + boolean firstSeg = true; + + for (BezierSeg seg : fitted.segments) { + CubicBezier cb = seg.toBetterBezier(); + double[][] samples = cb.sampleEquidistantPoints(interSampleDistance); + + for (int i = 0; i < samples.length; i++) { + if (!firstSeg && i == 0) { + continue; // avoid duplicate join vertex + } + out.add(new Coordinate(samples[i][0], samples[i][1])); + } + firstSeg = false; + } + + if (out.isEmpty()) { + Coordinate a = points.get(0); + Coordinate b = points.get(points.size() - 1); + return gf.createLineString(new Coordinate[] { new Coordinate(a), new Coordinate(b) }); + } + + return gf.createLineString(out.toArray(new Coordinate[0])); + } + + private static MultiBezierCurve fitCurves(int first, int last, double error, MultiBezierCurve curve, Vector2D leftTangent, Vector2D rightTangent, + List points) { + if (last - first + 1 == 2) { + double distance = dist(points.get(first), points.get(last)) / 3.0; + BezierSeg bez = new BezierSeg(); + bez.v1 = points.get(first); + bez.v4 = points.get(last); + bez.v2 = bez.v1.add(leftTangent.multiply(distance)); + bez.v3 = bez.v4.add(rightTangent.multiply(distance)); + curve.segments.add(bez); + return curve; + } + + double[] u = chordLengthParameterize(points, first, last); + BezierSeg bezier = generateBezier(points, first, last, u, leftTangent, rightTangent); + + int[] splitIndex = new int[] { 0 }; + double maxError = computeMaxError(points, first, last, bezier, u, splitIndex); + + if (maxError < error) { + curve.segments.add(bezier); + return curve; + } + + double iterationError = error * 4.0; + if (maxError < iterationError) { + for (int i = 0; i < MAX_FIT_ITERS; i++) { + double[] uPrime = reparameterize(points, first, last, u, bezier); + bezier = generateBezier(points, first, last, uPrime, leftTangent, rightTangent); + maxError = computeMaxError(points, first, last, bezier, uPrime, splitIndex); + if (maxError < error) { + curve.segments.add(bezier); + return curve; + } + u = uPrime; + } + } + + Vector2D centerTangent = computeCenterTangent(points, splitIndex[0]); + 
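 +		// Tolerance not met even after reparameterisation: split at the worst-fitting
 +		// point and fit each half recursively, sharing a centre tangent (negated for the
 +		// right half) so the two halves join smoothly.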
fitCurves(first, splitIndex[0], error, curve, leftTangent, centerTangent, points); + fitCurves(splitIndex[0], last, error, curve, centerTangent.negate(), rightTangent, points); + return curve; + } + + private static BezierSeg generateBezier(List points, int first, int last, double[] u, Vector2D leftVector, Vector2D rightVector) { + int size = last - first + 1; + + Vector2D[][] A = new Vector2D[size][2]; + for (int i = 0; i < size; i++) { + A[i][0] = leftVector.multiply(B1(u[i])); + A[i][1] = rightVector.multiply(B2(u[i])); + } + + Vector2D firstPoint = points.get(first); + Vector2D lastPoint = points.get(last); + + double[][] C = new double[][] { { 0, 0 }, { 0, 0 } }; + double[] X = new double[] { 0, 0 }; + + for (int i = 0; i < size; i++) { + C[0][0] += A[i][0].dot(A[i][0]); + C[0][1] += A[i][0].dot(A[i][1]); + C[1][0] = C[0][1]; + C[1][1] += A[i][1].dot(A[i][1]); + + Vector2D tmp = points.get(first + i).subtract( + firstPoint.multiply(B0(u[i])).add(firstPoint.multiply(B1(u[i]))).add(lastPoint.multiply(B2(u[i]))).add(lastPoint.multiply(B3(u[i])))); + + X[0] += A[i][0].dot(tmp); + X[1] += A[i][1].dot(tmp); + } + + double detC0C1 = C[0][0] * C[1][1] - C[1][0] * C[0][1]; + double detC0X = C[0][0] * X[1] - C[1][0] * X[0]; + double detXC1 = X[0] * C[1][1] - X[1] * C[0][1]; + + double alphaL, alphaR; + if (detC0C1 == 0.0) { + alphaL = 0.0; + alphaR = 0.0; + } else { + alphaL = detXC1 / detC0C1; + alphaR = detC0X / detC0C1; + } + + double segLength = dist(firstPoint, lastPoint); + double epsilon = 1.0e-6 * segLength; + + BezierSeg bez = new BezierSeg(); + bez.v1 = firstPoint; + bez.v4 = lastPoint; + + if (alphaL < epsilon || alphaR < epsilon) { + double d = segLength / 3.0; + bez.v2 = bez.v1.add(leftVector.multiply(d)); + bez.v3 = bez.v4.add(rightVector.multiply(d)); + } else { + bez.v2 = bez.v1.add(leftVector.multiply(alphaL)); + bez.v3 = bez.v4.add(rightVector.multiply(alphaR)); + } + + return bez; + } + + private static double computeMaxError(List points, int first, int last, BezierSeg bezier, double[] u, int[] splitIndex) { + splitIndex[0] = (last - first + 1) / 2; + double maxDist2 = -Double.MAX_VALUE; + + CubicBezier cb = bezier.toBetterBezier(); + + for (int i = first + 1; i < last; i++) { + double t = u[i - first]; + double[] pt = cb.getPointAtParameter(t); + + double dx = pt[0] - points.get(i).getX(); + double dy = pt[1] - points.get(i).getY(); + double dist2 = dx * dx + dy * dy; + + if (dist2 >= maxDist2) { + maxDist2 = dist2; + splitIndex[0] = i; + } + } + return Math.sqrt(maxDist2); + } + + private static double[] reparameterize(List points, int first, int last, double[] u, BezierSeg bezier) { + double[] out = new double[last - first + 1]; + CubicBezier cb = bezier.toBetterBezier(); + for (int i = first; i <= last; i++) { + out[i - first] = newtonRaphsonRootFind(cb, points.get(i), u[i - first]); + } + return out; + } + + /** + * Newton-Raphson root refinement. Since your + * CubicBezier#getGradientAtParameter(u) returns a single double (not a + * derivative vector), Q'(u) and Q''(u) are computed by finite differences of + * getPointAtParameter(u). 
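 + * With step {@code h}, the central differences used are
 + * {@code Q'(u) ≈ (Q(u+h) - Q(u-h)) / (2h)} and
 + * {@code Q''(u) ≈ (Q(u+h) - 2Q(u) + Q(u-h)) / h^2}.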
+ */ + private static double newtonRaphsonRootFind(CubicBezier curve, Vector2D p, double u) { + final double h0 = 1e-4; + + double um = Math.max(0.0, u - h0); + double up = Math.min(1.0, u + h0); + if (up == um) { + return u; + } + + double h = (up - um) / 2.0; + + double[] qm = curve.getPointAtParameter(um); + double[] q = curve.getPointAtParameter(u); + double[] qp = curve.getPointAtParameter(up); + + double q1x = (qp[0] - qm[0]) / (2.0 * h); + double q1y = (qp[1] - qm[1]) / (2.0 * h); + + double h2 = h * h; + double q2x = (qp[0] - 2.0 * q[0] + qm[0]) / h2; + double q2y = (qp[1] - 2.0 * q[1] + qm[1]) / h2; + + double dx = q[0] - p.getX(); + double dy = q[1] - p.getY(); + + double numerator = dx * q1x + dy * q1y; + double denominator = (q1x * q1x + q1y * q1y) + (dx * q2x + dy * q2y); + + if (denominator == 0.0) { + return u; + } + + double uPrime = u - (numerator / denominator); + return Math.max(0.0, Math.min(1.0, uPrime)); + } + + private static double[] chordLengthParameterize(List points, int first, int last) { + int n = last - first + 1; + double[] u = new double[n]; + + for (int i = 1; i < n; i++) { + u[i] = u[i - 1] + dist(points.get(first + i), points.get(first + i - 1)); + } + + double total = u[n - 1]; + if (total == 0.0) { + for (int i = 1; i < n; i++) { + u[i] = i / (double) (n - 1); + } + return u; + } + + for (int i = 1; i < n; i++) { + u[i] /= total; + } + return u; + } + + private static Vector2D computeLeftTangent(List points) { + return safeNormalize(points.get(1).subtract(points.get(0))); + } + + private static Vector2D computeRightTangent(List points) { + int n = points.size(); + return safeNormalize(points.get(n - 2).subtract(points.get(n - 1))); + } + + private static Vector2D computeCenterTangent(List points, int centerIndex) { + Vector2D v1 = points.get(centerIndex - 1).subtract(points.get(centerIndex)); + Vector2D v2 = points.get(centerIndex).subtract(points.get(centerIndex + 1)); + return safeNormalize(v1.add(v2).multiply(0.5)); + } + + private static Vector2D safeNormalize(Vector2D v) { + double len = v.length(); + return len == 0.0 ? 
new Vector2D(0, 0) : v.multiply(1.0 / len); + } + + private static double dist(Vector2D a, Vector2D b) { + return a.distance(b); + } + + private static LineString ensureClosed(LineString ls, GeometryFactory gf) { + Coordinate[] c = ls.getCoordinates(); + if (c.length == 0) { + return ls; + } + if (c.length == 1) { + return gf.createLineString(new Coordinate[] { new Coordinate(c[0]), new Coordinate(c[0]) }); + } + + if (c[0].equals2D(c[c.length - 1])) { + return ls; + } + + Coordinate[] closed = new Coordinate[c.length + 1]; + System.arraycopy(c, 0, closed, 0, c.length); + closed[closed.length - 1] = new Coordinate(c[0]); + return gf.createLineString(closed); + } + + private static double B0(double t) { + double tmp = 1 - t; + return tmp * tmp * tmp; + } + + private static double B1(double t) { + double tmp = 1 - t; + return 3 * t * tmp * tmp; + } + + private static double B2(double t) { + double tmp = 1 - t; + return 3 * t * t * tmp; + } + + private static double B3(double t) { + return t * t * t; + } + + private static final class MultiBezierCurve { + final List segments = new ArrayList<>(); + } + + private static final class BezierSeg { + Vector2D v1, v2, v3, v4; + + CubicBezier toBetterBezier() { + return new CubicBezier(v1.getX(), v1.getY(), v2.getX(), v2.getY(), v3.getX(), v3.getY(), v4.getX(), v4.getY()); + } + } +} \ No newline at end of file diff --git a/src/main/java/micycle/pgs/commons/ShapeInterpolation.java b/src/main/java/micycle/pgs/commons/ShapeInterpolation.java deleted file mode 100644 index a57a8e7f..00000000 --- a/src/main/java/micycle/pgs/commons/ShapeInterpolation.java +++ /dev/null @@ -1,154 +0,0 @@ -package micycle.pgs.commons; - -import java.util.Collections; -import java.util.SplittableRandom; - -import org.locationtech.jts.algorithm.Orientation; -import org.locationtech.jts.geom.Coordinate; -import org.locationtech.jts.geom.CoordinateList; -import org.locationtech.jts.geom.Geometry; -import org.locationtech.jts.geom.LinearRing; -import org.locationtech.jts.geom.Polygon; - -/** - * Best-guess interpolation between any two linear rings. - * - * @author Michael Carleton - * - */ -public class ShapeInterpolation { - - /*- - * TODO: - * Cut shape with holes (hairline gap) to ensure each geometry has genus=1. - * Cuts are discussed in 'Polygon Vertex Set Matching Algorithm for Shapefile Tweening' - * Break-up single polygon to interpolate with a multipolygon. - * See https://github.com/veltman/openvis/blob/master/README.md - * See 'Guaranteed intersection-free polygon morphing' - * See CGAl Shape Deformation: https://doc.cgal.org/latest/Barycentric_coordinates_2/index.html#title10 - * https://homepages.inf.ed.ac.uk/tkomura/cav/presentation14_2018.pdf - * RAP C++ : https://github.com/catherinetaylor2/Shape_Interpolation/blob/master/rigid_interp.cpp - * and https://github.com/deliagander/ARAPShapeInterpolation - */ - - private final CoordinateList from, to; - - public ShapeInterpolation(Geometry from, Geometry to) { - this(((Polygon) from).getExteriorRing(), ((Polygon) to).getExteriorRing()); - } - - public ShapeInterpolation(LinearRing from, LinearRing to) { - if (!Orientation.isCCW(from.getCoordinates())) { - from = from.reverse(); - } - if (!Orientation.isCCW(to.getCoordinates())) { - to = to.reverse(); - } - - // find the "smaller" ring (as measured by number of vertices) - // NOTE use Kabsch algorithm? 
- CoordinateList smaller, bigger; - boolean smallerIsTo = false; - if (from.getNumPoints() > to.getNumPoints()) { - bigger = new CoordinateList(from.getCoordinates(), false); - smaller = new CoordinateList(to.getCoordinates(), false); - smallerIsTo = true; - } else { - bigger = new CoordinateList(to.getCoordinates(), false); - smaller = new CoordinateList(from.getCoordinates(), false); - } - - bigger.closeRing(); // ensure closed - smaller.closeRing(); // ensure closed - smaller.remove(smaller.size() - 1); // unclose (to be closed later, after array rotation) - bigger.remove(bigger.size() - 1); // unclose (to be closed later, after array rotation) - - // densify smaller list - final int diff = bigger.size() - smaller.size(); - SplittableRandom r = new SplittableRandom(1337); - for (int i = 0; i < diff; i++) { - int index = r.nextInt(smaller.size() - 1); - Coordinate a = smaller.get(index); - Coordinate c = smaller.get(index + 1); - Coordinate b = new Coordinate((a.x + c.x) / 2, (a.y + c.y) / 2); // midpoint - smaller.add(index + 1, b); // insert b between a and c (shift c onwards right) - } - - /* - * The densified shape is rotated until the squared distance between point pairs - * of the two shapes is minimised. - */ - int bestOffset = findBestRotation(smaller, bigger); - - if (bestOffset != 0) { - Collections.rotate(smaller, -bestOffset); - } - - smaller.closeRing(); - bigger.closeRing(); - if (smallerIsTo) { - this.to = smaller; - this.from = bigger; - } else { - this.from = smaller; - this.to = bigger; - } - } - - public Coordinate[] tween(double t) { - if (t == 0) { - return from.toCoordinateArray(); - } else if (t == 1) { - return to.toCoordinateArray(); - } - t %= 1; - CoordinateList morph = new CoordinateList(); - for (int i = 0; i < from.size(); i++) { - morph.add(lerp(from.get(i), to.get(i), t), true); - } - return morph.toCoordinateArray(); - } - - /** - * @return a rotation offset for list a that minimises the squared - * distance between all point pairs - */ - private static int findBestRotation(CoordinateList a, CoordinateList b) { - /* - * Ternary search optimisation. Will converge to the global best rotation if the - * rotation-distance "function" is unimodal. Not sure if it is, but seems so in - * practice. 
- */ - final int n = a.size(); - int low = 0; - int high = n - 1; - - while (low < high) { - int mid1 = low + (high - low) / 3; - int mid2 = high - (high - low) / 3; - double dist1 = calculateSumOfSquares(a, b, mid1, n); - double dist2 = calculateSumOfSquares(a, b, mid2, n); - - if (dist1 < dist2) { - high = mid2 - 1; - } else { - low = mid1 + 1; - } - } - - return low; - } - - private static double calculateSumOfSquares(CoordinateList a, CoordinateList b, int offset, int n) { - double sumOfSquares = 0; - for (int i = 0; i < n; i++) { - sumOfSquares += a.get((offset + i) % n).distanceSq(b.get(i)); - } - return sumOfSquares; - } - - private static Coordinate lerp(Coordinate from, Coordinate to, double t) { - return new Coordinate(from.x + (to.x - from.x) * t, from.y + (to.y - from.y) * t); - } - -} diff --git a/src/main/java/micycle/pgs/commons/ShapeRandomPointSampler.java b/src/main/java/micycle/pgs/commons/ShapeRandomPointSampler.java index b49f7f5a..3ccc313e 100644 --- a/src/main/java/micycle/pgs/commons/ShapeRandomPointSampler.java +++ b/src/main/java/micycle/pgs/commons/ShapeRandomPointSampler.java @@ -13,6 +13,7 @@ import org.tinfour.common.Vertex; import org.tinfour.utils.TriangleCollector; +import micycle.pgs.PGS_Processing; import micycle.pgs.PGS_Triangulation; import processing.core.PShape; import processing.core.PVector; @@ -31,20 +32,22 @@ public final class ShapeRandomPointSampler { public ShapeRandomPointSampler(final PShape shape, final long seed) { reseed(seed); - // Build constrained Delaunay TIN - final IIncrementalTin tin = PGS_Triangulation.delaunayTriangulationMesh(shape); + // normalise required for identical runs (on shapes with holes having different + // structure) + final IIncrementalTin tin = PGS_Triangulation.delaunayTriangulationMesh(PGS_Processing.normalise(shape)); final boolean constrained = !tin.getConstraints().isEmpty(); // Collect valid triangles and their areas final List tris = new ArrayList<>(); final List areas = new ArrayList<>(); - final double eps = 1e-15; + final double eps = 1e-12; TriangleCollector.visitSimpleTriangles(tin, (SimpleTriangle tri) -> { final IConstraint region = tri.getContainingRegion(); final boolean inside = !constrained || (region != null && region.definesConstrainedRegion()); - if (!inside) + if (!inside) { return; + } final Vertex A = tri.getVertexA(); final Vertex B = tri.getVertexB(); diff --git a/src/main/java/micycle/pgs/commons/SoftCells.java b/src/main/java/micycle/pgs/commons/SoftCells.java new file mode 100644 index 00000000..98948311 --- /dev/null +++ b/src/main/java/micycle/pgs/commons/SoftCells.java @@ -0,0 +1,537 @@ +package micycle.pgs.commons; + +import java.util.ArrayList; +import java.util.Comparator; +import java.util.HashMap; +import java.util.LinkedHashSet; +import java.util.List; +import java.util.Map; +import java.util.Random; +import java.util.Set; + +import processing.core.PConstants; +import processing.core.PShape; +import processing.core.PVector; + +/** + * Softer tesselations. + * + * A Java implementation of the algorithm described in the paper "A Generative + * Approach to Smooth Tessellations" (PNAS Nexus, 2024). + * + *

      + * This class generates and renders smooth, curved tessellations using Bezier + * curves to soften the edges of a base mesh. The algorithm supports various + * tangent modes to control the direction and curvature of the edges, enabling a + * wide range of artistic and geometric effects. + *

      + * + *

      + * The algorithm is highly customizable, supporting multiple tangent modes and + * input meshes. It can be used for generative art, architectural design, and + * scientific visualization. + *
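 + * A minimal usage sketch (the seed, tangent mode and bend ratio are arbitrary
 + * example values; the mesh is any conforming face mesh supplied by the caller):
 + * <pre>{@code
 + * PShape mesh = ...; // conforming polygonal mesh, one child PShape per face
 + * SoftCells soft = new SoftCells(1337L);
 + * PShape curved = soft.generate(mesh, TangentMode.ADAPTIVE, 1f);
 + * }</pre>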

      + * + * @see Original + * Paper + * @author Michael Carleton + * @author CLAUDIO ESPERANÇA + */ +public class SoftCells { + + // https://openprocessing.org/sketch/2419830 + + /** + * Tangents are computed for each edge based on the selected tangent mode. These + * tangents control the curvature of the Bezier curves used to render the edges. + * + * Each tangent is scaled to half the length of the shortest edge incident to + * the source vertex. Unless otherwise noted, the direction is flipped per edge + * so the tangent generally points in the same hemisphere as the edge. + */ + public enum TangentMode { + /** Fixed +X direction for all edges of the vertex; no per-edge flip. */ + HORIZONTAL, + /** Fixed +Y direction for all edges of the vertex; no per-edge flip. */ + VERTICAL, + /** + * 45° diagonal with slope +1 (vector (1,1)); flipped per edge to align via dot + * sign. + */ + DIAGONAL, + /** + * 45° diagonal with slope −1 (vector (1,−1)); flipped per edge to align via dot + * sign. + */ + DIAGONAL2, + /** + * Randomly picks one of the two 45° diagonals per vertex; flipped per edge to + * align. + */ + RANDOM_DIAGONAL, + /** + * Picks one of three directions 60° apart per vertex; flipped per edge to + * align. + */ + RANDOM_60DEG, + /** + * Uses the sum/average of incident edge directions; flipped per edge to align. + */ + ADAPTIVE, + /** Uniformly random direction per vertex; flipped per edge to align. */ + RANDOM, + /** + * Horizontal base; sign determined by row parity and the edge’s vertical sign. + * Edges that are nearly horizontal (abs(u.y) < 1e-3) align to their own + * direction. + */ + EVEN_ODD, + /** + * Alternates between the two 45° diagonals by (col+row) parity; flipped per + * edge to align. + */ + ALT_DIAGONAL, + /** + * Cycles through three 60° directions by (col+row)%3; flipped per edge to + * align. + */ + ALT_60DEG + } + + private List points = new ArrayList<>(); + private List faceMap = new ArrayList<>(); + private Map edgeTangentMap = new HashMap<>(); + private List> vertexEdgeMap = new ArrayList<>(); + + private float avgSize = 1f; + private float minX = 0, minY = 0, maxX = 0, maxY = 0; + + // deterministic RNG + private Random rng = new Random(0L); + + public SoftCells() { + this(0L); + } + + public SoftCells(long seed) { + setSeed(seed); + } + + /** + * Convenience method: load mesh, compute tangents, build shape in one call. + */ + public PShape generate(PShape mesh, TangentMode mode, float ratio) { + loadMeshFromPShape(mesh); + computeTangents(mode); + return buildSoftFacesShape(ratio); + } + + public void setSeed(long seed) { + this.rng = new Random(seed); + } + + /** + * Loads mesh data from a PShape and populates face/vertex data structures. + * Assumes the input mesh is conforming, meaning vertices are shared exactly + * between faces. 
+ * + * @param mesh PShape containing the mesh (each child is a face polygon) + */ + public void loadMeshFromPShape(final PShape mesh) { + // 1) reset + resetDataStructures(); + + // 2) collect unique vertices by value + final Set unique = new LinkedHashSet<>(); + for (int f = 0; f < mesh.getChildCount(); f++) { + final PShape face = mesh.getChild(f); + for (int v = 0; v < face.getVertexCount(); v++) { + unique.add(face.getVertex(v)); + } + } + + // 3) build and sort points (x, then y, then z) + points = new ArrayList<>(unique); + points.sort(Comparator.comparing(p -> p.x).thenComparing(p -> p.y)); + + // 4) init adjacency and index map (value-based) + vertexEdgeMap = new ArrayList<>(points.size()); + for (int i = 0; i < points.size(); i++) { + vertexEdgeMap.add(new ArrayList<>()); + } + final Map index = new HashMap<>(points.size() * 2); + for (int i = 0; i < points.size(); i++) { + index.put(points.get(i), i); + } + + // 5) faces + adjacency using sorted indices + for (int f = 0; f < mesh.getChildCount(); f++) { + final PShape face = mesh.getChild(f); + final int n = face.getVertexCount(); + final int[] faceVerts = new int[n]; + + for (int v = 0; v < n; v++) { + final PVector p = face.getVertex(v); + final Integer idx = index.get(p); + if (idx == null) { + throw new IllegalStateException("Vertex not in index map: " + p); + } + faceVerts[v] = idx; + } + faceMap.add(faceVerts); + + for (int i = 0; i < n; i++) { + final int a = faceVerts[i]; + final int b = faceVerts[(i + 1) % n]; + addUnique(vertexEdgeMap.get(a), b); + addUnique(vertexEdgeMap.get(b), a); + } + } + + // 6) finalise + sortVertexNeighborsByAngle(); + computeBoundsAndAvgSize(); + } + + private void resetDataStructures() { + points = new ArrayList<>(); + faceMap = new ArrayList<>(); + vertexEdgeMap = new ArrayList<>(); + edgeTangentMap = new HashMap<>(); + avgSize = 1f; + minX = minY = maxX = maxY = 0; + } + + private void addUnique(final List list, final int value) { + if (!list.contains(value)) { + list.add(value); + } + } + + private void sortVertexNeighborsByAngle() { + for (int i = 0; i < points.size(); i++) { + final PVector center = points.get(i); + final List neighbors = vertexEdgeMap.get(i); + + neighbors.sort((a, b) -> { + final PVector va = PVector.sub(points.get(a), center); + final PVector vb = PVector.sub(points.get(b), center); + return Float.compare(va.heading(), vb.heading()); + }); + } + } + + private void computeBoundsAndAvgSize() { + if (points.isEmpty()) { + avgSize = 1f; + return; + } + + minX = maxX = points.get(0).x; + minY = maxY = points.get(0).y; + + for (PVector p : points) { + if (p.x < minX) { + minX = p.x; + } + if (p.y < minY) { + minY = p.y; + } + if (p.x > maxX) { + maxX = p.x; + } + if (p.y > maxY) { + maxY = p.y; + } + } + + // average unique edge length + double sum = 0.0; + int cnt = 0; + for (int i = 0; i < points.size(); i++) { + final PVector pi = points.get(i); + for (int j : vertexEdgeMap.get(i)) { + if (j > i) { // unique edge (i,j) with i 0) ? 
(float) (sum / cnt) : Math.max(1f, (maxX - minX + maxY - minY) * 0.01f); + if (avgSize <= 0) { + avgSize = 1f; + } + } + + public void computeTangents(final TangentMode mode) { + edgeTangentMap = new HashMap<>(); + for (int i = 0; i < points.size(); i++) { + final TangentEstimatorFunc tangentFunc = getTangentEstimator(mode, i); + final List neighbors = vertexEdgeMap.get(i); + if (neighbors != null) { + for (int k = 0; k < neighbors.size(); k++) { + final int j = neighbors.get(k); + final PVector tangent = tangentFunc.estimateTangent(k, i, j); + edgeTangentMap.put(getKey(i, j), tangent); + } + } + } + } + + private int getKey(final int src, final int dst) { + return points.size() * src + dst; + } + + /** + * Creates a tangent estimator function for a given vertex based on the + * specified tangent mode. + * + *

      + * This method is a core part of the algorithm for generating smooth, curved + * tessellations. It determines the direction and magnitude of tangents for each + * edge connected to a vertex, which are later used to compute Bezier curves for + * rendering the tessellation. The tangent estimator function returned by this + * method is specific to a single vertex and is applied to all its neighboring + * edges. + *

      + * + *

      + * The tangent mode defines how tangents are calculated: + *

        + *
      • Fixed Directions: Modes like {@link TangentMode#HORIZONTAL} and + * {@link TangentMode#DIAGONAL} use predefined directions (e.g., horizontal, + * vertical, or diagonal) to compute tangents.
      • Adaptive Directions: Modes like {@link TangentMode#ADAPTIVE} + * calculate tangents based on the average direction of neighboring edges, + * ensuring smooth transitions.
      • Randomized Directions: Modes like {@link TangentMode#RANDOM} and + * {@link TangentMode#RANDOM_60DEG} generate random directions for tangents, + * either uniformly or constrained to specific angles (e.g., 60°).
      + *

      + * + *

      + * The tangent estimator function returned by this method takes the index of a + * neighboring edge and computes a tangent vector for that edge. The tangent + * vector is scaled to half the length of the shortest neighboring edge (to + * ensure smooth curvature) and is oriented based on the edge's direction and + * the chosen tangent mode. + *

      + * + * @param mode The tangent mode to use for computing tangents. This determines + * how tangent directions are calculated. + * @param i The index of the vertex for which the tangent estimator is being + * created. + * @return A {@link TangentEstimatorFunc} that computes tangent vectors for + * edges connected to the vertex. + */ + private TangentEstimatorFunc getTangentEstimator(final TangentMode mode, final int i) { + final List neighbors = vertexEdgeMap.get(i); + final List neighborVectors = new ArrayList<>(); + final PVector srcPoint = points.get(i); + + // build the list of edge-vectors emanating from vertex i + for (final int nbr : neighbors) { + neighborVectors.add(PVector.sub(points.get(nbr), srcPoint)); + } + + // compute average length, minimum length and sum-direction + float avgLen = 0; + float minLen = Float.MAX_VALUE; + final PVector avgDir = new PVector(); + if (!neighborVectors.isEmpty()) { + for (final PVector v : neighborVectors) { + final float m = v.mag(); + avgLen += m; + minLen = Math.min(minLen, m); + avgDir.add(v); + } + avgLen /= neighborVectors.size(); + } else { + minLen = 0; + } + + final float halfMin = minLen * 0.5f; + // make any base directions we’ll need (already scaled to halfMin) + final PVector horiz = new PVector(1, 0).mult(halfMin); + final PVector vert = new PVector(0, 1).mult(halfMin); + final PVector diag1 = new PVector(1, 1).normalize().mult(halfMin); + final PVector diag2 = new PVector(1, -1).normalize().mult(halfMin); + + // PRE‐CHOOSING random direction once per vertex: + // note: this is never mutated later + final float TWO_PI = (float) (Math.PI * 2.0); + final PVector randDir = PVector.fromAngle(rng.nextFloat() * TWO_PI).mult(halfMin); + final PVector randDiagonal = (rng.nextFloat() < 0.5f ? diag1 : diag2); + + // adaptive means sum-direction + final PVector adaptiveDir = avgDir.copy().setMag(halfMin); + + switch (mode) { + case HORIZONTAL : + return (k, src, dst) -> { + // WORK ON A COPY: + return horiz.copy(); + }; + + case VERTICAL : + return (k, src, dst) -> { + return vert.copy(); + }; + + case DIAGONAL : + return (k, src, dst) -> { + final PVector dir = diag1.copy(); + if (neighborVectors.get(k).dot(dir) < 0) { + dir.mult(-1); + } + return dir; + }; + + case DIAGONAL2 : + return (k, src, dst) -> { + final PVector dir = diag2.copy(); + if (neighborVectors.get(k).dot(dir) < 0) { + dir.mult(-1); + } + return dir; + }; + + case RANDOM_DIAGONAL : + // randDiagonal was chosen once above, do NOT re‐roll per edge + return (k, src, dst) -> { + final PVector dir = randDiagonal.copy(); + if (neighborVectors.get(k).dot(dir) < 0) { + dir.mult(-1); + } + return dir; + }; + + case RANDOM_60DEG : + // pick one of three 60° directions once per vertex + final PVector[] tris = { new PVector(0, 1), new PVector(0, 1).rotate((float) Math.toRadians(60)), + new PVector(0, 1).rotate((float) Math.toRadians(-60)) }; + final PVector triDir = tris[rng.nextInt(tris.length)].normalize().mult(halfMin); + return (k, src, dst) -> { + final PVector dir = triDir.copy(); + if (neighborVectors.get(k).dot(dir) < 0) { + dir.mult(-1); + } + return dir; + }; + + case ADAPTIVE : + return (k, src, dst) -> { + final PVector dir = adaptiveDir.copy(); + if (neighborVectors.get(k).dot(dir) < 0) { + dir.mult(-1); + } + return dir; + }; + + case RANDOM : + return (k, src, dst) -> { + final PVector dir = randDir.copy(); + if (neighborVectors.get(k).dot(dir) < 0) { + dir.mult(-1); + } + return dir; + }; + + case EVEN_ODD : + return (k, src, dst) -> { + final int col = (int) 
((srcPoint.x - minX) / avgSize); + final int row = (int) ((srcPoint.y - minY) / avgSize); + final PVector base = horiz.copy(); + final PVector u = neighborVectors.get(k); + // if perfectly horizontal edge, just align + if (Math.abs(u.y) < 1e-3) { + if (u.dot(base) < 0) { + base.mult(-1); + } + return base; + } + if (row % 2 == 0) { + if (u.y < 0) { + base.mult(-1); + } + } else { + if (u.y > 0) { + base.mult(-1); + } + } + return base; + }; + + case ALT_DIAGONAL : + return (k, src, dst) -> { + final int col = (int) ((srcPoint.x - minX) / avgSize); + final int row = (int) ((srcPoint.y - minY) / avgSize); + final PVector pick = ((col + row) % 2 == 0 ? diag1 : diag2).copy(); + if (neighborVectors.get(k).dot(pick) < 0) { + pick.mult(-1); + } + return pick; + }; + + case ALT_60DEG : + return (k, src, dst) -> { + final int col = (int) ((srcPoint.x - minX) / avgSize); + final int row = (int) ((srcPoint.y - minY) / avgSize); + final PVector[] altTris = { new PVector(0, 1), new PVector(0, 1).rotate((float) Math.toRadians(60)), + new PVector(0, 1).rotate((float) Math.toRadians(-60)) }; + final PVector pick = altTris[(Math.abs(col + row)) % 3].normalize().mult(halfMin); + if (neighborVectors.get(k).dot(pick) < 0) { + pick.mult(-1); + } + return pick; + }; + + default : + return (k, src, dst) -> new PVector(0, 0); + } + } + + interface TangentEstimatorFunc { + PVector estimateTangent(int k, int srcIndex, int dstIndex); + } + + public PShape buildSoftFacesShape(final float ratio) { + final PShape grp = new PShape(PConstants.GROUP); + + for (final int[] vtx : faceMap) { + final PShape poly = new PShape(PShape.PATH); + poly.setStroke(0); + poly.setStroke(true); + poly.setStrokeWeight(2); + poly.setFill(255); + poly.setFill(true); + + poly.beginShape(); + int i = vtx[vtx.length - 1]; + PVector p1 = points.get(i); + poly.vertex(p1.x, p1.y); + + for (final int j : vtx) { + final PVector p2 = points.get(j); + final PVector t1 = edgeTangentMap.get(points.size() * i + j); + final PVector t2 = edgeTangentMap.get(points.size() * j + i); + + if (t1 != null && t2 != null) { + poly.bezierVertex(p1.x + t1.x * ratio, p1.y + t1.y * ratio, p2.x + t2.x * ratio, p2.y + t2.y * ratio, p2.x, p2.y); + } else { + poly.vertex(p2.x, p2.y); + } + i = j; + p1 = p2; + } + + poly.endShape(PConstants.CLOSE); + grp.addChild(poly); + } + + return grp; + } +} \ No newline at end of file diff --git a/src/main/java/micycle/pgs/commons/Uncrossing2Opt.java b/src/main/java/micycle/pgs/commons/Uncrossing2Opt.java new file mode 100644 index 00000000..f5f8c222 --- /dev/null +++ b/src/main/java/micycle/pgs/commons/Uncrossing2Opt.java @@ -0,0 +1,320 @@ +package micycle.pgs.commons; + +import java.util.List; + +import processing.core.PVector; + +/** + * Hyper-optimized 2-opt uncrossing (untangling) for self-crossing polygon + * tours. + * + *

      What It Does

      + *

      + * Given a closed polygon tour (cyclic sequence of vertices), this routine + * removes self-intersections to produce a simple (non-self-intersecting) + * polygon by repeatedly applying a 2-opt move: when two non-adjacent edges + * cross, it reverses the vertex subsequence between them to eliminate that + * crossing. + *
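 + * A minimal usage sketch (the four-vertex "bow-tie" below is illustrative; the
 + * list is reordered in place):
 + * <pre>{@code
 + * List<PVector> tour = new ArrayList<>(List.of(new PVector(0, 0),
 + *     new PVector(10, 10), new PVector(10, 0), new PVector(0, 10)));
 + * Uncrossing2Opt.uncross(tour); // tour is now a simple (non-crossing) quadrilateral
 + * }</pre>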

      + * + *

      How It Works (Key Idea)

      + *

      + * A 2-opt reversal removes exactly two geometric edges and creates exactly two + * new geometric edges. All other segments in the polygon remain the same (just + * traversed in reverse order), so their crossing status does not change. + * Therefore, after a swap only the two new boundary edges can + * introduce new crossings. + *

      + * + *

      Algorithm

      + *
        + *
 + * 1. Dirty-edge stack initialization: Treat each edge (by its
 + * start index {@code i}, representing {@code (i -> i+1)} with wraparound) as
 + * "dirty" and push all indices onto a stack.
 + * 2. Edge validation: Pop one dirty edge {@code i} and scan
 + * it against all non-adjacent edges to find any crossing partner {@code j}. If
 + * none is found, edge {@code i} is "clean" and is not revisited unless it
 + * becomes a new boundary edge of a later 2-opt move.
 + * 3. 2-opt reversal: If a crossing is found between edges
 + * {@code (i,i+1)} and {@code (j,j+1)}, reverse the appropriate contiguous
 + * vertex range to remove the crossing.
 + * 4. Incremental rechecking: After the reversal, push only
 + * the two new boundary-edge indices back onto the dirty stack (because only
 + * those two edges have changed geometrically).
      + * + *

      Performance Optimizations

      + *
        + *
      • No global rescans / no crossing graph: Avoids + * {@code O(n^2)} rescans per swap and avoids hash-map/set maintenance + * overhead.
      • Array-based coordinates: Uses primitive {@code float[]} + * for cache-friendly access in tight loops.
      • Low-branch intersection test: Inlined proper + * intersection using XOR sign checks, matching the common "strict" crossing + * definition.
      • Modulo avoidance: Wraparound edges are handled + * explicitly to minimize {@code % n} in hot loops.
      • Optional AABB reject: A fast bounding-box overlap test + * can be enabled to prune most non-crossing candidates before orientation + * math.
      + * + *

      Complexity

      + *
        + *
      • Time: Typically {@code O((n + s) * n)} where {@code n} + * is the number of vertices and {@code s} is the number of 2-opt swaps + * performed. Each swap triggers rechecks of only two edges, each checked + * against {@code O(n)} candidates. In the worst case this remains + * {@code O(s n)} with {@code s} potentially large, but it avoids the practical + * {@code n^3}-like behavior of "restart from scratch" implementations.
      • Space: {@code O(n)} for coordinate arrays and the + * dirty-edge stack. No {@code O(c)} storage of crossing pairs is used.
      + * + *

      Assumptions / Semantics

      + *
        + *
      • Input is a closed tour (last vertex connects back to first).
      • Polygon has at least 4 vertices.
      • Uses a "proper" intersection test (touching at endpoints or collinear + * overlaps are treated as non-crossings). If you need to treat those as + * crossings, the predicate must be adjusted.
      • This implementation reorders the {@code PVector} references in the list + * (it reverses ranges of the vertex sequence), rather than shuffling + * coordinates among fixed objects.
      + * + * @author Michael Carleton + * @see 2-opt algorithm + * (Wikipedia) + */ +public final class Uncrossing2Opt { + + // Toggle: cheap AABB overlap check before orientation math + private static final boolean USE_AABB = !true; + + public static void uncross(List seq) { + final int n = seq.size(); + if (n < 4) + return; + + // Work on arrays (fast) and write back at end + final PVector[] arr = seq.toArray(new PVector[n]); + final float[] x = new float[n]; + final float[] y = new float[n]; + for (int i = 0; i < n; i++) { + x[i] = arr[i].x; + y[i] = arr[i].y; + } + + // Dirty-edge stack: indices of edge-starts i (edge is i -> i+1, and n-1 -> 0) + final IntStack stack = new IntStack(n * 2); + for (int i = 0; i < n; i++) + stack.push(i); + + while (!stack.isEmpty()) { + final int i = stack.pop(); + + final int j = findAnyCrossingPartner(i, x, y, n); + if (j < 0) + continue; + + // Apply the 2-opt reversal in the appropriate contiguous range + // and mark ONLY the two new boundary edges dirty. + if (i == n - 1) { + // crossing between closing edge (n-1 -> 0) and (j -> j+1): reverse [0..j] + reverseRange(arr, x, y, 0, j); + + // Only changed geometric edges are now at indices (n-1) and (j) + stack.push(n - 1); + stack.push(j); + + } else if (j == n - 1) { + // crossing between (i -> i+1) and closing edge: reverse [i+1 .. n-1] + reverseRange(arr, x, y, i + 1, n - 1); + + // Only changed geometric edges are now at indices (i) and (n-1) + stack.push(i); + stack.push(n - 1); + + } else { + // general case: normalize so p < q, reverse [p+1..q] + int p = i, q = j; + if (p > q) { + int t = p; + p = q; + q = t; + } + + reverseRange(arr, x, y, p + 1, q); + + // Only changed geometric edges are now at indices p and q + stack.push(p); + stack.push(q); + } + } + + // write back reordered points (fast) + for (int i = 0; i < n; i++) + seq.set(i, arr[i]); + } + + /** + * Returns any non-adjacent edge index j such that edge(i) crosses edge(j), else + * -1. + * + * Heavily optimized: - avoids abs/modulo in the main candidate loops - handles + * wrap edges explicitly - optional AABB reject - inlined proper intersection + * (strict) using XOR sign tests + */ + private static int findAnyCrossingPartner(int i, float[] x, float[] y, int n) { + final float ax, ay, bx, by; + + if (i == n - 1) { + ax = x[n - 1]; + ay = y[n - 1]; + bx = x[0]; + by = y[0]; + } else { + ax = x[i]; + ay = y[i]; + bx = x[i + 1]; + by = y[i + 1]; + } + + final float abx = bx - ax; + final float aby = by - ay; + + final float minAx = (ax < bx) ? ax : bx; + final float maxAx = (ax > bx) ? ax : bx; + final float minAy = (ay < by) ? ay : by; + final float maxAy = (ay > by) ? ay : by; + + // Candidate edges depend on i to avoid adjacency checks. + if (i == 0) { + // Edge (0->1): skip adjacent edges (n-1->0) and (1->2). + for (int k = 2; k <= n - 2; k++) { + final int k1 = k + 1; + if (segmentsCrossFast(ax, ay, bx, by, abx, aby, minAx, maxAx, minAy, maxAy, x[k], y[k], x[k1], y[k1])) + return k; + } + return -1; + } + + if (i == n - 1) { + // Closing edge (n-1->0): skip adjacent edges (n-2->n-1) and (0->1). + for (int k = 1; k <= n - 3; k++) { + final int k1 = k + 1; + if (segmentsCrossFast(ax, ay, bx, by, abx, aby, minAx, maxAx, minAy, maxAy, x[k], y[k], x[k1], y[k1])) + return k; + } + return -1; + } + + // General i in [1..n-2]: + // Check k in [0..i-2] and [i+2..n-2], plus possibly k=n-1 (closing edge) if not + // adjacent. 
+ for (int k = 0; k <= i - 2; k++) { + final int k1 = k + 1; + if (segmentsCrossFast(ax, ay, bx, by, abx, aby, minAx, maxAx, minAy, maxAy, x[k], y[k], x[k1], y[k1])) + return k; + } + + for (int k = i + 2; k <= n - 2; k++) { + final int k1 = k + 1; + if (segmentsCrossFast(ax, ay, bx, by, abx, aby, minAx, maxAx, minAy, maxAy, x[k], y[k], x[k1], y[k1])) + return k; + } + + // Check closing edge k = n-1 (n-1 -> 0) unless adjacent (only adjacent when i + // == n-2) + if (i != n - 2) { + if (segmentsCrossFast(ax, ay, bx, by, abx, aby, minAx, maxAx, minAy, maxAy, x[n - 1], y[n - 1], x[0], y[0])) + return n - 1; + } + + return -1; + } + + // Proper intersection test (strict): excludes collinear/touching cases. + private static boolean segmentsCrossFast(float ax, float ay, float bx, float by, float abx, float aby, float minAx, float maxAx, float minAy, float maxAy, + float cx, float cy, float dx, float dy) { + if (USE_AABB) { + final float minCx = (cx < dx) ? cx : dx; + final float maxCx = (cx > dx) ? cx : dx; + if (maxAx < minCx || maxCx < minAx) + return false; + + final float minCy = (cy < dy) ? cy : dy; + final float maxCy = (cy > dy) ? cy : dy; + if (maxAy < minCy || maxCy < minAy) + return false; + } + + // o1 and o2: C and D on opposite sides of AB? + final float acx = cx - ax, acy = cy - ay; + final float adx = dx - ax, ady = dy - ay; + + final float o1 = abx * acy - aby * acx; + final float o2 = abx * ady - aby * adx; + + // strict opposite sign (o1==0 or o2==0 => reject) + if (!((o1 > 0) ^ (o2 > 0))) + return false; + + // o3 and o4: A and B on opposite sides of CD? + final float cdx = dx - cx, cdy = dy - cy; + final float cax = ax - cx, cay = ay - cy; + final float cbx = bx - cx, cby = by - cy; + + final float o3 = cdx * cay - cdy * cax; + final float o4 = cdx * cby - cdy * cbx; + + return ((o3 > 0) ^ (o4 > 0)); + } + + private static void reverseRange(PVector[] arr, float[] x, float[] y, int l, int r) { + while (l < r) { + PVector tp = arr[l]; + arr[l] = arr[r]; + arr[r] = tp; + + float tx = x[l]; + x[l] = x[r]; + x[r] = tx; + float ty = y[l]; + y[l] = y[r]; + y[r] = ty; + + l++; + r--; + } + } + + // Tiny int stack (no boxing) + private static final class IntStack { + private int[] a; + private int sz; + + IntStack(int cap) { + a = new int[Math.max(8, cap)]; + } + + boolean isEmpty() { + return sz == 0; + } + + void push(int v) { + if (sz == a.length) { + int[] b = new int[a.length << 1]; + System.arraycopy(a, 0, b, 0, a.length); + a = b; + } + a[sz++] = v; + } + + int pop() { + return a[--sz]; + } + } +} \ No newline at end of file diff --git a/src/main/java/micycle/pgs/commons/VisibilityPolygon.java b/src/main/java/micycle/pgs/commons/VisibilityPolygon.java index 00c89b9d..6dd288f4 100644 --- a/src/main/java/micycle/pgs/commons/VisibilityPolygon.java +++ b/src/main/java/micycle/pgs/commons/VisibilityPolygon.java @@ -5,6 +5,8 @@ import java.util.Collections; import java.util.List; +import org.locationtech.jts.algorithm.Angle; +import org.locationtech.jts.algorithm.LineIntersector; import org.locationtech.jts.algorithm.RobustLineIntersector; import org.locationtech.jts.geom.Coordinate; import org.locationtech.jts.geom.Envelope; @@ -23,11 +25,14 @@ import net.jafama.FastMath; /** + * * This class computes an isovist, which is the volume of space visible from a * specific point in space, based on a given set of original segments. + * *

      * The code in this class is adapted from Byron Knoll's javascript library, * available at https://github.com/byronknoll/visibility-polygon-js + * *

      *

        *
      • Sort all vertices based on their angle to the observer.
      • @@ -57,30 +62,39 @@ *
      * * @author Nicolas Fortin of Ifsttar UMRAE - * @author Small changes by Michael Carleton + * @author Changes by Michael Carleton */ public class VisibilityPolygon { - // from h2gis-utilities - - private static final double M_2PI = Math.PI * 2.; private static final Coordinate NAN_COORDINATE = new Coordinate(Coordinate.NULL_ORDINATE, Coordinate.NULL_ORDINATE); // maintain the list of limits sorted by angle private double maxDistance; private List originalSegments = new ArrayList<>(); private double epsilon = 1e-8; // epsilon to help avoid degeneracies - private int numPoints = 96; /** - * @param maxDistance maximum distance (from the view point) constraint for the - * visibility polygon + * Creates a visibility-polygon (isovist) builder with a user-defined range + * limit. + *

      + * When getIsovist(..., true) is called, maxDistance defines the half-size of + * the axis-aligned bounding square centered on the view point (the square’s + * side length is 2*maxDistance). This bounds the result in otherwise unbounded + * scenes. + * + * @param maxDistance positive half-size of the optional bounding square, in the + * same units as the input coordinates */ public VisibilityPolygon(double maxDistance) { this.maxDistance = maxDistance; } + /** + * Creates a visibility-polygon (isovist) builder with a default range limit. + *

      + * Equivalent to new VisibilityPolygon(2500). + */ public VisibilityPolygon() { - this(2000); + this(2500); } /** @@ -90,7 +104,7 @@ public VisibilityPolygon() { * * @param viewPoints the collection of view points from which the isovist is * computed. - * @param addEnvelope a boolean flag indicating whether to include a circle + * @param addEnvelope a boolean flag indicating whether to include a square * bounding box in the resulting geometry. * @return a polygonal geometry representing the isovist. The geometry returned * may be a single polygon or a multipolygon comprising multiple @@ -109,17 +123,20 @@ public Geometry getIsovist(Collection viewPoints, boolean addEnvelop /** * Computes an isovist, the area of the input visible from a given point in + * * space. - * + * * @param viewPoint View coordinate - * @param addEnvelope If true add circle bounding box. This function does not + * + * @param addEnvelope If true add square bounding box. This function does not + * * work properly if the view point is not enclosed by * segments * @return visibility polygon */ public Polygon getIsovist(Coordinate viewPoint, boolean addEnvelope) { - // Add bounding circle - List bounded = new ArrayList<>(originalSegments.size() + numPoints); + // Add bounding square + List bounded = new ArrayList<>(originalSegments.size() + 4); // Compute envelope Envelope env = new Envelope(); @@ -130,27 +147,24 @@ public Polygon getIsovist(Coordinate viewPoint, boolean addEnvelope) { if (addEnvelope) { // Add bounding geom in envelope env.expandToInclude(new Coordinate(viewPoint.x - maxDistance, viewPoint.y - maxDistance)); - env.expandToInclude(new Coordinate(viewPoint.x + maxDistance, viewPoint.y + viewPoint.x)); + env.expandToInclude(new Coordinate(viewPoint.x + maxDistance, viewPoint.y + maxDistance)); GeometricShapeFactory geometricShapeFactory = new GeometricShapeFactory(); geometricShapeFactory.setCentre(new Coordinate(viewPoint.x - env.getMinX(), viewPoint.y - env.getMinY())); geometricShapeFactory.setWidth(maxDistance * 2); geometricShapeFactory.setHeight(maxDistance * 2); - geometricShapeFactory.setNumPoints(numPoints); - addPolygon(bounded, geometricShapeFactory.createEllipse()); + addPolygon(bounded, geometricShapeFactory.createRectangle()); for (SegmentString segment : originalSegments) { final Coordinate a = segment.getCoordinate(0); final Coordinate b = segment.getCoordinate(1); - addSegment(bounded, new Coordinate(a.x - env.getMinX(), a.y - env.getMinY()), - new Coordinate(b.x - env.getMinX(), b.y - env.getMinY())); + addSegment(bounded, new Coordinate(a.x - env.getMinX(), a.y - env.getMinY()), new Coordinate(b.x - env.getMinX(), b.y - env.getMinY())); } - // Intersection with bounding circle + // Intersection with bounding square bounded = fixSegments(bounded); } else { for (SegmentString segment : originalSegments) { final Coordinate a = segment.getCoordinate(0); final Coordinate b = segment.getCoordinate(1); - addSegment(bounded, new Coordinate(a.x - env.getMinX(), a.y - env.getMinY()), - new Coordinate(b.x - env.getMinX(), b.y - env.getMinY())); + addSegment(bounded, new Coordinate(a.x - env.getMinX(), a.y - env.getMinY()), new Coordinate(b.x - env.getMinX(), b.y - env.getMinY())); } } @@ -250,7 +264,7 @@ public void fixSegments() { */ private static List fixSegments(List segments) { MCIndexNoder mCIndexNoder = new MCIndexNoder(); - RobustLineIntersector robustLineIntersector = new RobustLineIntersector(); + LineIntersector robustLineIntersector = new RobustLineIntersector(); 
mCIndexNoder.setSegmentIntersector(new IntersectionAdder(robustLineIntersector)); mCIndexNoder.computeNodes(segments); Collection nodedSubstring = mCIndexNoder.getNodedSubstrings(); @@ -261,14 +275,6 @@ private static List fixSegments(List segments) { return ret; } - /** - * @param numPoints Number of points of the bounding circle polygon. Default = - * 96. - */ - public void setNumPoints(int numPoints) { - this.numPoints = numPoints; - } - private static double angle(Coordinate a, Coordinate b) { return FastMath.atan2(b.y - a.y, b.x - a.x); } @@ -301,14 +307,7 @@ private static int getParent(int index) { private double angle2(Coordinate a, Coordinate b, Coordinate c) { double a1 = angle(a, b); double a2 = angle(b, c); - double a3 = a1 - a2; - if (a3 < 0) { - a3 += M_2PI; - } - if (a3 > M_2PI) { - a3 -= M_2PI; - } - return a3; + return Angle.normalizePositive(a1 - a2); } private boolean lessThan(int index1, int index2, Coordinate position, List segments, Coordinate destination) { @@ -335,15 +334,14 @@ private boolean lessThan(int index1, int index2, Coordinate position, List heap, Coordinate position, List segments, Coordinate destination, - List map) { + private void remove(int index, List heap, Coordinate position, List segments, Coordinate destination, List map) { map.set(heap.get(index), -1); if (index == heap.size() - 1) { heap.remove(heap.size() - 1); @@ -391,8 +389,7 @@ private void remove(int index, List heap, Coordinate position, List heap, Coordinate position, List segments, Coordinate destination, - List map) { + private void insert(int index, List heap, Coordinate position, List segments, Coordinate destination, List map) { Coordinate inter = intersectLines(segments.get(index), position, destination); if (NAN_COORDINATE.equals2D(inter, epsilon)) { return; @@ -423,57 +420,54 @@ public void setEpsilon(double epsilon) { } /** + * * Explode geometry and add occlusion segments in isovist - * + * * @param geometry Geometry collection, LineString or Polygon instance */ public void addGeometry(Geometry geometry) { if (geometry instanceof LineString) { - addLineString(originalSegments, (LineString) geometry); + addLineString((LineString) geometry); } else if (geometry instanceof Polygon) { - addPolygon(originalSegments, (Polygon) geometry); + addPolygon((Polygon) geometry); } else if (geometry instanceof GeometryCollection) { - addGeometry(originalSegments, (GeometryCollection) geometry); + addGeometry((GeometryCollection) geometry); } } - private static void addGeometry(List segments, GeometryCollection geometry) { + private void addGeometry(GeometryCollection geometry) { int geoCount = geometry.getNumGeometries(); for (int n = 0; n < geoCount; n++) { Geometry simpleGeom = geometry.getGeometryN(n); if (simpleGeom instanceof LineString) { - addLineString(segments, (LineString) simpleGeom); + addLineString((LineString) simpleGeom); } else if (simpleGeom instanceof Polygon) { - addPolygon(segments, (Polygon) simpleGeom); + addPolygon((Polygon) simpleGeom); } else if (simpleGeom instanceof GeometryCollection) { - addGeometry(segments, (GeometryCollection) simpleGeom); + addGeometry((GeometryCollection) simpleGeom); } } } - private static void addPolygon(List segments, Polygon poly) { - addLineString(segments, poly.getExteriorRing()); + private void addPolygon(Polygon poly) { + addLineString(poly.getExteriorRing()); final int ringCount = poly.getNumInteriorRing(); // Keep interior ring if the viewpoint is inside the polygon for (int nr = 0; nr < ringCount; nr++) { - addLineString(segments, 
poly.getInteriorRingN(nr)); + addLineString(poly.getInteriorRingN(nr)); } } public void addLineString(LineString lineString) { - addLineString(originalSegments, lineString); - } - - private static void addLineString(List segments, LineString lineString) { int nPoint = lineString.getNumPoints(); for (int idPoint = 0; idPoint < nPoint - 1; idPoint++) { - addSegment(segments, lineString.getCoordinateN(idPoint), lineString.getCoordinateN(idPoint + 1)); + addSegment(lineString.getCoordinateN(idPoint), lineString.getCoordinateN(idPoint + 1)); } } /** * Add an occlusion segment to the isovist. - * + * * @param p0 segment origin * @param p1 segment destination */ @@ -481,7 +475,23 @@ public void addSegment(Coordinate p0, Coordinate p1) { if (p0.distance(p1) < epsilon) { return; } - addSegment(originalSegments, p0, p1); + originalSegments.add(new NodedSegmentString(new Coordinate[] { p0, p1 }, originalSegments.size() + 1)); + } + + private static void addPolygon(List segments, Polygon poly) { + addLineString(segments, poly.getExteriorRing()); + final int ringCount = poly.getNumInteriorRing(); + // Keep interior ring if the viewpoint is inside the polygon + for (int nr = 0; nr < ringCount; nr++) { + addLineString(segments, poly.getInteriorRingN(nr)); + } + } + + private static void addLineString(List segments, LineString lineString) { + int nPoint = lineString.getNumPoints(); + for (int idPoint = 0; idPoint < nPoint - 1; idPoint++) { + addSegment(segments, lineString.getCoordinateN(idPoint), lineString.getCoordinateN(idPoint + 1)); + } } private static void addSegment(List segments, Coordinate p0, Coordinate p1) { @@ -489,6 +499,7 @@ private static void addSegment(List segments, Coordinate p0, Coor } /** + * * Defines segment vertices. */ private static final class Vertex implements Comparable { diff --git a/src/main/java/micycle/pgs/commons/VoronoiInterpolator.java b/src/main/java/micycle/pgs/commons/VoronoiInterpolator.java new file mode 100644 index 00000000..081c6d3f --- /dev/null +++ b/src/main/java/micycle/pgs/commons/VoronoiInterpolator.java @@ -0,0 +1,484 @@ +package micycle.pgs.commons; + +import java.util.ArrayList; +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.Set; +import java.util.stream.IntStream; +import java.util.stream.Stream; + +import org.locationtech.jts.algorithm.Distance; +import org.locationtech.jts.densify.Densifier; +import org.locationtech.jts.geom.Coordinate; +import org.locationtech.jts.geom.Envelope; +import org.locationtech.jts.geom.Geometry; +import org.locationtech.jts.geom.GeometryFactory; +import org.locationtech.jts.geom.LineSegment; +import org.locationtech.jts.geom.LineString; +import org.locationtech.jts.geom.Polygon; +import org.locationtech.jts.geom.util.AffineTransformation; +import org.locationtech.jts.geom.util.GeometryCombiner; +import org.locationtech.jts.geom.util.GeometryFixer; +import org.locationtech.jts.geom.util.PolygonExtracter; +import org.locationtech.jts.triangulate.VoronoiDiagramBuilder; + +import com.github.micycle1.geoblitz.HilbertParallelPolygonUnion; + +import net.jafama.FastMath; + +/** + * Implements the Voronoi morph described in Abstract morphing using the + * Hausdorff distance and Voronoi diagrams (de Kogel, van Kreveld, + * Vermeulen). + *

      + * Given two planar shapes {@code A} and {@code B} (as JTS {@link Geometry}), + * the morph is evaluated for {@code alpha ∈ [0,1]} by: + *

        + *
      • computing the overlap {@code A ∩ B} (kept fixed throughout the + * morph),
      • partitioning the non-overlapping parts {@code A \ B} and {@code B \ A} by + * Voronoi cells of sampled boundary sites of the opposite shape,
      • moving each partition piece toward its associated closest-site on the + * other shape, using a piecewise affine transform: + *
          + *
        • for vertex-sites: uniform scaling toward the vertex,
        • for edge-sites: scaling perpendicular to the edge’s supporting line,
        + *
      • unioning (optionally) the transformed pieces with the fixed overlap.
      + *
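 + * A minimal usage sketch of the prepare-then-interpolate workflow described
 + * below (assumes {@code a} and {@code b} are JTS polygonal geometries, the
 + * densification and clip-expansion values are illustrative, and the morph
 + * result is consumed as a JTS {@code Geometry}):
 + * <pre>{@code
 + * VoronoiPartition partition = VoronoiInterpolator.prepareVoronoiPartition(a, b, 5.0, 50.0);
 + * for (int i = 0; i <= 10; i++) {
 + *     double alpha = i / 10.0;
 + *     Geometry frame = VoronoiInterpolator.interpolateVoronoi(partition, alpha, true);
 + *     // render or export frame...
 + * }
 + * }</pre>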

      + * This class separates an expensive, one-time preprocessing step + * ({@link #prepareVoronoiPartition(Geometry, Geometry, double, double) + * prepareVoronoiPartition()}) from the evaluation step + * ({@link #interpolateVoronoi(VoronoiPartition, double, boolean) + * interpolateVoronoi()}), so the same partition can be reused to render many + * frames for different {@code alpha}. + * + * @author Michael Carleton + */ +public final class VoronoiInterpolator { + + private VoronoiInterpolator() { + } + + /** + * Cached preprocessing result for Voronoi morph evaluation. + *

+ * <p>
+ * Contains:
+ * <ul>
+ * <li>{@link #overlap}: the fixed overlap {@code A ∩ B} (unchanged throughout
+ * the morph),</li>
+ * <li>{@link #aPieces}: partitioned pieces of {@code A \ B} tagged with
+ * closest-site information on {@code B},</li>
+ * <li>{@link #bPieces}: partitioned pieces of {@code B \ A} tagged with
+ * closest-site information on {@code A}.</li>
+ * </ul>
      + * The expensive work is the geometric partitioning/overlay done during + * preparation. The interpolation step only applies affine transforms to these + * cached pieces and optionally unions them. + */ + public static final class VoronoiPartition { + /** Geometry factory used to create output geometries. */ + public final GeometryFactory gf; + + /** Fixed overlap region {@code A ∩ B} (unchanged throughout the morph). */ + public final Geometry overlap; + + /** + * Partition pieces from {@code A \ B}, each tagged with closest-site info on + * {@code B}. + */ + public final List aPieces; + + /** + * Partition pieces from {@code B \ A}, each tagged with closest-site info on + * {@code A}. + */ + public final List bPieces; + + /** + * Creates a partition container. Intended to be produced by + * {@link VoronoiInterpolator#prepareVoronoiPartition(Geometry, Geometry, double, double) + * prepareVoronoiPartition()}. + * + * @param gf geometry factory for outputs + * @param overlap fixed overlap {@code A ∩ B} + * @param aPieces partition pieces from {@code A \ B} + * @param bPieces partition pieces from {@code B \ A} + */ + private VoronoiPartition(GeometryFactory gf, Geometry overlap, List aPieces, List bPieces) { + this.gf = gf; + this.overlap = overlap; + this.aPieces = aPieces; + this.bPieces = bPieces; + } + } + + /** + * A single partition piece together with the closest-site information that + * determines how it moves during the morph. + *

      + * {@link #geom} is the static (alpha-independent) geometry of the piece; + * {@link #site} defines the affine transform applied at evaluation time. + */ + private static final class Piece { + /** Partitioned polygonal piece (independent of {@code alpha}). */ + public final Geometry geom; + + /** + * Closest-site information (vertex/edge) used to derive the affine transform. + */ + public final SiteInfo site; + + /** + * Creates a piece record. + * + * @param geom piece geometry (typically a {@link Polygon}) + * @param site closest-site info describing the motion model for this piece + */ + private Piece(Geometry geom, SiteInfo site) { + this.geom = geom; + this.site = site; + } + } + + /** + * Performs the expensive, one-time preprocessing step: fixes inputs, computes + * the fixed overlap {@code A ∩ B}, and partitions the non-overlapping parts + * using Voronoi cells of sampled boundary sites. + *

      + * The returned {@link VoronoiPartition} is intended to be reused to evaluate + * many intermediate shapes for different {@code alpha} values via + * {@link #interpolateVoronoi(VoronoiPartition, double, boolean) + * interpolateVoronoi()}. + * + * @param a input shape {@code A} + * @param b input shape {@code B} + * @param maxSegmentLength maximum segment length used to densify the boundary + * when sampling Voronoi sites; {@code <= 0} disables + * densification + * @param clipExpand non-negative expansion applied to the combined + * envelope of {@code A} and {@code B} to form the + * Voronoi clip envelope + * @return a reusable partition containing the fixed overlap and tagged + * partition pieces + * @throws NullPointerException if {@code a} or {@code b} is null + */ + public static VoronoiPartition prepareVoronoiPartition(Geometry a, Geometry b, double maxSegmentLength, double clipExpand) { + Objects.requireNonNull(a, "a"); + Objects.requireNonNull(b, "b"); + + Geometry aFix = GeometryFixer.fix(a); + Geometry bFix = GeometryFixer.fix(b); + GeometryFactory gf = aFix.getFactory(); + + Geometry overlap = aFix.intersection(bFix); + Geometry aOutsideB = aFix.difference(bFix); + Geometry bOutsideA = bFix.difference(aFix); + + Envelope clip = new Envelope(aFix.getEnvelopeInternal()); + clip.expandToInclude(bFix.getEnvelopeInternal()); + clip.expandBy(Math.max(clipExpand, 0.0)); + + // Precompute partition pieces once: + List aPieces = partitionPieces(aOutsideB, bFix, maxSegmentLength, clip, gf); + List bPieces = partitionPieces(bOutsideA, aFix, maxSegmentLength, clip, gf); + + return new VoronoiPartition(gf, overlap, aPieces, bPieces); + } + + /** + * Fast evaluation step: computes the intermediate shape for the given + * {@code alpha} by applying affine transforms to the cached partition pieces + * and combining them with the fixed overlap. + *

      + * Pieces originating from {@code A \ B} are transformed with fraction + * {@code alpha}; pieces originating from {@code B \ A} are transformed with + * fraction {@code 1 - alpha}. + * + * @param p cached preprocessing result returned by + * {@link #prepareVoronoiPartition(Geometry, Geometry, double, double)} + * @param alpha morph parameter in {@code [0, 1]} + * @param doUnion if {@code true}, unions all parts into a clean area geometry + * (slowest); if {@code false}, combines without union (fast) and + * may leave overlaps/seams + * @return the interpolated geometry for {@code alpha} + * @throws NullPointerException if {@code p} is null + * @throws IllegalArgumentException if {@code alpha} is outside {@code [0,1]} + */ + public static Geometry interpolateVoronoi(VoronoiPartition p, double alpha, boolean doUnion) { + Objects.requireNonNull(p, "partition"); + if (alpha < 0.0 || alpha > 1.0) { + throw new IllegalArgumentException("alpha must be in [0,1]"); + } + + if (alpha == 0.0) { + // Reconstruct something close to A: overlap + aPieces at fraction 0 + bPieces + // at fraction 1 + // If you need exact A, just keep A separately. This is meant for animation. + } + if (alpha == 1.0) { + // same note as alpha==0 + } + + double fracA = alpha; + double fracB = 1.0 - alpha; + + // Transform pieces in parallel (cheap compared to overlay ops) + List movedA = p.aPieces.isEmpty() ? List.of() + : p.aPieces.parallelStream().map(pc -> buildTransformForSite(pc.site, fracA).transform(pc.geom)).filter(g -> !g.isEmpty()).toList(); + + List movedB = p.bPieces.isEmpty() ? List.of() + : p.bPieces.parallelStream().map(pc -> buildTransformForSite(pc.site, fracB).transform(pc.geom)).filter(g -> !g.isEmpty()).toList(); + + ArrayList out = new ArrayList<>(1 + movedA.size() + movedB.size()); + if (p.overlap != null && !p.overlap.isEmpty()) { + out.add(p.overlap); + } + out.addAll(movedA); + out.addAll(movedB); + + if (out.isEmpty()) { + return p.gf.createGeometryCollection(); + } + + if (!doUnion) { + return GeometryCombiner.combine(out); + } + Geometry u = HilbertParallelPolygonUnion.union(out); + return u; + } + + private enum SiteType { + VERTEX, EDGE + } + + private static final class SiteInfo { + final Coordinate site; + final SiteType type; + + // EDGE only + final Coordinate edgeOrigin; + final double edgeAngle; + + private SiteInfo(Coordinate site, SiteType type, Coordinate edgeOrigin, double edgeAngle) { + this.site = site; + this.type = type; + this.edgeOrigin = edgeOrigin; + this.edgeAngle = edgeAngle; + } + + static SiteInfo vertex(Coordinate c) { + return new SiteInfo(new Coordinate(c), SiteType.VERTEX, null, 0.0); + } + + static SiteInfo edge(Coordinate c, Coordinate origin, double angle) { + return new SiteInfo(new Coordinate(c), SiteType.EDGE, new Coordinate(origin), angle); + } + } + + /** + * One-time: partition `moving` by Voronoi cells of `other`, returning + * (piece,site) pairs. This is where the expensive `moving.intersection(cell)` + * happens. 
+ */ + private static List partitionPieces(Geometry moving, Geometry other, double maxSegmentLength, Envelope clip, GeometryFactory gf) { + if (moving.isEmpty()) { + return List.of(); + } + + List sites = sampleBoundarySites(other, maxSegmentLength); + if (sites.isEmpty()) { + return List.of(); + } + + Geometry cells = buildVoronoiCells(sites, clip, gf); + + Map siteByKey = new HashMap<>(sites.size() * 2); + for (SiteInfo si : sites) { + siteByKey.put(key(si.site), si); + } + + int n = cells.getNumGeometries(); + Envelope movEnv = moving.getEnvelopeInternal(); + + return IntStream.range(0, n).parallel().boxed().flatMap(i -> { + Geometry cell = cells.getGeometryN(i); + if (!(cell instanceof Polygon) || cell.isEmpty()) { + return Stream.empty(); + } + if (!movEnv.intersects(cell.getEnvelopeInternal())) { + return Stream.empty(); + } + + Object ud = cell.getUserData(); + if (!(ud instanceof Coordinate)) { + return Stream.empty(); + } + + SiteInfo si = siteByKey.get(key((Coordinate) ud)); + if (si == null) { + return Stream.empty(); + } + + Geometry piece = moving.intersection(cell); + if (piece.isEmpty()) { + return Stream.empty(); + } + + // Fast path: already a single polygon + if (piece instanceof Polygon p) { + return p.isEmpty() ? Stream.empty() : Stream.of(new Piece(p, si)); + } + + // Slower path only when needed: explode collections/multipolygons to polygons + @SuppressWarnings("unchecked") + List polys = PolygonExtracter.getPolygons(piece); + if (polys.isEmpty()) { + return Stream.empty(); // line/point/etc + } + + return polys.stream().filter(p -> !p.isEmpty()).map(p -> new Piece(p, si)); + }).toList(); + } + + /** Same as before */ + private static AffineTransformation buildTransformForSite(SiteInfo si, double fraction) { + double s = 1.0 - fraction; + + if (si.type == SiteType.VERTEX) { + return AffineTransformation.scaleInstance(s, s, si.site.x, si.site.y); + } + + double angle = si.edgeAngle; + Coordinate o = si.edgeOrigin; + + AffineTransformation t = new AffineTransformation(); + t.translate(-o.x, -o.y); + t.rotate(-angle); + t.scale(1.0, s); + t.rotate(angle); + t.translate(o.x, o.y); + return t; + } + + private static Geometry buildVoronoiCells(List sites, Envelope clip, GeometryFactory gf) { + Coordinate[] coords = new Coordinate[sites.size()]; + for (int i = 0; i < sites.size(); i++) { + coords[i] = sites.get(i).site; + } + + Geometry mp = gf.createMultiPointFromCoords(coords); + + VoronoiDiagramBuilder vdb = new VoronoiDiagramBuilder(); + vdb.setSites(mp); + vdb.setClipEnvelope(clip); + + // Returns a GeometryCollection of polygons; each polygon has + // userData=Coordinate(site) + return vdb.getDiagram(gf); + } + + private static List sampleBoundarySites(Geometry g, double maxSegmentLength) { + Geometry boundary = g.getBoundary(); + if (boundary.isEmpty()) { + return List.of(); + } + + Geometry boundaryToSample = boundary; + if (maxSegmentLength > 0.0) { + boundaryToSample = Densifier.densify(boundary, maxSegmentLength); + } + + // Extract segments from the ORIGINAL boundary (for EDGE direction + // classification). + List segs = extractSegments(boundary); + + // Extract vertices of the ORIGINAL boundary. + Set vertexKeys = new HashSet<>(); + for (Coordinate c : boundary.getCoordinates()) { + vertexKeys.add(key(c)); + } + + // Create sites from sampled coordinates. + // If near a vertex -> VERTEX site, else -> EDGE site using closest boundary + // segment direction. 
+ Set seen = new HashSet<>(); + List sites = new ArrayList<>(); + + for (Coordinate c : boundaryToSample.getCoordinates()) { + String k = key(c); + if (!seen.add(k)) { + continue; + } + + if (vertexKeys.contains(k)) { + sites.add(SiteInfo.vertex(c)); + } else { + SegmentMatch m = closestSegmentMatch(c, segs); + if (m != null) { + double ang = FastMath.atan2(m.seg.p1.y - m.seg.p0.y, m.seg.p1.x - m.seg.p0.x); + sites.add(SiteInfo.edge(c, m.seg.p0, ang)); + } else { + // Fallback: treat as vertex site + sites.add(SiteInfo.vertex(c)); + } + } + } + + return sites; + } + + private static final class SegmentMatch { + final LineSegment seg; + final double dist2; + + SegmentMatch(LineSegment seg, double dist2) { + this.seg = seg; + this.dist2 = dist2; + } + } + + private static SegmentMatch closestSegmentMatch(Coordinate p, List segs) { + LineSegment best = null; + double bestD2 = Double.POSITIVE_INFINITY; + + for (LineSegment s : segs) { + double d2 = Distance.pointToSegmentSq(p, s.p0, s.p1); + if (d2 < bestD2) { + bestD2 = d2; + best = s; + } + } + return best == null ? null : new SegmentMatch(best, bestD2); + } + + private static List extractSegments(Geometry boundary) { + List segs = new ArrayList<>(); + for (int i = 0; i < boundary.getNumGeometries(); i++) { + Geometry gi = boundary.getGeometryN(i); + if (!(gi instanceof LineString)) { + continue; + } + Coordinate[] cs = ((LineString) gi).getCoordinates(); + for (int j = 0; j + 1 < cs.length; j++) { + if (!cs[j].equals2D(cs[j + 1])) { + segs.add(new LineSegment(cs[j], cs[j + 1])); + } + } + } + return segs; + } + + // coordinate key with rounding to make Coordinate usable as map key + private static String key(Coordinate c) { + double eps = 1e-9; + long ix = Math.round(c.x / eps); + long iy = Math.round(c.y / eps); + return ix + ":" + iy; + } +} \ No newline at end of file diff --git a/src/main/java/micycle/pgs/package-info.java b/src/main/java/micycle/pgs/package-info.java index 2a34b246..835d2667 100644 --- a/src/main/java/micycle/pgs/package-info.java +++ b/src/main/java/micycle/pgs/package-info.java @@ -14,6 +14,7 @@ *

 * <li>{@link micycle.pgs.PGS_Morphology Morphology}</li>
 * <li>{@link micycle.pgs.PGS_Optimisation Optimisation}</li>
 * <li>{@link micycle.pgs.PGS_PointSet Point Sets}</li>
+ * <li>{@link micycle.pgs.PGS_Polygonisation Polygonisation}</li>
 * <li>{@link micycle.pgs.PGS_Processing Processing}</li>
 * <li>{@link micycle.pgs.PGS_SegmentSet Segment Sets}</li>
 * <li>{@link micycle.pgs.PGS_ShapeBoolean Shape Boolean}</li>
    • diff --git a/src/test/java/micycle/pgs/PGSTests.java b/src/test/java/micycle/pgs/PGSTests.java index 0bfa1e2a..1d0ecff1 100644 --- a/src/test/java/micycle/pgs/PGSTests.java +++ b/src/test/java/micycle/pgs/PGSTests.java @@ -7,8 +7,6 @@ import static org.junit.jupiter.api.Assertions.fail; import java.util.ArrayList; -import java.util.Arrays; -import java.util.Collections; import java.util.List; import java.util.function.UnaryOperator; @@ -21,40 +19,12 @@ import org.locationtech.jts.geom.MultiPolygon; import org.locationtech.jts.geom.Polygon; -import micycle.pgs.commons.PEdge; import processing.core.PConstants; import processing.core.PShape; import processing.core.PVector; class PGSTests { - @Test - void testFromEdgesSimple() { - PEdge a = new PEdge(0, 0, 1, 1); - PEdge b = new PEdge(1, 1, 1, 0); - PEdge c = new PEdge(1, 0, 0, 0); - - List edges = Arrays.asList(a, c, b); // a, c, b - - List orderedVertices = PGS.fromEdges(edges); - assertEquals(3, orderedVertices.size()); - } - - @Test - void testFromEdges() { - List edges = new ArrayList<>(); - for (int i = 0; i < 15; i++) { - edges.add(new PEdge(i, i, i + 1, i + 1)); - } - edges.add(new PEdge(15, 15, 0, 0)); // close - - Collections.shuffle(edges); - - List orderedVertices = PGS.fromEdges(edges); - PGS.fromEdges(edges).forEach(q -> System.out.println(q)); - assertEquals(16, orderedVertices.size()); - } - @Test void testOrientation() { /* @@ -170,8 +140,8 @@ void testApplyToLinealGeometries() { PShape multiProcessed = PGS.applyToLinealGeometries(multiShape, dropXge10); assertNotNull(multiProcessed, "MultiPolygon with one surviving child should not be null"); - assertEquals(PConstants.GROUP, multiProcessed.getKind(), "Resulting PShape should be a GROUP"); - assertEquals(1, multiProcessed.getChildCount(), "GROUP should have exactly one child after dropping one polygon"); +// assertEquals(PConstants.GROUP, multiProcessed.getKind(), "Resulting PShape should be a GROUP"); +// assertEquals(1, multiProcessed.getChildCount(), "GROUP should have exactly one child after dropping one polygon"); Geometry multiProcGeom = PGS_Conversion.fromPShape(multiProcessed); // After transformation, should be a MultiPolygon or a Polygon depending on @@ -192,4 +162,84 @@ void testApplyToLinealGeometries() { } } + @Test + void testApplyToLinealGeometriesProcessingOrder() { + GeometryFactory gf = new GeometryFactory(); + + // Polygon with exterior + 2 holes so order is unambiguous + LinearRing exterior = gf.createLinearRing( + new Coordinate[] { new Coordinate(0, 0), new Coordinate(4, 0), new Coordinate(4, 4), new Coordinate(0, 4), new Coordinate(0, 0) }); + LinearRing hole1 = gf.createLinearRing( + new Coordinate[] { new Coordinate(1, 1), new Coordinate(2, 1), new Coordinate(2, 2), new Coordinate(1, 2), new Coordinate(1, 1) }); + LinearRing hole2 = gf.createLinearRing( + new Coordinate[] { new Coordinate(3, 3), new Coordinate(3.5, 3), new Coordinate(3.5, 3.5), new Coordinate(3, 3.5), new Coordinate(3, 3) }); + Polygon poly = gf.createPolygon(exterior, new LinearRing[] { hole1, hole2 }); + PShape polyShape = PGS_Conversion.toPShape(poly); + + // MultiPolygon with 3 polygons; middle one will be dropped, so we can verify + // survivor order too + Polygon polyA = gf.createPolygon( + gf.createLinearRing( + new Coordinate[] { new Coordinate(0, 0), new Coordinate(2, 0), new Coordinate(2, 2), new Coordinate(0, 2), new Coordinate(0, 0) }), + null); + Polygon polyB = gf.createPolygon(gf.createLinearRing( + new Coordinate[] { new Coordinate(10, 10), new Coordinate(12, 10), 
new Coordinate(12, 12), new Coordinate(10, 12), new Coordinate(10, 10) }), + null); + Polygon polyC = gf.createPolygon(gf.createLinearRing( + new Coordinate[] { new Coordinate(20, 20), new Coordinate(22, 20), new Coordinate(22, 22), new Coordinate(20, 22), new Coordinate(20, 20) }), + null); + + MultiPolygon mp = gf.createMultiPolygon(new Polygon[] { polyA, polyB, polyC }); + PShape mpShape = PGS_Conversion.toPShape(mp); + + // (A) Verify CALLING order for Polygon rings + List polyCallOrder = new ArrayList<>(); + UnaryOperator recordPolyCalls = (LineString in) -> { + Coordinate c0 = in.getCoordinateN(0); + polyCallOrder.add(c0.x + "," + c0.y); + return in; + }; + + PGS.applyToLinealGeometries(polyShape, recordPolyCalls); + + assertEquals(List.of("0.0,0.0", "1.0,1.0", "3.0,3.0"), polyCallOrder, + "Polygon ring processing order should be: exterior, then holes in interior-ring index order"); + + // Verify CALLING order for MultiPolygon children + List mpCallOrder = new ArrayList<>(); + UnaryOperator recordMpCalls = (LineString in) -> { + Coordinate c0 = in.getCoordinateN(0); + mpCallOrder.add(c0.x + "," + c0.y); + return in; + }; + + PGS.applyToLinealGeometries(mpShape, recordMpCalls); + + assertEquals(List.of("0.0,0.0", "10.0,10.0", "20.0,20.0"), mpCallOrder, + "MultiPolygon processing order should follow geometry index order (A, then B, then C)"); + + // Verify OUTPUT order of surviving geometries is preserved after dropping B + UnaryOperator dropB = (LineString in) -> { + double x0 = in.getCoordinateN(0).x; + return (x0 == 10.0) ? null : in; // drop polygon B's exterior ring => polygon B removed + }; + + PShape outShape = PGS.applyToLinealGeometries(mpShape, dropB); + Geometry outGeom = PGS_Conversion.fromPShape(outShape); + + if (outGeom instanceof MultiPolygon outMp) { + assertEquals(2, outMp.getNumGeometries(), "After dropping B, exactly 2 polygons should remain"); + + Polygon first = (Polygon) outMp.getGeometryN(0); + Polygon second = (Polygon) outMp.getGeometryN(1); + + assertEquals(0.0, first.getExteriorRing().getCoordinateN(0).x, 0.0, "First survivor should be A"); + assertEquals(20.0, second.getExteriorRing().getCoordinateN(0).x, 0.0, "Second survivor should be C"); + } else if (outGeom instanceof Polygon) { + fail("Expected MultiPolygon with survivors A and C, but got single Polygon (ordering cannot be verified)"); + } else { + fail("Unexpected geometry type after dropping B: " + outGeom.getGeometryType()); + } + } + } diff --git a/src/test/java/micycle/pgs/PGS_ColoringTests.java b/src/test/java/micycle/pgs/PGS_ColoringTests.java index f3e32a45..c92ffeda 100644 --- a/src/test/java/micycle/pgs/PGS_ColoringTests.java +++ b/src/test/java/micycle/pgs/PGS_ColoringTests.java @@ -1,11 +1,11 @@ package micycle.pgs; import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotEquals; import static org.junit.jupiter.api.Assertions.assertNotNull; import static org.junit.jupiter.api.Assertions.assertNotSame; import java.util.ArrayList; -import java.util.List; import java.util.Map; import java.util.function.Supplier; import java.util.stream.Stream; @@ -56,10 +56,14 @@ void prepareGroupShape() { @Test void testMeshColoring() { - Map coloring = PGS_Coloring.colorMesh(GROUP_SHAPE, ColoringAlgorithm.RLF); - assertEquals(2, coloring.size()); - List colorClasses = new ArrayList<>(coloring.values()); - assertNotSame(colorClasses.get(0), colorClasses.get(1)); + for (ColoringAlgorithm alg : ColoringAlgorithm.values()) { + var coloring = 
PGS_Coloring.colorMesh(GROUP_SHAPE, alg); + + assertEquals(2, coloring.size(), "Unexpected size for algorithm: " + alg); + + var colorClasses = new ArrayList<>(coloring.values()); + assertNotEquals(colorClasses.get(0), colorClasses.get(1), "Expected different colors for algorithm: " + alg); + } } @Test diff --git a/src/test/java/micycle/pgs/PGS_ConversionTests.java b/src/test/java/micycle/pgs/PGS_ConversionTests.java index aa4d6ee2..047a0ab9 100644 --- a/src/test/java/micycle/pgs/PGS_ConversionTests.java +++ b/src/test/java/micycle/pgs/PGS_ConversionTests.java @@ -307,7 +307,7 @@ void testMultipointToPoints() { assertTrue(pointsAreEqual(g.getCoordinates()[i], shape.getVertex(i))); } } - + @Test void testMultiContour() { final PShape shape = new PShape(PShape.PATH); // shape with 2 nested items, each having hole @@ -335,20 +335,160 @@ void testMultiContour() { shape.vertex(7, 5); shape.endContour(); shape.endShape(PConstants.CLOSE); - + PGS_Conversion.HANDLE_MULTICONTOUR = true; Geometry g = fromPShape(shape); PGS_Conversion.HANDLE_MULTICONTOUR = false; - + assertEquals(2, g.getNumGeometries()); Polygon a = (Polygon) g.getGeometryN(0); Polygon b = (Polygon) g.getGeometryN(1); assertEquals(1, a.getNumInteriorRing()); // each polygon has hole assertEquals(1, b.getNumInteriorRing()); // each polygon has hole - - // note backwards conversion is formatted differently to input + + // note backwards conversion is formatted differently to input assertEquals(PConstants.GROUP, toPShape(g).getFamily()); - assertEquals(2, toPShape(g).getChildCount()); + assertEquals(2, toPShape(g).getChildCount()); + } + + @Test + void testClosedLineStringToUnfilledPath() { + // Closed LineString: first == last + Coordinate c1 = new Coordinate(0, 0); + Coordinate c2 = new Coordinate(10, 0); + Coordinate c3 = new Coordinate(10, 10); + Coordinate c4 = new Coordinate(0, 10); + + Coordinate[] coords = new Coordinate[] { c1, c2, c3, c4, c1 }; + final LineString ls = GEOM_FACTORY.createLineString(coords); + assertTrue(ls.isClosed()); + + final PShape shape = toPShape(ls); + + assertEquals(PShape.PATH, shape.getFamily()); + assertFalse(isFilled(shape), "LineStrings must never be filled, even if closed"); + + // toPShape() skips the duplicated closing coordinate + assertEquals(coords.length - 1, shape.getVertexCount()); + for (int i = 0; i < coords.length - 1; i++) { + assertTrue(pointsAreEqual(coords[i], shape.getVertex(i))); + } + } + + @Test + void testLinearRingToFilledPolygon() { + // LinearRing is closed by definition and must be treated as a filled polygon + Coordinate c1 = new Coordinate(0, 0); + Coordinate c2 = new Coordinate(10, 0); + Coordinate c3 = new Coordinate(10, 10); + Coordinate c4 = new Coordinate(0, 10); + + Coordinate[] coords = new Coordinate[] { c1, c2, c3, c4, c1 }; + final LinearRing ring = GEOM_FACTORY.createLinearRing(coords); + assertTrue(ring.isClosed()); + + final PShape shape = toPShape(ring); + + assertEquals(PShape.PATH, shape.getFamily()); + assertTrue(!isFilled(shape), "LinearRings should not be treated as filled polygons"); + + // toPShape() skips the duplicated closing coordinate + assertEquals(coords.length - 1, shape.getVertexCount()); + for (int i = 0; i < coords.length - 1; i++) { + assertTrue(pointsAreEqual(coords[i], shape.getVertex(i))); + } + } + + @Test + void testClosedPathKindPathToClosedLineString() { + final PShape shape = new PShape(PShape.PATH); + + // closed + kind=PATH => lineal (closed LineString), not Polygon + shape.beginShape(PConstants.PATH); + shape.vertex(0, 0); + 
shape.vertex(10, 0); + shape.vertex(10, 10); + shape.vertex(0, 10); + shape.endShape(PConstants.CLOSE); + + final Geometry g = fromPShape(shape); + + assertEquals(Geometry.TYPENAME_LINESTRING, g.getGeometryType()); + assertEquals(shape.getVertexCount() + 1, g.getCoordinates().length); // closed adds final coord + assertTrue(g.getCoordinates()[0].equals2D(g.getCoordinates()[g.getCoordinates().length - 1])); + } + + @Test + void testClosedPathKindPolygonToPolygon() { + final PShape shape = new PShape(PShape.PATH); + + // closed + kind=POLYGON => Polygon + shape.beginShape(PConstants.POLYGON); + shape.vertex(0, 0); + shape.vertex(10, 0); + shape.vertex(0, 10); + shape.endShape(PConstants.CLOSE); + + final Geometry g = fromPShape(shape); + + assertEquals(Geometry.TYPENAME_POLYGON, g.getGeometryType()); + assertEquals(shape.getVertexCount() + 1, g.getCoordinates().length); // polygon exterior ring is closed + } + + @Test + void testUnclosedPolygonKindToLineString() { + final PShape shape = new PShape(PShape.PATH); + + shape.setKind(PConstants.POLYGON); + + shape.beginShape(); + shape.vertex(0, 0); + shape.vertex(10, 0); + shape.vertex(10, 10); + shape.vertex(0, 10); + shape.endShape(PConstants.OPEN); // unclosed + + final Geometry g = fromPShape(shape); + + // POLYGON kind only implies Polygon when actually closed (or has holes) + assertEquals(Geometry.TYPENAME_LINESTRING, g.getGeometryType()); + assertEquals(shape.getVertexCount(), g.getCoordinates().length); + + for (int i = 0; i < g.getCoordinates().length; i++) { + assertTrue(pointsAreEqual(g.getCoordinates()[i], shape.getVertex(i))); + } + } + + @Test + void testMultiLinestringToPaths_UnfilledEvenIfClosed() { + Coordinate c1 = new Coordinate(0, 0); + Coordinate c2 = new Coordinate(10, 0); + Coordinate c3 = new Coordinate(0, 10); + Coordinate c4 = new Coordinate(10, 10); + + // closed + Coordinate[] coords1 = new Coordinate[] { c1, c2, c3, c1 }; + final LineString path1 = GEOM_FACTORY.createLineString(coords1); + assertTrue(path1.isClosed()); + + // open + Coordinate[] coords2 = new Coordinate[] { c4, c2, c1, c3 }; + final LineString path2 = GEOM_FACTORY.createLineString(coords2); + assertFalse(path2.isClosed()); + + final Geometry g = GEOM_FACTORY.createMultiLineString(new LineString[] { path1, path2 }); + + final PShape shape = toPShape(g); + assertEquals(PConstants.GROUP, shape.getFamily()); + assertEquals(g.getNumGeometries(), shape.getChildCount()); + + // All children must be PATH and must not be filled (regardless of being + // closed/open) + for (int k = 0; k < g.getNumGeometries(); k++) { + final PShape child = shape.getChild(k); + assertEquals(PShape.PATH, child.getFamily()); + assertFalse(isFilled(child), "LineStrings must never be filled"); + } } @Test @@ -369,7 +509,7 @@ void testVertexRounding() { assertEquals(1000, shape.getVertex(2).x); assertEquals(0, shape.getVertex(2).y); } - + @Test void testVertexRounding1DP() { PShape shape = new PShape(PShape.GEOMETRY); @@ -378,9 +518,9 @@ void testVertexRounding1DP() { shape.vertex(10, -10); shape.vertex(999.34f, 0.049f); shape.endShape(PConstants.CLOSE); - + shape = PGS_Conversion.roundVertexCoords(shape, 1); - + assertEquals(12.5, shape.getVertex(0).x, 1e-5); assertEquals(-97.2, shape.getVertex(0).y, 1e-5); assertEquals(10, shape.getVertex(1).x, 1e-5); @@ -412,7 +552,7 @@ void testDuplicateVertices() { assertEquals(shape.getVertex(8), processed.getVertex(4)); assertEquals(5, processed.getVertexCount()); } - + @Test void testCopy() { PShape a = 
PGS_Construction.createSierpinskiCurve(0, 0, 10, 3); @@ -421,18 +561,18 @@ void testCopy() { PGS_Conversion.setAllFillColor(group, 1337); PShapeData d = new PShapeData(group.getChild(0)); assertEquals(1337, d.fillColor); - + PShape copy = PGS_Conversion.copy(group); // test geom structure preserved assertTrue(PGS_ShapePredicates.equalsNorm(group, copy)); - - copy.getChild(0).setVertex(0, -999,-999); // shouldn't change group + + copy.getChild(0).setVertex(0, -999, -999); // shouldn't change group assertFalse(PGS_ShapePredicates.equalsNorm(group, copy)); - + // test styling preserved d = new PShapeData(copy.getChild(0)); - + assertEquals(1337, d.fillColor); } @@ -450,11 +590,13 @@ void testStylePreserved() { shape.setFill(col); shape.setStrokeWeight(11.11f); shape.setStroke(col); + shape.setName("test"); PShape processed = toPShape(fromPShape(shape)); assertEquals(col, PGS.getPShapeFillColor(processed)); assertEquals(col, PGS.getPShapeStrokeColor(processed)); assertEquals(11.11f, PGS.getPShapeStrokeWeight(processed)); + assertEquals("test", processed.getName()); final PShape path = new PShape(PShape.PATH); path.beginShape(); @@ -518,7 +660,7 @@ void testHexWKBIO() { assertTrue(PGS_ShapePredicates.equalsNorm(shape, in)); } - + @Test void testEncodedPolylineIO() { final PShape shape = new PShape(PShape.GEOMETRY); @@ -527,25 +669,10 @@ void testEncodedPolylineIO() { shape.vertex(10, 0); shape.vertex(0, 11); shape.endShape(PConstants.CLOSE); - + String encoding = PGS_Conversion.toEncodedPolyline(shape); PShape in = PGS_Conversion.fromEncodedPolyline(encoding); - - assertTrue(PGS_ShapePredicates.equalsNorm(shape, in)); - } - - @Test - void testGeoJSONIO() { - final PShape shape = new PShape(PShape.GEOMETRY); - shape.beginShape(); - shape.vertex(0, 0); - shape.vertex(10.1f, 0); - shape.vertex(0, 10.7f); - shape.endShape(PConstants.CLOSE); - - String json = PGS_Conversion.toGeoJSON(shape); - PShape in = PGS_Conversion.fromGeoJSON(json); - + assertTrue(PGS_ShapePredicates.equalsNorm(shape, in)); } @@ -563,7 +690,7 @@ void testJava2DIO() { assertTrue(PGS_ShapePredicates.equalsNorm(shape, in)); } - + @Test void testArrayIO() { final PShape shape = new PShape(PShape.GEOMETRY); @@ -572,25 +699,37 @@ void testArrayIO() { shape.vertex(10, 0); shape.vertex(33, 10); shape.endShape(PConstants.CLOSE); - + double[][] s = PGS_Conversion.toArray(shape, true); PShape in = PGS_Conversion.fromArray(s, false); - + assertTrue(PGS_ShapePredicates.equalsNorm(shape, in)); } - + @Test void testToFromGraph() { - var segsS = PGS_SegmentSet.toPShape(PGS_SegmentSet.graphMatchedSegments(PGS_PointSet.poisson(50, 50, 950, 950, 20, 0))); + var segs = PGS_SegmentSet.graphMatchedSegments(PGS_PointSet.poisson(50, 50, 950, 950, 20, 0)); + segs = PGS_SegmentSet.filterAxisAligned(segs, Math.toRadians(1)); + var segsS = PGS_SegmentSet.toPShape(segs); segsS = PGS_Voronoi.compoundVoronoi(segsS); var meshIn = PGS_Meshing.simplifyMesh(segsS, 2, false); - meshIn = PGS_Meshing.stochasticMerge(meshIn, 4, 13137); - + meshIn = PGS_Meshing.stochasticMerge(meshIn, 4, 13137); + var meshOut = PGS_Conversion.fromGraph(PGS_Conversion.toGraph(meshIn)); - + assertTrue(PGS_ShapePredicates.equalsNorm(meshIn, meshOut)); - + + // test shape with holes + var carpet = PGS_Construction.createSierpinskiCarpet(1, 1, 2); + assertTrue(PGS_ShapePredicates.holes(carpet) > 0); + var ringOut = PGS_Conversion.fromGraph(PGS_Conversion.toGraph(carpet)); + assertTrue(PGS_ShapePredicates.equalsTopo(carpet, ringOut)); + } + + private static boolean isFilled(PShape shape) 
{ + PShapeData d = new PShapeData(shape); + return d.fill; } private static boolean pointsAreEqual(Coordinate c, PVector p) { diff --git a/src/test/java/micycle/pgs/PGS_MeshingTests.java b/src/test/java/micycle/pgs/PGS_MeshingTests.java index 20fecda3..5d5bb3f7 100644 --- a/src/test/java/micycle/pgs/PGS_MeshingTests.java +++ b/src/test/java/micycle/pgs/PGS_MeshingTests.java @@ -9,7 +9,7 @@ import processing.core.PShape; -public class PGS_MeshingTests { +class PGS_MeshingTests { @Test void testAreaMerge() { diff --git a/src/test/java/micycle/pgs/PGS_MorphologyGroupShapeTests.java b/src/test/java/micycle/pgs/PGS_MorphologyGroupShapeTests.java index d7f530e7..d40c65fc 100644 --- a/src/test/java/micycle/pgs/PGS_MorphologyGroupShapeTests.java +++ b/src/test/java/micycle/pgs/PGS_MorphologyGroupShapeTests.java @@ -1,13 +1,19 @@ package micycle.pgs; import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertThrows; import static org.junit.jupiter.api.Assumptions.assumeTrue; +import java.util.List; +import java.util.function.Function; + import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Disabled; import org.junit.jupiter.api.Test; import processing.core.PConstants; import processing.core.PShape; +import processing.core.PVector; /** * Tests to determine which methods from {@link micycle.pgs.PGS_Morphology @@ -19,6 +25,13 @@ class PGS_MorphologyGroupShapeTests { private PShape GROUP_SHAPE; + // Reused auxiliary inputs for methods that need additional arguments + private PShape minkShape; + private PShape toShape; + private PVector pinchPoint; + private List arapFrom; + private List arapTo; + @BeforeEach /** * Recreate the test shape before each test case in case some methods mutate the @@ -45,113 +58,199 @@ void prepareGroupShape() { GROUP_SHAPE.setKind(PConstants.GROUP); GROUP_SHAPE.addChild(a); GROUP_SHAPE.addChild(b); + + // Minkowski operand shape + minkShape = new PShape(PShape.PATH); + minkShape.beginShape(); + minkShape.vertex(0, 0); + minkShape.vertex(5, 0); + minkShape.vertex(5, 5); + minkShape.vertex(0, 5); + minkShape.endShape(PConstants.CLOSE); + + // "to" shape for interpolate() (same structure: GROUP with 2 children) + final PShape a2 = new PShape(PShape.GEOMETRY); + a2.beginShape(); + a2.vertex(2, 2); + a2.vertex(12, 2); + a2.vertex(12, 12); + a2.vertex(2, 12); + a2.endShape(PConstants.CLOSE); + + final PShape b2 = new PShape(PShape.GEOMETRY); + b2.beginShape(); + b2.vertex(80, 80); + b2.vertex(720, 80); + b2.vertex(720, 720); + b2.vertex(80, 720); + b2.endShape(PConstants.CLOSE); + + toShape = new PShape(PConstants.GROUP); + toShape.setKind(PConstants.GROUP); + toShape.addChild(a2); + toShape.addChild(b2); + + pinchPoint = new PVector(5, 5); + + // ARAP control points (simple, deterministic) + arapFrom = List.of(new PVector(0, 0), new PVector(10, 0), new PVector(10, 10), new PVector(0, 10)); + arapTo = List.of(new PVector(0, 0), new PVector(12, -2), new PVector(9, 13), new PVector(-1, 11)); } - @Test - void testBuffer() { + private void assertGroupInGroupOut(Function op) { assumeTrue(GROUP_SHAPE.getChildCount() == 2); - PShape out = PGS_Morphology.buffer(GROUP_SHAPE, -1); + + PShape out = op.apply(GROUP_SHAPE); + + // For "supports GROUP" tests, we expect the output to preserve + // multipolygon-ness. 
assertEquals(2, out.getChildCount()); } @Test - void testChaikinCut() { - assumeTrue(GROUP_SHAPE.getChildCount() == 2); - PShape out = PGS_Morphology.chaikinCut(GROUP_SHAPE, 0.5, 2); - assertEquals(2, out.getChildCount()); + void testBuffer() { + assertGroupInGroupOut(s -> PGS_Morphology.buffer(s, -1)); } @Test - void testErosionDilation() { - assumeTrue(GROUP_SHAPE.getChildCount() == 2); - PShape out = PGS_Morphology.erosionDilation(GROUP_SHAPE, 0); - assertEquals(2, out.getChildCount()); + void testVariableBuffer() { + assertGroupInGroupOut(s -> PGS_Morphology.variableBuffer(s, -2, 2)); } @Test - void testFieldWarp() { - assumeTrue(GROUP_SHAPE.getChildCount() == 2); - PShape out = PGS_Morphology.fieldWarp(GROUP_SHAPE, 10, 1, false); - assertEquals(2, out.getChildCount()); + void testVariableBufferCallback() { + assertGroupInGroupOut(s -> PGS_Morphology.variableBuffer(s, (coord, t) -> t * 10 + 1)); } @Test - void testMinkDifference() { - assumeTrue(GROUP_SHAPE.getChildCount() == 2); - final PShape mink = new PShape(PShape.PATH); - mink.beginShape(); - mink.vertex(0, 0); - mink.vertex(5, 0); - mink.vertex(5, 5); - mink.vertex(0, 5); - mink.endShape(PConstants.CLOSE); - - PShape out = PGS_Morphology.minkDifference(GROUP_SHAPE, mink); - assertEquals(2, out.getChildCount()); + void testNormalisedErosion() { + assertGroupInGroupOut(s -> PGS_Morphology.normalisedErosion(s, 0.1)); } @Test - void testMinkSum() { - assumeTrue(GROUP_SHAPE.getChildCount() == 2); - final PShape mink = new PShape(PShape.PATH); - mink.beginShape(); - mink.vertex(0, 0); - mink.vertex(5, 0); - mink.vertex(5, 5); - mink.vertex(0, 5); - mink.endShape(PConstants.CLOSE); - - PShape out = PGS_Morphology.minkSum(GROUP_SHAPE, mink); - assertEquals(2, out.getChildCount()); + void testErosionDilation() { + assertGroupInGroupOut(s -> PGS_Morphology.erosionDilation(s, 1)); } @Test - void testRadialWarp() { - assumeTrue(GROUP_SHAPE.getChildCount() == 2); - PShape out = PGS_Morphology.radialWarp(GROUP_SHAPE, 10, 1, false); - assertEquals(2, out.getChildCount()); + void testDilationErosion() { + assertGroupInGroupOut(s -> PGS_Morphology.dilationErosion(s, 1)); } @Test - void testRound() { - assumeTrue(GROUP_SHAPE.getChildCount() == 2); - PShape out = PGS_Morphology.round(GROUP_SHAPE, 0.5); - assertEquals(2, out.getChildCount()); + void testSimplify() { + assertGroupInGroupOut(s -> PGS_Morphology.simplify(s, 1)); } @Test - void testSimplify() { - assumeTrue(GROUP_SHAPE.getChildCount() == 2); - PShape out = PGS_Morphology.simplify(GROUP_SHAPE, 1); - assertEquals(2, out.getChildCount()); + void testSimplifyVW() { + assertGroupInGroupOut(s -> PGS_Morphology.simplifyVW(s, 1)); } @Test void testSimplifyTopology() { - assumeTrue(GROUP_SHAPE.getChildCount() == 2); - PShape out = PGS_Morphology.simplifyTopology(GROUP_SHAPE, 1); - assertEquals(2, out.getChildCount()); + assertGroupInGroupOut(s -> PGS_Morphology.simplifyTopology(s, 1)); } @Test - void testSimplifyVW() { - assumeTrue(GROUP_SHAPE.getChildCount() == 2); - PShape out = PGS_Morphology.simplifyVW(GROUP_SHAPE, 1); - assertEquals(2, out.getChildCount()); + void testSimplifyDCE() { + assertGroupInGroupOut(s -> PGS_Morphology.simplifyDCE(s, 0.25)); + } + + @Test + void testSimplifyHobby() { + assertGroupInGroupOut(s -> PGS_Morphology.simplifyHobby(s, 1)); + } + + @Test + void testMinkSum() { + assertGroupInGroupOut(s -> PGS_Morphology.minkSum(s, minkShape)); + } + + @Test + void testMinkDifference() { + assertGroupInGroupOut(s -> PGS_Morphology.minkDifference(s, minkShape)); } @Test void 
testSmooth() { - assumeTrue(GROUP_SHAPE.getChildCount() == 2); - PShape out = PGS_Morphology.smooth(GROUP_SHAPE, 0.5); - assertEquals(2, out.getChildCount()); + assertGroupInGroupOut(s -> PGS_Morphology.smooth(s, 0.5)); } @Test void testSmoothGaussian() { - assumeTrue(GROUP_SHAPE.getChildCount() == 2); - PShape out = PGS_Morphology.smoothGaussian(GROUP_SHAPE, 10); - assertEquals(2, out.getChildCount()); + assertGroupInGroupOut(s -> PGS_Morphology.smoothGaussian(s, 10)); + } + + @Test + void testSmoothGaussianNormalised() { + assertGroupInGroupOut(s -> PGS_Morphology.smoothGaussianNormalised(s, 0.25)); + } + + @Test + void testSmoothEllipticFourier() { + assertGroupInGroupOut(s -> PGS_Morphology.smoothEllipticFourier(s, 12)); + } + + @Test + void testSmoothLaneRiesenfeld() { + assertGroupInGroupOut(s -> PGS_Morphology.smoothLaneRiesenfeld(s, 3, 2, 0.5)); } -} + @Test + void testRound() { + assertGroupInGroupOut(s -> PGS_Morphology.round(s, 0.5)); + } + + @Test + void testChaikinCut() { + assertGroupInGroupOut(s -> PGS_Morphology.chaikinCut(s, 0.5, 2)); + } + + @Test + void testRadialWarp() { + assertGroupInGroupOut(s -> PGS_Morphology.radialWarp(s, 10, 1, false)); + } + + @Test + void testSineWarp() { + assertGroupInGroupOut(s -> PGS_Morphology.sineWarp(s, 5, 2, 0)); + } + + @Test + void testFieldWarp() { + assertGroupInGroupOut(s -> PGS_Morphology.fieldWarp(s, 10, 1, 0.0, false, 1337L)); + } + + @Test + void testPinchWarp() { + assertGroupInGroupOut(s -> PGS_Morphology.pinchWarp(s, pinchPoint, 0.75)); + } + + @Test + @Disabled // returns the input unchanged if not polygonal + void testInterpolate() { + assertGroupInGroupOut(s -> PGS_Morphology.interpolate(s, toShape, 0.5)); + } + + @Test + void testArapDeform() { + // NOTE doesn't support GROUP + assertThrows(Exception.class, () -> PGS_Morphology.arapDeform(GROUP_SHAPE, arapFrom, arapTo)); + } + + @Test + void testRegularise() { + assertGroupInGroupOut(s -> PGS_Morphology.regularise(s, 0.5)); + } + + @Test + void testSmoothBezierFit() { + assertGroupInGroupOut(s -> PGS_Morphology.smoothBezierFit(s, 1)); + } + + @Test + void testReducePrecision() { + assertGroupInGroupOut(s -> PGS_Morphology.reducePrecision(s, 1)); + } +} \ No newline at end of file diff --git a/src/test/java/micycle/pgs/PGS_MorphologyTests.java b/src/test/java/micycle/pgs/PGS_MorphologyTests.java index 16ec9578..773a19a4 100644 --- a/src/test/java/micycle/pgs/PGS_MorphologyTests.java +++ b/src/test/java/micycle/pgs/PGS_MorphologyTests.java @@ -1,6 +1,7 @@ package micycle.pgs; import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; import static org.junit.jupiter.api.Assertions.assertNotNull; import static org.junit.jupiter.api.Assertions.assertTrue; @@ -17,13 +18,14 @@ import org.locationtech.jts.geom.Polygon; import micycle.pgs.commons.DiscreteCurveEvolution.DCETerminationCallback; +import processing.core.PConstants; import processing.core.PShape; -public class PGS_MorphologyTests { +class PGS_MorphologyTests { private static final GeometryFactory GF = new GeometryFactory(); - private PShape inShape; + private PShape inShape, a, b; private Geometry inGeom; private double[] originalAreas; private int[] originalHoleCounts; @@ -55,6 +57,22 @@ public void setUpPolygons() { originalHoleCounts[i] = ((Polygon) gi).getNumInteriorRing(); originalAreas[i] = PGS_ShapePredicates.area(inShape.getChild(i)); } + + a = new PShape(PShape.GEOMETRY); + a.beginShape(); + a.vertex(0, 0); + a.vertex(10, 0); + a.vertex(10, 10); + 
a.vertex(0, 10); + a.endShape(PConstants.CLOSE); + + b = new PShape(PShape.GEOMETRY); + b.beginShape(); + b.vertex(70, 70); + b.vertex(710, 70); + b.vertex(710, 710); + b.vertex(70, 710); + b.endShape(PConstants.CLOSE); } @Test @@ -132,15 +150,29 @@ public void testSmoothGaussian() { assertAreasDecreased(outShape, "after smoothing"); } + @Test + public void testInterpolation() { + var from = a; + var to = b; + + var morph = PGS_Morphology.interpolate(List.of(from, to), 3); + + assertTrue(PGS_ShapePredicates.equalsTopo(from, morph.getChild(0))); + assertTrue(PGS_ShapePredicates.equalsTopo(to, morph.getChild(2))); + + assertFalse(PGS_ShapePredicates.equalsTopo(to, morph.getChild(1))); + assertFalse(PGS_ShapePredicates.equalsTopo(from, morph.getChild(1))); + } + /* Helper factories and geometry builders */ private static Polygon buildPolygonWithHoles(LinearRing exterior, LinearRing[] holes) { return GF.createPolygon(exterior, holes); } -// Build a rectangular ring with two spike points per edge. -// For spikesOutward = true: spikes point outside the rectangle bounds. -// For spikesOutward = false (holes): spikes point toward the rectangle center. + // Build a rectangular ring with two spike points per edge. + // For spikesOutward = true: spikes point outside the rectangle bounds. + // For spikesOutward = false (holes): spikes point toward the rectangle center. private static LinearRing spikyRectRing(double minX, double minY, double maxX, double maxY, double amplitude, boolean spikesOutward) { List coords = new ArrayList<>(); @@ -175,8 +207,6 @@ private static double lerp(double a, double b, double t) { return a + (b - a) * t; } - /* New helper assertion methods to remove duplication */ - private Geometry getOutputGeom(PShape outShape) { assertNotNull(outShape, "Output shape must not be null"); Geometry outGeom = PGS_Conversion.fromPShape(outShape); diff --git a/src/test/java/micycle/pgs/PGS_PolygonisationTests.java b/src/test/java/micycle/pgs/PGS_PolygonisationTests.java new file mode 100644 index 00000000..2d547d9d --- /dev/null +++ b/src/test/java/micycle/pgs/PGS_PolygonisationTests.java @@ -0,0 +1,77 @@ +package micycle.pgs; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertTrue; + +import java.util.Collection; +import java.util.function.Function; + +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; + +import processing.core.PShape; +import processing.core.PVector; + +class PGS_PolygonisationTests { + + private Collection points; + + @BeforeEach + void setup() { + points = PGS_PointSet.random(0, 0, 1, 1, 1000, 1337L); + } + + private void assertValidPolygonisation(Function, PShape> polygoniserMethod) { + final PShape out = polygoniserMethod.apply(points); + + assertNotNull(out, "Polygonisation returned null."); + assertTrue(PGS_ShapePredicates.isSimple(out), "Polygonisation output is not a simple polygon."); + assertEquals(points.size(), out.getVertexCount(), "Unexpected vertex count: polygon does not appear to use exactly all input points."); + } + + @Test + void testMinArea() { + assertValidPolygonisation(PGS_Polygonisation::minArea); + } + + @Test + void testMaxArea() { + assertValidPolygonisation(PGS_Polygonisation::maxArea); + } + + @Test + void testMinPerimeter() { + assertValidPolygonisation(PGS_Polygonisation::minPerimeter); + } + + @Test + void testHorizontal() { + assertValidPolygonisation(PGS_Polygonisation::horizontal); + 
} + + @Test + void testVertical() { + assertValidPolygonisation(PGS_Polygonisation::vertical); + } + + @Test + void testHilbert() { + assertValidPolygonisation(PGS_Polygonisation::hilbert); + } + + @Test + void testCircular() { + assertValidPolygonisation(PGS_Polygonisation::circular); + } + + @Test + void testAngular() { + assertValidPolygonisation(PGS_Polygonisation::angular); + } + + @Test + void testOnion() { + assertValidPolygonisation(PGS_Polygonisation::onion); + } +} \ No newline at end of file diff --git a/src/test/java/micycle/pgs/PGS_ProcessingGroupShapeTests.java b/src/test/java/micycle/pgs/PGS_ProcessingGroupShapeTests.java index 13fe7f94..f3aba1e7 100644 --- a/src/test/java/micycle/pgs/PGS_ProcessingGroupShapeTests.java +++ b/src/test/java/micycle/pgs/PGS_ProcessingGroupShapeTests.java @@ -29,16 +29,16 @@ void prepareGroupShape() { a.beginShape(); a.vertex(0, 0); a.vertex(10, 0); - a.vertex(0, 10); a.vertex(10, 10); + a.vertex(0, 10); a.endShape(PConstants.CLOSE); final PShape b = new PShape(PShape.GEOMETRY); b.beginShape(); b.vertex(70, 70); b.vertex(710, 70); - b.vertex(70, 710); b.vertex(710, 710); + b.vertex(70, 710); b.endShape(PConstants.CLOSE); GROUP_SHAPE = new PShape(PConstants.GROUP); diff --git a/src/test/java/micycle/pgs/PGS_ProcessingTests.java b/src/test/java/micycle/pgs/PGS_ProcessingTests.java new file mode 100644 index 00000000..9870595c --- /dev/null +++ b/src/test/java/micycle/pgs/PGS_ProcessingTests.java @@ -0,0 +1,132 @@ +package micycle.pgs; + +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertTrue; + +import java.util.List; +import org.junit.jupiter.api.Test; +import org.locationtech.jts.geom.Coordinate; +import org.locationtech.jts.noding.BasicSegmentString; +import org.locationtech.jts.noding.SegmentString; + +import processing.core.PShape; +import processing.core.PVector; + +class PGS_ProcessingTests { + + @Test + void extractPerimeter() { + var r = PGS_Construction.createRect(0, 0, 1, 1, 0); + var b1 = PGS_Processing.extractPerimeter(r, 0, 0.5); + assertEquals(2, PGS_ShapePredicates.length(b1), 1e-6); + + var b2 = PGS_Processing.extractPerimeter(r, 0, 2); + assertEquals(4, PGS_ShapePredicates.length(b2), 1e-6); + + // todo +// var b3 = PGS_Processing.extractPerimeter(r, 1, 0); +// assertEquals(4, PGS_ShapePredicates.length(b3), 1e-6); +// assertFalse(boundary.isClosed()); + } + + @Test + void intersectionPoints() { + // Bow / X-shaped polyline in a single PATH - cross at (5,5) + PShape path = new PShape(PShape.PATH); + path.beginShape(); + path.vertex(0, 0); + path.vertex(10, 10); + path.vertex(0, 10); + path.vertex(10, 0); + path.endShape(); + + List hits = PGS_Processing.intersectionPoints(path); + + assertEquals(1, hits.size()); + assertContainsPoint(hits, 5, 5); + } + + @Test + void intersections() { + // 1) Proper crossing (interior-interior) -> always included + { + SegmentString a = seg(0, 0, 10, 10); + SegmentString b = seg(0, 10, 10, 0); + + List hits = PGS_Processing.intersections(List.of(a, b), false); + + assertEquals(1, hits.size()); + assertContainsPoint(hits, 5, 5); + } + + // 2) Endpoint touch (T-junction): endpoint of one hits interior of other + // excluded when countEndpointTouches=false; included when true + { + SegmentString a = seg(0, 0, 10, 0); // horizontal + SegmentString b = seg(5, 0, 5, 10); // vertical starting at (5,0) (endpoint touch) + + List hitsNoTouches = 
PGS_Processing.intersections(List.of(a, b), false); + assertEquals(0, hitsNoTouches.size()); + + List hitsWithTouches = PGS_Processing.intersections(List.of(a, b), true); + assertEquals(1, hitsWithTouches.size()); + assertContainsPoint(hitsWithTouches, 5, 0); + } + + // 3) Collinear overlap -> should return overlap endpoints + { + SegmentString a = seg(0, 0, 10, 0); + SegmentString b = seg(5, 0, 15, 0); + + List hits = PGS_Processing.intersections(List.of(a, b), false); + + assertEquals(2, hits.size()); + assertContainsPoint(hits, 5, 0); + assertContainsPoint(hits, 10, 0); + } + + // 4) De-duplication: multiple segments crossing at same point -> only one + // output point + { + SegmentString a = seg(0, 0, 10, 10); + SegmentString b = seg(0, 10, 10, 0); + SegmentString c = seg(5, -10, 5, 20); // also passes through (5,5) + + List hits = PGS_Processing.intersections(List.of(a, b, c), false); + + assertEquals(1, hits.size()); + assertContainsPoint(hits, 5, 5); + } + + // 5) Endpoint touch (standalone) + { + SegmentString a = seg(0, 0, 1, 1); + SegmentString b = seg(1, 1, 2, 2); + + List hits = PGS_Processing.intersections(List.of(a, b), false); + + assertEquals(0, hits.size()); + + hits = PGS_Processing.intersections(List.of(a, b), true); + + assertEquals(1, hits.size()); + assertContainsPoint(hits, 1, 1); + } + } + + private static SegmentString seg(double x0, double y0, double x1, double y1) { + return new BasicSegmentString(new Coordinate[] { new Coordinate(x0, y0), new Coordinate(x1, y1) }, null); + } + + private static void assertContainsPoint(List pts, float x, float y) { + final float eps = 1e-6f; + for (PVector p : pts) { + if (Math.abs(p.x - x) < eps && Math.abs(p.y - y) < eps) { + return; + } + } + assertTrue(false, "Expected point (" + x + ", " + y + ") not found in " + pts); + } + +} diff --git a/src/test/java/micycle/pgs/PGS_ShapeBooleanTests.java b/src/test/java/micycle/pgs/PGS_ShapeBooleanTests.java index 87e1c9f4..66a04b38 100644 --- a/src/test/java/micycle/pgs/PGS_ShapeBooleanTests.java +++ b/src/test/java/micycle/pgs/PGS_ShapeBooleanTests.java @@ -3,6 +3,8 @@ import static org.junit.jupiter.api.Assertions.assertEquals; import static org.junit.jupiter.api.Assertions.assertTrue; +import java.util.List; + import org.junit.jupiter.api.Test; import processing.core.PConstants; @@ -11,20 +13,86 @@ class PGS_ShapeBooleanTests { @Test - void testPolygonLineIntersection() { // a.k.a clipping - PShape square = new PShape(PShape.GEOMETRY); // 10x10 square - square.beginShape(); - square.vertex(0, 0); - square.vertex(10, 0); - square.vertex(10, 10); - square.vertex(0, 10); - square.endShape(PConstants.CLOSE); // close affects rendering only -- does not append another vertex - - PShape line = new PShape(PShape.PATH); - line.beginShape(PConstants.LINES); - line.vertex(-20, 5); - line.vertex(20, 5); - line.endShape(); + void testPolygonPolygonUnion() { + PShape a = createSquare(0, 0, 10); + PShape b = createSquare(5, 0, 10); + PShape union = PGS_ShapeBoolean.union(a, b); + + // Expected area: (10*10) + (10*10) - (5*10) = 150 + assertEquals(150.0, PGS_ShapePredicates.area(union), 1e-6); + assertEquals(1, union.getChildCount() == 0 ? 
1 : union.getChildCount()); + } + + @Test + void testSelfUnion() { + PShape a = createSquare(0, 0, 10); + PShape b = createSquare(5, 0, 10); + var s = PGS_Conversion.flatten(a, b); + PShape union = PGS_ShapeBoolean.union(s); + + // Expected area: (10*10) + (10*10) - (5*10) = 150 + assertEquals(150.0, PGS_ShapePredicates.area(union), 1e-6); + assertEquals(1, union.getChildCount() == 0 ? 1 : union.getChildCount()); + } + + @Test + void testPolygonPolygonIntersection() { + PShape a = createSquare(0, 0, 10); + PShape b = createSquare(5, 5, 10); + PShape intersection = PGS_ShapeBoolean.intersect(a, b); + + // Expected area: 5*5 = 25 + assertEquals(25.0, PGS_ShapePredicates.area(intersection), 1e-6); + } + + @Test + void testPolygonPolygonSubtraction() { + PShape a = createSquare(0, 0, 10); + PShape b = createSquare(5, 0, 10); + PShape difference = PGS_ShapeBoolean.subtract(a, b); + + // Expected area: 100 - 50 = 50 + assertEquals(50.0, PGS_ShapePredicates.area(difference), 1e-6); + } + + @Test + void testPolygonPolygonSymDifference() { + PShape a = createSquare(0, 0, 10); + PShape b = createSquare(5, 0, 10); + PShape symDiff = PGS_ShapeBoolean.symDifference(a, b); + + // (A union B) - (A intersect B) = 150 - 50 = 100 + assertEquals(100.0, PGS_ShapePredicates.area(symDiff), 1e-6); + } + + @Test + void testLineLineIntersection() { + PShape line1 = createLine(0, 5, 10, 5); + PShape line2 = createLine(5, 0, 5, 10); + PShape intersection = PGS_ShapeBoolean.intersect(line1, line2); + + // Intersection of two lines is a point + assertEquals(1, intersection.getVertexCount()); + assertEquals(5, intersection.getVertexX(0)); + assertEquals(5, intersection.getVertexY(0)); + } + + @Test + void testLineLineSubtraction() { + PShape line1 = createLine(0, 5, 10, 5); + PShape line2 = createLine(5, 5, 15, 5); + PShape diff = PGS_ShapeBoolean.subtract(line1, line2); + + // (0,5 -> 10,5) minus (5,5 -> 15,5) should be (0,5 -> 5,5) + assertEquals(2, diff.getVertexCount()); + assertEquals(0, diff.getVertexX(0)); + assertEquals(5, diff.getVertexX(1)); + } + + @Test + void testPolygonLineIntersection() { + PShape square = createSquare(0, 0, 10); + PShape line = createLine(-5, 5, 15, 5); PShape intersection = PGS_ShapeBoolean.intersect(square, line); @@ -35,44 +103,90 @@ void testPolygonLineIntersection() { // a.k.a clipping @Test void testPolygonLineDifference() { - PShape square = new PShape(PShape.GEOMETRY); // 10x10 square - square.beginShape(); - square.vertex(0, 0); - square.vertex(10, 0); - square.vertex(10, 10); - square.vertex(0, 10); - square.endShape(PConstants.CLOSE); // close affects rendering only -- does not append another vertex - - PShape line = new PShape(PShape.PATH); - line.beginShape(PConstants.LINES); - line.vertex(-20, 5); - line.vertex(20, 5); - line.endShape(); + PShape square = createSquare(0, 0, 10); + PShape line = createLine(-5, 5, 15, 5); PShape difference = PGS_ShapeBoolean.subtract(square, line); - assertEquals(4 + 2, difference.getVertexCount()); - assertTrue(PGS_ShapePredicates.equalsTopo(square, difference)); + // Difference between poly and line is usually the poly itself (with points + // added at intersection) + // unless it's subtraction logic specifically handles it. 
+ assertTrue(PGS_ShapePredicates.area(difference) > 99.9); + } + + @Test + void testMultiUnion() { + PShape s1 = createSquare(0, 0, 10); + PShape s2 = createSquare(5, 0, 10); + PShape s3 = createSquare(0, 5, 10); + + PShape union = PGS_ShapeBoolean.union(s1, s2, s3); + // Area: 3 * 100 - (overlap 1&2 = 50) - (overlap 1&3 = 50) - (overlap 2&3 = 25) + // + (overlap 1&2&3 = 25) + // 300 - 50 - 50 - 25 + 25 = 200 + assertEquals(200.0, PGS_ShapePredicates.area(union), 1e-6); + } + + @Test + void testMultiIntersection() { + PShape s1 = createSquare(0, 0, 10); + PShape s2 = createSquare(5, 0, 10); + PShape s3 = createSquare(5, 5, 10); + + PShape intersect = PGS_ShapeBoolean.intersect(s1, s2, s3); + // Intersection: [5,10]x[5,10] -> Area 25 + assertEquals(25.0, PGS_ShapePredicates.area(intersect), 1e-6); + } + + @Test + void testOverlapRegions() { + PShape s1 = createSquare(0, 0, 10); + PShape s2 = createSquare(5, 0, 10); + PShape s3 = createSquare(0, 5, 10); + + // Overlap regions are s1∩s2, s1∩s3, and s2∩s3∩s1 + PShape overlaps = PGS_ShapeBoolean.overlapRegions(List.of(s1, s2, s3), true); + // s1∩s2 is [5,10]x[0,10], area 50 + // s1∩s3 is [0,10]x[5,10], area 50 + // They overlap in [5,10]x[5,10], area 25 + // Total area: 50 + 50 - 25 = 75 + assertEquals(75.0, PGS_ShapePredicates.area(overlaps), 1e-6); } - + @Test - void testPolygonLineUnion() { - PShape square = new PShape(PShape.GEOMETRY); // 10x10 square - square.beginShape(); - square.vertex(0, 0); - square.vertex(10, 0); - square.vertex(10, 10); - square.vertex(0, 10); - square.endShape(PConstants.CLOSE); // close affects rendering only -- does not append another vertex - - PShape line = new PShape(PShape.PATH); - line.beginShape(PConstants.LINES); - line.vertex(-20, 5); - line.vertex(20, 5); - line.endShape(); - - PShape union = PGS_ShapeBoolean.union(square, line); - assertEquals(3, union.getChildCount()); - assertTrue(PGS_ShapePredicates.equalsTopo(square, union.getChild(0))); + void testUnionMesh() { + // Create two squares that share an edge + PShape s1 = createSquare(0, 0, 10); + PShape s2 = createSquare(10, 0, 10); + + PShape mesh = new PShape(PConstants.GROUP); + mesh.addChild(s1); + mesh.addChild(s2); + + PShape combined = PGS_ShapeBoolean.unionMesh(mesh); + // Should be a 20x10 rectangle + assertEquals(200.0, PGS_ShapePredicates.area(combined), 1e-6); + // Should be a single polygon (not a group) + assertTrue(combined.getChildCount() == 0); + } + + private static PShape createSquare(float x, float y, float size) { + PShape s = new PShape(PShape.GEOMETRY); + s.beginShape(); + s.vertex(x, y); + s.vertex(x + size, y); + s.vertex(x + size, y + size); + s.vertex(x, y + size); + s.endShape(PConstants.CLOSE); + return s; + } + + private static PShape createLine(float x1, float y1, float x2, float y2) { + PShape s = new PShape(PShape.PATH); + s.beginShape(); + s.vertex(x1, y1); + s.vertex(x2, y2); + s.endShape(); + return s; } } diff --git a/src/test/java/micycle/pgs/PGS_TransformationTests.java b/src/test/java/micycle/pgs/PGS_TransformationTests.java index 56c734f7..47dd4ff1 100644 --- a/src/test/java/micycle/pgs/PGS_TransformationTests.java +++ b/src/test/java/micycle/pgs/PGS_TransformationTests.java @@ -2,6 +2,7 @@ import static micycle.pgs.PGS_ShapePredicates.area; import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; import org.junit.jupiter.api.BeforeAll; import org.junit.jupiter.api.Test; @@ -41,7 +42,34 @@ void testScaleAreaTo() { @Test void testScale() { + // 
+        // Test polygon scaling
         assertEquals(100 * 1.5 * 1.5, area(PGS_Transformation.scale(square, 1.5)), EPSILON);
+
+        // Test empty geometry - should not throw an error
+        PShape emptyGeom = PGS_Conversion.fromWKT("POLYGON EMPTY");
+        PShape scaledEmpty = PGS_Transformation.scale(emptyGeom, 1.5);
+        assertNotNull(scaledEmpty);
+        assertEquals(0, scaledEmpty.getVertexCount());
+
+        // Test point (0-dimensional/puntal) - should not throw an error
+        PShape point = PGS_Conversion.fromWKT("POINT (10 10)");
+        PShape scaledPoint = PGS_Transformation.scale(point, 2.0);
+        assertNotNull(scaledPoint);
+
+        // Test linestring (1-dimensional/lineal) - should not throw an error
+        PShape line = PGS_Conversion.fromWKT("LINESTRING (0 0, 10 10)");
+        PShape scaledLine = PGS_Transformation.scale(line, 2.0);
+        assertNotNull(scaledLine);
+
+        // Test empty point - should not throw an error
+        PShape emptyPoint = PGS_Conversion.fromWKT("POINT EMPTY");
+        PShape scaledEmptyPoint = PGS_Transformation.scale(emptyPoint, 1.5);
+        assertNotNull(scaledEmptyPoint);
+
+        // Test empty linestring - should not throw an error
+        PShape emptyLine = PGS_Conversion.fromWKT("LINESTRING EMPTY");
+        PShape scaledEmptyLine = PGS_Transformation.scale(emptyLine, 1.5);
+        assertNotNull(scaledEmptyLine);
     }
 }
diff --git a/src/test/java/micycle/pgs/PGS_TriangulationTests.java b/src/test/java/micycle/pgs/PGS_TriangulationTests.java
index 3016f222..10114e3b 100644
--- a/src/test/java/micycle/pgs/PGS_TriangulationTests.java
+++ b/src/test/java/micycle/pgs/PGS_TriangulationTests.java
@@ -1,7 +1,10 @@
 package micycle.pgs;
 
 import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.assertNotNull;
+import static org.junit.jupiter.api.Assertions.assertTrue;
 
+import java.util.ArrayList;
 import java.util.List;
 
 import org.junit.jupiter.api.Test;
@@ -24,4 +27,38 @@ void testFromPoints() {
         assertEquals(tin.countTriangles().getCount(), triangulation.getChildCount());
     }
+    @Test
+    void testEarCutTriangulation() {
+        // Build a square via convexHull of 4 corners
+        List<PVector> corners = new ArrayList<>();
+        corners.add(new PVector(0, 0));
+        corners.add(new PVector(100, 0));
+        corners.add(new PVector(100, 100));
+        corners.add(new PVector(0, 100));
+
+        PShape square = PGS_Hull.convexHull(corners);
+        assertNotNull(square);
+        assertTrue(square.getVertexCount() >= 4);
+
+        PShape triangles = PGS_Triangulation.earCutTriangulation(square);
+        assertNotNull(triangles);
+
+        // A simple convex n-gon triangulates into (n-2) triangles.
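+        // (Here n = 4 for the square hull, so exactly 4 - 2 = 2 triangles are expected.)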
+        assertEquals(2, triangles.getChildCount());
+    }
+
+    @Test
+    void testRefine() {
+        List<PVector> points = PGS_PointSet.random(0, 0, 1500, 1500, 500, 321);
+        IIncrementalTin tin = PGS_Triangulation.delaunayTriangulationMesh(points);
+
+        int before = tin.countTriangles().getCount();
+
+        PGS_Triangulation.refine(tin, 20);
+
+        int after = tin.countTriangles().getCount();
+        assertTrue(after >= before);
+        assertTrue(tin.getVertices().size() >= points.size());
+    }
+
 }
\ No newline at end of file
diff --git a/src/test/java/micycle/pgs/PGS_VoronoiTests.java b/src/test/java/micycle/pgs/PGS_VoronoiTests.java
new file mode 100644
index 00000000..b16c64ce
--- /dev/null
+++ b/src/test/java/micycle/pgs/PGS_VoronoiTests.java
@@ -0,0 +1,44 @@
+package micycle.pgs;
+
+import static org.junit.jupiter.api.Assertions.assertEquals;
+
+import java.util.List;
+
+import org.junit.jupiter.api.BeforeAll;
+import org.junit.jupiter.api.Test;
+
+import processing.core.PVector;
+
+class PGS_VoronoiTests {
+
+    static final int N = 50;
+    static List<PVector> sites;
+    static double[] bounds;
+
+    @BeforeAll
+    static void initSites() {
+        sites = PGS_PointSet.random(200, 200, 800, 800, N, 0);
+        bounds = new double[] { 0, 0, 1000, 1000 };
+    }
+
+    @Test
+    void testVoronoi() {
+        assertEquals(N, PGS_Voronoi.innerVoronoi(sites, bounds).getChildCount());
+    }
+
+    @Test
+    void testManhattanVoronoi() {
+        assertEquals(N, PGS_Voronoi.manhattanVoronoi(sites, bounds).getChildCount());
+    }
+
+    @Test
+    void testPowerDiagram() {
+        assertEquals(N, PGS_Voronoi.powerDiagram(sites, bounds).getChildCount());
+    }
+
+    @Test
+    void testMultiplicativelyWeightedVoronoi() {
+        assertEquals(N, PGS_Voronoi.multiplicativelyWeightedVoronoi(sites, bounds).getChildCount());
+    }
+
+}