Binary file added docs/assets/Duckham_et_al.webp
26 changes: 26 additions & 0 deletions docs/reference/algorithms/charshape.md
@@ -1 +1,27 @@
# Charshape

The characteristic shape reduction method provided by PolyShell is a novel polygon reduction method built upon the
characteristic shape algorithm of Duckham et al.[^1]

[^1]: [M. Duckham, L. Kulik, M. Worboys, A. Galton, 2008. Efficient generation of simple polygons for characterizing
the shape of a set of points in the plane.](https://doi.org/10.1016/j.patcog.2008.03.023)

## Characteristic Shape Algorithm

Duckham et al. propose an efficient method for generating simple polygons which "characterize" a set of points in
the plane. The method computes the Delaunay triangulation of a point cloud, then iteratively removes outward-facing
edges until a shape is found which conforms to the point cloud without over-fitting.
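The edge-removal order at the heart of the method can be sketched with a priority queue. The following is a schematic Python sketch of the removal order only, not PolyShell's implementation or API; `removal_order`, its inputs, and the edge names are all hypothetical:

```python
import heapq


def removal_order(boundary_edges, eps):
    """Pop boundary edges longest-first; stop once the longest remaining
    edge is shorter than eps. heapq is a min-heap, so lengths are negated."""
    pq = [(-length, name) for name, length in boundary_edges.items()]
    heapq.heapify(pq)
    removed = []
    while pq:
        neg_len, name = heapq.heappop(pq)
        if -neg_len < eps:
            break  # max-heap: every remaining edge is also shorter than eps
        removed.append(name)
        # In the full algorithm, removing an edge exposes two interior
        # edges, which would be scored and pushed onto the heap here.
    return removed
```

The max-heap is what makes the stopping criterion cheap: once the longest remaining edge falls below the threshold, no further edge can qualify.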

![Duckham](../../assets/Duckham_et_al.webp){ width="500", loading=lazy }
/// caption
Characteristic shape of a point cloud. Adapted from Duckham et al.[^1]
///

## Adaptation to a Polygon Reduction Algorithm

The problem of polygon reduction can be viewed in a very similar way to that considered by Duckham et al. In this
instance, we still have a cloud of points in the plane, albeit with additional topological information. To satisfy
[PolyShell's axioms], it is necessary that the front never recedes into the initial shape. The required modification is
minor: compute a [constrained triangulation](https://en.wikipedia.org/wiki/Constrained_Delaunay_triangulation) and remove only edges which do not lie on the original polygon.
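The constraint amounts to a membership test on the original polygon's edges. This is a hypothetical minimal sketch of that check, not PolyShell's API; `is_removable` and the toy vertex indices are illustrative:

```python
def is_removable(edge, constraint_edges):
    """An edge lying on the original polygon is a constraint:
    removing it would let the boundary recede into the input shape."""
    return frozenset(edge) not in constraint_edges


# Edges of a toy square polygon (vertex indices), stored undirected
square = [(0, 1), (1, 2), (2, 3), (3, 0)]
constraints = {frozenset(e) for e in square}
```

Storing edges undirected (as `frozenset`s here) matters: the triangulation may hand back an edge in either orientation, and both must be recognised as the same constraint.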

[PolyShell's axioms]: ../../user-guide/axioms.md
File renamed without changes.
15 changes: 15 additions & 0 deletions docs/reference/features/parallelism.md
@@ -0,0 +1,15 @@
# Acceleration Through Parallelism

Both [RDP](../algorithms/ramer-douglas-peucker.md) and [Charshape](../algorithms/charshape.md) take a bottom-up approach: starting with the
convex hull and progressively introducing detail, they can be readily parallelised. For Charshape, as each edge is
removed, two more are uncovered; likewise with RDP, a segment is split in two. In both instances, a task is split into
two sub-tasks in a divide-and-conquer fashion. As a result, both Charshape and RDP are ideal candidates for parallelism
through [work stealing](https://en.wikipedia.org/wiki/Work_stealing).
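The divide-and-conquer structure is easiest to see in RDP, where each split yields two fully independent sub-tasks. This is an illustrative Python sketch of that recursion (sequential here; a work-stealing scheduler could run the two halves on idle workers), not PolyShell's implementation:

```python
import math


def rdp(points, eps):
    """Recursive Ramer-Douglas-Peucker. Assumes distinct chord endpoints."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]

    def dist(p):
        # Perpendicular distance from p to the chord (x1, y1)-(x2, y2)
        px, py = p
        num = abs((y2 - y1) * px - (x2 - x1) * py + x2 * y1 - y2 * x1)
        return num / math.hypot(x2 - x1, y2 - y1)

    i, d = max(((i, dist(p)) for i, p in enumerate(points[1:-1], 1)),
               key=lambda t: t[1])
    if d < eps:
        return [points[0], points[-1]]
    # The two halves share only the split point: independent sub-tasks
    left = rdp(points[: i + 1], eps)
    right = rdp(points[i:], eps)
    return left + right[1:]
```

Each recursive call touches a disjoint slice of the input (bar the shared split point), which is exactly the property a work-stealing runtime exploits.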

While this approach is attractive, reduction occurs out of order, making it challenging to reduce to a fixed length or
to support adaptive algorithms, both of which require global observability.

Thus far, benchmarking has shown that work stealing yields only modest performance gains. This is mainly due to
overheads in the Python bindings and to preprocessing steps which cannot be performed in parallel. For this reason,
PolyShell does not currently implement these parallelisation strategies for either algorithm. If significant performance
gains are made in either area, multi-core reduction will become a priority.
5 changes: 4 additions & 1 deletion mkdocs.yml
Expand Up @@ -84,7 +84,10 @@ nav:
- Visvalingam-Whyatt: reference/algorithms/visvalingam-whyatt.md
- Ramer-Douglas-Peucker: reference/algorithms/ramer-douglas-peucker.md
- Charshape: reference/algorithms/charshape.md
- Benchmarks: reference/benchmarks.md
- Features:
- Parallelism: reference/features/parallelism.md
- Developers:
- Benchmarks: reference/developers/benchmarks.md

plugins:
- search
Expand Down
17 changes: 11 additions & 6 deletions pyproject.toml
Expand Up @@ -30,9 +30,9 @@ dynamic = ["version", "readme"]
dependencies = []

[project.urls]
repository = "https://github.com/ecmwf/polyshell"
documentation = "https://ecmwf.github.io/PolyShell/"
issues = "https://github.com/ecmwf/polyshell/issues"
repository = "https://github.com/ecmwf/polyshell"
documentation = "https://ecmwf.github.io/PolyShell/"
issues = "https://github.com/ecmwf/polyshell/issues"

[tool.setuptools.dynamic]
readme = {file = ["README.md"], content-type = "text/markdown"}
Expand Down Expand Up @@ -84,18 +84,23 @@ cache-keys = [{ file = "pyproject.toml" }, { file = "rust/Cargo.toml" }, { file
members = ["scripts/benchmark"]

[project.scripts]
polyshell = "polyshell._cli:app"

polyshell = "polyshell._cli:app"

[tool.pytest.ini_options]
testpaths = "tests"
minversion = "7.0"
addopts = [
"-v",
"--pdbcls=IPython.terminal.debugger:Pdb",
]
testpaths = ["tests"]

[tool.ruff.lint.isort]
known-first-party = [
"polyshell",
"benchmark",
"cli",
]

[tool.ruff.format]
quote-style = "double"

Expand Down
5 changes: 0 additions & 5 deletions pytest.ini

This file was deleted.

28 changes: 24 additions & 4 deletions src/algorithms/simplify_charshape.rs
Expand Up @@ -26,6 +26,8 @@ use spade::{CdtEdge, Point2, SpadeNum, Triangulation};
use std::cmp::Ordering;
use std::collections::BinaryHeap;

/// Store edge information. Score is used for ranking the priority queue which determines removal
/// order.
#[derive(Debug)]
struct CharScore<'a, T>
where
Expand Down Expand Up @@ -56,6 +58,9 @@ impl<T: SpadeNum> PartialEq for CharScore<'_, T> {
}
}

/// Area and topology preserving polygon reduction algorithm.
///
/// Based on the characteristic shape algorithm of [Duckham et al](https://doi.org/10.1016/j.patcog.2008.03.023).
fn characteristic_shape<T>(orig: &Polygon<T>, eps: T, max_len: usize) -> Polygon<T>
where
T: GeoFloat + SpadeNum,
Expand All @@ -66,15 +71,19 @@ where

let eps_2 = eps * eps;

let mut len = 0;

// Get the constrained Delaunay triangulation of the original polygon
let tri = orig.triangulate();

// Mask storing the current state of the polygon, starting with the convex hull.
let mut boundary_mask = vec![false; tri.num_vertices()];
let mut len = 0;
tri.convex_hull().for_each(|edge| {
boundary_mask[edge.from().index()] = true;
len += 1;
});

// Store all outer edges in a maximum priority queue, based on length.
let mut pq = tri
.convex_hull()
.map(|edge| edge.rev())
Expand All @@ -84,17 +93,24 @@ where
})
.collect::<BinaryHeap<_>>();

// Iterate over edges with lengths greater than epsilon
while let Some(largest) = pq.pop() {
if largest.score < eps_2 || len >= max_len {
if largest.score < eps_2 {
// Max-heap guarantees all remaining edges have lengths smaller than epsilon
break;
}

if len >= max_len {
// Further additions would send us above the maximum length
break;
}

// Regularity check
// Regularity check: removing a constraint edge would encroach upon the original area
if largest.edge.is_constraint_edge() {
continue;
}

// Update boundary nodes and edges
// Add a point to the reduction, updating the mask
let coprime_node = largest.edge.opposite_vertex().unwrap();
boundary_mask[coprime_node.index()] = true;
len += 1;
Expand All @@ -111,6 +127,7 @@ where
Polygon::new(exterior, vec![])
}

/// Uncover an outer edge, exposing two new edges which are pushed to the heap.
fn recompute_boundary<'a, T>(
edge: DirectedEdgeHandle<'a, Point2<T>, (), CdtEdge<()>, ()>,
pq: &mut BinaryHeap<CharScore<'a, T>>,
Expand All @@ -127,7 +144,10 @@ fn recompute_boundary<'a, T>(
}
}

/// Simplifies a geometry while preserving its topology and area.
pub trait SimplifyCharshape<T, Epsilon = T> {
/// Return the simplified geometry using a topology and area preserving algorithm based on the
/// characteristic shape algorithm of [Duckham et al](https://doi.org/10.1016/j.patcog.2008.03.023).
fn simplify_charshape(&self, eps: Epsilon, len: usize) -> Self;
}

Expand Down
7 changes: 3 additions & 4 deletions src/algorithms/simplify_vw.rs
Expand Up @@ -64,10 +64,9 @@ impl<T: CoordFloat> PartialEq for VWScore<T> {
}
}

/// Area and topology preserving Visvalingam-Whyatt algorithm
/// adapted from the [geo implementation](https://github.com/georust/geo/blob/e8419735b5986f120ddf1de65ac68c1779c3df30/geo/src/algorithm/simplify_vw.rs)
///
/// Area and topology preserving Visvalingam-Whyatt algorithm.
///
/// Adapted from the [geo implementation](https://github.com/georust/geo/blob/e8419735b5986f120ddf1de65ac68c1779c3df30/geo/src/algorithm/simplify_vw.rs).
fn visvalingam_preserve<T>(orig: &LineString<T>, eps: T, min_len: usize) -> Vec<Coord<T>>
where
T: GeoFloat + RTreeNum,
Expand Down Expand Up @@ -192,7 +191,7 @@ where
})
}

/// Recompute adjacent triangle(s) using left and right adjacent points, pushing to the heap
/// Recompute adjacent triangle(s) using left and right adjacent points, pushing to the heap.
fn recompute_triangles<T: CoordFloat>(
orig: &LineString<T>,
pq: &mut BinaryHeap<VWScore<T>>,
Expand Down