Hello! The “straight skeleton” is an interesting computational geometry concept that’s not super commonly known, but has some interesting properties. I’ve just added an implementation to XLE; I’m not 100% sure if there are other solid implementations in open source (there may be, but last time I searched, I found implementations in commercial math packages, just not any in the open source domain).
Definition
Imagine you have an arbitrary polygon (convex, concave, or even with internal holes). For each edge, find the direction that is perpendicular to the edge and pointing inwards into the shape (we’ll call this the “normal” of the edge). Now imagine that edge is moving along its normal, inwards into the polygon. The polygon will appear to get smaller and smaller, sometimes dividing into two. But in the end, it should eventually collapse into just a set of lines.
These lines generated from the collapse, and the paths that the vertices trace out as the polygon shrinks, are called the straight skeleton.
Let’s visualize with some common shapes. A rectangle is a simple case:
We get a moderately more complex case with an L-shaped input:
Notice that the polygon is shrinking according to its natural shape. We end up with something that represents the structure of the overall shape. Generally small details have little impact on the final skeleton, but the main structure will be well reflected.
Also note that we’re not scaling the polygon. Each vertex has a straight line that it’s moving along, but this line will rarely pass through the origin. The relative lengths of the edges change as time increases.
Each edge is moving along its normal at the same rate. The line that each vertex moves along is called the vertex path, and is defined by the intersection of two edges as they advance along their normals. Any given input polygon will have one correct straight skeleton solution, so long as there are no intersecting edge segments (exception for one minor case, which we’ll get to in a bit).
I’ll use time related terms (like velocity) to describe what’s happening here, because this is just the easiest way to visualize what’s happening. Imagine that the polygon is shrinking as we’re advancing through time.
Collapses and motorcycle crashes
If we take any 2 adjacent edges, we can calculate a “velocity” for the vertex between them. The velocity is the amount the vertex will move in a unit of time, and is in the direction of the vertex path (the blue lines above). This can be calculated with a little bit of geometric algebra – remember that the vertex movement is defined by the movement of the edges on either side of it.
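To make that a little more concrete, here’s a minimal sketch of the calculation, assuming unit-speed edges, a counter-clockwise winding and a simple Vector2 type (this is just the shape of the math, not the actual code in XLE):

```cpp
#include <cmath>

struct Vector2 { float x, y; };

static Vector2 operator+(Vector2 a, Vector2 b) { return {a.x + b.x, a.y + b.y}; }
static Vector2 operator-(Vector2 a, Vector2 b) { return {a.x - b.x, a.y - b.y}; }
static Vector2 operator*(float s, Vector2 a)   { return {s * a.x, s * a.y}; }
static float   Dot(Vector2 a, Vector2 b)       { return a.x * b.x + a.y * b.y; }

// Inward-facing unit normal for the edge running from v0 to v1
// (assuming the polygon is wound counter-clockwise)
static Vector2 InwardNormal(Vector2 v0, Vector2 v1)
{
    Vector2 d = v1 - v0;
    float len = std::sqrt(Dot(d, d));
    return { -d.y / len, d.x / len };
}

// Velocity of the vertex shared by two adjacent edges, each advancing along
// its inward normal at unit speed. Requiring dot(v, n0) == 1 and
// dot(v, n1) == 1 gives v = (n0 + n1) / (1 + dot(n0, n1)).
static Vector2 VertexVelocity(Vector2 n0, Vector2 n1)
{
    float c = Dot(n0, n1);
    // note that as the two edges approach a degenerate spike (c -> -1),
    // the velocity blows up -- this is the precision trouble discussed below
    return (1.0f / (1.0f + c)) * (n0 + n1);
}
```

For a right-angle corner this works out to a speed of √2; the closer the denominator gets to zero, the faster the vertex moves.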
Vertices will continue to move according to their velocity indefinitely until an “event” happens that changes things. There are only two main types of event: one is called an “edge collapse” and the other is called a “motorcycle crash”.
So calculating the straight skeleton is actually pretty simple: we just calculate when the next event will be, advance time and process that event. Then find the next event, and repeat until there are no more events. Believe it or not, this is all we have to do; and it works for any given input polygon (though I’ll cover an extension for polygons with holes below).
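In terms of code structure, that main loop is roughly the following (the event record and queue layout here are purely illustrative – XLE’s actual data structures are more involved):

```cpp
#include <functional>
#include <queue>
#include <vector>

// Illustrative event record (not XLE's actual representation)
enum class EventType { EdgeCollapse, MotorcycleCrash };

struct Event
{
    float     _time;       // when the event occurs
    EventType _type;
    unsigned  _edge;       // collapsing edge, or the edge being crashed into
    unsigned  _vertex;     // the crashing vertex (motorcycle crashes only)

    bool operator>(const Event& other) const { return _time > other._time; }
};

void CalculateSkeleton(/* wavefront loops, vertex velocities, ... */)
{
    // min-heap ordered by event time
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> pendingEvents;

    // ... seed pendingEvents with every collapse & crash predicted from the input ...

    while (!pendingEvents.empty()) {
        Event evnt = pendingEvents.top();
        pendingEvents.pop();

        // advance the wavefront to evnt._time, then rewrite the local topology
        switch (evnt._type) {
        case EventType::EdgeCollapse:
            // remove the zero-length edge and replace its two vertices with a
            // single new vertex
            break;
        case EventType::MotorcycleCrash:
            // split the wavefront loop in two, adding one new vertex to each half
            break;
        }

        // finally, predict new events for any vertices & edges we just created
        // and push them onto pendingEvents
    }
}
```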
Collapses
Here’s a typical example of a collapsing edge in the rectangle case:
Once an edge reaches zero length, it’s considered to be “collapsed”, and it will be replaced with a single vertex. In the rectangle case, once the collapse is processed, we’re finished and don’t need to advance any further. But this isn’t usually the case. Usually the new vertex will continue according to the same rules as any other vertex: it just traces the path between two edges. Nominally the edges connected to that vertex haven’t actually changed direction, but that doesn’t mean the velocity of the new vertex is the same as the velocities of either of the vertices on the collapsed edge. So, all in all, we’ll remove 2 vertices while processing this event and add back 1 new vertex.
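Predicting when an edge will collapse is a small one-dimensional calculation along the edge direction. A sketch, re-using the Vector2 helpers from earlier (and, again, just illustrative):

```cpp
// Time at which the edge from (p0, moving at vel0) to (p1, moving at vel1)
// shrinks to zero length; returns a negative value if it isn't shrinking.
static float CollapseTime(Vector2 p0, Vector2 vel0, Vector2 p1, Vector2 vel1)
{
    Vector2 d = p1 - p0;
    float length = std::sqrt(Dot(d, d));
    Vector2 dir = (1.0f / length) * d;

    // rate at which the edge length is changing; both endpoints stay on the
    // edge's (translating) supporting line, so this is a 1D problem
    float rate = Dot(vel1 - vel0, dir);
    if (rate >= 0.0f) return -1.0f;     // edge is static or growing

    return -length / rate;
}
```

The smallest positive collapse time over all edges (compared against the earliest motorcycle crash, below) tells us which event to process next.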
Motorcycle crashes
I always imagine the authors that came up with this name were Tron fans… perhaps it’s true. Anyway, it’s actually a pretty descriptive name. Consider the following diagram:
Here, a vertex and an edge are moving towards each other on a collision course. When they “crash”, this will actually split the entire wavefront loop into two. We create two new vertices, one in each loop, each moving away from each other and away from the crash.
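Predicting when a crash like this happens is another small calculation: we’re asking when the moving vertex lands on the advancing edge’s supporting line. A rough sketch, under the same assumptions as before (a complete implementation also has to check that the crash point falls within the edge’s span at that time, not just on its infinite line):

```cpp
// Time at which a vertex starting at "p" with velocity "vel" meets the edge
// whose initial supporting line passes through "q" with inward unit normal
// "n" (the edge advances along n at unit speed).
// Returns a negative value if the vertex never catches the edge.
static float CrashTime(Vector2 p, Vector2 vel, Vector2 q, Vector2 n)
{
    float gap = Dot(p - q, n);              // initial distance from the edge's line
    float closingRate = 1.0f - Dot(vel, n); // how fast that gap is shrinking
    if (closingRate <= 0.0f) return -1.0f;  // the vertex is outrunning the edge
    return gap / closingRate;
}
```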
If we zoom in on the middle diagram above:
The extremely observant might have noticed another property here: the vertex paths for each vertex don’t have the same length. This means the velocities of each vertex don’t have the same magnitude. In fact the velocity of the vertex depends on the angle between its edges (more on that later).
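(For the curious: with unit-speed edges, the speed of a vertex works out to 1/sin(θ/2), where θ is the interior angle at that vertex. A right-angle corner moves at √2, a nearly-straight vertex moves at roughly 1, and a very sharp spike or very deep reflex notch moves arbitrarily fast – a first hint of the precision problems coming up.)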
Collinear adjacent edges
The math here works for any input polygon, with one exception: if there are adjacent collinear edges. The vertex path is defined by the intersection of adjacent edges; however, if those edges are collinear, there are an infinite number of intersections and therefore an infinite number of possible paths. We would have to add extra rules to handle this case properly, but as we’ll see in a moment that isn’t quite as easy as it sounds. In general, collinearity is kryptonite to the algorithm and we should try to avoid it as much as possible.
Tire meeting the road
In theory, these two types of events are all we need to handle in order to calculate the skeleton. We just need some code to find the next event to process and the code to handle processing it. However, of course, this isn’t really the end of the story.
I’ve found that computational geometry algorithms like this are amongst the most annoying types of algorithms to implement and debug, and it always tends to be for the same reason: the mathematical operations we’re using have only a finite level of precision. It really is kind of shocking how quickly we build up enough error that we no longer know whether a given point is on the left or right of a line. And in this algorithm, if we get even a single calculation like that wrong, we can get either a missed motorcycle crash or an edge narrowly missing its collapse and expanding in the wrong direction (either of which will ruin the output entirely).
So, the real challenge with this algorithm is implementing it for non-quantum computers with finite precision. That means coming up with a set of heuristics that can deal with the error. So, I’ll share here the best approaches I’ve found.
Hex grids
Some input shapes are naturally more prone to numeric precision error than others. While shapes where one edge follows another at a very soft angle can be a little tricky, the most difficult types of shapes I’ve come across are actually hex grids. Here’s a diagram to demonstrate some difficult scenarios that pop up when processing hex grids:
In many of these cases, we have multiple events happening at once, and sometimes multiple events happening at the same point. That should set off warning bells for precision error. Let’s consider, for example, the case marked (A) above.
Here we have 2 edges coming together; at one end of those edges we have 2 other edges collapsing, and at the other end we have two vertices motorcycling into each other. This is all happening at the same time – so in theory there are actually 4 events happening simultaneously. Those calculations are all subject to error, though, so let’s consider what would happen if we used completely naive math:
- based on error, or random chance, we’ll decide on one specific event to process first. Let’s imagine that event is one of the motorcycle crashes.
- here, the motorcycle could be crashing directly into a vertex, but we interpret that as a motorcycle vs edge crash, and the edge could be either of the two edges connected to the other vertex
- when we process the motorcycle crash, we will end up with 2 vertices connected by an edge, either at the same point, or extremely close together
- this creates a big problem, because the normal for that edge is extremely subject to error. It could be in almost any direction, meaning the velocities for the vertices are also very prone to error
- that new edge might not even be collapsing – it could be expanding from its original tiny length to something much larger
- furthermore, because the edges in the bottom circle of (A) are also collapsing, the new vertex we’ve created as a result of the motorcycle crash is at the end of 2 edges that are collinear or almost collinear. If we attempt to calculate the velocity of a vertex that is at the bottom of a V of almost collinear edges, we’ll end up with a value that is asymptotically huge
- in fact, the velocity can be massive enough that even if the edge collapses happen very shortly after the motorcycle crash, when we attempt to calculate the final position for that vertex, we can end up with a vertex that has escaped the loop and is running off to infinity
Consider the two edges marked with (B) above:
- these edges are collinear, but not adjacent
- however, the edges between them will collapse (ultimately becoming 2 collapses at the same time, incidentally) and at that point, they will become adjacent, while still being collinear (recall that edges advance along their normals without rotating, so edges that start collinear remain collinear)
- this has the same problems as having adjacent collinear edges in the initial input, and will generate infinities and incorrect outputs if not handled specially
- this raises the question of how to determine whether two edges are collinear – because this will definitely require some compensation for precision error (see the sketch below)
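One workable approach is a tolerance-based test along the lines of the following sketch (the thresholds here are placeholders and need tuning against the scale of the input; Vector2 helpers as before):

```cpp
// True if two edges point in (almost) the same direction and lie (almost)
// on the same supporting line.
static bool AreNearlyCollinear(
    Vector2 a0, Vector2 a1,     // first edge
    Vector2 b0, Vector2 b1,     // second edge
    float cosThreshold = 1.0f - 1e-4f,
    float distanceThreshold = 1e-4f)
{
    Vector2 da = a1 - a0, db = b1 - b0;
    da = (1.0f / std::sqrt(Dot(da, da))) * da;
    db = (1.0f / std::sqrt(Dot(db, db))) * db;

    // 1. directions must be (almost) parallel, facing the same way
    if (Dot(da, db) < cosThreshold) return false;

    // 2. the second edge's endpoints must lie (almost) on the first edge's
    //    supporting line
    Vector2 na = { -da.y, da.x };
    return std::abs(Dot(b0 - a0, na)) < distanceThreshold
        && std::abs(Dot(b1 - a0, na)) < distanceThreshold;
}
```

Whether an absolute distance threshold like this is appropriate depends on the scale of the coordinates; a relative tolerance may behave better for large inputs.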
Hex grids also have another added problem: since square roots are involved in calculating the input vertices, the input polygon’s coordinates are actually irrational numbers. Since we’re calculating those at finite precision, even the input geometry has built-in precision error.
These kinds of scenarios, and others, come up over and over again with hex grids. And it’s easy to randomly generate input hex grids – so they make the perfect input data for testing this algorithm.
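If it helps, generating the raw cell geometry is just the standard hex-to-world conversion; something like the following (the “pointy-top” layout and naming here are arbitrary choices for illustration):

```cpp
#include <cmath>
#include <vector>

// Corner positions for the hex cell at axial coordinates (q, r),
// "pointy-top" orientation. Note the sqrt(3) and the trig: the corner
// positions are already inexact before the algorithm even starts.
static std::vector<Vector2> HexCellCorners(int q, int r, float radius)
{
    const float sqrt3 = std::sqrt(3.0f);
    Vector2 center { radius * sqrt3 * (q + 0.5f * r), radius * 1.5f * r };

    std::vector<Vector2> corners;
    for (int i = 0; i < 6; ++i) {
        float angle = (60.0f * i - 30.0f) * 3.14159265f / 180.0f;
        corners.push_back({ center.x + radius * std::cos(angle),
                            center.y + radius * std::sin(angle) });
    }
    return corners;
}
```

A test harness can then union a random selection of cells into a single input polygon (holes included) and throw the result at the skeleton calculation.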
If you’re curious, here’s how the above shape ends up:
Compounding error
You may have noticed that edges can be involved in multiple events over the course of the calculation. For example, an edge might be adjacent to a collapse at one point, and later be part of a motorcycle crash. The problem with this is it can cause compounding error in successive events.
Recall that when we process a motorcycle crash, we generate 2 new vertices, each with edges attached. These new vertices, or the attached edges, can be involved in further motorcycle crashes. However, since we calculated the initial position for that vertex while processing the motorcycle crash, it already has that precision error built in. Any calculations we make for the velocity of the vertex or the tangents and movement of the attached edges also have that error baked in.
And, so, when the vertex or edges are involved in a new event, that event is calculated with the error from the first motorcycle crash, plus the error from this new event. This can happen over and over again – effectively magnifying the impact of precision error for large polygons.
Furthermore, all of the geometry math here involves the tangents to edges in some way. To calculate this, we need to know the positions of the vertices for that edge. If both vertices on the edge are part of the initial input loop, this isn’t an issue – we just look at their input positions, and since the edges never rotate, whatever normal we calculate from there will always be valid. However, what if one or both of the vertices were generated as a result of an event?
To calculate the edge normal in these cases, we need to know the position of both vertices at an equal time. That requires knowing the vertex path of at least one of the vertices and calculating the movement along that path. That also introduces precision error, which will ultimately impact our calculation of the edge normal.
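In practice that means vertices created mid-calculation have to carry enough information to be re-evaluated at an arbitrary time, and the edge normal gets computed from those re-evaluated positions. Roughly (the struct below is hypothetical, not XLE’s actual representation; helpers as before):

```cpp
// A vertex created while processing an event: where it appeared, when,
// and the velocity it moves with from that moment on.
struct WavefrontVertex
{
    Vector2 _anchor;       // position at _anchorTime
    float   _anchorTime;
    Vector2 _velocity;
};

static Vector2 PositionAt(const WavefrontVertex& v, float time)
{
    return v._anchor + (time - v._anchorTime) * v._velocity;
}

// Edge normal found by bringing both endpoints to the same time. Each
// PositionAt() call carries whatever error was baked in when the vertex was
// created, so the normal inherits (and can amplify) that error.
static Vector2 EdgeNormalAt(const WavefrontVertex& v0, const WavefrontVertex& v1, float time)
{
    return InwardNormal(PositionAt(v0, time), PositionAt(v1, time));
}
```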
Next time
This is already getting pretty long, so I’ll cut it off here for now. In the next update I’ll describe the heuristics I’m using, and also show some more complex examples (and maybe talk about some interesting applications of this algorithm).