Monday, June 16, 2003

http://www-2.cs.cmu.edu/~quake/tripaper/triangle2.html

from that page:

A triangular mesh generator rests on the efficiency of its triangulation algorithms and data structures, so I discuss these first. I assume the reader is familiar with Delaunay triangulations, constrained Delaunay triangulations, and the incremental insertion algorithms for constructing them. Consult the survey by Bern and Eppstein [2] for an introduction.

There are many Delaunay triangulation algorithms, some of which are surveyed and evaluated by Fortune [7] and Su and Drysdale [18]. Their results indicate a rough parity in speed among the incremental insertion algorithm of Lawson [11], the divide-and-conquer algorithm of Lee and Schachter [12], and the plane-sweep algorithm of Fortune [6]; however, the implementations they study were written by different people. I believe that Triangle is the first instance in which all three algorithms have been implemented with the same data structures and floating-point tests, by one person who gave roughly equal attention to optimizing each. (Some details of how these implementations were optimized appear in Appendix A.)

Table 1 compares the algorithms, including versions that use exact arithmetic (see Section 4) to achieve robustness, and versions that use approximate arithmetic and are hence faster but may fail or produce incorrect output. (The robust and non-robust versions are otherwise identical.) As Su and Drysdale [18] also found, the divide-and-conquer algorithm is fastest, with the sweepline algorithm second. The incremental algorithm performs poorly, spending most of its time in point location. (Su and Drysdale produced a better incremental insertion implementation by using bucketing to perform point location, but it still ranks third. Triangle does not use bucketing because it is easily defeated, as discussed in the appendix.) The agreement between my results and those of Su and Drysdale lends support to their ranking of algorithms.

An important optimization to the divide-and-conquer algorithm, adapted from Dwyer [5], is to partition the vertices with alternating horizontal and vertical cuts (Lee and Schachter's algorithm uses only vertical cuts). Alternating cuts speed the algorithm and, when exact arithmetic is disabled, reduce its likelihood of failure. One million points can be triangulated correctly in a minute on a fast workstation.
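
(A rough illustration of the alternating-cuts idea, written by me rather than taken from the paper: the sketch below only shows how the vertex set is recursively partitioned by median x and then median y; the merge step, which is where the divide-and-conquer algorithm does its real work, is omitted, and the function name and leaf_size parameter are mine.)

def alternating_cuts(points, vertical=True, leaf_size=3):
    # Recursively split the point set with alternating vertical (median x)
    # and horizontal (median y) cuts until each cell is small enough to
    # triangulate directly; the divide-and-conquer merge is not shown.
    if len(points) <= leaf_size:
        return [points]
    axis = 0 if vertical else 1          # vertical cut splits on x, horizontal on y
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return (alternating_cuts(pts[:mid], not vertical, leaf_size)
            + alternating_cuts(pts[mid:], not vertical, leaf_size))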


from http://astronomy.swin.edu.au/~pbourke/terrain/triangulate/


The triangulation algorithm may be described in pseudo-code as follows.

subroutine triangulate
    input  : vertex list
    output : triangle list
    initialize the triangle list
    determine the supertriangle
    add supertriangle vertices to the end of the vertex list
    add the supertriangle to the triangle list
    for each sample point in the vertex list
        initialize the edge buffer
        for each triangle currently in the triangle list
            calculate the triangle circumcircle center and radius
            if the point lies in the triangle circumcircle then
                add the three triangle edges to the edge buffer
                remove the triangle from the triangle list
            endif
        endfor
        delete all doubly specified edges from the edge buffer
            (this leaves the edges of the enclosing polygon only)
        add to the triangle list all triangles formed between the point and the edges of the enclosing polygon
    endfor
    remove any triangles from the triangle list that use the supertriangle vertices
    remove the supertriangle vertices from the vertex list
end
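
That pseudo-code translates fairly directly into a short program. Here is a minimal Python sketch of it that I put together (Bourke's page supplies a C implementation, not this one); it uses naive floating-point in-circle tests, follows the commonly used 20*dmax supertriangle construction, and assumes no three input points are exactly collinear, so treat it as an illustration rather than robust code.

import math

def circumcircle(a, b, c):
    # Centre and radius of the circle through points a, b, c.
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if d == 0.0:
        raise ValueError("collinear points")   # degenerate triangle; not handled here
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return ux, uy, math.hypot(ax - ux, ay - uy)

def triangulate(points):
    # Returns the Delaunay triangles as index triples into 'points'.
    xs = [p[0] for p in points]; ys = [p[1] for p in points]
    dmax = max(max(xs) - min(xs), max(ys) - min(ys))
    xmid = (max(xs) + min(xs)) / 2.0; ymid = (max(ys) + min(ys)) / 2.0
    # Supertriangle chosen large enough to enclose all sample points.
    verts = list(points) + [(xmid - 20 * dmax, ymid - dmax),
                            (xmid, ymid + 20 * dmax),
                            (xmid + 20 * dmax, ymid - dmax)]
    n = len(points)
    triangles = [(n, n + 1, n + 2)]              # start with the supertriangle

    for i in range(n):                           # insert each sample point
        px, py = verts[i]
        edges, keep = [], []                     # edge buffer / surviving triangles
        for a, b, c in triangles:
            ux, uy, r = circumcircle(verts[a], verts[b], verts[c])
            if (px - ux) ** 2 + (py - uy) ** 2 <= r * r:
                # Point lies in this triangle's circumcircle: drop the triangle
                # and remember its edges.
                edges += [(a, b), (b, c), (c, a)]
            else:
                keep.append((a, b, c))
        # Doubly specified edges cancel, leaving the enclosing polygon.
        boundary = [e for e in edges
                    if edges.count(e) + edges.count((e[1], e[0])) == 1]
        # Join the new point to every edge of the enclosing polygon.
        triangles = keep + [(e[0], e[1], i) for e in boundary]

    # Discard every triangle that uses a supertriangle vertex.
    return [t for t in triangles if all(v < n for v in t)]

For example, triangulate([(0, 0), (3, 0), (0, 3), (4, 4)]) returns the Delaunay triangles of those four points as index triples into the input list.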

The above can be refined in a number of ways to make it more efficient. The most significant improvement is to presort the sample points by one coordinate; the coordinate used should be the one with the greatest range of samples. If the x axis is used for presorting, then as soon as the x component of the distance from the current point to the circumcircle center exceeds the circumcircle radius, that triangle need never be considered for later points, since further points can never again lie in the interior of that triangle's circumcircle. With this improvement the running time of the algorithm presented here grows with the number of points as approximately O(N^1.5).
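
Roughly, that presort could be grafted onto the Python sketch above by replacing its per-point loop with something like the following (verts, n, triangles and circumcircle are the names from that sketch; this is my own illustration, not Bourke's code): points are visited in order of increasing x, and any triangle whose circumcircle lies entirely to the left of the current point is retired and never tested again.

order = sorted(range(n), key=lambda k: verts[k][0])   # presort sample points by x
completed = []                                        # triangles retired for good
for i in order:
    px, py = verts[i]
    edges, keep = [], []
    for a, b, c in triangles:
        ux, uy, r = circumcircle(verts[a], verts[b], verts[c])
        if px - ux > r:
            # Circumcircle lies entirely left of x = px; no later point
            # (all have larger x) can ever fall inside it.
            completed.append((a, b, c))
            continue
        if (px - ux) ** 2 + (py - uy) ** 2 <= r * r:
            edges += [(a, b), (b, c), (c, a)]
        else:
            keep.append((a, b, c))
    boundary = [e for e in edges
                if edges.count(e) + edges.count((e[1], e[0])) == 1]
    triangles = keep + [(e[0], e[1], i) for e in boundary]
triangles += completed     # re-attach retired triangles before the supertriangle filter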

The time taken is relatively independent of the input sample distribution; a variation of at most about 25% in execution time has been observed across a wide range of naturally occurring distributions as well as special cases such as normal, uniform, contour and grid distributions.

The algorithm does not require a large amount of internal storage: the only internal array it needs is a logical array of flags marking those triangles that no longer need be considered. If memory is available, another speed improvement is to save the circumcircle center and radius for each triangle as it is generated, instead of recalculating them for each added point. With sufficient memory for this and the other speed enhancements, execution time grows almost linearly with the number of points.
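
(A small sketch of that caching idea, again building on the Python sketch further above and reusing its circumcircle and verts names, which are mine: compute each triangle's circumcircle once, when the triangle is created, and carry it alongside the vertex indices.)

def make_triangle(a, b, c):
    # Cache the circumcircle at creation time; the per-point loop can then
    # unpack (a, b, c, ux, uy, r) directly instead of calling circumcircle()
    # for every point/triangle pair.
    ux, uy, r = circumcircle(verts[a], verts[b], verts[c])
    return (a, b, c, ux, uy, r)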