Tuesday 11 March 2014

ALGORITHMS


A Heuristic for the Traveling Salesman Problem


This is one of the best-known hard problems of all time. A salesperson must visit n cities, passing through each city exactly once, starting from one city that serves as the base (or starting) city and returning to it. The cost of traveling between each pair of cities is given. The problem is to find the minimum-cost route, that is, the order of visiting the cities such that the total cost is minimized.
 

Let us number the cities from 1 to n and let city 1 be the salesperson's starting city. Also, let c(i, j) denote the cost of traveling from city i to city j. Here is the brute-force (systematic) way of solving this problem:

Algorithm TSP

  1. First, enumerate all (n-1)! possible tours, where n is the number of cities.
  2. Next, compute the cost of each of these (n-1)! tours.
  3. Finally, keep the one with the minimum cost.

 

Clearly, this requires at least (n-1)! steps.

If we try to solve this problem systematically, we end up with (n-1)! possible solutions. For example, if there were 21 cities, the number of steps required would be (n-1)! = (21-1)! = 20! steps. If each step took one microsecond, we would need about 770 centuries to calculate the minimum-cost route. (My life is too short for this crap.) Clearly we cannot examine all possible solutions to find the minimum cost.
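To see where the 770 centuries figure comes from, here is a quick back-of-the-envelope check (a minimal sketch; the one-microsecond-per-step figure is the assumption stated above):

```python
import math

steps = math.factorial(20)             # 20! tours for 21 cities
seconds = steps * 1e-6                 # assume 1 microsecond per step
years = seconds / (365.25 * 24 * 3600)
print(f"{steps:e} steps -> about {years / 100:.0f} centuries")
# roughly 2.4e18 steps -> about 771 centuries
```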

 

Outline of TSP Heuristic

Whenever the salesman is in city i, he chooses as his next city the city j for which the cost c(i, j) is the minimum among all costs c(i, k), where k ranges over the cities the salesman has not visited yet. In case more than one city gives the minimum cost, the city with the smaller index k is chosen. This greedy algorithm selects the cheapest move at every step and does not care whether this will lead to a suboptimal result.


Heuristic


Input
  • Number of cities n
  • Cost of traveling between the cities, c(i, j), i, j = 1, . . . , n
  • Start with city 1
Output
  • Vector of visited cities and the total cost.

Main Steps
  1. Initialization
    • C ← 0     /* vector of visited cities */
    • cost ← 0
    • visits ← 0,  visits(1) ← 1     /* mark the start city as visited */
    • e = 1     /* pointer to the currently visited city */

  2. For 1 ≤ r ≤ n - 1 do
    • Choose pointer j with
      minimum = c(e, j) = min{ c(e, k) : visits(k) = 0 and 1 ≤ k ≤ n }
    • cost ← cost + minimum
    • visits(j) ← 1
    • e = j
    • C(r) ← j
  3. Close the tour
    • C(n) ← 1
    • cost ← cost + c(e, 1)
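Below is a minimal Python sketch of this nearest-neighbor heuristic, assuming the costs are given as an n × n matrix and cities are indexed from 0 instead of 1; it illustrates the steps above rather than being a tuned implementation.

```python
def nearest_neighbor_tour(c):
    """Greedy TSP heuristic: always move to the cheapest unvisited city.

    c is an n x n cost matrix; the tour starts and ends at city 0.
    Returns (tour, total_cost).
    """
    n = len(c)
    visited = [False] * n
    visited[0] = True
    tour, cost, current = [0], 0, 0

    for _ in range(n - 1):
        # cheapest unvisited city; ties are broken by the smaller index
        nxt = min((k for k in range(n) if not visited[k]),
                  key=lambda k: c[current][k])
        cost += c[current][nxt]
        visited[nxt] = True
        tour.append(nxt)
        current = nxt

    cost += c[current][0]          # return to the starting city
    tour.append(0)
    return tour, cost


# Example with 4 cities and symmetric costs
c = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(nearest_neighbor_tour(c))    # ([0, 1, 3, 2, 0], 18)
```

As noted above, the greedy choice does not guarantee an optimal tour in general; it just happens to find one on this small instance.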




Divide-and-Conquer Strategy

The algorithm for reporting the intersection of a set of half-planes uses the divide-and-conquer strategy. This strategy relies on a recursive approach and has two steps. In the top-down step, the input is recursively divided until the subproblems are small enough to be solved easily. The bottom-up step consists of recursively merging the solutions. A divide-and-conquer algorithm can be thought of as a binary tree, where the leaves correspond to “atomic” operations and the internal nodes to merge operations.
For simplicity, assume that the resulting polygon is convex and that we are given a bounding search space R, a rectangle that contains the resulting polygon.
The divide-and-conquer strategy can be described in three steps as follows:
  • Build the structure
    The set of n half-planes in the input is recursively halved until we obtain n single half-planes (note that this yields a binary tree).
  • Solve an atomic problem
    The half-plane in each leaf node is intersected with the rectangle R (i.e., the search space). This yields a convex polygon in each leaf.
  • Merge the results
    Recursively compute the intersections of the convex polygons in a bottom-up fashion.

 

Merging Step

The binary tree is processed bottom-up as follows. At each internal node of the tree, we compute the intersection of two convex polygons. If p and p′ are the numbers of vertices of the two polygons to be intersected, convex polygon intersection takes linear time, i.e., O(p + p′). The merging step eventually delivers a convex polygon that is the intersection (if any) of the n half-planes.

Algorithm
The recursive algorithm is as follows. Let S be the set of half-planes.

HALF-PLANE-INTERSECTION (S)
If |S| > 1 then
    Split S into two halves S1 and S2
    P1 ← HALF-PLANE-INTERSECTION (S1)
    P2 ← HALF-PLANE-INTERSECTION (S2)
    return the intersection of the convex polygons P1 and P2
Else
    return the intersection of the single half-plane in S with the rectangle R
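Below is a minimal Python sketch of this recursive structure, offered only as an illustration. Half-planes are represented as triples (a, b, c) meaning a·x + b·y ≤ c, the atomic step clips the rectangle R, and the merge step uses a simple Sutherland-Hodgman-style clip, which is quadratic in the worst case rather than the linear-time convex polygon intersection assumed in the analysis below; the function names are my own.

```python
# A half-plane is a triple (a, b, c) meaning a*x + b*y <= c.
# A polygon is a list of (x, y) vertices in counter-clockwise (CCW) order.

def clip(poly, hp):
    """Clip a convex CCW polygon against one half-plane."""
    a, b, c = hp
    inside = lambda p: a * p[0] + b * p[1] <= c
    out = []
    for i in range(len(poly)):
        p, q = poly[i], poly[(i + 1) % len(poly)]
        if inside(p):
            out.append(p)
        if inside(p) != inside(q):
            # the edge p-q crosses the boundary line a*x + b*y = c
            t = (c - a * p[0] - b * p[1]) / (a * (q[0] - p[0]) + b * (q[1] - p[1]))
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out

def edge_half_planes(poly):
    """One bounding half-plane per edge of a convex CCW polygon."""
    hps = []
    for i in range(len(poly)):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
        a, b = y2 - y1, x1 - x2        # interior lies to the left of each edge
        hps.append((a, b, a * x1 + b * y1))
    return hps

def merge(p1, p2):
    """Intersection of two convex polygons (simple clipping, not the O(p + p') merge)."""
    for hp in edge_half_planes(p2):
        p1 = clip(p1, hp)
        if not p1:
            return []
    return p1

def half_plane_intersection(S, R):
    """Divide-and-conquer intersection of the half-planes in S inside rectangle R."""
    if len(S) == 1:
        return clip(R, S[0])           # atomic step: clip the rectangle R
    mid = len(S) // 2
    return merge(half_plane_intersection(S[:mid], R),
                 half_plane_intersection(S[mid:], R))

# Example: unit square R, with half-planes x <= 0.5 and y <= x
R = [(0, 0), (1, 0), (1, 1), (0, 1)]
S = [(1, 0, 0.5), (-1, 1, 0)]
print(half_plane_intersection(S, R))
# triangle (0,0), (0.5,0), (0.5,0.5), possibly with a repeated vertex
```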

Complexity
The time complexity of the HALF-PLANE-INTERSECTION algorithm is O(n log n). One can show that this algorithm is optimal, since the sorting problem can be reduced to the half-plane intersection problem.
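The O(n log n) bound can be read off the standard divide-and-conquer recurrence, since the merge performed at each internal node is linear in the size of its two sub-results:

```latex
T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + O(n) \;\Longrightarrow\; T(n) = O(n \log n)
```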


Smallest Enclosing Circle Problem


Introduction

The minimal enclosing circle is used in planning the location of a shared facility, for example a hospital serving a community. (Traditionally, a post office is used as the example, hence the “post office problem”; see Voronoi diagrams.) If we treat each home in the community as a point in the plane, the center of the minimal enclosing circle is a good place to put the hospital: placing the hospital there minimizes the distance between the hospital and the farthest home (point) in the community.
In the military, this problem is known as the “bomb problem”. If we treat each target on a map as a planar point, the center of the minimal circle of this point set is a good place to drop a bomb for maximum destruction. Furthermore, the radius of the minimal circle can be used to calculate how much explosive is required.
This problem is also useful for examining the points that lie on the boundary of the minimal enclosing circle. These points are in a sense the outliers of the set and, in statistics, are sometimes discarded to obtain a more robust estimate.
The simplest algorithm considers every circle defined by two or three of the n points, and finds the smallest of these circles that contains every point. There exist O(n³) such circles, and each takes O(n) time to check, for a total running time of O(n⁴). Elzinga and Hearn gave an O(n²) algorithm in 1972, and Shamos and Hoey (1975), Preparata (1977), and Shamos (1978) discovered the first O(n log n) algorithms.
Finally, and to everyone's surprise, in 1983 Nimrod Megiddo showed that the minimal enclosing circle problem can be solved in O(n) time using prune-and-search techniques from linear programming. This landmark result is one of the most beautiful in the field of computational geometry.

Problem Statement

The Minimal Enclosing Circle Problem is, simply stated, the problem of finding the smallest circle that completely contains a set of points. Formally, given a set S of n planar points, find the circle C of smallest radius such that every point of S lies inside C or on its boundary.
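As a baseline, here is a minimal Python sketch of the brute-force O(n⁴) approach mentioned in the introduction: try every circle determined by two points (as a diameter) or three points (circumcircle), and keep the smallest one that contains all the points. The function names are my own.

```python
import math
from itertools import combinations

def circle_two(a, b):
    """Circle with segment ab as diameter: (center, radius)."""
    center = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    return center, math.dist(a, b) / 2

def circle_three(a, b, c):
    """Circumcircle of three non-collinear points, or None if they are collinear."""
    d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    if d == 0:
        return None
    ux = ((a[0]**2 + a[1]**2) * (b[1] - c[1]) + (b[0]**2 + b[1]**2) * (c[1] - a[1])
          + (c[0]**2 + c[1]**2) * (a[1] - b[1])) / d
    uy = ((a[0]**2 + a[1]**2) * (c[0] - b[0]) + (b[0]**2 + b[1]**2) * (a[0] - c[0])
          + (c[0]**2 + c[1]**2) * (b[0] - a[0])) / d
    return (ux, uy), math.dist((ux, uy), a)

def contains_all(circle, points, eps=1e-9):
    center, r = circle
    return all(math.dist(center, p) <= r + eps for p in points)

def brute_force_mec(points):
    """O(n^4) minimal enclosing circle: check all 2- and 3-point circles."""
    candidates = [circle_two(a, b) for a, b in combinations(points, 2)]
    candidates += [c for trio in combinations(points, 3)
                   if (c := circle_three(*trio)) is not None]
    return min((c for c in candidates if contains_all(c, points)),
               key=lambda c: c[1])

pts = [(0, 0), (4, 0), (2, 3), (1, 1)]
print(brute_force_mec(pts))   # circumcircle of (0,0), (4,0), (2,3): center (2, 5/6), r = 13/6
```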

An O(n²)-time Algorithm

  1. Draw a circle with some center c such that all points of the given set lie within it. Clearly, this circle can be made smaller (or else we already have the solution).
  2. Make the circle smaller by finding the point A farthest from the center, and drawing a new circle with the same center that passes through A. This produces a smaller enclosing circle: the new circle still contains all the points of the given set, except that it now passes through the farthest point A rather than strictly enclosing it.
  3. If the circle passes through 2 or more points, proceed to step 4. Otherwise, make the circle smaller by moving the center towards point A until the circle makes contact with another point B from the set.
  4. At this stage, we have a circle, C, that passes through two or more points of the given set. If the circle has an arc greater than half its circumference on which no points lie (a point-free arc), the circle can be made smaller. Let D and E be the points at the ends of this point-free arc. While keeping D and E on the circle's boundary, reduce the diameter of the circle until we reach either case (a) or case (b).
    Case (a)   The diameter of the circle equals the distance DE.
    • We are done!
    Case (b)   The circle C touches another point F from the set.
    • Check whether there exist point-free arcs of length more than half the circumference of C.
    • IF no such point-free arc exists THEN
      • We are done!
    • ELSE
      • Goto step 4. In this case, three points must lie on an arc less than half the circumference in length, and we repeat step 4 on the outer two of these three points.

Analysis

This algorithm is straightforward, but expensive. Steps 1, 2 and 3 require time linear in the number of points in the given set. In step 4 above, it takes linear time to find each new point F. However, finding the point F does not guarantee the termination of the algorithm: step 4 must be repeated until the circle contains no point-free arc longer than half its circumference. In the worst case, this requires (n-2) iterations of step 4, so the total time spent in step 4 can be on the order of the square of the size of the point set.
Hence, this algorithm is O(n²).


The remainder of this section is divided into two parts. The first part describes the algorithm used in the implementation (applet), and the second part presents the best-known algorithm for finding the minimal enclosing circle (MEC).

Applet’s Algorithm

This algorithm is due to Chrystal (1885).

The Algorithm

First, observe that the minimal enclosing circle (MEC) is entirely determined by the convex hull of the given set of points. The reason is that the points of the set touched by the minimal enclosing circle always lie on the convex hull of the set.
The first step of the algorithm is to compute the convex hull H of the points in linear time; we can do this because the points are kept sorted by x-coordinate. Let h denote the number of convex hull vertices. The next step is to pick any side, say S, of the convex hull. The main loop of the algorithm is as follows:

Main Loop

  1. For each vertex of H other than the two endpoints of S, compute the angle subtended by S. Let the minimum such angle occur at vertex v, and let the value of this minimum angle be α (alpha).
    • IF α ≥ 90 degrees THEN
      • We are done!
      • /* The minimal enclosing circle is the diametric circle of S */
    • IF α < 90 degrees THEN
      • Goto step 2 below.
  2. Since α < 90 degrees, check the remaining two angles of the triangle formed by S and v.
    • IF neither of these angles is obtuse THEN
      • We are done!
      • /* The MEC is determined by the two endpoints of S and the vertex v */
  3. If one of the other angles of the triangle formed by S and v is obtuse, then set S to be the side opposite the obtuse angle and repeat the main loop of the algorithm (i.e., Goto step 1 above). A code sketch of this loop follows the list.
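The following is a minimal Python sketch of the loop above, assuming the convex hull vertices are already computed and supplied as a list of (x, y) points; the helper names angle_at, diametric_circle and circumcircle are my own.

```python
import math

def angle_at(v, a, b):
    """Angle (in degrees) at vertex v subtended by the segment ab."""
    ax, ay = a[0] - v[0], a[1] - v[1]
    bx, by = b[0] - v[0], b[1] - v[1]
    cosang = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

def diametric_circle(a, b):
    """Circle with segment ab as diameter: (center, radius)."""
    center = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    return center, math.dist(a, b) / 2

def circumcircle(a, b, c):
    """Circle through three non-collinear points: (center, radius)."""
    d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    ux = ((a[0]**2 + a[1]**2) * (b[1] - c[1]) + (b[0]**2 + b[1]**2) * (c[1] - a[1])
          + (c[0]**2 + c[1]**2) * (a[1] - b[1])) / d
    uy = ((a[0]**2 + a[1]**2) * (c[0] - b[0]) + (b[0]**2 + b[1]**2) * (a[0] - c[0])
          + (c[0]**2 + c[1]**2) * (b[0] - a[0])) / d
    return (ux, uy), math.dist((ux, uy), a)

def chrystal_mec(hull):
    """Minimal enclosing circle of a point set given by its convex hull vertices."""
    a, b = hull[0], hull[1]                       # pick any side S = (a, b)
    while True:
        others = [p for p in hull if p != a and p != b]
        v = min(others, key=lambda p: angle_at(p, a, b))
        if angle_at(v, a, b) >= 90:               # step 1: diametric circle of S
            return diametric_circle(a, b)
        ang_a, ang_b = angle_at(a, b, v), angle_at(b, a, v)
        if ang_a <= 90 and ang_b <= 90:           # step 2: circumcircle of S and v
            return circumcircle(a, b, v)
        # step 3: replace S by the side opposite the obtuse angle and repeat
        a, b = (v, b) if ang_a > 90 else (a, v)

hull = [(0, 0), (4, 0), (2, 3), (1, 2)]           # convex hull, assumed precomputed
print(chrystal_mec(hull))                         # circle through (0,0), (4,0), (2,3)
```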

Analysis

Assuming the points are given in sorted order, the algorithm initializes in time linear in the number of points in the set. The main loop requires time linear in the number of convex hull points, and the main loop may be repeated once for each diagonal of the convex hull. The number of diagonals is proportional to the square of the number of convex hull points. Therefore, in the worst case, the total time of the algorithm is proportional to the number of points in the set plus the cube of the number of convex hull points.
In practice, however, the running time depends on the side initially chosen to start the algorithm, and the algorithm is expected to perform quite well.

Halting Condition

It remains to prove that the algorithm finds the minimal enclosing circle of the given set of points rather than looping forever. The proof follows from the fact that in each iteration the algorithm reduces the radius of the circle under consideration while ensuring that all the points of the set remain within the circle. Therefore, the circle eventually becomes the minimal enclosing circle.

Linear Time Algorithm

The algorithm was proposed by Nimrod Megiddo; it uses prune-and-search techniques from linear programming to find the minimal enclosing circle in O(n) time.

Prune-and-Search

The essence of Megiddo's algorithm is prune-and-search. In a prune-and-search algorithm, a linear amount of work is done at each step to reduce the input size by a constant fraction f. If this can be achieved, then the total amount of work done will reduce to O(n)·(1 + (1-f) + (1-f)² + ...). In this formula, the infinite series is geometric and sums to a constant value, and so the total running time is O(n). For example, suppose that by inspecting our set of n input elements we can discard 1/4 of them as irrelevant to the solution. By repeatedly applying this inspection to the remaining elements, we can reduce the input to a size which is trivial to solve, say n ≤ 3. The total time taken to achieve this reduction will be proportional to (n + 3n/4 + 9n/16 + ...). It is easy to show that this series approaches, and never exceeds, a limit of 4n. Therefore, the total running time is proportional to n, as required.
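The geometric-series argument can be summarized in one line: if a fraction f of the input is pruned in each round, the total work is proportional to

```latex
n \sum_{i=0}^{\infty} (1-f)^{i} = \frac{n}{f},
\qquad \text{e.g. } f = \tfrac{1}{4} \Rightarrow 4n, \quad f = \tfrac{1}{16} \Rightarrow 16n .
```

The case f = 1/16 is exactly the 16n bound used by Megiddo's algorithm below.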
The idea of using the geometric series to reduce an algorithm to linear time predates Megiddo's work; in particular, it had previously been used to develop O(n) median-finding algorithms. However, he was the first to apply it to a number of fundamental problems in computational geometry.

Megiddo's Linear-Time Algorithm

To find the minimal enclosing circle (MEC) of a set of points, Megiddo's algorithm discards at least n/16 points at each (linear-time) iteration. That is, given a set S of n points, the algorithm identifies n/16 points that can be removed from S without affecting the MEC of S. This procedure can be repeatedly applied until some trivial base case is reached (such as n=3), with the total running time proportional to (n + 15n/16 + 225n/256 + ...) = 16n.
In order to find n/16 points to discard, a great deal of cleverness is required. The algorithm makes heavy use of two subroutines:
  • median(S, >)
    • Takes a set S of elements and a metric for comparing them pairwise, and returns the median of the elements.
  • MEC-center(S, L)
    • Takes a set S of points and a line L, and determines which side of L the center of the MEC of S lies on.
As mentioned above, median() predates Megiddo's work, whereas the algorithm described here as MEC-center() was presented as part of his 1983 paper. To explore these procedures in detail would go beyond the scope of this outline, but each uses prune-and-search to run in linear time. The algorithm used by MEC-center() amounts to a simplified version of the algorithm as a whole.
Given these primitives, the algorithm for discarding n/16 input points runs as follows:
  • Arbitrarily pair up the n points in S to get n/2 pairs.
  • Construct a bisecting line for each pair of points, to get n/2 bisectors in total.
  • Call median() to find the bisector with median slope. Call this slope m_mid.
  • Pair up each bisector of slope ≥ m_mid with another of slope < m_mid, to get n/4 intersection points. Call the set of these points I.
  • Call median() to find the point in I with median y-value. Call this y-value y_mid.
  • Call MEC-center() to find which side of the line y = y_mid the MEC center C lies on. (Without loss of generality, suppose it lies above.)
  • Let I' be the subset of points of I whose y-values are less than y_mid. (I' contains n/8 points.)
  • Find a line L with slope m_mid such that half the points in I' lie to L's left, half to its right.
  • Call MEC-center() on L. Without loss of generality, suppose C lies on L's right.
  • Let I'' be the subset of I' whose points lie to the left of L. (I'' contains n/16 points.)
We can now discard one point in S for each of the n/16 intersection points in I''. The reasoning runs as follows. After two calls to MEC-center(), we have found that the MEC center C must lie above y_mid and to the right of L, whereas every point in I'' lies below y_mid and to the left of L.
Each point in I'' is the intersection of two bisector lines. One of these bisectors must have slope ≥ m_mid, and therefore can never pass through the quadrant where we know C to lie. Call this bisector B. We now know which side of B the center C lies on; of the two points whose bisector is B, let P_C be the one that lies on the same side as C, and let the other be P_X.
It is easy to show that P_C must be nearer to C than P_X. It follows that P_C cannot lie on the minimal enclosing circle, and thus we can safely discard a point P_C for each of the n/16 intersection points in I''.
We have not discussed here how this algorithm deals with degenerate input (parallel bisectors, collinear points, etc.), but it turns out that the same performance guarantees hold in those cases; in fact, for degenerate input the algorithm is able to discard more than n/16 points. In short, Megiddo's algorithm is guaranteed to prune at least n/16 points in each iteration, independent of the input.
Therefore, by the argument based on the geometric series, Megiddo's algorithm computes the minimal enclosing circle in linear time.


