EXPERIMENTAL BEHAVIOR OF JURIK'S NEAREST POINT APPROACH ALGORITHM FOR LINEAR PROGRAMMING

Linear programming problems
In linear programming problems [3] one tries to minimize a linear cost function over a set of higher-dimensional vectors bounded by linear constraints. The simplex algorithm [4] for solving this problem was proposed by G. B. Dantzig in 1947. The algorithm is iterative, moving through vertices of the convex polyhedron bounded by the problem's constraints. It performs quite well in practice and is still widely used. However, Klee and Minty showed in 1972 that in the worst case it takes exponentially many steps. Several polynomial-time algorithms were later proposed, such as L. Khachiyan's ellipsoid method in 1979 and N. Karmarkar's interior-point method in 1984. Research in this topic of high practical importance is ongoing, with open problems even for the simplex method, such as the Hirsch conjecture.

Jurik's algorithm
Jurik in [1] proposed a new algorithm for solving linear optimization problems. It starts by constructing a hyperplane H, orthogonal to the cost direction, that lies sufficiently far from the convex polyhedron P. One then observes that finding the optimal x is equivalent to finding a pair of points (w, x), where w lies in H and x lies in P, whose distance is minimal. The algorithm has an outer iterative cycle and, embedded within it, an inner iterative cycle. Let us briefly describe what is computed at each iteration of the cycles.
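The equivalence can be spelled out explicitly. Assuming (our notation, not necessarily Jurik's) that H is the level set {x : cᵀx = t₀} with t₀ below the optimal cost, the distance from any x in P to H is an affine function of the cost, so the two minimizations coincide:

```latex
d(x, H) = \frac{c^{\top} x - t_0}{\lVert c \rVert}
\quad \text{for } x \in P,
\qquad\text{and hence}\qquad
\min_{\substack{w \in H \\ x \in P}} \lVert w - x \rVert
  \;=\; \min_{x \in P} d(x, H)
  \;=\; \frac{\displaystyle\min_{x \in P} c^{\top} x \;-\; t_0}{\lVert c \rVert}.
```

In particular, the x-component of any nearest pair (w, x) is an optimal solution of the linear program.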
The algorithm stipulates how to generate the initial points w_0 and x_0 (the latter is generated first). Suppose that in the k-th iteration of the outer cycle we have generated a point w_k lying in the hyperplane H and a point x_k in P (necessarily on the boundary of P). A line L_k is then defined as the projection of the line passing through w_k and x_k onto the hyperplane H. New points w_{k+1} and x_{k+1} are then generated in the inner cycle (which Jurik showed terminates in a finite number of steps) as the points of minimal distance lying on L_k and P, respectively. Because the line L_k contains the projection r_k of the point x_k onto H, with increasing k the distance of the point x_k to H (the cost function) cannot increase. In fact, since L_k contains the projection r_k of x_k onto H, we have the following inequalities for the distances:

d(w_{k+1}, x_{k+1}) ≤ d(r_k, x_k) = d(x_k, H),

and thus, since w_{k+1} lies in H,

d(x_{k+1}, H) ≤ d(w_{k+1}, x_{k+1}) ≤ d(x_k, H).

At this time, it is not known whether the algorithm always converges. Jurik himself proved that the inner cycle stops after a finite number of iterations. The following conjecture was indicated by Jurik:

Conjecture A
Jurik's algorithm converges to the optimum in a finite number of steps.
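The outer cycle described above can be sketched in code. The following is a hypothetical reconstruction, not Jurik's implementation: we assume P is given as {x : Ax ≤ b} and H as {x : c·x = t₀} with t₀ below the optimum, we take the initial points w_0 and x_0 as inputs (Jurik [1] stipulates how to generate them), and a general-purpose solver (SLSQP) stands in for the inner cycle, whose actual steps we do not reproduce.

```python
import numpy as np
from scipy.optimize import minimize

def project_onto_H(p, c, t0):
    """Orthogonal projection of point p onto H = {x : c.x = t0}."""
    return p + (t0 - c @ p) * c / (c @ c)

def nearest_pair(point, direction, A, b):
    """Minimize |w - x|^2 over w = point + s*direction on L_k and x in P.
    A quadratic-programming solve stands in for Jurik's inner cycle."""
    n = len(point)

    def objective(z):                 # z = (s, x_1, ..., x_n)
        s, x = z[0], z[1:]
        return np.sum((point + s * direction - x) ** 2)

    cons = [{"type": "ineq", "fun": lambda z: b - A @ z[1:]}]
    res = minimize(objective, np.zeros(n + 1), constraints=cons)
    s, x = res.x[0], res.x[1:]
    return point + s * direction, x

def jurik_outer(A, b, c, x0, w0, t0, max_iter=50, tol=1e-7):
    """Outer cycle: iterate nearest pairs between the line L_k and P."""
    c = np.asarray(c, float)
    xk, wk = np.asarray(x0, float), np.asarray(w0, float)
    for _ in range(max_iter):
        rk = project_onto_H(xk, c, t0)   # projection r_k of x_k onto H
        d = rk - wk                      # L_k is the line through w_k and r_k
        if np.linalg.norm(d) < tol:      # line degenerates: x_k is optimal
            break
        wk1, xk1 = nearest_pair(wk, d / np.linalg.norm(d), A, b)
        if np.linalg.norm(xk1 - xk) < tol:
            xk = xk1
            break
        wk, xk = wk1, xk1
    return xk
```

For instance, minimizing x₁ + x₂ over the unit square with t₀ = −10 and w_0 chosen as the projection of the origin onto H drives x_k to the optimum (0, 0).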

An example needing many line iterations
Jurik in his thesis [2] reports that in the vast majority of cases he needed to use only two lines to find the optimum. In our experiments we found a 3-dimensional problem that takes many more iterations. Let us define the vectors:

This leads us to formulate the following conjecture, which indicates that in the worst case the running time of Jurik's algorithm is worse than that of the simplex algorithm.

Conjecture B
The number of steps in Jurik's algorithm is not bounded by any function of the problem's dimension and the number of constraints.

Typical behavior
In this section we present an example that, based on our experience, we believe is quite typical of the running behavior of Jurik's algorithm on three-dimensional problems. We chose three random vectors:

Conclusion
Jurik presented in his works [1], [2] compelling evidence that his new algorithm is competitive with commonly used linear optimization algorithms. Based on our, admittedly very limited, experimental experience, we stated two conjectures about the behavior of the algorithm. If Conjecture B holds, a modification of the algorithm is needed to deal with some linear programming problems, perhaps by rescaling the problem, a clever choice of the hyperplane H, or some other method.