Diagonal methods for expensive global optimization
The goal of global optimization is to find the best possible solution to problems arising in many areas of human activity. Russian scientists from Lobachevsky State University of Nizhni Novgorod have improved methods of global optimization by developing a so-called "diagonal approach," whose principal advantage over other methods is its speed.
When solving multi-parameter applied problems, researchers resort to calculations that find the optimal solution, the one giving the maximum benefit at minimum cost, so the search for mathematical tools for such calculations is highly relevant. The complexity of an optimization task depends on the parameters and values to be calculated. Sometimes only one factor needs to be taken into account and the structure of the problem is simple (it has a single minimum to be found), so traditional mathematical methods of local optimization can easily cope with the task.
In many applied problems, however, the objective function has numerous local minima, and traditional local algorithms cannot distinguish the global optimum among them, so new methods for solving global optimization problems need to be developed. A computer is supplied with a procedure depending on several numerical parameters, together with the constraints that must be observed in the calculations, and the system has to offer the most appropriate solution within the limits that have been set.
The idea of diagonal methods was proposed by the Hungarian mathematician János Pintér in 1996, and the Russian scientist Yaroslav Sergeyev has since proposed and implemented a number of fundamental developments of the approach.
The results of studies over the past 20 years were published in the monograph "Deterministic Global Optimization: An Introduction to the Diagonal Approach," written in collaboration with Dmitry Kvasov. What is the essence of the diagonal approach? The set of all problem parameters can be represented as a multidimensional hypercube. Any object can be divided into many cubes so small that any shape, even a round one, can be assembled from them.
Imagine dividing an apple into pieces. Each piece can in turn be cut into many smaller pieces, over and over. We are limited by the thickness of the knife, but there are no such limitations in mathematics: the apple can be divided into arbitrarily small parts until the desired result is reached. The rule for calculating the characteristic of each part and the method for the best partitioning of the hypercube are fundamental to this approach. In the example with the apple, the study would be aimed at finding ways to cut the apple and to select the tastiest piece.
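The "characteristic" mentioned above can be illustrated with a minimal sketch. In diagonal-style methods, the objective is sampled at the two endpoints of each hyperinterval's main diagonal, and a characteristic, here a simple Lipschitz lower bound, a hypothetical choice for this sketch rather than the authors' actual rule, ranks the intervals; the most promising one is subdivided next.

```python
import math

# Each hyperinterval is a tuple (a, b, f_a, f_b): the two diagonal
# vertices a and b, and the objective's values at those vertices.

def diagonal_length(a, b):
    """Euclidean length of the main diagonal of the box [a, b]."""
    return math.sqrt(sum((bi - ai) ** 2 for ai, bi in zip(a, b)))

def characteristic(f_a, f_b, a, b, lipschitz):
    # A Lipschitz lower bound on f over [a, b]: the lower the value,
    # the more promising the interval. The constant `lipschitz` is an
    # assumption of this sketch; real methods estimate it adaptively.
    return 0.5 * (f_a + f_b) - 0.5 * lipschitz * diagonal_length(a, b)

def most_promising(intervals, lipschitz):
    """Return the interval with the smallest (best) characteristic."""
    return min(intervals,
               key=lambda t: characteristic(t[2], t[3], t[0], t[1], lipschitz))
```

For example, an interval whose diagonal vertices both yield small function values receives a lower characteristic than one with large values, so the search concentrates where the global minimum is more likely to lie.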
"Our method of hypercube partitioning differs from traditional ones in that the hyperinterval is divided into a number of subintervals equal to a power of three (three, nine, or 27 new subintervals arise in each partitioning). Additionally, the diagonals of these hypercubes rotate in the multidimensional space according to a specific rule we propose, in contrast to traditional methods in which the diagonals are fixed and parallel to each other. This rotation allows us to obtain a larger number of subintervals while decreasing the number of evaluations of the function to be optimized," explains Yaroslav Sergeyev.
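The "divide into three" step in the quote can be sketched as follows. This is a simplified illustration, not the exact partitioning scheme from the monograph: a hyperinterval is split into three equal boxes along its longest side, so one application produces 3 children, and repeating it along further coordinates yields 9, 27, and so on.

```python
def trisect(a, b):
    """Split the box [a, b] into three equal boxes along its longest side.

    a, b are sequences of per-coordinate lower and upper bounds.
    This sketch omits the diagonal-rotation rule from the quote, which
    lets the real method reuse vertex evaluations between subintervals.
    """
    a, b = list(a), list(b)
    # Pick the coordinate with the longest side to keep boxes well-shaped.
    i = max(range(len(a)), key=lambda j: abs(b[j] - a[j]))
    step = (b[i] - a[i]) / 3.0
    children = []
    for k in range(3):
        lo, hi = a[:], b[:]
        lo[i] = a[i] + k * step
        hi[i] = a[i] + (k + 1) * step
        children.append((lo, hi))
    return children
```

Evaluating the objective only at diagonal vertices, and arranging the partition so that children share those vertices, is what keeps the number of expensive function evaluations low as the subdivision deepens.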
Another feature of the method is that it takes into account the qualitative features of the function's behavior, whereas the traditional approach always assumes worst-case behavior.
The developed methods have been applied to time-consuming real-world problems, for example optimizing the topology and reliability of switching networks, image processing, optimal design of control systems, and signal filtering. The efficiency of devices and systems implementing these processes has increased dramatically with the use of the new methods. Currently, Yaroslav Sergeyev and his colleagues are working on parallel versions of the diagonal method that will allow powerful supercomputer systems to be used for solving highly complex problems.