G I Marchuk's plenary: ICM 1970


Gurii Ivanovich Marchuk gave a plenary lecture to the International Congress of Mathematicians in Nice in 1970. The title of his talk was Methods and Problems of Computational Mathematics. We give below a version of Marchuk's lecture in which we have corrected the spelling of certain names (the problem being transliteration).

Methods and Problems of Computational Mathematics.

Computational mathematics, being part of mathematics, currently has at its disposal powerful techniques for solving problems of science and engineering. The range of computational methods is so wide that it is practically impossible to cover them fully in one report. A series of interesting investigations by Bellman, Dreyfus et al. devoted to dynamic programming and some related problems was discussed at the previous Congress of Mathematicians. Therefore we shall confine ourselves to some selected questions connected with the theory of approximate operations in finite- and infinite-dimensional functional spaces with which the author has been concerned. Even so, however, it is impossible to cover many interesting studies in the field because of the time limit given to the report. For the same reason the author, regretfully, had to reduce references to the original studies to a minimum.

Large-scale electronic computers gave rise to algorithmic constructions and mathematical experimentation over a wide area of science and engineering. This attracted new research personnel to the problems of computational mathematics. The valuable experience we had had in solving applied problems was later used to devise effective methods and algorithms of computational mathematics.

The methods of computational mathematics are closely related to the state of the computer art. New concepts and methods in computational mathematics and its numerous applications are formed under the essential influence of every new stage of computer technology.

The standard of research in computational mathematics is largely dependent on its actual connection with fundamental areas of mathematics. First of all I should like to mention functional analysis, differential equations, algebra and logic, the theory of probability, the calculus of variations, etc. A mutual exchange of ideas between different branches of mathematics has intensified in the recent decade. This is true in the first place of computational mathematics, which has used the results of fundamental mathematical areas to develop new and more sophisticated methods and to improve the old ones.

At the same time it should be emphasised that applications have an important influence on computational mathematics. Thus, for instance, mathematical simulation has often stimulated the discovery of new approaches which are now among the most valuable possessions of computational mathematics. Such applied areas as hydrodynamics, atomic physics, mathematical economics and control theory are the most important examples.

1. The theory of approximation, stability and convergence of difference schemes.

The wide use of finite-difference methods in the differential equations of mathematical physics required a detailed study of those features of difference equations that affect in the first place the quality of difference schemes. Among them are above all the stability and convergence conditions.

This unfavourable feature of difference equations, their possible instability, together with the corresponding studies of John von Neumann, initiated theoretical investigations to determine the relation between convergence and stability and to find effective stability criteria for difference schemes.

Later on several authors formulated the following fundamental theorem, called the equivalence theorem: if a difference scheme approximates a linear homogeneous differential equation for a properly posed problem, then the stability of the difference scheme is a necessary and sufficient condition for its convergence. The final formulation and the proof of this theorem for an abstract evolution equation in a Banach space were given by Lax. A generalisation of the equivalence theorem to the non-homogeneous linear differential equation was given by Richtmyer. One can make the stability conditions of the scheme less strict provided that the initial data are sufficiently smooth. This idea is implemented in the Strang equivalence theorem using the concept of weak stability.

Speaking of effective stability conditions it is necessary to mention the 1950 paper of John von Neumann and Richtmyer, where a so-called local stability criterion was formulated. Such new notions were introduced as the symbol of a difference scheme, the spectrum of a family of difference operators and the kernel of the spectrum of the family, which made it possible to estimate norms of the powers of the step operators. These estimates were in many cases effectively used in stability analysis.
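To make the criterion concrete: for the explicit scheme applied to the model heat equation u_t = u_xx, substituting a Fourier mode gives an amplification factor whose modulus must not exceed one. The following minimal sketch (textbook material, not tied to any particular paper cited above) checks this numerically:

```python
import numpy as np

# von Neumann (local) stability analysis of the explicit FTCS scheme for
# u_t = u_xx.  A Fourier mode u_j^n = g^n exp(i j theta) has amplification
# factor g(theta) = 1 - 4 r sin^2(theta/2), with r = dt/dx^2; the scheme
# is stable precisely when max |g| <= 1, i.e. when r <= 1/2.
def amplification_factor(r, theta):
    return 1.0 - 4.0 * r * np.sin(theta / 2.0) ** 2

theta = np.linspace(-np.pi, np.pi, 2001)
for r in (0.25, 0.5, 0.75):          # below, at and above the stability limit
    g_max = np.max(np.abs(amplification_factor(r, theta)))
    verdict = "stable" if g_max <= 1.0 + 1e-12 else "unstable"
    print(f"r = {r:4.2f}:  max|g| = {g_max:.3f}  ({verdict})")
```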

An interesting approach to difference schemes with variable coefficients is associated with the idea of dissipativity. This idea was implemented in the studies of Kreiss. His theorems relate the order of dissipativity of the difference equations approximating systems of hyperbolic equations to the order of their accuracy. Important results have been derived by a so-called energy method which is based on the concept of strong stability. The idea of the method is to choose a norm for the vector solution in which the norm grows from step to step by a factor of no more than 1 + O(Δt).

The energy method was first introduced by Courant, Friedrichs and Lewy and developed by other authors, in particular by Ladyzhenskaya and Lees.

Here it is necessary to mention the theory of the convergence of difference schemes developed by Samarsky, who has used energy inequalities and a priori estimates. The theory gives necessary and sufficient stability conditions for two- and three-layer schemes, formulated in the form of inequalities containing the operator coefficients of the difference schemes.

Of late the interest of mathematicians has been attracted to the stability of boundary-value problems for hyperbolic equations. A certain contribution to this has been made by Kreiss, who has formulated necessary and sufficient stability conditions for some classes of problems. Ryabenkii has made a deep study of the theory of boundary-value problems for difference equations with constant coefficients. As before, the theory of difference equations for the boundary-value problems of mathematical physics is of supreme concern to mathematicians.

2. A numerical solution of the problems of mathematical physics.

The studies of approximation, stability and convergence have provided the necessary basis for wide-ranging research on effective difference schemes applied to the problems of mathematical physics. The algorithms of finite-difference methods combine, as a rule, the construction of a difference analogue of the equation with a method for its solution. Therefore the advance of the constructive theory of finite-difference methods depends on a mutually coordinated development of these two aspects.

If we try to summarise the vast experience of recent years in the development of finite difference methods we can conventionally distinguish some main trends.

2.1. One such trend is concerned with finding efficient algorithms for multi-dimensional stationary problems of mathematical physics.

As a result of the success achieved in the solution of simultaneous linear algebraic equations with Jacobi and block-tridiagonal matrices, a few excellent algorithms have emerged in which factorisation of the difference operator is used. Special mention should be made of the different variants of the direct factorisation method proposed at the Institute of Applied Mathematics (AS, USSR), which have been effectively applied to the solution of different classes of problems.

One can see that besides the exact factorisation methods there is a rapid development of approximate factorisation methods, in which the factorisation of the operator is performed by means of iterations.

The early sixties were marked by a major contribution to computational mathematics associated with the names of Douglas, Peaceman and Rachford, who suggested the alternating direction method. The success of the method was ensured by a simple reduction of a multi-dimensional problem to a sequence of one-dimensional problems with Jacobi matrices, which are convenient to handle. The theory of the alternating direction method has been developed by Douglas and Gunn, Birkhoff, Wachspress, Varga and also by Kellogg, Bakhvalov, Vorobjov, Widlund et al.
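A minimal sketch of one Peaceman-Rachford alternating-direction step for the two-dimensional heat equation may clarify the reduction; the grid, boundary conditions and step sizes below are illustrative assumptions, not taken from the papers cited:

```python
import numpy as np

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal ('Jacobi matrix') system by elimination."""
    n = len(diag)
    b, d = diag.astype(float).copy(), rhs.astype(float).copy()
    for i in range(1, n):
        m = sub[i - 1] / b[i - 1]
        b[i] -= m * sup[i - 1]
        d[i] -= m * d[i - 1]
    x = np.empty(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - sup[i] * x[i + 1]) / b[i]
    return x

def second_diff(u, axis, h):
    """1-D second difference along `axis`, zero Dirichlet data outside the grid."""
    up, um = np.roll(u, -1, axis), np.roll(u, 1, axis)
    if axis == 0:
        up[-1, :] = 0.0; um[0, :] = 0.0
    else:
        up[:, -1] = 0.0; um[:, 0] = 0.0
    return (up - 2.0 * u + um) / h ** 2

def peaceman_rachford_step(u, dt, h):
    """One ADI step for u_t = u_xx + u_yy: implicit in x, then implicit in y."""
    n = u.shape[0]
    r = dt / (2.0 * h ** 2)
    off = -r * np.ones(n - 1)
    diag = (1.0 + 2.0 * r) * np.ones(n)
    ustar = np.empty_like(u)                     # half-step: implicit in x
    rhs = u + (dt / 2.0) * second_diff(u, axis=1, h=h)
    for j in range(n):
        ustar[:, j] = thomas(off, diag, off, rhs[:, j])
    unew = np.empty_like(u)                      # half-step: implicit in y
    rhs = ustar + (dt / 2.0) * second_diff(ustar, axis=0, h=h)
    for i in range(n):
        unew[i, :] = thomas(off, diag, off, rhs[i, :])
    return unew

n = 64; h = 1.0 / (n + 1)
x, y = np.meshgrid(np.linspace(h, 1 - h, n), np.linspace(h, 1 - h, n))
u = np.sin(np.pi * x) * np.sin(np.pi * y)        # decays like exp(-2 pi^2 t)
for _ in range(10):
    u = peaceman_rachford_step(u, dt=1e-3, h=h)
print("peak after 10 steps:", u.max())           # roughly exp(-0.197) ~ 0.82
```

Each half-step costs only a sequence of tridiagonal solves, which is the source of the method's efficiency.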

Later the Soviet mathematicians Yanenko, Dyakonov, Samarsky and others developed a so-called splitting-up method. The point is that each auxiliary operator need not by itself approximate the initial operator; it is only on the whole that such an approximation, in special norms, must exist.

A series of investigations has been devoted to a choice of optimisation parameters of splitting-up schemes by means of spectral and variational techniques.

2.2. The experience we have in the solution of one-dimensional problems represents a solid base for the development of algorithms for multi-dimensional problems of mathematical physics. An important role in the development of new approaches to the solution of non-stationary two-dimensional problems belongs to the alternating direction method.

Further advancement of the methods for multi-dimensional non-stationary problems is connected with splitting-up techniques based, as a rule, on non-homogeneous difference approximations of the initial differential operators. The mathematical technique rests on splitting a compound operator into simple ones. If this approach is used, the given equation can be solved by integrating simpler equations. In this case the intermediate schemes have to satisfy the approximation and stability conditions only as a whole, which permits flexible schemes to be constructed for practically all problems of mathematical physics.
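The mechanism can be seen on a finite-dimensional model. In the sketch below (the matrices are random illustrative data), du/dt = (A + B)u is advanced by solving the two simpler sub-problems in turn; neither factor approximates the full operator, but their product does, to first order in the step:

```python
import numpy as np
from scipy.linalg import expm   # matrix exponential

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
u0 = rng.standard_normal(4)
T = 1.0

for n in (10, 20, 40, 80):
    dt = T / n
    lie_step = expm(dt * B) @ expm(dt * A)       # split step: A first, then B
    u = np.linalg.matrix_power(lie_step, n) @ u0
    err = np.linalg.norm(u - expm(T * (A + B)) @ u0)
    print(f"n = {n:3d} steps:  error = {err:.2e}")   # roughly halves with dt
```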

Splitting-up schemes for implicit approximations have been suggested by Yanenko, Dyakonov, Samarsky et al. and applied to various problems. Such schemes have stimulated a more general computational approach to the problems of mathematical physics which has been called the weak approximation method.

French scientists Lions, Temam, Bensoussan, Glowinski et al. have made an important contribution to the splitting-up methods and theoretically substantiated a number of new approaches. These investigations are especially important for fluid dynamics, the theory of plasticity and the control theory. The method of decomposition and decentralisation formulated by these scientists should be specially mentioned. It is closely connected with the method of weak approximation.

Recently a class of splitting-up schemes has been found that is equivalent in accuracy to the Crank-Nicolson difference scheme and applicable to non-stationary operators. These schemes are absolutely stable for systems of equations with positive semi-definite operators depending explicitly on the space and time coordinates. The method is easily extended to quasi-linear equations.
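A sketch of the symmetrised variant (again on illustrative random matrices): arranging the split step as a half-step, a full step and a half-step restores second-order accuracy in the time step, the analogue of the Crank-Nicolson property mentioned above:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
u0 = rng.standard_normal(4)
T = 1.0

for n in (10, 20, 40, 80):
    dt = T / n
    sym_step = expm(dt / 2 * A) @ expm(dt * B) @ expm(dt / 2 * A)
    err = np.linalg.norm(np.linalg.matrix_power(sym_step, n) @ u0
                         - expm(T * (A + B)) @ u0)
    print(f"n = {n:3d} steps:  error = {err:.2e}")   # roughly quarters with dt
```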

Lax and Wendroff have suggested a kind of predictor-corrector scheme. This approach is used in hydrodynamic, meteorological and oceanological problems.

2.3. In recent years there has been a rapid development of the so-called particle-in-cell method suggested by Harlow and applied to multi-dimensional problems of mathematical physics. It is widely used to calculate multi-dimensional hydrodynamic flows with strong deformation of the fluid, large relative displacements and colliding surfaces. We can expect that in the years to come the applicability of the method will be extended to further multi-dimensional problems.

2.4. The Monte-Carlo method suggested by John von Neumann and Ulam has been developed now for more than two decades. From the very beginning it turned out that the Monte-Carlo method was effective only on very fast computers because a great number of samples is required to reduce the mean squared error of a solution.

However, in spite of the difficulties of putting this method on middle-scale computers, and maybe due to them, the theory of the method has been considerably improved, which has increased its efficiency. The basic ideas behind this considerable improvement of the method comprise the use of conditional probabilities and of statistical weight coefficients, which can be found from information on the solutions of the conjugate equations, the latter being related to the essential functionals inherent in the problems.
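A minimal sketch on a toy integral (the integrand and the sampling density are illustrative choices) shows both effects: the slow 1/sqrt(N) decay of the plain estimate's error, and the variance reduction obtained by drawing samples from a density that roughly follows the integrand and re-weighting them:

```python
import numpy as np

# Estimate I = integral of exp(-x) over [0, 1]  (exact value 1 - 1/e).
rng = np.random.default_rng(0)
exact = 1.0 - np.exp(-1.0)

for n in (10**2, 10**4, 10**6):
    u = rng.uniform(0.0, 1.0, n)
    plain = np.exp(-u).mean()                 # uniform sampling
    # Importance sampling: draw x from p(x) = (4 - 2x)/3 via its inverse CDF
    # and average the statistical weights f(x)/p(x).
    x = 2.0 - np.sqrt(4.0 - 3.0 * u)
    weighted = (3.0 * np.exp(-x) / (4.0 - 2.0 * x)).mean()
    print(f"N = {n:7d}:  plain error = {abs(plain - exact):.1e},"
          f"  weighted error = {abs(weighted - exact):.1e}")
```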

The simplicity and universality of this method will undoubtedly make it an important tool of computational mathematics.

2.5. Lately there has been much interest in variational methods applied to problems of mathematical physics. The variational methods of Ritz, Galerkin, Trefftz and others have long been classical in computational mathematics.

Not long ago there emerged a new trend in variational methods, the so-called method of finite elements, or finite functions. Its main idea was expressed by Courant as far back as the nineteen-forties. The essence of the method is that one seeks an approximate solution in the form of a linear combination of functions with compact support of the order of the mesh width h. In other words, one takes as trial functions special functions of polynomial form identically equal to zero outside a fixed domain having a characteristic dimension of several h's. The main problem here is the theory of approximation of functions by a given system of finite elements.
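For the model problem -u'' = f on (0, 1) with u(0) = u(1) = 0, a minimal sketch with the simplest such trial functions, piecewise-linear "hat" functions supported on two mesh intervals, looks as follows (the problem and mesh are illustrative):

```python
import numpy as np

n = 50                          # interior nodes; mesh width h
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# Stiffness matrix K_ij = integral of phi_i' phi_j'.  Hats overlap only with
# their immediate neighbours, so K is tridiagonal: (1/h) * tridiag(-1, 2, -1).
K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
b = h * np.ones(n)              # load vector for f = 1: integral of phi_i

u = np.linalg.solve(K, b)
# For f = 1 the exact solution is x(1-x)/2, and in this 1-D model the method
# happens to be exact at the nodes, so the error is at rounding level.
print("max nodal error:", np.max(np.abs(u - x * (1.0 - x) / 2.0)))
```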

An important contribution to the finite element method has been made by Birkhoff, Schultz, Varga et al. A systematic study of the theory and applications of the method has been carried out by Aubin, Babuska, Fix and by Strang, Bramble, Douglas and others.

Usually the main obstacle one comes across in using variational methods is the choice of simple functions satisfying the boundary conditions. It can be overcome by means of special variational functionals. For this purpose one employs a so-called penalty method or a weight method, which reduces the initial problem to one with natural boundary conditions. The finite element method is close in its idea to the method of spline functions.

The finite element method is closely associated with the application of a variational approach to constructing finite difference equations corresponding to differential equations of mathematical physics. Lions, Cea, Aubin, Raviart and other authors have contributed to this area of research.

There is no doubt that the scope of variational methods will grow as the problems become more and more complicated. The variational approach in combination with other methods will be a powerful tool in computational mathematics.

3. Conditionally properly posed problems.

Correctness of a problem plays an important role in a numerical solution of mathematical physics equations. The concept of correctness was introduced by Hadamard at the beginning of our century. We know a variety of classical problems properly posed in the sense of Hadamard. However, with a more profound study of various problems in natural sciences and engineering it became necessary to solve so-called conditionally properly posed problems. Tikhonov has formulated the requirements which proved to be natural in a formulation of improperly posed problems in the sense of Hadamard. Tikhonov introduced a concept of regularisation.

The results of the investigations of conditionally properly posed problems are presented in M M Lavrentiev's well-known monograph "Some improperly posed problems of mathematical physics".

An interesting approach to the formulation of improperly posed problems in the sense of Hadamard is based on probabilistic methods. The most complete investigations have been made by M M Lavrentiev and Vasiliev. Different aspects of the theory of these problems in mathematical physics are discussed by Jones, Douglas, S Krein, Miller, Cannon and others.

Lions and Lattes have formulated a numerical method for the inverse evolution equation using so-called quasi-reversibility.

As evidenced by the tendencies in solving conditionally properly posed problems, the techniques used here are closely associated with the theory of optimisation of computations, briefly reviewed later in this paper.

4. Numerical methods in linear algebra.

The solution of simultaneous algebraic equations and the computation of eigenvalues and eigenvectors of matrices are important problems of computational mathematics. Speaking about the numerical methods and problems of linear algebra in recent years, it is necessary first of all to emphasise the growing interest in the solution of large systems of equations, in the solution of ill-conditioned systems and in spectral problems for arbitrary matrices. Much attention has been paid to the use of a priori information in the process of the solution. Under the influence of computer development the old numerical methods of linear algebra have been reconsidered. The increasing use of computers has stimulated the creation of new algorithms well suited to automatic calculation.

4.1. Direct methods play an important role when simultaneous linear algebraic equations are solved or inverse matrices and determinants are found.

Direct methods have been considerably developed, first by Faddeeva, Bauer, Householder, Wilkinson and then by Henrici, Forsythe, Golub, Kublanovskaya, Voevodin and others. Using some elementary transformations one can represent the initial matrix as a product of two matrices, each being easily inverted.
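The standard instance of this idea is the factorisation A = LU produced by Gaussian elimination, after which Ax = b costs only two triangular substitutions. A minimal sketch (without pivoting, so only for illustration on a safely conditioned matrix):

```python
import numpy as np

def lu(A):
    """Doolittle factorisation A = L U by elementary row transformations."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]        # no pivoting: illustration only
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)   # diagonally dominant
b = rng.standard_normal(5)

L, U = lu(A)
y = np.linalg.solve(L, b)       # forward substitution (generic solver for brevity)
x = np.linalg.solve(U, y)       # back substitution
print("residual:", np.linalg.norm(A @ x - b))
```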

We used to compare computational methods according to the number of arithmetic operations and the memory requirements. Now we ought to pay attention also to their accuracy. This means that round-off error analysis has become an essential feature of the method itself.

The corresponding investigations were started by John von Neumann, Goldstine, Turing, Givens et al. A systematic study of errors was first made by Wilkinson. His results were later systematised in his excellent monograph "The Algebraic Eigenvalue Problem", where the method of equivalent perturbations was taken as the basic mathematical technique. As a result, estimates of the norms of perturbations were obtained for all fundamental transformations of linear algebra.

In parallel with the method of equivalent perturbations there was an intensive development of the statistical error theory. The results obtained by Bakhvalov, Voevodin, Kim et al. initiated an investigation of the real distribution of round-off errors. The statistical methods are certain to play an important role in round-off error analysis.

4.2. Iterative methods remain very important in linear algebra. Active progress in these methods has resulted in a number of powerful algorithms which are effectively used on computers.

At present there are several trends in the construction of iterative processes and methods aimed at minimising the number of arithmetic operations needed to obtain a solution, with the emphasis put on the use of spectral characteristics of the operators involved. The choice of iteration parameters is part of the optimisation of the computational algorithm. The major difficulty here is, as a rule, to determine the boundaries of the spectra of the matrices.

Spectral optimisation of iterative methods stimulates the formulation of a number of problems. Here we shall discuss two of them.

More attention has recently been attracted to the Lanczos transform of arbitrary matrices, which leads to an equivalent system of equations with a symmetric matrix whose spectrum occupies two segments symmetric with respect to zero.
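One standard way to obtain such a symmetric equivalent (shown here on random illustrative data) is to pass to the augmented matrix S = [[0, A], [A^T, 0]], which is symmetric and whose eigenvalues are the singular values of A taken with both signs:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Z = np.zeros((4, 4))
S = np.block([[Z, A], [A.T, Z]])                 # symmetric augmented matrix

eig = np.sort(np.linalg.eigvalsh(S))
sig = np.linalg.svd(A, compute_uv=False)
print("eigenvalues of S :", np.round(eig, 4))
print("+/- singular vals:", np.round(np.sort(np.concatenate([-sig, sig])), 4))
```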

The second problem is the search for effective methods intended to determine the matrix eigenvalue of minimum modulus.

Let us discuss the application of variational principles to iterative methods. Such methods allow a successive minimisation of some functional which attains its minimum on the desired solution. There has been much interest in such problems. Kantorovich, Lanczos, Hestenes and Stiefel, as well as Krasnoselsky and Krein et al., have stated a variational approach to iterative methods. I should like to mention the recent papers of Petryshyn, Forsythe, Daniel, Yu Kuznetsov, Godunov and others.

When the variational approach to iterative methods is used, one can select the relaxation parameters on the basis of a posteriori information obtained at each step. This is the case for the steepest descent method and for the iterative method with minimal discrepancies (residuals). This is the merit of the variational approach. The rate of convergence seems to be not lower than the rate obtained using Chebyshev polynomials.
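A minimal sketch of the simplest such method, steepest descent for a symmetric positive definite system (the test matrix is an illustrative assumption): the functional F(x) = (1/2)x·Ax - b·x is minimised along the current residual, and the relaxation parameter at each step comes entirely from a posteriori information:

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=10_000):
    x = x0.copy()
    for k in range(max_iter):
        r = b - A @ x                      # a posteriori information: residual
        if np.linalg.norm(r) < tol:
            return x, k
        alpha = (r @ r) / (r @ (A @ r))    # exact minimisation along r
        x = x + alpha * r
    return x, max_iter

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6.0 * np.eye(6)              # symmetric positive definite
b = rng.standard_normal(6)

x, iters = steepest_descent(A, b, np.zeros(6))
print(f"{iters} iterations, final residual {np.linalg.norm(b - A @ x):.1e}")
```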

There are also probabilistic techniques intended for choosing the optimisation parameters of iterative processes. A series of interesting results has been obtained by Vorobjov.

The Young-Frankel over-relaxation method has not yet lost its importance. It has become classical and is treated in the monographs of Wasow and Forsythe, Varga, Isaacson et al.
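A minimal sketch of the iteration on a one-dimensional Poisson model problem (the matrix and the trial values of omega are illustrative): each Gauss-Seidel update is over-relaxed by a factor omega in (1, 2):

```python
import numpy as np

def sor(A, b, omega, x0, sweeps):
    """Successive over-relaxation; omega = 1 reduces to Gauss-Seidel."""
    x = x0.copy()
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
            x[i] += omega * (gs - x[i])
    return x

n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # tridiag(-1, 2, -1)
b = np.ones(n)
for omega in (1.0, 1.5, 1.9):
    x = sor(A, b, omega, np.zeros(n), sweeps=100)
    print(f"omega = {omega}:  residual = {np.linalg.norm(b - A @ x):.2e}")
```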

4.3. Let us consider how to solve the complete eigenvalue problem for arbitrary matrices by iterations.

We shall discuss only power methods, which have been advanced by Wilkinson, Bauer, Rutishauser, Collatz, Voevodin and by Francis, Kublanovskaya, Eberlein and many others. Until recently there have been effective eigenvalue algorithms only for symmetric matrices, for instance the Jacobi method and the bisection method. It is hoped that the discovery of the QR-algorithm and the generalised method of rotation will allow us to deal with arbitrary matrices. At present different modifications of the QR-algorithm are being developed most intensively. These are widely used in science and engineering.
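A minimal sketch of the basic, unshifted QR-algorithm (a symmetric test matrix is chosen so that the plain iteration is guaranteed to converge; practical versions add Hessenberg reduction and shifts):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A = A @ A.T + np.diag([4.0, 3.0, 2.0, 1.0])   # symmetric, distinct eigenvalues

Ak = A.copy()
for _ in range(200):
    Q, R = np.linalg.qr(Ak)   # factor ...
    Ak = R @ Q                # ... and recombine: a similarity transform of A

print("diagonal of A_k :", np.round(np.sort(np.diag(Ak)), 6))
print("true eigenvalues:", np.round(np.sort(np.linalg.eigvalsh(A)), 6))
```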

5. Optimisation of numerical algorithms.

An important goal of computational mathematics is to find the most profitable methods for the solution of problems, i.e. to optimise algorithms. One must study the problem of optimisation under given constraints by means of general mathematical theorems and estimate the minimal possible cost of solving a particular problem or a sequence of problems. Local optimisation of one isolated part of a solution does not by itself resolve the problem we are interested in. However, if one can find the best way of handling every local problem using the existing computing facilities, one is thus led to its solution. This concept of the optimisation theory has been formulated by Sobolev and Babuska, and it represents the essence of the problem sufficiently well.

Yet in many cases it is either impossible to build an optimal algorithm or the latter turns out to be very costly. Nevertheless it appears possible to build an algorithm close to an optimal one. This is the case, for example, when asymptotically optimal algorithms are constructed. It should be noted that at present the theory of asymptotic estimates is an effective tool of algorithm optimisation.

The concept of ε-entropy introduced by Kolmogorov has been very useful too. A hypothesis has been proposed that the effort spent in finding a solution is in many instances essentially associated with the ε-entropy of the set of elements on which the solution depends. Using the concept of ε-entropy one can estimate both upper and lower bounds on the number of operations needed for the solution of many computational problems.
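For reference, the standard definition runs as follows: if N_ε(X) denotes the minimal number of sets of diameter at most 2ε needed to cover a compact set X, then the ε-entropy of X is H_ε(X) = log₂ N_ε(X); the hypothesis above relates the cost of computing a solution to within ε to this quantity.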

Sobolev, Bakhvalov, Lebedev and others have studied a number of algorithms for the problems of mathematical physics using finite-difference methods.

A considerable contribution to the theory of computation and its optimisation has been made by Babuska, Dahlquist, Henrici et al. Babuska, Vitasek and Prager have introduced the notion of a ±_K-sequence of computational processes. This implies that if the length of a sequence of operations in the problems of mathematical analysis is increased, the accuracy of computation exponentially increases.

The idea has been expressed of introducing operations on intervals. This trend, named interval arithmetic, can be applied to the study of approximation errors in mathematical analysis and to the analysis of round-off errors.
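A minimal sketch of the idea (a toy interval type; a serious implementation would also round lower endpoints down and upper endpoints up so that the enclosure survives floating-point rounding):

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

# Loose enclosures of pi and e, combined; the result is an interval that is
# guaranteed to contain pi*e - pi, however the intermediate errors combine.
pi = Interval(3.1415, 3.1416)
e = Interval(2.7182, 2.7183)
print(pi * e - pi)
```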

6. Trends in computational mathematics.

6.1. The progress in computational technology has had an important influence on many branches of computer science, which show a tendency towards integration. The relations between software, the methods of computational and applied mathematics, and the theory of programming and languages have become so close that the choice of a strategy for the solution of particular problems is now of paramount importance. Though optimisation of the individual components of a computational process is as before a fundamental factor of the theory, attention is becoming more and more concentrated on optimisation of the whole process. Optimisation of computation is obviously one of the central goals of computational mathematics, and it stimulates the exploration of new algorithms and new ways of their computer implementation.

6.2. The second trend is connected with the solution of classes of problems and with algorithm standardisation. A large amount of computer-processed information must be systematised and put in order. The valuable experience which we have in the solution of the problems of science and engineering allows us in many cases to set as an ultimate goal the creation of universal methods suitable for handling more or less wide classes of mathematical problems of the same type. At present care must be taken to save the efforts of society on the creation of numerous individual algorithms for individual and rare problems. It seems that a rational strategy for the solution of various rare problems is to construct universal algorithms which adjust themselves to optimal operating conditions because they use a posteriori information. A rational strategy for the solution of frequently repeated problems is a careful implementation of specific algorithms.

These two approaches combined will help to save the social resources spent on the creation of effective software. First steps have been made in the theory of universal algorithms which are self-adjusting to optimal operating conditions, and a course of further research has been outlined.

6.3. Software is becoming a materialisation of society's intellect. The process of the mathematisation of the sciences has given rise to an active development of methods to simulate the phenomena occurring in nature and society. High-speed, large-memory computers of the new generations can store immediately available valuable information, and multi-access computers allow new forms of man-machine interaction using a conversational mode of operation. Therefore standardisation of software in general, and of computational algorithms in particular, is an urgent problem of scientific and technological progress.

6.4. The problem of software has stimulated the formulation of new problems in computational mathematics, such as the construction of grids for complicated domains. For two-dimensional domains this problem is close to an effective solution, while for three- and higher-dimensional domains it is just being stated. This problem is closely connected with the construction of algorithms for large problems of high accuracy by difference, variational and other techniques, or maybe by a combination of different methods. The solution of problems with non-linear monotone operators is especially important. The corresponding theory is at present being intensively developed.

6.5. The success achieved in analytic transformations on a computer brings practically within reach the solution of mathematical physics problems by the well-known techniques of the analysis of continuous functions. As the supply of means for analytic computations grows, these methods will penetrate more and more into software. The success achieved in analytic transformations on computers will give computer science new possibilities which should be taken into account today.

Finally I should like to note that the further development of computational mathematics depends on the standard of research in the fundamental branches of mathematics, the importance of the latter growing essentially in this age of great technological progress. Only a harmonious combination of research in all branches of mathematics will provide the necessary and favourable conditions for the self-development of mathematics and its applications.

Last Updated April 2020