# László Lovász's Kyoto Prize lecture

László Lovász was awarded the 2010 Kyoto Prize for "Outstanding Contributions to Mathematical Sciences Based on Discrete Optimisation Algorithms." He was presented with the award, which consists of a diploma, a Kyoto Prize medal of twenty-karat gold, and a cash prize totalling 50 million yen (approximately US$550,000), at a ceremony held at the Kyoto International Conference Centre on 10 November 2010. On the following day he delivered the lecture *Fledgling Subject Bridging Classical Theory and New Applications*. We give a version of his lecture below.

### Fledgling Subject Bridging Classical Theory and New Applications

by

László Lovász

It is a great pleasure and honour for me to receive the Kyoto Prize. Indeed, when looking at the list of past laureates I feel dwarfed by these great scientists, and I cannot believe that I am included among them. I am also grateful for this opportunity to be able to speak about my way to mathematical research, and about how I see the development of the fields of mathematics that have interested me most.

**1. Family**

I grew up in Budapest, which is a large city by Hungarian standards (population 2 million). At the time of my childhood there were still many remnants of the war: ruined buildings, bullet holes in the walls, and some of the bridges across the Danube still in ruins. There was much less traffic than nowadays: we could go and play soccer in some streets, where a car came along only every 10 minutes or so.

Budapest was on the wrong side of the iron curtain. I have some dramatic memories about the revolution in 1956 (I was 8 years old), but I cannot say that my childhood was bad. My father was a surgeon, and we lived in relative comfort. I have a brother two years younger than me, and we had a good time playing together.

I met my wife Kati when we were in high school; we got married when we were university students. I feel that we have been real partners in life; she is an active mathematician, and an enthusiastic teacher. She has also made a lot of sacrifices to help my research in many possible ways, moving with me to America and back, bearing the burden of the family, proofreading my books and papers, and, above all, understanding the plans, goals, and concerns in my professional life.

We have four children and (as of now) five grandchildren, who give us a lot of pleasure. My children were all very good at maths, but their interests took the three girls in other directions: our oldest daughter to literature, the other two to economics. Our son, however, is a university student studying mathematics.


**2. Meeting with mathematical research**

I was introduced to mathematical research while still in high school. I think my father was hoping that I would follow him into the medical profession, but when I was finishing elementary school (at the age of 14), Director Bellay of the school came to see my parents, told them about a specialised high school for mathematically talented children which was just starting, and convinced them to enrol me in this school. This high school, called Fazekas, was not really well known then (it is now quite famous), but eventually my parents agreed. I still feel that this was the greatest moment in my life. In this high school I had excellent teachers and met many outstanding students (including my wife), who are still among my best friends.

Hungary has a long tradition of nurturing mathematical talent. The first mathematical monthly journal for high school students started in 1894 (it is still alive, and has become a very important institution in maths education), and national mathematical competitions also started in 1894. Our maths teacher, Mr Rábai, decided that his best students in the class should get more than school maths (even though school maths in our school was quite advanced), and introduced us to some of the best research mathematicians at the university and the research institute, who were happy to meet and teach talented high school students (travel restrictions behind the iron curtain may in fact have had a beneficial effect in this respect). Besides the very advanced classes in the school, I learned a lot from many excellent research mathematicians. One of them was Paul Erdös. Probably many of you have heard of him: not only was he one of the greatest mathematicians of the 20th century, but he was also a very special person, and there are many legends about him. He did not want to settle down, did not want to own property (so that it would not distract him from doing mathematics), and was travelling all the time, staying in hotels, guest rooms, or with his numerous friends. He was always surrounded by a big group of people, and shared with them all his new problems, his research ideas, and the new results of others he learned about during his travels. He was great at raising new research problems in mathematics and at communicating new, unexplored research problems all over the world.

He liked very much to talk with children and young people, and he visited our school several times during the 4 years I spent there. He gave the students unsolved problems, which were elementary in the sense that even a good high school student could understand them, but this did not mean that they were easy or uninteresting. One or two of these I could solve, and so started my lifelong commitment to mathematical research.

I also want to say a few words about Tibor Gallai, from whom I also learned a lot in high school and later, when he was my thesis advisor (in an unofficial way; we did not necessarily have official thesis advisors at that time). As a person he was just the opposite of Erdös: a reserved, very modest man, who preferred a long quiet conversation one-on-one. When I was a student, I visited him regularly and learned a lot about mathematics, in particular about graph theory and his ideas on the directions in which it was developing.

One event which made a great impression on me happened when I wrote up my first paper, a rather complicated graph-theoretic argument. Gallai helped me a lot; I could even say that, based on my inexperienced notes, he was the one who wrote the paper. In the end, not only did he refuse to be a co-author of the paper; he even objected to being thanked at the end of it. He said that as a teacher, this was his duty. I don't know if I have always managed to follow his moral standards, but I have tried.

When I finished high school, it was natural to enrol as a mathematics student at the Eötvös Loránd University in Budapest. I got an excellent education there, learning both the foundations of most branches of mathematics, and also the spirit of mathematical research.


**3. About graphs**

The area of mathematics that inspired most of my work is graph theory. This studies structures that are mathematically very simple: they consist of some elements called *nodes* or *vertices*, and a relation between them: some pairs of nodes are *adjacent*, or *connected by an edge*, and some pairs are not. Such a structure is called a *graph* or *network*. For example, if we want to describe the acquaintanceship structure of a society, we can represent every person by a node, and connect two nodes by an edge if these two persons know each other. We usually represent the nodes by points in the plane, and the edges by curves connecting them (not necessarily straight). This way we get a drawing like the collaboration graph or the Petersen graph. A graph is one of the simplest possible structures, and it is clear that graphs are needed for the representation or modelling of a great variety of structures in the sciences, economics, engineering, logistics, and many other areas.

Graph theory is not a new subject. The first result in graph theory, the Königsberg Bridges Problem, was solved by Euler (one of the greatest mathematicians of all time) in 1736. However, it was not until the 1960s that graph theory started to become a large and active field. Paul Erdös was especially instrumental, asking many simple but nontrivial and inspiring questions and thereby leading the subject to more and more depth and maturity.

Graph theory is usually considered to be one branch of "discrete mathematics." Note the spelling: we don't mean "discreet" (in the sense of respecting privacy) but rather that we consider structures which consist of well-separated objects, usually finite in number (in contrast, the real line consists of an infinite number of points, which can be arbitrarily close to each other). Graph theory and the areas in which generalisations or abstractions of graphs are studied constitute a substantial part of discrete mathematics, and I will not make a distinction between them in this talk.

I was introduced to graph theory, and through it to mathematical research, quite early in my life. My high school classmate and friend Lajos Pósa had met Paul Erdös when he was quite young. Erdös gave him graph-theoretic problems to solve, and he was successful: by the time he started high school, he had written a joint research paper with Paul Erdös and some more papers by himself. Later, when we met in high school, he told me about some further Erdös problems, and since I was able to solve one or two, he introduced me to Erdös. Erdös gave me unsolved problems, and then I thought of some of my own, and so started my lifelong commitment to graph theory.

I must add that in those days, graph theory was quite isolated from "mainstream" mathematics, and I was often advised by older mathematicians to do something more serious instead. But for me, the novelty of the subject and its potential applications were fascinating.

Things have changed since then. Graph theory has become quite important in the last decades, through its many applications, but also through its tighter and tighter links with other parts of mathematics. I will mention some of these developments below.
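To make the definition concrete, here is a minimal sketch of how a graph can be stored in a program, using the Petersen graph mentioned above. The representation (a dictionary mapping each node to the set of its neighbours) is just one common convention, not anything from the lecture.

```python
def make_graph(edges):
    """Build an undirected graph (dict of adjacency sets) from an edge list."""
    g = {}
    for u, v in edges:
        g.setdefault(u, set()).add(v)
        g.setdefault(v, set()).add(u)
    return g

# Petersen graph: an outer 5-cycle (nodes 0..4), an inner 5-cycle taken
# with "step 2" (nodes 5..9), and five spokes joining the two.
petersen_edges = (
    [(i, (i + 1) % 5) for i in range(5)]            # outer cycle
    + [(5 + i, 5 + (i + 2) % 5) for i in range(5)]  # inner pentagram
    + [(i, i + 5) for i in range(5)]                # spokes
)
petersen = make_graph(petersen_edges)

print(len(petersen))                                  # 10 nodes
print(sum(len(nb) for nb in petersen.values()) // 2)  # 15 edges
print({len(nb) for nb in petersen.values()})          # every node has degree 3
```

The same few lines serve for an acquaintanceship network: replace the numeric nodes by people and the edges by "knows each other" pairs.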


**4. Meeting with applications**

**4-1. Combinatorial optimisation**

In the last years of my studies, I got interested in the strong connection between operations research and graph theory, and I wrote my thesis under the guidance of Gallai on the problem of factors of graphs (today we call them matchings). The basic question is: Given a graph, can we pair up its nodes so that the two nodes in each pair are adjacent? More generally, what is the maximum number of edges that are pairwise disjoint (without a common endpoint)? For bipartite graphs (graphs in which the nodes are divided into two classes, so that each edge connects two nodes in different classes) the answer was given by König in 1931: the maximum number of pairwise disjoint edges is equal to the minimum number of nodes covering all edges. Tutte and Berge extended the characterisation to all graphs (the condition is beautiful but a bit too complicated to state here). Many other matching problems remained unsolved, however (allowing me to write a thesis by solving some), and matching theory is still an important source of difficult, but not hopelessly difficult, graph-theoretic problems.
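As an illustration of König's theorem, the following sketch computes a maximum matching in a bipartite graph by the augmenting-path method (the approach attributed to Kuhn later in the text) and checks, by brute force on a tiny example, that its size equals the minimum number of nodes covering all edges. The example graph and all names are invented for illustration.

```python
from itertools import combinations

def max_bipartite_matching(left, edges):
    """Maximum matching in a bipartite graph via augmenting paths."""
    adj = {u: [] for u in left}
    for u, v in edges:
        adj[u].append(v)
    match = {}  # right node -> left node it is matched to

    def try_augment(u, seen):
        # Try to match u, possibly re-routing previously matched nodes.
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or try_augment(match[v], seen):
                match[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in left)

def min_vertex_cover_size(nodes, edges):
    """Brute-force smallest node set touching every edge (tiny graphs only)."""
    for k in range(len(nodes) + 1):
        for cover in combinations(nodes, k):
            c = set(cover)
            if all(u in c or v in c for u, v in edges):
                return k

left, right = ["a", "b", "c"], ["x", "y", "z"]
edges = [("a", "x"), ("a", "y"), ("b", "x"), ("c", "z")]
m = max_bipartite_matching(left, edges)
c = min_vertex_cover_size(left + right, edges)
print(m, c)  # König's theorem: the two numbers coincide
```

On this example both numbers come out equal, as König's theorem guarantees for every bipartite graph.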

Shortly after I defended my thesis, I solved a problem which was called the Weak Perfect Graph Conjecture of Berge, and the method of solution turned out to have interesting consequences in integer programming and through this, to polyhedra. I have found repeatedly that this connection between graph theory, optimisation, and geometry is most fruitful.

There are many other graph-theoretical optimisation problems, some of which, like the Maximum Flow Problem or the Traveling Salesman Problem, are very important in practice, and broadly known. These problems (and also the perfect graph problem, which I don't describe in detail) are examples of optimisation problems quite different from traditional optimisation problems.

In a typical optimisation problem in analysis, we want to find the minimum or maximum of a "smooth" (differentiable) function, and one learns as an important application of differential calculus how to solve this by finding the points where the derivative is 0. In discrete optimisation the situation is quite different: we want to optimise functions that are defined on a finite, but large and complicated set (like the set of all matchings in a graph, where the function is the number of edges in the matching). This objective function has no derivative, and the classical methods of analysis are useless.

The theorem of König described above characterises the maximum matching, and it is very useful in theoretical considerations, but it does not tell us how to compute this optimum (such an algorithm was designed much later, by Kuhn in the bipartite case and by Edmonds in the general case). For other problems, like the Traveling Salesman Problem, not even a good characterisation similar to König's Theorem is possible. But the problems are there, arising from practice, and we have to solve them as well as we can.

There are several methods to attack such problems; perhaps the most successful one goes through linear programming, which can be thought of as the art of solving systems of linear inequalities. Solving systems of linear equations is a basic undergraduate level skill. Solving systems of linear inequalities is quite a bit more complicated, but it is also possible. It is interesting to note that this algebraic problem can be translated to geometry, by constructing a convex polyhedron (in a high dimensional space) and reducing the optimisation problem to finding the highest point of this polyhedron.
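The geometric picture in this paragraph can be sketched in a few lines for the two-variable case: the optimum of a linear program is attained at a vertex of the polygon, so enumerating the intersection points of pairs of constraint boundaries and keeping the feasible ones already finds the "highest point." This is a toy illustration under an invented example, not how real solvers work.

```python
from itertools import combinations

# maximize 3x + 2y subject to:
#   x + y <= 4,  x <= 3,  y <= 2,  x >= 0,  y >= 0
# (the last two written as -x <= 0 and -y <= 0)
A = [(1, 1), (1, 0), (0, 1), (-1, 0), (0, -1)]
b = [4, 3, 2, 0, 0]
c = (3, 2)

def vertices(A, b, eps=1e-9):
    """Intersect every pair of constraint boundaries; keep feasible points."""
    for (a1, b1), (a2, b2) in combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < eps:
            continue  # parallel boundaries never meet in a vertex
        x = (b1 * a2[1] - b2 * a1[1]) / det  # Cramer's rule
        y = (a1[0] * b2 - a2[0] * b1) / det
        if all(ai[0] * x + ai[1] * y <= bi + eps for ai, bi in zip(A, b)):
            yield (x, y)

best = max(vertices(A, b), key=lambda p: c[0] * p[0] + c[1] * p[1])
print(best)  # the vertex of the polygon that is "highest" in direction (3, 2)
```

In high dimensions the number of vertices explodes, which is exactly why serious methods (the simplex method, the ellipsoid method mentioned later) are needed instead of enumeration.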

Combinatorial optimisation problems can be translated into linear programming, even though the systems of linear inequalities one gets can be very large and complicated, sometimes forbiddingly so (like in the case of the Traveling Salesman Problem). But quite often one gets very elegant connections between graph theory and linear programming this way. I got fascinated by these connections, and spent a lot of time trying to understand them in general. Eventually I wrote a book with Martin Grötschel and Lex Schrijver about geometric methods in combinatorial optimisation; but I will come back to this later.

**4-2.**

Coming back to the early 1970s, when I was working on matching theory, many of us tried to obtain an analogue of Tutte's Theorem (characterising when a graph has a perfect matching) for Hamilton cycles: when does a graph contain a cycle passing through every node exactly once?

The problem is quite similar to the matching problem, and my advisor Tibor Gallai and many of us wondered why it is so much more difficult. This period (around 1970) was also the time of the fast development of computer science, in particular of the theory of algorithms and their complexity. In 1972/73, I spent a year in the US and learned about the newly developed theory of polynomial-time algorithms and NP-completeness. Polynomial-time problems were considered "easy," or at least efficiently solvable.

NP-complete problems are hard in the sense that every other problem in a rather wide class of problems can be reduced to them. It is widely expected that this is a real distinction, and NP-complete problems cannot be solved efficiently, but no mathematical proof of this assertion exists. This fundamental problem of complexity theory, usually stated as "P=NP?," was included in 2000 among the 7 most important open problems in mathematics. The theory of complexity of algorithms thrilled me because it gave an explanation of the difference between the matching problem and the Hamilton cycle problem: one of them was in P, the other was NP-complete!
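The contrast can be felt in code: the only obvious algorithm for the Hamilton cycle problem tries every ordering of the nodes, which is hopeless beyond small n, while matching has efficient algorithms. A minimal brute-force sketch (the example graphs are invented):

```python
from itertools import permutations

def has_hamilton_cycle(n, edges):
    """Brute force: try every cyclic ordering of the nodes -- (n-1)! candidates.
    The problem is NP-complete; no polynomial-time algorithm is known."""
    adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    for perm in permutations(range(1, n)):
        cycle = (0,) + perm  # fix node 0 first to avoid counting rotations
        if all((cycle[i], cycle[(i + 1) % n]) in adj for i in range(n)):
            return True
    return False

# A 5-cycle has a Hamilton cycle; remove one edge and it no longer does.
cycle5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(has_hamilton_cycle(5, cycle5))       # True
print(has_hamilton_cycle(5, cycle5[:-1]))  # False
```

Already for n = 20 this search would examine about 10^17 orderings, whereas the matching algorithms mentioned above run in time polynomial in n.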

When I returned to Hungary, I met a friend of mine, Péter Gács, who had spent the year in Moscow. Cutting into each other's words, we began to explain the great ideas we had learned about: Leonid Levin's work in Moscow, and the work of Cook and Karp in the US. As it turned out, it was an independent development of the same theory. (For about two weeks we thought we had a proof of P≠NP. Nowadays we would be more suspicious of our own too-simple ideas for a famous problem ...)

Graph theory became one of the most important areas of the mathematical foundations of computer science. We have seen that graph theoretic problems motivated the "P=NP?" problem, and many more of the most interesting questions in the development of complexity theory.

There is also an important connection pointing the other way: to describe in mathematical terms the process of a complicated computation, one can use a graph. Steps in the computation are represented by nodes (often called "gates" in this context), and an edge indicates that the output of one step is an input of the other. We can assume that these outputs are just single bits ("TRUE" or "FALSE"), and the gates themselves can be very simple. The most natural version is to use AND gates (which output TRUE if both of their inputs are TRUE), OR gates (which output TRUE if at least one of their inputs is TRUE) and NOT gates (which have only one input bit, and output its negation). All the complexity of the computation goes into the structure of this graph (called a *Boolean circuit*).
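A small sketch of such a network: the circuit below is stored as a graph whose nodes are gates, exactly as described, and is evaluated recursively. The particular gates and their names are invented for illustration.

```python
# Each gate names its operation and the gates feeding into it
# (the edges of the network described in the text).
circuit = {
    "x":     ("INPUT", []),
    "y":     ("INPUT", []),
    "not_y": ("NOT", ["y"]),
    "a":     ("AND", ["x", "not_y"]),
    "out":   ("OR",  ["a", "y"]),
}

def evaluate(circuit, gate, inputs):
    """Recursively evaluate a gate; `inputs` maps input gates to True/False."""
    op, preds = circuit[gate]
    if op == "INPUT":
        return inputs[gate]
    vals = [evaluate(circuit, p, inputs) for p in preds]
    if op == "NOT":
        return not vals[0]
    if op == "AND":
        return all(vals)
    return any(vals)  # OR

for x in (False, True):
    for y in (False, True):
        print(x, y, evaluate(circuit, "out", {"x": x, "y": y}))
```

Here the circuit happens to compute x OR y in a roundabout way; the point is only that the computation is entirely encoded in the graph's structure.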

I am sorry to report that we graph theorists have not achieved much in this direction. For example, the famous "P=NP?" problem boils down to the following question. Let us fix a set of n nodes. Suppose that there is a network as above, which has an input gate for every pair of nodes and a single output gate. Assigning "TRUE" or "FALSE" values to the input gates, we specify a graph on the given nodes (TRUE meaning that the given pair of nodes is adjacent). We want the output to be TRUE if and only if the graph has a Hamilton cycle. Such a network can be designed, but the question is whether its size can be bounded by some polynomial in n, say by n^100. To understand the complexity of computations using graph theory is a BIG challenge!

But I cannot end my remarks about computer science on this pessimistic note, because pessimism was very far from the spirit of computer science in the 1970s. In fact, the last three decades of the 1900s were a very special period of co-development of graph theory and theoretical computer science.

Such a co-development of mathematics with another science happens every now and then: one can think, for example, of the development of mechanics and mathematical analysis in the 18th century. Questions and results from both areas cross-fertilised each other, and the same great scientists (Newton, Leibniz, Euler) contributed to both the mathematical and the physical sides.

I felt something similar about the co-development of discrete mathematics (graph theory) and computer science (complexity theory). The latter provided a new framework to ask questions and seek answers in graph theory, and it was most exciting to see how this confirmed some intuition (like about the difference in difficulty of the matching problem and the Hamilton cycle problem) and led to new results. In the other direction, graph theory provided the tools for understanding computations, the design of chips, computer networks, and many other questions in computer science. It was an exciting and inspiring time for a graph theorist to live in!

In fact, computer science did not stop at the boundaries of graph theory; it has penetrated and re-shaped many other, more classical parts of mathematics. For example, prime numbers have been studied by many of the best mathematicians since ancient times (these are integers greater than 1 that are not multiples of any smaller integer other than 1). A lot of deep results have accumulated about primes (as well as many open questions). The theory of prime numbers was considered one of the most "pure" areas, studied for its own internal interest. But the development of computers raised a very simple but novel question: how do you decide whether a given number is prime? Certainly, great mathematicians like Gauss, who did a lot of computation with primes, had good individual methods, but such hand calculations became too slow as soon as the numbers had more than a few digits. With computers, you could try to answer this question for much larger numbers, but for this, new methods were needed. Computer science not only provided the tools to test primality, it also provided the need: as discovered in the late 1970s, one can design important cryptographic protocols based on the efficiency of primality testing, as opposed to the (likely) difficulty of factoring an integer into primes.
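As a sketch of what efficient primality testing looks like in practice, here is the standard Miller-Rabin test (a later, well-known algorithm, not something from the lecture). With the fixed bases used below it is known to give the exact answer for all numbers up to about 3.4 × 10^14, and a very reliable probabilistic answer beyond that.

```python
def is_probable_prime(n, bases=(2, 3, 5, 7, 11, 13, 17)):
    """Miller-Rabin primality test with a fixed set of witness bases."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17):  # quick trial division by small primes
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in bases:
        x = pow(a, d, n)  # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

print(is_probable_prime(2**61 - 1))  # a Mersenne prime with 19 digits
print(is_probable_prime(2**61 + 1))
```

The test runs in time polynomial in the number of digits, which is exactly the kind of efficiency the cryptographic protocols mentioned above depend on.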

**4-3.**

Around 1980, Martin Grötschel from Bonn, Lex Schrijver from Amsterdam and I started to write a book about graph-theoretic applications of the ellipsoid method. This method was developed by Soviet scientists (Shor, Yudin, Nemirovsky and Khachiyan) to solve linear programs, and it was particularly well suited for the kind of combinatorial optimisation problems that I mentioned above. The theory fitted together very nicely (once the right framework for it had been found), but there was one annoying little gap: the method failed in a case which could be considered "degenerate" (technically, when the polyhedron had no interior point). In practice, it was easy to eliminate this difficulty in every concrete case, but the solutions had an annoying "ad hoc" element. After we ended one of our working sessions and Martin and Lex went back to Bonn and Amsterdam, I started to think about the problem, and realised that the difficulty could be eliminated if we could solve the following basic number-theoretic problem: find rational numbers with a common denominator approximating given irrational numbers. For example, √2 = 1.41421... and √3 = 1.73205... are quite well approximated by the fractions 10/7 = 1.42857... and 12/7 = 1.71428... (having the common denominator 7), and much better by 58/41 = 1.41463... and 71/41 = 1.73170...
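The example above can be reproduced by brute force: for each denominator q, round each target to the nearest fraction p/q, and keep the denominators that beat every smaller one. This little search (an invented illustration, not the lattice method discussed next) recovers exactly the fractions 10/7, 12/7 and 58/41, 71/41 from the text:

```python
import math

def best_with_denominator(alphas, q):
    """Round each alpha to the nearest p/q; return numerators and worst error."""
    ps = [round(a * q) for a in alphas]
    return ps, max(abs(a - p / q) for a, p in zip(alphas, ps))

# Denominators that approximate sqrt(2) and sqrt(3) simultaneously
# better than every smaller denominator does.
alphas = [math.sqrt(2), math.sqrt(3)]
records, best = [], float("inf")
for q in range(1, 60):
    ps, err = best_with_denominator(alphas, q)
    if err < best:
        best = err
        records.append((q, ps))
        print(q, ps)
```

The denominators 7 and 41 both show up as record-breakers; the point of the number-theoretic problem is to find such q efficiently rather than by trying them all.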

So this took me to number theory, one of the oldest and noblest branches of mathematics. There was a lot of information in the literature about the existence of such approximations, but I found no methods to compute them. One of the standard approaches to this problem, going back to Minkowski in the 19th century, was to translate the problem into finding the closest pair of points in a lattice. I will not define this exactly, but a lattice is a very regular arrangement of points in space, like the atoms of a crystal, and I needed a method to find the closest pair among them.

So I got to geometry, even older and nobler; the trouble was that one needed to solve the problem in a high dimensional space, not in our familiar 3-dimensional space. But I remembered that a few months earlier I heard a talk by Hendrik Lenstra from Amsterdam, in which he described a new algorithm for another combinatorial optimisation problem, which solved it by developing an algorithm for the very same geometric problem. His algorithm was not quite right for me (it became too slow in high dimension), but it put me on the right track, and I was able to develop an efficient algorithm for this "shortest lattice vector problem." The algorithm gave only an approximate solution, but this was good enough.
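In dimension 2 the shortest-lattice-vector problem can be solved exactly by the classical Lagrange-Gauss reduction, which makes a convenient small-scale stand-in for the approximate high-dimensional algorithm described here. The basis below is an invented example.

```python
def gauss_reduce(u, v):
    """Lagrange-Gauss reduction: return a shortest nonzero vector of the
    2-dimensional lattice spanned by the integer vectors u and v.
    (In 2 dimensions this solves exactly the problem that the lecture's
    algorithm later solved approximately in higher dimensions.)"""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]

    while True:
        if dot(u, u) > dot(v, v):
            u, v = v, u                   # keep u the shorter of the two
        m = round(dot(u, v) / dot(u, u))  # best integer multiple of u to subtract
        if m == 0:
            return u
        v = (v[0] - m * u[0], v[1] - m * u[1])

# The skewed basis (8, 9), (11, 13) spans a lattice whose shortest
# nonzero vectors have length sqrt(5).
print(gauss_reduce((8, 9), (11, 13)))  # (2, 1)
```

Each round of subtraction shortens the basis, very much like the Euclidean algorithm for integers; the difficulty in high dimension is that no such simple exact procedure is known.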

I wrote a letter to Lenstra (on paper: this was back in 1981), and a little later he wrote back that his brother Arjen Lenstra had found a much more important application for this algorithm, namely finding the irreducible factors of a polynomial. (This is an algorithmic problem similar to the problem of finding the prime factorisation of an integer. It seemed more difficult at that time, but in the end it turned out to be easier.) The three of us wrote a joint paper about this. Since then, this algorithm has found many practical applications, in particular to cryptography, where it serves as a tool to test potential cryptosystems.

I described the history of this result in detail, because it illustrates the intricate connections between various branches of mathematics, and also how a problem raised for purely aesthetic reasons may lead to practically important developments.

5. **Combinatorial optimisation**In the last years of my studies, I got interested in the strong connection between operations research and graph theory, I wrote my thesis under the guidance of Gallai on the problem of factors of graphs (today we call them matchings). The basic question is: Given a graph, can we pair up its nodes so that no two nodes in each pair± are adjacent? More generally, what is the maximum number of edges that are pairwise disjoint (without a common endpoint)? For bipartite graphs (graphs in which the nodes are divided into two classes, so that each edge connects two nodes in different classes) the answer was given by König in 1931: The maximum number of pairwise disjoint edges is equal to the maximum number of nodes covering all edges. Tutte and Berge extended the characterisation to all graphs (the condition is beautiful but a bit too complicated to state here). Many other matching problems remained unsolved, however (allowing me to write a thesis by solving some), and matching theory is still an important source of difficult, but not hopelessly difficult, graph theoretic problems.

Shortly after I defended my thesis, I solved a problem which was called the Weak Perfect Graph Conjecture of Berge, and the method of solution turned out to have interesting consequences in integer programming and through this, to polyhedra. I have found repeatedly that this connection between graph theory, optimisation, and geometry is most fruitful.

There are many other graph-theoretical optimisation problems, some of which, like the Maximum Flow Problem or the Traveling Salesman Problem, are very important in practice, and broadly known. These problems (and also the perfect graph problem, which I don't describe in detail) are examples of optimisation problems quite different from traditional optimisation problems.

In a typical optimisation problem in analysis, we want to find the minimum or maximum of a "smooth" (differentiable) function, and one learns as an important application of differential calculus how to solve this by finding the points where the derivative is 0. In discrete optimisation the situation is quite different: we want to optimise functions that are defined on a finite, but large and complicated set (like the set of all matchings in a graph, where the function is the number of edges in the matching). This objective function has no derivative, and the classical methods of analysis are useless.

The theorem of König described above characterises the maximum matching, and it is very useful in theoretical considerations, but it does not tell us how to compute this optimum (such an algorithm was designed much later by Kuhn in the bipartite case and by Edmonds in the general case). For other problems like the Traveling Salesman Problem, not even a good characterisation similar to König's Theorem is possible. But the problems are there, arising from practice, and we have to solve them as well as we can.

There are several methods to attack such problems; perhaps the most successful one goes through linear programming, which can be thought of as the art of solving systems of linear inequalities. Solving systems of linear equations is a basic undergraduate level skill. Solving systems of linear inequalities is quite a bit more complicated, but it is also possible. It is interesting to note that this algebraic problem can be translated to geometry, by constructing a convex polyhedron (in a high dimensional space) and reducing the optimisation problem to finding the highest point of this polyhedron.
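The geometric picture behind linear programming can be illustrated in the plane: the optimum of a linear objective over a polygon is attained at a vertex, so for tiny instances one can simply enumerate the candidate vertices. A toy Python sketch (not a practical LP solver; the instance is made up for illustration):

```python
from itertools import combinations

def lp2d(A, b, c):
    """Maximize c.x subject to A x <= b in the plane, by checking every
    intersection of two constraint lines and keeping the best feasible
    one -- a 'highest point of the polyhedron' search in miniature."""
    best = None
    for i, j in combinations(range(len(A)), 2):
        (a11, a12), (a21, a22) = A[i], A[j]
        det = a11 * a22 - a12 * a21
        if det == 0:
            continue  # parallel constraint lines: no vertex here
        x = (b[i] * a22 - a12 * b[j]) / det
        y = (a11 * b[j] - b[i] * a21) / det
        if all(ai * x + aj * y <= bi + 1e-9 for (ai, aj), bi in zip(A, b)):
            val = c[0] * x + c[1] * y
            if best is None or val > best[0]:
                best = (val, x, y)
    return best

# x + y <= 4, x <= 3, y <= 3, x >= 0, y >= 0; maximize x + 2y
A = [(1, 1), (1, 0), (0, 1), (-1, 0), (0, -1)]
b = [4, 3, 3, 0, 0]
print(lp2d(A, b, (1, 2)))  # optimum 7 at the vertex (1, 3)
```

In two dimensions this enumeration is trivial; the whole difficulty of linear programming is that in high dimension the polyhedron has far too many vertices to list.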

Combinatorial optimisation problems can be translated into linear programming, even though the systems of linear inequalities one gets can be very large and complicated, sometimes forbiddingly so (like in the case of the Traveling Salesman Problem). But quite often one gets very elegant connections between graph theory and linear programming this way. I got fascinated by these connections, and spent a lot of time trying to understand them in general. Eventually I wrote a book with Martin Grötschel and Lex Schrijver about geometric methods in combinatorial optimisation; but I will come back to this later.

6.

**Computer science**

Coming back to the early 1970's, when I was working on matching theory, many of us tried to obtain an analogue of Tutte's Theorem (characterising when a graph has a perfect matching) for Hamilton cycles:

*given a graph, is there a cycle in it going through each node exactly once?*

The problem is quite similar to the matching problem, and my advisor Tibor Gallai and many of us wondered why it is so much more difficult. This time (around 1970) was also the time of the fast development of computer science, in particular of the theory of algorithms and their complexity. In 1972/73, I spent a year in the US, and learned about the newly developed theory of polynomial time algorithms and NP-completeness. Polynomial time problems were considered "easy," or at least efficiently solvable.
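The contrast can be made concrete: for Hamilton cycles, nothing essentially better than exhaustive search is known. A brute-force Python sketch (illustrative only; the example graphs are made up):

```python
from itertools import permutations

def has_hamilton_cycle(n, edges):
    """Brute-force Hamilton cycle test: try every ordering of the nodes.
    Exponential time -- exactly the behaviour that the theory of
    NP-completeness suggests cannot be avoided in general."""
    adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    for perm in permutations(range(1, n)):
        cycle = (0,) + perm  # fix node 0 as the starting point
        if all((cycle[i], cycle[(i + 1) % n]) in adj for i in range(n)):
            return True
    return False

square = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-cycle: Hamiltonian
star = [(0, 1), (0, 2), (0, 3)]            # a star: no cycle at all
print(has_hamilton_cycle(4, square), has_hamilton_cycle(4, star))  # True False
```

For matchings, by contrast, polynomial-time algorithms exist; this asymmetry is precisely what complexity theory explains.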

NP-complete problems are hard in the sense that every other problem in a rather wide class of problems can be reduced to them. It is widely expected that this is a real distinction, and NP-complete problems cannot be solved efficiently, but no mathematical proof of this assertion exists. This fundamental problem of complexity theory, usually stated as "P=NP?," was included in 2000 among the 7 most important open problems in mathematics. The theory of complexity of algorithms thrilled me because it gave an explanation of the difference between the matching problem and the Hamilton cycle problem: one of them was in P, the other was NP-complete!

When I returned to Hungary, I met a friend of mine, Péter Gács, who had spent the year in Moscow. Cutting into each other's words, we began to explain the great ideas we had learned: he about Leonid Levin's work in Moscow, and I about the work of Cook and Karp in the US. As it turned out, these were independent developments of the same theory. (For about two weeks we thought we had a proof of P≠NP. Nowadays we would be more suspicious of our own too-simple ideas about a famous problem ...)

Graph theory became one of the most important areas of the mathematical foundations of computer science. We have seen that graph theoretic problems motivated the "P=NP?" problem, and many more of the most interesting questions in the development of complexity theory.

There is also an important connection pointing the other way: to describe in mathematical terms the process of a complicated computation, one can use a graph. Steps in the computation are represented by nodes (often called "gates" in this context), and an edge indicates that the output of one step is an input of the other. We can assume that these outputs are just single bits ("TRUE" or "FALSE"), and the gates themselves can be very simple. The most natural version is to use AND gates (which output TRUE if both of their inputs are TRUE), OR gates (which output TRUE if at least one of their inputs is TRUE) and NOT gates (which have only one input bit, and output its negation). All the complexity of the computation goes into the structure of this graph (called a *Boolean circuit*).

I am sorry to report that we, graph theorists, have not achieved much in this direction. For example, the famous "P=NP?" problem boils down to the following question: let us fix a set of n nodes. Suppose that there is a network as above, which has an input gate for every pair of nodes, and a single output gate. Assigning "TRUE" or "FALSE" values to the input gates, we specify a graph on the given nodes (TRUE meaning that the given nodes are adjacent). We want the output to be TRUE if and only if the graph has a Hamilton cycle. Such a network can be designed, but the question is whether its size can be bounded by some polynomial in n, say by n^100. To understand the complexity of computations using graph theory is a BIG challenge!
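As a toy illustration of such a network, here is a hypothetical encoding of a Boolean circuit and an evaluator for it (the list-of-gates representation is invented for this sketch; any directed acyclic graph representation would do):

```python
def evaluate(circuit, inputs):
    """Evaluate a Boolean circuit given as a list of gates in topological
    order. Each gate is ("INPUT",), ("NOT", i), ("AND", i, j) or
    ("OR", i, j), where i and j index earlier gates."""
    values, it = [], iter(inputs)
    for gate in circuit:
        op = gate[0]
        if op == "INPUT":
            values.append(next(it))
        elif op == "NOT":
            values.append(not values[gate[1]])
        elif op == "AND":
            values.append(values[gate[1]] and values[gate[2]])
        else:  # "OR"
            values.append(values[gate[1]] or values[gate[2]])
    return values[-1]  # the single output gate

# XOR built from AND/OR/NOT gates: (a OR b) AND NOT (a AND b)
xor = [("INPUT",), ("INPUT",), ("OR", 0, 1), ("AND", 0, 1),
       ("NOT", 3), ("AND", 2, 4)]
print([evaluate(xor, [a, b]) for a in (False, True) for b in (False, True)])
# [False, True, True, False]
```

Evaluating a given circuit is easy; the hard question discussed above is whether a *small* circuit exists for a given task.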

But I cannot end my remarks about computer science on this pessimistic note, because pessimism was very far from the spirit of computer science in the 1970's. In fact, the last three decades of the 1900s were a very special period of co-development of graph theory and theoretical computer science.

Such a co-development of mathematics with another science happens every now and then: one can think, for example, of the development of mechanics and mathematical analysis in the 18th century. Questions and results from both areas cross-fertilised each other, and the same great scientists (Newton, Leibniz, Euler) contributed to both the mathematical and the physical sides.

I felt something similar about the co-development of discrete mathematics (graph theory) and computer science (complexity theory). The latter provided a new framework to ask questions and seek answers in graph theory, and it was most exciting to see how this confirmed some intuition (like about the difference in difficulty of the matching problem and the Hamilton cycle problem) and led to new results. In the other direction, graph theory provided the tools for understanding computations, the design of chips, computer networks, and many other questions in computer science. It was an exciting and inspiring time for a graph theorist to live in!

In fact, computer science did not stop at the boundaries of graph theory; it has penetrated and re-shaped many other, more classical parts of mathematics. For example, prime numbers have been studied by many of the best mathematicians since ancient times (these are positive integer numbers that are not multiples of any smaller integer other than 1). A lot of deep results have been accumulated about primes (as well as many open questions). The theory of prime numbers was considered one of the most "pure" areas, studied for its own internal interest. But the development of computers raised a very simple but novel question: how do you decide whether a given number is prime? Certainly, great mathematicians like Gauss, who did a lot of computation with primes, had good individual methods, but these hand calculations became too slow as soon as the numbers had more than a few digits. With computers, you could try to answer this question for much larger numbers, but for this, new methods were needed. Computer science not only provided the tools to test primality, it also provided the need: as discovered in the late 1970's, one can design important cryptographic protocols based on the efficiency of primality testing, as opposed to the (likely) difficulty of factoring an integer into primes.
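The methods that eventually emerged for primality testing are randomized; the sketch below uses the Miller–Rabin test (a standard algorithm, shown here only to illustrate how efficient such tests are; it is not the method Gauss used):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin primality test: fast even for numbers with
    hundreds of digits, unlike trial division."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 = 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)  # modular exponentiation: cheap even for huge n
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

print(is_probable_prime(2**61 - 1), is_probable_prime(2**61 + 1))  # True False
```

The test can err only by declaring a composite number "probably prime", and the error probability shrinks exponentially with the number of rounds.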

7.

**From optimisation to cryptography**

Around 1980, Martin Grötschel from Bonn, Lex Schrijver from Amsterdam and I started to write a book about graph-theoretic applications of the ellipsoid method. This method was developed by Soviet scientists (Shor, Yudin, Nemirovsky and Khachiyan) to solve linear programs, and it was particularly well suited for the kind of combinatorial optimisation problems that I mentioned above. The theory fitted together very nicely (once the right framework for it had been found), but there was one annoying little gap: the method failed in a case which could be considered "degenerate" (technically, when the polyhedron had no interior point). In practice, it was easy to eliminate this difficulty in every concrete case, but the solutions had an annoying "ad hoc" element. After we ended one of our working sessions and Martin and Lex went back to Bonn and Amsterdam, I started to think about the problem, and realised that the difficulty could be eliminated if we could solve a basic number-theoretic problem: find rational numbers with a common denominator approximating given irrational numbers. For example, √2 = 1.41421... and √3 = 1.73205... are quite well approximated by the fractions 10/7 = 1.42857... and 12/7 = 1.71428... (having a common denominator of 7), and much better by 58/41 = 1.41463... and 71/41 = 1.73170...
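These particular denominators can be checked by brute force. A Python sketch (exhaustive search over denominators is, of course, exactly what the lattice algorithm described below avoids; the function name is invented):

```python
from math import sqrt

def best_denominators(alphas, q_max):
    """For each denominator q, round q*alpha to the nearest integer for
    every target alpha and measure the worst of the approximation
    errors; record each q that sets a new record."""
    best_err, winners = float("inf"), []
    for q in range(1, q_max + 1):
        err = max(abs(round(q * a) / q - a) for a in alphas)
        if err < best_err:
            best_err = err
            winners.append((q, [round(q * a) for a in alphas]))
    return winners

for q, numerators in best_denominators([sqrt(2), sqrt(3)], 50):
    print(q, numerators)
# the denominators 7 (numerators 10, 12) and 41 (numerators 58, 71)
# from the text both appear among the record-setters
```

The point of the algorithmic problem is to find such record-setting denominators without searching, since good denominators for several irrationals can be astronomically large.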

So this took me to number theory, one of the oldest and noblest branches of mathematics. There was a lot of information in the literature about the existence of such approximations, but I found no methods to compute them. One of the standard approaches to this problem, going back to Minkowski in the 19th century, was to translate the problem into finding the closest pair of points in a lattice. I will not define this exactly, but a lattice is a very regular arrangement of points in space, like the atoms of a crystal, and I needed a method to find the closest pair among them.

So I got to geometry, even older and nobler; the trouble was that one needed to solve the problem in a high dimensional space, not in our familiar 3-dimensional space. But I remembered that a few months earlier I heard a talk by Hendrik Lenstra from Amsterdam, in which he described a new algorithm for another combinatorial optimisation problem, which solved it by developing an algorithm for the very same geometric problem. His algorithm was not quite right for me (it became too slow in high dimension), but it put me on the right track, and I was able to develop an efficient algorithm for this "shortest lattice vector problem." The algorithm gave only an approximate solution, but this was good enough.
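In the plane, the shortest-vector problem can be solved exactly by the classical Gauss–Lagrange reduction; a Python sketch (the starting basis is made up for illustration):

```python
def gauss_reduce(u, v):
    """Gauss-Lagrange reduction in the plane: repeatedly subtract the
    best integer multiple of the shorter basis vector from the longer
    one. The first vector of the result is a shortest nonzero lattice
    vector. (In high dimensions only approximate versions, such as the
    LLL algorithm mentioned below, are known to run efficiently.)"""
    norm2 = lambda w: w[0] * w[0] + w[1] * w[1]
    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        # nearest integer to the projection coefficient <v,u>/<u,u>
        m = round((v[0] * u[0] + v[1] * u[1]) / norm2(u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if norm2(v) >= norm2(u):
            return u, v
        u, v = v, u

# the long basis vectors (66, 13) and (81, 16) hide a unit vector
print(gauss_reduce((66, 13), (81, 16)))  # ((0, 1), (-3, 0))
```

Each step is a change of basis of the same lattice, so the short vectors found really do belong to the original lattice.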

I wrote a letter to Lenstra (on paper: this was back in 1981), and a little later he wrote back that his brother Arjen Lenstra had found a much more important application for this algorithm, namely finding the irreducible factors of a polynomial. (This is an algorithmic problem similar to the problem of finding the prime factorisation of an integer. It seemed more difficult at that time, but in the end it turned out to be easier.) The three of us wrote a joint paper about this. Since then, this algorithm has found many practical applications, in particular to cryptography, where it serves as a tool to test potential cryptosystems.

I described the history of this result in detail, because it illustrates the intricate connections between various branches of mathematics, and also how a problem raised for purely aesthetic reasons may lead to practically important developments.

8.

**Tools from classical mathematics**

When I was a student, mathematics seemed to move in the direction of fragmentation. Areas which were considered the "core" of our science developed their abstract paradigms and (at least it seemed so to me) only appreciated research which fit into them. Relatively new branches, like probability or graph theory, seemed to become separated from each other and from the classical branches. Other division lines ran between pure and applied mathematics, and between discrete and continuous mathematics. (I am happy to report that this tendency seems to have turned around, and the unity of mathematics is on much firmer ground.) To be sure, it is very important to seek a deeper framework or paradigm of research, and it is always exciting if a problem or result from some seemingly remote area all of a sudden fits into a nice abstract framework; often this reveals important points that have been missed and suggests interesting problems to work on. However, I have felt that results and methods that do not fit into these established frameworks are perhaps even more interesting.

A related controversial point is the principle of "purity of method." If we have a problem in geometry, should we attempt to solve it using just geometry, or should we try to bring in methods from algebra or analysis? Again, both approaches have their merits, but for me, unexpected connections between seemingly distant areas of mathematics have always been the most fascinating. There are beautiful applications of very classical mathematics, such as algebra, in graph theory, and I have tried to find such connections myself. In the 1970's I found a couple of applications of topology to graph theory, and was able to solve a rather long-standing open problem by these methods. This was surprising, because topology is the study of continuity, seemingly the opposite of discrete mathematics.

One of the most important steps in the direction of applying other areas of mathematics in graph theory was the introduction of the "probabilistic method" by Paul Erdös in the mid-1950s. The idea is simple, but the fact that it works is almost mind-boggling: quite often, if you want to construct some object with complicated properties, and no construction you can think of works, you can try to construct the object randomly, flipping a coin or rolling dice to determine how to make the next construction step. This sounds crazy: if you get all the parts of a TV set in a bag, and you don't know how to put them together, surely you would not try to attach them to each other on the basis of coin flips. Well, this would not be the right approach to constructing a TV set, but it does work miraculously in some other cases. (To stay with this example, other complicated networks like the human brain most likely have a lot of random connections, and they work well.) What Erdös did was to prove that graphs with certain very special properties, called Ramsey graphs, exist. It had long been known that every graph with n nodes must have either log n nodes that are mutually adjacent, or log n nodes that are mutually nonadjacent. Ramsey graphs are graphs for which this bound is tight, that is, where the largest number of mutually adjacent nodes is about log n, and so is the largest number of mutually nonadjacent nodes. Erdös in fact proved that if we put every graph with a given number of nodes in a bag, shake it, and pull out one graph, it will almost certainly be a Ramsey graph. What makes this surprising is the fact that even today, we don't know how to construct a single Ramsey graph; whatever we construct, it always ends up among the tiny fraction of graphs which are not Ramsey graphs.
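Erdös's existence proof is, at heart, a short calculation: if the expected number of "bad" node sets in a random graph is below 1, then some graph must avoid all of them. A Python sketch of the computation (the function name is invented):

```python
from math import comb

def expected_monochromatic_sets(n, k):
    """Expected number of k-node subsets of a uniformly random graph on
    n nodes that are fully adjacent or fully nonadjacent: each of the
    C(n, k) subsets is 'bad' with probability 2 * 2^(-k(k-1)/2), since
    all k(k-1)/2 of its potential edges must agree."""
    return comb(n, k) * 2.0 ** (1 - k * (k - 1) // 2)

# Far below 1: among graphs on 1000 nodes, some graph has no 20 nodes
# that are mutually adjacent and no 20 that are mutually nonadjacent --
# even though no explicit construction of such a graph is known.
print(expected_monochromatic_sets(1000, 20))
```

When the expectation is less than 1, at least one graph in the sample space has zero bad subsets; that graph is the Ramsey graph whose existence is claimed.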

Today, such probabilistic arguments are used in graph theory in many other ways, and in fact this method is becoming a fundamental tool in many areas of mathematics. Around 1960, Paul Erdös and Alfréd Rényi developed the theory of random graphs. In their model, we start with n nodes, which we fix; then we begin to add edges, where the location of each new edge is chosen randomly from among all pairs of nodes not adjacent yet. After a certain prescribed number m of edges, we stop. Of course, if we repeat this construction, almost surely we will end up with a different graph. However, if n and m are large, the graphs constructed this way will be very similar, with a very small probability of getting an "outlier." (This is a manifestation of the Law of Large Numbers.) A related phenomenon is that watching these random graphs develop (as edges are added), we observe sudden changes in their structure. For example, if we look at the graph when it has 0.49n edges, almost surely it will consist of many small connected components. If we look again when the graph has 0.51n edges, then it will contain a single giant component (containing about 4% of all nodes), along with a few very small ones.
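This phase transition is easy to observe experimentally. A Python sketch using union-find (with edge counts 0.3n and 0.7n rather than the 0.49n and 0.51n quoted in the text, so that the jump is unmistakable at a modest graph size; for simplicity, edge endpoints are drawn with replacement, which changes nothing essential in this sparse regime):

```python
import random
from collections import Counter

def largest_component(n, m, rng):
    """Size of the largest connected component after throwing m random
    edges onto n nodes, tracked with a union-find structure."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for _ in range(m):
        a, b = find(rng.randrange(n)), find(rng.randrange(n))
        if a != b:
            parent[a] = b
    return max(Counter(find(x) for x in range(n)).values())

rng = random.Random(0)
n = 100_000
print(largest_component(n, int(0.3 * n), rng))  # tiny: on the order of 100 nodes
print(largest_component(n, int(0.7 * n), rng))  # giant: about half of all nodes
```

Below the critical density every component stays logarithmically small; above it, a single component of linear size emerges almost surely.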

To determine these typical properties of random graphs is not easy, and Erdös and Rényi have worked out many of them. Less than a decade later, I learned probability theory from the lectures of Rényi, and he gave me copies of their papers on random graphs. I have to admit that I was not interested in them for a while; they contained long, detailed computations, and who likes to read such things? Since then, the field has blossomed into one of the most active areas in graph theory, and became fundamental for modelling the internet, and of course I could not avoid working with random graphs, as we shall see below.

9.

**Very large networks**

More recently, I got interested in the theory of very large networks. Between 1999 and 2006 I worked in the Theory Group of Microsoft Research, which was an excellent place, with several outstanding mathematicians and a good cooperative atmosphere. We were free to do any mathematical research that we were interested in, but of course it was also worthwhile to listen to the problems that were raised by software development, networking, cryptography, and other areas which were important for a large software company. In the Fall of 2002, I heard three questions raised by three colleagues, which seemed quite different, but in the long run turned out to be very closely related.

Jennifer Chayes was looking at internet models that had been initiated by Albert and Barabási just a few years earlier. The internet was modelled as a randomly growing network, and she asked whether one can describe a limit distribution in a fashion similar to the way the limit of the sum of more and more independent coin-flips can be described by the "Bell curve" (Gaussian distribution). I remembered that in 1979, Paul Erdös, Joel Spencer and I had written a paper where we implicitly used such a limit notion. This had not gotten much attention at that time, but now it also tied in with a question of Vera Sós about quasirandomness, and adding many new ideas, Christian Borgs, Jennifer Chayes, Vera Sós, Kati Vesztergombi and I were able to work out a theory of convergent graph sequences. About the same time, Michael Freedman, who was working on quantum computing based on methods from topology, asked for a characterisation of partition functions of statistical physical models. With Lex Schrijver, who was visiting, we managed to give a characterisation of such functions. Based on both works, my postdoc Balázs Szegedy and I were able to construct limit objects for convergent graph sequences.

This suggested a general framework for the study of very large networks, and I began to look around to see where such networks were used besides the internet. It turned out that they were all over the place: many areas in mathematics, computer science, biology, physics and social sciences study properties of very large graphs. This gives a new twist to graph theory, and further areas of classical mathematics turn out to be needed here. I have to re-learn areas that I last met when I was a student, and I enjoy this enormously.

The internet is an obvious example. There is in fact more than one network that we can define based on the internet. One is a "physical" network, the graph whose nodes are all the electronic devices (computers, telephones, routers, hubs, etc.), and whose edges are all the connections between them (wired or wireless). There is also the "logical" structure of the internet, often called the World Wide Web, whose nodes are all the documents available on the web, and whose edges are the hyperlinks pointing from one document to another.

Social networks are of course formed by people, with various definitions of connections between them, although the best known and best studied social networks are internet based (like Facebook). Some historians would like to understand history based on a network of humans. The structure of this network determines, among other things, how fast news, diseases, religions, and inventions spread through society, with enormous impact on the events of history.

There are many other networks related to humans. The brain is a great example of a huge network whose working is not fully understood. It is too large for its structure (all its neurons and their connections) to be encoded in our DNA; how is it, then, that it still functions and is able to solve math problems?

Biology is full of systems whose basic structure is a network. Interactions between all animals and plants living in a forest (who eats whom), interactions between proteins in our body, etc. All in all, networks are about to become a basic language for the description of structures and systems in many parts of nature; just as continuous functions, differentiation and integration became the basic language for the description of mechanics and electromagnetism.

For the mathematician, this means that we have to provide powerful tools for biologists, historians, and sociologists to describe the systems they (and all of us) are interested in. This will not be an easy task, since these systems are very diverse; the modelling of traffic, information distribution, or electrical networks discussed above is only the tip of the iceberg.

We often use the finite to approximate the infinite; the numerical solution of the equations of physics (say, for the purposes of weather prediction) usually goes through restricting the whole of space and time to a finite number of points, and computing step-by-step (approximately) how temperature, pressure, etc. develop at these points. It is a more subtle thought that the infinite is often a good approximation of the large finite. Continuous structures are often cleaner, more symmetric, and richer than their discrete counterparts. Graph limits are good examples of this method.

To illustrate the idea on a physical example, look at a large piece of metal. This is a crystal, that is, a really large graph consisting of atoms and the bonds between them (arranged in a periodic and therefore rather boring way). But for an engineer, who uses this metal to build a bridge, it is more useful to consider it as a continuum with a few important parameters (density, elasticity, temperature) as functions on this continuum. Our engineer can then use differential equations to compute the stability of the bridge. Can we consider a more general very large graph as some kind of a continuum?

In some cases this is possible, and a figure illustrates the idea. We start with a random graph which is just a little more complicated than the random graphs introduced by Erdös and Renyi. It is constructed by growing it randomly according to the following rule: at each step, either a new node or a new edge is born. If the number of nodes is $n$, then the probability that a new node is born is $1/n$, and the probability that a new edge is born is $(n-1)/n$. A new edge connects a randomly chosen pair of nodes.

The picture on the left represents the graph after 100 steps in the following way: the pixel in the intersection of the $i$-th row and the $j$-th column is black if there is an edge connecting the $i$-th node and the $j$-th node, and white if there is no such edge. So the area in the upper left is darker, because a pixel there represents a pair of nodes that have been around for a longer time and have had more chance to get connected.

While this graph depends on chance, the pixel picture on the left is, from a distance, quite similar to the continuous function 1-max(x, y), which is depicted on the right. If instead of a hundred steps we took 1000, the similarity would be even more striking. One can prove that the rather simple function on the right encodes all the information one needs to know about the graph on the left, except for random fluctuations, which become smaller and smaller as the number of nodes grows.
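The growth rule can be simulated directly. A Python sketch (it forces a node birth when the current graph is complete, a corner case the idealized description above glosses over):

```python
import random

def grow(steps, rng):
    """Grow a graph for the given number of steps: with n current nodes,
    a new node is born with probability 1/n, otherwise a new edge joins
    a randomly chosen nonadjacent pair. Older pairs get more chances to
    be joined, producing the dark upper-left corner of the pixel
    picture."""
    nodes, edges = 1, set()
    for _ in range(steps):
        complete = len(edges) == nodes * (nodes - 1) // 2
        if complete or rng.random() < 1 / nodes:
            nodes += 1  # a new node is born
        else:
            while True:  # a new edge is born between a nonadjacent pair
                i, j = sorted((rng.randrange(nodes), rng.randrange(nodes)))
                if i != j and (i, j) not in edges:
                    edges.add((i, j))
                    break
    return nodes, edges

nodes, edges = grow(100, random.Random(1))
# every step created exactly one node or one edge
print(nodes - 1 + len(edges))  # 100
```

Repeating the run with a larger number of steps and plotting the adjacency matrix as pixels reproduces the gradient that the limit function 1-max(x, y) describes.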

Large graphs, with thousands of nodes, and huge graphs, with billions, represent quite a new kind of challenge for the graph theorist. It is a special beauty of mathematics that to meet some of these challenges, we have to discover and use more and more connections with other, more classical parts of mathematics.

7. Jennifer Chayes was looking at internet models that were initiated by Albert and Barabási just a few years earlier, The internet was modelled as a randomly growing network, and she asked whether one can describe a limit distribution in a fashion similar to the way the limit of the sum of more and more independent coin-flips can be described by the "Bell curve" (Gaussian distribution). I remembered that in 1979, Paul Erdös, Joel Spencer and I had written a paper where we implicitly used such a limit notion. This had not gotten much attention at that time, but now it also tied in with a question of Vera Sós about quasirandomness, and adding many new ideas, Christian Borgs, Jennifer Chayes, Vera Sós, Kati Vesztergombi and I were able to work out a theory of convergent graph sequences. About the same time, Michael Freedman, who was working on quantum computing based on methods from topology, asked for a characterisation of partition functions of statistical physical models. With Lex Schrijver, who was visiting, we managed to give a characterisation of such functions. Based on both works, my postdoc Balázs Szegedy and I were able to construct limit objects for convergent graph sequences.

This suggested a general framework for the study of very large networks, and I began to look around where such networks were used besides the internet. It turned out that they were all over the place: many areas in mathematics, computer science, biology, physics and social sciences study properties of very large graphs. This gives a new twist to graph theory, and futher areas of classical mathematics turn out to be needed here. I have to re-learn areas that I last met when I was a student, and I enjoy this enormously.

The internet is an obvious example. There are in fact more than one network that we can define based on the internet. One is a "physical' network, the graph whose nodes are all the electronic devices (computers, telephones, routers, hubs, etc.), and whose edges are all the connections between them (wired or wireless). There is also the "logical" structure of the internet, often called the World Wide Web, which consists of all the documents available on the web, and whose edges are the hyperlinks pointing from one to the other.

Social networks are of course formed by people, with various definitions of connections between them, although the best known and best studied social networks are internet based (like Facebook). Some historians would like to understand history based on a network of humans. The structure of this network determines, among other things, how fast news, diseases, religions and inventions spread through society, with enormous impact on the events of history.

There are many other networks related to humans. The brain is a great example of a huge network whose working is not fully understood. It is too large for its structure (all its neurons and their connections) to be encoded in our DNA; how is it, then, that it still functions and is able to solve math problems?

Biology is full of systems whose basic structure is a network: interactions between all the animals and plants living in a forest (who eats whom), interactions between proteins in our body, and so on. All in all, networks are about to become a basic language for the description of structures and systems in many parts of nature, just as continuous functions, differentiation and integration became the basic language for the description of mechanics and electromagnetism.

For the mathematician, this means that we have to provide powerful tools for biologists, historians and sociologists to describe the systems they (and all of us) are interested in. This will not be an easy task, since these systems are very diverse; the modelling of traffic, information distribution or electrical networks discussed above is only the tip of the iceberg.

We often use the finite to approximate the infinite; the numerical solution of the equations of physics (say, for the purposes of weather prediction) usually goes through restricting the whole of space and time to a finite number of points, and computing step by step (approximately) how temperature, pressure etc. develop at these points. It is a more subtle thought that the infinite is often a good approximation of the large finite. Continuous structures are often cleaner, more symmetric, and richer than their discrete counterparts. Graph limits are good examples of this method.

To illustrate the idea with a physical example, look at a large piece of metal. This is a crystal, that is, a really large graph consisting of atoms and bonds between them (arranged in a periodic and therefore rather boring way). But for an engineer, who uses this metal to build a bridge, it is more useful to consider it as a continuum with a few important parameters (density, elasticity, temperature) as functions on this continuum. Our engineer can then use differential equations to compute the stability of the bridge. Can we consider a more general very large graph as some kind of continuum?

In some cases this is possible, and a figure illustrates the idea. We start with a random graph which is just a little more complicated than the random graphs introduced by Erdős and Rényi. It is constructed by growing it randomly according to the following rule: at each step, either a new node or a new edge is born. If the number of nodes is $n$, then the probability that a new node is born is $1/n$, and the probability that a new edge is born is $(n-1)/n$. A new edge connects a randomly chosen pair of nodes.
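This growth rule is easy to simulate. Here is a minimal Python sketch (the function name and parameters are my own choices, not from the lecture):

```python
import random

def grow_graph(steps, seed=None):
    """Simulate the growing random graph described above: if the
    current number of nodes is n, a step creates a new node with
    probability 1/n, and otherwise (probability (n-1)/n) adds an
    edge between a uniformly random pair of existing nodes."""
    rng = random.Random(seed)
    n = 1                  # start from a single node
    edges = set()          # each edge stored as (smaller index, larger index)
    for _ in range(steps):
        if rng.random() < 1 / n:
            n += 1         # a new node is born
        else:
            i, j = rng.sample(range(n), 2)
            edges.add((min(i, j), max(i, j)))
    return n, edges
```

Node indices record birth order, so low indices are the oldest nodes; this is what produces the darker upper-left corner in the pixel picture described next.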

The picture on the left represents the graph after 100 steps in the following way: the pixel in the intersection of the $i$-th row and the $j$-th column is black if there is an edge connecting the $i$-th node and the $j$-th node, and white if there is no such edge. So the area in the upper left is darker, because a pixel there represents a pair of nodes that have been around for a longer time and have had more chance to get connected.

While this graph depends on chance, the pixel picture on the left is, from a distance, quite similar to the continuous function $1-\max(x,y)$, which is depicted on the right. If instead of a hundred steps we took 1000, the similarity would be even more striking. One can prove that the rather simple function on the right encodes all the information one needs to know about the graph on the left, except for random fluctuations, which become smaller and smaller as the number of nodes grows.
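This convergence can be checked numerically. The sketch below (helper names are my own) grows a graph by the rule above and compares the edge densities in the upper-left and lower-right corners of the pixel picture; since the limit function $1-\max(x,y)$ is large near $(0,0)$ and small near $(1,1)$, the corner of old nodes should be much denser:

```python
import random

def grow(steps, seed=0):
    # growth rule from the text: new node with probability 1/n,
    # otherwise an edge between a uniformly random pair of nodes
    rng = random.Random(seed)
    n, edges = 1, set()
    for _ in range(steps):
        if rng.random() < 1 / n:
            n += 1
        else:
            i, j = rng.sample(range(n), 2)
            edges.add((min(i, j), max(i, j)))
    return n, edges

def block_density(edges, rows, cols):
    # fraction of black pixels among the pairs (i, j) with i in rows, j in cols
    pairs = [(i, j) for i in rows for j in cols if i != j]
    return sum((min(i, j), max(i, j)) in edges for i, j in pairs) / len(pairs)

n, edges = grow(40000, seed=1)
q = n // 4
upper_left = block_density(edges, range(q), range(q))                 # oldest nodes
lower_right = block_density(edges, range(n - q, n), range(n - q, n))  # newest nodes
print(upper_left > lower_right)   # the old-node corner is denser
```

Averaging the pixel values over each quarter-block approximates the average of $1-\max(x,y)$ over that block, which is roughly 0.83 in the upper-left corner and roughly 0.08 in the lower-right one, so the inequality holds comfortably despite random fluctuations.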

Large graphs, with thousands of nodes, and huge graphs, with billions, represent quite new kinds of challenges for the graph theorist. It is a special beauty of mathematics that to meet some of these challenges, we have to discover and use more and more connections with other, more classical parts of mathematics.

**Beyond research**

The life of a mathematician is not all research. In varying proportions, we all have to take our shares of teaching, administration, and popularisation of science. I personally have always felt that teaching is a very rewarding activity. Of course, teaching advanced material to very good students is more fun, but even very basic classes can have their interesting, challenging moments.

I have written several books, both research monographs and textbooks, and found that to find the best definitions, the best organisation of the material, the right examples, and many other details can be fascinating. I have had my successes and failures in administration, but this is not the right place to talk about them.

One activity I am interested in is exploiting the possibilities offered by computers and the internet in teaching and popularisation of science, in particular mathematics. I made an attempt to develop an expository "paper" making use of the possibilities of interaction and animations offered by computers.

See http://www.cs.elte.hu/~lovasz/tuttedemo.html

Last Updated February 2020