Many important scientific and engineering problems have continuous mathematical formulations. They occur in numerous areas, including physics, chemistry, finance, economics, statistics, and all computational sciences. Such problems often lead to ordinary or partial differential equations, integral equations, stochastic differential equations, high-dimensional and path integration, nonlinear equations (including systems of polynomial equations), and various types of optimization problems.
These problems can almost never be solved analytically; they can only be solved approximately, to within some error threshold. Computational complexity is an area of applied mathematics and theoretical computer science that studies the minimal computational resources needed for the approximate solution of such problems. Often the resource of interest is time. The minimal computational time can be measured in different settings and under different error criteria. As we study the computational complexity of continuous problems, we encounter many challenging research problems.
High-dimensional problems usually suffer from the curse of dimensionality if we consider them over spaces in which all variables play the same role. A challenging problem is to find ways of structuring such problems that allow us to vanquish the curse. This exciting research area studies the tractability of such problems. There are already several positive results in this direction. Examples include problems defined over spaces equipped with product, finite-order, and other weights, as well as problems defined over spaces of increasing smoothness with respect to successive variables.
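The product-weight idea can be sketched concretely. In the weighted-space framework associated with Sloan and Woźniakowski, each variable x_j carries a weight γ_j ≥ 0 that measures its importance, a group of variables receives the product of the individual weights, and summability of the weights is the standard sufficient condition for dimension-independent error bounds. This is only an illustrative sketch; the precise conditions depend on the function space:

```latex
% Product weights: each variable x_j carries a weight \gamma_j;
% a subset of variables \mathfrak{u} carries the product weight.
\gamma_{\mathfrak{u}} \;=\; \prod_{j \in \mathfrak{u}} \gamma_j ,
\qquad
\sum_{j=1}^{\infty} \gamma_j \;<\; \infty
\;\Longrightarrow\;
\text{dimension-independent error bounds (strong tractability).}
```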
For some problems the curse of dimensionality in the worst-case setting can be vanquished by switching to a more lenient setting, such as the randomized or average case setting; the complexity then becomes only polynomial in the dimension. Sometimes we can even prove that the number of function values needed to solve the problem approximately does not depend on the dimension. We believe that this is only the beginning, and that we will find many other ways of structuring these problems that allow us to break the curse. Furthermore, we believe that high-dimensional problems occurring in computational practice are structured so that the curse of dimensionality does not hold. This would explain why practically important high-dimensional problems can be solved with reasonable computational effort.
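The dimension-independence of the randomized setting is easiest to see for plain Monte Carlo integration, whose root-mean-square error is O(n^{-1/2}) with a constant depending only on the variance of the integrand, not on the dimension. A minimal sketch (the integrand and sample sizes are illustrative choices, not from the text):

```python
import random

def mc_integrate(f, dim, n, seed=0):
    """Plain Monte Carlo: average f over n uniform points in [0,1]^dim.
    The RMS error decays like n**-0.5 regardless of dim -- the classic
    way the randomized setting sidesteps the curse of dimensionality."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = [rng.random() for _ in range(dim)]
        total += f(x)
    return total / n

# Illustrative integrand: the mean of the coordinates.
# Its integral over [0,1]^dim is exactly 0.5 in every dimension.
f = lambda x: sum(x) / len(x)
for dim in (2, 50, 500):
    print(dim, round(mc_integrate(f, dim, n=4000), 3))
```

With the same 4000 samples, the estimate stays close to the exact value 0.5 whether the dimension is 2 or 500; only the variance of the integrand, not the dimension itself, enters the error.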
Discrepancy theory is directly related to the quality of quasi-Monte Carlo methods for the approximation of integrals. It deals with the problem of distributing points as uniformly as possible and with estimating the inevitable errors that arise from approximating a continuous distribution by a discrete one. Naturally, discrepancy is intimately related to tractability studies. Although the classical theory has already answered many questions for low-dimensional problems, the high-dimensional situation is not well understood, and many challenging fundamental problems still need to be studied.
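To make the notion of discrepancy concrete, here is a small sketch (names and parameter choices are mine, not from the text) that computes the exact one-dimensional star discrepancy, via the classical closed-form expression over sorted points, for a low-discrepancy van der Corput sequence and for random points:

```python
import random

def van_der_corput(n, base=2):
    """First n points of the van der Corput sequence: the radical
    inverse of k = 0, 1, ..., n-1 in the given base."""
    pts = []
    for k in range(n):
        q, denom, x = k, 1, 0.0
        while q:
            q, r = divmod(q, base)
            denom *= base
            x += r / denom
        pts.append(x)
    return pts

def star_discrepancy_1d(points):
    """Exact 1-D star discrepancy via the closed form
    D* = 1/(2n) + max_i |x_(i) - (2i-1)/(2n)| over the sorted points."""
    xs = sorted(points)
    n = len(xs)
    return 1 / (2 * n) + max(abs(x - (2 * i + 1) / (2 * n))
                             for i, x in enumerate(xs))

n = 128
low = star_discrepancy_1d(van_der_corput(n))   # exactly 1/n here
rng = random.Random(1)
rand = star_discrepancy_1d([rng.random() for _ in range(n)])
print(low, rand)   # the van der Corput set is far more uniform
```

When n is a power of the base, the first n van der Corput points form exactly the equispaced set {k/n}, so the discrepancy achieves the optimal order 1/n, whereas random points typically have discrepancy of a larger order, roughly sqrt(log log n / n) by the law of the iterated logarithm.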
High-dimensional or even infinite-dimensional problems (which arise, for instance, in solving PDEs with random coefficients) will be the basis of the two workshops in the program. These workshops will study discrepancy, quasi-Monte Carlo methods, and the complexity of high-dimensional problems and stochastic computation.