Date: Tue, November 12, 2019
Time: 14:15
Place: Research I, Room 103
Abstract: The solution of parametric partial differential equations and other parametric problems is a central component of many applications in scientific computing, including, but not limited to, uncertainty quantification, inverse problems, and optimization. To avoid re-implementing existing scientific simulation codes, snapshot-based (non-intrusive) techniques for the solution of parametric problems become very attractive.
In this presentation, I will report on ongoing work to solve parametric problems with a high-dimensional parameter space by means of approximation in reproducing kernel Hilbert spaces. In the presence of regularization, approximation in reproducing kernel Hilbert spaces is equivalent to so-called "kernel ridge regression", a classical approach in machine learning. In this sense, results on the use of machine learning for the efficient approximation of parametric problems will be discussed for examples in computational fluid mechanics and quantum chemistry.
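As a point of reference, kernel ridge regression fits the representer coefficients by solving a regularized dense linear system. The following minimal NumPy sketch (with an illustrative Gaussian kernel and toy data, not the actual setup from this work) shows the basic training and prediction steps:

    import numpy as np

    def gaussian_kernel(X, Y, length_scale=1.0):
        # Gaussian (RBF) kernel matrix between row-wise sample sets X and Y.
        sq_dists = (np.sum(X**2, axis=1)[:, None]
                    + np.sum(Y**2, axis=1)[None, :]
                    - 2.0 * X @ Y.T)
        return np.exp(-sq_dists / (2.0 * length_scale**2))

    def krr_fit(X_train, y_train, reg=1e-6, length_scale=1.0):
        # Solve the dense regularized system (K + reg * I) alpha = y.
        K = gaussian_kernel(X_train, X_train, length_scale)
        return np.linalg.solve(K + reg * np.eye(len(X_train)), y_train)

    def krr_predict(X_test, X_train, alpha, length_scale=1.0):
        # Evaluate the kernel expansion at new parameter points.
        return gaussian_kernel(X_test, X_train, length_scale) @ alpha

    # Toy usage: learn a scalar quantity of interest over a 2D parameter space.
    rng = np.random.default_rng(0)
    X = rng.uniform(size=(200, 2))                  # parameter samples
    y = np.sin(4 * X[:, 0]) * np.cos(3 * X[:, 1])   # stand-in for simulation outputs
    alpha = krr_fit(X, y, reg=1e-6)
    print(np.max(np.abs(krr_predict(X, X, alpha) - y)))  # small training residual

Note that the dense solve costs O(N^3) with standard factorizations, which is exactly the large-scale bottleneck addressed later in this abstract.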
One challenge in parametric problems with a high-dimensional parameter space is the large number of simulation snapshots that have to be computed in order to achieve a low approximation error with respect to the parameter space. If a single simulation is computationally expensive, many simulations of this kind become computationally intractable. To overcome this, we have introduced a multi-fidelity kernel ridge regression approach based on the sparse grid combination technique, or multi-index approximation. This approach significantly reduces the number of expensive calculations by adding coarser and coarser simulation snapshots.
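For orientation, the classical sparse grid combination technique combines solutions f_\ell computed on anisotropic levels \ell = (\ell_1, \dots, \ell_d) using alternating binomial weights; one standard form (the precise multi-index variant used in this work may differ) is

    f_n^{\mathrm{c}} = \sum_{q=0}^{d-1} (-1)^q \binom{d-1}{q} \sum_{|\ell|_1 = n-q} f_\ell .

In the multi-fidelity setting, one component of the multi-index can control the simulation fidelity and another the amount of training data, so that most of the required snapshots are cheap, coarse simulations and only few are expensive, fine ones.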
While this approach mitigates the computational cost of generating the simulation snapshots, large-scale training in kernel ridge regression with millions of training samples is almost impossible if traditional matrix factorizations are used in the training process. To address this issue, we have developed a hierarchical matrix approach that allows the related dense linear systems to be solved in log-linear time. This hierarchical matrix approach has been parallelized on clusters of graphics processing units (GPUs) to achieve the best possible performance.
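The mechanism behind the log-linear complexity is that kernel matrix blocks coupling well-separated clusters of points are numerically low-rank, so they can be stored and applied in compressed form; hierarchical matrices exploit this blockwise over a cluster tree. The following small NumPy experiment illustrates the low-rank property (a sketch of the underlying principle only, not the parallel GPU implementation from this work):

    import numpy as np

    rng = np.random.default_rng(1)

    # Two well-separated 1D point clusters.
    x = rng.uniform(0.0, 1.0, size=500)   # cluster A
    y = rng.uniform(3.0, 4.0, size=500)   # cluster B, separated from A

    # Gaussian kernel block coupling the two clusters.
    B = np.exp(-(x[:, None] - y[None, :])**2)

    # The singular values decay rapidly: the block is numerically low-rank.
    s = np.linalg.svd(B, compute_uv=False)
    rank = int(np.sum(s > 1e-10 * s[0]))
    print(f"numerical rank of the 500x500 well-separated block: {rank}")

    # A rank-r factorization B ~ U V^T reduces storage and matrix-vector cost
    # from O(mn) to O(r(m+n)); hierarchical matrices apply this recursively.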
The results presented in my talk are based on joint work with Michael Griebel, Helmut Harbrecht, Bing Huang, Christian Rieger, and Anatole von Lilienfeld (in alphabetical order).