Authors:
(1) Vladislav Trifonov, Skoltech ([email protected]);
(2) Alexander Rudikov, AIRI, Skoltech;
(3) Oleg Iliev, Fraunhofer ITWM;
(4) Ivan Oseledets, AIRI, Skoltech;
(5) Ekaterina Muravleva, Skoltech.
Table of Links
2 Neural design of preconditioner
3 Learn correction for ILU and 3.1 Graph neural network with preserving sparsity pattern
5.1 Experiment environment and 5.2 Comparison with classical preconditioners
5.4 Generalization to different grids and datasets
7 Conclusion and further work, and References
5.4 Generalization to different grids and datasets
We also observe good generalization of our approach when transferring the preconditioner between grids and datasets (Figure 4). Transfer between datasets of increasing or decreasing complexity does not degrade quality, which means we can train the model on simple PDEs and then run inference on complex ones. If we fix the dataset complexity and transfer the learned model to other grids, we observe a quality loss of only about 10%.
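To make the transfer protocol concrete, below is a minimal sketch of a cross-grid evaluation loop, assuming the trained model can be wrapped as a SciPy `LinearOperator`. The helper names (`make_poisson`, `learned_preconditioner`) are hypothetical, and a plain SciPy ILU stands in for the paper's GNN-corrected ILU; the point is only to show how preconditioned-CG iteration counts are compared across grids the model was not trained on.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def make_poisson(n):
    """Assemble a 2D Poisson matrix on an n-by-n grid (5-point stencil)."""
    N = n * n
    main = 4.0 * np.ones(N)
    off = -1.0 * np.ones(N - 1)
    off[np.arange(1, N) % n == 0] = 0.0  # no coupling across grid-row boundaries
    far = -1.0 * np.ones(N - n)
    return sp.diags([main, off, off, far, far], [0, -1, 1, -n, n], format="csc")

def learned_preconditioner(A):
    """Stand-in for the trained model: here a plain incomplete LU.
    In the paper, a GNN corrects the ILU factors of A instead."""
    ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
    return spla.LinearOperator(A.shape, ilu.solve)

def cg_iterations(A, M):
    """Run preconditioned CG and count iterations to convergence."""
    b = np.ones(A.shape[0])
    n_iter = [0]
    def cb(xk):
        n_iter[0] += 1
    spla.cg(A, b, M=M, callback=cb)
    return n_iter[0]

# Evaluate the (stand-in) preconditioner on grids it was not tuned for;
# the ~10% quality loss reported above would appear here as a
# correspondingly small growth in iteration counts.
for n in (32, 64, 128):
    A = make_poisson(n)
    M = learned_preconditioner(A)
    print(f"grid {n}x{n}: {cg_iterations(A, M)} PCG iterations")
```

In this kind of comparison, "quality" maps directly onto iteration count: a transferred model that costs only ~10% more PCG iterations than a natively trained one is considered to generalize well.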