GBEES-GPU: An efficient parallel GPU algorithm for high-dimensional nonlinear uncertainty propagation

Published in Computer Physics Communications, 2025

Abstract: Eulerian methods for nonlinear uncertainty propagation often suffer from finite-domain limitations and computational inefficiency. A recent algorithm in this class, Grid-based Bayesian Estimation Exploiting Sparsity (GBEES), addresses the first challenge by dynamically allocating a discretized grid only in the regions of phase space where probability is non-negligible. However, the design of the original algorithm leaves the second challenge unresolved for high-dimensional systems. This paper presents an architectural optimization of the algorithm for CPU execution, followed by its adaptation to the CUDA framework for execution on a single GPU. The algorithm is validated for accuracy and convergence, and its performance is evaluated on several distinct GPUs. Tests include propagating a three-dimensional probability distribution subject to the Lorenz ’63 model and a six-dimensional probability distribution subject to the Lorenz ’96 model. The results show that the improvements yield a speedup of over 1000 times compared to the original implementation.
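
The central device the abstract describes is keeping probability only on grid cells where it is non-negligible. The CUDA sketch below illustrates that thresholding step in isolation; it is a minimal illustration under assumed names (`N_CELLS`, `PROB_THRESHOLD`, `flag_significant` are all hypothetical), not the paper's implementation.

```cuda
// Minimal sketch of the "exploiting sparsity" idea: flag grid cells whose
// probability mass exceeds a threshold so the active grid tracks only
// non-negligible probability. Illustrative only; not the GBEES-GPU code.
#include <cstdio>
#include <cuda_runtime.h>

#define N_CELLS 1024            // hypothetical number of allocated cells
#define PROB_THRESHOLD 1e-6f    // hypothetical significance threshold

// Each thread inspects one cell; cells below the threshold are marked
// for removal from the active grid.
__global__ void flag_significant(const float* prob, int* keep, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) keep[i] = (prob[i] > PROB_THRESHOLD) ? 1 : 0;
}

int main(void) {
    float h_prob[N_CELLS];
    for (int i = 0; i < N_CELLS; ++i)   // toy distribution: a narrow peak
        h_prob[i] = (i > 500 && i < 520) ? 0.05f : 1e-9f;

    float* d_prob; int* d_keep;
    cudaMalloc(&d_prob, N_CELLS * sizeof(float));
    cudaMalloc(&d_keep, N_CELLS * sizeof(int));
    cudaMemcpy(d_prob, h_prob, N_CELLS * sizeof(float), cudaMemcpyHostToDevice);

    flag_significant<<<(N_CELLS + 255) / 256, 256>>>(d_prob, d_keep, N_CELLS);

    int h_keep[N_CELLS];
    cudaMemcpy(h_keep, d_keep, N_CELLS * sizeof(int), cudaMemcpyDeviceToHost);

    int active = 0;
    for (int i = 0; i < N_CELLS; ++i) active += h_keep[i];
    printf("active cells: %d of %d\n", active, N_CELLS);

    cudaFree(d_prob); cudaFree(d_keep);
    return 0;
}
```

In the full algorithm this pruning is paired with dynamic allocation of new cells as probability advects into neighboring regions; the sketch shows only the sparsity test that keeps the grid tractable in high dimensions.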

Recommended citation: Hanson, B. L., Rubio, C., García-Gutiérrez, A., and Bewley, T., "GBEES-GPU: An efficient parallel GPU algorithm for high-dimensional nonlinear uncertainty propagation," Computer Physics Communications, Vol. 317, 2025, p. 109819.