Essentially all astronomical data have correlated noise. New statistical approaches for treating correlated noise have recently become available and are now popular in the astronomy community. These approaches involve linear-algebra manipulations that are amenable to GPU acceleration. Two barriers have prevented widespread adoption of GPUs for astronomical data analysis: access to (NVIDIA) hardware and the learning curve of (CUDA) programming. Recent Python frameworks are lowering the barrier to programming GPUs.
In this presentation I demonstrate the performance of the PyTorch GPU programming framework on the problem of fitting a line to data with correlated noise. The task applies Gaussian Process regression with linear least squares. The benchmark hardware was an NVIDIA K40 GPU on a Sandy Bridge node of the NASA Pleiades supercomputer at NASA Ames Research Center.
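To illustrate the linear-algebra core of the task, the sketch below shows a generalized-least-squares line fit under a squared-exponential Gaussian Process covariance, evaluated with PyTorch so the same code runs on a CPU or a GPU. This is a minimal sketch, not the presentation's benchmark code: the kernel hyperparameters (amp, ell, sigma), the data size, and the synthetic data are assumptions chosen for illustration.

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float64

# Synthetic data: a straight line plus correlated (GP-drawn) noise.
# All numbers here are illustrative assumptions, not the presented benchmark.
N = 2048
t = torch.linspace(0.0, 10.0, N, dtype=dtype, device=device)
true_slope, true_intercept = 0.5, 1.0

# Squared-exponential covariance for the correlated noise, plus a white-noise term.
amp, ell, sigma = 0.3, 1.5, 0.05   # assumed hyperparameters
d2 = (t[:, None] - t[None, :]) ** 2
K = amp**2 * torch.exp(-0.5 * d2 / ell**2) \
    + sigma**2 * torch.eye(N, dtype=dtype, device=device)

# Draw one realization of correlated noise and build the observed y.
L = torch.linalg.cholesky(K)
y = true_slope * t + true_intercept + L @ torch.randn(N, dtype=dtype, device=device)

# Design matrix for a line: columns are [t, 1].
X = torch.stack([t, torch.ones_like(t)], dim=1)

# Generalized least squares: w = (X^T K^{-1} X)^{-1} X^T K^{-1} y,
# computed with Cholesky solves rather than an explicit matrix inverse.
Kinv_X = torch.cholesky_solve(X, L)            # K^{-1} X
Kinv_y = torch.cholesky_solve(y[:, None], L)   # K^{-1} y
A = X.T @ Kinv_X                               # 2x2 normal-equations matrix
b = X.T @ Kinv_y
w = torch.linalg.solve(A, b)

print("fitted slope, intercept:", w.squeeze().tolist())

The expensive step is the N x N Cholesky factorization, which is exactly the kind of dense linear algebra that benefits from offloading to the GPU; moving the computation between devices requires changing only the device string.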