Abstract: Spectroscopies are of fundamental importance but can suffer from low sensitivity. Singular Value Decomposition (SVD) is a powerful mathematical tool that, combined with low-rank approximation, can denoise spectra and thereby increase sensitivity. SVD also underlies data-mining methods such as Principal Component Analysis (PCA). In this paper, we focused on reducing the duration of SVD, which is a time-consuming computation. Both Intel processors (CPU) and Nvidia graphics cards (GPU) were benchmarked. A 100-fold speed-up was achieved by combining the divide-and-conquer algorithm, the Intel Math Kernel Library (MKL), SSE3 (Streaming SIMD Extensions) hardware instructions and single precision. Under these conditions, the CPU can outperform a GPU driven by CUDA technology. These results provide a strong basis for optimising SVD computation at the user scale.
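As a minimal sketch of the low-rank denoising idea described above, the Python snippet below truncates the SVD of a matrix of spectra. The function name `denoise` and the `rank` parameter are illustrative, not taken from the paper; with a NumPy/SciPy stack built against MKL, the `gesdd` LAPACK driver corresponds to the divide-and-conquer algorithm benchmarked here, and casting to single precision matches the float32 setting of the benchmarks.

```python
import numpy as np
from scipy.linalg import svd

def denoise(spectra: np.ndarray, rank: int) -> np.ndarray:
    """Denoise spectra via truncated SVD (low-rank approximation).

    `spectra` is assumed to be a 2-D array, one spectrum per row;
    `rank` is the number of singular components retained.
    """
    # Single precision halves memory traffic and enables the
    # SIMD speed-ups discussed in the paper.
    a = spectra.astype(np.float32)
    # lapack_driver="gesdd" selects the divide-and-conquer SVD.
    u, s, vt = svd(a, full_matrices=False, lapack_driver="gesdd")
    # Zero all but the first `rank` singular values, then reconstruct:
    # the discarded components are assumed to carry mostly noise.
    s[rank:] = 0.0
    return (u * s) @ vt
```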
https://hal.sorbonne-universite.fr/hal-02063604
Contributor: Guillaume Laurent