Abstract of Accelerating time series analysis via near-data-processing approaches

Iván Fernández Vega

  • The explosion of the Internet of Things and the Big Data era has resulted in the continuous generation of a very large amount of data, which is increasingly difficult to store and analyze. Such a collection of data is also referred to as a time series, a common data representation in almost every scientific discipline and business application. Time series analysis (TSA) splits the time series into subsequences of consecutive data points to extract valuable information (an illustrative sketch of this subsequence-based processing appears after the abstract).

    In this thesis, we characterize state-of-the-art TSA algorithms and identify their bottlenecks on commodity computing platforms. We observe that the performance and energy efficiency of TSA algorithms are heavily burdened by data movement. Based on these observations, we propose software and hardware solutions to accelerate time series analysis and make its computation as energy-efficient as possible. To this end, we provide four contributions: PhiTSA, NATSA, MATSA, and TraTSA.

    PhiTSA optimizes and characterizes state-of-the-art TSA algorithms on a many-core Intel Xeon Phi KNL platform. NATSA is a novel Processing-Near-Memory accelerator for TSA that places custom floating-point processing units close to High Bandwidth Memory, exploiting its memory channels and lower access latency. MATSA is a novel Processing-Using-Memory accelerator for TSA; its key idea is to exploit magneto-resistive memory crossbars to enable energy-efficient and fast time series computation in memory while overcoming the endurance issues of other non-volatile memory technologies. Finally, TraTSA evaluates the benefits of applying Transprecision Computing to TSA, reducing the number of bits dedicated to floating-point operations (a second sketch after the abstract illustrates this precision-reduction idea).
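
The abstract does not spell out which TSA kernel is meant, so the following is only a minimal, illustrative sketch of subsequence-based analysis: it slides a window of length m over the series and, for every subsequence, computes the Euclidean distance to its closest non-overlapping match. The function names, the window length, and the use of NumPy are assumptions made for illustration, not the thesis implementation.

    import numpy as np

    def subsequences(ts, m):
        """Split a 1-D time series into its overlapping length-m subsequences."""
        # One subsequence starts at every index i in [0, len(ts) - m].
        return np.lib.stride_tricks.sliding_window_view(ts, m)

    def nearest_neighbor_distances(ts, m):
        """Distance from each subsequence to its closest non-overlapping match.

        Naive O(n^2 * m) reference: every subsequence is compared against all
        others, so the kernel is dominated by data movement rather than compute.
        """
        subs = subsequences(ts, m)
        n = len(subs)
        profile = np.empty(n)
        for i in range(n):
            d = np.linalg.norm(subs - subs[i], axis=1)   # distances to every subsequence
            d[max(0, i - m + 1):i + m] = np.inf          # mask trivial, overlapping matches
            profile[i] = d.min()
        return profile

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        ts = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)
        print(nearest_neighbor_distances(ts, m=32)[:5])

Even this naive version makes the data-movement problem visible: each of the n subsequences is streamed past all the others, so the same data are read from memory over and over.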
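
TraTSA's actual floating-point formats are not stated in the abstract; the sketch below only mimics the transprecision idea using NumPy's standard float widths, re-running the same hypothetical distance-profile kernel at float64, float32, and float16 and reporting how far the reduced-precision profiles drift from the double-precision reference.

    import numpy as np

    def distance_profile(ts, m, dtype):
        """Nearest-neighbor distance of every subsequence at a given FP width."""
        subs = np.lib.stride_tricks.sliding_window_view(ts.astype(dtype), m)
        n = len(subs)
        profile = np.empty(n, dtype=dtype)
        for i in range(n):
            d = np.sqrt(((subs - subs[i]) ** 2).sum(axis=1, dtype=dtype))
            d[max(0, i - m + 1):i + m] = np.inf          # mask trivial, overlapping matches
            profile[i] = d.min()
        return profile

    rng = np.random.default_rng(0)
    ts = rng.standard_normal(300)
    ref = distance_profile(ts, 16, np.float64)           # double-precision reference
    for dt in (np.float32, np.float16):
        drift = np.max(np.abs(distance_profile(ts, 16, dt).astype(np.float64) - ref))
        print(dt.__name__, "max deviation from the float64 profile:", drift)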

