Spectrum Attention Mechanism for Time Series Classification
Time series classification (TSC) has long been an important and challenging research task. With the wide adoption of deep learning, more and more researchers use deep learning models to solve TSC problems. Since time series often contain substantial noise, which has a negative impact on network training, the raw data are usually filtered before the network is trained. Existing schemes treat filtering and training as two separate stages, and designing the filter requires expert experience, which makes the algorithm harder to design and limits its generality. We note that the essence of filtering is to suppress insignificant frequency components and highlight important ones, which resembles an attention mechanism. In this paper, we propose an attention mechanism that acts on the spectrum (SAM). The network assigns an appropriate weight to each frequency component, achieving adaptive filtering. We use L1 regularization to further enhance the frequency screening capability of SAM. We also propose a segmented SAM (SSAM) to avoid the loss of time-domain information caused by using the spectrum of the whole sequence. In SSAM, a tumbling window segments the original data, and SAM is then applied to each segment to generate new features. We propose a heuristic strategy to search for the appropriate number of segments. Experimental results show that SSAM produces better feature representations, makes the network converge faster, and improves robustness and classification accuracy.
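To make the idea concrete, the following is a minimal PyTorch sketch of a SAM-style layer and its segmented variant, based only on the description above: a learnable weight per frequency bin applied to the FFT of the input, an L1 penalty on those weights, and a tumbling window for SSAM. The class names, the choice of a real FFT, and the decision to share one SAM across segments are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SpectrumAttention(nn.Module):
    """Hypothetical sketch of a spectrum attention (SAM-style) layer.

    Transforms the input to the frequency domain, multiplies each
    frequency bin by a learnable weight (attention over the spectrum),
    and transforms back. An L1 penalty on the weights encourages the
    layer to suppress unimportant frequencies.
    """

    def __init__(self, seq_len: int):
        super().__init__()
        n_freq = seq_len // 2 + 1                      # rFFT bins
        self.freq_weights = nn.Parameter(torch.ones(n_freq))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., seq_len) real-valued series.
        spec = torch.fft.rfft(x, dim=-1)               # complex spectrum
        spec = spec * self.freq_weights                # per-frequency weighting
        return torch.fft.irfft(spec, n=x.shape[-1], dim=-1)

    def l1_penalty(self) -> torch.Tensor:
        # Added to the training loss to sharpen frequency selection.
        return self.freq_weights.abs().sum()


class SegmentedSpectrumAttention(nn.Module):
    """Sketch of SSAM: a tumbling (non-overlapping) window splits the
    series into segments and SAM is applied to each segment, preserving
    coarse time-domain localization."""

    def __init__(self, seq_len: int, n_segments: int):
        super().__init__()
        assert seq_len % n_segments == 0, "n_segments must divide seq_len"
        self.seg_len = seq_len // n_segments
        self.sam = SpectrumAttention(self.seg_len)     # shared across segments (assumption)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t = x.shape
        segments = x.reshape(b, -1, self.seg_len)      # tumbling windows
        filtered = self.sam(segments)                  # SAM per segment
        return filtered.reshape(b, t)


# Usage sketch: the filtered series feeds a downstream classifier, and the
# L1 term (scaled by a hypothetical lambda) is added to the training loss.
x = torch.randn(8, 128)
ssam = SegmentedSpectrumAttention(seq_len=128, n_segments=4)
features = ssam(x)
loss_reg = 1e-3 * ssam.sam.l1_penalty()
```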