Temporal-Framing Adaptive Network for Heart Sound Segmentation without Prior Knowledge of State Duration
Objective: This paper presents a novel heart sound segmentation algorithm based on a Temporal-Framing Adaptive Network (TFAN), including a state transition loss and dynamic inference for decoding the most likely state sequence. Methods: In contrast to previous state-of-the-art approaches, the TFAN-based method does not require any prior knowledge of the state durations of heart sounds and is therefore likely to generalize to non-sinus rhythms. The TFAN-based method was trained on 50 recordings randomly chosen from Training set A of the 2016 PhysioNet/Computing in Cardiology Challenge and tested on the other 12 independent training and test databases (2,099 recordings and 52,180 beats). The databases were separated into three levels of increasing segmentation difficulty (LEVEL-I, -II and -III) for performance reporting. Results: The TFAN-based method achieved a superior F1 score on all 12 databases except 'Test-B', with an average of 96.7%. Moreover, the TFAN-based method achieved overall F1 scores of 99.2%, 94.4% and 88.54% for the LEVEL-I, -II and -III databases, respectively. The TFAN-based method therefore provides a substantial improvement, particularly for the more difficult cases and on data sets not represented in the public training data. Significance: The proposed method is highly flexible and likely to apply to other non-stationary time series. Further work is required to understand to what extent this approach will improve diagnostic performance, although it is logical to assume that superior segmentation will lead to improved diagnostics.
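The abstract mentions dynamic inference for decoding the most likely state sequence without duration priors. The paper's exact decoding procedure is not given here; the sketch below is a hypothetical illustration of the general idea: a Viterbi-style dynamic-programming pass over per-frame state posteriors, constrained only by the cyclic heart-sound state order (S1 → systole → S2 → diastole), with no assumption about how long each state lasts.

```python
import numpy as np

# Hypothetical sketch (not the paper's exact algorithm): decode the most
# likely heart-sound state sequence from per-frame log posteriors, allowing
# each state to persist for any number of frames (no duration prior) and
# only to advance in the fixed cyclic order.
STATES = ["S1", "systole", "S2", "diastole"]

def decode(log_probs: np.ndarray) -> list:
    """log_probs: (T, 4) array of per-frame log state posteriors."""
    T, S = log_probs.shape
    # Allowed transitions: stay in the same state, or advance cyclically.
    allowed = {s: (s, (s + 1) % S) for s in range(S)}
    score = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    score[0] = log_probs[0]
    for t in range(1, T):
        for s in range(S):
            # Best predecessor among states permitted to transition into s.
            preds = [p for p in range(S) if s in allowed[p]]
            best = max(preds, key=lambda p: score[t - 1, p])
            score[t, s] = score[t - 1, best] + log_probs[t, s]
            back[t, s] = best
    # Backtrack the highest-scoring state path.
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [STATES[s] for s in reversed(path)]
```

Because only the transition structure is constrained, the same decoder applies unchanged to recordings with irregular state durations, which is the property the abstract highlights for non-sinus rhythms.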