| [1] |
ELZEINY S, QARAQE M. Stress classification using photoplethysmogram-based spatial and frequency domain images[J]. Sensors, 2020, 20(17):4987-4999.
doi: 10.3390/s20174987
|
| [2] |
BOUTER Y, BRZOZKA M M, RYGULA R, et al. Chronic psychosocial stress causes increased anxiety-like behavior and alters endocannabinoid levels in the brain of C57BL/6J mice[J]. Cannabis and Cannabinoid Research, 2020, 5(1):51-61.
doi: 10.1089/can.2019.0041
pmid: 32322676
|
| [3] |
DI MARTINO F, DELMASTRO F. High-resolution physiological stress prediction models based on ensemble learning and recurrent neural networks[C]. Proceedings of the IEEE International Symposium on Intelligent Systems and Computing, 2020:1-6.
|
| [4] |
INDIKAWATI F I, WINIARTI S. Stress detection from multimodal wearable sensor data[C]. Proceedings of the 2nd International Conference on Engineering and Applied Science, 2020:1-7.
|
| [5] |
SINGH G, CHETIA PHUKAN O, KUMAR R. Stress recognition with multi-modal sensing using bootstrapped ensemble deep learning model[J]. Expert Systems, 2023, 40(6):1-16.
|
| [6] |
SCHMIDT P, REISS A, DUERICHEN R, et al. Introducing WESAD, a multimodal dataset for wearable stress and affect detection[C]. Proceedings of the 2018 International Conference on Multimodal Interaction, 2018:400-408.
|
| [7] |
GARG P, SANTHOSH J, DENGEL A, et al. Stress detection by machine learning and wearable sensors[C]. Proceedings of the 26th International Conference on Intelligent User Interfaces, 2021:43-45.
|
| [8] |
MOHAMED A A A, ROAA M, NANCY M, et al. A deep learning approach using WESAD data for multi-class classification with wearable sensors[C]. Proceedings of the 6th Novel Intelligent and Leading Emerging Sciences Conference, 2024:194-197.
|
| [9] |
SRIRAM K P, PRAVEEN K G, ABDUL A S G, et al. Deep learning-based automated emotion recognition using multimodal physiological signals and time-frequency methods[J]. IEEE Transactions on Instrumentation and Measurement, 2024, 73.
doi: 10.1109/TIM.2024.3420349
|
| [10] |
CHOI H S. Emotion recognition using a Siamese model and a late fusion-based multimodal method in the WESAD dataset with hardware accelerators[J]. Electronics, 2025, 14(1):1-16.
doi: 10.3390/electronics14010001
|
| [11] |
BHATTI A, BEHINAEIN B, RODENBURG D, et al. Attentive cross-modal connections for deep multimodal wearable-based emotion recognition[C]. Proceedings of the 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos, 2021:1-5.
|
| [12] |
ZHAO Minghui, ZHAO Lulu, GAO Hongxiang, et al. Cross-modal attention fusion of electrocardiogram emotion and voiceprint for depression detection[C]. Proceedings of the Computing in Cardiology Conference, 2024:1-4.
|
| [13] |
WANG Ruiqi, JO W, ZHAO Dezhong, et al. Husformer: a multi-modal transformer for multi-modal human state recognition[J]. IEEE Transactions on Cognitive and Developmental Systems, 2024, 16(4):1374-1390.
doi: 10.1109/TCDS.2024.3357618
|
| [14] |
HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997, 9(8):1735-1780.
doi: 10.1162/neco.1997.9.8.1735
pmid: 9377276
|
| [15] |
陈小乾, 尹亮, 展宗辉, 等. 基于注意力机制和RCN-BiLSTM融合的风电机组故障识别[J]. 中国电力, 2025, 58(8):94-102.
|
|
CHEN Xiaoqian, YIN Liang, ZHAN Zonghui, et al. Fault identification for wind turbine based on attention mechanism and RCN-BiLSTM fusion[J]. Electric Power, 2025, 58(8):94-102.
|
| [16] |
ITTI L, KOCH C, NIEBUR E. A model of saliency-based visual attention for rapid scene analysis[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(11):1254-1259.
doi: 10.1109/34.730558
|
| [17] |
吴思, 张旭光, 方银锋. 基于注意力机制的人群计数方法[J]. 中国安全科学学报, 2022, 32(1):127-134.
doi: 10.16265/j.cnki.issn1003-3033.2022.01.017
|
|
WU Si, ZHANG Xuguang, FANG Yinfeng. Method of crowd counting based on attention mechanism[J]. China Safety Science Journal, 2022, 32(1):127-134.
doi: 10.16265/j.cnki.issn1003-3033.2022.01.017
|
| [18] |
曹淑超, 戈伟斌, 里聪慧, 等. 基于注意力Seq2Seq网络的人群安全风险评估[J]. 中国安全科学学报, 2025, 35(12):196-203.
doi: 10.16265/j.cnki.issn1003-3033.2025.12.1477
|
|
CAO Shuchao, GE Weibin, LI Conghui, et al. Crowd safety risk assessment based on Seq2Seq-attention network[J]. China Safety Science Journal, 2025, 35(12):196-203.
doi: 10.16265/j.cnki.issn1003-3033.2025.12.1477
|
| [19] |
HENDRICKS L A, AKATA Z, ROHRBACH M, et al. Generating visual explanations[C]. Proceedings of the 14th European Conference on Computer Vision (ECCV), 2016:3-19.
|
| [20] |
KANG Tianyu, DING Wei, CHEN Ping. CRESPR: modular sparsification of DNNs to improve pruning performance and model interpretability[J]. Neural Networks, 2023, 172.
doi: 10.1016/j.neunet.2023.12.021
|
| [21] |
BYRA M, SKIBBE H. Generating visual explanations from deep networks using implicit neural representations[C]. Proceedings of the 2025 IEEE/CVF Winter Conference on Applications of Computer Vision, 2025:3310-3319.
|