China Safety Science Journal ›› 2024, Vol. 34 ›› Issue (12): 213-222.doi: 10.16265/j.cnki.issn1003-3033.2024.12.0308

• Emergency technology and management •

Knowledge-prompted few-shot relation extraction for emergency plan texts

ZHANG Kai1, CHEN Qiang1,**, NI Kai2, ZHANG Yujin1

  1. School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
    2. Science and Technology Research and Development Office, Shanghai Institute of Work Safety Science, Shanghai 201620, China
  • Received: 2024-07-15 Revised: 2024-09-22 Online: 2024-12-28 Published: 2025-06-28
  • Contact: CHEN Qiang

Abstract:

In order to accurately and quickly achieve relation extraction from few-shot emergency plan texts, a relation extraction model based on knowledge prompts (KMKP) was proposed. First, a prompt template was constructed using learnable typed entity markers that incorporate relation semantics, thereby strengthening the guidance that the input provides to the pre-trained language model (PLM). Second, a boundary loss function was used to optimize model training, enabling the PLM to learn dependency relationships specific to the emergency domain and imposing structured constraints on [MASK] predictions. Third, a gradient-free emergency knowledge datastore was built from the training data, and a knowledge retrieval mechanism was constructed by integrating the k-nearest neighbor (KNN) algorithm; this mechanism captures feature connections between training and prediction data, and gradient-free normalization was used to correct the predictions of the PLM. Finally, experimental validation and analysis were performed on four public datasets under few-shot settings (1-, 8-, and 16-shot). The results show that, compared with the state-of-the-art model KnowPrompt, KMKP boosts the F1 score by an average of 2.1%, 2.8%, and 1.9%, respectively. In a 16-shot emergency plan instance test, KMKP achieves a relation extraction accuracy of 91.02%, effectively mitigating catastrophic forgetting and overfitting in few-shot scenarios.
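The gradient-free KNN correction described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact method: it assumes a datastore of training-example feature vectors with relation labels, retrieves the k nearest neighbors of a query representation, turns their distances into a label distribution, and interpolates that distribution with the PLM's predicted probabilities. All function and parameter names (`knn_corrected_probs`, `lam`, `temperature`, etc.) are hypothetical.

```python
import numpy as np

def knn_corrected_probs(query_vec, store_vecs, store_labels, plm_probs,
                        num_classes, k=4, lam=0.5, temperature=1.0):
    """Interpolate a PLM's relation distribution with a kNN distribution
    drawn from a gradient-free datastore of training examples.
    Illustrative sketch; hyperparameters are assumptions."""
    # Euclidean distance from the query to every stored example
    dists = np.linalg.norm(store_vecs - query_vec, axis=1)
    nearest = np.argsort(dists)[:k]
    # Softmax over negative distances -> neighbor weights
    w = np.exp(-dists[nearest] / temperature)
    w /= w.sum()
    # Aggregate neighbor weights per relation label
    knn_probs = np.zeros(num_classes)
    for weight, idx in zip(w, nearest):
        knn_probs[store_labels[idx]] += weight
    # Gradient-free correction: convex combination with the PLM output
    return lam * knn_probs + (1.0 - lam) * plm_probs
```

Because the datastore is only read at inference time, it can be extended with new emergency-domain examples without any further gradient updates to the PLM, which is what makes the correction "gradient-free".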

Key words: knowledge prompt, few-shot, emergency plan, relation extraction, data augmentation, k-nearest neighbor (KNN), relation extraction model based on knowledge prompts (KMKP)
