Tele-Knowledge Pre-training for Fault Analysis

10/20/2022
by Zhuo Chen, et al.

In this work, we share our experience on tele-knowledge pre-training for fault analysis. Fault analysis is a vital task in telecommunication applications that must be handled promptly and properly. It is also a complex task comprising many sub-tasks, each of which requires diverse tele-knowledge. Machine log data and product documents contain part of this tele-knowledge; to organize the remaining tele-knowledge from experts uniformly, we construct a Tele-KG (tele-knowledge graph). With these valuable tele-knowledge data, we propose a tele-domain pre-trained model, TeleBERT, and its knowledge-enhanced version, KTeleBERT, which incorporates effective prompt hints, adaptive numerical data encoding, and two knowledge injection paradigms. We train our model in two stages: pre-training TeleBERT on 20 million telecommunication corpora, then re-training it on 1 million causal and machine corpora to obtain KTeleBERT. We then apply our models to three fault-analysis tasks: root-cause analysis, event association prediction, and fault chain tracing. The results show that KTeleBERT boosts the performance of the task models, demonstrating the effectiveness of the pre-trained KTeleBERT as a model containing diverse tele-knowledge.
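To make the two-stage pipeline concrete, here is a minimal sketch of domain-adaptive masked-language-model pre-training in the spirit of TeleBERT → KTeleBERT, using Hugging Face transformers. The base checkpoint, corpus file names, and hyperparameters are illustrative assumptions rather than the paper's settings, and the prompt hints, adaptive numerical encoding, and knowledge-injection objectives described in the abstract are not reproduced here.

```python
# Sketch: two-stage domain-adaptive MLM pre-training (assumptions noted above).
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

def continue_pretraining(base_model: str, corpus_file: str, output_dir: str):
    """Run one stage of masked-language-model pre-training on a raw text corpus."""
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForMaskedLM.from_pretrained(base_model)

    # Expects one raw-text document per line in corpus_file.
    dataset = load_dataset("text", data_files=corpus_file)["train"]
    dataset = dataset.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
        batched=True,
        remove_columns=["text"],
    )

    # Standard 15% random-token masking for the MLM objective.
    collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir=output_dir,
            num_train_epochs=1,
            per_device_train_batch_size=32,
        ),
        train_dataset=dataset,
        data_collator=collator,
    )
    trainer.train()
    trainer.save_model(output_dir)
    tokenizer.save_pretrained(output_dir)  # so the next stage can reload from disk

# Stage 1 ("TeleBERT"): adapt a general BERT checkpoint (assumed here) to
# the large telecommunication corpus.
continue_pretraining("bert-base-chinese", "tele_corpus.txt", "telebert")
# Stage 2 ("KTeleBERT"): re-train on the causal and machine corpora,
# without the paper's knowledge-injection losses.
continue_pretraining("telebert", "causal_machine_corpus.txt", "ktelebert")
```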


