Data Mining in Clinical Trial Text: Transformers for Classification and Question Answering Tasks

01/30/2020
by Lena Schmidt, et al.

This research on data extraction methods applies recent advances in natural language processing to evidence synthesis based on medical texts. Texts of interest include abstracts of clinical trials in English and in multilingual contexts. The main focus is on information characterized via the Population, Intervention, Comparator, and Outcome (PICO) framework, but data extraction is not limited to these fields. Recent transformer-based neural network architectures show strong transfer-learning capacity and improved performance on downstream natural language processing tasks such as reading comprehension, enabled by their use of contextualized word embeddings and self-attention mechanisms. This paper contributes to resolving ambiguity in PICO sentence prediction tasks, and shows how annotations originally created for training named entity recognition systems can be reused to train a high-performing yet flexible architecture for question answering in systematic review automation. Additionally, it demonstrates how the problem of insufficient training annotations for PICO entity extraction is tackled by data augmentation. All models in this paper were created with the aim of supporting systematic review (semi)automation. They achieve high F1 scores and demonstrate the feasibility of applying transformer-based classification methods to support data mining in the biomedical literature.
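The self-attention mechanism mentioned above is the core operation that produces contextualized token representations in transformer architectures. As a minimal sketch (not the paper's implementation), scaled dot-product self-attention over a token sequence can be written in plain Python; the vector sizes and inputs here are illustrative assumptions:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(queries, keys, values):
    """Scaled dot-product self-attention.

    queries/keys/values: one float vector per token, all the same length.
    Returns one contextualized output vector per token: each output is a
    softmax-weighted mixture of all value vectors, so every token's
    representation depends on the whole sequence.
    """
    d = len(keys[0])  # key dimension, used for the 1/sqrt(d) scaling
    outputs = []
    for q in queries:
        # Similarity of this token's query to every token's key.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Weighted sum of value vectors -> contextualized representation.
        out = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
        outputs.append(out)
    return outputs
```

In a real transformer the queries, keys, and values are learned linear projections of the token embeddings, and many attention heads run in parallel; this sketch shows only the attention computation itself.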
