Fine-Tuning Language Models for Scientific Writing Support

by Justin Mücke et al.

We support scientific writers in determining whether a written sentence is scientific, to which section it belongs, and how it can be paraphrased to improve it. First, we propose a regression model, trained on a corpus of scientific sentences extracted from peer-reviewed scientific papers and non-scientific text, that assigns a score indicating the scientificness of a sentence. We investigate the effect of equations and citations on this score to test the model for potential biases. Second, we create a mapping of section titles to a standard paper layout in AI and machine learning and use it to classify a sentence to its most likely section. We study the impact of context, i.e., surrounding sentences, on section classification performance. Finally, we propose a paraphraser, which suggests an alternative for a given sentence using word substitutions, additions to the sentence, and structural changes to improve the writing style. We train various large language models on sentences extracted from arXiv papers that were peer-reviewed and published at A*-, A-, B-, and C-ranked conferences. On the scientificness task, all models achieve an MSE smaller than 2%. For section classification, BERT outperforms WideMLP and SciBERT in most cases. We demonstrate that using context enhances the classification of a sentence, achieving up to a 90% F1-score. Although the paraphrasing models make comparatively few alterations, they produce output sentences close to the gold standard. Large fine-tuned models such as T5 Large perform best in experiments considering various measures of difference between input sentence and gold standard. Code is provided under
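The section-classification setup described in the abstract rests on two preprocessing steps: normalizing raw section titles to a standard AI/ML paper layout, and attaching surrounding sentences as context. A minimal sketch of both steps is below; the canonical labels, title variants, and helper names are illustrative assumptions, not the authors' actual mapping or code.

```python
# Sketch of the section-classification preprocessing: the title variants
# and canonical labels below are assumed for illustration only.

CANONICAL_SECTIONS = {
    "introduction": "Introduction",
    "intro": "Introduction",
    "related work": "Related Work",
    "background": "Related Work",
    "method": "Methodology",
    "methods": "Methodology",
    "methodology": "Methodology",
    "approach": "Methodology",
    "experiments": "Experiments",
    "evaluation": "Experiments",
    "results": "Experiments",
    "discussion": "Discussion",
    "conclusion": "Conclusion",
    "conclusions": "Conclusion",
}

def normalize_section(title: str) -> str:
    """Map a raw section title to a canonical label ("Other" if unknown)."""
    return CANONICAL_SECTIONS.get(title.strip().lower(), "Other")

def with_context(sentences: list[str], i: int, window: int = 1) -> str:
    """Join sentence i with up to `window` sentences on each side, the
    kind of context the abstract reports improves classification."""
    lo = max(0, i - window)
    hi = min(len(sentences), i + window + 1)
    return " ".join(sentences[lo:hi])
```

For example, `normalize_section("METHODS")` yields `"Methodology"`, and `with_context(["A.", "B.", "C."], 1)` yields the three-sentence window `"A. B. C."`.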




Sparks: Inspiration for Science Writing using Language Models

Large-scale language models are rapidly improving, performing well on a ...

SciNLI: A Corpus for Natural Language Inference on Scientific Text

Existing Natural Language Inference (NLI) datasets, while being instrume...

arXivEdits: Understanding the Human Revision Process in Scientific Writing

Scientific publications are the primary means to communicate research di...

Extractive Summarizer for Scholarly Articles

We introduce an extractive method that will summarize long scientific pa...

From Zero to Hero: Convincing with Extremely Complicated Math

Becoming a (super) hero is almost every kid's dream. During their shelte...

Hierarchical Neural Networks for Sequential Sentence Classification in Medical Scientific Abstracts

Prevalent models based on artificial neural network (ANN) for sentence c...

Suggestion Mining from Online Reviews using ULMFiT

In this paper we present our approach and the system description for Sub...
