Contextual Temperature for Language Modeling

12/25/2020
by Pei-Hsin Wang, et al.

Temperature scaling has been widely used as an effective approach to control the smoothness of a distribution, which improves model performance on a variety of tasks. Current practice applies temperature scaling with either a fixed value or a manually crafted, dynamically changing schedule. However, our studies indicate that the optimal temperature trajectory for each class can change with the context. To this end, we propose contextual temperature, a generalized approach that learns an optimal temperature trajectory for each vocabulary token conditioned on the context. Experimental results confirm that the proposed method significantly improves state-of-the-art language models, achieving perplexities of 55.31 and 62.89 on the test sets of Penn Treebank and WikiText-2, respectively. In-depth analyses show that the behaviour of the learned temperature schedules varies dramatically across vocabulary tokens, and that the optimal schedules help control the uncertainty of predictions. This evidence further justifies the need for the proposed method and its advantages over fixed temperature schedules.
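As a rough sketch of the idea (not the authors' released implementation), contextual temperature can be realized by predicting one temperature per vocabulary entry from the current context vector and dividing the logits by those temperatures before the softmax. The PyTorch module below is a hypothetical illustration; the names ContextualTemperature and temp_proj, the sigmoid squashing into a (t_min, t_max) range, and all dimensions are assumptions, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextualTemperature(nn.Module):
    # Hypothetical sketch: one learned temperature per vocabulary token,
    # predicted from the context representation (e.g. an LSTM hidden state).
    def __init__(self, hidden_dim, vocab_size, t_min=0.5, t_max=2.0):
        super().__init__()
        self.temp_proj = nn.Linear(hidden_dim, vocab_size)
        self.t_min, self.t_max = t_min, t_max

    def forward(self, logits, context):
        # Squash each predicted temperature into (t_min, t_max) so the
        # rescaled logits stay well-behaved.
        temps = self.t_min + (self.t_max - self.t_min) * torch.sigmoid(self.temp_proj(context))
        # Per-token temperature scaling, then a normalized distribution.
        return F.log_softmax(logits / temps, dim=-1)

# Usage: rescale a language model's logits with context-dependent temperatures.
module = ContextualTemperature(hidden_dim=256, vocab_size=10000)
logits = torch.randn(4, 10000)   # raw LM logits for a batch of 4 contexts
context = torch.randn(4, 256)    # corresponding context vectors
log_probs = module(logits, context)  # shape: (4, 10000)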


Related research

Long Horizon Temperature Scaling (02/07/2023)
Temperature scaling is a popular technique for tuning the sharpness of a...

Layer-Stack Temperature Scaling (11/18/2022)
Recent works demonstrate that early layers in a neural network contain u...

Fine-tune your Classifier: Finding Correlations With Temperature (10/18/2022)
Temperature is a widely used hyperparameter in various tasks involving n...

Unigram-Normalized Perplexity as a Language Model Performance Measure with Different Vocabulary Sizes (11/26/2020)
Although Perplexity is a widely used performance metric for language mod...

Calibration with Bias-Corrected Temperature Scaling Improves Domain Adaptation Under Label Shift in Modern Neural Networks (01/21/2019)
Label shift refers to the phenomenon where the marginal probability p(y)...

LLM is Like a Box of Chocolates: the Non-determinism of ChatGPT in Code Generation (08/05/2023)
There has been a recent explosion of research on Large Language Models (...

Constrained Optimization with Dynamic Bound-scaling for Effective NLP Backdoor Defense (02/11/2022)
We develop a novel optimization method for NLP backdoor inversion. We lev...
