Reduction of Maximum Entropy Models to Hidden Markov Models

12/12/2012
by   Joshua Goodman, et al.

We show that maximum entropy (maxent) models can be represented by certain kinds of hidden Markov models (HMMs), allowing us to construct maxent models with hidden variables, hidden state sequences, or other characteristics. The models can be trained using the forward-backward algorithm. While the results are primarily of theoretical interest, unifying apparently unrelated concepts, we also give experimental results for a maxent model with a hidden variable on a word-disambiguation task; the model outperforms standard techniques.
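The abstract notes that the resulting models can be trained with the forward-backward algorithm. As an illustrative sketch only (not the paper's implementation), the following shows the standard forward-backward computation of posterior state probabilities for a discrete HMM; the matrices `pi`, `A`, and `B` are hypothetical parameters introduced for this example:

```python
import numpy as np

def forward_backward(pi, A, B, obs):
    """Posterior state probabilities for a discrete HMM.

    pi : (S,) initial state distribution
    A  : (S, S) transition matrix, A[i, j] = P(next state j | state i)
    B  : (S, O) emission matrix, B[i, o] = P(observation o | state i)
    obs: sequence of observation indices
    """
    T, S = len(obs), len(pi)
    alpha = np.zeros((T, S))  # forward probabilities
    beta = np.zeros((T, S))   # backward probabilities

    # Forward pass: alpha[t, s] = P(obs[:t+1], state_t = s)
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

    # Backward pass: beta[t, s] = P(obs[t+1:] | state_t = s)
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

    # Posterior P(state_t = s | obs), normalized per time step
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    return gamma
```

In an EM training loop, these posteriors would supply the expected counts used to re-estimate the model parameters at each iteration.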


Related research

05/10/2020  Application of the Hidden Markov Model for determining PQRST complexes in electrocardiograms
The application of the hidden Markov model with various parameters in th...

02/18/2023  Maximum Entropy Estimator for Hidden Markov Models: Reduction to Dimension 2
In the paper, we introduce the maximum entropy estimator based on 2-dime...

06/25/2018  Analyticity of Entropy Rates of Continuous-State Hidden Markov Models
The analyticity of the entropy and relative entropy rates of continuous-...

01/06/2019  Malware Detection Using Dynamic Birthmarks
In this paper, we explore the effectiveness of dynamic analysis techniqu...

05/21/2020  Hidden Markov Chains, Entropic Forward-Backward, and Part-Of-Speech Tagging
The ability to take into account the characteristics - also called featu...

07/21/2010  A generalized risk approach to path inference based on hidden Markov models
Motivated by the unceasing interest in hidden Markov models (HMMs), this...

03/23/2021  Towards interpretability of Mixtures of Hidden Markov Models
Mixtures of Hidden Markov Models (MHMMs) are frequently used for cluster...
