Revisiting Skip-Gram Negative Sampling Model with Regularization

04/01/2018
by   Cun Mu, et al.

We revisit skip-gram negative sampling (SGNS), a popular neural-network-based approach to learning distributed word representations. We first point out an ambiguity issue undermining the SGNS model: the word vectors can be entirely distorted without changing the objective value. To resolve this issue, we rectify the SGNS model with quadratic regularization, and present a theoretical justification that provides a novel insight into quadratic regularization. Preliminary experiments are also conducted on Google's analogical reasoning task to support the modified SGNS model.
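The ambiguity claim can be made concrete with the standard SGNS objective of Levy and Goldberg (2014); the sketch below follows that common formulation, and the symbols (W, C, S, sigma, k, lambda) are illustrative rather than quoted from this paper. SGNS maximizes

\ell(W, C) \;=\; \sum_{i,j} \Big[ \#(w_i, c_j)\,\log\sigma(w_i^\top c_j) \;+\; k\,\#(w_i)\,\frac{\#(c_j)}{|D|}\,\log\sigma(-w_i^\top c_j) \Big],

which depends on the word vectors w_i and context vectors c_j only through their inner products. Hence, for any invertible matrix S, replacing every w_i by S w_i and every c_j by S^{-\top} c_j leaves the objective unchanged,

(S w_i)^\top (S^{-\top} c_j) \;=\; w_i^\top S^\top S^{-\top} c_j \;=\; w_i^\top c_j,

even though S can distort the word vectors arbitrarily. A quadratic (Frobenius-norm) regularizer of the form

\ell_\lambda(W, C) \;=\; \ell(W, C) \;-\; \frac{\lambda}{2}\Big( \lVert W \rVert_F^2 + \lVert C \rVert_F^2 \Big), \qquad \lambda > 0,

is no longer invariant under such transformations, which is the kind of rectification the abstract describes; the paper's exact formulation may differ.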
