SBERT studies Meaning Representations: Decomposing Sentence Embeddings into Explainable AMR Meaning Features

06/14/2022
by Juri Opitz, et al.

Metrics for graph-based meaning representations (e.g., Abstract Meaning Representation, AMR) can help us uncover key semantic aspects in which two sentences are similar to each other. However, such metrics tend to be slow, rely on parsers, and do not reach state-of-the-art performance when rating sentence similarity. On the other hand, models based on large pretrained language models, such as S(entence)BERT, correlate highly with human similarity ratings, but lack interpretability. In this paper, we aim at the best of both worlds by creating similarity metrics that are highly effective while also providing an interpretable rationale for their ratings. Our approach works in two steps: First, we select AMR graph metrics that measure meaning similarity of sentences with respect to key semantic facets, such as semantic roles, negation, or quantification. Second, we employ these metrics to induce Semantically Structured Sentence BERT embeddings (S^3BERT), in which different meaning aspects are captured in different sub-spaces. In our experimental studies, we show that our approach offers a valuable balance between performance and interpretability.
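To make the sub-space idea concrete, below is a minimal Python sketch of how a sentence embedding could be partitioned into named facet sub-spaces, with one similarity score computed per facet. The facet names, slice boundaries, embedding dimension, and helper functions are illustrative assumptions for exposition, not the paper's actual implementation; in the paper, the sub-spaces are induced from the AMR metrics during training rather than fixed by hand.

import numpy as np

# Hypothetical facet layout: each meaning aspect owns a slice of the
# embedding. Names and dimensions are illustrative, not the paper's.
FACETS = {
    "semantic_roles": slice(0, 16),
    "negation":       slice(16, 32),
    "quantification": slice(32, 48),
    "residual":       slice(48, 384),  # remaining dims for global similarity
}

def cosine(u, v):
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def facet_similarities(e1, e2):
    # Score each meaning facet by comparing only its own sub-space.
    return {name: cosine(e1[sl], e2[sl]) for name, sl in FACETS.items()}

# Toy usage with random stand-ins for two sentence embeddings.
rng = np.random.default_rng(0)
e1, e2 = rng.normal(size=384), rng.normal(size=384)
for facet, score in facet_similarities(e1, e2).items():
    print(f"{facet:>15}: {score:+.3f}")
print(f"{'overall':>15}: {cosine(e1, e2):+.3f}")

Per-facet scores of this kind are what make the rating interpretable: a low overall similarity can be traced to, say, a disagreement in the negation sub-space. One plausible training recipe, consistent with the abstract, would push each facet sub-space to reproduce the corresponding AMR metric's score while a consistency objective preserves the original SBERT similarity.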
