Conference paper, 2019

Tweaks and Tricks for Word Embedding Disruptions

Abstract

Word embeddings are well established as highly effective models in many NLP applications. Although they differ in architecture and training process, they often exhibit similar properties and remain vector space models whose continuously-valued dimensions describe the observed data. The complexity lies in the strategies developed for learning the values along each dimension. In this paper, we introduce the concept of disruption, which we define as a side effect of the training process of embedding models. Disruptions are viewed as a set of embedding values that are more likely to be noise than effective descriptive features. We show that handling the disruption phenomenon greatly benefits bottom-up sentence embedding representations. By contrasting several in-domain and pre-trained embedding models, we propose two simple but very effective tweaking techniques that yield strong empirical improvements on the textual similarity task.
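The abstract does not spell out the two tweaking techniques, but the idea of disruptions as noise-like embedding values can be illustrated. Below is a minimal sketch, assuming disruptions manifest as extreme per-dimension values: it clips such outliers across the vocabulary before building a bottom-up (averaged) sentence embedding. The clipping heuristic and all names here are illustrative assumptions, not the authors' method.

```python
# Minimal sketch, NOT the authors' method: assumes disruptions are
# extreme per-dimension values and clips them before averaging word
# vectors into a bottom-up sentence embedding.
import numpy as np

def clip_disruptions(emb: np.ndarray, n_std: float = 3.0) -> np.ndarray:
    """Clamp values lying more than n_std standard deviations from the
    per-dimension mean, treating such outliers as likely noise."""
    mu, sigma = emb.mean(axis=0), emb.std(axis=0)
    return np.clip(emb, mu - n_std * sigma, mu + n_std * sigma)

def sentence_embedding(word_vecs: np.ndarray) -> np.ndarray:
    """Bottom-up sentence representation: mean of its word embeddings."""
    return word_vecs.mean(axis=0)

# Toy usage: a 1000-word vocabulary of 50-dim vectors with one spike.
rng = np.random.default_rng(0)
vocab = rng.normal(size=(1000, 50))
vocab[42, 7] = 30.0                            # a disruption-like extreme value
vocab = clip_disruptions(vocab)                # tweak applied vocabulary-wide
sent = sentence_embedding(vocab[[3, 42, 99]])  # average three word vectors
print(sent.shape)                              # (50,)
```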
Main file: RANLP054.pdf (102.53 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02391495, version 1 (14-11-2024)

Identifiers

Cite

Amir Hazem, Nicolas Hernandez. Tweaks and Tricks for Word Embedding Disruptions. Recent Advances In Natural Language Processing (RANLP), Sep 2019, Varna, Bulgaria. pp. 460-464, ⟨10.26615/978-954-452-056-4_054⟩. ⟨hal-02391495⟩