Injecting Wiktionary to improve token-level contextual representations using contrastive learning

Anna Mosolova, Marie Candito, Carlos Ramisch

Main: Multilingual Issues Oral Paper

Session 6: Multilingual Issues (Oral)
Conference Room: Marie Louise 1
Conference Time: March 19, 10:30-12:00 (CET) (Europe/Malta)
Abstract: While static word embeddings are blind to context, for lexical semantics tasks context is rather too present in contextual word embeddings, with vectors of same-meaning occurrences being too different (Ethayarajh, 2019). Fine-tuning pre-trained language models (PLMs) with contrastive learning has been proposed to address this, leveraging automatically self-augmented examples (Liu et al., 2021b). In this paper, we investigate how to inject a lexicon as an alternative source of supervision, using the English Wiktionary. We also test how dimensionality reduction impacts the resulting contextual word embeddings. We evaluate our approach on the Word-in-Context (WiC) task in the unsupervised setting (i.e., without using the training set) and achieve a new SoTA result on the original WiC test set. We also propose two new WiC test sets, on which our fine-tuning method achieves substantial improvements. We further observe modest improvements on the semantic frame induction task. Although we experimented on English to allow comparison with related work, our approach is adaptable to the many languages for which large Wiktionaries exist.
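
The sketch below illustrates the general idea of contrastive fine-tuning of token-level embeddings from same-sense occurrence pairs; it is not the authors' implementation. Assumptions not taken from the abstract: bert-base-uncased as the PLM, a toy list of sentence/character-span pairs standing in for Wiktionary example sentences grouped by sense, mean-pooling of the target word's subword vectors, and an in-batch NT-Xent loss.

# Minimal sketch: contrastive fine-tuning of token-level contextual embeddings.
# Positive pairs are two occurrences of a word assumed to share a sense
# (stand-ins for same-sense Wiktionary examples); other in-batch vectors
# serve as negatives. All names and hyperparameters here are illustrative.

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumption: any PLM could be used
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)


def target_embedding(sentence: str, start: int, end: int) -> torch.Tensor:
    """Mean-pool the subword vectors covering characters [start, end)."""
    enc = tokenizer(sentence, return_tensors="pt", return_offsets_mapping=True)
    offsets = enc.pop("offset_mapping")[0]              # (seq_len, 2)
    hidden = model(**enc).last_hidden_state[0]          # (seq_len, dim)
    mask = (offsets[:, 0] < end) & (offsets[:, 1] > start)
    return hidden[mask].mean(dim=0)                     # (dim,)


def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.07):
    """NT-Xent loss over positive pairs (z1[i], z2[i]), in-batch negatives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / temperature                    # (B, B)
    labels = torch.arange(z1.size(0))                   # diagonal = positives
    return F.cross_entropy(logits, labels)


# Toy same-sense pairs (sentence, char start, char end of the target word).
pairs = [
    (("She deposited the check at the bank.", 31, 35),
     ("The bank approved his loan request.", 4, 8)),
    (("They walked along the bank of the river.", 22, 26),
     ("Fishermen lined the river bank at dawn.", 26, 30)),
]

z1 = torch.stack([target_embedding(s, a, b) for (s, a, b), _ in pairs])
z2 = torch.stack([target_embedding(s, a, b) for _, (s, a, b) in pairs])
loss = nt_xent(z1, z2)
loss.backward()  # gradients flow back into the PLM for fine-tuning

At evaluation time on WiC, such fine-tuned token embeddings would typically be compared with cosine similarity and thresholded to decide whether two occurrences share a meaning; dimensionality reduction, as studied in the paper, would be applied to the embeddings before this comparison.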