WEC-Explainer: A Descriptive Framework
for Exploring Word Embedding Contextualization

We present a descriptive framework for designing applications that support word embedding explanation tasks. This framework connects the data, features, tasks, and users involved in the explanation process. Using the framework as theoretical groundwork, we implement a data processing pipeline and apply it to three different tasks related to word embedding contextualization. We show that divergent research questions can be analyzed by combining different data curation methods with a similar set of features.

Use Cases

Encoded Context Properties

This use case supports users in gaining an overview of the properties encoded in the embedding vectors of different model layers. This task is especially relevant for users with NLP expertise.
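As an illustration of the kind of data this use case builds on, the following sketch extracts per-layer contextual embeddings from a Hugging Face transformers model. The checkpoint and example sentence are illustrative assumptions, not the specific model or pipeline used in our implementation.

```python
# Minimal sketch: extract layer-wise embeddings for inspection of encoded
# context properties. Assumes the "bert-base-uncased" checkpoint as an example.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

sentence = "The bank raised its interest rates."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# hidden_states is a tuple of (num_layers + 1) tensors of shape
# [batch, tokens, dim]; index 0 is the input embedding layer, the rest are
# the encoder layers. Each layer can then be fed to a probing classifier.
for layer_idx, layer in enumerate(outputs.hidden_states):
    print(f"layer {layer_idx}: shape {tuple(layer.shape)}")
```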

Semantic Concept Similarity

This use case supports users in understanding how well semantic concepts are separated in the high-dimensional (HD) or projected 2D embedding space. This task is relevant for users with general language knowledge.
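To make the notion of concept separation concrete, the sketch below scores how well concept labels cluster in the HD space and in a 2D projection. The placeholder data, the PCA projection, and the silhouette coefficient as a separation measure are illustrative assumptions, not the features used in our pipeline.

```python
# Minimal sketch: compare concept separation in HD and projected 2D space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# Placeholder contextual embeddings (e.g., 768-D) with one concept label each.
embeddings = rng.normal(size=(200, 768))
concept_labels = rng.integers(0, 4, size=200)

# Separation in the high-dimensional embedding space.
hd_score = silhouette_score(embeddings, concept_labels)

# Separation after projecting to 2D for visual inspection.
points_2d = PCA(n_components=2).fit_transform(embeddings)
proj_score = silhouette_score(points_2d, concept_labels)

print(f"HD silhouette: {hd_score:.3f}, 2D silhouette: {proj_score:.3f}")
```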

Masked Prediction Quality

This use case supports users in gaining insight into how meaningful masked predictions are for contexts involving different function words. This task is relevant for users with linguistic expertise.
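The sketch below shows one way to inspect masked predictions when a function word is masked, using the transformers fill-mask pipeline. The checkpoint and example sentence are illustrative assumptions.

```python
# Minimal sketch: inspect masked predictions for a function word slot.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Mask a function word (here the preposition "on") and inspect the top
# predictions to judge how meaningful they are in this context.
sentence = f"The book is lying {fill_mask.tokenizer.mask_token} the table."
for prediction in fill_mask(sentence, top_k=5):
    print(f"{prediction['token_str']:>10}  score={prediction['score']:.3f}")
```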

BibTeX Entry