Analysis and visualisation of linguistic structures in neural language models: neural representations of verb-particle constructions in BERT

Bibliographic Details
Main Authors: Kissane, Hassane (Author), Schilling, Achim (Author), Krauss, Patrick (Author)
Format: Article (Journal)
Language: English
Published: 09 Apr 2026
In: Language, Cognition and Neuroscience
Year: 2026, Pages: 1-21
ISSN: 2327-3801
DOI: 10.1080/23273798.2026.2648588
Online Access: Publisher, free of charge, full text: https://doi.org/10.1080/23273798.2026.2648588
Author Notes: Hassane Kissane, Achim Schilling and Patrick Krauss
Description
Summary:This study examines how transformer-based neural language models, particularly BERT and a construction-aware variant (CxG-BERT), represent verb-particle constructions (e.g., “agree on,” “come back,” “give up”) across layers. Using data from the British National Corpus, we extract model activations and analyse them via multidimensional scaling (MDS), generalized discrimination value (GDV), and permutation tests. Results show that BERT's middle layers best capture syntactic structure, with notable variation across verb types. These findings challenge assumptions of uniform linguistic processing in neural networks, revealing a more complex interaction between architecture and linguistic representation. Overall, the study advances understanding of how neural models encode language and highlights both their strengths and limitations for linguistic analysis, while suggesting directions for optimizing architectures in theory-driven research.
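The summary describes the analysis pipeline only at a high level. As an illustration, the following is a minimal, hypothetical Python sketch of one way such a pipeline could look using Hugging Face transformers: extract hidden states from every BERT layer and score how well the activations of different verb-particle constructions separate, using the generalized discrimination value (GDV) as defined in earlier work by Krauss and colleagues (z-score each dimension, scale by 0.5, then compare mean intra-class to mean inter-class Euclidean distances). The model name, the toy sentences, and mean-pooling over all tokens are assumptions for illustration, not the authors' actual setup; the study itself draws its sentences from the British National Corpus.

```python
# Hypothetical sketch (not the authors' code): extract layer-wise BERT
# activations for verb-particle constructions and score per-layer class
# separability with the GDV. Assumes bert-base-uncased and toy sentences.
import numpy as np
import torch
from itertools import combinations
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

# Toy examples; the study uses sentences from the British National Corpus.
sentences = {
    "give up": ["She refused to give up.", "They gave up the search."],
    "come back": ["He will come back tomorrow.", "She came back late."],
}

def sentence_vector(sent, layer):
    """Mean-pool all token hidden states at one layer (a pooling assumption)."""
    enc = tok(sent, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    return out.hidden_states[layer][0].mean(dim=0).numpy()

def gdv(points, labels):
    """Generalized discrimination value, following Krauss et al.'s definition:
    z-score each dimension, scale by 0.5, then (mean intra-class distance -
    mean inter-class distance) / sqrt(D). More negative = better separated."""
    X = np.asarray(points, dtype=float)
    X = 0.5 * (X - X.mean(0)) / (X.std(0) + 1e-12)
    classes = sorted(set(labels))
    def mean_dist(A, B=None):
        if B is None:  # mean pairwise distance within one class
            return np.mean([np.linalg.norm(a - b) for a, b in combinations(A, 2)])
        return np.mean([np.linalg.norm(a - b) for a in A for b in B])
    idx = {c: [i for i, l in enumerate(labels) if l == c] for c in classes}
    intra = np.mean([mean_dist(X[idx[c]]) for c in classes])
    inter = np.mean([mean_dist(X[idx[c1]], X[idx[c2]])
                     for c1, c2 in combinations(classes, 2)])
    return (intra - inter) / np.sqrt(X.shape[1])

for layer in range(1, model.config.num_hidden_layers + 1):
    vecs, labs = [], []
    for construction, sents in sentences.items():
        for s in sents:
            vecs.append(sentence_vector(s, layer))
            labs.append(construction)
    print(f"layer {layer:2d}  GDV = {gdv(vecs, labs):+.3f}")
```

The MDS projections mentioned in the summary could be obtained from the same per-layer vectors with sklearn.manifold.MDS, and a permutation test could compare each observed GDV against GDVs recomputed after shuffling the construction labels.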
Item Description: Viewed on 27 Apr 2026
Physical Description: Online Resource