Prof. Vito Pirrelli

Short bio

Prof. Vito Pirrelli has been a Research Manager at the National Research Council's Institute for Computational Linguistics "Antonio Zampolli" since 2003. He is head of the Laboratory for Communication Physiology and co-editor-in-chief of The Mental Lexicon and Lingue e Linguaggio. His main research interests focus on fundamental issues of language architecture and physiology, at the interdisciplinary crossroads of cognitive linguistics, psycholinguistics, neuroscience and information science.

Over the last 20 years, he has led a data-driven research programme that uses artificial neural networks, language models, and information and communication technologies to investigate language as a holistic dynamic system, emerging from interrelated patterns of sensory experience, communicative and social interaction, and psychological and neurobiological mechanisms. This programme moved beyond the fragmentation of mainstream NLP technologies of the early 21st century, allowing innovation to leave the research lab and address societal needs. Using portable devices and cloud computing to collect ecological multimodal language data, the Comphys Lab currently offers a battery of tools, resources and protocols supporting language teaching and educational assessment, cultural integration, and the early diagnosis and treatment of language and cognitive disorders.

In 2021, following a peer review by the relevant Class Committee, he was elected member of the Academia Europaea.

Talk abstract

Written Text Processing and the Adaptive Reading Hypothesis

Oral reading requires the fine coordination of eye movements and articulatory movements. The eye provides access to the input stimuli needed for voice articulation to unfold at a relatively constant rate, while control of articulation provides internal feedback to oculomotor control, so that eye movements can be directed to when and where a decoding problem arises.

A factor that makes the coordination of eye and voice particularly hard to manage is their asynchrony. Eye movements are faster than voice articulation and are much freer to scan a written text forwards and backwards. As a result, within a given time window, the eye can typically fixate more words than the voice can articulate.

According to most scholars, readers compensate for this functional asynchrony by using their phonological buffer, a working memory stack of limited temporal capacity where fixated words can be maintained temporarily, until they are read out loud. The capacity of the phonological buffer thus puts an upper limit on the distance between the position of the voice and the position of the eye during oral text reading, known as the eye-voice span.

In my talk, I will discuss recent reading evidence showing that the eye-voice span is the "elastic" outcome of an optimally adaptive viewing strategy, interactively modulated by individual reading skills and the lexical and structural features of a text. The eye-voice span not only varies across readers depending on their rate of articulation, but it also varies within each reader, getting larger when a larger structural unit is processed. This suggests that skilled readers can optimally coordinate articulation and fixation times for text processing, adaptively using their phonological memory buffer to process linguistic structures of different size and complexity.