Tehran Institute for Advanced Studies (TEIAS)


Talk

Quantifying Attention Flow in Transformers

May 17, 2021
(27 Ordibehesht, 1400)

16:00

Venue

This talk is online.

Registration Deadline

May 16, 2021

You may need a VPN to join the talk.


Samira Abnar

Ph.D. candidate at the University of Amsterdam

Overview

In the Transformer model, "self-attention" combines information from attended embeddings into the representation of the focal embedding in the next layer. Thus, across layers of the Transformer, information originating from different tokens gets increasingly mixed. This makes attention weights unreliable as explanation probes. Here, we consider the problem of quantifying this flow of information through self-attention. We propose two methods for approximating the attention to input tokens given attention weights, attention rollout and attention flow, as post hoc methods when we use attention weights as the relative relevance of the input tokens. We show that these methods give complementary views on the flow of information, and compared to raw attention, both yield higher correlations with importance scores of input tokens obtained using other input attribution methods.
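To give a concrete sense of the first method, attention rollout can be sketched in a few lines: each layer's attention matrix is augmented with the identity to account for residual connections, re-normalized, and then composed with the rollout of the earlier layers by matrix multiplication. The snippet below is a minimal NumPy sketch of that idea, assuming head-averaged attention matrices; the function and variable names are illustrative, not taken from a released implementation.

import numpy as np

def attention_rollout(attentions):
    # attentions: list of (num_tokens, num_tokens) attention matrices,
    # one per layer (e.g. averaged over heads), ordered bottom to top.
    num_tokens = attentions[0].shape[0]
    identity = np.eye(num_tokens)
    rollout = identity
    for layer_attention in attentions:
        # Mix in the identity to model the residual connection,
        # then re-normalize rows so each remains a distribution.
        augmented = 0.5 * layer_attention + 0.5 * identity
        augmented = augmented / augmented.sum(axis=-1, keepdims=True)
        # Compose this layer's mixing with the rollout of all earlier layers.
        rollout = augmented @ rollout
    return rollout

# Toy usage: rollout for a 4-layer model over 6 tokens with random attention.
layers = [np.random.dirichlet(np.ones(6), size=6) for _ in range(4)]
print(attention_rollout(layers))

Row i of the returned matrix approximates how much each input token contributes to token i's representation at the top layer. Attention flow instead treats the same layered attention graph as a flow network and computes maximum flow from each top-layer position to the input tokens, which is why the two methods give complementary views.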

Biography


I am a PhD candidate at the University of Amsterdam. I work with Jelle Zuidema and Wilker Aziz and a great team at the Cognition, Language and Computation (CLC) Lab of the Institute for Logic, Language and Computation. My PhD research is at the intersection of Machine Learning and Natural Language Processing, and its main theme is studying the inductive biases that deep neural networks need in order to learn natural language.

Video