Tehran Institute for Advanced Studies (TEIAS)



Visually Grounded Reasoning across Languages and Cultures

Fangyu Liu

December 8, 2021
(17 Azar, 1400)



This talk will be held online.

Registration Deadline

December 7, 2021

You may need a VPN to join the talk.


Fangyu Liu

PhD Student, University of Cambridge
Winner of the Best Long Paper Award at EMNLP 2021


The design of widespread vision-and-language datasets and pre-trained encoders directly adopts, or draws inspiration from, the concepts and images of ImageNet. While one can hardly overestimate how much this benchmark contributed to progress in computer vision, it is mostly derived from lexical databases and image queries in English, resulting in source material with a North American or Western European bias. Therefore, we devise a new protocol to construct an ImageNet-style hierarchy representative of more languages and cultures. In particular, we let the selection of both concepts and images be entirely driven by native speakers, rather than scraping them automatically. Specifically, we focus on a typologically diverse set of languages, namely, Indonesian, Mandarin Chinese, Swahili, Tamil, and Turkish. On top of the concepts and images obtained through this new protocol, we create a multilingual dataset for Multicultural Reasoning over Vision and Language (MaRVL) by eliciting statements from native speaker annotators about pairs of images. The task consists of discriminating whether each grounded statement is true or false. We establish a series of baselines using state-of-the-art models and find that their cross-lingual transfer performance lags dramatically behind supervised performance in English. These results invite us to reassess the robustness and accuracy of current state-of-the-art models beyond a narrow domain, but also open up new exciting challenges for the development of truly multilingual and multicultural systems.



Fangyu Liu is a second-year PhD student in NLP at the Language Technology Lab, University of Cambridge, supervised by Professor Nigel Collier. His research centres around multi-modal NLP, self-supervised representation learning, and model interpretability. He is a Trust Scholar funded by the Grace & Thomas C.H. Chan Cambridge Scholarship. Besides Cambridge, he has also spent time doing research at Amazon, EPFL, and the University of Waterloo.