Tehran Institute for Advanced Studies (TEIAS)

Natural Language Processing (graduate) Reading Group

Wednesdays, 11am-12pm (online on Teams)

Venue

Meeting room, 6th floor, Khatam University (2nd Building)

+982189174612

Organizers


Contact the moderator if you are interested in joining the group.

Winter 2020/2021
Date | Presenters | Topic / Paper

13 January

Houman Mehrafarin, Maryam Sadat Hashemi

-

6 January (2021)

Kiamehr Rezaee, Mohammad Ali Modaresi

  • Attention is Not Only Weight: Analyzing Transformers with Vector Norms. EMNLP 2020.
  • How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models.

30 December

Mohsen Tabasy, Hossein Mohebbi

  • Experience Grounds Language. EMNLP 2020.
  • Intrinsic Probing through Dimension Selection. EMNLP 2020. [slides]

23 December

Student project proposals

Fall 2020
Date | Presenters | Topic / Paper

16 December

Sara Rajaee, Mahsa Razavi

  • Topology of Word Embeddings: Singularities Reflect Polysemy. *SEM 2020. [slides]
  • Diversifying Dialogue Generation with Non-Conversational Text. ACL 2020. [slides]

9 December

Houman Mehrafarin, Amin Pourdabiri

  • Assessing BERT's Syntactic Abilities. 2019.
  • Personal Information Leakage Detection in Conversations. EMNLP 2020. [slides]

2 December

Maryam Sadat Hashemi, Kiamehr Rezaee

  • Vokenization: Improving Language Understanding with Contextualized, Visual-Grounded Supervision. EMNLP 2020. [slides]
  • Linguistic Profiling of a Neural Language Model. COLING 2020.

25 November

Hossein Mohebbi, Mohsen Tabasy, Sara Rajaee

  • The elephant in the interpretability room: Why use attention as explanation when we have saliency methods? BlackboxNLP 2020. [slides]
  • How do Decisions Emerge across Layers in Neural Models? Interpretation with Differentiable Masking. EMNLP 2020.
  • Asking without Telling: Exploring Latent Ontologies in Contextual Representations. EMNLP 2020. [slides]

18 November

Mohammad Ali Modaresi, Samin Fatehi

  • Pretrained Language Model Embryology: The Birth of ALBERT. EMNLP 2020.
  • ETC: Encoding Long and Structured Inputs in Transformers. EMNLP 2020. [slides]

11 November

Amin Pourdabiri, Mahsa Razavi

  • Hierarchical Reinforcement Learning for Open-Domain Dialog. AAAI 2020. [slides]
  • Filtering Noisy Dialogue Corpora by Connectivity and Content Relatedness. EMNLP 2020. [slides]

4 November

Houman Mehrafarin, Kiamehr Rezaee, Maryam Sadat Hashemi

  • Are Sixteen Heads Really Better Than One? NeurIPS 2019. [slides]
  • Do Explicit Alignments Robustly Improve Multilingual Encoders? EMNLP 2020.
  • Rethinking Attention with Performers. [slides]

28 October

Hossein Mohebbi, Mohammad Ali Modaresi

  • A Tale of a Probe and a Parser. ACL 2020.
  • What Do Position Embeddings Learn? An Empirical Study of Pre-Trained Language Model Positional Encoding. EMNLP 2020.

21 October

Sara Rajaee, Samin Fatehi, Mahsa Razavi

  • BERT-EMD: Many-to-Many Layer Mapping for BERT Compression with Earth Mover's Distance. EMNLP 2020. [slides]
  • Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection. ACL 2020.
  • Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network. ACL 2018.

14 October

Sara Rajaee, Maryam Sadat Hashemi

  • Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them. NAACL 2019. [slides]
  • Cross-Modality Relevance for Reasoning on Language and Vision. ACL 2020. [slides]

30 September

Amin Pourdabiri, Maryam Sadat Hashemi, Samin Fatehi

  • Big Bird: Transformers for Longer Sequences. NeurIPS 2020. [slides]
  • Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks. ECCV 2020. [slides]
  • Attention over Parameters for Dialogue Systems. NeurIPS 2020 ConvAI workshop. [slides]

23 September

Mohammad Ali Modaresi, Hossein Mohebbi

  • Quantifying Attention Flow in Transformers. ACL 2020. [slides]
  • DeFormer: Decomposing Pre-trained Transformers for Faster Question Answering. ACL 2020. [slides]
Spring 2020
Date | Presenters | Topic / Paper

20 April

Mohsen Tabasy

  • BERT Rediscovers the Classical NLP Pipeline. [slides]
  • Universal Adversarial Triggers for Attacking and Analyzing NLP. [slides]

27 April

Hossein Mohebbi

  • oLMpics: On what Language Model Pre-training Captures.
  • Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping. [slides]

4 May

Houman Mehrafarin

  • What Does BERT Look At? An Analysis of BERT's Attention. [slides]

11 May

Amin Pourdabiri

  • Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words.

18 May

Kiamehr Rezaee

  • How Multilingual is Multilingual BERT?

25 May

-

-

1 June

Mohammad Ali Modaresi

  • BERT-based Lexical Substitution. [slides]
  • Contextual Embeddings: When Are They Worth It? [slides]