Sara Rajaee, Mahsa Razavi
- Topology of Word Embeddings: Singularities Reflect Polysemy. *SEM 2020. [slides]
- Diversifying Dialogue Generation with Non-Conversational Text. ACL 2020. [slides]
Houman Mehrafarin, Amin Pourdabiri
- Assessing BERT's Syntactic Abilities. arXiv 2019.
- Personal Information Leakage Detection in Conversations. EMNLP 2020. [slides]
Maryam Sadat Hashemi, Kiamehr Rezaee
- Vokenization: Improving Language Understanding with Contextualized, Visual-Grounded Supervision. EMNLP 2020. [slides]
- Linguistic Profiling of a Neural Language Model. COLING 2020.
Hossein Mohebbi, Mohsen Tabasy, Sara Rajaee
- The elephant in the interpretability room: Why use attention as explanation when we have saliency methods? BlackboxNLP 2020. [slides]
- How do Decisions Emerge across Layers in Neural Models? Interpretation with Differentiable Masking. EMNLP 2020.
- Asking without Telling: Exploring Latent Ontologies in Contextual Representations. EMNLP 2020. [slides]
Mohammad Ali Modaresi, Samin Fatehi
- Pretrained Language Model Embryology: The Birth of ALBERT. EMNLP 2020.
- ETC: Encoding Long and Structured Inputs in Transformers. EMNLP 2020. [slides]
Amin Pourdabiri, Mahsa Razavi
- Hierarchical Reinforcement Learning for Open-Domain Dialog. AAAI 2020. [slides]
- Filtering Noisy Dialogue Corpora by Connectivity and Content Relatedness. EMNLP 2020. [slides]
Houman Mehrafarin, Kiamehr Rezaee, Maryam Sadat Hashemi
- Are Sixteen Heads Really Better Than One? NeurIPS 2019. [slides]
- Do Explicit Alignments Robustly Improve Multilingual Encoders? EMNLP 2020.
- Rethinking Attention with Performers. ICLR 2021. [slides]
Hossein Mohebbi, Mohammad Ali Modaresi
- A Tale of a Probe and a Parser. ACL 2020.
- What Do Position Embeddings Learn? An Empirical Study of Pre-Trained Language Model Positional Encoding. EMNLP 2020.
Sara Rajaee, Samin Fatehi, Mahsa Razavi
- BERT-EMD: Many-to-Many Layer Mapping for BERT Compression with Earth Mover's Distance. EMNLP 2020. [slides]
- Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection. ACL 2020.
- Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network. ACL 2018.
Sara Rajaee, Maryam Sadat Hashemi
- Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them. NAACL 2019. [slides]
- Cross-Modality Relevance for Reasoning on Language and Vision. ACL 2020. [slides]
Amin Pourdabiri, Maryam Sadat Hashemi, Samin Fatehi
- Big Bird: Transformers for Longer Sequences. NeurIPS 2020. [slides]
- Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks. ECCV 2020. [slides]
- Attention over Parameters for Dialogue Systems. NeurIPS 2019 ConvAI workshop. [slides]
Mohammad Ali Modaresi, Hossein Mohebbi
- Quantifying Attention Flow in Transformers. ACL 2020. [slides]
- DeFormer: Decomposing Pre-trained Transformers for Faster Question Answering. ACL 2020. [slides]