Tehran Institute for Advanced Studies (TEIAS)

CS Research Areas

Artificial Intelligence and Deep Learning

  • Natural Language Processing (NLP) sits at the intersection of Computer Science and Linguistics. The goal is to give machines the ability to understand human language: for instance, to read a text and answer specific questions about it, to translate from one language to another, or to summarize a long document into a short paragraph.
       ■ Lexical Semantics: A core problem in NLP is to devise a mathematical model that makes words “meaningful” to computers. The model has to map words into a semantic space in which semantically similar words lie close to each other. For instance, it should be easy to compute in this space that the words mouse and keyboard are related to each other, whereas keyboard and mask are not (a toy sketch of such a similarity computation follows this list).
       ■ Lexical ambiguity: Most frequently used words in any language are polysemous, i.e., they can denote more than one meaning depending on the context in which they appear. For instance, mouse can refer to the rodent or to the computer device. One of the long-standing research problems in NLP, known as Word Sense Disambiguation, is to automatically determine which meaning of a word is intended in a given context (a simplified sketch of the idea appears after this list).
       ■ Context-sensitive representation: One recent research trend in NLP is contextualised word embeddings. The goal here is to have models that compute dynamic representations for words, dynamic in the sense that they adapt to the surrounding context (hence, the word mouse receives different representations depending on its intended meaning; one of the sketches after this list probes exactly this behaviour).
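As a toy illustration of the lexical-semantics idea above, the sketch below places a few hand-made three-dimensional word vectors into a common space and measures their proximity with cosine similarity. The numbers are invented purely for illustration; real embeddings are learned from large corpora and have hundreds of dimensions.

    # Toy word vectors in a shared semantic space; the numbers are invented
    # for illustration, whereas real embeddings are learned from corpora.
    import numpy as np

    embeddings = {
        "mouse":    np.array([0.9, 0.8, 0.1]),
        "keyboard": np.array([0.8, 0.9, 0.2]),
        "mask":     np.array([0.1, 0.2, 0.9]),
    }

    def cosine_similarity(u, v):
        """Cosine of the angle between two vectors: close to 1 for related words."""
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    print(cosine_similarity(embeddings["mouse"], embeddings["keyboard"]))  # high (~0.99)
    print(cosine_similarity(embeddings["keyboard"], embeddings["mask"]))   # low  (~0.39)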
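Word sense disambiguation can be conveyed with a deliberately simplified, Lesk-style heuristic: each candidate sense of mouse gets a short gloss (both glosses below are invented), and the sense whose gloss overlaps most with the words around the target wins. Real disambiguation systems are far more sophisticated; this sketch only shows how context drives the choice.

    # Simplified Lesk-style disambiguation: pick the sense whose gloss shares
    # the most words with the context (the glosses are invented for this toy).
    SENSES = {
        "mouse (rodent)": "small rodent animal with a long tail that eats cheese",
        "mouse (device)": "computer pointing device used with a keyboard and screen",
    }

    def disambiguate(context: str) -> str:
        context_words = set(context.lower().split())
        return max(SENSES, key=lambda s: len(context_words & set(SENSES[s].split())))

    print(disambiguate("I plugged the mouse into my computer next to the keyboard"))
    print(disambiguate("a mouse ran across the kitchen floor looking for cheese"))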
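Contextualised representations can also be probed directly with a pretrained language model. The sketch below assumes the Hugging Face transformers library and the public bert-base-uncased checkpoint (neither is mentioned in the text above, and the checkpoint is downloaded on first use); it extracts the vector of mouse from two sentences and shows that, unlike a static embedding, the vectors differ with the context.

    # Contextualised embeddings: the vector for "mouse" depends on its sentence.
    # Assumes the `transformers` library and the `bert-base-uncased` checkpoint.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    def mouse_vector(sentence: str) -> torch.Tensor:
        inputs = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state[0]        # (tokens, 768)
        position = inputs["input_ids"][0].tolist().index(
            tokenizer.convert_tokens_to_ids("mouse"))
        return hidden[position]

    v_device = mouse_vector("The mouse sits next to the keyboard on my desk.")
    v_rodent = mouse_vector("The cat chased the mouse into a hole in the wall.")
    print(torch.cosine_similarity(v_device, v_rodent, dim=0))     # noticeably below 1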

Applied Mathematics and Statistics

  • Statistics: Technological advances have enabled the collection and processing of mountainous troves of data. A great amount of effort, creativity and experimentation has gone into developing algorithms that extract knowledge from large data sets. Some of these algorithms, such as deep learning, yield impressive results in practice. One of the main goals of statistics is to understand the performance limits of these algorithms and to explain why they work so well in practice. The statistics of large data sets (also known as high-dimensional statistics) draws on deep mathematical ideas such as the concentration of measure (a small numerical illustration of concentration follows this list).
  • Information Theory: Broadly speaking, information theory is the science of the transmission, processing, extraction, and utilization of information. There are deep connections between information theory and statistics, and progress in one area influences progress in the other (one of the sketches after this list computes two basic information-theoretic quantities, entropy and mutual information).
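The concentration-of-measure phenomenon mentioned under Statistics can be seen numerically: the Euclidean norm of a standard Gaussian vector in d dimensions clusters tightly around the square root of d, and the relative spread shrinks as the dimension grows. The sketch below (sample sizes and dimensions chosen arbitrarily) estimates this.

    # Concentration of measure, empirically: norms of d-dimensional standard
    # Gaussian vectors cluster around sqrt(d), more tightly as d grows.
    import numpy as np

    rng = np.random.default_rng(0)
    for d in (10, 100, 1_000, 10_000):
        norms = np.linalg.norm(rng.standard_normal((5_000, d)), axis=1)
        print(f"d={d:>6}  mean norm={norms.mean():9.2f}  sqrt(d)={np.sqrt(d):9.2f}  "
              f"relative spread={norms.std() / norms.mean():.4f}")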
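To make the information-theoretic vocabulary concrete, the sketch below computes two basic quantities for small discrete distributions: the Shannon entropy of a biased coin and the mutual information between two variables, using a joint distribution whose numbers are invented for illustration.

    # Shannon entropy and mutual information for small discrete distributions.
    import numpy as np

    def entropy(p):
        p = np.asarray(p, dtype=float)
        p = p[p > 0]                              # treat 0 * log 0 as 0
        return float(-(p * np.log2(p)).sum())

    print(entropy([0.5, 0.5]))                    # fair coin: 1 bit
    print(entropy([0.9, 0.1]))                    # biased coin: ~0.47 bits

    # Mutual information I(X;Y) = H(X) + H(Y) - H(X,Y); the joint distribution
    # below is invented for illustration.
    joint = np.array([[0.4, 0.1],
                      [0.1, 0.4]])
    mi = entropy(joint.sum(axis=1)) + entropy(joint.sum(axis=0)) - entropy(joint.ravel())
    print(mi)                                     # ~0.28 bits: X and Y are dependent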

Trustworthy Computer Systems

  • Software Verification
    The goal of software verification is to mathematically guarantee that a given program satisfies all of its expected requirements. To improve the quality of their final products, many major companies include software verification alongside testing in their software production process (a minimal sketch of the idea, using an off-the-shelf solver, follows this list).
  • Program Synthesis
    Program synthesis is the task of constructing a program from user intent expressed as a high-level description. The intent can take various forms, e.g., examples, formal requirements, resource constraints, or a combination of these. Since ever more people use computers on a daily basis and many of them are not programmers, the goal of synthesis is to generate programs from such simpler forms of user input (a miniature programming-by-example synthesizer is sketched after this list).
  • Software-Defined Networking
    Software-Defined Networking (SDN) is an emerging network architecture that separates the control logic of the network (the control plane) from the underlying switches (the data plane). SDN promotes logical centralization of network control and introduces network programmability. Research in this area aims to design high-level programming languages for networks and to introduce new approaches for synthesizing and verifying network programs (a toy illustration of the control-plane/data-plane split follows this list).
  • Model-Based Testing
    Testing and debugging are major parts of software development and together account for more than half of the total development cost. Model-based testing is a structured method that brings rigour to testing by using models that steer the test-case generation and execution process. We research model-based testing techniques from both theoretical and empirical perspectives (a toy model-driven test loop is sketched after this list).
  • Software Product Lines
    Variability is an inherent part of many software systems, and software product lines provide a neat paradigm for treating variability as a first-class citizen in the development process. Our research in software product lines addresses variability-aware models, model learning techniques, and testing techniques (a toy feature model is sketched after this list).
  • Cyber-Physical Systems
    Cyber-physical systems result from the tight integration of computer systems with their physical environment and communication networks. This leads to a very rich domain that combines discrete algorithms with continuous dynamics and the stochastic behaviour of networks. We research the logical foundations of cyber-physical systems as well as rigorous model-based testing techniques for them (a sketch after this list simulates a minimal example of such a discrete/continuous mix).
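To give a minimal flavour of software verification, the sketch below uses the Z3 SMT solver's Python bindings (the z3-solver package, an assumption not made in the text): a two-line "program" computing the maximum of two integers is checked against its specification by asking the solver for a violating input, and an unsat answer means the property holds for every input.

    # Deductive verification in miniature, assuming the `z3-solver` package:
    # check that max-by-if-then-else always satisfies its specification.
    from z3 import And, If, Ints, Not, Or, Solver, unsat

    x, y = Ints("x y")
    result = If(x >= y, x, y)                      # the "program" under verification
    spec = And(result >= x, result >= y,           # the specification it must meet
               Or(result == x, result == y))

    solver = Solver()
    solver.add(Not(spec))                          # search for a violating input
    if solver.check() == unsat:
        print("verified: the specification holds for all inputs")
    else:
        print("counterexample:", solver.model())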
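Programming by example, one common form of program synthesis, can be sketched in a few lines: the snippet below enumerates expressions over a tiny arithmetic grammar (the grammar and the examples are invented for this illustration) and returns the first expression consistent with every input/output pair supplied by the user.

    # A miniature programming-by-example synthesizer over a toy grammar.
    import itertools

    def synthesize(examples, rounds=2):
        """examples: list of (input, output) integer pairs."""
        programs = [("x", lambda x: x)] + [(str(c), lambda x, c=c: c) for c in range(4)]
        for _ in range(rounds):
            new = []
            for (sa, fa), (sb, fb) in itertools.product(programs, repeat=2):
                new.append((f"({sa} + {sb})", lambda x, fa=fa, fb=fb: fa(x) + fb(x)))
                new.append((f"({sa} * {sb})", lambda x, fa=fa, fb=fb: fa(x) * fb(x)))
            programs += new
            for source, func in programs:
                if all(func(i) == o for i, o in examples):
                    return source
        return None

    # The user only supplies examples; the synthesizer finds a matching program.
    print(synthesize([(1, 3), (2, 5), (10, 21)]))   # some expression equal to 2*x + 1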
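The control-plane/data-plane split behind SDN can be mimicked in a few lines of plain Python (all names and addresses below are invented): a logically central controller holds the routing policy and installs match-action rules into otherwise dumb switches, which then only perform table lookups.

    # A toy of the SDN split: the controller programs the switches' tables.
    class Switch:
        """Data plane: forwards packets purely by match-action table lookup."""
        def __init__(self, name):
            self.name, self.table = name, {}          # destination -> output port
        def install_rule(self, dst, out_port):        # called by the controller
            self.table[dst] = out_port
        def forward(self, dst):
            return self.table.get(dst, "drop")

    class Controller:
        """Control plane: global view of the policy, programs every switch."""
        def __init__(self, switches, routes):
            self.switches, self.routes = switches, routes
        def push_policy(self):
            for switch in self.switches:
                for dst, port in self.routes[switch.name].items():
                    switch.install_rule(dst, port)

    s1, s2 = Switch("s1"), Switch("s2")
    Controller([s1, s2], {"s1": {"10.0.0.2": 2}, "s2": {"10.0.0.2": 1}}).push_policy()
    print(s1.forward("10.0.0.2"), s2.forward("10.0.0.9"))   # 2 drop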
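Model-based testing can be illustrated with a toy turnstile (the model and the implementation below are invented): a small finite-state model both drives randomly generated test sequences and acts as the oracle against which each output of the implementation under test is checked.

    # Model-based testing in miniature: the model generates tests and predicts outputs.
    import random

    MODEL = {  # (state, input) -> (next state, expected output)
        ("locked", "coin"):   ("unlocked", "unlock"),
        ("locked", "push"):   ("locked",   "alarm"),
        ("unlocked", "coin"): ("unlocked", "thank_you"),
        ("unlocked", "push"): ("locked",   "lock"),
    }

    class Turnstile:
        """Implementation under test (here it happens to conform to the model)."""
        def __init__(self):
            self.locked = True
        def step(self, action):
            if action == "coin":
                out = "unlock" if self.locked else "thank_you"
                self.locked = False
            else:  # "push"
                out = "alarm" if self.locked else "lock"
                self.locked = True
            return out

    rng = random.Random(1)
    for _ in range(100):                               # 100 random test sequences
        sut, state = Turnstile(), "locked"
        for action in (rng.choice(["coin", "push"]) for _ in range(8)):
            state, expected = MODEL[(state, action)]
            actual = sut.step(action)
            assert actual == expected, f"fails on {action}: got {actual}, expected {expected}"
    print("all generated test sequences passed")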
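A toy feature model gives a feel for variability in software product lines (the features and constraints below are invented): each product is a selection of features, and the constraints of the feature model determine which selections are valid configurations.

    # A toy feature model: enumerate the valid products of a small product line.
    from itertools import product

    FEATURES = ["base", "gui", "cli", "encryption"]

    def is_valid(cfg):
        return (cfg["base"]                                    # root feature is mandatory
                and (cfg["gui"] or cfg["cli"])                 # at least one front end
                and (not cfg["encryption"] or cfg["gui"]))     # encryption requires the gui

    configs = (dict(zip(FEATURES, choice))
               for choice in product([False, True], repeat=len(FEATURES)))
    valid_products = [cfg for cfg in configs if is_valid(cfg)]

    print(f"{len(valid_products)} valid products out of {2 ** len(FEATURES)} combinations")
    for cfg in valid_products:
        print(sorted(name for name, chosen in cfg.items() if chosen))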
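The discrete/continuous mix typical of cyber-physical systems can be mimicked with a thermostat toy (all constants below are made up): the room temperature follows simple continuous dynamics, integrated with a forward-Euler step, while a discrete controller switches the heater using a hysteresis band.

    # A toy cyber-physical system: continuous thermal dynamics under a discrete
    # thermostat controller (all constants are invented for illustration).
    DT = 0.1            # integration step (minutes)
    AMBIENT = 10.0      # outside temperature (deg C)
    HEAT_POWER = 1.5    # heater contribution (deg C per minute)
    LEAK = 0.05         # heat-loss coefficient (per minute)

    temp, heater_on = 15.0, False
    for step in range(600):                             # one hour at DT = 0.1 min
        # Discrete controller: hysteresis around the 19-21 deg C band.
        if temp < 19.0:
            heater_on = True
        elif temp > 21.0:
            heater_on = False
        # Continuous dynamics, advanced by one forward-Euler step.
        temp += DT * (LEAK * (AMBIENT - temp) + (HEAT_POWER if heater_on else 0.0))
        if step % 100 == 0:
            print(f"t={step * DT:5.1f} min  temp={temp:5.2f} C  "
                  f"heater={'on' if heater_on else 'off'}")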