Tehran Institute for Advanced Studies (TEIAS)

Artificial Intelligence and Deep Learning


  • Natural Language Processing (NLP) lies at the intersection of Computer Science and Linguistics. Its goal is to provide machines with the ability to understand human language: for instance, to read a text and answer questions about it, to translate from one language to another, or to summarize a long document into a short paragraph.
       ■ Lexical Semantics: A core problem in NLP is to design a mathematical model that makes words “meaningful” to computers. The model must allow computers to map words into a semantic space in which semantically similar words are placed close to each other. For instance, in such a space it should be easy to compute that the words mouse and keyboard are related to each other, while keyboard and mask are not (a minimal embedding-based sketch of this idea appears after this list).
       ■ Lexical ambiguity: Most frequently used words in any language are polysemous, i.e., they can denote more than one meaning depending on the context in which they appear. For instance, mouse can refer to the rodent or to the computer device. A long-standing research problem in NLP, known as Word Sense Disambiguation, is to automatically determine which meaning of a word is intended in a given context (a Lesk-based sketch is given after this list).
       ■ Context-sensitive representation: A recent research trend in NLP is contextualised word embeddings. The goal is to build models that compute dynamic representations for words, dynamic in the sense that they adapt to their context (hence, the word mouse is associated with a different representation depending on its intended meaning); a BERT-based sketch is shown after this list.
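
The following is a minimal sketch of the semantic-space idea above, assuming the gensim library and its downloadable "glove-wiki-gigaword-50" pretrained vectors (both are illustrative choices, not part of this description): related words such as mouse and keyboard obtain a higher cosine similarity than unrelated ones such as keyboard and mask.

```python
# Minimal sketch: lexical semantics with static word embeddings.
# Assumes gensim and the pretrained "glove-wiki-gigaword-50" vectors
# (downloaded on first use); any pretrained KeyedVectors would do.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # word -> 50-dimensional vector

# Cosine similarity in the semantic space: related words score higher.
print(vectors.similarity("mouse", "keyboard"))  # expected: relatively high
print(vectors.similarity("keyboard", "mask"))   # expected: relatively low

# Nearest neighbours of a word in the space.
print(vectors.most_similar("keyboard", topn=5))
```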
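
As a rough illustration of Word Sense Disambiguation, the sketch below uses the classic Lesk algorithm as implemented in NLTK (an illustrative choice; it also assumes the WordNet and tokenizer data packages have been downloaded). Lesk picks the WordNet sense whose definition overlaps most with the surrounding context, so the same word can receive different senses in different sentences.

```python
# Minimal sketch: Word Sense Disambiguation with NLTK's Lesk algorithm.
# Assumes the NLTK 'wordnet' and tokenizer data have been downloaded,
# e.g. via nltk.download("wordnet") and nltk.download("punkt").
from nltk import word_tokenize
from nltk.wsd import lesk

sent_device = "i moved the mouse and clicked on the icon"
sent_rodent = "the cat chased a small mouse across the field"

# lesk() returns a WordNet synset; the chosen sense depends on the context.
print(lesk(word_tokenize(sent_device), "mouse"))  # expected: a device-related sense
print(lesk(word_tokenize(sent_rodent), "mouse"))  # expected: the rodent sense
```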

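Finally, a sketch of contextualised word embeddings, assuming the transformers and torch libraries and the public bert-base-uncased checkpoint (again illustrative choices): the same surface word mouse receives a different vector in each sentence, and the two rodent usages should end up closer to each other than to the device usage.

```python
# Minimal sketch: context-sensitive word representations with BERT.
# Assumes the transformers and torch libraries and the public
# "bert-base-uncased" checkpoint.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def mouse_vector(sentence):
    """Return the contextual embedding of the token 'mouse' in the sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (num_tokens, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    return hidden[tokens.index("mouse")]

v_device  = mouse_vector("She clicked the mouse to open the file.")
v_rodent  = mouse_vector("The mouse hid from the cat under the floor.")
v_rodent2 = mouse_vector("A small mouse was nibbling at the cheese.")

cos = torch.nn.functional.cosine_similarity
# The two rodent usages should be closer to each other than to the device usage.
print(cos(v_rodent, v_rodent2, dim=0).item())
print(cos(v_rodent, v_device, dim=0).item())
```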