Tehran Institute for Advanced Studies (TEIAS)

Data Science and Machine Learning Summer School

24-27 August, 2019

Venue

Khatam University
Address: No. 30, Hakim Azam St., North Shiraz St., Mollasadra Ave., Tehran

Tel: +98 21 8917 4612

Organizers

Executive Chair
Mohammad Morovati

 

Scientific Chair
Mohammadreza Mousavi
Mohammad Taher Pilehvar
Hossein Hojjat


Organization Chair
Fereshte Allahverdi

Overview

Data is one of the major assets of the future. Making sense of the huge amount of available data will create enormous opportunities. Data can range from natural language documents, to network traffic, to logs of system behavior. The world-class invited speakers of this summer school share their knowledge and experience on different aspects of these compelling challenges, including natural language processing, machine learning and model learning, machine learning for the design of cyber-physical systems, and data security and privacy.

Target Audience

The intended audience of the summer school consists of researchers and graduate students with some research background in computer science and engineering or closely related fields.

Speakers

Frits Vaandrager


Head of Department of Software Science
Institute for Computing and Information Sciences
Radboud University

Frits W. Vaandrager is Professor of Informatics for Technical Applications at the Institute for Computing and Information Sciences, Radboud University, Nijmegen. Frits earned his M.S. in Mathematics with a specialization in Computer Science at the University of Leiden in 1985 with the thesis “Algebraic techniques for concurrency and their application”. He earned his Ph.D. in Computer Science at the University of Amsterdam in 1990 with the thesis “Verification of two communication protocols by means of process algebra”. Frits has a strong interest in the development and application of theory, (formal) methods and tools for the specification and analysis of computer-based systems. In particular, he is interested in real-time embedded systems, distributed algorithms and protocols.

Improving Software Using Automata Learning
Automata learning is emerging as an effective technique for obtaining state machine models of software and hardware systems. In this lecture, I will start with an introduction to the theory of (active and passive) automata learning. Next, I will give an overview of work at Radboud University in which we used automata learning to find standard violations and security vulnerabilities in implementations of network protocols such as TCP, TLS, and SSH. Also, I will discuss the application of automata learning to support refactoring of legacy embedded control software. I will show how Galois connections help us to scale the application of learning algorithms to practical problems, and discuss the challenges that we face to further scale the application of automata learning techniques. 
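
As a toy illustration of the kind of analysis the lecture describes, the sketch below hand-writes a tiny state machine of the sort automata learning would infer from a TLS-like server, and then searches it for one hypothetical "standard violation" (application data accepted before the handshake completed). All states, inputs, and the property are made up for illustration; they are not taken from the lecture or from any real protocol implementation.

```python
from collections import deque

# Hypothetical learned model: learned_model[state][input] = (next_state, output).
learned_model = {
    "INIT":       {"ClientHello": ("HELLO_SENT", "ServerHello"),
                   "AppData":     ("INIT", "Alert")},
    "HELLO_SENT": {"Finished":    ("OPEN", "Finished"),
                   "AppData":     ("OPEN", "AppData")},   # suspicious shortcut
    "OPEN":       {"AppData":     ("OPEN", "AppData"),
                   "Close":       ("INIT", "Closed")},
}

def violates_handshake_property(model, start="INIT"):
    """Return an input sequence on which AppData is answered with AppData
    before any Finished message was exchanged (a made-up property)."""
    queue = deque([(start, False, [])])      # (state, finished_seen, trace)
    seen = {(start, False)}
    while queue:
        state, finished, trace = queue.popleft()
        for inp, (nxt, out) in model[state].items():
            if inp == "AppData" and out == "AppData" and not finished:
                return trace + [inp]          # counterexample trace
            nxt_finished = finished or inp == "Finished"
            if (nxt, nxt_finished) not in seen:
                seen.add((nxt, nxt_finished))
                queue.append((nxt, nxt_finished, trace + [inp]))
    return None

print(violates_handshake_property(learned_model))  # ['ClientHello', 'AppData']
```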

Hamed Hassani


Assistant Professor
Warren Center for Network and Data Sciences
University of Pennsylvania

I am an assistant professor in the Department of Electrical and Systems Engineering (as of July 2017). I hold a secondary appointment in the Department of Computer and Information Science, and I am a faculty affiliate of the Warren Center for Network and Data Sciences.
Before joining Penn, I was a research fellow at the Simons Institute, UC Berkeley (program: Foundations of Machine Learning). Prior to that, I was a post-doctoral scholar and lecturer in the Institute for Machine Learning at ETH Zürich. I received my Ph.D. degree in Computer and Communication Sciences from EPFL.

 

Submodularity in Data Science
We are witnessing a new era of science — ushered in by our ability to collect massive amounts of data and unprecedented ways to learn about the physical world. Many scientific and engineering models feature inherently discrete decision variables — from the selection of advertisements to appear on a user’s page to identifying objects in an image. The study of how to make near-optimal decisions from a massive pool of possibilities is at the heart of combinatorial optimization. Many of these problems are extremely difficult, and even those that are theoretically tractable may not scale to large instances. In this regard, submodularity has proven to be a key combinatorial structure that can be exploited to provide efficient algorithms with strong theoretical guarantees. This tutorial aims to provide a deep understanding of the various frameworks that have recently been developed for submodular optimization in the presence of the modern challenges in machine learning and data science. In particular, we will discuss challenges such as large-scale, online, distributed, and stochastic submodular maximization, and illustrate the discrete and continuous frameworks that address these challenges. A particular emphasis is on current research directions as well as concrete exemplar applications in data science. Slides>>
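
To make the submodularity idea concrete, here is a minimal sketch (my own toy example, not taken from the tutorial) of the classic greedy algorithm for monotone submodular maximization under a cardinality constraint, applied to maximum coverage. For this class of problems, greedy is guaranteed to reach at least a (1 - 1/e) fraction of the optimum.

```python
def greedy_max_coverage(sets, k):
    """Pick at most k sets whose union is (approximately) largest."""
    chosen, covered = [], set()
    for _ in range(k):
        # Marginal gain of each remaining set, given what is already covered.
        gains = {name: len(elems - covered) for name, elems in sets.items()
                 if name not in chosen}
        best = max(gains, key=gains.get)
        if gains[best] == 0:            # no remaining set adds anything new
            break
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# Hypothetical ad-placement example: which 2 slots reach the most users?
slots = {
    "homepage": {1, 2, 3, 4},
    "sidebar":  {3, 4, 5},
    "email":    {5, 6},
}
print(greedy_max_coverage(slots, k=2))  # (['homepage', 'email'], {1, 2, 3, 4, 5, 6})
```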

Hossein Hojjat


Assistant Professor
Rochester Institute of Technology

Hossein Hojjat is an assistant professor at the University of Tehran. He is also an assistant professor (on leave) at the Rochester Institute of Technology (RIT) and a visiting assistant professor at Cornell University. He earned a PhD in Computer Science from EPFL. He is on the editorial board of Information Processing Letters and is also a co-chair of the International Conference on Fundamentals of Software Engineering (FSEN). His research interests center on program synthesis and verification.

Mohsen Lesani


Assistant Professor
Computer Science and Engineering Department
University of California, Riverside

I am an assistant professor at the Computer Science and Engineering Department of the University of California, Riverside. I work with my aspiring students in the Safe, Secure and Smart Software (S3) lab. I did my postdoc at MIT and obtained my PhD from UCLA. My research interests span the areas of verification and synthesis, and concurrent and distributed computing. I develop specification, verification and synthesis techniques and tools to build reliable, secure and efficient computing systems, in particular, subtle concurrent and distributed systems. I often reduce verification and synthesis to simple sufficient conditions in order to leverage the increasingly powerful automated and semi-automated reasoning techniques and tools. My research was recognized as a SIGPLAN Research Highlight in 2019 and received distinguished paper awards at OOPSLA’18 and ISSRE’15.

Session 1:
Consistent Distributed Data Stores

Replication is widely used to provide fault tolerance and scalability for distributed data stores. However, it offers a spectrum of consistency choices that pose a dilemma for clients between correctness, responsiveness, and availability. Given a sequential data object and its integrity properties, this talk presents how a replicated object that guarantees state integrity and convergence, and avoids unnecessary coordination, can be automatically synthesized. The approach is based on a novel sufficient condition for integrity and convergence, called well-coordination, that requires certain orders between conflicting and dependent operations. We statically analyze the given sequential object to determine its conflicting and dependent methods, and use this information to avoid coordination. The talk also presents coordination protocols that are parametric in the analysis results and provide the well-coordination requirements. Slides>>
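
As a highly simplified sketch of the underlying idea (my own toy example, not the synthesis approach or analysis from the talk), the code below brute-forces which pairs of operations of a small sequential object cannot be reordered freely across replicas: either the two orders diverge (breaking convergence) or the integrity invariant can be violated. Such pairs are the ones that would require coordination; the others are coordination-free.

```python
def deposit(balance, amount):
    return balance + amount

def withdraw(balance, amount):
    # Withdraw only when the balance suffices; otherwise leave it unchanged.
    return balance - amount if balance >= amount else balance

OPS = {"deposit": deposit, "withdraw": withdraw}
INVARIANT = lambda balance: balance >= 0          # the account never goes negative

def needs_coordination(op1, op2, states=range(5), amounts=range(1, 4)):
    """True if applying op1/op2 in different orders from some state either
    diverges or breaks the invariant (i.e. the pair is conflicting/dependent)."""
    for s in states:
        for a1 in amounts:
            for a2 in amounts:
                r12 = OPS[op2](OPS[op1](s, a1), a2)   # op1 then op2
                r21 = OPS[op1](OPS[op2](s, a2), a1)   # op2 then op1
                if r12 != r21 or not (INVARIANT(r12) and INVARIANT(r21)):
                    return True
    return False

for a in OPS:
    for b in OPS:
        print(a, b, "needs coordination" if needs_coordination(a, b)
              else "coordination-free")
# Only deposit/deposit comes out coordination-free in this toy object.
```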

Session 2:
Cross-chain transactions

The value of cryptocurrencies is highly volatile, and investors require fast and reliable exchange systems. In a cross-chain transaction, multiple parties exchange assets across multiple blockchains; the transaction can be represented as a directed graph with vertices as parties and edges as asset transfers. Given a transaction graph, this talk presents protocols that guarantee the following property, called uniformity: if all parties conform to the protocol, all the assets are transferred; further, if any party deviates from the protocol, the conforming parties do not experience a loss.
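
The abstract does not spell out the protocols themselves. As background, one widely used building block for cross-chain swaps is the hashed timelock: an asset is released to the counterparty only if the hash preimage of a secret is revealed before a deadline, and is refundable afterwards. The sketch below is an illustrative toy in plain Python, not necessarily the mechanism used in the talk.

```python
import hashlib
import time

class HashedTimelock:
    """Toy hashed timelock: claimable with the correct preimage before the
    deadline, refundable to the original owner afterwards."""

    def __init__(self, hashlock: bytes, deadline: float):
        self.hashlock, self.deadline, self.settled = hashlock, deadline, None

    def claim(self, preimage: bytes, now: float) -> bool:
        if (self.settled is None and now <= self.deadline
                and hashlib.sha256(preimage).digest() == self.hashlock):
            self.settled = "claimed"
        return self.settled == "claimed"

    def refund(self, now: float) -> bool:
        if self.settled is None and now > self.deadline:
            self.settled = "refunded"
        return self.settled == "refunded"

# Alice locks an asset for Bob; Bob can claim it only by revealing the secret,
# which in a full swap simultaneously enables Alice to claim Bob's asset.
secret = b"alice-secret"
lock = HashedTimelock(hashlib.sha256(secret).digest(), deadline=time.time() + 3600)
print(lock.claim(secret, now=time.time()))   # True: the conforming party is paid
```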

Session 3:
Distributed Data Consistency Analysis

In this talk, we interactively analyze the consistency requirements of example data objects.

Mohammad Ali Maddah-Ali


Sharif University of Technology

Mohammad Ali Maddah-Ali received the B.Sc. degree from Isfahan University of Technology, and the M.A.Sc. degree from the University of Tehran, both in electrical engineering. From 2002 to 2007, he was with the Coding and Signal Transmission Laboratory (CST Lab), Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, Canada, working toward the Ph.D. degree. From 2007 to 2008, he worked at the Wireless Technology Laboratories, Nortel Networks, Ottawa, ON, Canada. From 2008 to 2010, he was a post-doctoral fellow in the Department of Electrical Engineering and … More>>

Mohammad Mousavi


Professor
University of Leicester, UK

Mohammad Mousavi is a professor of Data-Oriented Software Engineering at the University of Leicester, UK. He received his Ph.D. in Computer Science in 2005 from TU Eindhoven. He is the co-author of some 100 book chapters and scientific papers, and of the book “Modeling and Analysis of Communicating Systems”.

Modeling and Model Learning for Software Product Lines

Software Product Line (SPL) engineering is commonly used for developing variability-intensive software systems aimed at mass production and customization. An SPL consists of a set of products that are developed from a common platform by reusing a set of core assets. SPLs have been adopted in different domains by a wide range of industries. Hence, developing rigorous model-based techniques for their development and analysis can have major practical impact.
Conventional modeling of each individual product in an SPL can be very costly due to the potentially large number of products. Thus, several formalisms have been proposed for modeling SPLs at a higher level of abstraction, taking the structure of the SPL into account. In this tutorial, we will cover basic concepts from software product line engineering, SPL modelling, and model learning.
The research presented in this talk is the result of joint work with Mahsa Varshosaz and Diego Nascimento Damasceno. Slides>>
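
One common family-based SPL formalism is the featured transition system (FTS), in which each transition is guarded by a feature expression; projecting the FTS onto a concrete product yields that product's ordinary transition system. The toy sketch below is my own example, not necessarily one of the formalisms covered in the tutorial.

```python
# A tiny featured transition system for a vending-machine product line.
# (source, action, target, guard) -- the guard is a predicate over selected features.
fts = [
    ("idle",    "insert_coin", "paid",    lambda f: True),
    ("paid",    "brew_coffee", "serving", lambda f: "Coffee" in f),
    ("paid",    "brew_tea",    "serving", lambda f: "Tea" in f),
    ("serving", "add_milk",    "serving", lambda f: "Milk" in f),
    ("serving", "done",        "idle",    lambda f: True),
]

def project(fts, features):
    """Keep only the transitions whose guard holds for this product."""
    return [(s, a, t) for (s, a, t, guard) in fts if guard(features)]

basic_coffee = project(fts, {"Coffee"})
deluxe       = project(fts, {"Coffee", "Tea", "Milk"})
print(len(basic_coffee), "vs", len(deluxe), "transitions")   # 3 vs 5
```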

Seyed Mohsen Moosavi Dezfooli


Postdoc Researcher
ETH, Switzerland

I am a last-year PhD student in Computer and Communication Sciences working with Prof. Pascal Frossard at the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland. My research interests broadly include machine learning, computer vision, and signal processing. Currently, I am focusing on the analysis of the robustness of deep networks and its connection with the high-dimensional geometry of deep classifiers. During my PhD, I interned at Apple AI Research in 2017, where I worked on autonomous technologies. I received my M.S. degree in Communication Systems from EPFL in 2014, where I worked as a research assistant under the supervision of Prof. Martin Vetterli. In 2012, I received my B.S. in Electrical Engineering from Amirkabir University of Technology (Tehran Polytechnic), Iran.

The Achilles’ heel of deep learning

The problem of the adversarial vulnerability of deep neural networks has attracted a lot of attention in recent years. Many works have underlined the vulnerability of deep neural networks to small adversarial manipulations of the input. However, the underlying reasons for such instability have remained largely unknown. In this tutorial, I will give an overview of this field of research and talk about some of my recent works on this topic, in two parts.
In the first part, I will give a brief history of the problem with a focus on the robustness of deep neural networks. I will talk about the most important attack schemes and how vulnerable deep neural networks are. Next, we discuss, at a high level, some general ideas to counter adversarial attacks and to improve the robustness properties of deep neural networks. If time permits, I will also give an overview of the current state of research on the reasons behind such vulnerabilities.
In the second part, I propose a novel approach to analyze adversarial vulnerability by studying the geometry of the decision regions of deep classifiers. We will quantify the geometric factors affecting robustness, such as curvature and correlation in the decision boundary of deep image classifiers. I show how these geometric insights can be exploited to design computationally efficient methods to evaluate and improve the robustness properties of state-of-the-art image classifiers. Such insights and methods will eventually contribute to the design of robust ML systems in safety-critical applications of AI such as autonomous driving and automated medical diagnosis. Slides>>
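
To see what "small adversarial manipulations of the input" means in practice, here is a minimal sketch of the fast gradient sign method (FGSM), the simplest standard attack. This is a generic example, not one of the speaker's own attack or analysis methods; the `model` it operates on is any image classifier you supply.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Return x plus an epsilon-sized step in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()       # small L_inf-bounded perturbation
    return x_adv.clamp(0.0, 1.0).detach()     # keep the image in a valid range

# Usage with any classifier `model` mapping (N, C, H, W) images to logits:
#   x_adv = fgsm_perturb(model, x, y)
#   print(model(x).argmax(1), model(x_adv).argmax(1))   # often no longer equal
```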

Raúl Pardo


Researcher
IT University of Copenhagen

Raúl is a postdoctoral researcher in the SQUARE group at IT University of Copenhagen. He completed a PhD degree in Computer Science from Chalmers University of Technology. He spent almost two years in the Privatics team at Inria. His research is focused on developing rigorous techniques to design, analyse and build software to protect online privacy. His interests lie at the intersection of formal methods, online privacy and computer security.

Privacy in Online Social Networks

As the use of Online Social Networks (OSNs) such as Facebook, Twitter or Instagram increases, privacy breaches in these systems keep pace with their growth. A data breach occurs when users’ personal data are shared with an unintended audience or are misused. Data breaches can be caused by OSN users (e.g., tagging a user in a picture or re-sharing a post) or by the OSN provider (e.g., selling personal data for targeted advertising without users’ consent).
This tutorial gives an overview of privacy threats in OSNs and mechanisms to overcome them. We start off by describing OSN architectures and storage systems that prevent OSN providers from misusing users’ personal data. Then, we look into the access control models used in OSNs, which help users limit the audience of the information they share. We conclude by presenting recent work on privacy policy languages for OSNs, which provide users with fine-grained control over the information they share. Slides>>
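
As a toy illustration of the access-control idea (my own example, not a formalism from the tutorial), the sketch below shows a relationship-based check of the kind OSNs use to restrict a post's audience to "friends" or "friends of friends".

```python
friends = {                      # undirected friendship graph
    "alice": {"bob", "carol"},
    "bob":   {"alice", "dave"},
    "carol": {"alice"},
    "dave":  {"bob"},
}

def may_view(owner, viewer, audience):
    """audience is one of: 'only_me', 'friends', 'friends_of_friends', 'public'."""
    if viewer == owner or audience == "public":
        return True
    if audience == "friends":
        return viewer in friends[owner]
    if audience == "friends_of_friends":
        return viewer in friends[owner] or any(
            viewer in friends[f] for f in friends[owner])
    return False                                  # 'only_me'

print(may_view("alice", "dave", "friends"))              # False
print(may_view("alice", "dave", "friends_of_friends"))   # True (via bob)
```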

Mohammad Taher Pilehvar


Assistant Professor
Iran University of Science and Technology

Mohammad Taher Pilehvar is an Assistant Professor at Iran University of Science and Technology and an Affiliated Lecturer at the University of Cambridge. Prior to this, he was a research associate at the Language Technology Lab, University of Cambridge. He did his PhD under the supervision of Roberto Navigli. For the past ten years, Taher has been active in research in Natural Language Processing, mainly focusing on lexical semantics, with several publications (including two ACL best paper nominations, in 2013 and 2017) on sense representation and semantic similarity. Taher has also been active in organizing several international workshops, tutorials and academic competitions.

A Primer on Natural Language Processing
In this talk, I will provide a general introduction to Natural Language Processing (NLP). I will list some of the reasons that make NLP a challenging problem, including language ambiguity and common sense knowledge. Then, I will quickly overview some of the applications of NLP. Finally, I will conclude with open research problems. Slides>>
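
A tiny illustration of the lexical ambiguity mentioned above (my own example, using NLTK's WordNet interface rather than any tool from the talk): the single surface form "bank" maps to many distinct senses, and choosing the intended one is part of what makes NLP hard.

```python
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

# Print a few of the senses WordNet lists for "bank".
for synset in wn.synsets("bank")[:4]:
    print(synset.name(), "-", synset.definition())
# e.g. bank.n.01 - sloping land (especially the slope beside a body of water)
#      depository_financial_institution.n.01 - a financial institution that ...
```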

Mehrdad Gharib Shirangi


Senior Staff Data Scientist
Baker Hughes, a GE Company

Mehrdad Gharib Shirangi is a Senior Staff Data Scientist at Baker Hughes, a GE Company. Before joining GE, he was a PhD researcher at Stanford University. Dr Shirangi’s current interests include prescriptive data analytics, machine learning for optimal decision making, and optimization of drilling & well completion in oil & gas operations. Shirangi holds BS degrees in mechanical engineering and petroleum engineering from Sharif University, an MS degree in petroleum engineering from University of Tulsa, and a PhD degree from Stanford University.

AutoML for Advanced Data Science: Advances and Opportunities
Automated machine learning (autoML) has recently become a popular subject at various tech companies. Major cloud providers (Amazon AWS, Google GCP, Microsoft Azure, IBM Watson Studio) each provide their own version of autoML. The basic idea is to automate the various steps of a typical machine learning project, from data preprocessing to feature selection, algorithm selection and hyper-parameter optimization. This can significantly accelerate the process of obtaining an accurate machine learning model, while potentially improving accuracy compared with a manual process. The applications are far-reaching: researchers can now test a hypothesis within a few hours, rather than weeks, once they have obtained the data, and engineers in companies can build and maintain predictive models much more efficiently. In this talk, we consider regression and classification problems where the goal is to obtain a model with the smallest generalization error. We review several recent tools and libraries for autoML, compare them on a number of datasets, and conclude with recommendations for improving the machine learning model generation process.
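
As a minimal sketch of one slice of this automation (my own example, not one of the commercial autoML products mentioned above), the code below uses scikit-learn's GridSearchCV to automate algorithm selection and hyper-parameter optimization inside a single pipeline.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)
pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression(max_iter=5000))])

# The search space covers both which estimator to use and its hyper-parameters.
search_space = [
    {"clf": [LogisticRegression(max_iter=5000)], "clf__C": [0.01, 0.1, 1, 10]},
    {"clf": [RandomForestClassifier()], "clf__n_estimators": [100, 300],
     "clf__max_depth": [None, 5]},
]
search = GridSearchCV(pipe, search_space, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```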

Fatemeh Ghassemi


Assistant Professor
University of Tehran

Fatemeh Ghassemi received her Ph.D. in Software Engineering from Sharif University of Technology in 2011 and a Ph.D. in Computer Science from Vrije Universiteit Amsterdam in 2018. She has been an assistant professor at the University of Tehran since 2012, where she supervises the Formal Methods laboratory. Her research interests include formal methods in software engineering, protocol verification, model checking, cyber-physical systems, theory of programming languages, and software testing.

Offline Automata Learning of Network Applications

The behavior of network applications can be abstractly modeled by automata: the states identify the execution status of an application (e.g., variable values and the statements to be executed), while the transitions denote the possible communications of the application. The traces of the automata express the possible sequences of application interactions, which can be used as a precise model for network application classification. Due to the diversity and variability of network applications, port-based and statistical signature detection approaches have become inefficient, and behavioral classification approaches are considered instead. Learning automaton-based models of applications and classifying traffic in terms of the learned models eliminates the shortcomings of models derived from non-behavioral features and improves the resulting classification. The automata models are learned from the traces of the applications such that they not only include the given traces but also cover unobserved traces as much as possible. As the behaviors of network applications depend on distributed systems, active automata learning approaches cannot be employed. We extend the passive automata learning approach with a state-merging algorithm that generalizes the learned models so that they subsume unseen behaviors while admitting as few unwanted traces as possible. The merging algorithm considers the behavior of the well-known network protocols that an application may rely on, and merges those states of the learned model in which the participating protocols have progressed similarly, modulo counter abstraction. To further cover unobserved behaviors, we leverage complex event processing to complete the model with the unseen interleavings of behaviors that arise from the concurrent nature of applications.
The learned models are used to distinguish the executions of applications in an interleaved execution trace of different systems. To demonstrate the effectiveness of our approach, we compare it to related approaches in terms of true-positive rate, false-positive rate, and test time. Our results indicate that our technique prevents the inclusion of invalid traces, so that unobserved behaviors are covered with acceptable precision. Slides>>
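
As a highly simplified sketch of what passive learning with state merging looks like (my own toy example, not the protocol-aware, counter-abstracted algorithm described above), the code below builds a prefix tree acceptor (PTA) from a few observed traces and then merges states with identical sets of enabled next messages.

```python
# Observed interaction traces of a hypothetical TCP-like application.
traces = [
    ["SYN", "SYN-ACK", "ACK", "DATA", "FIN"],
    ["SYN", "SYN-ACK", "ACK", "DATA", "DATA", "FIN"],
    ["SYN", "SYN-ACK", "ACK", "FIN"],
]

# 1. Prefix tree acceptor: every distinct prefix of a trace is a state,
#    mapped to the set of messages observed immediately after it.
pta = {}
for trace in traces:
    prefix = ()
    for msg in trace:
        pta.setdefault(prefix, set()).add(msg)
        prefix = prefix + (msg,)
    pta.setdefault(prefix, set())

# 2. Crude state merging: collapse states that enable exactly the same
#    next messages (a much weaker criterion than the one in the talk).
signature = {state: frozenset(nexts) for state, nexts in pta.items()}
merged_states = set(signature.values())
print(len(pta), "PTA states ->", len(merged_states), "states after merging")
# 9 PTA states -> 6 states after merging
```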

Mahsa Varshosaz


Researcher
IT University of Copenhagen

Mahsa Varshosaz is a postdoctoral researcher at the Computer Science Department of the IT University of Copenhagen. She received her PhD degree in computer science and engineering from Halmstad University in 2019. Her research interests include model-based testing, verification (in particular, model checking) of software systems, and analysis of software product lines. In the past few years, her research has focused on investigating and developing modelling and model-based testing approaches for software product lines. Recently, she has started working on automatic program repair, where the goal is to reduce the effort programmers spend fixing bugs by automatically suggesting a set of possible fixes.

Michael Zock


Emeritus Research Director
French National Centre for Scientific Research

Michael Zock is an emeritus research director at the CNRS and Honorary Professor at the Research Institute of Information and Language Processing (University of Wolverhampton, UK). Having started his research career in the eighties at LIMSI, an A.I. lab close to Paris, he joined the NLP group of LIF (now LIS-lab) at the University of Aix-Marseille in 2006… More>>

Possible Strategies to Overcome the ‘Tip-of-the-Tongue’-Problem
Dictionaries are repositories of knowledge concerning words. While readers are mostly concerned with meanings, writers are generally more concerned with their expressive forms (lemmata, words). I will focus here on the latter task.
More precisely, I am interested in building a set of tools to help authors overcome the tip-of-the-tongue (ToT) problem. Interestingly, people in this state always know something about the target word. Their knowledge (or, rather, the knowledge they are aware of) may concern the meaning (part of the definition), the sound (number of syllables, similar-sounding words), or typically related words (associations, collocations). Given these differences in knowledge states, we need to create different resources. In my talk, I will present an implementation addressing the ‘access-by-sound’ problem, as well as a roadmap concerning the other two problems: (a) access via meaning, and (b) access via target-related words. Slides>>
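
As a toy illustration of the access-by-sound idea (my own sketch, not the implementation presented in the talk), the code below looks up a target word from an approximate, similar-sounding form by combining a crude phonetic normalization with fuzzy matching from the standard library.

```python
import difflib

def phonetic_key(word):
    """Very rough phonetic code: lowercase and collapse some lookalike sounds."""
    w = word.lower()
    for src, dst in [("ph", "f"), ("ck", "k"), ("qu", "kw"), ("c", "k"), ("y", "i")]:
        w = w.replace(src, dst)
    return "".join(ch for ch in w if ch.isalpha())

vocabulary = ["catastrophe", "catamaran", "cataclysm", "cartography", "photograph"]
index = {phonetic_key(w): w for w in vocabulary}

def access_by_sound(fragment, n=3):
    """Return the vocabulary words whose sound is closest to the given fragment."""
    keys = difflib.get_close_matches(phonetic_key(fragment), index.keys(),
                                     n=n, cutoff=0.4)
    return [index[k] for k in keys]

print(access_by_sound("katastrofy"))   # ['catastrophe', ...]
```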

Schedule

Day 1

Hamed Hassani - Session 01

Hamed Hassani - Session 02

Hamed Hassani - Session 03

Mohsen Moosavi - Session 01

Mohsen Moosavi - Session 02

Mohsen Moosavi - Session 03

Day 2

Raúl Pardo

Mohammad Mousavi - Session 01

Mohammad Mousavi - Session 02

Fatemeh Ghassemi

Day 3

Mohammad Taher Pilehvar

Michael Zock

Mehrdad Shirangi

Frits Vaandrager - Session 01

Frits Vaandrager - Session 02

Frits Vaandrager - Session 03

Day 4

Mohammad Ali Maddah-Ali - Session 01

Mohammad Ali Maddah-Ali - Session 02

Mohammad Ali Maddah-Ali - Session 03

Mohsen Lesani - Session 01

Mohsen Lesani - Session 02

Mohsen Lesani - Session 03