Computer Sciences Colloquium - Understanding Deep Learning for Natural Language Processing

Omer Levy

22 October 2017, 11:00 
Schreiber Building, Room 006 

Abstract:

Deep learning is revolutionizing natural language processing (NLP), with innovations such as word embeddings and long short-term memory (LSTM) playing a key role in virtually every state-of-the-art NLP system today. However, what these neural components learn in practice is somewhat of a mystery. This talk dives into the inner workings of word embeddings and LSTMs, in an attempt to gain a better mathematical and linguistic understanding of what they do, how they do it, and why it works.
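
To give a flavor of the kind of mathematical analysis the abstract alludes to: Levy and Goldberg (2014) showed that skip-gram with negative sampling implicitly factorizes a shifted pointwise mutual information (PMI) matrix. The sketch below is a minimal, self-contained illustration of that connection, not material from the talk itself; the co-occurrence counts are invented, and the PPMI-plus-truncated-SVD recipe shown is one standard way to obtain dense word vectors from sparse counts.

```python
import numpy as np

# Hypothetical word-context co-occurrence counts (rows: words, columns: contexts).
counts = np.array([
    [10., 2., 0., 1.],
    [ 3., 8., 1., 0.],
    [ 0., 1., 9., 4.],
    [ 1., 0., 5., 7.],
])

total = counts.sum()
p_joint = counts / total                            # p(w, c)
p_word = counts.sum(axis=1, keepdims=True) / total  # p(w)
p_ctx = counts.sum(axis=0, keepdims=True) / total   # p(c)

# Positive PMI: max(log[p(w,c) / (p(w) p(c))], 0), with zero-count cells kept at zero.
with np.errstate(divide="ignore"):
    pmi = np.log(p_joint / (p_word * p_ctx))
ppmi = np.where(np.isfinite(pmi), np.maximum(pmi, 0.0), 0.0)

# Truncated SVD of the PPMI matrix yields dense word vectors, analogous to
# what skip-gram with negative sampling computes implicitly.
u, s, vt = np.linalg.svd(ppmi)
k = 2
word_vectors = u[:, :k] * np.sqrt(s[:k])  # one k-dimensional vector per word

# Similarity between words is then just cosine similarity of their vectors.
a, b = word_vectors[0], word_vectors[1]
print(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

On real corpora, essentially this pipeline, run over a large vocabulary with a shifted PMI matrix, produces embeddings competitive with word2vec's, which is part of the mathematical story the talk examines.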

 

Bio:

I am a postdoctoral researcher in the Department of Computer Science & Engineering at the University of Washington, working with Prof. Luke Zettlemoyer. Previously, I completed my PhD at Bar-Ilan University under the guidance of Prof. Ido Dagan and Dr. Yoav Goldberg. I am interested in designing algorithms that mimic the basic language abilities of humans, and in using them to build semantic applications, such as question answering and summarization, that help people cope with information overload. I am also interested in deepening our qualitative understanding of how machine learning is applied to language and why it succeeds (or fails), in the hope that better understanding will foster better methods.

 