Keynotes

Keynote #1, Jure Leskovec, "Language as Window into Social Dynamics of Online Communities"

Keynote #2, Stephen Clark, "The Theory and Practice of Compositional Distributed Semantics"

Keynote #3, Tara Sainath, "Deep Learning Advances, Challenges and Future Directions for Speech and Language Processing"


Keynote #1, "Language as Window into Social Dynamics of Online Communities"

Monday, December 8, 8:30-9:30

Room: Emerald A-B

Jure Leskovec (Stanford University, Computer Science)
http://cs.stanford.edu/~jure/

Abstract

Most of the activity in online social networks and communities takes the form of natural language, from product reviews to comments, conversations, and posts. Such communities capture a complete linguistic record of the activity of millions of people over many years. Computationally analyzing such linguistic traces of human activity offers enormous potential for addressing long-standing scientific questions, as well as a new perspective on fundamental questions in the social sciences and linguistics.

The talk discusses how analysis of the language used in online communities can be applied to study online interactions and the dynamics of social networks. As members of online communities join and depart, the linguistic norms evolve, stimulating further changes to the membership and its social dynamics. We discuss how users follow a well-defined life-cycle with respect to their susceptibility to adopt new community norms, and how this insight can be harnessed to predict how long a user will stay active in a community. We also explore how the feedback users receive when others like or vote on their posts creates complex social feedback effects that shape their future behavior as well as the dynamics of the whole community.

This talk includes joint work with Justin Cheng, Cristian Danescu-Niculescu-Mizil, Dan Jurafsky, Christopher Potts, and Robert West.

About the speaker

Jure Leskovec is an assistant professor of Computer Science at Stanford University. His research focuses on mining large social and information networks, on problems motivated by large-scale data, the Web, and online media. This research has won several awards, including a Microsoft Research Faculty Fellowship, the Alfred P. Sloan Fellowship, and numerous best paper awards. Leskovec received his bachelor's degree in computer science from the University of Ljubljana, Slovenia, received his PhD in machine learning from Carnegie Mellon University, and completed postdoctoral training at Cornell University. You can follow him on Twitter at @jure.


Keynote #2, "The Theory and Practice of Compositional Distributed Semantics"

Tuesday, December 9, 8:30-9:30

Room: Emerald A-B

Stephen Clark (University of Cambridge Computer Laboratory)
http://www.cl.cam.ac.uk/~sc609/

Abstract

There has been a recent resurgence in the use of distributed semantic representations in language and speech processing research. Much of this work has focused on semantic representations of words, obtained either through the classic distributional technique of counting context words, or through the use of neural networks trained to predict a word in context. Given the existence of compositional structures in language, an obvious next step is to consider how compositionality can be modeled in the distributional setting. A combination of compositional and distributed representations has many potential advantages for computational semantics, but developing such a combination presents many challenges.
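To make the first of these two techniques concrete, the classic count-based approach can be sketched in a few lines. This is a minimal illustration with a toy corpus and window size, not code from the talk:

```python
from collections import Counter, defaultdict

def count_vectors(corpus, window=2):
    """Build count-based distributional vectors: each word is represented
    by the frequencies of the words appearing within `window` tokens of it."""
    vectors = defaultdict(Counter)
    for sentence in corpus:
        for i, word in enumerate(sentence):
            lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vectors[word][sentence[j]] += 1
    return vectors

corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"]]
vecs = count_vectors(corpus)
# "cat" and "dog" occur in the same contexts ("the", "sat", "on"),
# so their count vectors overlap: the basis of distributional similarity.
```

The prediction-based alternative mentioned above instead trains a neural network to predict a word from its context (or vice versa), reading the representations off the learned weights.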

In this talk I will describe a complete mathematical framework for deriving distributed representations compositionally using the grammatical framework of Combinatory Categorial Grammar (CCG). The framework exploits a natural correspondence between tensor-based representations and complex grammatical types. The existence of robust, broad-coverage CCG parsers opens up the possibility of applying the tensor-based framework to naturally occurring text.
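As a toy illustration of the correspondence between tensors and grammatical types (the specific vectors and matrix below are hypothetical, not from the talk): a noun of atomic type N is an order-1 tensor (a vector), an adjective of type N/N is an order-2 tensor (a matrix), and applying the adjective to the noun is tensor contraction, i.e. a matrix-vector product:

```python
def contract(matrix, vector):
    """Apply an order-2 tensor (e.g. an adjective of type N/N) to an
    order-1 tensor (a noun of type N) by contracting over the shared
    index: an ordinary matrix-vector product."""
    return [sum(m * v for m, v in zip(row, vector)) for row in matrix]

# Hypothetical toy representations, for illustration only.
noun_dog = [1.0, 0.0, 2.0]           # vector for "dog" (type N)
adj_big = [[2.0, 0.0, 0.0],          # matrix for "big" (type N/N)
           [0.0, 1.0, 0.0],
           [0.0, 0.0, 0.5]]
big_dog = contract(adj_big, noun_dog)  # vector for "big dog" (type N)
```

In the same way, a transitive verb with a complex two-argument type corresponds to an order-3 tensor, contracted once with each of its subject and object vectors.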

I will also describe ongoing efforts to implement the framework, for which there are considerable practical challenges. I will describe some of the sentence spaces we are exploring; some of the datasets we are developing; and some of the machine learning techniques we are using in an attempt to learn the values of the tensors from corpus data.

This work is being carried out with Luana Fagarasan, Douwe Kiela, Jean Maillard, Tamara Polajnar, Laura Rimell, Eva Maria Vecchi, and involves collaborations with Mehrnoosh Sadrzadeh (Queen Mary) and Ed Grefenstette and Bob Coecke (Oxford).

About the speaker

Stephen Clark is Reader in Natural Language Processing at the University of Cambridge. Previously he was a faculty member at the University of Oxford and a postdoctoral researcher at the University of Edinburgh. He holds a PhD in Computer Science and Artificial Intelligence from the University of Sussex and a Philosophy degree from Cambridge. His main research interest is the development of data-driven models for the syntactic and semantic analysis of natural language. He holds a €1M, 5-year ERC Starting Grant to work on integrating distributional and compositional models of meaning, and coordinates a £1.5M, 5-site EPSRC grant in this area. He is currently Chair of the European Chapter of the Association for Computational Linguistics and was program co-chair for ACL 2010.


Keynote #3, "Deep Learning Advances, Challenges and Future Directions for Speech and Language Processing"

Wednesday, December 10, 8:30-9:30

Room: Emerald A-B

Tara Sainath (Google)
https://sites.google.com/site/tsainath/

Abstract

In the past few years, we have seen a paradigm shift in the speech recognition community towards using deep neural networks (DNNs). DNNs were first explored for acoustic modeling, where numerous research labs demonstrated relative improvements in WER of 10-40%.

In the first part of this talk, I will provide an overview of the latest improvements in deep learning across various research labs since its initial adoption. First, I will talk about alternative neural network architectures, including Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNNs), which have yielded additional gains over DNNs for acoustic modeling and language modeling. Second, I will discuss the challenges in training these networks, and describe large-scale GPU and CPU optimization strategies that have allowed us to train them on thousands of hours of data.

In the second part of this talk, I will discuss which problems in the speech recognition pipeline remain relatively unexplored by deep learning techniques, and provide some insights as to how deep learning could help in these areas.

About the speaker

Tara Sainath received her PhD in Electrical Engineering and Computer Science from MIT in 2009. The main focus of her PhD work was acoustic modeling for noise-robust speech recognition. After her PhD, she spent 5 years in the Speech and Language Algorithms group at IBM T.J. Watson Research Center before joining Google Research. She co-organized a special session on Sparse Representations at Interspeech 2010 in Japan and organized a special session on Deep Learning at ICML 2013 in Atlanta. In addition, she is a staff reporter for the IEEE Speech and Language Processing Technical Committee (SLTC) Newsletter. Her research interests are mainly in acoustic modeling, including deep neural networks and sparse representations.