Ryan Lowe

Ph.D. Student, Reasoning & Learning Lab, McGill University
ryan.lowe@cs.mcgill.ca


I am a Ph.D. student in Computer Science in the Reasoning & Learning Lab at McGill University, supervised by Joelle Pineau. My current research focuses on multi-agent reinforcement learning and the emergence of language and behavioural complexity; I spent some time working on these problems at OpenAI under Igor Mordatch and Pieter Abbeel. In the past, I worked on deep learning methods for dialogue systems and on improving dialogue evaluation metrics. Before McGill, I worked at the Institute for Quantum Computing, the Max Planck Institute, and the National Research Council.

My CV can be found here.

News

  • We have open-sourced our code for the multi-agent training algorithm MADDPG, used in our NIPS paper, here.
  • We've open-sourced our Multi-Agent Particle Environments used in our multi-agent work at OpenAI. They are written in Python and use the OpenAI Gym interface.
  • Our paper on actor-critic methods for multi-agent problems was accepted to NIPS 2017.
  • We've released a blog post detailing some of our multi-agent experiments: you can view it here.
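The particle environments mentioned above follow the standard OpenAI Gym interaction loop (reset, then repeated step calls returning an observation, reward, done flag, and info dict). As a minimal sketch of that interface, here is a toy stand-in environment; the class and its dynamics are purely illustrative, not the actual particle environments:

```python
class ParticleEnvStub:
    """Toy environment exposing the Gym-style reset/step interface.

    Illustrative stand-in only -- the real particle environments have
    multi-agent observations, continuous actions, and physics.
    """

    def __init__(self, horizon=5):
        self.horizon = horizon  # episode length
        self.t = 0

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.t = 0
        return [0.0, 0.0]

    def step(self, action):
        """Advance one timestep; return (obs, reward, done, info)."""
        self.t += 1
        obs = [0.1 * self.t, float(action)]
        reward = 1.0
        done = self.t >= self.horizon
        return obs, reward, done, {}


# Standard Gym-style rollout loop.
env = ParticleEnvStub()
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    obs, reward, done, info = env.step(0)  # fixed action for illustration
    total_reward += reward
```

An agent plugs into this loop by replacing the fixed action with a policy's output; the same loop shape works for any environment exposing the Gym interface.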

Publications


Preprints

Iulian Serban, Ryan Lowe, Peter Henderson, Laurent Charlin, Joelle Pineau.
"A Survey of Available Corpora for Building Data-Driven Dialogue Systems."
Submitted to Dialogue & Discourse, 2016.
[paper]

2018

Peter Henderson, Koustuv Sinha, Nicolas Angelard-Gontier, Nan Ke, Genevieve Fried, Ryan Lowe, Joelle Pineau.
"Ethical Challenges in Data-Driven Dialogue Systems."
In AAAI/ACM AI Ethics and Society Conference, 2018.
[paper]

2017

Ryan Lowe*, Yi Wu*, Aviv Tamar, Jean Harb, Pieter Abbeel, Igor Mordatch.
"Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments."
In Neural Information Processing Systems (NIPS), 2017.
[paper] [code]

Ryan Lowe*, Michael Noseworthy*, Iulian Serban, Nicolas Angelard-Gontier, Yoshua Bengio, Joelle Pineau.
"Towards an Automatic Turing Test: Learning to Evaluate Dialogue Responses."
In Association for Computational Linguistics (ACL), 2017. [Outstanding Paper]
[paper] [code]

Peter Benner, Ryan Lowe, Matthias Voigt.
"L∞-Norm Computation for Large-Scale Descriptor Systems Using Structured Iterative Eigensolvers."
In Numerical Algebra, Control, and Optimization (NACO), 2017.
[paper]

Iulian Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, Yoshua Bengio.
"A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues."
In Association for the Advancement of Artificial Intelligence (AAAI), 2017.
[paper] [code]

Teng Long, Emmanuel Bengio, Ryan Lowe, Jackie Cheung, Doina Precup.
"World Knowledge for Reading Comprehension: Rare Entity Prediction with Hierarchical LSTMs Using External Descriptions."
In Empirical Methods in Natural Language Processing (EMNLP), 2017.
[paper]

Ryan Lowe, Nissan Pow, Iulian Serban, Laurent Charlin, Chia-Wei Liu, Joelle Pineau.
"Training End-to-End Dialogue Systems with the Ubuntu Dialogue Corpus."
In Dialogue & Discourse, 2017.
[paper]

Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, Yoshua Bengio.
"An Actor-Critic Algorithm for Sequence Prediction."
In International Conference on Learning Representations (ICLR), 2017.
[paper] [code]

2016

Iulian Serban, Ryan Lowe, Laurent Charlin, Joelle Pineau.
"Generative Deep Neural Networks for Dialogue: A Short Review."
In NIPS Workshop on Learning Methods for Dialogue, 2016. [Oral]
[paper]

Teng Long, Ryan Lowe, Jackie Cheung, Doina Precup.
"Leveraging Lexical Resources for Learning Entity Embeddings in Multi-Relational Data."
In Association for Computational Linguistics (ACL, short paper), 2016.
[paper]

Chia-Wei Liu*, Ryan Lowe*, Iulian Serban*, Mike Noseworthy*, Laurent Charlin, Joelle Pineau.
"How NOT to Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation."
In Empirical Methods in Natural Language Processing (EMNLP), 2016. [Oral]
[paper]

Emmanuel Bengio, Pierre-Luc Bacon, Ryan Lowe, Joelle Pineau, Doina Precup.
"Reinforcement Learning of Conditional Computation Policies for Neural Networks."
In ICML Workshop on Abstractions in RL, 2016. [Oral]

Ryan Lowe, Iulian Serban, Michael Noseworthy, Laurent Charlin, Joelle Pineau.
"On the Evaluation of Dialogue Systems with Next Utterance Classification."
In Proceedings of SIGDIAL (short paper), 2016.
[paper]

2015

Ryan Lowe*, Nissan Pow*, Iulian Serban, Joelle Pineau.
"The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems."
In SIGDIAL, 2015. [Oral]
[paper] [code] [dataset] [slides]

Ryan Lowe, Nissan Pow, Iulian Serban, Laurent Charlin, Joelle Pineau.
"Incorporating Unstructured Textual Knowledge into Neural Dialogue Systems."
In NIPS Workshop on Machine Learning for Spoken Language Understanding, 2015.
[paper]

< 2015

Peter Benner, Ryan Lowe, Matthias Voigt.
"Computation of the H∞-Norm for Large-Scale Systems."
Numerical Solution of PDE Eigenvalue Problems Workshop, Oberwolfach, Germany, pp. 3289-3291, 2013.
[paper] [slides]

Peter Benner, Ryan Lowe, Matthias Voigt.
"Numerical Methods for Computing the H∞-Norm of Large-Scale Descriptor Systems."
Householder Symposium XIX, Spa, Belgium, pp. 248-249, 2013.
[paper]

Resources




ADEM: An Automatic Dialogue Evaluation Tool


Most automatic evaluation methods for non-task-oriented dialogue (i.e. no task completion signal) perform poorly. We set out to train a model that could replicate human judgements of dialogue response quality on the Twitter dataset. We've open-sourced our model so that other researchers can use it: you can find the code here. Our model is not perfect and only works on Twitter for now, but we believe the same idea can be scaled up and used in other domains.


The Ubuntu Dialogue Corpus v2


The Ubuntu Dialogue Corpus v2 is an updated version of the original Ubuntu Dialogue Corpus. It was created in conjunction with Rudolf Kadlec and Martin Schmid at IBM Watson in Prague. In the updated version, the training, validation, and test sets are split disjointly by time, which more closely models real-world applications. It also has a new context sampling scheme that favours longer contexts, a more reproducible entity replacement procedure, and some bug fixes.

You can download the Ubuntu Dialogue Corpus v2 here.
Code to replicate the results from the paper is available here.

The Ubuntu Dialogue Corpus v1


The Ubuntu Dialogue Corpus v1 is a dataset consisting of almost 1 million dialogues extracted from the Ubuntu IRC chat logs. This dataset has several desirable properties: it is very large, each conversation has multiple turns (a minimum of 3), and it is formed from chat-style messages (as opposed to tweets). It also has a natural application to technical support. The size of this dataset makes it a great resource for training dialogue models, particularly neural architectures.

Note that this dataset is outdated; please use the Ubuntu Dialogue Corpus v2.

Talks & Articles


Technical Talks

An Actor-Critic Algorithm for Sequence Prediction [slides]
The Ubuntu Dialogue Corpus [slides]

Non-Technical Talks

Research in the Reasoning and Learning Lab [slides]
Humanity in the Age of the Machines [slides]

Articles

How Machines Learn
Graphite Publications, 2016.
[article]

Artificial Intelligence has Already Taken Over
Graphite Publications, 2016.
[article]

Website design replicated with permission from Dustin Tran.