Prasanna Parthasarathi will be graduating with a Ph.D. in Computer Science from the School of Computer Science, McGill University. He was advised by Prof. Joelle Pineau on his research in dialogue generation, structured representations for language generation, knowledge-based dialogue generation, compositional dialogue tasks, and probing neural language understanding models, among other topics. Prasanna will be joining Noah's Ark Lab (Huawei), Montreal as a Senior Researcher on their Natural Language Processing team in March 2022.

During his Ph.D., he was affiliated with Mila-Quebec AI Institute and also enjoyed opportunities to work at Google Brain, Facebook AI Research, and Noah's Ark Lab (Huawei) in the roles of Research Intern and Graduate Student Researcher. Further, Prasanna co-chairs the workshop on Novel Ideas in Learning-to-Learn through Interaction (NILLI) at EMNLP 2021 and 2022. He also serves on the program committees of top NLP and AI conferences -- EMNLP, ACL, NeurIPS, ICML, AAAI, NAACL, EACL, and COLING.

Before joining Mila/McGill, Prasanna obtained his Master's (by Thesis) in Computer Science from the Department of Computer Science and Engineering, Indian Institute of Technology Madras. He was advised by Prof. Balaraman Ravindran on his research in transfer learning for reinforcement learning and theoretical reinforcement learning. He spent Summer '15 at Duke University working on finite-episode exploration strategies with Prof. George Konidaris.

News:

Look out for updates on the Novel Ideas in Learning-to-Learn through Interaction workshop that I will be co-chairing at EMNLP 2022. here
Check out my recent work on Memory Augmented Optimizers for Deep Learning at ICLR 2022. here
Check out my recent work, UnNatural Language Inference, which won an Outstanding Paper Award at ACL 2021. here
Check out our recent work at SIGDial 2021: Do Encoder Representations of Generative Dialogue Models Encode Sufficient Information about the Task? here
Check out our recent work at SIGDial 2021: A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic Loss. here