NILLI
Novel Ideas in Learning-to-Learn through Interaction

Workshop @ EMNLP 2023

Collaborative dialogues [1, 2, 3] with automated systems have become ubiquitous: from setting an alarm to planning one's day, language interaction is now a common interface. With recent advances in dialogue research [1, 2, 3], embodied learning [4, 5, 6], and the use of language as a mode of instruction for learning agents [4, 7, 8], there is now scope for domains in which agents start with primitive task knowledge and systematically acquire more through a continual interact-and-learn procedure over verbal and non-verbal interactions [9, 10, 11, 12]. The research direction of building interactive learning agents [4, 7, 8, 13] opens up richer forms of interaction: taking instructions as a pragmatic listener, asking for more samples, generating rationales for predictions, interacting to interpret learning dynamics, or even identifying or modifying a new task, all of which can be used to build effective learning-to-learn mechanisms. In this way, through a verbal/non-verbal interactive medium, this interdisciplinary field unifies the research paradigms of lifelong learning, natural language processing, embodied learning, reinforcement learning, robot learning, and multi-modal learning toward building interactive and interpretable AI.


Speakers

Alison Smith-Renner
Research Manager,
Human Centered AI/ML
Guanzhi Wang
Ph.D. Student,
Caltech
Yoav Artzi
Associate Professor,
Cornell University
Daniel Fried
Assistant Professor,
Carnegie Mellon University


Call For Papers

We call for novel, unpublished, or in-review works on the topics listed in the groups below:

  • Language at the Fore
    • Novel environments for language understanding through interaction.
    • Language-based reinforcement learning.
    • Learning representations in grounded language.
    • Language-based interaction methods in interdisciplinary research.
  • Machine Learning with Interaction
    • Modeling multi-modal and language interactions to aid continual learning.
    • Interactive training for embodied agents.
    • Early and negative results on learning to solve tasks through interaction.
    • Non-verbal/Verbal interactive frameworks.
  • Community Impact of Interactive Agents
    • Frontiers in building interactive agents (data, frameworks, open problems).
    • Applications of interactive learning in interdisciplinary research.
    • Security and ethical challenges in interaction based learning.

Important Dates

To be announced

Submission Instructions

We have sent out invitations to the authors of the selected Findings papers. Please reach out to us if you have any issues.

Schedule

Opening Remarks 08:30-08:35


Invited Talk 1: Guanzhi Wang 08:35-09:20

Open-Ended Embodied Agents with Internet-Scale Knowledge and Large Language Models


Invited Talk 2: Yoav Artzi 09:20-10:05

Natural Language Learning via Interaction


Invited Talk 3: Alison Smith-Renner 10:05-10:50

Learning about human interaction w/ AI: a recipe for human-centered design of AI systems


Coffee Break 10:50-11:15


Invited Talk 4: Daniel Fried 11:15-12:00

Interacting with LLMs for Grounded Tasks


Break 12:00-13:30


Lightning Talks (Session 1: 12 talks) 13:30-15:30

  • Improving Visually Grounded Continual Language Learning with Selective Module Specialization (Kyra Ahrens)

  • A Zero-Shot Language Agent for Computer Control with Structured Reflection (Tao Li)

  • Task-Attentive Transformer Architecture for Continual Learning of Vision-and-Language Tasks Using Knowledge Distillation (Yuliang Cai)

  • Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models (Haoran Wang)

  • Does Listener Gaze in Face-to-Face Interaction Follow the Entropy Rate Constancy Principle: An Empirical Study (Yu Wang)

  • PlugMed: Improving Specificity in Patient-Centered Medical Dialogue Generation using In-Context Learning (Chengfeng Dou)

  • Dior-CVAE: Diffusion Priors in Variational Dialog Generation (Tianyu Yang)

  • GATE: Grounded Argument and Task Extraction for Embodied Agents (Tapas Nayak)

  • NormLens: Reading Books is Great, But Not if You Are Driving! Visually Grounded Reasoning about Defeasible Commonsense Norms (Seungju Han)

  • CLIN: A Continually Learning Language Agent for Rapid Task Adaptation and Generalization (Bodhisattwa Prasad Majumder)

  • DialGuide: Aligning Dialogue Model Behavior with Developer Guidelines (Di Jin)

  • MathDial: A Dialogue Tutoring Dataset with Rich Pedagogical Properties Grounded in Math Reasoning Problems (Jakub Macina)


Break 15:30-16:00


Lightning Talks (Session 2: 8 talks) 16:00-17:20

  • RSVP: Customer Intent Detection via Agent Response Contrastive and Generative Pre-Training (Yu-Chien Tang)

  • Long-Horizon Dialogue Understanding for Role Identification in the Game of Avalon with Large Language Models (Huao Li)

  • Large Language Models as Source Planner for Personalized Knowledge-grounded Dialogues (Hongru Wang)

  • Time-Considerable Dialogue Models via Reranking by Time Dependency (Yuiko Tsunomori)

  • Improving Conversational Recommendation Systems via Bias Analysis and Language-Model-Enhanced Data Augmentation (Xi Wang)

  • Multi-User MultiWOZ: Task-Oriented Dialogues among Multiple Users (Yohan Jo)

  • STEER: Unified Style Transfer with Expert Reinforcement (Skyler Hallinan)

  • Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models (Amirhossein Kazemnejad)


Closing Remarks 17:20-17:30


Papers Accepted

  • Improving Visually Grounded Continual Language Learning with Selective Module Specialization,
    Kyra Ahrens, Lennart Bengtson, Jae Hee Lee and Stefan Wermter

  • A Zero-Shot Language Agent for Computer Control with Structured Reflection,
    Tao Li, Gang Li, Zhiwei Deng, Bryan Wang and Yang Li

  • Task-Attentive Transformer Architecture for Continual Learning of Vision-and-Language Tasks Using Knowledge Distillation,
    Yuliang Cai, Jesse Thomason and Mohammad Rostami

  • Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models,
    Haoran Wang and Kai Shu

  • Does Listener Gaze in Face-to-Face Interaction Follow the Entropy Rate Constancy Principle: An Empirical Study,
    Yu Wang and Hendrik Buschmeier

  • PlugMed: Improving Specificity in Patient-Centered Medical Dialogue Generation using In-Context Learning,
    Chengfeng Dou, Zhi Jin, Wenpin Jiao, Haiyan Zhao, Yongqiang Zhao and Zhengwei Tao

  • Dior-CVAE: Diffusion Priors in Variational Dialog Generation,
    Tianyu Yang, Thy Thy Tran and Iryna Gurevych

  • GATE: Grounded Argument and Task Extraction for Embodied Agents,
    Chayan Sarkar, Avik Mitra, Pradip Pramanick and Tapas Nayak

  • NormLens: Reading Books is Great, But Not if You Are Driving! Visually Grounded Reasoning about Defeasible Commonsense Norms,
    Seungju Han, Junhyeok Kim, Jack Hessel, Liwei Jiang, Jiwan Chung, Yejin Son, Yejin Choi and Youngjae Yu

  • CLIN: A Continually Learning Language Agent for Rapid Task Adaptation and Generalization,
    Bodhisattwa Prasad Majumder, Bhavana Dalvi Mishra, Peter Jansen, Oyvind Tafjord, Niket Tandon, Li Zhang, Chris Callison-Burch and Peter Clark

  • DialGuide: Aligning Dialogue Model Behavior with Developer Guidelines,
    Prakhar Gupta, Yang Liu, Di Jin, Behnam Hedayatnia, Spandana Gella, Sijia Liu, Patrick L. Lange, Julia Hirschberg and Dilek Hakkani-Tur

  • MathDial: A Dialogue Tutoring Dataset with Rich Pedagogical Properties Grounded in Math Reasoning Problems,
    Jakub Macina, Nico Daheim, Sankalan Pal Chowdhury, Tanmay Sinha, Manu Kapur, Iryna Gurevych and Mrinmaya Sachan

  • RSVP: Customer Intent Detection via Agent Response Contrastive and Generative Pre-Training,
    Yu-Chien Tang, Wei-Yao Wang, An-Zi Yen and Wen-Chih Peng

  • Long-Horizon Dialogue Understanding for Role Identification in the Game of Avalon with Large Language Models,
    Simon Stepputtis, Joseph Campbell, Yaqi Xie, Zhengyang Qi, Wenxin Sharon Zhang, Ruiyi Wang, Sanketh Rangreji, Charles Michael Lewis and Katia P. Sycara

  • Large Language Models as Source Planner for Personalized Knowledge-grounded Dialogues,
    Hongru Wang, Minda Hu, Yang Deng, Rui Wang, Fei Mi, Weichao Wang, Yasheng Wang, Wai-Chung Kwan, Irwin King and Kam-Fai Wong

  • Time-Considerable Dialogue Models via Reranking by Time Dependency,
    Yuiko Tsunomori, Masakazu Ishihata and Hiroaki Sugiyama

  • Improving Conversational Recommendation Systems via Bias Analysis and Language-Model-Enhanced Data Augmentation,
    Xi Wang, Hossein A. Rahmani, Jiqun Liu and Emine Yilmaz

  • Multi-User MultiWOZ: Task-Oriented Dialogues among Multiple Users,
    Yohan Jo, Xinyan Zhao, Arijit Biswas, Nikoletta Basiou, Vincent Auvray, Nikolaos Malandrakis, Angeliki Metallinou and Alexandros Potamianos

  • STEER: Unified Style Transfer with Expert Reinforcement,
    Skyler Hallinan, Faeze Brahman, Ximing Lu, Jaehun Jung, Sean Welleck and Yejin Choi

  • Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models,
    Amirhossein Kazemnejad, Mehdi Rezagholizadeh, Prasanna Parthasarathi and Sarath Chandar

Organizing Committee

Prasanna Parthasarathi
Senior Researcher,
Huawei Noah's Ark Lab
Koustuv Sinha
Research Scientist,
FAIR
Khyathi Raghavi Chandu
Research Scientist,
Allen Institute for AI
Chinnadhurai Sankar
Research Lead,
SliceX AI



Adina Williams
Research Scientist,
FAIR
Sarath Chandar
Assistant Professor,
École Polytechnique de Montréal
Marc-Alexandre Côté
Senior Researcher,
Microsoft Research
Joelle Pineau
Professor,
McGill University


Program Committee

To be announced