Guiding Interaction Behaviors for Multi-modal Grounded Language Learning
Jesse Thomason, Jivko Sinapov, Raymond Mooney
Proceedings of the First Workshop on Language Grounding for Robotics, 20-24
2017, cited by 11

Prospection: Interpretable Plans from Language by Predicting the Future
Chris Paxton, Yonatan Bisk, Jesse Thomason, Arunkumar Byravan
IEEE International Conference on Robotics and Automation (ICRA), 6942-6948
2019, cited by 18

Improving Robot Success Detection Using Static Object Data
Rosario Scalise, Jesse Thomason, Yonatan Bisk, Siddhartha Srinivasa
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 4229-4235
2019, cited by 11

Augmenting Knowledge through Statistical, Goal-oriented Human-Robot Dialog
Saeid Amiri, Sujay Bajracharya, Cihangir Goktolgal, Jesse Thomason
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 744-750
2019

Jointly Improving Parsing and Perception for Natural Language Commands through Human-Robot Dialog
Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Nick Walker
Journal of Artificial Intelligence Research 67, 327-374
2020, cited by 16

Exploring Multi-dimensional Data on Mobile Devices with Single Hand Motion and Orientation Gestures
Jesse Thomason, Jingtao Wang
International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI), 173-176
2012, cited by 3

RREx-BoT: Remote Referring Expressions with a Bag of Tricks
Gunnar A. Sigurdsson, Jesse Thomason, Gaurav S. Sukhatme, Robinson Piramuthu
arXiv preprint arXiv:2301.12614
2023

Retrospectives on the Embodied AI Workshop
Matt Deitke, Dhruv Batra, Yonatan Bisk, Tommaso Campari
arXiv preprint arXiv:2210.06849
2022, cited by 6

Language Grounding with 3D Objects
Jesse Thomason, Mohit Shridhar, Yonatan Bisk, Chris Paxton
Conference on Robot Learning (CoRL), 1691-1701
2022, cited by 24

TEACh: Task-driven Embodied Agents That Chat
Aishwarya Padmakumar, Jesse Thomason, Ayush Shrivastava, Patrick Lange
Proceedings of the AAAI Conference on Artificial Intelligence 36 (2), 2017-2025
2022, cited by 41

Multimodal Speech Recognition for Language-Guided Embodied Agents
Allen Chang, Xiaoyuan Zhu, Aarav Monga, Seoho Ahn
arXiv preprint arXiv:2302.14030
2023

Vision-and-Language Navigation: A Survey of Tasks, Methods, and Future Directions
Jing Gu, Eliana Stefani, Qi Wu, Jesse Thomason
arXiv preprint arXiv:2203.12667
2022, cited by 27

CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks
Tejas Srinivasan, Ting-Yun Chang, Leticia Pinto Alva, Georgios Chochlakis
Advances in Neural Information Processing Systems 35, 29440-29453
2022, cited by 10

Embodied BERT: A Transformer Model for Embodied, Language-guided Visual Task Completion
Alessandro Suglia, Qiaozi Gao, Jesse Thomason, Govind Thattai
arXiv preprint arXiv:2108.04927
2021, cited by 27

Efficient End-to-End Visual Document Understanding with Rationale Distillation
Wang Zhu, Alekh Agarwal, Mandar Joshi, Robin Jia
arXiv preprint arXiv:2311.09612
2023, cited by 1

ProgPrompt: Generating Situated Robot Task Plans Using Large Language Models
Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal
2023 IEEE International Conference on Robotics and Automation (ICRA), 11523-11530
2023, cited by 388

ProgPrompt: Program Generation for Situated Robot Task Planning Using Large Language Models
Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal
Autonomous Robots 47 (8), 999-1012
2023, cited by 18
Selective" Selective Prediction": Reducing Unnecessary Abstention in Vision-Language Reasoning

Tejas Srinivasan , Jack Hessel , Tanmay Gupta , Bill Yuchen Lin
arXiv preprint arXiv:2402.15610

2024

Iterative Vision-and-Language Navigation
Jacob Krantz, Shurjo Banerjee, Wang Zhu, Jason Corso
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14921-14930
2023, cited by 10

ALFRED-L: Investigating the Role of Language for Action Learning in Interactive Visual Environments
Arjun Akula, Spandana Gella, Aishwarya Padmakumar, Mahdi Namazifar
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 9369-9378
2022, cited by 2