TidyBot: Personalized Robot Assistance with Large Language Models

July 2023

tl;dr: Use an LLM to summarize user preferences from a few examples and apply them to new objects via few-shot learning. This clever CoT prompt engineering provides a great way to personalize/customize an LLM.

Overall impression

TidyBot achieves personalization of an LLM through few-shot summarization. In-context learning from a handful of examples enables user customization without fine-tuning the LLM. It is essentially a nice piece of prompt engineering that leverages the LLM's Chain-of-Thought ability. It is reasonable to enable customization of an LLM through prompt engineering rather than fine-tuning the base model.
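As a minimal sketch of this idea, the few-shot summarization can be framed as a prompt that lists observed placements and asks the LLM to compress them into general rules. The example objects, receptacles, and prompt wording below are illustrative assumptions, not the paper's exact prompt, and the actual LLM call is omitted:

```python
# Hypothetical few-shot preference-summarization prompt, TidyBot style.
# Observed (object, receptacle) pairs from one user; values are made up.
EXAMPLES = [
    ("yellow shirt", "drawer"),
    ("dark purple shirt", "drawer"),
    ("white socks", "closet"),
]

def build_summarization_prompt(examples):
    """Build a prompt asking the LLM to generalize placements into rules."""
    lines = ["objects and where the user put them:"]
    for obj, place in examples:
        lines.append(f"- {obj} -> {place}")
    lines.append("Summarize the user's preferences as general rules:")
    return "\n".join(lines)

prompt = build_summarization_prompt(EXAMPLES)
print(prompt)
# The prompt would then be sent to an LLM, which might answer with rules
# such as "put shirts in the drawer, put socks in the closet".
```

The same prompt scaffold can then be reused at deployment time: the summarized rules are prepended, and the LLM is asked where a novel object should go.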

The combination of summarization and open-vocabulary classification is critical to the autonomy of TidyBot. It enables the object classifier to work with a small set of generalized object categories. Summarization provides a good way to generalize.
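The open-vocabulary classification step can be pictured as nearest-neighbor matching in a joint embedding space (TidyBot uses CLIP for this; the tiny hand-made vectors and category names below are stand-ins, not real CLIP embeddings):

```python
import numpy as np

# Toy sketch: classify a novel object into the small set of summarized
# categories by cosine similarity of embeddings. All vectors are made up.
CATEGORIES = ["clothing", "toys", "recyclables"]
cat_vecs = np.array([[1.0, 0.1, 0.0],
                     [0.0, 1.0, 0.1],
                     [0.1, 0.0, 1.0]])

def classify(obj_vec):
    """Return the summarized category whose embedding is most similar."""
    sims = cat_vecs @ obj_vec / (
        np.linalg.norm(cat_vecs, axis=1) * np.linalg.norm(obj_vec))
    return CATEGORIES[int(np.argmax(sims))]

print(classify(np.array([0.9, 0.2, 0.0])))  # an embedding close to "clothing"
```

Because the categories come from the LLM's summary rather than a fixed label set, the classifier stays open-vocabulary: adding a new preference only adds a new text category.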

Yet another key question is how to close the data loop to continuously improve the LLM. This could be done through parameter-efficient fine-tuning of the LLM (LoRA, etc.). This is also a bit like the supervised fine-tuning (SFT) of ChatGPT.
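To make the parameter-efficiency point concrete, here is a minimal NumPy sketch of the LoRA idea: freeze the pretrained weight W and train only a low-rank update B @ A. The dimensions and the numbers below are illustrative, and a real implementation would use a library such as PEFT:

```python
import numpy as np

# LoRA sketch: y = W x + B A x, with only A and B trainable.
d, r = 1024, 8                            # hidden size and low rank (illustrative)
rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))           # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01    # trainable down-projection
B = np.zeros((d, r))                      # trainable up-projection, init to zero

def lora_forward(x):
    """Forward pass with the low-rank adapter added to the frozen weight."""
    return x @ W.T + x @ A.T @ B.T

full_params = W.size                      # parameters a full fine-tune would touch
lora_params = A.size + B.size             # parameters LoRA actually trains
print(full_params, lora_params)           # 1048576 vs 16384, ~64x fewer
```

Initializing B to zero means the adapter starts as a no-op, so training begins exactly from the pretrained model's behavior.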

The design of the benchmark dataset is quite informative and provides a best practice for robotics research.

Overall this makes a great high-school project.

Key ideas

Technical details