Behavior Learning
Research Team

Research Summary

Recent AI-based dialogue systems, such as smart speakers, can provide various functions, including information retrieval, in response to voice commands. In this context, interactive robots are expected to become systems that coexist with humans; however, a communication mechanism that exploits the various modalities humans use, such as gestures and eye gaze, has not yet been realized. Our team aims to develop robots that can interact with humans in daily life the way humans do, by measuring human behaviors during dialogue and constructing deep generative models of the interaction behaviors.

Main Research Fields
  • Human-robot interaction
  • Machine learning
Keywords
  • Reinforcement learning
  • Human-robot interaction
  • Communicative robot
  • Motion generation
  • Intrinsic motivation
Research themes
  • Reinforcement learning for robots interacting with humans.
  • Automatic generation of human-like, natural motions for communicative robots based on human cognition.
  • Mechanisms for operating interactive robots through communication.
  • Autonomous robots operating in everyday environments.

Yutaka Nakamura

History

2004
Nara Institute of Science and Technology
2006
Osaka University
2020
RIKEN

Members

Huthaifa Ahmad
Research Scientist
Yuya Okadome
Visiting Scientist
Yazan Alkatshah
Research Part-time Worker I and Student Trainee
Chenfei Xu
Research Part-time Worker II and Student Trainee

Former members

Zhichao Chen
Technical Staff I (2022/10-2023/03)
Yusuke Nishimura
Research Part-time Worker I and Student Trainee (2021/01-2023/03)
Shota Takashiro
Administrative Part-time Worker II (2021/12-2022/03)
Nayuta Arai
Research Intern (2023/09)
Sara Pia Calvitto
Research Intern (2023/08-2023/11)

Research results

Learning from human demonstration

For real-world human-robot interaction (HRI), it is difficult to hand-craft all the rules a robot needs because the situations it faces are so diverse, so deep learning approaches are a natural way to help robots acquire HRI skills. Here, we demonstrate a practical HRI application in which an android robot acts as a shopping-mall receptionist that encourages customers to perform hand hygiene with a hand sanitizer. By learning from human demonstrations through inverse reinforcement learning (IRL), the android achieves performance competitive with a well-trained human operator. In addition, we construct an IRL-to-RL transition framework to further improve the android; with extra data sampling, its performance eventually surpasses that of the human expert.
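A rough sketch of this two-stage idea is given below in Python/PyTorch: a GAIL-style discriminator is trained to tell human demonstrations apart from the policy's own behavior and is used as a learned reward, after which the policy is improved by gradient ascent on that reward. The network sizes, the synthetic stand-in for the demonstration data, and the simple deterministic policy update are illustrative assumptions only, not the setup used for the android receptionist.

import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 2   # assumed sizes of the interaction features

# Learned reward r(s, a): scores how "human-like" a state-action pair looks.
reward_net = nn.Sequential(
    nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
# Deterministic policy pi(s): maps the observed state to a robot action.
policy_net = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, ACTION_DIM))

opt_r = torch.optim.Adam(reward_net.parameters(), lr=1e-3)
opt_p = torch.optim.Adam(policy_net.parameters(), lr=1e-3)

# Synthetic stand-in for recorded operator demonstrations (state-action pairs).
demo_s = torch.randn(256, STATE_DIM)
demo_a = torch.randn(256, ACTION_DIM)

bce = nn.functional.binary_cross_entropy_with_logits
for step in range(200):
    # "IRL" phase: train the reward to separate demonstrations from policy samples.
    s = torch.randn(256, STATE_DIM)              # stand-in for observed states
    with torch.no_grad():
        a = policy_net(s) + 0.1 * torch.randn(256, ACTION_DIM)
    logit_demo = reward_net(torch.cat([demo_s, demo_a], dim=1))
    logit_pi = reward_net(torch.cat([s, a], dim=1))
    d_loss = bce(logit_demo, torch.ones_like(logit_demo)) + \
             bce(logit_pi, torch.zeros_like(logit_pi))
    opt_r.zero_grad(); d_loss.backward(); opt_r.step()

    # "RL" phase: push the policy toward actions the learned reward favors.
    p_loss = -reward_net(torch.cat([s, policy_net(s)], dim=1)).mean()
    opt_p.zero_grad(); p_loss.backward(); opt_p.step()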


Selected Publications

  1. Yuya Okadome, Kenshiro Ata, Hiroshi Ishiguro, Yutaka Nakamura
    "Self-supervised learning focusing on temporal consistency for behavior prediction during dialogue"
    Transactions of the Japanese Society for Artificial Intelligence (2022)
  2. Zhichao Chen, Yutaka Nakamura, Hiroshi Ishiguro
    "Android As a Receptionist in a Shopping Mall Using Inverse Reinforcement Learning"
    IEEE/RSJ International Conference on Intelligent Robots and Systems (2022)
  3. Naoki Ise, Yoshihiro Nakata, Yutaka Nakamura, Hiroshi Ishiguro
    "Gaze motion and subjective workload assessment while performing a task walking hand in hand with a mobile robot"
    International Journal of Social Robotics (2022)
  4. Huthaifa Ahmad, Yutaka Nakamura
    "A robot that is always ready for safe physical interactions"
    Interdisciplinary Conference on Mechanics, Computers and Electrics (ICMECE 2022)
  5. Satoshi Yagi, Yoshihiro Nakata, Yutaka Nakamura, Hiroshi Ishiguro
    "Can an android's posture and movement discriminate against the ambiguous emotion perceived from its facial expressions?"
    PLOS ONE (2021)
  6. Yusuke Nishimura, Yutaka Nakamura, and Hiroshi Ishiguro
    "Human interaction behavior modeling using Generative Adversarial Networks"
    Neural Networks, 132, pp. 521-531 (2020)
  7. Mofei Li, Yutaka Nakamura, and Hiroshi Ishiguro
    "Choice modeling using dot-product attention mechanism"
    Artificial Life and Robotics (2020)
  8. Ahmed Hussain Qureshi, Yutaka Nakamura, Yuichiro Yoshikawa, and Hiroshi Ishiguro
    "Intrinsically motivated reinforcement learning for human-robot interaction in the real-world"
    Neural Networks, 107, pp. 23-33 (2018)
  9. Yuya Okadome, Yutaka Nakamura, and Hiroshi Ishiguro
    "A confidence-based roadmap using Gaussian process regressio"
    Autonomous Robots, 41(4) (2017)
  10. Yutaka Nakamura, Takeshi Mori, Masa-aki Sato, and Shin Ishii
    "Reinforcement learning for a biped robot based on a CPG-actor-critic method"
    Neural Networks, 20(6), pp. 723-735 (2007)

Contact Information

yutaka.nakamura [at] riken.jp
