Object–Object Interaction Affordance Learning

Document Type

Article

Publication Date

4-2014

Keywords

action recognition, robot learning, learning from demonstration, object classification, graphical model

Digital Object Identifier (DOI)

https://doi.org/10.1016/j.robot.2013.12.005

Abstract

This paper presents a novel object–object affordance learning approach that enables intelligent robots to learn the interactive functionalities of objects from human demonstrations in everyday environments. Instead of considering a single object, we model the interactive motions between paired objects in a human–object–object way. The innate interaction-affordance knowledge of the paired objects is learned from a labeled training dataset that contains a set of relative motions of the paired objects, human actions, and object labels. The learned knowledge is represented with a Bayesian Network, which can be used to improve the recognition reliability of both objects and human actions and to generate proper manipulation motions for a robot once a pair of objects is recognized. This paper also presents an image-based visual servoing approach that uses the learned motion features of the interaction affordance as control goals to drive a robot through manipulation tasks.
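To make the abstract's core idea concrete, the following is a minimal sketch of a human–object–object affordance network in Python using pgmpy. The node set (Tool, Target, Motion), edge structure, state names, and all probabilities are illustrative assumptions, not the paper's actual model or learned parameters; it only shows how observing the relative motion of a paired object can sharpen the belief over an object label, the kind of mutual reinforcement the abstract describes.

```python
# Hypothetical human-object-object affordance model (not the paper's exact
# structure or parameters): paired-object labels condition their relative
# motion, and inference over the network refines object recognition.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Object labels of the pair, and their relative motion in a demonstration.
model = BayesianNetwork([("Tool", "Motion"), ("Target", "Motion")])

cpd_tool = TabularCPD("Tool", 2, [[0.5], [0.5]],
                      state_names={"Tool": ["pitcher", "spoon"]})
cpd_target = TabularCPD("Target", 2, [[0.5], [0.5]],
                        state_names={"Target": ["cup", "bowl"]})

# P(Motion | Tool, Target): a pitcher-cup pair mostly affords pouring,
# a spoon-bowl pair mostly affords stirring (made-up probabilities).
cpd_motion = TabularCPD(
    "Motion", 2,
    [[0.9, 0.6, 0.3, 0.1],   # pour
     [0.1, 0.4, 0.7, 0.9]],  # stir
    evidence=["Tool", "Target"], evidence_card=[2, 2],
    state_names={"Motion": ["pour", "stir"],
                 "Tool": ["pitcher", "spoon"],
                 "Target": ["cup", "bowl"]})

model.add_cpds(cpd_tool, cpd_target, cpd_motion)
assert model.check_model()

# Observing the relative motion updates the belief over the tool's label,
# mirroring how the learned network improves recognition reliability.
infer = VariableElimination(model)
print(infer.query(["Tool"], evidence={"Motion": "pour", "Target": "cup"}))
```

In this toy setup, conditioning on Motion = "pour" alongside Target = "cup" raises the posterior probability that the tool is a pitcher well above its uniform prior, which is the direction of inference the learned affordance knowledge supports.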

Was this content written or created while at USF?

Yes

Citation / Publisher Attribution

Robotics and Autonomous Systems, v. 62, issue 4, p. 487-496.
