Object–Object Interaction Affordance Learning
Keywords: action recognition, robot learning, learning from demonstration, object classification, graphical model
This paper presents a novel object–object affordance learning approach that enables intelligent robots to learn the interactive functionalities of objects from human demonstrations in everyday environments. Instead of considering a single object, we model the interactive motions between paired objects in a human–object–object way. The innate interaction-affordance knowledge of the paired objects is learned from a labeled training dataset that contains a set of relative motions of the paired objects, human actions, and object labels. The learned knowledge is represented with a Bayesian Network, which can be used both to improve the recognition reliability of objects and human actions and to generate proper manipulation motions for a robot once a pair of objects is recognized. This paper also presents an image-based visual servoing approach that uses the learned motion features of the interaction affordance as control goals to drive a robot to perform manipulation tasks.
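The abstract describes a Bayesian Network that fuses paired-object labels, human actions, and relative motion to make recognition more reliable. The sketch below is not the authors' model; it is a minimal, hypothetical illustration of the idea, assuming a naive factorization in which the action node is the parent of both object labels and the relative-motion feature, with made-up discrete conditional probability tables.

```python
# Hypothetical sketch of joint recognition with a small Bayesian Network.
# Assumed factorization: P(a, o1, o2, m) = P(a) * P(o1|a) * P(o2|a) * P(m|a)
# All variable names and probabilities are illustrative, not from the paper.

P_action = {"pour": 0.5, "stir": 0.5}                      # prior over actions
P_obj1 = {"pour": {"kettle": 0.8, "spoon": 0.2},           # P(first object | action)
          "stir": {"kettle": 0.3, "spoon": 0.7}}
P_obj2 = {"pour": {"cup": 0.9, "bowl": 0.1},               # P(second object | action)
          "stir": {"cup": 0.4, "bowl": 0.6}}
P_motion = {"pour": {"tilt": 0.9, "circle": 0.1},          # P(relative motion | action)
            "stir": {"tilt": 0.2, "circle": 0.8}}

def action_posterior(o1, o2, motion):
    """Posterior P(action | o1, o2, motion) by direct enumeration."""
    scores = {a: P_action[a] * P_obj1[a][o1] * P_obj2[a][o2] * P_motion[a][motion]
              for a in P_action}
    z = sum(scores.values())
    return {a: s / z for a, s in scores.items()}

# Observing a kettle–cup pair with a tilting relative motion strongly
# supports "pour" over "stir" under these hypothetical tables.
posterior = action_posterior("kettle", "cup", "tilt")
```

The same joint model can be queried in the other direction (e.g., the most probable object label given the action and motion), which mirrors how the paper's learned network supports both object and action recognition.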
Citation / Publisher Attribution
Robotics and Autonomous Systems, vol. 62, issue 4, pp. 487-496.
Scholar Commons Citation
Sun, Yu; Ren, Shaogang; and Lin, Yun, "Object–Object Interaction Affordance Learning" (2014). Computer Science and Engineering Faculty Publications. 52.