2017


Robotic Motion Learning Framework to Promote Social Engagement

Burns, R.

The George Washington University, August 2017 (mastersthesis)

Abstract
This paper discusses a novel framework designed to increase human-robot interaction through robotic imitation of the user's gestures. The setup consists of a humanoid robotic agent that socializes and plays games with the user. For the experimental group, the robot also imitates one of the user's novel gestures during a play session. We hypothesize that the robot's use of imitation will increase the user's openness towards engaging with the robot. Preliminary results from a pilot study of 12 subjects are promising: post-imitation, experimental subjects displayed a more positive emotional state, showed more instances of mood contagion towards the robot, and attributed a higher level of autonomy to the robot than their control-group counterparts. These results point to increased user interest in engagement, fueled by personalized imitation during interaction.


link (url) [BibTex]

Robot Learning

Peters, J., Lee, D., Kober, J., Nguyen-Tuong, D., Bagnell, J., Schaal, S.

In Springer Handbook of Robotics, pages: 357-394, 15, 2nd, (Editors: Siciliano, Bruno and Khatib, Oussama), Springer International Publishing, 2017 (inbook)


Project Page [BibTex]

2009


Synchronized Oriented Mutations Algorithm for Training Neural Controllers

Berenz, V., Suzuki, K.

In Advances in Neuro-Information Processing: 15th International Conference, ICONIP 2008, Auckland, New Zealand, November 25-28, 2008, Revised Selected Papers, Part II, pages: 244-251, Springer Berlin Heidelberg, Berlin, Heidelberg, 2009 (inbook)


link (url) DOI [BibTex]



Integration of Visual Cues for Robotic Grasping

Bergström, N., Bohg, J., Kragic, D.

In Computer Vision Systems, 5815, pages: 245-254, Lecture Notes in Computer Science, Springer Berlin Heidelberg, 2009 (incollection)

Abstract
In this paper, we propose a method that generates grasping actions for novel objects based on visual input from a stereo camera. We integrate two methods that are advantageous in predicting either how to grasp an object or where to apply a grasp. The first reconstructs a wire-frame object model through curve matching; elementary grasping actions can be associated with parts of this model. The second predicts grasping points in a 2D contour image of the object. By integrating the information from the two approaches, we generate a sparse set of full grasp configurations of good quality. We demonstrate our approach integrated in a vision system, both for objects with complex shapes and in cluttered scenes.
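The integration step the abstract describes — pairing "where" cues (scored grasp points from the 2D contour) with "how" cues (elementary actions from the wire-frame model) to obtain a sparse set of ranked full grasp configurations — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the types `GraspPoint`, `GraspAction`, and the scoring scheme are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class GraspPoint:
    """'Where' cue: a 2D contour point with a predicted quality score (hypothetical)."""
    x: float
    y: float
    score: float

@dataclass
class GraspAction:
    """'How' cue: an elementary grasp associated with a wire-frame model part (hypothetical)."""
    approach: str  # e.g. a label for the approach direction

def integrate(points, actions, top_k=2):
    """Pair every contour grasp point with every elementary action and
    keep only the top_k highest-scoring full grasp configurations,
    yielding a sparse, ranked candidate set."""
    configs = [(p.score, p, a) for p in points for a in actions]
    configs.sort(key=lambda c: c[0], reverse=True)
    return configs[:top_k]
```

The `top_k` cutoff is what keeps the combined set sparse: rather than handing every point-action pairing to the robot, only the best-scored full configurations survive.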


pdf link (url) DOI [BibTex]


Bayesian Methods for Autonomous Learning Systems (PhD Thesis)

Ting, J.

Department of Computer Science, University of Southern California, Los Angeles, CA, 2009, clmc (phdthesis)

am

PDF [BibTex]