2019


Hierarchical Task-Parameterized Learning from Demonstration for Collaborative Object Movement

Hu, S., Kuchenbecker, K. J.

Applied Bionics and Biomechanics, Article ID 9765383, December 2019 (article)

Abstract
Learning from demonstration (LfD) enables a robot to emulate natural human movement instead of merely executing preprogrammed behaviors. This article presents a hierarchical LfD structure of task-parameterized models for object movement tasks, which are ubiquitous in everyday life and could benefit from robotic support. Our approach uses the task-parameterized Gaussian mixture model (TP-GMM) algorithm to encode sets of demonstrations in separate models that each correspond to a different task situation. The robot then maximizes its expected performance in a new situation by either selecting a good existing model or requesting new demonstrations. Compared to a standard implementation that encodes all demonstrations together for all test situations, the proposed approach offers four advantages. First, a simply defined distance function can be used to estimate test performance by calculating the similarity between a test situation and the existing models. Second, the proposed approach can improve generalization, e.g., better satisfying the demonstrated task constraints and speeding up task execution. Third, because the hierarchical structure encodes each demonstrated situation individually, a wider range of task situations can be modeled in the same framework without deteriorating performance. Last, adding or removing demonstrations incurs low computational load, and thus, the robot’s skill library can be built incrementally. We first instantiate the proposed approach in a simulated task to validate these advantages. We then show that the advantages transfer to real hardware for a task where naive participants collaborated with a Willow Garage PR2 robot to move a handheld object. For most tested scenarios, our hierarchical method achieved significantly better task performance and subjective ratings than both a passive model with only gravity compensation and a single TP-GMM encoding all demonstrations.
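The model-selection idea described in this abstract — keep one model per demonstrated situation and either pick the closest existing model by a simple distance over task parameters or request new demonstrations — can be sketched as a toy in Python. The class name, Euclidean distance, and threshold below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

class SkillLibrary:
    """Toy skill library: one model per demonstrated situation."""

    def __init__(self, threshold=1.0):
        self.situations = []   # task parameters of demonstrated situations
        self.models = []       # one model (e.g., a TP-GMM) per situation
        self.threshold = threshold

    def add(self, situation, model):
        # Incremental: adding or removing a demonstration touches one entry.
        self.situations.append(np.asarray(situation, dtype=float))
        self.models.append(model)

    def select(self, test_situation):
        # Return the closest model, or None if no stored situation is
        # similar enough (the robot would then request new demonstrations).
        test = np.asarray(test_situation, dtype=float)
        dists = [np.linalg.norm(test - s) for s in self.situations]
        best = int(np.argmin(dists))
        if dists[best] > self.threshold:
            return None
        return self.models[best]

lib = SkillLibrary(threshold=1.0)
lib.add([0.0, 0.0], "model_A")
lib.add([2.0, 2.0], "model_B")
print(lib.select([0.1, -0.1]))  # close to the first situation -> model_A
print(lib.select([5.0, 5.0]))   # far from all situations -> None
```

The low cost of `add` is what makes the skill library incremental, as the abstract's fourth advantage notes.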

DOI [BibTex]


Low-Hysteresis and Low-Interference Soft Tactile Sensor Using a Conductive Coated Porous Elastomer and a Structure for Interference Reduction

Park, K., Kim, S., Lee, H., Park, I., Kim, J.

Sensors and Actuators A: Physical, 295, pages: 541-550, August 2019 (article)

Abstract
The need for soft whole-body tactile sensors is emerging. Piezoresistive materials are advantageous for making large tactile sensors, but their hysteresis is a major drawback and must be attenuated to make a practical piezoresistive soft tactile sensor. In this paper, we introduce a low-hysteresis and low-interference soft tactile sensor using a conductive coated porous elastomer and a structure for interference reduction (grooves). The developed sensor exhibits low hysteresis because its transduction mechanism is dominated by contact between the conductive coated surfaces. In a cyclic loading experiment with different loading frequencies, the mechanical and piezoresistive hysteresis values of the sensor are less than 21.7% and 6.8%, respectively. The initial resistance change is found to be within 4% after the first loading cycle. To reduce the interference among the sensing points, we also propose a structure in which grooves are inserted between adjacent electrodes. This structure is implemented during the molding process, which is adopted to extend the porous tactile sensor to large-scale and facile fabrication. The effects of the structure are investigated with respect to the normalized design parameters ΘD, ΘW, and ΘT in a simulation, and the result is validated for samples with the same design parameters. An indentation experiment also shows that the structure designed for interference reduction effectively attenuates the interference of the sensor array, indicating that its spatial resolution is improved. As a result, the sensor exhibits low hysteresis and low interference simultaneously. This research can be used for many applications, such as robotic skin, grippers, and wearable devices.

DOI [BibTex]


Physical activity in non-ambulatory toddlers with cerebral palsy

Orlando, J. M., Pierce, S., Mohan, M., Skorup, J., Paremski, A., Bochnak, M., Prosser, L. A.

Research in Developmental Disabilities, 90, pages: 51-58, July 2019 (article)

Abstract
Background: Children with cerebral palsy are less likely to be physically active than their peers; however, there is limited evidence regarding self-initiated physical activity in toddlers who are not able, or who may never be able, to walk. Aims: The aim of this study was to measure self-initiated physical activity and its relationship to gross motor function and participation in non-ambulatory toddlers with cerebral palsy. Methods and procedures: Participants were between the ages of 1 and 3 years. Physical activity during independent floor-play at home was recorded using a tri-axial accelerometer worn on the child's thigh. The Gross Motor Function Measure-66 and the Child Engagement in Daily Life, a parent-reported questionnaire of participation, were administered. Outcomes and results: Data were analyzed from the twenty participants who recorded at least 90 min of floor-play (mean: 229 min), resulting in 4598 total floor-play minutes. The relationship between physical activity and gross motor function was not statistically significant (r = 0.20; p = 0.39), nor were the relationships between physical activity and participation (r = 0.05–0.09; p = 0.71–0.84). Conclusions and implications: The results suggest that physical activity during floor-play is not related to gross motor function or participation in non-ambulatory toddlers with cerebral palsy. Clinicians and researchers should independently measure physical activity, gross motor function, and participation.

DOI [BibTex]


Implementation of a 6-DOF Parallel Continuum Manipulator for Delivering Fingertip Tactile Cues

Young, E. M., Kuchenbecker, K. J.

IEEE Transactions on Haptics, 12(3):295-306, June 2019 (article)

Abstract
Existing fingertip haptic devices can deliver different subsets of tactile cues in a compact package, but we have not yet seen a wearable six-degree-of-freedom (6-DOF) display. This paper presents the Fuppeteer (short for Fingertip Puppeteer), a device that is capable of controlling the position and orientation of a flat platform, such that any combination of normal and shear force can be delivered at any location on any human fingertip. We build on our previous work of designing a parallel continuum manipulator for fingertip haptics by presenting a motorized version in which six flexible Nitinol wires are actuated via independent roller mechanisms and proportional-derivative controllers. We evaluate the settling time and end-effector vibrations observed during system responses to step inputs. After creating a six-dimensional lookup table and adjusting simulated inputs using measured Jacobians, we show that the device can make contact with all parts of the fingertip with a mean error of 1.42 mm. Finally, we present results from a human-subject study. A total of 24 users discerned 9 evenly distributed contact locations with an average accuracy of 80.5%. Translational and rotational shear cues were identified reasonably well near the center of the fingertip and more poorly around the edges.

DOI Project Page [BibTex]


How Does It Feel to Clap Hands with a Robot?

Fitter, N. T., Kuchenbecker, K. J.

International Journal of Social Robotics, 12(1):113-127, April 2019 (article)

Abstract
Future robots may need lighthearted physical interaction capabilities to connect with people in meaningful ways. To begin exploring how users perceive playful human–robot hand-to-hand interaction, we conducted a study with 20 participants. Each user played simple hand-clapping games with the Rethink Robotics Baxter Research Robot during a 1-h-long session involving 24 randomly ordered conditions that varied in facial reactivity, physical reactivity, arm stiffness, and clapping tempo. Survey data and experiment recordings demonstrate that this interaction is viable: all users successfully completed the experiment and mentioned enjoying at least one game without prompting. Hand-clapping tempo was highly salient to users, and human-like robot errors were more widely accepted than mechanical errors. Furthermore, perceptions of Baxter varied in the following statistically significant ways: facial reactivity increased the robot’s perceived pleasantness and energeticness; physical reactivity decreased pleasantness, energeticness, and dominance; higher arm stiffness increased safety and decreased dominance; and faster tempo increased energeticness and increased dominance. These findings can motivate and guide roboticists who want to design social–physical human–robot interactions.

DOI [BibTex]


A Robustness Analysis of Inverse Optimal Control of Bipedal Walking

Rebula, J. R., Schaal, S., Finley, J., Righetti, L.

IEEE Robotics and Automation Letters, 4(4):4531-4538, 2019 (article)

DOI [BibTex]


Autonomous Identification and Goal-Directed Invocation of Event-Predictive Behavioral Primitives

Gumbsch, C., Butz, M. V., Martius, G.

IEEE Transactions on Cognitive and Developmental Systems, 2019 (article)

Abstract
Voluntary behavior of humans appears to be composed of small, elementary building blocks or behavioral primitives. While this modular organization seems crucial for the learning of complex motor skills and the flexible adaptation of behavior to new circumstances, the problem of learning meaningful, compositional abstractions from sensorimotor experiences remains an open challenge. Here, we introduce a computational learning architecture, termed surprise-based behavioral modularization into event-predictive structures (SUBMODES), that explores behavior and identifies the underlying behavioral units completely from scratch. The SUBMODES architecture bootstraps sensorimotor exploration using a self-organizing neural controller. While exploring the behavioral capabilities of its own body, the system learns modular structures that predict the sensorimotor dynamics and generate the associated behavior. In line with recent theories of event perception, the system uses unexpected prediction error signals, i.e., surprise, to detect transitions between successive behavioral primitives. We show that, when applied to two robotic systems with completely different body kinematics, the system manages to learn a variety of complex behavioral primitives. Moreover, after initial self-exploration the system can use its learned predictive models progressively more effectively for invoking model predictive planning and goal-directed control in different tasks and environments.
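The surprise-driven segmentation described in this abstract — flagging a transition between primitives wherever prediction error spikes above what the recent error statistics would predict — can be illustrated with a minimal sketch. The constant one-step predictor, window length, and threshold rule below are stand-ins of my own, not the SUBMODES architecture:

```python
import numpy as np

def detect_transitions(signal, window=5, k=3.0):
    """Flag indices where the one-step prediction error is 'surprising',
    i.e., exceeds k standard deviations above the recent mean error."""
    errors = np.abs(np.diff(signal))  # trivial predictor: next == current
    transitions = []
    for t in range(window, len(errors)):
        recent = errors[t - window:t]
        thresh = recent.mean() + k * recent.std() + 1e-8
        if errors[t] > thresh:
            transitions.append(t + 1)  # index in the original signal
    return transitions

# Two toy "primitives": a flat segment, then a ramp, joined by a jump.
signal = np.concatenate([np.zeros(20), 5.0 + 0.1 * np.arange(20)])
print(detect_transitions(signal))  # -> [20], the segment boundary
```

The point of the running threshold is that a steady ramp stays unsurprising once its errors dominate the window, so only the boundary between the two behaviors is flagged.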

arXiv PDF video link (url) DOI Project Page [BibTex]


Rigid vs compliant contact: an experimental study on biped walking

Khadiv, M., Moosavian, S. A. A., Yousefi-Koma, A., Sadedel, M., Ehsani-Seresht, A., Mansouri, S.

Multibody System Dynamics, 45(4):379-401, 2019 (article)

DOI [BibTex]


Even Delta-Matroids and the Complexity of Planar Boolean CSPs

Kazda, A., Kolmogorov, V., Rolinek, M.

ACM Transactions on Algorithms, 15(2), Article 22, Special Issue on SODA'17 and Regular Papers, 2019 (article)

DOI [BibTex]


Machine Learning for Haptics: Inferring Multi-Contact Stimulation From Sparse Sensor Configuration

Sun, H., Martius, G.

Frontiers in Neurorobotics, 13, Article 51, 2019 (article)

Abstract
Robust haptic sensation systems are essential for obtaining dexterous robots. Currently, we have solutions for small surface areas such as fingers, but affordable and robust techniques for covering large areas of an arbitrary 3D surface are still missing. Here, we introduce a general machine learning framework to infer multi-contact haptic forces on a robot's 3D limb surface from internal deformation measured by only a few physical sensors. The general idea of this framework is to first predict the whole surface deformation pattern from the sparsely placed sensors and then to infer the number, locations, and force magnitudes of the unknown contact points. We show how this can be done even if training data can only be obtained for single-contact points, using transfer learning, and demonstrate the approach on a modified limb of the Poppy robot. With only 10 strain-gauge sensors, we obtain high accuracy even for multiple contact points. The method can be applied to arbitrarily shaped surfaces and physical sensor types, as long as training data can be obtained.

link (url) DOI [BibTex]


Birch tar production does not prove Neanderthal behavioral complexity

Schmidt, P., Blessing, M., Rageot, M., Iovita, R., Pfleging, J., Nickel, K. G., Righetti, L., Tennie, C.

Proceedings of the National Academy of Sciences (PNAS), 116(36):17707-17711, 2019 (article)

DOI [BibTex]


2014


3D Traffic Scene Understanding from Movable Platforms

Geiger, A., Lauer, M., Wojek, C., Stiller, C., Urtasun, R.

IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 36(5):1012-1025, IEEE, Los Alamitos, CA, May 2014 (article)

Abstract
In this paper, we present a novel probabilistic generative model for multi-object traffic scene understanding from movable platforms which reasons jointly about the 3D scene layout as well as the location and orientation of objects in the scene. In particular, the scene topology, geometry and traffic activities are inferred from short video sequences. Inspired by the impressive driving capabilities of humans, our model does not rely on GPS, lidar or map knowledge. Instead, it takes advantage of a diverse set of visual cues in the form of vehicle tracklets, vanishing points, semantic scene labels, scene flow and occupancy grids. For each of these cues we propose likelihood functions that are integrated into a probabilistic generative model. We learn all model parameters from training data using contrastive divergence. Experiments conducted on videos of 113 representative intersections show that our approach successfully infers the correct layout in a variety of very challenging scenarios. To evaluate the importance of each feature cue, experiments using different feature combinations are conducted. Furthermore, we show how, by employing context derived from the proposed method, we are able to improve over the state of the art in terms of object detection and object orientation estimation in challenging and cluttered urban environments.

pdf link (url) [BibTex]


An autonomous manipulation system based on force control and optimization

Righetti, L., Kalakrishnan, M., Pastor, P., Binney, J., Kelly, J., Voorhies, R. C., Sukhatme, G. S., Schaal, S.

Autonomous Robots, 36(1-2):11-30, January 2014 (article)

Abstract
In this paper we present an architecture for autonomous manipulation. Our approach is based on the belief that contact interactions during manipulation should be exploited to improve dexterity and that optimizing motion plans is useful to create more robust and repeatable manipulation behaviors. We therefore propose an architecture where state of the art force/torque control and optimization-based motion planning are the core components of the system. We give a detailed description of the modules that constitute the complete system and discuss the challenges inherent to creating such a system. We present experimental results for several grasping and manipulation tasks to demonstrate the performance and robustness of our approach.

link (url) DOI [BibTex]


Learning of grasp selection based on shape-templates

Herzog, A., Pastor, P., Kalakrishnan, M., Righetti, L., Bohg, J., Asfour, T., Schaal, S.

Autonomous Robots, 36(1-2):51-65, January 2014 (article)

Abstract
The ability to grasp unknown objects remains an unsolved problem in the robotics community. One of the challenges is to choose an appropriate grasp configuration, i.e., the 6D pose of the hand relative to the object and its finger configuration. In this paper, we introduce an algorithm that is based on the assumption that similarly shaped objects can be grasped in a similar way. It is able to synthesize good grasp poses for unknown objects by finding the best matching object shape templates associated with previously demonstrated grasps. The grasp selection algorithm is able to improve over time by using the information of previous grasp attempts to adapt the ranking of the templates to new situations. We tested our approach on two different platforms, the Willow Garage PR2 and the Barrett WAM robot, which have very different hand kinematics. Furthermore, we compared our algorithm with other grasp planners and demonstrated its superior performance. The results presented in this paper show that the algorithm is able to find good grasp configurations for a large set of unknown objects from a relatively small set of demonstrations, and does improve its performance over time.
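The core retrieval step this abstract describes — look up the grasp demonstrated for the most similar stored shape template — reduces to a nearest-neighbor query over shape descriptors. Here is a toy sketch; the descriptors, template names, and grasp labels are made-up placeholders, not the paper's templates:

```python
import numpy as np

# Hypothetical template library: shape descriptor -> demonstrated grasp.
templates = {
    "mug":    (np.array([1.0, 0.2, 0.1]), "side_grasp"),
    "ball":   (np.array([0.5, 0.5, 0.5]), "top_grasp"),
    "bottle": (np.array([0.2, 0.2, 1.0]), "wrap_grasp"),
}

def select_grasp(descriptor):
    """Return the grasp associated with the nearest shape template
    (Euclidean distance over the toy descriptors)."""
    name = min(templates,
               key=lambda k: np.linalg.norm(templates[k][0] - descriptor))
    return templates[name][1]

# An unknown object whose descriptor is closest to the mug template.
print(select_grasp(np.array([0.9, 0.25, 0.15])))  # -> side_grasp
```

The paper's adaptive ranking would then reweight these matches using the outcomes of previous grasp attempts; this sketch shows only the static lookup.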

link (url) DOI [BibTex]