

2018


A Value-Driven Eldercare Robot: Virtual and Physical Instantiations of a Case-Supported Principle-Based Behavior Paradigm

Anderson, M., Anderson, S., Berenz, V.

Proceedings of the IEEE, pages: 1-15, October 2018 (article)

Abstract
In this paper, a case-supported principle-based behavior paradigm is proposed to help ensure ethical behavior of autonomous machines. We argue that ethically significant behavior of autonomous systems should be guided by explicit ethical principles determined through a consensus of ethicists. Such a consensus is likely to emerge in many areas in which autonomous systems are apt to be deployed and for the actions they are liable to undertake. We believe that this is the case since we are more likely to agree on how machines ought to treat us than on how human beings ought to treat one another. Given such a consensus, particular cases of ethical dilemmas where ethicists agree on the ethically relevant features and the right course of action can be used to help discover principles that balance these features when they are in conflict. Such principles not only help ensure ethical behavior of complex and dynamic systems but also can serve as a basis for justification of this behavior. The requirements, methods, implementation, and evaluation components of the paradigm are detailed as well as its instantiation in both a simulated and real robot functioning in the domain of eldercare.
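The discovery step described above — inferring a principle that balances ethically relevant features from agreed-upon cases — can be illustrated as a small learning problem. The duty names, case scores, and perceptron-style update below are illustrative assumptions, not the authors' actual procedure:

```python
# Hedged sketch: each case scores the candidate actions on ethically relevant
# features (duties), and a "principle" is a weighting of those features that
# reproduces the ethicists' preferred action in every agreed-upon case.
# The perceptron-style weight update is a stand-in for the paper's method.

def learn_weights(cases, features, epochs=100):
    w = {f: 0.0 for f in features}
    for _ in range(epochs):
        for scores, preferred in cases:
            score = lambda a: sum(w[f] * scores[a][f] for f in features)
            best = max(scores, key=score)
            if best != preferred:          # principle violated: adjust weights
                for f in features:
                    w[f] += scores[preferred][f] - scores[best][f]
    return w

features = ["honor_autonomy", "prevent_harm"]
# Two toy eldercare cases: remind vs. defer when a patient refuses medication.
cases = [
    ({"remind": {"honor_autonomy": -1, "prevent_harm": 2},
      "defer":  {"honor_autonomy": 1,  "prevent_harm": -2}}, "remind"),
    ({"remind": {"honor_autonomy": -1, "prevent_harm": 0},
      "defer":  {"honor_autonomy": 1,  "prevent_harm": 0}}, "defer"),
]
w = learn_weights(cases, features)
score = lambda a, s: sum(w[f] * s[a][f] for f in features)
print(all(max(s, key=lambda a: score(a, s)) == p for s, p in cases))  # → True
```

The learned weights also serve the justification role the paper emphasizes: the chosen action can be explained by which duty outweighed which.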

am

link (url) DOI [BibTex]




Softness, Warmth, and Responsiveness Improve Robot Hugs

Block, A. E., Kuchenbecker, K. J.

International Journal of Social Robotics, 11(1):49-64, October 2018 (article)

Abstract
Hugs are one of the first forms of contact and affection humans experience. Due to their prevalence and health benefits, roboticists are naturally interested in having robots one day hug humans as seamlessly as humans hug other humans. This project's purpose is to evaluate human responses to different robot physical characteristics and hugging behaviors. Specifically, we aim to test the hypothesis that a soft, warm, touch-sensitive PR2 humanoid robot can provide humans with satisfying hugs by matching both their hugging pressure and their hugging duration. Thirty relatively young and rather technical participants experienced and evaluated twelve hugs with the robot, divided into three randomly ordered trials that focused on physical robot characteristics (single factor, three levels) and nine randomly ordered trials with low, medium, and high hug pressure and duration (two factors, three levels each). Analysis of the results showed that people significantly prefer soft, warm hugs over hard, cold hugs. Furthermore, users prefer hugs that physically squeeze them and release immediately when they are ready for the hug to end. Taking part in the experiment also significantly increased positive user opinions of robots and robot use.

hi

link (url) DOI Project Page [BibTex]



Playful: Reactive Programming for Orchestrating Robotic Behavior

Berenz, V., Schaal, S.

IEEE Robotics Automation Magazine, 25(3):49-60, September 2018 (article) In press

Abstract
For many service robots, reactivity to changes in their surroundings is a must. However, developing software suitable for dynamic environments is difficult. Existing robotic middleware allows engineers to design behavior graphs by organizing communication between components. But because these graphs are structurally inflexible, they hardly support the development of complex reactive behavior. To address this limitation, we propose Playful, a software platform that applies reactive programming to the specification of robotic behavior.
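The reactive pattern the abstract contrasts with fixed behavior graphs can be sketched in a few lines: behaviors declare activation conditions over shared sensor state, and an orchestrator re-evaluates them every cycle so the active behavior switches as soon as the world changes. This is not Playful's actual API, only an illustration of the idea:

```python
# Minimal sketch of reactive behavior orchestration (NOT Playful's syntax):
# each behavior has an activation condition over shared state; every control
# cycle, the highest-priority behavior whose condition holds is selected.

class Behavior:
    def __init__(self, name, condition, priority):
        self.name = name
        self.condition = condition  # callable: state -> bool
        self.priority = priority

def select_behavior(behaviors, state):
    """Return the highest-priority behavior whose condition holds."""
    active = [b for b in behaviors if b.condition(state)]
    return max(active, key=lambda b: b.priority) if active else None

behaviors = [
    Behavior("idle", lambda s: True, priority=0),
    Behavior("track_person", lambda s: s["person_visible"], priority=1),
    Behavior("retreat", lambda s: s["obstacle_close"], priority=2),
]

# Run every cycle: behavior switches are immediate reactions to state
# changes, not transitions along the edges of a fixed graph.
state = {"person_visible": True, "obstacle_close": False}
print(select_behavior(behaviors, state).name)  # → track_person
```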

am

playful website playful_IEEE_RAM link (url) DOI [BibTex]


ClusterNet: Instance Segmentation in RGB-D Images

Shao, L., Tian, Y., Bohg, J.

arXiv, September 2018, Submitted to ICRA'19 (article)

Abstract
We propose a method for instance-level segmentation that uses RGB-D data as input and provides detailed information about the location, geometry, and number of individual objects in the scene. This level of understanding is fundamental for autonomous robots. It enables safe and robust decision-making under the large uncertainty of the real world. In our model, we propose to use the first and second order moments of the object occupancy function to represent an object instance. We train an hourglass Deep Neural Network (DNN) where each pixel in the output votes for the 3D position of the corresponding object center and for the object's size and pose. The final instance segmentation is achieved through clustering in the space of moments. The object-centric training loss is defined on the output of the clustering. Our method outperforms the state-of-the-art instance segmentation method on our synthesized dataset. We show that our method generalizes well on real-world data, achieving visually better segmentation results.
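The clustering step described above — grouping per-pixel votes for object centers into instances — can be sketched as follows. The network itself is omitted, and vote grouping uses a simple greedy radius-based scheme rather than the paper's exact procedure:

```python
import numpy as np

# Hedged sketch of instance recovery by clustering per-pixel center votes.
# Each "vote" is a predicted 3D object center; votes that fall close
# together are assigned to the same instance.

def cluster_votes(votes, radius=0.1):
    """Greedily group votes into instances; return labels and seed centers."""
    labels = -np.ones(len(votes), dtype=int)
    centers = []                       # first vote of each cluster (no mean update)
    for i, v in enumerate(votes):
        for k, c in enumerate(centers):
            if np.linalg.norm(v - c) < radius:
                labels[i] = k
                break
        else:
            centers.append(v)
            labels[i] = len(centers) - 1
    return labels, np.array(centers)

# Two noisy blobs of votes around distinct object centers.
rng = np.random.default_rng(0)
votes = np.vstack([
    rng.normal([0.0, 0.0, 0.5], 0.01, (50, 3)),
    rng.normal([0.3, 0.1, 0.6], 0.01, (50, 3)),
])
labels, centers = cluster_votes(votes)
print(len(centers))  # → 2
```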

am

link (url) [BibTex]



Complexity, Rate, and Scale in Sliding Friction Dynamics Between a Finger and Textured Surface

Khojasteh, B., Janko, M., Visell, Y.

Nature Scientific Reports, 8(13710), September 2018 (article)

Abstract
Sliding friction between the skin and a touched surface is highly complex, but lies at the heart of our ability to discriminate surface texture through touch. Prior research has elucidated neural mechanisms of tactile texture perception, but our understanding of the nonlinear dynamics of frictional sliding between the finger and textured surfaces, from which the neural signals that encode texture originate, is incomplete. To address this, we compared measurements from human fingertips sliding against textured counter surfaces with predictions of numerical simulations of a model finger that resembled a real finger, with similar geometry, tissue heterogeneity, hyperelasticity, and interfacial adhesion. Modeled and measured forces exhibited similar complex, nonlinear sliding friction dynamics, force fluctuations, and prominent regularities related to the surface geometry. We comparatively analysed measured and simulated force patterns in matched conditions using linear and nonlinear methods, including recurrence analysis. The model had greatest predictive power for faster sliding and for surface textures with length scales greater than about one millimeter. This could be attributed to the tendency of sliding at slower speeds, or on finer surfaces, to complexly engage fine features of skin or surface, such as fingerprints or surface asperities. The results elucidate the dynamical forces felt during tactile exploration and highlight the challenges involved in the biological perception of surface texture via touch.
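The recurrence analysis mentioned above compares a signal with itself at all pairs of times: entry R[i, j] is 1 when the states at times i and j lie within a threshold of each other, so periodic force regularities appear as diagonal line structures. A minimal sketch for a 1-D force trace, with a simplified threshold and no delay embedding:

```python
import numpy as np

# Minimal recurrence-matrix sketch for a scalar force signal. Real recurrence
# analysis typically uses delay embedding; this simplified version is only
# meant to show the core construction.

def recurrence_matrix(x, eps):
    d = np.abs(x[:, None] - x[None, :])  # pairwise distances between samples
    return (d < eps).astype(int)

t = np.linspace(0, 4 * np.pi, 200)
force = np.sin(5 * t)          # stand-in for a periodic friction-force trace
R = recurrence_matrix(force, eps=0.1)
# Periodic signals produce diagonal line structures in R — the kind of
# regularity related to surface geometry that the analysis quantifies.
print(R.shape, R.trace())      # main diagonal is always recurrent
```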

hi

DOI [BibTex]



A Robust Soft Lens for Tunable Camera Application Using Dielectric Elastomer Actuators

Nam, S., Yun, S., Yoon, J. W., Park, S., Park, S. K., Mun, S., Park, B., Kyung, K.

Soft robotics, Mary Ann Liebert, Inc., August 2018 (article)

Abstract
In developing tunable lenses, an expansion-based mechanism for dynamic focus adjustment can provide a larger focal-length tuning range than a contraction-based mechanism. Here, we develop an expansion-tunable soft lens module using a disk-type dielectric elastomer actuator (DEA) that creates axially symmetric pulling forces on a soft lens. Adapted from the biological accommodation mechanism of the human eye, a soft lens at the annular center of a disk-type DEA pair is efficiently stretched to change the focal length in a highly reliable manner. A soft lens with a diameter of 3 mm shows a 65.7% change in focal length (14.3–23.7 mm) under dynamic driving-voltage control. We confirm a quadratic relation between lens expansion and focal length that leads to the large focal-length tunability obtainable in the proposed approach. The fabricated tunable lens module can be used for soft, lightweight, and compact vision components in robots, drones, vehicles, and so on.

hi

link (url) DOI [BibTex]



Task-Driven PCA-Based Design Optimization of Wearable Cutaneous Devices

Pacchierotti, C., Young, E. M., Kuchenbecker, K. J.

IEEE Robotics and Automation Letters, 3(3):2214-2221, July 2018, Presented at ICRA 2018 (article)

Abstract
Small size and low weight are critical requirements for wearable and portable haptic interfaces, making it essential to work toward the optimization of their sensing and actuation systems. This paper presents a new approach for task-driven design optimization of fingertip cutaneous haptic devices. Given one (or more) target tactile interactions to render and a cutaneous device to optimize, we evaluate the minimum number and best configuration of the device’s actuators to minimize the estimated haptic rendering error. First, we calculate the motion needed for the original cutaneous device to render the considered target interaction. Then, we run a principal component analysis (PCA) to search for possible couplings between the original motor inputs, looking also for the best way to reconfigure them. If some couplings exist, we can re-design our cutaneous device with fewer motors, optimally configured to render the target tactile sensation. The proposed approach is quite general and can be applied to different tactile sensors and cutaneous devices. We validated it using a BioTac tactile sensor and custom plate-based 3-DoF and 6-DoF fingertip cutaneous devices, considering six representative target tactile interactions. The algorithm was able to find couplings between each device’s motor inputs, proving it to be a viable approach to optimize the design of wearable and portable cutaneous devices. Finally, we present two examples of optimized designs for our 3-DoF fingertip cutaneous device.
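The PCA step described above — finding couplings between motor inputs so the device can be rebuilt with fewer actuators — can be sketched with synthetic data. The recorded commands, latent dimensionality, and 99% variance cutoff below are illustrative assumptions; the real pipeline works from BioTac-derived device motions:

```python
import numpy as np

# Hedged sketch: rows are time samples of motor commands, columns are the
# device's motors. If a few principal components explain nearly all of the
# motion, the motors are strongly coupled and a lower-DoF redesign can
# render the same target interaction.

rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 2))            # two underlying motions
mixing = rng.normal(size=(2, 6))              # mixed onto six physical motors
commands = latent @ mixing + 0.01 * rng.normal(size=(500, 6))

centered = commands - commands.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)               # variance ratio per component
n_needed = int(np.searchsorted(np.cumsum(explained), 0.99) + 1)
print(n_needed)  # far fewer than 6 motors suffice for this synthetic motion
```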

hi

link (url) DOI [BibTex]



Teaching a Robot Bimanual Hand-Clapping Games via Wrist-Worn IMUs

Fitter, N. T., Kuchenbecker, K. J.

Frontiers in Robotics and AI, 5(85), July 2018 (article)

Abstract
Colleagues often shake hands in greeting, friends connect through high fives, and children around the world rejoice in hand-clapping games. As robots become more common in everyday human life, they will have the opportunity to join in these social-physical interactions, but few current robots are intended to touch people in friendly ways. This article describes how we enabled a Baxter Research Robot to both teach and learn bimanual hand-clapping games with a human partner. Our system monitors the user's motions via a pair of inertial measurement units (IMUs) worn on the wrists. We recorded a labeled library of 10 common hand-clapping movements from 10 participants; this dataset was used to train an SVM classifier to automatically identify hand-clapping motions from previously unseen participants with a test-set classification accuracy of 97.0%. Baxter uses these sensors and this classifier to quickly identify the motions of its human gameplay partner, so that it can join in hand-clapping games. This system was evaluated by N = 24 naïve users in an experiment that involved learning sequences of eight motions from Baxter, teaching Baxter eight-motion game patterns, and completing a free interaction period. The motion classification accuracy in this less structured setting was 85.9%, primarily due to unexpected variations in motion timing. The quantitative task performance results and qualitative participant survey responses showed that learning games from Baxter was significantly easier than teaching games to Baxter, and that the teaching role caused users to consider more teamwork aspects of the gameplay. Over the course of the experiment, people felt more understood by Baxter and became more willing to follow the example of the robot. Users felt uniformly safe interacting with Baxter, and they expressed positive opinions of Baxter and reported fun interacting with the robot. 
Taken together, the results indicate that this robot achieved credible social-physical interaction with humans and that its ability to both lead and follow systematically changed the human partner's experience.
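The classification step above maps windows of wrist-IMU data to one of ten hand-clapping motions. The paper trains an SVM; as a dependency-free stand-in, this sketch uses a nearest-centroid classifier on simple per-window features, with synthetic accelerometer windows for two hypothetical motion classes:

```python
import numpy as np

# Hedged sketch of IMU motion classification. Features are the per-axis mean
# and standard deviation of each 50-sample accelerometer window; the two
# "motion classes" and their statistics are synthetic stand-ins.

def features(window):
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

rng = np.random.default_rng(2)
clap = [rng.normal([0, 0, 9.8], [3, 1, 1], (50, 3)) for _ in range(20)]
wave = [rng.normal([0, 5, 9.8], [1, 3, 1], (50, 3)) for _ in range(20)]

X = np.array([features(w) for w in clap + wave])
y = np.array([0] * 20 + [1] * 20)
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(window):
    """Assign a window to the class with the nearest feature centroid."""
    f = features(window)
    return int(np.argmin(np.linalg.norm(centroids - f, axis=1)))

unseen = rng.normal([0, 5, 9.8], [1, 3, 1], (50, 3))  # an unseen "wave" window
print(predict(unseen))  # → 1
```

In the real system the classifier additionally has to cope with timing variation between partners, which is exactly where the reported accuracy dropped from 97.0% to 85.9%.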

hi

DOI [BibTex]



Real-time Perception meets Reactive Motion Generation

(Best Systems Paper Finalist, Amazon Robotics Best Paper Awards in Manipulation)

Kappler, D., Meier, F., Issac, J., Mainprice, J., Garcia Cifuentes, C., Wüthrich, M., Berenz, V., Schaal, S., Ratliff, N., Bohg, J.

IEEE Robotics and Automation Letters, 3(3):1864-1871, July 2018 (article)

Abstract
We address the challenging problem of robotic grasping and manipulation in the presence of uncertainty. This uncertainty is due to noisy sensing, inaccurate models and hard-to-predict environment dynamics. Our approach emphasizes the importance of continuous, real-time perception and its tight integration with reactive motion generation methods. We present a fully integrated system where real-time object and robot tracking as well as ambient world modeling provides the necessary input to feedback controllers and continuous motion optimizers. Specifically, they provide attractive and repulsive potentials based on which the controllers and motion optimizer can online compute movement policies at different time intervals. We extensively evaluate the proposed system on a real robotic platform in four scenarios that exhibit either challenging workspace geometry or a dynamic environment. We compare the proposed integrated system with a more traditional sense-plan-act approach that is still widely used. In 333 experiments, we show the robustness and accuracy of the proposed system.
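The attractive and repulsive potentials mentioned above can be sketched as a classic potential-field step: the goal pulls the end-effector, nearby obstacles push it away, and the controller follows the negative gradient each cycle. The gains, potential shapes, and 2-D setting are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

# Hedged potential-field sketch. The repulsive term is the gradient of the
# standard potential 0.5 * k_rep * (1/d - 1/rho0)^2, active only within
# distance rho0 of an obstacle.

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=0.05, rho0=0.5):
    grad = k_att * (pos - goal)                      # attractive gradient
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if 1e-9 < d < rho0:                          # repulsion only nearby
            grad += k_rep * (1.0 / rho0 - 1.0 / d) / d**2 * (pos - obs) / d
    return pos - 0.1 * grad                          # gradient-descent step

pos = np.array([0.0, 0.0])
goal = np.array([1.0, 0.0])
obstacles = [np.array([0.5, 0.3])]                   # off the straight path
for _ in range(200):
    pos = potential_step(pos, goal, obstacles)
print(pos)  # deflects around the obstacle, then settles at the goal
```

In the paper these potentials feed both fast feedback controllers and slower continuous motion optimizers; this sketch collapses everything into one update rate.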

am

arxiv video video link (url) DOI Project Page [BibTex]


Automatically Rating Trainee Skill at a Pediatric Laparoscopic Suturing Task

Oquendo, Y. A., Riddle, E. W., Hiller, D., Blinman, T. A., Kuchenbecker, K. J.

Surgical Endoscopy, 32(4):1840-1857, April 2018 (article)

hi

DOI [BibTex]



Electro-Active Polymer Based Soft Tactile Interface for Wearable Devices

Mun, S., Yun, S., Nam, S., Park, S. K., Park, S., Park, B. J., Lim, J. M., Kyung, K. U.

IEEE Transactions on Haptics, 11(1):15-21, February 2018 (article)

Abstract
This paper reports soft-actuator-based tactile stimulation interfaces applicable to wearable devices. The soft actuator is prepared by multi-layered accumulation of thin electro-active polymer (EAP) films. The multi-layered actuator is designed to produce electrically induced convex protrusive deformation, which can be dynamically programmed for a wide range of tactile stimuli. The maximum vertical protrusion is 650 μm, and the output force is up to 255 mN. The soft actuators are embedded into the fingertip part of a glove and the front part of a forearm band, respectively. We conducted two kinds of experiments with 15 subjects. Perceived magnitudes of the actuator's protrusion and vibrotactile intensity were measured at frequencies of 1 Hz and 191 Hz, respectively. Analysis of the user tests shows that participants perceive variation of protrusion height at the finger pad and modulation of vibration intensity through the proposed soft-actuator-based tactile interface.

hi

link (url) DOI [BibTex]



Robotic Motion Learning Framework to Promote Social Engagement

Burns, R., Jeon, M., Park, C. H.

Applied Sciences, 8(2):241, February 2018, Special Issue "Social Robotics" (article)

Abstract
Imitation is a powerful component of communication between people, and it poses an important implication in improving the quality of interaction in the field of human–robot interaction (HRI). This paper discusses a novel framework designed to improve human–robot interaction through robotic imitation of a participant’s gestures. In our experiment, a humanoid robotic agent socializes with and plays games with a participant. For the experimental group, the robot additionally imitates one of the participant’s novel gestures during a play session. We hypothesize that the robot’s use of imitation will increase the participant’s openness towards engaging with the robot. Experimental results from a user study of 12 subjects show that post-imitation, experimental subjects displayed a more positive emotional state, had higher instances of mood contagion towards the robot, and interpreted the robot to have a higher level of autonomy than their control group counterparts did. These results point to an increased participant interest in engagement fueled by personalized imitation during interaction.

hi

link (url) DOI [BibTex]



Distributed Event-Based State Estimation for Networked Systems: An LMI Approach

Muehlebach, M., Trimpe, S.

IEEE Transactions on Automatic Control, 63(1):269-276, January 2018 (article)

am ics

arXiv (extended version) DOI Project Page [BibTex]



Memristor-enhanced humanoid robot control system–Part I: theory behind the novel memcomputing paradigm

Ascoli, A., Baumann, D., Tetzlaff, R., Chua, L. O., Hild, M.

International Journal of Circuit Theory and Applications, 46(1):155-183, 2018 (article)

am

DOI [BibTex]



Combining learned and analytical models for predicting action effects

Kloss, A., Schaal, S., Bohg, J.

arXiv, 2018 (article) Submitted

Abstract
One of the most basic skills a robot should possess is predicting the effect of physical interactions with objects in the environment. This enables optimal action selection to reach a certain goal state. Traditionally, dynamics are approximated by physics-based analytical models. These models rely on specific state representations that may be hard to obtain from raw sensory data, especially if no knowledge of the object shape is assumed. More recently, we have seen learning approaches that can predict the effect of complex physical interactions directly from sensory input. It is however an open question how far these models generalize beyond their training data. In this work, we investigate the advantages and limitations of neural-network-based learning approaches for predicting the effects of actions based on sensory input, and show how analytical and learned models can be combined to leverage the best of both worlds. As the physical interaction task, we use planar pushing, for which there exists a well-known analytical model and a large real-world dataset. We propose to use a convolutional neural network to convert raw depth images or organized point clouds into a suitable representation for the analytical model and compare this approach to using neural networks for both perception and prediction. A systematic evaluation of the proposed approach on a very large real-world dataset shows two main advantages of the hybrid architecture. Compared to a pure neural network, it significantly (i) reduces the required training data and (ii) improves generalization to novel physical interactions.
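The hybrid architecture described above separates perception from prediction: a learned front end maps raw observations to the state representation an analytical model expects, and the analytical model does the predicting. Both pieces in this sketch are toy stand-ins (a thresholding "perception" and a trivial dynamics model), not the paper's CNN or the planar-pushing model:

```python
import numpy as np

# Hedged sketch of the learned-perception / analytical-prediction split.

def perceive(depth_image):
    """Stand-in for the learned perception: centroid of the object pixels."""
    ys, xs = np.nonzero(depth_image < 0.9)   # object = pixels closer than 0.9 m
    return np.array([xs.mean(), ys.mean()])  # (x, y) state for the model

def analytical_predict(state, action):
    """Stand-in analytical model: the state shifts by the commanded push."""
    return state + action

depth = np.ones((8, 8))                       # flat background at 1.0 m
depth[2:4, 3:5] = 0.5                         # a small object in the image
state = perceive(depth)
print(analytical_predict(state, np.array([1.0, 0.0])))  # → [4.5 2.5]
```

The division of labor is the point: only the perception module needs training data, while the physics lives in the fixed analytical model.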

am

arXiv pdf link (url) [BibTex]


Immersive Low-Cost Virtual Reality Treatment for Phantom Limb Pain: Evidence from Two Cases

Ambron, E., Miller, A., Kuchenbecker, K. J., Buxbaum, L. J., Coslett, H. B.

Frontiers in Neurology, 9(67):1-7, 2018 (article)

hi

DOI Project Page [BibTex]



A physical model for efficient ranking in networks

De Bacco, C., Larremore, D. B., Moore, C.

Science Advances, 4(7), American Association for the Advancement of Science, 2018 (article)

pio

Code Preprint link (url) DOI Project Page [BibTex]



Are ‘Water Smart Landscapes’ Contagious? An epidemic approach on networks to study peer effects

Brelsford, C., De Bacco, C.

Networks and Spatial Economics (NETS), 2018 (article)

pio

Preprint link (url) [BibTex]



Memristor-enhanced humanoid robot control system–Part II: circuit theoretic model and performance analysis

Baumann, D., Ascoli, A., Tetzlaff, R., Chua, L. O., Hild, M.

International Journal of Circuit Theory and Applications, 46(1):184-220, 2018 (article)

am

DOI [BibTex]



Tactile Masking by Electrovibration

Vardar, Y., Güçlü, B., Basdogan, C.

IEEE Transactions on Haptics, 11(4):623-635, 2018 (article)

Abstract
Future touch screen applications will include multiple tactile stimuli displayed simultaneously or consecutively to a single finger or multiple fingers. These applications should be designed with the human tactile masking mechanism in mind, since it is known that presenting one stimulus may interfere with the perception of another. In this study, we investigate the effect of masking on the tactile perception of electrovibration displayed on touch screens. Through psychophysical experiments with nine subjects, we measured the masked thresholds of sinusoidal electrovibration bursts (125 Hz) under two masking conditions: simultaneous and pedestal. The masking stimuli were noise bursts, applied at five different sensation levels varying from 2 to 22 dB SL, also presented by electrovibration. For each subject, the detection thresholds were elevated as linear functions of the masking levels for both masking types. We observed that masking was more effective with pedestal masking than with simultaneous masking. Moreover, in order to investigate the effect of tactile masking on our haptic perception of edge sharpness, we compared the perceived sharpness of edges separating two textured regions displayed with and without various masking stimuli. Our results suggest that sharpness perception depends on the local contrast between background and foreground stimuli, which varies as a function of masking amplitude and activation levels of frequency-dependent psychophysical channels.

hi

vardar_toh2018 DOI [BibTex]



Geckos Race across Water using Multiple Mechanisms

Nirody, J., Jinn, J., Libby, T., Lee, T., Jusufi, A., Hu, D., Full, R.

Current Biology, 2018 (article)

bio

[BibTex]


2017


Evaluation of High-Fidelity Simulation as a Training Tool in Transoral Robotic Surgery

Bur, A. M., Gomez, E. D., Newman, J. G., Weinstein, G. S., O’Malley Jr., B. W., Rassekh, C. H., Kuchenbecker, K. J.

Laryngoscope, 127(12):2790-2795, December 2017 (article)

hi

DOI [BibTex]





Interactive Perception: Leveraging Action in Perception and Perception in Action

Bohg, J., Hausman, K., Sankaran, B., Brock, O., Kragic, D., Schaal, S., Sukhatme, G.

IEEE Transactions on Robotics, 33, pages: 1273-1291, December 2017 (article)

Abstract
Recent approaches in robotics follow the insight that perception is facilitated by interactivity with the environment. These approaches are subsumed under the term of Interactive Perception (IP). We argue that IP provides the following benefits: (i) any type of forceful interaction with the environment creates a new type of informative sensory signal that would otherwise not be present and (ii) any prior knowledge about the nature of the interaction supports the interpretation of the signal. This is facilitated by knowledge of the regularity in the combined space of sensory information and action parameters. The goal of this survey is to postulate this as a principle and collect evidence in support by analyzing and categorizing existing work in this area. We also provide an overview of the most important applications of Interactive Perception. We close this survey by discussing the remaining open questions. Thereby, we hope to define a field and inspire future work.

am

arXiv DOI Project Page [BibTex]



Acquiring Target Stacking Skills by Goal-Parameterized Deep Reinforcement Learning

Li, W., Bohg, J., Fritz, M.

arXiv, November 2017 (article) Submitted

Abstract
Understanding physical phenomena is a key component of human intelligence and enables physical interaction with previously unseen environments. In this paper, we study how an artificial agent can autonomously acquire this intuition through interaction with the environment. We created a synthetic block stacking environment with physics simulation in which the agent can learn a policy end-to-end through trial and error. Thereby, we bypass the need to explicitly model physical knowledge within the policy. We are specifically interested in tasks that require the agent to reach a given goal state that may be different for every new trial. To this end, we propose a deep reinforcement learning framework that learns policies parametrized by a goal. We validated the model on a toy example of navigating in a grid world with different target positions and in a block stacking task with different target structures for the final tower. In contrast to prior work, our policies show better generalization across different goals.
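The goal-parameterized idea above — one value function conditioned on (state, goal) so a single learner covers many targets — can be sketched with tabular Q-learning in a tiny 1-D grid world. The grid size, rewards, and hyperparameters are illustrative stand-ins for the paper's block-stacking environment and deep network:

```python
import random

# Hedged sketch: Q is keyed by (state, goal, action), so the same table
# yields a different greedy policy for every goal it was trained on.

ACTIONS = [-1, 1]  # move left / move right

def train(size=7, goals=(0, 6), episodes=2000, alpha=0.5, gamma=0.9, eps=0.2):
    Q = {}
    for _ in range(episodes):
        goal = random.choice(goals)          # a new target every trial
        s = random.randrange(size)
        for _ in range(20):
            if s == goal:
                break
            if random.random() < eps:        # epsilon-greedy exploration
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda a: Q.get((s, goal, a), 0.0))
            s2 = min(max(s + a, 0), size - 1)
            r = 1.0 if s2 == goal else 0.0
            best = max(Q.get((s2, goal, b), 0.0) for b in ACTIONS)
            q = Q.get((s, goal, a), 0.0)
            Q[(s, goal, a)] = q + alpha * (r + gamma * best - q)
            s = s2
    return Q

def rollout(Q, start, goal, size=7, limit=20):
    s = start
    for _ in range(limit):
        if s == goal:
            break
        a = max(ACTIONS, key=lambda a: Q.get((s, goal, a), 0.0))
        s = min(max(s + a, 0), size - 1)
    return s

random.seed(0)
Q = train()
print(rollout(Q, 3, 0), rollout(Q, 3, 6))  # one learned table, two goals
```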

am

arXiv [BibTex]


Electrically tunable binary phase Fresnel lens based on a dielectric elastomer actuator

Park, S., Park, B., Nam, S., Yun, S., Park, S. K., Mun, S., Lim, J. M., Ryu, Y., Song, S. H., Kyung, K.

Optics Express, 25(20):23801-23808, OSA, October 2017 (article)

Abstract
We propose and demonstrate an all-solid-state tunable binary phase Fresnel lens with electrically controllable focal length. The lens is composed of a binary phase Fresnel zone plate, a circular acrylic frame, and a dielectric elastomer (DE) actuator made of a thin DE layer and two compliant electrodes using silver nanowires. Under electric potential, the actuator produces in-plane deformation in a radial direction that can compress the Fresnel zones. The electrically induced deformation contracts the Fresnel zones by as much as 9.1% and shortens the focal length from 20.0 cm to 14.5 cm. The measured change in the focal length of the fabricated lens is consistent with the result estimated from numerical simulation.

hi

link (url) DOI [BibTex]



Using Contact Forces and Robot Arm Accelerations to Automatically Rate Surgeon Skill at Peg Transfer

Brown, J. D., O’Brien, C. E., Leung, S. C., Dumon, K. R., Lee, D. I., Kuchenbecker, K. J.

IEEE Transactions on Biomedical Engineering, 64(9):2263-2275, September 2017 (article)

hi

link (url) DOI [BibTex]



Ungrounded Haptic Augmented Reality System for Displaying Texture and Friction

Culbertson, H., Kuchenbecker, K. J.

IEEE/ASME Transactions on Mechatronics, 22(4):1839-1849, August 2017 (article)

hi

link (url) DOI [BibTex]



A variation in wrinkle structures of UV-cured films with chemical structures of prepolymers

Park, S. K., Kwark, Y., Nam, S., Moon, J., Kim, D. W., Park, S., Park, B., Yun, S., Lee, J., Yu, B., Kyung, K.

Materials Letters, 199, pages: 105-109, July 2017 (article)

Abstract
Spontaneously wrinkled films can easily be obtained from UV-crosslinkable liquid prepolymers under special UV-curing conditions. These conditions vary the wrinkle structures of the UV-cured films, but the structures cannot be precisely controlled. Here, five different UV-crosslinkable prepolymers are synthesized to study the effect of prepolymer chemical structure on wrinkle formation and modulation of the UV-cured films, irrespective of the UV-curing conditions. Both the wavelength and the amplitude of the wrinkles are tuned with the different liquid prepolymers, from 4.10 to 5.63 µm and from 1.00 to 1.66 µm, respectively. The wrinkle structures of the UV-cured films fade when a solid prepolymer is added to a liquid prepolymer, because it interferes with the shrinkage of the liquid prepolymer layer. The wrinkles completely disappear in UV-cured films fabricated from formulated prepolymers containing over 50 wt% of the solid prepolymer.

hi

link (url) DOI [BibTex]



Event-based State Estimation: An Emulation-based Approach

Trimpe, S.

IET Control Theory & Applications, 11(11):1684-1693, July 2017 (article)

Abstract
An event-based state estimation approach for reducing communication in a networked control system is proposed. Multiple distributed sensor agents observe a dynamic process and sporadically transmit their measurements to estimator agents over a shared bus network. Local event-triggering protocols ensure that data is transmitted only when necessary to meet a desired estimation accuracy. The event-based design is shown to emulate the performance of a centralised state observer design up to guaranteed bounds, but with reduced communication. The stability results for state estimation are extended to the distributed control system that results when the local estimates are used for feedback control. Results from numerical simulations and hardware experiments illustrate the effectiveness of the proposed approach in reducing network communication.
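The event-triggering idea above — transmit only when necessary for the desired estimation accuracy — can be sketched with a send-on-delta rule: the sensor transmits its measurement only when the estimator's open-loop prediction would drift beyond a tolerance. The scalar random-walk model and threshold are illustrative assumptions, not the paper's LMI-based design:

```python
import numpy as np

# Hedged sketch of event-triggered transmission. Sensor and estimator run
# the same (trivial) prediction; an "event" fires, and a measurement is
# sent, only when the prediction error exceeds delta.

rng = np.random.default_rng(3)
x, x_hat = 0.0, 0.0            # true state and remote estimate
delta, sent, steps = 0.5, 0, 200
for _ in range(steps):
    x += rng.normal(0, 0.1)    # process: random walk
    if abs(x - x_hat) > delta: # local trigger check at the sensor
        x_hat = x              # event: measurement transmitted over the bus
        sent += 1
print(sent, steps)             # far fewer transmissions than time steps
```

Between events the estimate is guaranteed to stay within delta of the true state, which is the emulation-of-accuracy-with-less-communication trade the abstract describes.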

am ics

arXiv Supplementary material PDF DOI Project Page [BibTex]



Perception of Force and Stiffness in the Presence of Low-Frequency Haptic Noise

Gurari, N., Okamura, A. M., Kuchenbecker, K. J.

PLoS ONE, 12(6):e0178605, June 2017 (article)

hi

link (url) DOI [BibTex]



Evaluation of a Vibrotactile Simulator for Dental Caries Detection

Kuchenbecker, K. J., Parajon, R., Maggio, M. P.

Simulation in Healthcare, 12(3):148-156, June 2017 (article)

hi

DOI [BibTex]



Probabilistic Articulated Real-Time Tracking for Robot Manipulation

(Best Paper of RA-L 2017, Finalist of Best Robotic Vision Paper Award of ICRA 2017)

Garcia Cifuentes, C., Issac, J., Wüthrich, M., Schaal, S., Bohg, J.

IEEE Robotics and Automation Letters (RA-L), 2(2):577-584, April 2017 (article)

Abstract
We propose a probabilistic filtering method which fuses joint measurements with depth images to yield a precise, real-time estimate of the end-effector pose in the camera frame. This avoids the need for frame transformations when using it in combination with visual object tracking methods. Precision is achieved by modeling and correcting biases in the joint measurements as well as inaccuracies in the robot model, such as poor extrinsic camera calibration. We make our method computationally efficient through a principled combination of Kalman filtering of the joint measurements and asynchronous depth-image updates based on the Coordinate Particle Filter. We quantitatively evaluate our approach on a dataset recorded from a real robotic platform, annotated with ground truth from a motion capture system. We show that our approach is robust and accurate even under challenging conditions such as fast motion, significant and long-term occlusions, and time-varying biases. We release the dataset along with open-source code of our approach to allow for quantitative comparison with alternative approaches.
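The bias-modeling idea above can be sketched with a tiny augmented-state Kalman filter: the state carries both the joint angle and a slowly varying measurement bias, so frequent encoder updates pin their sum while occasional camera-like updates (standing in for the depth-image corrections) pin the angle itself. Matrices and noise levels are illustrative, not the paper's:

```python
import numpy as np

# Hedged sketch: state = [angle, bias]; the encoder observes angle + bias,
# the "camera" observes the angle directly but only every 10th step.
F = np.eye(2)
H_enc = np.array([[1.0, 1.0]])
H_cam = np.array([[1.0, 0.0]])
Q = np.diag([1e-6, 1e-8])            # slow drift for both states

def kf_update(x, P, z, H, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(4)
true_angle, true_bias = 0.3, 0.05
x, P = np.zeros(2), np.eye(2)
for t in range(300):
    P = F @ P @ F.T + Q              # prediction step
    z = np.array([[true_angle + true_bias + rng.normal(0, 0.01)]])
    x, P = kf_update(x, P, z, H_enc, np.array([[1e-4]]))
    if t % 10 == 0:                  # sparse, slower "depth" corrections
        z = np.array([[true_angle + rng.normal(0, 0.02)]])
        x, P = kf_update(x, P, z, H_cam, np.array([[4e-4]]))
print(x)  # ≈ [angle, bias]: the two are disentangled
```

The actual system replaces the synchronous camera update with asynchronous depth-image updates via the Coordinate Particle Filter; the augmentation trick is the same.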

am

arXiv video code and dataset video PDF DOI Project Page [BibTex]


Importance of Matching Physical Friction, Hardness, and Texture in Creating Realistic Haptic Virtual Surfaces

Culbertson, H., Kuchenbecker, K. J.

IEEE Transactions on Haptics, 10(1):63-74, January 2017 (article)


[BibTex]


Effects of Grip-Force, Contact, and Acceleration Feedback on a Teleoperated Pick-and-Place Task

Khurshid, R. P., Fitter, N. T., Fedalei, E. A., Kuchenbecker, K. J.

IEEE Transactions on Haptics, 10(1):40-53, January 2017 (article)

[BibTex]


Anticipatory Action Selection for Human-Robot Table Tennis

Wang, Z., Boularias, A., Mülling, K., Schölkopf, B., Peters, J.

Artificial Intelligence, 247, pages: 399-414, 2017, Special Issue on AI and Robotics (article)

Abstract
Anticipation can enhance the capability of a robot in its interaction with humans, where the robot predicts the human's intention in order to select its own action. We present a novel framework of anticipatory action selection for human-robot interaction that can handle nonlinear and stochastic human behaviors, such as table tennis strokes, and allows the robot to choose the optimal action based on a prediction of the human partner's intention under uncertainty. The presented framework is generic and can be used in many human-robot interaction scenarios, for example in navigation and human-robot co-manipulation. In this article, we conduct a case study on human-robot table tennis. Due to the limited amount of time for executing hitting movements, a robot usually needs to initiate its hitting movement before the opponent hits the ball, which requires the robot to be anticipatory based on visual observation of the opponent's movement. Previous work on Intention-Driven Dynamics Models (IDDM) allowed the robot to predict the intended target of the opponent. In this article, we address the problem of action selection and the optimal timing for initiating a chosen action by formulating anticipatory action selection as a Partially Observable Markov Decision Process (POMDP), where the transition and observation are modeled by the IDDM framework. We present two approaches to anticipatory action selection based on the POMDP formulation: a model-free policy learning method based on Least-Squares Policy Iteration (LSPI) that employs the IDDM for belief updates, and a model-based Monte-Carlo Planning (MCP) method, which benefits from the transition and observation models provided by the IDDM. Experimental results using real data in a simulated environment show the importance of anticipatory action selection and that POMDPs are suitable for formulating the anticipatory action selection problem by taking the uncertainties in prediction into account. We also show that existing algorithms for POMDPs, such as LSPI and MCP, can be applied to substantially improve the robot's performance in its interaction with humans.
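The commit-now-or-wait trade-off at the heart of anticipatory action selection can be sketched with a toy one-step lookahead over a belief about the opponent's target. The reward numbers and the `wait` bonus/penalty below are invented for illustration; the paper's actual methods are LSPI and Monte-Carlo Planning over an IDDM-based POMDP:

```python
# Toy belief over the opponent's intended target: belief_left in [0, 1].
# The robot can commit to a side now, or wait for one more observation,
# which sharpens the belief but shortens the hitting movement.
def expected_reward(belief_left, action):
    if action == "left":
        return belief_left
    if action == "right":
        return 1.0 - belief_left
    # "wait": assume the next observation mostly resolves the uncertainty
    # (bonus scaled by how uncertain we are), minus a timing penalty.
    certainty = max(belief_left, 1.0 - belief_left)
    uncertainty = min(belief_left, 1.0 - belief_left)
    return certainty + 0.4 * uncertainty - 0.1

def select_action(belief_left):
    """Greedy one-step lookahead over the three candidate actions."""
    return max(("left", "right", "wait"),
               key=lambda a: expected_reward(belief_left, a))

print(select_action(0.9))  # confident belief -> commit to "left"
print(select_action(0.5))  # uncertain belief -> "wait"
```

The qualitative behavior matches the article's motivation: the robot initiates its movement early only once the predicted intention is confident enough, and otherwise keeps observing.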

DOI Project Page [BibTex]


The tactile perception of transient changes in friction

Gueorguiev, D., Vezzoli, E., Mouraux, A., Lemaire-Semail, B., Thonnard, J.

Journal of The Royal Society Interface, 14(137), The Royal Society, 2017 (article)

Abstract
When we touch an object or explore a texture, frictional strains are induced by the tactile interactions with the surface of the object. Little is known about how these interactions are perceived, although such knowledge is becoming crucial for the nascent industry of interactive displays with haptic feedback (e.g., smartphones and tablets), where tactile feedback based on friction modulation is particularly relevant. To investigate the human perception of frictional strains, we mounted a high-fidelity friction-modulating ultrasonic device on a robotic platform performing controlled rubbing of the fingertip and asked participants to detect induced decreases in friction during a forced-choice task. The ability to perceive the changes in friction was found to follow Weber's law of just-noticeable differences, as it consistently depended on the ratio between the reduction in tangential force and the pre-stimulation tangential force. The Weber fraction was 0.11 in all conditions, demonstrating a very high sensitivity to transient changes in friction. Humid fingers experienced less friction reduction than drier ones for the same intensity of ultrasonic vibration, but the Weber fraction for detecting changes in friction was not influenced by the humidity of the skin.
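The reported Weber fraction gives a simple back-of-the-envelope rule: the smallest detectable drop in tangential force is about 11% of the pre-stimulation force. A minimal sketch (the function name and example force values are illustrative, not from the paper):

```python
# Weber's law for transient friction changes: the just-noticeable
# reduction in tangential force is a fixed fraction of the
# pre-stimulation tangential force.
WEBER_FRACTION = 0.11  # value reported in the study

def just_noticeable_reduction(pre_stimulation_force):
    """Smallest detectable drop in tangential force, same units as input."""
    return WEBER_FRACTION * pre_stimulation_force

for force in (0.2, 0.5, 1.0):  # example tangential forces in newtons
    print(f"{force:.1f} N -> detectable drop >= "
          f"{just_noticeable_reduction(force):.3f} N")
```

So a lighter touch (smaller pre-stimulation tangential force) makes smaller absolute friction reductions detectable, which is the ratio dependence the forced-choice task revealed.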

link (url) DOI [BibTex]


Effect of Waveform on Tactile Perception by Electrovibration Displayed on Touch Screens

Vardar, Y., Güçlü, B., Basdogan, C.

IEEE Transactions on Haptics, 10(4):488-499, 2017 (article)

Abstract
In this study, we investigated the effect of input voltage waveform on the haptic perception of electrovibration on touch screens. Through psychophysical experiments performed with eight subjects, we first measured the detection thresholds of electrovibration stimuli generated by sinusoidal and square voltages at various fundamental frequencies. We observed that the subjects were more sensitive to stimuli generated by square-wave voltage than by sinusoidal voltage for frequencies lower than 60 Hz. Using MATLAB simulations, we showed that this difference in sensation at low fundamental frequencies arises from the frequency-dependent electrical properties of human skin and human tactile sensitivity. To validate our simulations, we conducted a second experiment with another group of eight subjects. We first actuated the touch screen at the threshold voltages estimated in the first experiment and then measured the contact force and acceleration acting on the index fingers of the subjects moving on the screen at a constant speed. We analyzed the collected data in the frequency domain using the human vibrotactile sensitivity curve. The results suggested that the Pacinian channel was the primary psychophysical channel in the detection of the electrovibration stimuli caused by all the square-wave inputs tested in this study. We also observed that the measured force and acceleration data were affected by finger speed in a complex manner, suggesting that it may also affect haptic perception accordingly.
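One reason a low-frequency square wave can be easier to detect than a sinusoid of the same fundamental is spectral: a square wave carries substantial energy in odd harmonics well above its fundamental, in a range where vibrotactile sensitivity is higher. The following toy spectral comparison illustrates this point only; it is not the paper's skin-impedance simulation, and the sample rate and cutoff are arbitrary choices:

```python
import numpy as np

fs, f0, T = 10_000, 30, 1.0  # sample rate (Hz), fundamental (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
sine = np.sin(2 * np.pi * f0 * t)
square = np.sign(sine)  # ideal square wave at the same fundamental

def high_freq_energy_fraction(x, cutoff=60.0):
    """Fraction of signal energy above `cutoff` Hz (one-sided spectrum)."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return spectrum[freqs > cutoff].sum() / spectrum.sum()

print(high_freq_energy_fraction(sine))    # near zero: all energy at 30 Hz
print(high_freq_energy_fraction(square))  # sizeable: odd harmonics at 90, 150, ... Hz
```

For an ideal square wave, roughly a fifth of the total energy sits in harmonics above the fundamental, which is consistent with the observed lower detection thresholds for square-wave inputs below 60 Hz.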

vardar_toh2017 DOI [BibTex]


Community detection, link prediction, and layer interdependence in multilayer networks

De Bacco, C., Power, E. A., Larremore, D. B., Moore, C.

Physical Review E, 95(4):042317, APS, 2017 (article)

Code Preprint link (url) Project Page [BibTex]