Katherine J. Kuchenbecker directs the Haptic Intelligence Department at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany. She was previously an Associate Professor of Mechanical Engineering and Applied Mechanics at the University of Pennsylvania, where she held the Class of 1940 Bicentennial Endowed Term Chair and a secondary appointment in Computer and Information Science. Kuchenbecker earned her Ph.D. in Mechanical Engineering at Stanford University in 2006 and did a postdoctoral fellowship at the Johns Hopkins University. Her research centers on haptic interfaces, which enable a user to touch virtual and distant objects as though they were real and within reach, as well as haptic sensing systems, which allow robots to physically interact with objects and people.
Kuchenbecker delivered a TEDYouth talk on haptics in 2012, and she has received several honors including a 2009 NSF CAREER Award, the 2012 IEEE Robotics and Automation Society Academic Early Career Award, and a 2014 Penn Lindback Award for Distinguished Teaching. Her team has won various best paper and best demonstration awards, and she frequently gives keynote talks at conferences. She was co-chair of the IEEE Technical Committee on Haptics from 2014 to 2017, and she co-chaired the IEEE Haptics Symposium in 2016 and 2018.
Although a human can move his or her fingertip with six degrees of freedom (three for position and three for orientation), displaying 6-DoF haptic cues remains beyond the capabilities of body-grounded fingertip haptic displays. All six degrees of freedom have been displayed in smaller subsets, so the l...
The lack of haptic feedback is a potential limitation of existing robotic surgical systems. Members of Dr. Kuchenbecker's group at Penn previously invented a haptic feedback system named VerroTouch that is able to deliver the vibrations of s...
Robotic minimally invasive surgery systems such as the Intuitive Surgical da Vinci system physically separate the surgeon from the surgical tools. As touch cues are known to play a critical role in manipulation tasks, the resulting loss of the sense of touch may affect the speed and ski...
Humans draw on their vast life experience of touching things to make inferences about the objects in their environment; this capability enables one to make haptic judgments before actually touching things. For example, it is effortless to select the correct grip force for picking up a delicate object, or to choose a g...
Many modern humanoid robots are designed to operate in human environments, like homes and hospitals. If designed well, such robots could help humans accomplish tasks and lower their physical and/or mental workload. As opposed to having an operator devise control policies and reprogram the robot for every new situation...
Human soft-tissue properties vary widely: factors such as the patient's age and health can make enormous differences. As a result, junior medical professionals often have difficulty learning how much force is needed to cut through or puncture a particular type of tissue.
For example, during chest tube insertion, the do...
Being able to cover the entire body of a robot with soft tactile sensors has become an attractive concept in intelligent robotics. Soft, stretchable materials can conform around surfaces and also absorb impacts, which is benefi...
Many scenarios arise wherein the high-frequency accelerations of a tool need to be captured and either portrayed for a human to feel or analyzed by a computer system. For example, this approach provides a simple and realistic way for a surgeon to feel tactile information from a remotely controlled surgical robot. Sim...
The frictional forces we experience when our body interacts with objects provide essential sensory cues that help us adapt our behavior. We rely on these sensory cues daily, for example when we feel the smoothness of...
Many situations arise where it is beneficial for a human to control the movements of a robot at a distance, such as handling hazardous materials, doing surgery deep inside the human body, or taking part in remote meetings with other people. In these scenarios, the...
DJing is a musical activity that involves the blending of songs (mixing) and the rhythmic manipulation of sounds (scratching). Traditionally, DJs used vinyl records as their music sources. The vibration of a stylus (needle) in the groove of these records produced not just sound, but also subtle haptic sensations that could be fe...
The interaction between a human finger-pad and a physical surface generates not only the tangential friction needed for gripping objects but also a wide variety of perceptual experiences. Finger-surface contact behavior is known to depend on the ...
Researchers worldwide want to discover how to generate compelling tactile sensations on touchscreens to increase the usability of mobile devices and other interactive computer systems. One approach for generating such sensations is to control the friction force between the screen and the finger-pad of the ...
Both vision and touch play important roles in human perception of real surfaces. Judging material properties based on only one modality may not give reliable results. For example, many of us have had the experience th...
When performing minimally invasive robotic surgery, surgeons must currently rely only on their visual sense, as commercially available robotic surgery systems provide no touch feedback. Dr. Kuchenbecker and other members of the Penn Haptics Group previously inve...
Improvements in healthcare have led to an increase in human life expectancy. Members of this aging population want to stay healthy and active, but many forms of exercise and physical therapy are expensive, boring, or inefficien...
Walking speed and symmetry are high priorities for people with hemiparesis from stroke. We developed the Gait Propulsion Trainer (GPT) to help such individuals improve their walking abilities by increasing the propulsive force generated by the paretic leg. The GPT centers on a cable spool attached to a stand at wais...
Robots working in unstructured environments and alongside people need to be able to sense contact information from both intentional and unintentional interactions. Soft and skin-like tactile sensors can provide a robot wit...
Up to 90% of individuals who undergo amputation experience a persistent sensation of the missing limb, which is called a phantom limb. A substantial subset of these people feel intense pain in the missing extremity; this phantom limb pain (PLP) often responds poorly to medications or other interventions an...
In Proceedings of the IEEE World Haptics Conference (WHC), Tokyo, Japan, July 2019 (inproceedings) Accepted
Both vision and touch contribute to the perception of real surfaces. Although there have been many studies on the individual contributions of each sense, it is still unclear how each modality’s information is processed and integrated. To fill this gap, we investigated the similarity of visual and haptic perceptual spaces, as well as how well they each correlate with fingertip interaction metrics. Twenty participants interacted with ten different surfaces from the Penn Haptic Texture Toolkit by either looking at or touching them and judged their similarity in pairs. By analyzing the resulting similarity ratings using multi-dimensional scaling (MDS), we found that surfaces are similarly organized within the three-dimensional perceptual spaces of both modalities. Also, between-participant correlations were significantly higher in the haptic condition. In a separate experiment, we obtained the contact forces and accelerations acting on one finger interacting with each surface in a controlled way. We analyzed the collected fingertip interaction data in both the time and frequency domains. Our results suggest that the three perceptual dimensions for each modality can be represented by roughness/smoothness, hardness/softness, and friction, and that these dimensions can be estimated by surface vibration power, tap spectral centroid, and kinetic friction coefficient, respectively.
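The MDS step described above can be sketched in a few lines. Everything here is an illustrative stand-in: the dissimilarity matrix is synthetic rather than real participant data, and `sklearn.manifold.MDS` is assumed in place of whatever analysis software the authors actually used.

```python
import numpy as np
from sklearn.manifold import MDS

# Synthetic stand-in for averaged pairwise dissimilarity ratings
# (1 - normalized similarity) over 10 surfaces; symmetric, zero diagonal.
rng = np.random.default_rng(0)
a = rng.random((10, 10))
dissim = (a + a.T) / 2.0
np.fill_diagonal(dissim, 0.0)

# Embed the surfaces in a three-dimensional perceptual space.
mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)
print(coords.shape)  # one 3D point per surface
```

The resulting coordinates can then be inspected for interpretable axes such as roughness/smoothness or hardness/softness.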
In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), Glasgow, Scotland, May 2019 (inproceedings) Accepted
Creating haptic experiences often entails inventing, modifying, or selecting specialized hardware. However, experience designers are rarely engineers, and 30 years of haptic inventions are buried in a fragmented literature that describes devices mechanically rather than by potential purpose. We conceived of Haptipedia to unlock this trove of examples: Haptipedia presents a device corpus for exploration through metadata that matter to both device and experience designers. It is a taxonomy of device attributes that go beyond physical description to capture potential utility, applied to a growing database of 105 grounded force-feedback devices, and accessed through a public visualization that links utility to morphology. Haptipedia's design was driven by both systematic review of the haptic device literature and rich input from diverse haptic designers. We describe Haptipedia's reception (including hopes it will redefine device reporting standards) and our plans for its sustainability through community participation.
pages: 6, Workshop paper (6 pages) presented at the CHI 2019 Workshop on Hacking Blind Navigation, May 2019 (misc) Accepted
Since the 1960s, technologists have worked to develop systems that facilitate independent navigation by vision-impaired (VI) pedestrians. These devices vary in terms of conveyed information and feedback modality. Unfortunately, many such prototypes never progress beyond laboratory testing. Conversely, smartphone-based navigation systems for sighted pedestrians have grown in robustness and capabilities, to the point of now being ubiquitous.
How can we leverage the success of sighted navigation technology, which is driven by a larger global market, to advance VI navigation systems? We believe one possibility is to make common devices that benefit both VI and sighted individuals, by providing information in a way that does not distract either user from their tasks or environment. To this end, we have developed physical interfaces that eschew visual, audio, and vibratory feedback, instead relying on the natural human ability to perceive the shape of a handheld object.
In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Montreal, Canada, May 2019 (inproceedings) Accepted
Humans can form an impression of how a new object feels simply by touching its surfaces with the densely innervated skin of the fingertips. Many haptics researchers have recently been working to endow robots with similar levels of haptic intelligence, but these efforts almost always employ hand-crafted features, which are brittle, and concrete tasks, such as object recognition. We applied unsupervised feature learning methods, specifically K-SVD and Spatio-Temporal Hierarchical Matching Pursuit (ST-HMP), to rich multi-modal haptic data from a diverse dataset. We then tested the learned features on 19 more abstract binary classification tasks that center on haptic adjectives such as smooth and squishy. The learned features proved superior to traditional hand-crafted features by a large margin, almost doubling the average F1 score across all adjectives. Additionally, particular exploratory procedures (EPs) and sensor channels were found to support perception of certain haptic adjectives, underlining the need for diverse interactions and multi-modal haptic data.
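The pipeline described above, learning features without supervision and then classifying haptic adjectives, can be sketched as follows. K-SVD and ST-HMP are not available in scikit-learn, so `MiniBatchDictionaryLearning` serves here as an assumed stand-in for the dictionary-learning stage, and the data and labels are synthetic rather than the paper's dataset.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.metrics import f1_score
from sklearn.svm import LinearSVC

# Synthetic stand-in for windowed multi-modal haptic signals:
# 200 windows x 64 raw dimensions, with a binary adjective label
# (e.g. "squishy") that depends on the first raw dimension.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 64))
y = (X[:, 0] + 0.1 * rng.standard_normal(200) > 0).astype(int)

# Learn an overcomplete dictionary and sparse-code the signals
# (a stand-in for K-SVD; ST-HMP adds spatio-temporal pooling on top).
dico = MiniBatchDictionaryLearning(n_components=96, alpha=1.0, random_state=0)
codes = dico.fit(X).transform(X)  # learned features

# Binary adjective classification on the learned features.
clf = LinearSVC().fit(codes, y)
f1 = f1_score(y, clf.predict(codes))
print(codes.shape, round(f1, 2))
```

In the paper, such learned features are compared against hand-crafted ones via per-adjective F1 scores.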
Extended abstract presented as an Emerging Technology ePoster at the Annual Meeting of the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES), Baltimore, Maryland, USA, April 2019 (misc) Accepted
Workshop paper (3 pages) presented at the HRI Pioneers Workshop, Daegu, South Korea, March 2019 (misc) Accepted
We live in an aging society; social-physical human-robot interaction has the potential to keep older adults healthy by motivating them to exercise. After summarizing prior work, this paper proposes a tool that can be used to design exercise and therapy interactions to be performed by an upper-body humanoid robot. The interaction design tool comprises a teleoperation system that transmits the operator’s arm motions, head motions, and facial expression, along with an interface to monitor and assess the motion of the user interacting with the robot. We plan to use this platform to create dynamic and intuitive exercise interactions.
International Journal of Social Robotics, 2019 (article) Accepted
Future robots may need lighthearted physical interaction capabilities to connect with people in meaningful ways. To begin exploring how users perceive playful human–robot hand-to-hand interaction, we conducted a study with 20 participants. Each user played simple hand-clapping games with the Rethink Robotics Baxter Research Robot during a 1-h-long session involving 24 randomly ordered conditions that varied in facial reactivity, physical reactivity, arm stiffness, and clapping tempo. Survey data and experiment recordings demonstrate that this interaction is viable: all users successfully completed the experiment and mentioned enjoying at least one game without prompting. Hand-clapping tempo was highly salient to users, and human-like robot errors were more widely accepted than mechanical errors. Furthermore, perceptions of Baxter varied in the following statistically significant ways: facial reactivity increased the robot’s perceived pleasantness and energeticness; physical reactivity decreased pleasantness, energeticness, and dominance; higher arm stiffness increased safety and decreased dominance; and faster tempo increased energeticness and increased dominance. These findings can motivate and guide roboticists who want to design social–physical human–robot interactions.
Haptipedia is an online taxonomy, database, and visualization that aims to accelerate ideation of new haptic devices and interactions in human-computer interaction, virtual reality, haptics, and robotics. The current version of Haptipedia (105 devices) was created through iterative design, data entry, and evaluation by our team of experts. Next, we aim to greatly increase the number of devices and keep Haptipedia updated by soliciting data entry and verification from haptics experts worldwide.
Hands-on demonstration (4 pages) presented at AsiaHaptics, Incheon, South Korea, November 2018 (misc)
For simple and realistic vibrotactile feedback, 3D accelerations from real contact interactions are usually rendered using a single-axis vibration actuator; this dimensional reduction can be performed in many ways. This demonstration implements a real-time conversion system that simultaneously measures 3D accelerations and renders corresponding 1D vibrations using a two-pen interface. In the demonstration, a user freely interacts with various objects using an In-Pen that contains a 3-axis accelerometer. The captured accelerations are converted to a single-axis signal, and an Out-Pen renders the reduced signal for the user to feel. We prepared seven conversion methods from the simple use of a single-axis signal to applying principal component analysis (PCA) so that users can compare the performance of each conversion method in this demonstration.
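One of the conversion strategies named above, projection via principal component analysis, can be sketched in a few lines. The acceleration stream here is synthetic, and this is only one of the seven methods the demonstration compares; it is not presented as the system's actual implementation.

```python
import numpy as np

# Synthetic stream of 3-axis accelerations (N x 3), standing in for the
# In-Pen's accelerometer; axis scales differ so one direction dominates.
rng = np.random.default_rng(2)
acc = rng.standard_normal((1000, 3)) * np.array([1.0, 0.5, 0.2])

# PCA-based reduction: project onto the direction of maximum variance so
# a single-axis actuator preserves as much vibration energy as possible.
centered = acc - acc.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
signal_1d = centered @ vt[0]  # 1D drive signal for the output actuator

# Fraction of total vibration variance kept by the single axis.
var_ratio = signal_1d.var() / centered.var(axis=0).sum()
print(round(var_ratio, 2))
```

Simpler alternatives, such as keeping only one measured axis, discard more of the vibration energy than this projection does.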
Hands-on demonstration (3 pages) presented at AsiaHaptics, Incheon, South Korea, November 2018 (misc)
Large-scale tactile sensing is important for household robots and human-robot interaction because contacts can occur all over a robot’s body surface. This paper presents a new fabric-based tactile sensor that is straightforward to manufacture and can cover a large area. The tactile sensor is made of conductive and non-conductive fabric layers, and the electrodes are stitched with conductive thread, so the resulting device is flexible and stretchable. The sensor utilizes internal array electrodes and a reconstruction method called electrical resistance tomography (ERT) to achieve a high spatial resolution with a small number of electrodes. The developed sensor shows that only 16 electrodes can accurately estimate single and multiple contacts over a square that measures 20 cm by 20 cm.
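A common way to realize the ERT reconstruction mentioned above is a one-step linearized solve with Tikhonov regularization; the following sketch assumes that formulation rather than reproducing the paper's exact method. The sensitivity matrix is random purely for illustration; in a real sensor it would come from a forward model (e.g. a finite-element model of the conductive fabric).

```python
import numpy as np

# Adjacent-drive pattern with 16 electrodes typically yields 16 x 13
# usable boundary measurements; reconstruct on a 20 x 20 pixel grid.
rng = np.random.default_rng(3)
n_meas = 16 * 13
n_pix = 20 * 20
J = rng.standard_normal((n_meas, n_pix))  # illustrative sensitivity matrix

# Simulate a single press: a conductivity change at one pixel.
true_change = np.zeros(n_pix)
true_change[150] = 1.0
dv = J @ true_change  # resulting boundary-voltage change

# Tikhonov-regularized least squares: ds = (J^T J + lam*I)^-1 J^T dv
lam = 1e-2
ds = np.linalg.solve(J.T @ J + lam * np.eye(n_pix), J.T @ dv)
print(int(np.argmax(ds)))  # strongest reconstructed response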
International Journal of Social Robotics, 11(1):49-64, October 2018 (article)
Hugs are one of the first forms of contact and affection humans experience. Due to their prevalence and health benefits, roboticists are naturally interested in having robots one day hug humans as seamlessly as humans hug other humans. This project's purpose is to evaluate human responses to different robot physical characteristics and hugging behaviors. Specifically, we aim to test the hypothesis that a soft, warm, touch-sensitive PR2 humanoid robot can provide humans with satisfying hugs by matching both their hugging pressure and their hugging duration. Thirty relatively young and rather technical participants experienced and evaluated twelve hugs with the robot, divided into three randomly ordered trials that focused on physical robot characteristics (single factor, three levels) and nine randomly ordered trials with low, medium, and high hug pressure and duration (two factors, three levels each). Analysis of the results showed that people significantly prefer soft, warm hugs over hard, cold hugs. Furthermore, users prefer hugs that physically squeeze them and release immediately when they are ready for the hug to end. Taking part in the experiment also significantly increased positive user opinions of robots and robot use.
Extended abstract presented at the Hand, Brain and Technology conference (HBT), Ascona, Switzerland, August 2018 (misc)
Little is known about the shape and properties of the human finger during haptic interaction, even though these are essential parameters for controlling wearable finger devices and delivering realistic tactile feedback. This study explores a framework for four-dimensional scanning (3D over time) and modelling of finger-surface interactions, aiming to capture the motion and deformations of the entire finger with high resolution while simultaneously recording the interfacial forces at the contact. Preliminary results show that when the fingertip is actively pressing a rigid surface, it undergoes lateral expansion and proximal/distal bending, deformations that cannot be captured by imaging of the contact area alone. Therefore, we are currently capturing a dataset that will enable us to create a statistical model of the finger’s deformations and predict the contact forces induced by tactile interaction with objects. This technique could improve current methods for tactile rendering in wearable haptic devices, which rely on general physical modelling of the skin’s compliance, by developing an accurate model of the variations in finger properties across the human population. The availability of such a model will also enable a more realistic simulation of virtual finger behaviour in virtual reality (VR) environments, as well as the ability to accurately model a specific user’s finger from lower resolution data. It may also be relevant for inferring the physical properties of the underlying tissue from observing the surface mesh deformations, as previously shown for body tissues.
IEEE Robotics and Automation Letters, 3(3):2214-2221, July 2018, Presented at ICRA 2018 (article)
Small size and low weight are critical requirements for wearable and portable haptic interfaces, making it essential to work toward the optimization of their sensing and actuation systems. This paper presents a new approach for task-driven design optimization of fingertip cutaneous haptic devices. Given one (or more) target tactile interactions to render and a cutaneous device to optimize, we evaluate the minimum number and best configuration of the device’s actuators to minimize the estimated haptic rendering error. First, we calculate the motion needed for the original cutaneous device to render the considered target interaction. Then, we run a principal component analysis (PCA) to search for possible couplings between the original motor inputs, looking also for the best way to reconfigure them. If some couplings exist, we can re-design our cutaneous device with fewer motors, optimally configured to render the target tactile sensation. The proposed approach is quite general and can be applied to different tactile sensors and cutaneous devices. We validated it using a BioTac tactile sensor and custom plate-based 3-DoF and 6-DoF fingertip cutaneous devices, considering six representative target tactile interactions. The algorithm was able to find couplings between each device’s motor inputs, proving it to be a viable approach to optimize the design of wearable and portable cutaneous devices. Finally, we present two examples of optimized designs for our 3-DoF fingertip cutaneous device.
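The coupling search described above, running PCA on the motor inputs to see how few actuator directions suffice, can be sketched as follows. The motor-command log is synthetic and deliberately rank-deficient; the component-count threshold is an assumption for illustration, not the paper's criterion.

```python
import numpy as np

# Synthetic motor-command log for a 6-motor fingertip device (T x 6),
# constructed so that only 3 underlying directions are actually used.
rng = np.random.default_rng(4)
latent = rng.standard_normal((500, 3))
mixing = rng.standard_normal((3, 6))
commands = latent @ mixing + 0.01 * rng.standard_normal((500, 6))

# PCA via SVD: count the coupled motor directions needed to reproduce
# almost all of the commanded motion.
centered = commands - commands.mean(axis=0)
s = np.linalg.svd(centered, compute_uv=False)
explained = s**2 / (s**2).sum()
n_needed = int(np.searchsorted(np.cumsum(explained), 0.999) + 1)
print(n_needed)  # effective number of actuators for this interaction
```

If `n_needed` is smaller than the motor count, the device can be re-designed with fewer, optimally oriented actuators.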
Frontiers in Robotics and AI, 5(85), July 2018 (article)
Colleagues often shake hands in greeting, friends connect through high fives, and children around the world rejoice in hand-clapping games. As robots become more common in everyday human life, they will have the opportunity to join in these social-physical interactions, but few current robots are intended to touch people in friendly ways. This article describes how we enabled a Baxter Research Robot to both teach and learn bimanual hand-clapping games with a human partner. Our system monitors the user's motions via a pair of inertial measurement units (IMUs) worn on the wrists. We recorded a labeled library of 10 common hand-clapping movements from 10 participants; this dataset was used to train an SVM classifier to automatically identify hand-clapping motions from previously unseen participants with a test-set classification accuracy of 97.0%. Baxter uses these sensors and this classifier to quickly identify the motions of its human gameplay partner, so that it can join in hand-clapping games. This system was evaluated by N = 24 naïve users in an experiment that involved learning sequences of eight motions from Baxter, teaching Baxter eight-motion game patterns, and completing a free interaction period. The motion classification accuracy in this less structured setting was 85.9%, primarily due to unexpected variations in motion timing. The quantitative task performance results and qualitative participant survey responses showed that learning games from Baxter was significantly easier than teaching games to Baxter, and that the teaching role caused users to consider more teamwork aspects of the gameplay. Over the course of the experiment, people felt more understood by Baxter and became more willing to follow the example of the robot. Users felt uniformly safe interacting with Baxter, and they expressed positive opinions of Baxter and reported fun interacting with the robot. Taken together, the results indicate that this robot achieved credible social-physical interaction with humans and that its ability to both lead and follow systematically changed the human partner's experience.
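The classification stage described above, an SVM over windowed IMU features, can be sketched as follows. The feature vectors and motion classes here are synthetic placeholders (the real system used a labeled library of ten motions from wrist-worn IMUs), and the feature choice is an assumption for illustration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic windowed IMU features (e.g. per-axis means and standard
# deviations of acceleration and angular velocity) for two hypothetical
# hand-clapping motions.
rng = np.random.default_rng(5)
motion_a = rng.standard_normal((60, 12)) + 1.0
motion_b = rng.standard_normal((60, 12)) - 1.0
X = np.vstack([motion_a, motion_b])
y = np.array([0] * 60 + [1] * 60)

# RBF-kernel SVM evaluated with 5-fold cross-validation.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(clf, X, y, cv=5)
print(round(scores.mean(), 2))
```

For real-time gameplay, the trained classifier would be applied to each incoming sensor window as the partner moves.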
Hands-on demonstration presented at EuroHaptics, Pisa, Italy, June 2018 (misc)
In this demonstration, you will hold two pen-shaped modules: an in-pen and an out-pen. The in-pen is instrumented with a high-bandwidth three-axis accelerometer, and the out-pen contains a one-axis voice coil actuator. Use the in-pen to interact with different surfaces; the measured 3D accelerations are continually converted into 1D vibrations and rendered with the out-pen for you to feel. You can test conversion methods that range from simply selecting a single axis to applying a discrete Fourier transform or principal component analysis for realistic and brisk real-time conversion.
Hands-on demonstration presented at EuroHaptics, Pisa, Italy, June 2018 (misc)
How many haptic devices have been proposed in the last 30 years? How can we leverage this rich source of design knowledge to inspire future innovations? Our goal is to make historical haptic invention accessible through interactive visualization of a comprehensive library – a Haptipedia – of devices that have been annotated with designer-relevant metadata. In this demonstration, participants can explore Haptipedia’s growing library of grounded force feedback devices through several prototype visualizations, interact with 3D simulations of the device mechanisms and movements, and tell us about the attributes and devices that could make Haptipedia a useful resource for the haptic design community.
Workshop paper (4 pages) presented at the RSS Workshop on Robot-Mediated Autism Intervention: Hardware, Software and Curriculum, Pittsburgh, USA, June 2018 (misc)
Children with autism often endure sensory overload, may be nonverbal, and have difficulty understanding and relaying emotions. These experiences result in heightened stress during social interaction. Animal-assisted intervention has been found to improve the behavior of children with autism during social interaction, but live animal companions are not always feasible. We are thus in the process of designing a robotic animal to mimic some successful characteristics of animal-assisted intervention while trying to improve on others. The over-arching hypothesis of this research is that an appropriately designed robot animal can reduce stress in children with autism and empower them to engage in social interaction.
In Encyclopedia of Robotics, (Editors: Marcelo H. Ang, Oussama Khatib, and Bruno Siciliano), Springer, May 2018 (incollection)
Haptics is an interdisciplinary field that seeks to both understand and engineer touch-based interaction. Although a wide range of systems and applications are being investigated, haptics researchers often concentrate on perception and manipulation through the human hand.
A haptic interface is a mechatronic system that modulates the physical interaction between a human and his or her tangible surroundings. Haptic interfaces typically involve mechanical, electrical, and computational layers that work together to sense user motions or forces, quickly process these inputs with other information, and physically respond by actuating elements of the user’s surroundings, thereby enabling him or her to act on and feel a remote and/or virtual environment.
Cross-Cutting Challenge Interactive Discussion presented at the IEEE Haptics Symposium, San Francisco, USA, March 2018 (misc)
Fingertips and hands captivate the attention of most haptic interface designers, but humans can feel touch stimuli across the entire body surface. Trying to create devices that both can be worn and can deliver good haptic sensations raises challenges that rarely arise in other contexts. Most notably, tactile cues such as vibration, tapping, and squeezing are far simpler to implement in wearable systems than kinesthetic haptic feedback. This interactive discussion will present a variety of relevant projects to which I have contributed, attempting to pull out common themes and ideas for the future.
Work-in-progress paper (3 pages) presented at the IEEE Haptics Symposium, San Francisco, USA, March 2018 (misc)
Much of three decades of haptic device invention is effectively lost to today’s designers: dispersion across time, region, and discipline imposes an incalculable drag on innovation in this field. Our goal is to make historical haptic invention accessible through interactive navigation of a comprehensive library – a Haptipedia – of devices that have been annotated with designer-relevant metadata. To build this open resource, we will systematically mine the literature and engage the haptics community for expert annotation. In a multi-year broad-based initiative, we will empirically derive salient attributes of haptic devices, design an interactive visualization tool where device creators and repurposers can efficiently explore and search Haptipedia, and establish methods and tools to manually and algorithmically collect data from the haptics literature and our community of experts. This paper outlines progress in compiling an initial corpus of grounded force-feedback devices and their attributes, and it presents a concept sketch of the interface we envision.
Workshop paper (6 pages) presented at the HRI Workshop on Personal Robots for Exercising and Coaching, Chicago, USA, March 2018 (misc)
The worldwide population of older adults is steadily increasing and will soon exceed the capacity of assisted living facilities. Accordingly, we aim to understand whether appropriately designed robots could help older adults stay active and engaged while living at home. We developed eight human-robot exercise games for the Baxter Research Robot with the guidance of experts in game design, therapy, and rehabilitation. After extensive iteration, these games were employed in a user study that tested their viability with 20 younger and 20 older adult users. All participants were willing to enter Baxter’s workspace and physically interact with the robot. User trust and confidence in Baxter increased significantly between pre- and post-experiment assessments, and one individual from the target user population supplied us with abundant positive feedback about her experience. The preliminary results presented in this paper indicate potential for the use of two-armed human-scale robots for social-physical exercise interaction.
Workshop paper (2 pages) presented at the HRI Pioneers Workshop, Chicago, USA, March 2018 (misc)
Hugs are one of the first forms of contact and affection humans experience. Due to their prevalence and health benefits, we want to enable robots to safely hug humans. This research strives to create and study a high fidelity robotic system that provides emotional support to people through hugs. This paper outlines our previous work evaluating human responses to a prototype’s physical and behavioral characteristics, and then it lays out our ongoing and future work.
Work-in-progress paper (3 pages) presented at the IEEE Haptics Symposium, San Francisco, USA, March 2018 (misc)
Little is known about the shape and properties of the human finger during haptic interaction even though this knowledge is essential to control wearable finger devices and deliver realistic tactile feedback. This study explores a framework for four-dimensional scanning and modeling of finger-surface interactions, aiming to capture the motion and deformations of the entire finger with high resolution. The results show that when the fingertip is actively pressing a rigid surface, it undergoes lateral expansion of about 0.2 cm and proximal/distal bending of about 30°, deformations that cannot be captured by imaging of the contact area alone. This project constitutes a first step towards an accurate statistical model of the finger’s behavior during haptic interaction.
Work-in-progress paper (3 pages) presented at the IEEE Haptics Symposium, San Francisco, USA, March 2018 (misc)
Human children typically experience their surroundings both visually and haptically, providing ample opportunities to learn rich cross-sensory associations. To thrive in human environments and interact with the real world, robots also need to build models of these cross-sensory associations; current advances in machine learning should make it possible to infer models from large amounts of data. We previously built a visuo-haptic sensing device, the Proton Pack, and are using it to collect a large database of matched multimodal data from tool-surface interactions. As a benchmark to compare with machine learning performance, we conducted a human subject study (n = 84) on estimating haptic surface properties (here: hardness, roughness, friction, and warmness) from images. Using a 100-surface subset of our database, we showed images to study participants and collected 5635 ratings of the four haptic properties, which we compared with ratings made by the Proton Pack operator and with physical data recorded using motion, force, and vibration sensors. Preliminary results indicate weak correlation between participant and operator ratings, but potential for matching up certain human ratings (particularly hardness and roughness) with features from the literature.
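The participant-versus-operator comparison described above reduces to computing correlations between two sets of per-surface ratings. The following sketch uses synthetic ratings on an assumed 1-7 scale; the real study compared 5635 participant ratings against operator ratings and physical sensor features.

```python
import numpy as np

# Synthetic per-surface hardness ratings for 100 surfaces: participants'
# mean ratings modeled as the operator's ratings plus substantial noise,
# mimicking the weak-to-moderate correlation reported above.
rng = np.random.default_rng(6)
operator = rng.uniform(1.0, 7.0, 100)            # 1-7 rating scale
participants = operator + rng.normal(0.0, 2.0, 100)

# Pearson correlation between the two rating sets.
r = np.corrcoef(participants, operator)[0, 1]
print(round(r, 2))
```

The same computation, applied per haptic property, indicates which properties (e.g. hardness and roughness) people can judge consistently from images alone.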
In Proceedings of the International Symposium on Robotics Research (ISRR), Puerto Varas, Chile, December 2017 (inproceedings) In press
Hand-clapping games and other forms of rhythmic social-physical interaction might help foster human-robot teamwork, but the design of such interactions has scarcely been explored. We leveraged our prior work to enable the Rethink Robotics Baxter Research Robot to competently play one-handed tempo-matching hand-clapping games with a human user. To understand how such a robot’s capabilities and behaviors affect user perception, we created four versions of this interaction: the hand clapping could be initiated by either the robot or the human, and the non-initiating partner could be either cooperative, yielding synchronous motion, or mischievously uncooperative. Twenty adults tested two clapping tempos in each of these four interaction modes in a random order, rating every trial on standardized scales. The study results showed that having the robot initiate the interaction gave it a more dominant perceived personality. Despite previous results on the intrigue of misbehaving robots, we found that moving synchronously with the robot almost always made the interaction more enjoyable, less mentally taxing, less physically demanding, and lower effort for users than asynchronous interactions caused by robot or human mischief. Taken together, our results indicate that cooperative rhythmic social-physical interaction has the potential to strengthen human-robot partnerships.
Workshop Paper (2 pages) presented at the RO-MAN Workshop on Social Interaction and Multimodal Expression for Socially Intelligent Robots, Lisbon, Portugal, August 2017 (misc)
A hug is one of the most basic ways humans can express affection. As hugs are so common, a natural progression of robot development is to have robots one day hug humans as seamlessly as these intimate human-human interactions occur. This project’s purpose is to evaluate human responses to different robot physical characteristics and hugging behaviors. Specifically, we aim to test the hypothesis that a warm, soft, touch-sensitive PR2 humanoid robot can provide humans with satisfying hugs by matching both their hugging pressure and their hugging duration. Thirty participants experienced and evaluated twelve hugs with the robot, divided into three randomly ordered trials that focused on physical robot characteristics and nine randomly ordered trials with varied hug pressure and duration. We found that people prefer soft, warm hugs over hard, cold hugs. Furthermore, users prefer hugs that physically squeeze them and release immediately when they are ready for the hug to end.
In Proceedings of the IEEE World Haptics Conference (WHC), pages: 599-604, Munich, Germany, June 2017, Finalist for best poster paper (inproceedings)
Despite rapid advancements in the field of fingertip haptics, rendering tactile cues with six degrees of freedom (6 DOF) remains an elusive challenge. In this paper, we investigate the potential of displaying fingertip haptic sensations with a 6-DOF parallel continuum manipulator (PCM) that mounts to the user's index finger and moves a contact platform around the fingertip. Compared to traditional mechanisms composed of rigid links and discrete joints, PCMs have the potential to be strong, dexterous, and compact, but they are also more complicated to design. We define the design space of 6-DOF parallel continuum manipulators and outline a process for refining such a device for fingertip haptic applications. Following extensive simulation, we obtain 12 designs that meet our specifications, construct a manually actuated prototype of one such design, and evaluate the simulation's ability to accurately predict the prototype's motion. Finally, we demonstrate the range of deliverable fingertip tactile cues, including a normal force into the finger and shear forces tangent to the finger at three extreme points on the boundary of the fingertip.
In Proceedings of the IEEE World Haptics Conference (WHC), pages: 394-399, Munich, Germany, June 2017 (inproceedings)
Clever electromechanical design is required to make the force feedback delivered by a kinesthetic haptic interface both strong and safe. This paper explores a one-dimensional haptic force display that combines a DC motor and a magnetic particle brake on the same shaft. Rather than a rigid linkage, a spooled cable connects the user to the actuators to enable a large workspace, reduce the moving mass, and eliminate the sticky residual force from the brake. This design combines the high torque/power ratio of the brake and the active output capabilities of the motor to provide a wider range of forces than can be achieved with either actuator alone. A prototype of this device was built, its performance was characterized, and it was used to simulate constant force sources and virtual springs and dampers. Compared to the conventional design of using only a motor, the hybrid device can output higher unidirectional forces at the expense of free space feeling less free.
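The key constraint in such a hybrid design is that a particle brake is passive: it can only oppose the current motion, never push. One illustrative way to split a commanded 1-DOF force between the two actuators is sketched below; this allocation and the limit values are assumptions for illustration, not the control law from the paper:

```python
def allocate_force(f_desired, velocity, motor_max, brake_max):
    """Split a desired 1-DOF force between a DC motor and a particle brake.

    The brake is passive: it can apply force only against the current
    velocity. The motor supplies whatever remainder it can, up to its
    limit. (Illustrative allocation, not the paper's controller.)
    """
    f_brake = 0.0
    if velocity != 0.0 and f_desired * velocity < 0.0:
        # Desired force opposes motion: let the brake carry as much as it can.
        f_brake = max(-brake_max, min(brake_max, f_desired))
    f_motor = max(-motor_max, min(motor_max, f_desired - f_brake))
    return f_motor, f_brake

# Hypothetical virtual spring pulling back against outward motion:
# desired force -10 N while moving at +0.5 m/s, motor limit 4 N, brake limit 8 N
f_motor, f_brake = allocate_force(-10.0, 0.5, 4.0, 8.0)
```

With these made-up limits, the brake absorbs most of the resistive load and the motor tops it up, illustrating how the hybrid can exceed the motor's force range in the resistive direction.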
In Proceedings of the IEEE World Haptics Conference (WHC), pages: 107-112, Munich, Germany, June 2017 (inproceedings)
Over time, surgical trainees learn to compensate for the lack of haptic feedback in commercial robotic minimally invasive surgical systems. Incorporating touch cues into robotic surgery training could potentially shorten this learning process if the benefits of haptic feedback were sustained after it is removed. In this paper, we develop a wrist-squeezing haptic feedback system and evaluate whether it holds the potential to train novice da Vinci users to reduce the force they exert on a bimanual inanimate training task. Subjects were randomly divided into two groups according to a multiple baseline experimental design. Each of the ten participants moved a ring along a curved wire nine times while the haptic feedback was conditionally withheld, provided, and withheld again. The real-time tactile feedback of applied force magnitude significantly reduced the integral of the force produced by the da Vinci tools on the task materials, and this result remained even when the haptic feedback was removed. Overall, our findings suggest that wrist-squeezing force feedback can play an essential role in helping novice trainees learn to minimize the force they exert with a surgical robot.
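The outcome measure above, the integral of applied force over time, can be computed from sampled force data by trapezoidal integration. A minimal sketch, standing in for whatever exact computation the study used:

```python
def force_integral(times, forces):
    """Trapezoidal integral of force magnitude over time, in N*s.

    times: sample timestamps in seconds; forces: force readings in newtons.
    A simple stand-in for the force-integral metric described above.
    """
    total = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        total += 0.5 * (abs(forces[i]) + abs(forces[i - 1])) * dt
    return total
```

Lower values of this metric over a trial indicate gentler tool-tissue interaction, which is what the wrist-squeezing feedback trained participants to achieve.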
In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages: 439-445, Singapore, May 2017 (inproceedings)
The Portable Robotic Optical/Tactile ObservatioN PACKage (PROTONPACK, or Proton for short) is a new handheld visuo-haptic sensing system that records surface interactions. We previously demonstrated system calibration and a classification task using external motion tracking. This paper details improvements in surface classification performance and removal of the dependence on external motion tracking, necessary before embarking on our goal of gathering a vast surface interaction dataset. Two experiments were performed to refine data collection parameters. After adjusting the placement and filtering of the Proton's high-bandwidth accelerometers, we recorded interactions between two differently-sized steel tooling ball end-effectors (diameter 6.35 and 9.525 mm) and five surfaces. Using features based on normal force, tangential force, end-effector speed, and contact vibration, we trained multi-class SVMs to classify the surfaces using 50 ms chunks of data from each end-effector. Classification accuracies of 84.5% and 91.5% respectively were achieved on unseen test data, an improvement over prior results. In parallel, we pursued on-board motion tracking, using the Proton's camera and fiducial markers. Motion tracks from the external and onboard trackers agree within 2 mm and 0.01 rad RMS, and the accuracy decreases only slightly to 87.7% when using onboard tracking for the 9.525 mm end-effector. These experiments indicate that the Proton 2 is ready for portable data collection.
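The classification approach above works on 50 ms chunks of sensor data, each summarized by features before being fed to a multi-class SVM. A stdlib-only sketch of that chunk-and-featurize step for a single channel; the real system combines normal force, tangential force, speed, and vibration channels, and the feature set here (mean and RMS) is a simplified assumption:

```python
import math

def chunk_features(samples, rate_hz, chunk_ms=50):
    """Split a 1-D signal into fixed-length chunks and compute per-chunk
    (mean, RMS) features, as a minimal stand-in for the per-chunk feature
    extraction described above. Trailing samples that do not fill a full
    chunk are discarded."""
    n = int(rate_hz * chunk_ms / 1000)
    feats = []
    for start in range(0, len(samples) - n + 1, n):
        chunk = samples[start:start + n]
        mean = sum(chunk) / n
        rms = math.sqrt(sum(s * s for s in chunk) / n)
        feats.append((mean, rms))
    return feats

# Hypothetical 100 ms of constant 1 N force sampled at 1 kHz -> two 50 ms chunks
feats = chunk_features([1.0] * 100, rate_hz=1000)
```

Each resulting feature tuple would become one training example for the surface classifier.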
Workshop paper (3 pages) presented at the ICRA Workshop on C4 Surgical Robots, Singapore, May 2017 (misc)
Teleoperated surgical robots such as the Intuitive da Vinci Surgical System facilitate minimally invasive surgeries, which decrease risk to patients. However, these systems can be difficult to learn, and existing training curricula on surgical simulators do not offer students the realistic experience of a full operation. This paper presents an augmented-reality video training platform for the da Vinci that will allow trainees to rehearse any surgery recorded by an expert. While the trainee operates a da Vinci in free space, they see their own instruments overlaid on the expert video. Tools are identified in the source videos via color segmentation and kernelized correlation filter tracking, and their depth is calculated from the da Vinci’s stereoscopic video feed. The user tries to follow the expert’s movements, and if any of their tools venture too far away, the system provides instantaneous visual feedback and pauses to allow the user to correct their motion. The trainee can also rewind the expert video by bringing either da Vinci tool very close to the camera. This combined and augmented video provides the user with an immersive and interactive training experience.
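Recovering tool depth from the da Vinci's stereoscopic feed, as described above, rests on the standard pinhole stereo relation Z = fB/d. A minimal sketch with illustrative numbers (not actual da Vinci camera calibration values):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo depth: Z = f * B / d.

    focal_px: focal length in pixels; baseline_m: distance between the two
    camera centers in meters; disparity_px: horizontal pixel offset of the
    same tool feature between the left and right images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_px * baseline_m / disparity_px

# Hypothetical calibration: 1000 px focal length, 5 mm baseline, 50 px disparity
z = depth_from_disparity(1000.0, 0.005, 50.0)  # depth in meters
```

Larger disparities correspond to closer tools, which is also why the system can use a tool brought very close to the camera as a rewind gesture.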
Hands-on demonstration presented at ACM/IEEE International Conference on Human-Robot Interaction (HRI), Vienna, Austria, March 2017 (misc)
Robots that work alongside humans might be more effective if they could forge a strong social bond with their human partners. Hand-clapping games and other forms of rhythmic social-physical interaction may foster human-robot teamwork, but the design of such interactions has scarcely been explored. At the HRI 2017 conference, we will showcase several such interactions taken from our recent work with the Rethink Robotics Baxter Research Robot, including tempo-matching, Simon says, and Pat-a-cake-like games. We believe conference attendees will be both entertained and intrigued by this novel demonstration of social-physical HRI.
Surgical Endoscopy, 31(Supplement 1):S28, Extended abstract presented as a podium presentation at the Annual Meeting of the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES), Springer, Houston, USA, March 2017 (misc)
Introduction: Minimally invasive surgery has revolutionized surgical practice, but challenges remain. Trainees must acquire complex technical skills while minimizing patient risk, and surgeons must maintain their skills for rare procedures. These challenges are magnified in pediatric surgery due to the smaller spaces, finer tissue, and relative dearth of both inanimate and virtual simulators. To build technical expertise, trainees need opportunities for deliberate practice with specific performance feedback, which is typically provided via tedious human grading. This study aimed to validate a novel motion-tracking system and machine learning algorithm for automatically evaluating trainee performance on a pediatric laparoscopic suturing task using a 1–5 OSATS Overall Skill rating.
Methods: Subjects (n=14) ranging from medical students to fellows performed one or two trials of an intracorporeal suturing task in a custom pediatric laparoscopy training box (Fig. 1) after watching a video of ideal performance by an expert. The position and orientation of the tools and endoscope were recorded over time using Ascension trakSTAR magnetic motion-tracking sensors, and both instrument grasp angles were recorded over time using flex sensors on the handles. The 27 trials were video-recorded and scored on the OSATS scale by a senior fellow; ratings ranged from 1 to 4. The raw motion data from each trial was processed to calculate over 200 preliminary motion parameters. Regularized least-squares regression (LASSO) was used to identify the most predictive parameters for inclusion in a regression tree. Model performance was evaluated by leave-one-subject-out cross validation, wherein the automatic scores given to each subject’s trials (by a model trained on all other data) are compared to the corresponding human rater scores.
Results: The best-performing LASSO algorithm identified 14 predictive parameters for inclusion in the regression tree, including completion time, linear path length, angular path length, angular acceleration, grasp velocity, and grasp acceleration. The final model’s raw output showed a strong positive correlation of 0.87 with the reviewer-generated scores, and rounding the output to the nearest integer yielded a leave-one-subject-out cross-validation accuracy of 77.8%. Results are summarized in the confusion matrix (Table 1).
Conclusions: Our novel motion-tracking system and regression model automatically gave previously unseen trials overall skill scores that closely match scores from an expert human rater. With additional data and further development, this system may enable creation of a motion-based training platform for pediatric laparoscopic surgery and could yield insights into the fundamental components of surgical skill.
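The leave-one-subject-out evaluation used in this study groups trials by subject, so that a model is never tested on trials from a subject it was trained on. A stdlib-only sketch of that split logic (illustrative; not the authors' code):

```python
def leave_one_subject_out(trials):
    """Generate (held_out_subject, train, test) splits where each test set
    contains all trials from exactly one subject and the training set
    contains everything else.

    trials: list of (subject_id, trial_data) pairs.
    """
    subjects = sorted({sid for sid, _ in trials})
    for held_out in subjects:
        train = [t for t in trials if t[0] != held_out]
        test = [t for t in trials if t[0] == held_out]
        yield held_out, train, test

# Hypothetical data: subject 1 did two trials, subjects 2 and 3 did one each
splits = list(leave_one_subject_out([(1, "a"), (1, "b"), (2, "c"), (3, "d")]))
```

Splitting by subject rather than by trial avoids optimistic accuracy estimates, since two trials by the same person share movement idiosyncrasies.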