

2019


AirCap – Aerial Outdoor Motion Capture

Ahmad, A., Price, E., Tallamraju, R., Saini, N., Lawless, G., Ludwig, R., Martinovic, I., Bülthoff, H. H., Black, M. J.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019), Workshop on Aerial Swarms, November 2019 (misc)

Abstract
This paper presents an overview of the grassroots project Aerial Outdoor Motion Capture (AirCap) running at the Max Planck Institute for Intelligent Systems. AirCap's goal is to achieve markerless, unconstrained human motion capture (mocap) in unknown and unstructured outdoor environments. To that end, we have developed an autonomous flying motion capture system using a team of micro aerial vehicles (MAVs) with only on-board, monocular RGB cameras. We have conducted several real-robot experiments in which up to three MAVs autonomously tracked and followed a person in challenging scenarios using the active cooperative perception approach developed in AirCap. Using the images captured by these robots during the experiments, we have demonstrated successful offline body pose and shape estimation with sufficiently high accuracy. Overall, this is the first fully autonomous flying motion capture system involving multiple robots for outdoor scenarios.

ps

[BibTex]



High-Fidelity Multiphysics Finite Element Modeling of Finger-Surface Interactions with Tactile Feedback

Serhat, G., Kuchenbecker, K. J.

Work-in-progress paper (2 pages) presented at the IEEE World Haptics Conference (WHC), Tokyo, Japan, July 2019 (misc)

Abstract
In this study, we develop a high-fidelity finite element (FE) analysis framework that enables multiphysics simulation of the human finger in contact with a surface that is providing tactile feedback. We aim to elucidate a variety of physical interactions that can occur at finger-surface interfaces, including contact, friction, vibration, and electrovibration. We also develop novel FE-based methods that will allow prediction of nonconventional features such as real finger-surface contact area and finger stickiness. We envision using the developed computational tools for efficient design and optimization of haptic devices by replacing expensive and lengthy experimental procedures with high-fidelity simulation.

hi

[BibTex]



Fingertip Friction Enhances Perception of Normal Force Changes

Gueorguiev, D., Lambert, J., Thonnard, J., Kuchenbecker, K. J.

Work-in-progress paper (2 pages) presented at the IEEE World Haptics Conference (WHC), Tokyo, Japan, July 2019 (misc)

Abstract
Using a force-controlled robotic platform, we tested the human perception of positive and negative modulations in normal force during passive dynamic touch, which also induced a strong related change in the finger-surface lateral force. In a two-alternative forced-choice task, eleven participants had to detect brief variations in the normal force compared to a constant controlled pre-stimulation force of 1 N and report whether it had increased or decreased. The average 75% just noticeable difference (JND) was found to be around 0.25 N for detecting the peak change and 0.30 N for correctly reporting the increase or the decrease. Interestingly, the friction coefficient of a subject’s fingertip positively correlated with his or her performance at detecting the change and reporting its direction, which suggests that humans may use the lateral force as a sensory cue to perceive variations in the normal force.
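
To make the JND analysis concrete, here is a minimal sketch of how a 75% JND can be estimated from 2AFC responses by fitting a cumulative-Gaussian psychometric function; all stimulus levels and response rates below are hypothetical placeholders, not data from this study.

```python
# Minimal sketch: estimate a 75% just noticeable difference (JND) from
# two-alternative forced-choice (2AFC) data by fitting a cumulative
# Gaussian. Stimulus levels and proportions correct are hypothetical.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    # Probability of a correct response vs. force change (N), rising
    # from chance (0.5) to 1.0 as a cumulative Gaussian.
    return 0.5 + 0.5 * norm.cdf(x, loc=mu, scale=sigma)

force_change = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.40])  # N
p_correct = np.array([0.52, 0.58, 0.65, 0.71, 0.78, 0.84, 0.93])

(mu, sigma), _ = curve_fit(psychometric, force_change, p_correct,
                           p0=[0.2, 0.1])

# The 75% JND is the force change where the fitted curve crosses 0.75,
# i.e., where the underlying Gaussian CDF reaches (0.75 - 0.5) / 0.5.
jnd_75 = mu + sigma * norm.ppf((0.75 - 0.5) / 0.5)
print(f"75% JND ~ {jnd_75:.2f} N")
```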

hi

[BibTex]



Inflatable Haptic Sensor for the Torso of a Hugging Robot

Block, A. E., Kuchenbecker, K. J.

Work-in-progress paper (2 pages) presented at the IEEE World Haptics Conference (WHC), Tokyo, Japan, July 2019 (misc)

Abstract
During hugs, humans naturally provide and intuit subtle non-verbal cues that signify the strength and duration of an exchanged hug. Personal preferences for this close interaction may vary greatly between people; robots do not currently have the abilities to perceive or understand these preferences. This work-in-progress paper discusses designing, building, and testing a novel inflatable torso that can simultaneously soften a robot and act as a tactile sensor to enable more natural and responsive hugging. Using PVC vinyl, a microphone, and a barometric pressure sensor, we created a small test chamber to demonstrate a proof of concept for the full torso. While contacting the chamber in several ways common in hugs (pat, squeeze, scratch, and rub), we recorded data from the two sensors. The preliminary results suggest that the complementary haptic sensing channels allow us to detect coarse and fine contacts typically experienced during hugs, regardless of user hand placement.

hi

Project Page [BibTex]



Understanding the Pull-off Force of the Human Fingerpad

Nam, S., Kuchenbecker, K. J.

Work-in-progress paper (2 pages) presented at the IEEE World Haptics Conference (WHC), Tokyo, Japan, July 2019 (misc)

Abstract
To understand the adhesive force that occurs when a finger pulls off of a smooth surface, we built an apparatus to measure the fingerpad’s moisture, normal force, and real contact area over time during interactions with a glass plate. We recorded a total of 450 trials (45 interactions by each of ten human subjects), capturing a wide range of values across the aforementioned variables. The experimental results showed that the pull-off force increases with larger finger contact area and faster detachment rate. Additionally, moisture generally increases the contact area of the finger, but too much moisture can restrict the increase in the pull-off force.

hi

[BibTex]



The Haptician and the Alphamonsters

Forte, M. P., L’Orsa, R., Mohan, M., Nam, S., Kuchenbecker, K. J.

Student Innovation Challenge on Implementing Haptics in Virtual Reality Environment presented at the IEEE World Haptics Conference, Tokyo, Japan, July 2019 (misc). Maria Paola Forte, Rachael L'Orsa, Mayumi Mohan, and Saekwang Nam contributed equally to this publication.

Abstract
Dysgraphia is a neurological disorder characterized by writing disabilities that affects between 7% and 15% of children. It presents itself in the form of unfinished letters, letter distortion, inconsistent letter size, letter collision, etc. Traditional therapeutic exercises require continuous assistance from teachers or occupational therapists. Autonomous partial or full haptic guidance can produce positive results, but children often become bored with the repetitive nature of such activities. Conversely, virtual rehabilitation with video games represents a new frontier for occupational therapy due to its highly motivational nature. Virtual reality (VR) adds an element of novelty and entertainment to therapy, thus motivating players to perform exercises more regularly. We propose leveraging the HTC VIVE Pro and the EXOS Wrist DK2 to create an immersive spellcasting “exergame” (exercise game) that helps motivate children with dysgraphia to improve writing fluency.

hi

Student Innovation Challenge – Virtual Reality [BibTex]



Explorations of Shape-Changing Haptic Interfaces for Blind and Sighted Pedestrian Navigation

Spiers, A., Kuchenbecker, K. J.

Workshop paper (6 pages) presented at the CHI 2019 Workshop on Hacking Blind Navigation, May 2019 (misc) Accepted

Abstract
Since the 1960s, technologists have worked to develop systems that facilitate independent navigation by vision-impaired (VI) pedestrians. These devices vary in terms of conveyed information and feedback modality. Unfortunately, many such prototypes never progress beyond laboratory testing. Conversely, smartphone-based navigation systems for sighted pedestrians have grown in robustness and capabilities, to the point of now being ubiquitous. How can we leverage the success of sighted navigation technology, which is driven by a larger global market, as a way to progress VI navigation systems? We believe one possibility is to make common devices that benefit both VI and sighted individuals, by providing information in a way that does not distract either user from their tasks or environment. To this end we have developed physical interfaces that eschew visual, audio or vibratory feedback, instead relying on the natural human ability to perceive the shape of a handheld object.

hi

[BibTex]



Bimanual Wrist-Squeezing Haptic Feedback Changes Speed-Force Tradeoff in Robotic Surgery Training

Cao, E., Machaca, S., Bernard, T., Wolfinger, B., Patterson, Z., Chi, A., Adrales, G. L., Kuchenbecker, K. J., Brown, J. D.

Extended abstract presented as an ePoster at the Annual Meeting of the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES), Baltimore, USA, April 2019 (misc) Accepted

hi

[BibTex]



Interactive Augmented Reality for Robot-Assisted Surgery

Forte, M. P., Kuchenbecker, K. J.

Extended abstract presented as an Emerging Technology ePoster at the Annual Meeting of the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES), Baltimore, Maryland, USA, April 2019 (misc)

hi

Project Page [BibTex]



A Design Tool for Therapeutic Social-Physical Human-Robot Interactions

Mohan, M., Kuchenbecker, K. J.

Workshop paper (3 pages) presented at the HRI Pioneers Workshop, Daegu, South Korea, March 2019 (misc) Accepted

Abstract
We live in an aging society; social-physical human-robot interaction has the potential to keep our elderly adults healthy by motivating them to exercise. After summarizing prior work, this paper proposes a tool that can be used to design exercise and therapy interactions to be performed by an upper-body humanoid robot. The interaction design tool comprises a teleoperation system that transmits the operator's arm motions, head motions, and facial expressions, along with an interface to monitor and assess the motion of the user interacting with the robot. We plan to use this platform to create dynamic and intuitive exercise interactions.

hi

Project Page [BibTex]



Fast and Resource-Efficient Control of Wireless Cyber-Physical Systems

Baumann, D.

KTH Royal Institute of Technology, Stockholm, February 2019 (phdthesis)

ics

PDF [BibTex]



Learning Transferable Representations

Rojas-Carulla, M.

University of Cambridge, UK, 2019 (phdthesis)

ei

[BibTex]



Sample-efficient deep reinforcement learning for continuous control

Gu, S.

University of Cambridge, UK, 2019 (phdthesis)

ei

[BibTex]


Perceiving Systems (2016-2018)
Scientific Advisory Board Report, 2019 (misc)

ps

pdf [BibTex]



Spatial Filtering based on Riemannian Manifold for Brain-Computer Interfacing

Xu, J.

Technical University of Munich, Germany, 2019 (mastersthesis)

ei

[BibTex]



A special issue on hydrogen-based energy storage
International Journal of Hydrogen Energy, 44, pages: 7737, Elsevier, Amsterdam, 2019 (misc)

mms

DOI [BibTex]



Toward Expert-Sourcing of a Haptic Device Repository

Seifi, H., Ip, J., Agrawal, A., Kuchenbecker, K. J., MacLean, K. E.

Glasgow, UK, 2019 (misc)

Abstract
Haptipedia is an online taxonomy, database, and visualization that aims to accelerate ideation of new haptic devices and interactions in human-computer interaction, virtual reality, haptics, and robotics. The current version of Haptipedia (105 devices) was created through iterative design, data entry, and evaluation by our team of experts. Next, we aim to greatly increase the number of devices and keep Haptipedia updated by soliciting data entry and verification from haptics experts worldwide.

hi

link (url) [BibTex]



Nanoscale X-ray imaging of spin dynamics in yttrium iron garnet

Förster, J., Wintz, S., Bailey, J., Finizio, S., Josten, E., Meertens, D., Dubs, C., Bozhko, D. A., Stoll, H., Dieterle, G., Traeger, N., Raabe, J., Slavin, A. N., Weigand, M., Gräfe, J., Schütz, G.

2019 (misc)

mms

link (url) [BibTex]



Reconfigurable nanoscale spin wave majority gate with frequency-division multiplexing

Talmelli, G., Devolder, T., Träger, N., Förster, J., Wintz, S., Weigand, M., Stoll, H., Heyns, M., Schütz, G., Radu, I., Gräfe, J., Ciubotaru, F., Adelmann, C.

2019 (misc)

Abstract
Spin waves are excitations in ferromagnetic media that have been proposed as information carriers in spintronic devices with potentially much lower operation power than conventional charge-based electronics. The wave nature of spin waves can be exploited to design majority gates by coding information in their phase and using interference for computation. However, a scalable spin wave majority gate design that can be co-integrated alongside conventional Si-based electronics is still lacking. Here, we demonstrate a reconfigurable nanoscale inline spin wave majority gate with ultrasmall footprint, frequency-division multiplexing, and fan-out. Time-resolved imaging of the magnetisation dynamics by scanning transmission x-ray microscopy reveals the operation mode of the device and validates the full logic majority truth table. All-electrical spin wave spectroscopy further demonstrates spin wave majority gates with sub-micron dimensions, sub-micron spin wave wavelengths, and reconfigurable input and output ports. We also show that interference-based computation allows for frequency-division multiplexing as well as the computation of different logic functions in the same device. Such devices can thus form the foundation of a future spin-wave-based superscalar vector computing platform.
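
As a toy numerical illustration of the phase-coded interference principle described here (a stand-in, not the device's micromagnetics), the sketch below verifies that summing three equal-amplitude waves whose phases encode logic 0 (phase 0) or logic 1 (phase π) yields an output whose phase follows the majority of the inputs.

```python
# Toy illustration of majority-by-interference: logic values are coded
# in the wave phase (0 -> logic 0, pi -> logic 1); the phase of the sum
# of three equal-amplitude waves follows the majority of the inputs.
import numpy as np
from itertools import product

t = np.linspace(0, 4 * np.pi, 1000)  # arbitrary-frequency time base

def majority_via_interference(a, b, c):
    phases = [np.pi * bit for bit in (a, b, c)]
    total = sum(np.cos(t + p) for p in phases)  # superpose the inputs
    # Output bit: sign of the correlation with a logic-0 reference wave.
    return int(np.dot(total, np.cos(t)) < 0)

for a, b, c in product([0, 1], repeat=3):
    assert majority_via_interference(a, b, c) == int(a + b + c >= 2)
print("interference reproduces the full majority truth table")
```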

mms

link (url) [BibTex]



Event-triggered Learning

Solowjow, F., Trimpe, S.

2019 (techreport) Submitted

ics

arXiv PDF [BibTex]


Visual-Inertial Mapping with Non-Linear Factor Recovery

Usenko, V., Demmel, N., Schubert, D., Stückler, J., Cremers, D.

2019, arXiv:1904.06504 (misc)

ev

[BibTex]



Hydrogen Energy

Hirscher, M., Autrey, T., Orimo, S.

ChemPhysChem, 20, pages: 1153-1411, Wiley-VCH, Weinheim, Germany, 2019 (misc)

mms

link (url) DOI [BibTex]


2018


Reducing 3D Vibrations to 1D in Real Time

Park, G., Kuchenbecker, K. J.

Hands-on demonstration (4 pages) presented at AsiaHaptics, Incheon, South Korea, November 2018 (misc)

Abstract
For simple and realistic vibrotactile feedback, 3D accelerations from real contact interactions are usually rendered using a single-axis vibration actuator; this dimensional reduction can be performed in many ways. This demonstration implements a real-time conversion system that simultaneously measures 3D accelerations and renders corresponding 1D vibrations using a two-pen interface. In the demonstration, a user freely interacts with various objects using an In-Pen that contains a 3-axis accelerometer. The captured accelerations are converted to a single-axis signal, and an Out-Pen renders the reduced signal for the user to feel. We prepared seven conversion methods from the simple use of a single-axis signal to applying principal component analysis (PCA) so that users can compare the performance of each conversion method in this demonstration.
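
Of the conversion methods mentioned, the PCA-based one lends itself to a compact sketch: project the 3-axis acceleration stream onto its first principal component to obtain the single-axis drive signal. This offline version with synthetic data is only illustrative; a real-time implementation would update the projection over a sliding window.

```python
# Minimal sketch of PCA-style dimensional reduction: project a stream of
# 3-axis accelerations onto their first principal component to obtain a
# single-axis actuator drive signal. Data here are synthetic.
import numpy as np

def reduce_3d_to_1d(acc_xyz):
    """acc_xyz: (N, 3) array of accelerations -> (N,) 1D signal."""
    centered = acc_xyz - acc_xyz.mean(axis=0)
    # First right-singular vector = direction of maximum vibration energy.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[0]

rng = np.random.default_rng(0)
acc = rng.normal(size=(2000, 3)) * [3.0, 1.0, 0.5]  # synthetic vibration
drive = reduce_3d_to_1d(acc)
print(drive.shape, drive.std())
```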

hi

Project Page [BibTex]



A Large-Scale Fabric-Based Tactile Sensor Using Electrical Resistance Tomography

Lee, H., Park, K., Kim, J., Kuchenbecker, K. J.

Hands-on demonstration (3 pages) presented at AsiaHaptics, Incheon, South Korea, November 2018 (misc)

Abstract
Large-scale tactile sensing is important for household robots and human-robot interaction because contacts can occur all over a robot’s body surface. This paper presents a new fabric-based tactile sensor that is straightforward to manufacture and can cover a large area. The tactile sensor is made of conductive and non-conductive fabric layers, and the electrodes are stitched with conductive thread, so the resulting device is flexible and stretchable. The sensor utilizes internal array electrodes and a reconstruction method called electrical resistance tomography (ERT) to achieve a high spatial resolution with a small number of electrodes. Tests of the developed sensor show that only 16 electrodes can accurately estimate single and multiple contacts over a square area measuring 20 cm by 20 cm.
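
In its simplest linearized form, the ERT reconstruction mentioned above reduces to a regularized least-squares inversion. The sketch below illustrates that step with a random placeholder sensitivity matrix standing in for the fabric sensor's real forward model (the electrode and pixel counts are assumptions, not values from the paper).

```python
# Highly simplified sketch of linearized ERT: given a sensitivity
# (Jacobian) matrix J mapping conductivity changes in the fabric to
# changes in boundary-electrode voltages, a one-step Tikhonov-regularized
# solve recovers a contact image. J and the measurements are random
# placeholders, not a real forward model.
import numpy as np

n_meas, n_pixels = 208, 32 * 32  # e.g. 16 electrodes -> ~208 measurements
rng = np.random.default_rng(1)
J = rng.normal(size=(n_meas, n_pixels))  # placeholder sensitivity map
dv = rng.normal(size=n_meas)             # placeholder voltage changes

lam = 1e-2  # regularization strength
# Solve (J^T J + lam * I) dsigma = J^T dv for the conductivity change.
dsigma = np.linalg.solve(J.T @ J + lam * np.eye(n_pixels), J.T @ dv)
contact_image = dsigma.reshape(32, 32)   # spatial map of contact
print(contact_image.shape)
```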

hi

Project Page [BibTex]



Nanoscale robotic agents in biological fluids and tissues

Palagi, S., Walker, D. Q. T., Fischer, P.

In The Encyclopedia of Medical Robotics, 2, pages: 19-42, (Editors: Desai, J. P. and Ferreira, A.), World Scientific, October 2018 (inbook)

Abstract
Nanorobots are untethered structures of sub-micron size that can be controlled in a non-trivial way. Such nanoscale robotic agents are envisioned to revolutionize medicine by enabling minimally invasive diagnostic and therapeutic procedures. To be useful, nanorobots must be operated in complex biological fluids and tissues, which are often difficult to penetrate. In this chapter, we first discuss potential medical applications of motile nanorobots. We briefly present the challenges related to swimming at such small scales and we survey the rheological properties of some biological fluids and tissues. We then review recent experimental results in the development of nanorobots and in particular their design, fabrication, actuation, and propulsion in complex biological fluids and tissues. Recent work shows that their nanoscale dimension is a clear asset for operation in biological tissues, since many biological tissues consist of networks of macromolecules that prevent the passage of larger micron-scale structures, but contain dynamic pores through which nanorobots can move.

pf

link (url) DOI [BibTex]



Statistical Modelling of Fingertip Deformations and Contact Forces during Tactile Interaction

Gueorguiev, D., Tzionas, D., Pacchierotti, C., Black, M. J., Kuchenbecker, K. J.

Extended abstract presented at the Hand, Brain and Technology conference (HBT), Ascona, Switzerland, August 2018 (misc)

Abstract
Little is known about the shape and properties of the human finger during haptic interaction, even though these are essential parameters for controlling wearable finger devices and delivering realistic tactile feedback. This study explores a framework for four-dimensional scanning (3D over time) and modelling of finger-surface interactions, aiming to capture the motion and deformations of the entire finger with high resolution while simultaneously recording the interfacial forces at the contact. Preliminary results show that when the fingertip is actively pressing a rigid surface, it undergoes lateral expansion and proximal/distal bending, deformations that cannot be captured by imaging of the contact area alone. Therefore, we are currently capturing a dataset that will enable us to create a statistical model of the finger’s deformations and predict the contact forces induced by tactile interaction with objects. This technique could improve current methods for tactile rendering in wearable haptic devices, which rely on general physical modelling of the skin’s compliance, by developing an accurate model of the variations in finger properties across the human population. The availability of such a model will also enable a more realistic simulation of virtual finger behaviour in virtual reality (VR) environments, as well as the ability to accurately model a specific user’s finger from lower resolution data. It may also be relevant for inferring the physical properties of the underlying tissue from observing the surface mesh deformations, as previously shown for body tissues.

hi

Project Page [BibTex]



Instrumentation, Data, and Algorithms for Visually Understanding Haptic Surface Properties

Burka, A. L.

University of Pennsylvania, Philadelphia, USA, August 2018, Department of Electrical and Systems Engineering (phdthesis)

Abstract
Autonomous robots need to efficiently walk over varied surfaces and grasp diverse objects. We hypothesize that the association between how such surfaces look and how they physically feel during contact can be learned from a database of matched haptic and visual data recorded from various end-effectors' interactions with hundreds of real-world surfaces. Testing this hypothesis required the creation of a new multimodal sensing apparatus, the collection of a large multimodal dataset, and development of a machine-learning pipeline. This thesis begins by describing the design and construction of the Portable Robotic Optical/Tactile ObservatioN PACKage (PROTONPACK, or Proton for short), an untethered handheld sensing device that emulates the capabilities of the human senses of vision and touch. Its sensory modalities include RGBD vision, egomotion, contact force, and contact vibration. Three interchangeable end-effectors (a steel tooling ball, an OptoForce three-axis force sensor, and a SynTouch BioTac artificial fingertip) allow for different material properties at the contact point and provide additional tactile data. We then detail the calibration process for the motion and force sensing systems, as well as several proof-of-concept surface discrimination experiments that demonstrate the reliability of the device and the utility of the data it collects. This thesis then presents a large-scale dataset of multimodal surface interaction recordings, including 357 unique surfaces such as furniture, fabrics, outdoor fixtures, and items from several private and public material sample collections. Each surface was touched with one, two, or three end-effectors, comprising approximately one minute per end-effector of tapping and dragging at various forces and speeds. We hope that the larger community of robotics researchers will find broad applications for the published dataset. Lastly, we demonstrate an algorithm that learns to estimate haptic surface properties given visual input. Surfaces were rated on hardness, roughness, stickiness, and temperature by the human experimenter and by a pool of purely visual observers. Then we trained an algorithm to perform the same task as well as infer quantitative properties calculated from the haptic data. Overall, the task of predicting haptic properties from vision alone proved difficult for both humans and computers, but a hybrid algorithm using a deep neural network and a support vector machine achieved a correlation between expected and actual regression output between approximately ρ = 0.3 and ρ = 0.5 on previously unseen surfaces.
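
The hybrid algorithm described in the final paragraph pairs deep visual features with a support vector machine; the sketch below shows that pipeline in miniature, with random placeholder features and ratings standing in for the real CNN embeddings and human labels.

```python
# Sketch of the hybrid vision-to-haptics idea: deep-network image
# features feed a support vector regressor that predicts a haptic
# property rating (e.g., hardness). The 512-D features and ratings are
# random placeholders for real CNN embeddings and human labels.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
X = rng.normal(size=(357, 512))           # one feature vector per surface
y = X[:, 0] + 0.1 * rng.normal(size=357)  # synthetic property ratings

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
model = SVR(kernel="rbf", C=1.0).fit(X_tr, y_tr)

# Evaluate as in the thesis: rank correlation between predicted and
# actual ratings on previously unseen surfaces.
rho, _ = spearmanr(model.predict(X_te), y_te)
print(f"Spearman rho on held-out surfaces: {rho:.2f}")
```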

hi

Project Page [BibTex]



A machine from machines

Fischer, P.

Nature Physics, 14, pages: 1072–1073, July 2018 (misc)

Abstract
Building spinning microrotors that self-assemble and synchronize to form a gear sounds like an impossible feat. However, it has now been achieved using only a single type of building block: a colloid that self-propels.

pf

link (url) DOI [BibTex]



Robust Visual Augmented Reality in Robot-Assisted Surgery

Forte, M. P.

Politecnico di Milano, Milan, Italy, July 2018, Department of Electronic, Information, and Biomedical Engineering (mastersthesis)

Abstract
The broader objective of this line of research is to test the hypothesis that real-time stereo video analysis and augmented reality can increase safety and task efficiency in robot-assisted surgery. This master’s thesis aims to solve the first step needed to achieve this goal: the creation of a robust system that delivers the envisioned feedback to a surgeon while he or she controls a surgical robot that is identical to those used on human patients. Several approaches for applying augmented reality to da Vinci Surgical Systems have been proposed, but none of them entirely rely on a clinical robot; specifically, they require additional sensors, depend on access to the da Vinci API, are designed for a very specific task, or were tested on systems that are starkly different from those in clinical use. There has also been prior work that presents the real-world camera view and the computer graphics on separate screens, or not in real time. In other scenarios, the digital information is overlaid manually by the surgeons themselves or by computer scientists, rather than being generated automatically in response to the surgeon’s actions. We attempted to overcome the aforementioned constraints by acquiring input signals from the da Vinci stereo endoscope and providing augmented reality to the console in real time (less than 150 ms delay, including the 62 ms of inherent latency of the da Vinci). The potential benefits of the resulting system are broad because it was built to be general, rather than customized for any specific task. The entire platform is compatible with any generation of the da Vinci System and does not require a dVRK (da Vinci Research Kit) or access to the API. Thus, it can be applied to existing da Vinci Systems in operating rooms around the world.

hi

Project Page [BibTex]



Colloidal Chemical Nanomotors

Alarcon-Correa, M.

Colloidal Chemical Nanomotors, pages: 150, Cuvillier Verlag, MPI-IS, June 2018 (phdthesis)

Abstract
Sophisticated synthetic nanostructures represent a fundamental building block for the development of nanotechnology. The fabrication of nanoparticles that are complex in structure and material composition is key to building nanomachines that can operate as man-made nanoscale motors, which autonomously convert external energy into motion. To achieve this, asymmetric nanoparticles were fabricated by combining a physical vapor deposition technique known as NanoGLAD with wet chemical synthesis. This thesis primarily concerns three complex colloidal systems that have been developed: i) hollow nanocup inclusion complexes that have a single Au nanoparticle in their pocket, where the Au particle can be released with an external trigger; ii) the smallest self-propelling nanocolloids made to date, which give rise to a local concentration gradient that causes enhanced diffusion of the particles; and iii) enzyme-powered pumps that have been assembled using bacteriophages as biological nanoscaffolds, a construct that can also be used for enzyme recovery after heterogeneous catalysis.

pf

[BibTex]



Reducing 3D Vibrations to 1D in Real Time

Park, G., Kuchenbecker, K. J.

Hands-on demonstration presented at EuroHaptics, Pisa, Italy, June 2018 (misc)

Abstract
In this demonstration, you will hold two pen-shaped modules: an in-pen and an out-pen. The in-pen is instrumented with a high-bandwidth three-axis accelerometer, and the out-pen contains a one-axis voice coil actuator. Use the in-pen to interact with different surfaces; the measured 3D accelerations are continually converted into 1D vibrations and rendered with the out-pen for you to feel. You can test conversion methods that range from simply selecting a single axis to applying a discrete Fourier transform or principal component analysis for realistic and brisk real-time conversion.
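
One plausible reading of the DFT-based conversion mentioned above (an illustrative guess, not necessarily the demonstration's exact method) is to synthesize a single-axis signal whose spectral magnitude combines all three axes while borrowing the phase of the strongest axis:

```python
# Hedged sketch of a DFT-based 3D-to-1D conversion: build a single-axis
# signal whose spectral magnitude is the root-sum-of-squares of the
# three axes' magnitudes, reusing the phase of the strongest axis so
# the synthesized signal stays real-valued.
import numpy as np

def dft_reduce(acc_xyz):
    """acc_xyz: (N, 3) acceleration block -> (N,) vibration signal."""
    spectra = np.fft.rfft(acc_xyz, axis=0)             # per-axis spectra
    magnitude = np.sqrt((np.abs(spectra) ** 2).sum(axis=1))
    dominant = np.argmax(np.abs(spectra).sum(axis=0))  # strongest axis
    phase = np.angle(spectra[:, dominant])
    return np.fft.irfft(magnitude * np.exp(1j * phase), n=len(acc_xyz))

rng = np.random.default_rng(0)
block = rng.normal(size=(512, 3))  # synthetic acceleration block
print(dft_reduce(block).shape)     # (512,)
```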

hi

Project Page [BibTex]



Haptipedia: Exploring Haptic Device Design Through Interactive Visualizations

Seifi, H., Fazlollahi, F., Park, G., Kuchenbecker, K. J., MacLean, K. E.

Hands-on demonstration presented at EuroHaptics, Pisa, Italy, June 2018 (misc)

Abstract
How many haptic devices have been proposed in the last 30 years? How can we leverage this rich source of design knowledge to inspire future innovations? Our goal is to make historical haptic invention accessible through interactive visualization of a comprehensive library – a Haptipedia – of devices that have been annotated with designer-relevant metadata. In this demonstration, participants can explore Haptipedia’s growing library of grounded force feedback devices through several prototype visualizations, interact with 3D simulations of the device mechanisms and movements, and tell us about the attributes and devices that could make Haptipedia a useful resource for the haptic design community.

hi

Project Page [BibTex]



Designing a Haptic Empathetic Robot Animal for Children with Autism

Burns, R., Kuchenbecker, K. J.

Workshop paper (4 pages) presented at the RSS Workshop on Robot-Mediated Autism Intervention: Hardware, Software and Curriculum, Pittsburgh, USA, June 2018 (misc)

Abstract
Children with autism often endure sensory overload, may be nonverbal, and have difficulty understanding and relaying emotions. These experiences result in heightened stress during social interaction. Animal-assisted intervention has been found to improve the behavior of children with autism during social interaction, but live animal companions are not always feasible. We are thus in the process of designing a robotic animal to mimic some successful characteristics of animal-assisted intervention while trying to improve on others. The over-arching hypothesis of this research is that an appropriately designed robot animal can reduce stress in children with autism and empower them to engage in social interaction.

hi

link (url) Project Page [BibTex]



Delivering 6-DOF Fingertip Tactile Cues

Young, E., Kuchenbecker, K. J.

Work-in-progress paper (5 pages) presented at EuroHaptics, Pisa, Italy, June 2018 (misc)

hi

Project Page [BibTex]



Soft Multi-Axis Boundary-Electrode Tactile Sensors for Whole-Body Robotic Skin

Lee, H., Kim, J., Kuchenbecker, K. J.

Workshop paper (2 pages) presented at the RSS Pioneers Workshop, Pittsburgh, USA, June 2018 (misc)

hi

Project Page [BibTex]



Haptics and Haptic Interfaces

Kuchenbecker, K. J.

In Encyclopedia of Robotics, (Editors: Marcelo H. Ang, Oussama Khatib, and Bruno Siciliano), Springer, May 2018 (incollection)

Abstract
Haptics is an interdisciplinary field that seeks to both understand and engineer touch-based interaction. Although a wide range of systems and applications are being investigated, haptics researchers often concentrate on perception and manipulation through the human hand. A haptic interface is a mechatronic system that modulates the physical interaction between a human and his or her tangible surroundings. Haptic interfaces typically involve mechanical, electrical, and computational layers that work together to sense user motions or forces, quickly process these inputs with other information, and physically respond by actuating elements of the user’s surroundings, thereby enabling him or her to act on and feel a remote and/or virtual environment.

hi

link (url) DOI [BibTex]



Model-based Optical Flow: Layers, Learning, and Geometry

Wulff, J.

Tuebingen University, April 2018 (phdthesis)

Abstract
The estimation of motion in video sequences establishes temporal correspondences between pixels and surfaces and allows reasoning about a scene using multiple frames. Despite being a focus of research for over three decades, computing motion, or optical flow, remains challenging due to a number of difficulties, including the treatment of motion discontinuities and occluded regions, and the integration of information from more than two frames. One reason for these issues is that most optical flow algorithms only reason about the motion of pixels on the image plane, while not taking the image formation pipeline or the 3D structure of the world into account. One approach to address this uses layered models, which represent the occlusion structure of a scene and provide an approximation to the geometry. The goal of this dissertation is to show ways to inject additional knowledge about the scene into layered methods, making them more robust, faster, and more accurate. First, this thesis demonstrates the modeling power of layers using the example of motion blur in videos, which is caused by fast motion relative to the exposure time of the camera. Layers segment the scene into regions that move coherently while preserving their occlusion relationships. The motion of each layer therefore directly determines its motion blur. At the same time, the layered model captures complex blur overlap effects at motion discontinuities. Using layers, we can thus formulate a generative model for blurred video sequences, and use this model to simultaneously deblur a video and compute accurate optical flow for highly dynamic scenes containing motion blur. Next, we consider the representation of the motion within layers. Since, in a layered model, important motion discontinuities are captured by the segmentation into layers, the flow within each layer varies smoothly and can be approximated using a low dimensional subspace. We show how this subspace can be learned from training data using principal component analysis (PCA), and that flow estimation using this subspace is computationally efficient. The combination of the layered model and the low-dimensional subspace gives the best of both worlds, sharp motion discontinuities from the layers and computational efficiency from the subspace. Lastly, we show how layered methods can be dramatically improved using simple semantics. Instead of treating all layers equally, a semantic segmentation divides the scene into its static parts and moving objects. Static parts of the scene constitute a large majority of what is shown in typical video sequences; yet, in such regions optical flow is fully constrained by the depth structure of the scene and the camera motion. After segmenting out moving objects, we consider only static regions, and explicitly reason about the structure of the scene and the camera motion, yielding much better optical flow estimates. Furthermore, computing the structure of the scene allows to better combine information from multiple frames, resulting in high accuracies even in occluded regions. For moving regions, we compute the flow using a generic optical flow method, and combine it with the flow computed for the static regions to obtain a full optical flow field. 
By combining layered models of the scene with reasoning about the dynamic behavior of the real, three-dimensional world, the methods presented herein push the envelope of optical flow computation in terms of robustness, speed, and accuracy, giving state-of-the-art results on benchmarks and pointing to important future research directions for the estimation of motion in natural scenes.
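
The low-dimensional subspace idea from the second part of the thesis can be sketched compactly: learn a PCA basis from flattened training flow fields, then encode and decode the flow within a layer using a handful of coefficients. The training flows below are synthetic placeholders.

```python
# Sketch of a PCA flow subspace: learn a low-dimensional basis from
# training flow fields, then represent the flow inside a layer with a
# few coefficients. Training flows here are synthetic placeholders.
import numpy as np

H, W, K = 64, 64, 16  # flow resolution and basis size (assumed values)
rng = np.random.default_rng(0)
train_flows = rng.normal(size=(500, H * W * 2))  # (u, v) flattened

mean = train_flows.mean(axis=0)
_, _, vt = np.linalg.svd(train_flows - mean, full_matrices=False)
basis = vt[:K]  # top-K principal flow fields

def encode(flow):
    """flow: (H, W, 2) field -> (K,) subspace coefficients."""
    return basis @ (flow.ravel() - mean)

def decode(coeffs):
    """(K,) coefficients -> reconstructed (H, W, 2) flow field."""
    return (mean + coeffs @ basis).reshape(H, W, 2)

coeffs = encode(rng.normal(size=(H, W, 2)))
print(coeffs.shape, decode(coeffs).shape)
```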

ps

Official link DOI Project Page [BibTex]


Arm-Worn Tactile Displays

Kuchenbecker, K. J.

Cross-Cutting Challenge Interactive Discussion presented at the IEEE Haptics Symposium, San Francisco, USA, March 2018 (misc)

Abstract
Fingertips and hands captivate the attention of most haptic interface designers, but humans can feel touch stimuli across the entire body surface. Trying to create devices that both can be worn and can deliver good haptic sensations raises challenges that rarely arise in other contexts. Most notably, tactile cues such as vibration, tapping, and squeezing are far simpler to implement in wearable systems than kinesthetic haptic feedback. This interactive discussion will present a variety of relevant projects to which I have contributed, attempting to pull out common themes and ideas for the future.

hi

[BibTex]



Haptipedia: An Expert-Sourced Interactive Device Visualization for Haptic Designers

Seifi, H., MacLean, K. E., Kuchenbecker, K. J., Park, G.

Work-in-progress paper (3 pages) presented at the IEEE Haptics Symposium, San Francisco, USA, March 2018 (misc)

Abstract
Much of three decades of haptic device invention is effectively lost to today’s designers: dispersion across time, region, and discipline imposes an incalculable drag on innovation in this field. Our goal is to make historical haptic invention accessible through interactive navigation of a comprehensive library – a Haptipedia – of devices that have been annotated with designer-relevant metadata. To build this open resource, we will systematically mine the literature and engage the haptics community for expert annotation. In a multi-year broad-based initiative, we will empirically derive salient attributes of haptic devices, design an interactive visualization tool where device creators and repurposers can efficiently explore and search Haptipedia, and establish methods and tools to manually and algorithmically collect data from the haptics literature and our community of experts. This paper outlines progress in compiling an initial corpus of grounded force-feedback devices and their attributes, and it presents a concept sketch of the interface we envision.

hi

Project Page [BibTex]



Exercising with Baxter: Design and Evaluation of Assistive Social-Physical Human-Robot Interaction

Fitter, N. T., Mohan, M., Kuchenbecker, K. J., Johnson, M. J.

Workshop paper (6 pages) presented at the HRI Workshop on Personal Robots for Exercising and Coaching, Chicago, USA, March 2018 (misc)

Abstract
The worldwide population of older adults is steadily increasing and will soon exceed the capacity of assisted living facilities. Accordingly, we aim to understand whether appropriately designed robots could help older adults stay active and engaged while living at home. We developed eight human-robot exercise games for the Baxter Research Robot with the guidance of experts in game design, therapy, and rehabilitation. After extensive iteration, these games were employed in a user study that tested their viability with 20 younger and 20 older adult users. All participants were willing to enter Baxter’s workspace and physically interact with the robot. User trust and confidence in Baxter increased significantly between pre- and post-experiment assessments, and one individual from the target user population supplied us with abundant positive feedback about her experience. The preliminary results presented in this paper indicate potential for the use of two-armed human-scale robots for social-physical exercise interaction.

hi

link (url) Project Page [BibTex]



Emotionally Supporting Humans Through Robot Hugs

Block, A. E., Kuchenbecker, K. J.

Workshop paper (2 pages) presented at the HRI Pioneers Workshop, Chicago, USA, March 2018 (misc)

Abstract
Hugs are one of the first forms of contact and affection humans experience. Due to their prevalence and health benefits, we want to enable robots to safely hug humans. This research strives to create and study a high fidelity robotic system that provides emotional support to people through hugs. This paper outlines our previous work evaluating human responses to a prototype’s physical and behavioral characteristics, and then it lays out our ongoing and future work.

hi

link (url) DOI Project Page [BibTex]



Towards a Statistical Model of Fingertip Contact Deformations from 4D Data

Gueorguiev, D., Tzionas, D., Pacchierotti, C., Black, M. J., Kuchenbecker, K. J.

Work-in-progress paper (3 pages) presented at the IEEE Haptics Symposium, San Francisco, USA, March 2018 (misc)

Abstract
Little is known about the shape and properties of the human finger during haptic interaction even though this knowledge is essential to control wearable finger devices and deliver realistic tactile feedback. This study explores a framework for four-dimensional scanning and modeling of finger-surface interactions, aiming to capture the motion and deformations of the entire finger with high resolution. The results show that when the fingertip is actively pressing a rigid surface, it undergoes lateral expansion of about 0.2 cm and proximal/distal bending of about 30°, deformations that cannot be captured by imaging of the contact area alone. This project constitutes a first step towards an accurate statistical model of the finger’s behavior during haptic interaction.

hi

link (url) Project Page [BibTex]



Can Humans Infer Haptic Surface Properties from Images?

Burka, A., Kuchenbecker, K. J.

Work-in-progress paper (3 pages) presented at the IEEE Haptics Symposium, San Francisco, USA, March 2018 (misc)

Abstract
Human children typically experience their surroundings both visually and haptically, providing ample opportunities to learn rich cross-sensory associations. To thrive in human environments and interact with the real world, robots also need to build models of these cross-sensory associations; current advances in machine learning should make it possible to infer models from large amounts of data. We previously built a visuo-haptic sensing device, the Proton Pack, and are using it to collect a large database of matched multimodal data from tool-surface interactions. As a benchmark to compare with machine learning performance, we conducted a human subject study (n = 84) on estimating haptic surface properties (here: hardness, roughness, friction, and warmness) from images. Using a 100-surface subset of our database, we showed images to study participants and collected 5635 ratings of the four haptic properties, which we compared with ratings made by the Proton Pack operator and with physical data recorded using motion, force, and vibration sensors. Preliminary results indicate weak correlation between participant and operator ratings, but potential for matching up certain human ratings (particularly hardness and roughness) with features from the literature.

hi

Project Page [BibTex]



Co-Registration – Simultaneous Alignment and Modeling of Articulated 3D Shapes

Black, M., Hirshberg, D., Loper, M., Rachlin, E., Weiss, A.

February 2018, U.S. Patent 9,898,848 (misc)

Abstract
The present application refers to a method, a model generation unit, and a computer program (product) for generating trained models (M) of moving persons based on physically measured person scan data (S). The approach is based on a common template (T) for the respective person and on the measured person scan data (S) in different shapes and different poses. Scan data are measured with a 3D laser scanner. A generic person model is used for co-registering a set of person scan data (S), aligning the template (T) to the set of person scans (S) while simultaneously training the generic person model to become a trained person model (M) by constraining it to be scan-specific, person-specific, and pose-specific, and providing the trained model (M) based on the co-registration of the measured person scan data (S).

ps

text [BibTex]


Die kybernetische Revolution

Schölkopf, B.

Süddeutsche Zeitung, 15 March 2018 (misc)

ei

link (url) [BibTex]



Detailed Dense Inference with Convolutional Neural Networks via Discrete Wavelet Transform

Ma, L., Stueckler, J., Wu, T., Cremers, D.

2018, arXiv:1808.01834 (techreport)

ev

[BibTex]
