

2018


Deep Reinforcement Learning for Event-Triggered Control

Baumann, D., Zhu, J., Martius, G., Trimpe, S.

In Proceedings of the 57th IEEE International Conference on Decision and Control (CDC), Miami, FL, USA, December 2018 (inproceedings) Accepted

al ics

arXiv PDF [BibTex]



Minimum Information Exchange in Distributed Systems

Solowjow, F., Mehrjou, A., Schölkopf, B., Trimpe, S.

In Proceedings of the 57th IEEE International Conference on Decision and Control (CDC), Miami, FL, USA, December 2018 (inproceedings) Accepted

ei ics

arXiv [BibTex]



Deep Inertial Poser: Learning to Reconstruct Human Pose from Sparse Inertial Measurements in Real Time

Huang, Y., Kaufmann, M., Aksan, E., Black, M. J., Hilliges, O., Pons-Moll, G.

ACM Transactions on Graphics (Proc. SIGGRAPH Asia), 37, pages: 185:1-185:15, ACM, November 2018, the first two authors contributed equally (article)

Abstract
We demonstrate a novel deep neural network capable of reconstructing human full body pose in real-time from 6 Inertial Measurement Units (IMUs) worn on the user's body. In doing so, we address several difficult challenges. First, the problem is severely under-constrained as multiple pose parameters produce the same IMU orientations. Second, capturing IMU data in conjunction with ground-truth poses is expensive and difficult to do in many target application scenarios (e.g., outdoors). Third, modeling temporal dependencies through non-linear optimization has proven effective in prior work but makes real-time prediction infeasible. To address this important limitation, we learn the temporal pose priors using deep learning. To learn from sufficient data, we synthesize IMU data from motion capture datasets. A bi-directional RNN architecture leverages past and future information that is available at training time. At test time, we deploy the network in a sliding window fashion, retaining real-time capabilities. To evaluate our method, we recorded DIP-IMU, a dataset consisting of 10 subjects wearing 17 IMUs for validation in 64 sequences with 330,000 time instants; this constitutes the largest IMU dataset publicly available. We quantitatively evaluate our approach on multiple datasets and show results from a real-time implementation. DIP-IMU and the code are available for research purposes.
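
A minimal sketch of the sliding-window deployment described above, assuming a toy bidirectional LSTM; the BiRNNPoser module, all layer sizes, and the window lengths are illustrative stand-ins, not the authors' implementation:

```python
import torch
import torch.nn as nn

# Toy stand-in for the architecture: a bidirectional RNN mapping per-frame
# IMU features (orientations + accelerations) to pose parameters.
class BiRNNPoser(nn.Module):
    def __init__(self, in_dim=72, hidden=256, out_dim=135):
        super().__init__()
        self.rnn = nn.LSTM(in_dim, hidden, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, out_dim)

    def forward(self, x):            # x: (batch, time, in_dim)
        h, _ = self.rnn(x)
        return self.head(h)          # (batch, time, out_dim)

def sliding_window_inference(model, stream, past=20, future=5):
    """Run a bidirectional model online: feed a short window of past frames
    plus a small buffer of future frames, keep only the prediction for the
    current frame. The future buffer determines the added latency."""
    poses = []
    T = stream.shape[1]
    for t in range(T - future):
        lo = max(0, t - past)
        window = stream[:, lo:t + future + 1]
        with torch.no_grad():
            out = model(window)
        poses.append(out[:, t - lo])   # prediction for frame t only
    return torch.stack(poses, dim=1)

model = BiRNNPoser()
imu_stream = torch.randn(1, 100, 72)   # fake IMU feature stream
print(sliding_window_inference(model, imu_stream).shape)  # (1, 95, 135)
```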

ps

data code (coming soon) pdf preprint video DOI [BibTex]



Universal Custom Complex Magnetic Spring Design Methodology

Woodward, M. A., Sitti, M.

IEEE Transactions on Magnetics, 54(1):1-13, October 2018 (article)

Abstract
A design methodology is presented for creating custom complex magnetic springs through the design of force-displacement curves. This methodology results in a magnet configuration, which will produce a desired force-displacement relationship. Initially, the problem is formulated and solved as a system of linear equations. Then, given the limited likelihood of a single solution being feasibly manufactured, key parameters of the solution are extracted and varied to create a family of solutions. Finally, these solutions are refined using numerical optimization. Given the properties of magnets, this methodology can create any well-defined function of force versus displacement and is model-independent. To demonstrate this flexibility, a number of example magnetic springs are designed; one of which, designed for use in a jumping-gliding robot's shape memory alloy actuated clutch, is manufactured and experimentally characterized. Due to the scaling of magnetic forces, the displacement region in which these magnetic springs are most applicable is that of millimeters and below. However, this region is well situated for miniature robots and smart material actuators, where a tailored magnetic spring, designed to complement a component, can enhance its performance while adding new functionality. The methodology is also extensible to variable interactions and multi-dimensional magnetic field design.
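
The first step, posing the magnet configuration as a linear system, can be sketched as a least-squares fit of superposed magnet contributions to a target force-displacement curve. The 1/d^4 dipole-force law, geometry, and target curve below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

d = np.linspace(1e-3, 10e-3, 50)            # displacements [m]
target = 0.2 - 20.0 * d                     # desired F(d): a linear spring [N]

offsets = np.linspace(0, 8e-3, 12)          # candidate magnet positions [m]
gap = 2e-3                                  # minimum magnet separation [m]
# Basis matrix: column j = force-vs-displacement curve of unit-strength
# magnet j (dipole-like 1/d**4 decay assumed).
A = 1.0 / (d[:, None] + offsets[None, :] + gap) ** 4

# Solve the linear system in the least-squares sense; negative strengths
# correspond to flipped magnet polarity (attraction instead of repulsion).
strengths, *_ = np.linalg.lstsq(A, target, rcond=None)
print("max force error [N]:", np.abs(A @ strengths - target).max())
```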

pi

DOI [BibTex]



On the Integration of Optical Flow and Action Recognition

Sevilla-Lara, L., Liao, Y., Guney, F., Jampani, V., Geiger, A., Black, M. J.

In German Conference on Pattern Recognition (GCPR), October 2018 (inproceedings)

Abstract
Most of the top performing action recognition methods use optical flow as a "black box" input. Here we take a deeper look at the combination of flow and action recognition, and investigate why optical flow is helpful, what makes a flow method good for action recognition, and how we can make it better. In particular, we investigate the impact of different flow algorithms and input transformations to better understand how these affect a state-of-the-art action recognition method. Furthermore, we fine-tune two neural-network flow methods end-to-end on the most widely used action recognition dataset (UCF101). Based on these experiments, we make the following five observations: 1) optical flow is useful for action recognition because it is invariant to appearance, 2) optical flow methods are optimized to minimize end-point-error (EPE), but the EPE of current methods is not well correlated with action recognition performance, 3) for the flow methods tested, accuracy at boundaries and at small displacements is most correlated with action recognition performance, 4) training optical flow to minimize classification error instead of minimizing EPE improves recognition performance, and 5) optical flow learned for the task of action recognition differs from traditional optical flow especially inside the human body and at the boundary of the body. These observations may encourage optical flow researchers to look beyond EPE as a goal and guide action recognition researchers to seek better motion cues, leading to a tighter integration of the optical flow and action recognition communities.
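
Observation 4 amounts to backpropagating the classification loss through the flow network instead of training it on EPE. A minimal sketch with toy stand-in networks (FlowNetTiny and ActionNetTiny are not the models used in the paper):

```python
import torch
import torch.nn as nn

class FlowNetTiny(nn.Module):            # stand-in flow network
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1))      # 2-channel flow (u, v)
    def forward(self, frames):           # frames: two RGB images, (B, 6, H, W)
        return self.net(frames)

class ActionNetTiny(nn.Module):          # stand-in action classifier
    def __init__(self, n_classes=101):   # e.g., UCF101
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))
    def forward(self, flow):
        return self.net(flow)

flow_net, action_net = FlowNetTiny(), ActionNetTiny()
opt = torch.optim.Adam(flow_net.parameters(), lr=1e-4)  # fine-tune flow only

frames = torch.randn(4, 6, 64, 64)
labels = torch.randint(0, 101, (4,))
logits = action_net(flow_net(frames))    # flow is an internal representation
loss = nn.functional.cross_entropy(logits, labels)  # classification loss, not EPE
opt.zero_grad()
loss.backward()
opt.step()
```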

am ps

arXiv [BibTex]



Towards Robust Visual Odometry with a Multi-Camera System

Liu, P., Geppert, M., Heng, L., Sattler, T., Geiger, A., Pollefeys, M.

In International Conference on Intelligent Robots and Systems (IROS), October 2018 (inproceedings)

Abstract
We present a visual odometry (VO) algorithm for a multi-camera system and robust operation in challenging environments. Our algorithm consists of a pose tracker and a local mapper. The tracker estimates the current pose by minimizing photometric errors between the most recent keyframe and the current frame. The mapper initializes the depths of all sampled feature points using plane-sweeping stereo. To reduce pose drift, a sliding window optimizer is used to refine poses and structure jointly. Our formulation is flexible enough to support an arbitrary number of stereo cameras. We evaluate our algorithm thoroughly on five datasets. The datasets were captured in different conditions: daytime, night-time with near-infrared (NIR) illumination and night-time without NIR illumination. Experimental results show that a multi-camera setup makes the VO more robust to challenging environments, especially night-time conditions, in which a single stereo configuration fails easily due to the lack of features.

avg

pdf [BibTex]



Deep Neural Network-based Cooperative Visual Tracking through Multiple Micro Aerial Vehicles

Price, E., Lawless, G., Ludwig, R., Martinovic, I., Buelthoff, H. H., Black, M. J., Ahmad, A.

IEEE Robotics and Automation Letters, 3, pages: 3193-3200, IEEE, October 2018 (article)

Abstract
Multi-camera tracking of humans and animals in outdoor environments is a relevant and challenging problem. Our approach to it involves a team of cooperating micro aerial vehicles (MAVs) with on-board cameras only. DNNs often fail on objects that are small in scale or far away from the camera, which are typical characteristics of a scenario with aerial robots. Thus, the core problem addressed in this paper is how to achieve on-board, online, continuous and accurate vision-based detections using DNNs for visual person tracking through MAVs. Our solution leverages cooperation among multiple MAVs and active selection of the most informative regions of the image. We demonstrate the efficiency of our approach through simulations with up to 16 robots and real robot experiments involving two aerial robots tracking a person, while maintaining an active perception-driven formation. ROS-based source code is provided for the benefit of the community.

ps

link (url) DOI Project Page [BibTex]



Temporal Interpolation as an Unsupervised Pretraining Task for Optical Flow Estimation

Wulff, J., Black, M. J.

In German Conference on Pattern Recognition (GCPR), October 2018 (inproceedings)

Abstract
The difficulty of annotating training data is a major obstacle to using CNNs for low-level tasks in video. Synthetic data often does not generalize to real videos, while unsupervised methods require heuristic losses. Proxy tasks can overcome these issues, and start by training a network for a task for which annotation is easier or which can be trained unsupervised. The trained network is then fine-tuned for the original task using small amounts of ground truth data. Here, we investigate frame interpolation as a proxy task for optical flow. Using real movies, we train a CNN unsupervised for temporal interpolation. Such a network implicitly estimates motion, but cannot handle untextured regions. By fine-tuning on small amounts of ground truth flow, the network can learn to fill in homogeneous regions and compute full optical flow fields. Using this unsupervised pre-training, our network outperforms similar architectures that were trained supervised using synthetic optical flow.
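
A minimal sketch of the proxy task itself, assuming a toy CNN and an L1 photometric loss (both illustrative):

```python
import torch
import torch.nn as nn

# Predict the middle frame of a triplet from its two neighbors; the
# reconstruction loss requires no human annotation.
interp = nn.Sequential(
    nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1))
opt = torch.optim.Adam(interp.parameters(), lr=1e-4)

# Frames t-1, t, t+1 sampled from unlabeled real video (random here).
f_prev, f_mid, f_next = (torch.rand(8, 3, 64, 64) for _ in range(3))
pred_mid = interp(torch.cat([f_prev, f_next], dim=1))
loss = (pred_mid - f_mid).abs().mean()   # unsupervised L1 loss
opt.zero_grad()
loss.backward()
opt.step()
# The pretrained trunk is then fine-tuned on small amounts of
# ground-truth optical flow.
```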

ps

pdf [BibTex]



Generating 3D Faces using Convolutional Mesh Autoencoders

Ranjan, A., Bolkart, T., Sanyal, S., Black, M. J.

European Conference on Computer Vision (ECCV), September 2018 (conference)

Abstract
Learned 3D representations of human faces are useful for computer vision problems such as 3D face tracking and reconstruction from images, as well as graphics applications such as character generation and animation. Traditional models learn a latent representation of a face using linear subspaces or higher-order tensor generalizations. Due to this linearity, they cannot capture extreme deformations and non-linear expressions. To address this, we introduce a versatile model that learns a non-linear representation of a face using spectral convolutions on a mesh surface. We introduce mesh sampling operations that enable a hierarchical mesh representation that captures non-linear variations in shape and expression at multiple scales within the model. In a variational setting, our model samples diverse realistic 3D faces from a multivariate Gaussian distribution. Our training data consists of 20,466 meshes of extreme expressions captured over 12 different subjects. Despite limited training data, our trained model outperforms state-of-the-art face models with 50% lower reconstruction error, while using 75% fewer parameters. We also show that replacing the expression space of an existing state-of-the-art face model with our autoencoder achieves a lower reconstruction error. Our data, model and code are available at http://coma.is.tue.mpg.de/.

ps

code paper supplementary link (url) [BibTex]


Learning Priors for Semantic 3D Reconstruction

Cherabier, I., Schönberger, J., Oswald, M., Pollefeys, M., Geiger, A.

European Conference on Computer Vision (ECCV), September 2018 (conference)

Abstract
We present a novel semantic 3D reconstruction framework which embeds variational regularization into a neural network. Our network performs a fixed number of unrolled multi-scale optimization iterations with shared interaction weights. In contrast to existing variational methods for semantic 3D reconstruction, our model is end-to-end trainable and captures more complex dependencies between the semantic labels and the 3D geometry. Compared to previous learning-based approaches to 3D reconstruction, we integrate powerful long-range dependencies using variational coarse-to-fine optimization. As a result, our network architecture requires only a moderate number of parameters while keeping a high level of expressiveness which enables learning from very little data. Experiments on real and synthetic datasets demonstrate that our network achieves higher accuracy compared to a purely variational approach while at the same time requiring two orders of magnitude fewer iterations to converge. Moreover, our approach handles ten times more semantic class labels using the same computational resources.
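
The unrolling can be sketched as a module that applies the same learned update block a fixed number of times; the residual 3D-convolution update and channel counts below are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class UnrolledRefiner(nn.Module):
    """A fixed number of optimization iterations, each implemented by the
    same shared-weight update block, applied to a volume of semantic
    scores."""
    def __init__(self, channels=8, iters=10):
        super().__init__()
        self.iters = iters
        self.update = nn.Sequential(              # one shared update step
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1))

    def forward(self, x):                         # x: (B, C, D, H, W)
        for _ in range(self.iters):               # unrolled, end-to-end trainable
            x = x + self.update(x)                # residual, descent-like update
        return x

vol = torch.randn(1, 8, 16, 32, 32)
print(UnrolledRefiner()(vol).shape)
```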

avg

pdf suppmat Project Page Video [BibTex]



Part-Aligned Bilinear Representations for Person Re-identification

Suh, Y., Wang, J., Tang, S., Mei, T., Lee, K. M.

European Conference on Computer Vision (ECCV), September 2018 (conference)

Abstract
Comparing the appearance of corresponding body parts is essential for person re-identification. However, body parts are frequently misaligned between detected boxes, due to the detection errors and the pose/viewpoint changes. In this paper, we propose a network that learns a part-aligned representation for person re-identification. Our model consists of a two-stream network, which generates appearance and body part feature maps respectively, and a bilinear-pooling layer that fuses two feature maps to an image descriptor. We show that it results in a compact descriptor, where the inner product between two image descriptors is equivalent to an aggregation of the local appearance similarities of the corresponding body parts, and thereby significantly reduces the part misalignment problem. Our approach is advantageous over other pose-guided representations by learning part descriptors optimal for person re-identification. Training the network does not require any part annotation on the person re-identification dataset. Instead, we simply initialize the part sub-stream using a pre-trained sub-network of an existing pose estimation network and train the whole network to minimize the re-identification loss. We validate the effectiveness of our approach by demonstrating its superiority over the state-of-the-art methods on the standard benchmark datasets including Market-1501, CUHK03, CUHK01 and DukeMTMC, and standard video dataset MARS.
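
The descriptor construction and the claimed inner-product equivalence can be verified numerically; the feature maps below are random stand-ins for the two-stream outputs:

```python
import numpy as np

def bilinear_descriptor(A, P):
    # A: (H*W, d_a) appearance features, P: (H*W, d_p) part features.
    D = A[:, :, None] * P[:, None, :]    # outer product at each location
    return D.mean(axis=0).ravel()        # (d_a * d_p,) image descriptor

rng = np.random.default_rng(0)
A1, P1 = rng.standard_normal((64, 16)), rng.standard_normal((64, 8))
A2, P2 = rng.standard_normal((64, 16)), rng.standard_normal((64, 8))

f1, f2 = bilinear_descriptor(A1, P1), bilinear_descriptor(A2, P2)
direct = f1 @ f2
# Same value as aggregating local appearance similarities weighted by
# part-map similarity over all location pairs:
aggregated = np.mean([(A1[i] @ A2[j]) * (P1[i] @ P2[j])
                      for i in range(64) for j in range(64)])
print(np.allclose(direct, aggregated))   # True
```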

ps

pdf supplementary [BibTex]



Learning Human Optical Flow

Ranjan, A., Romero, J., Black, M. J.

In 29th British Machine Vision Conference, September 2018 (inproceedings)

Abstract
The optical flow of humans is well known to be useful for the analysis of human action. Given this, we devise an optical flow algorithm specifically for human motion and show that it is superior to generic flow methods. Designing a method by hand is impractical, so we develop a new training database of image sequences with ground truth optical flow. For this we use a 3D model of the human body and motion capture data to synthesize realistic flow fields. We then train a convolutional neural network to estimate human flow fields from pairs of images. Since many applications in human motion analysis depend on speed, and we anticipate mobile applications, we base our method on SpyNet with several modifications. We demonstrate that our trained network is more accurate than a wide range of top methods on held-out test data and that it generalizes well to real image sequences. When combined with a person detector/tracker, the approach provides a full solution to the problem of 2D human flow estimation. Both the code and the dataset are available for research.

ps

video code pdf link (url) [BibTex]



Human Motion Parsing by Hierarchical Dynamic Clustering

Zhang, Y., Tang, S., Sun, H., Neumann, H.

British Machine Vision Conference, September 2018 (conference)

Abstract
Parsing continuous human motion into meaningful segments plays an essential role in various applications. In this work, we propose a hierarchical dynamic clustering framework to derive action clusters from a sequence of local features in an unsupervised bottom-up manner. We systematically investigate the modules in this framework and particularly propose diverse temporal pooling schemes, in order to realize accurate temporal action localization. We demonstrate our method on two motion parsing tasks: temporal action segmentation and abnormal behavior detection. The experimental results indicate that the proposed framework is significantly more effective than the other related state-of-the-art methods on several datasets.

ps

pdf [BibTex]



Neural Body Fitting: Unifying Deep Learning and Model-Based Human Pose and Shape Estimation

(Best Student Paper Award)

Omran, M., Lassner, C., Pons-Moll, G., Gehler, P. V., Schiele, B.

3DV, September 2018 (conference)

Abstract
Direct prediction of 3D body pose and shape remains a challenge even for highly parameterized deep learning models. Mapping from the 2D image space to the prediction space is difficult: perspective ambiguities make the loss function noisy and training data is scarce. In this paper, we propose a novel approach (Neural Body Fitting (NBF)). It integrates a statistical body model within a CNN, leveraging reliable bottom-up semantic body part segmentation and robust top-down body model constraints. NBF is fully differentiable and can be trained using 2D and 3D annotations. In detailed experiments, we analyze how the components of our model affect performance, especially the use of part segmentations as an explicit intermediate representation, and present a robust, efficiently trainable framework for 3D human pose estimation from 2D images with competitive results on standard benchmarks. Code is available at https://github.com/mohomran/neural_body_fitting

ps

arXiv code [BibTex]


ClusterNet: Instance Segmentation in RGB-D Images

Shao, L., Tian, Y., Bohg, J.

arXiv, September 2018, Submitted to ICRA'19 (article) Submitted

Abstract
We propose a method for instance-level segmentation that uses RGB-D data as input and provides detailed information about the location, geometry and number of individual objects in the scene. This level of understanding is fundamental for autonomous robots. It enables safe and robust decision-making under the large uncertainty of the real world. In our model, we propose to use the first and second order moments of the object occupancy function to represent an object instance. We train an hourglass Deep Neural Network (DNN) where each pixel in the output votes for the 3D position of the corresponding object center and for the object's size and pose. The final instance segmentation is achieved through clustering in the space of moments. The object-centric training loss is defined on the output of the clustering. Our method outperforms the state-of-the-art instance segmentation method on our synthesized dataset. We show that our method generalizes well on real-world data achieving visually better segmentation results.
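
A sketch of the final clustering step, using scikit-learn's mean shift on synthetic per-pixel center votes (the real votes come from the hourglass DNN, and only first-order moments are shown):

```python
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(0)
true_centers = np.array([[0.2, 0.1, 0.9], [-0.3, 0.0, 1.4]])
# Noisy per-pixel votes for the 3D object centers (synthetic stand-ins).
votes = np.concatenate([c + 0.02 * rng.standard_normal((500, 3))
                        for c in true_centers])

ms = MeanShift(bandwidth=0.1).fit(votes)
labels = ms.labels_                       # instance id per pixel
print("instances found:", len(ms.cluster_centers_))
print(ms.cluster_centers_.round(2))       # recovered object centers
```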

am

link (url) [BibTex]



Unsupervised Learning of Multi-Frame Optical Flow with Occlusions

Janai, J., Güney, F., Ranjan, A., Black, M. J., Geiger, A.

European Conference on Computer Vision (ECCV), September 2018 (conference)

avg ps

pdf suppmat [BibTex]



Leveraging Contact Forces for Learning to Grasp

Merzic, H., Bogdanovic, M., Kappler, D., Righetti, L., Bohg, J.

arXiv, September 2018, Submitted to ICRA'19 (article) Submitted

Abstract
Grasping objects under uncertainty remains an open problem in robotics research. This uncertainty is often due to noisy or partial observations of the object pose or shape. To enable a robot to react appropriately to unforeseen effects, it is crucial that it continuously takes sensor feedback into account. While visual feedback is important for inferring a grasp pose and reaching for an object, contact feedback offers valuable information during manipulation and grasp acquisition. In this paper, we use model-free deep reinforcement learning to synthesize control policies that exploit contact sensing to generate robust grasping under uncertainty. We demonstrate our approach on a multi-fingered hand that exhibits more complex finger coordination than the commonly used two-fingered grippers. We conduct extensive experiments in order to assess the performance of the learned policies, with and without contact sensing. While it is possible to learn grasping policies without contact sensing, our results suggest that contact feedback allows for a significant improvement of grasping robustness under object pose uncertainty and for objects with a complex shape.

am

video arXiv [BibTex]


SphereNet: Learning Spherical Representations for Detection and Classification in Omnidirectional Images

Coors, B., Condurache, A. P., Geiger, A.

European Conference on Computer Vision (ECCV), September 2018 (conference)

Abstract
Omnidirectional cameras offer great benefits over classical cameras wherever a wide field of view is essential, such as in virtual reality applications or in autonomous robots. Unfortunately, standard convolutional neural networks are not well suited for this scenario as the natural projection surface is a sphere which cannot be unwrapped to a plane without introducing significant distortions, particularly in the polar regions. In this work, we present SphereNet, a novel deep learning framework which encodes invariance against such distortions explicitly into convolutional neural networks. Towards this goal, SphereNet adapts the sampling locations of the convolutional filters, effectively reversing distortions, and wraps the filters around the sphere. By building on regular convolutions, SphereNet enables the transfer of existing perspective convolutional neural network models to the omnidirectional case. We demonstrate the effectiveness of our method on the tasks of image classification and object detection, exploiting two newly created semi-synthetic and real-world omnidirectional datasets.
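
The adapted sampling locations can be computed with the standard inverse gnomonic projection; a sketch for one 3x3 filter on an equirectangular image (the filter span, image size, and tangent point are illustrative, and this is not the authors' code):

```python
import numpy as np

def sphere_offsets(lat0, lon0, r=0.01):
    """Image sampling locations (radians) of a 3x3 filter whose taps lie on
    the tangent plane at (lat0, lon0); inverse gnomonic projection."""
    xs, ys = np.meshgrid([-r, 0, r], [-r, 0, r])   # tangent-plane taps
    rho = np.hypot(xs, ys)
    c = np.arctan(rho)
    safe_rho = np.where(rho == 0, 1.0, rho)        # avoid 0/0 at the center
    lat = np.arcsin(np.cos(c) * np.sin(lat0)
                    + ys * np.sin(c) * np.cos(lat0) / safe_rho)
    lon = lon0 + np.arctan2(
        xs * np.sin(c),
        rho * np.cos(lat0) * np.cos(c) - ys * np.sin(lat0) * np.sin(c))
    return lat, lon

H, W = 256, 512                                    # equirectangular image
lat, lon = sphere_offsets(lat0=np.deg2rad(70), lon0=0.0)  # near the pole
rows = (0.5 - lat / np.pi) * H                     # latitude  -> row
cols = (lon / (2 * np.pi) + 0.5) % 1.0 * W         # longitude -> column (wraps)
print(np.c_[rows.ravel(), cols.ravel()].round(1))  # where the 9 taps sample
```

Near the pole the column spread grows large, which is exactly the distortion the adapted sampling compensates for.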

avg

pdf suppmat [BibTex]



Deep Directional Statistics: Pose Estimation with Uncertainty Quantification

Prokudin, S., Gehler, P., Nowozin, S.

European Conference on Computer Vision (ECCV), September 2018 (conference)

Abstract
Modern deep learning systems successfully solve many perception tasks such as object pose estimation when the input image is of high quality. However, in challenging imaging conditions such as on low resolution images or when the image is corrupted by imaging artifacts, current systems degrade considerably in accuracy. While a loss in performance is unavoidable, we would like our models to quantify their uncertainty in order to achieve robustness against images of varying quality. Probabilistic deep learning models combine the expressive power of deep learning with uncertainty quantification. In this paper, we propose a novel probabilistic deep learning model for the task of angular regression. Our model uses von Mises distributions to predict a distribution over object pose angle. Whereas a single von Mises distribution makes strong assumptions about the shape of the distribution, we extend the basic model to predict a mixture of von Mises distributions. We show how to learn a mixture model using a finite and infinite number of mixture components. Our model allows for likelihood-based training and efficient inference at test time. We demonstrate on a number of challenging pose estimation datasets that our model produces calibrated probability predictions and competitive or superior point estimates compared to the current state-of-the-art.
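
A sketch of the basic (single-component) training loss, the von Mises negative log-likelihood; the output head is a toy stand-in, and the mixture variant replaces it with several weighted components:

```python
import math
import torch

def von_mises_nll(mu, kappa, theta):
    """Negative log-likelihood of angles theta under vM(mu, kappa);
    torch.special.i0 is the modified Bessel function in the normalizer."""
    log_norm = torch.log(2 * math.pi * torch.special.i0(kappa))
    return (log_norm - kappa * torch.cos(theta - mu)).mean()

# Toy head: (cos, sin) for the mean angle plus a positive concentration.
raw = torch.randn(32, 3, requires_grad=True)     # network output stand-in
mu = torch.atan2(raw[:, 1], raw[:, 0])
kappa = torch.nn.functional.softplus(raw[:, 2])  # keep kappa > 0
theta_gt = torch.rand(32) * 2 * math.pi          # ground-truth angles
loss = von_mises_nll(mu, kappa, theta_gt)
loss.backward()
# Mixture extension: predict (mu_k, kappa_k, w_k) and minimize
# -log sum_k w_k * vM(theta; mu_k, kappa_k).
```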

ps

code pdf [BibTex]



Recovering Accurate 3D Human Pose in The Wild Using IMUs and a Moving Camera

Marcard, T. V., Henschel, R., Black, M. J., Rosenhahn, B., Pons-Moll, G.

European Conference on Computer Vision (ECCV), September 2018 (conference)

Abstract
In this work, we propose a method that combines a single hand-held camera and a set of Inertial Measurement Units (IMUs) attached at the body limbs to estimate accurate 3D poses in the wild. This poses many new challenges: the moving camera, heading drift, cluttered background, occlusions and many people visible in the video. We associate 2D pose detections in each image to the corresponding IMU-equipped persons by solving a novel graph based optimization problem that forces 3D to 2D coherency within a frame and across long range frames. Given associations, we jointly optimize the pose of a statistical body model, the camera pose and heading drift using a continuous optimization framework. We validated our method on the TotalCapture dataset, which provides video and IMU synchronized with ground truth. We obtain an accuracy of 26 mm, which makes it accurate enough to serve as a benchmark for image-based 3D pose estimation in the wild. Using our method, we recorded 3D Poses in the Wild (3DPW), a new dataset consisting of more than 51,000 frames with accurate 3D pose in challenging sequences, including walking in the city, going up-stairs, having coffee or taking the bus. We make the reconstructed 3D poses, video, IMU and 3D models available for research purposes at http://virtualhumans.mpi-inf.mpg.de/3DPW.

ps

pdf SupMat data project [BibTex]



Learning an Infant Body Model from RGB-D Data for Accurate Full Body Motion Analysis

Hesse, N., Pujades, S., Romero, J., Black, M. J., Bodensteiner, C., Arens, M., Hofmann, U. G., Tacke, U., Hadders-Algra, M., Weinberger, R., Muller-Felber, W., Schroeder, A. S.

In Int. Conf. on Medical Image Computing and Computer Assisted Intervention (MICCAI), September 2018 (inproceedings)

Abstract
Infant motion analysis enables early detection of neurodevelopmental disorders like cerebral palsy (CP). Diagnosis, however, is challenging, requiring expert human judgement. An automated solution would be beneficial but requires the accurate capture of 3D full-body movements. To that end, we develop a non-intrusive, low-cost, lightweight acquisition system that captures the shape and motion of infants. Going beyond work on modeling adult body shape, we learn a 3D Skinned Multi-Infant Linear body model (SMIL) from noisy, low-quality, and incomplete RGB-D data. We demonstrate the capture of shape and motion with 37 infants in a clinical environment. Quantitative experiments show that SMIL faithfully represents the data and properly factorizes the shape and pose of the infants. With a case study based on general movement assessment (GMA), we demonstrate that SMIL captures enough information to allow medical assessment. SMIL provides a new tool and a step towards a fully automatic system for GMA.

ps

pdf Project page video [BibTex]



Diffusion Measurements of Swimming Enzymes with Fluorescence Correlation Spectroscopy

Günther, J., Börsch, M., Fischer, P.

Accounts of Chemical Research, August 2018, PMID: 30160941 (article)

Abstract
Conspectus: Self-propelled chemical motors are chemically powered micro- or nanosized swimmers. The energy required for these motors’ active motion derives from catalytic chemical reactions and the transformation of a fuel dissolved in the solution. While self-propulsion is now well established for larger particles, it is still unclear if enzymes, nature’s nanometer-sized catalysts, are potentially also self-powered nanomotors. Because of its small size, any increase in an enzyme’s diffusion due to active self-propulsion must be observed on top of the enzyme’s passive Brownian motion, which dominates at this scale. Fluorescence correlation spectroscopy (FCS) is a sensitive method to quantify the diffusion properties of single fluorescently labeled molecules in solution. FCS experiments have shown a general increase in the diffusion constant of a number of enzymes when the enzyme is catalytically active. Diffusion enhancements after addition of the enzyme’s substrate (and sometimes its inhibitor) of up to 80% have been reported, which is at least 1 order of magnitude higher than what theory would predict. However, many factors contribute to the FCS signal and in particular the shape of the autocorrelation function, which underlies diffusion measurements by fluorescence correlation spectroscopy. These effects need to be considered to establish if and by how much the catalytic activity changes an enzyme’s diffusion. We carefully review phenomena that can play a role in FCS experiments and the determination of enzyme diffusion, including the dissociation of enzyme oligomers upon interaction with the substrate, surface binding of the enzyme to glass during the experiment, conformational changes upon binding, and quenching of the fluorophore. We show that these effects can cause changes in the FCS signal that behave similar to an increase in diffusion. However, in the case of the enzymes F1-ATPase and alkaline phosphatase, we demonstrate that there is no measurable increase in enzyme diffusion. Rather, dissociation and conformational changes account for the changes in the FCS signal in the former and fluorophore quenching in the latter. Within the experimental accuracy of our FCS measurements, we do not observe any change in diffusion due to activity for the enzymes we have investigated. We suggest useful control experiments and additional tests for future FCS experiments that should help establish if the observed diffusion enhancement is real or if it is due to an experimental or data analysis artifact. We show that fluorescence lifetime and mean intensity measurements are essential in order to identify the nature of the observed changes in the autocorrelation function. While it is clear from theory that chemically active enzymes should also act as self-propelled nanomotors, our FCS measurements show that the associated increase in diffusion is much smaller than previously reported. Further experiments are needed to quantify the contribution of the enzymes’ catalytic activity to their self-propulsion. We hope that our findings help to establish a useful protocol for future FCS studies in this field and help establish by how much the diffusion of an enzyme is enhanced through catalytic activity.
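
For reference, the diffusion time is extracted by fitting the textbook single-component 3D diffusion model to the measured autocorrelation; a sketch on synthetic data (model parameters and the aspect ratio are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def g_diff(tau, N, tau_d, s=5.0):
    """Standard FCS autocorrelation for 3D diffusion; N: mean number of
    molecules in focus, tau_d: diffusion time, s: focal aspect ratio."""
    return (1.0 / N) / ((1 + tau / tau_d) * np.sqrt(1 + tau / (s**2 * tau_d)))

rng = np.random.default_rng(0)
tau = np.logspace(-6, 0, 200)                     # lag times [s]
data = g_diff(tau, N=2.0, tau_d=3e-4) * (1 + 0.02 * rng.standard_normal(200))

(N_fit, tau_d_fit), _ = curve_fit(g_diff, tau, data, p0=[1.0, 1e-4])
print(f"tau_D = {tau_d_fit:.2e} s")  # genuinely enhanced diffusion would shorten tau_D
```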

pf

link (url) DOI [BibTex]



Uphill production of dihydrogen by enzymatic oxidation of glucose without an external energy source

Suraniti, E., Merzeau, P., Roche, J., Gounel, S., Mark, A. G., Fischer, P., Mano, N., Kuhn, A.

Nature Communications, 9(1):3229, August 2018 (article)

Abstract
Chemical systems do not allow the coupling of energy from several simple reactions to drive a subsequent reaction, which takes place in the same medium and leads to a product with a higher energy than the one released during the first reaction. Gibbs energy considerations thus are not favorable to drive e.g., water splitting by the direct oxidation of glucose as a model reaction. Here, we show that it is nevertheless possible to carry out such an energetically uphill reaction, if the electrons released in the oxidation reaction are temporarily stored in an electromagnetic system, which is then used to raise the electrons' potential energy so that they can power the electrolysis of water in a second step. We thereby demonstrate the general concept that lower energy delivering chemical reactions can be used to enable the formation of higher energy consuming reaction products in a closed system.

pf

link (url) DOI [BibTex]



Statistical Modelling of Fingertip Deformations and Contact Forces during Tactile Interaction

Gueorguiev, D., Tzionas, D., Pacchierotti, C., Black, M. J., Kuchenbecker, K. J.

Extended abstract presented at the 2018 CSF Hand, Brain and Technology conference (HBT), August 2018 (misc)

Abstract
Little is known about the shape and properties of the human finger during haptic interaction, even though these are essential parameters for controlling wearable finger devices and delivering realistic tactile feedback. This study explores a framework for four-dimensional scanning (3D over time) and modelling of finger-surface interactions, aiming to capture the motion and deformations of the entire finger with high resolution while simultaneously recording the interfacial forces at the contact. Preliminary results show that when the fingertip is actively pressing a rigid surface, it undergoes lateral expansion and proximal/distal bending, deformations that cannot be captured by imaging of the contact area alone. Therefore, we are currently capturing a dataset that will enable us to create a statistical model of the finger’s deformations and predict the contact forces induced by tactile interaction with objects. This technique could improve current methods for tactile rendering in wearable haptic devices, which rely on general physical modelling of the skin’s compliance, by developing an accurate model of the variations in finger properties across the human population. The availability of such a model will also enable a more realistic simulation of virtual finger behaviour in virtual reality (VR) environments, as well as the ability to accurately model a specific user’s finger from lower resolution data. It may also be relevant for inferring the physical properties of the underlying tissue from observing the surface mesh deformations, as previously shown for body tissues.

hi

[BibTex]



Learning-Based Robust Model Predictive Control with State-Dependent Uncertainty

Soloperto, R., Müller, M. A., Trimpe, S., Allgöwer, F.

In 6th IFAC Conference on Nonlinear Model Predictive Control, Madison, Wisconsin, USA, August 2018 (inproceedings) Accepted

ics

[BibTex]



Decentralized MPC based Obstacle Avoidance for Multi-Robot Target Tracking Scenarios

Tallamraju, R., Rajappa, S., Black, M. J., Karlapalem, K., Ahmad, A.

The 16th IEEE International Symposium on Safety, Security, and Rescue Robotics, August 2018 (conference) Accepted

ps

Project Page [BibTex]



Gait learning for soft microrobots controlled by light fields

Rohr, A. V., Trimpe, S., Marco, A., Fischer, P., Palagi, S.

In Proceedings of the International Conference on Intelligent Robots and Systems (IROS), July 2018 (inproceedings) To be published

Abstract
Soft microrobots based on photoresponsive materials and controlled by light fields can generate a variety of different gaits. This inherent flexibility can be exploited to maximize their locomotion performance in a given environment and used to adapt them to changing environments. However, because of the lack of accurate locomotion models, and given the intrinsic variability among microrobots, analytical control design is not possible. Common data-driven approaches, on the other hand, require running prohibitive numbers of experiments and lead to very sample-specific results. Here we propose a probabilistic learning approach for light-controlled soft microrobots based on Bayesian Optimization (BO) and Gaussian Processes (GPs). The proposed approach results in a learning scheme that is highly data-efficient, enabling gait optimization with a limited experimental budget, and robust against differences among microrobot samples. These features are obtained by designing the learning scheme through the comparison of different GP priors and BO settings on a semisynthetic data set. The developed learning scheme is validated in microrobot experiments, resulting in a 115% improvement in a microrobot’s locomotion performance with an experimental budget of only 20 tests. These encouraging results lead the way toward self-adaptive microrobotic systems based on light-controlled soft microrobots and probabilistic learning control.
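
A minimal sketch of such a loop, with a toy 1-D gait parameter, a small numpy GP, and an upper-confidence-bound acquisition (kernel, objective, and settings are illustrative assumptions, not the tuned scheme from the paper):

```python
import numpy as np

def k(a, b, ls=0.15):                    # squared-exponential kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def speed(x, rng):                       # stand-in for one microrobot test
    return np.sin(6 * x) * np.exp(-x) + 0.05 * rng.standard_normal(x.shape)

rng = np.random.default_rng(1)
X = np.array([0.2, 0.8])                 # two initial experiments
y = speed(X, rng)
grid = np.linspace(0, 1, 200)
for _ in range(18):                      # total experimental budget: 20 tests
    K = k(X, X) + 1e-4 * np.eye(len(X))
    Ks = k(grid, X)
    mu = Ks @ np.linalg.solve(K, y)                       # GP posterior mean
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    ucb = mu + 2.0 * np.sqrt(np.maximum(var, 0))          # acquisition
    x_next = grid[np.argmax(ucb)]        # next gait parameter to test
    X = np.append(X, x_next)
    y = np.append(y, speed(np.array([x_next]), rng))
print("best gait parameter:", X[np.argmax(y)], "speed:", y.max())
```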

ics pf

arXiv [BibTex]



A machine from machines

Fischer, P.

Nature Physics, July 2018 (article)

Abstract
Building spinning microrotors that self-assemble and synchronize to form a gear sounds like an impossible feat. However, it has now been achieved using only a single type of building block -- a colloid that self-propels.

pf

link (url) DOI [BibTex]



Chemotaxis of Active Janus Nanoparticles

Popescu, M. N., Uspal, W. E., Bechinger, C., Fischer, P.

Nano Letters, July 2018, PMID: 30047271 (article)

Abstract
While colloids and molecules in solution exhibit passive Brownian motion, particles that are partially covered with a catalyst, which promotes the transformation of a fuel dissolved in the solution, can actively move. These active Janus particles are known as “chemical nanomotors” or self-propelling “swimmers” and have been realized with a range of catalysts, sizes, and particle geometries. Because their active translation depends on the fuel concentration, one expects that active colloidal particles should also be able to swim toward a fuel source. Synthesizing and engineering nanoparticles with distinct chemotactic properties may enable important developments, such as particles that can autonomously swim along a pH gradient toward a tumor. Chemotaxis requires that the particles possess an active coupling of their orientation to a chemical gradient. In this Perspective we provide a simple, intuitive description of the underlying mechanisms for chemotaxis, as well as the means to analyze and classify active particles that can show positive or negative chemotaxis. The classification provides guidance for engineering a specific response and is a useful organizing framework for the quantitative analysis and modeling of chemotactic behaviors. Chemotaxis is emerging as an important focus area in the field of active colloids and promises a number of fascinating applications for nanoparticles and particle-based delivery.

pf

link (url) DOI [BibTex]



Kernel Recursive ABC: Point Estimation with Intractable Likelihood

Kajihara, T., Kanagawa, M., Yamazaki, K., Fukumizu, K.

Proceedings of the 35th International Conference on Machine Learning, pages: 2405-2414, PMLR, July 2018 (proceedings)

Abstract
We propose a novel approach to parameter estimation for simulator-based statistical models with intractable likelihood. Our proposed method involves recursive application of kernel ABC and kernel herding to the same observed data. We provide a theoretical explanation regarding why the approach works, showing (for the population setting) that, under a certain assumption, point estimates obtained with this method converge to the true parameter, as recursion proceeds. We have conducted a variety of numerical experiments, including parameter estimation for a real-world pedestrian flow simulator, and show that in most cases our method outperforms existing approaches.
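
A heavily simplified sketch of the recursion, alternating a kernel ABC weighting step with kernel herding on a 1-D toy simulator (the kernels, bandwidths, grid-based herding, and regularization are illustrative approximations of the method):

```python
import numpy as np

def k(a, b, s=0.5):                          # Gaussian kernel
    return np.exp(-(a - b) ** 2 / (2 * s ** 2))

def simulate(theta, rng, n=50):              # hypothetical simulator
    return rng.normal(theta, 1.0, n).mean()  # summary statistic: the mean

rng = np.random.default_rng(0)
y_obs = 1.5                                  # observed summary statistic
thetas = rng.uniform(-5, 5, 200)             # initial particles from the prior
cand = np.linspace(-5, 5, 1000)              # candidate grid for herding
for _ in range(4):                           # recursion: ABC step + herding step
    sims = np.array([simulate(t, rng) for t in thetas])
    # Kernel ABC: weights of the approximate posterior mean embedding.
    G = k(sims[:, None], sims[None, :])
    w = np.linalg.solve(G + len(thetas) * 1e-3 * np.eye(len(thetas)),
                        k(sims, y_obs))
    # Kernel herding: greedily draw a new particle set from the embedding.
    emb = k(cand[:, None], thetas[None, :]) @ w
    new, running = [], np.zeros_like(cand)
    for t in range(len(thetas)):
        j = int(np.argmax(emb - running / (t + 1)))
        new.append(cand[j])
        running += k(cand, cand[j])
    thetas = np.array(new)
print("point estimate:", thetas.mean())      # concentrates near y_obs here
```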

pn

Paper [BibTex]



Intrinsic disentanglement: an invariance view for deep generative models

Besserve, M., Sun, R., Schölkopf, B.

Workshop on Theoretical Foundations and Applications of Deep Generative Models at ICML, July 2018 (conference)

ei

PDF [BibTex]



Robust Visual Augmented Reality in Robot-Assisted Surgery

Forte, M. P.

Politecnico di Milano, July 2018 (mastersthesis)

Abstract
The broader research objective of this line of research is to test the hypothesis that real-time stereo video analysis and augmented reality can increase safety and task efficiency in robot-assisted surgery. This master’s thesis aims to solve the first step needed to achieve this goal: the creation of a robust system that delivers the envisioned feedback to a surgeon while he or she controls a surgical robot that is identical to those used on human patients. Several approaches for applying augmented reality to da Vinci Surgical Systems have been proposed, but none of them entirely rely on a clinical robot; specifically, they require additional sensors, depend on access to the da Vinci API, are designed for a very specific task, or were tested on systems that are starkly different from those in clinical use. There has also been prior work that presents the real-world camera view and the computer graphics on separate screens, or not in real time. In other scenarios, the digital information is overlaid manually by the surgeons themselves or by computer scientists, rather than being generated automatically in response to the surgeon’s actions. We attempted to overcome the aforementioned constraints by acquiring input signals from the da Vinci stereo endoscope and providing augmented reality to the console in real time (less than 150 ms delay, including the 62 ms of inherent latency of the da Vinci). The potential benefits of the resulting system are broad because it was built to be general, rather than customized for any specific task. The entire platform is compatible with any generation of the da Vinci System and does not require a dVRK (da Vinci Research Kit) or access to the API. Thus, it can be applied to existing da Vinci Systems in operating rooms around the world.

hi

[BibTex]



Learning an Approximate Model Predictive Controller with Guarantees

Hertneck, M., Koehler, J., Trimpe, S., Allgöwer, F.

IEEE Control Systems Letters, 2(3):543-548, July 2018 (article)

Abstract
A supervised learning framework is proposed to approximate a model predictive controller (MPC) with reduced computational complexity and guarantees on stability and constraint satisfaction. The framework can be used for a wide class of nonlinear systems. Any standard supervised learning technique (e.g. neural networks) can be employed to approximate the MPC from samples. In order to obtain closed-loop guarantees for the learned MPC, a robust MPC design is combined with statistical learning bounds. The MPC design ensures robustness to inaccurate inputs within given bounds, and Hoeffding’s Inequality is used to validate that the learned MPC satisfies these bounds with high confidence. The result is a closed-loop statistical guarantee on stability and constraint satisfaction for the learned MPC. The proposed learning-based MPC framework is illustrated on a nonlinear benchmark problem, for which we learn a neural network controller with guarantees.
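
The validation step can be sketched numerically: Hoeffding's inequality dictates how many fresh samples certify, with high confidence, that the learned controller respects the admissible input-error bound. All numbers and the stand-in control laws below are illustrative:

```python
import numpy as np

eta = 0.05        # admissible input error from the robust MPC design
p_target = 0.99   # required probability that the error bound holds
conf = 1e-6       # allowed failure probability of the statistical claim

# One-sided Hoeffding: m samples give confidence 1 - exp(-2*m*t^2) that the
# true satisfaction probability exceeds (empirical rate) - t.
t = 0.005
m = int(np.ceil(np.log(1 / conf) / (2 * t ** 2)))
print("samples needed:", m)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, (m, 2))                     # validation states
u_mpc = -1.4 * x[:, 0] - 0.8 * x[:, 1]             # stand-in exact MPC law
u_learned = u_mpc + 0.01 * rng.standard_normal(m)  # stand-in learned MPC
rate = (np.abs(u_learned - u_mpc) <= eta).mean()
if rate - t >= p_target:
    print(f"validated: P(|u_learned - u_mpc| <= eta) >= {rate - t:.4f}")
else:
    print("validation failed: collect more data or retrain")
```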

ics

PDF DOI [BibTex]



Comparison-Based Random Forests

Haghiri, S., Garreau, D., Luxburg, U. V.

In 35th International Conference on Machine Learning (ICML), July 2018 (inproceedings)

slt

link (url) [BibTex]



Robust Physics-based Motion Retargeting with Realistic Body Shapes

Borno, M. A., Righetti, L., Black, M. J., Delp, S. L., Fiume, E., Romero, J.

Computer Graphics Forum, 37, pages: 6:1-12, July 2018 (article)

Abstract
Motion capture is often retargeted to new, and sometimes drastically different, characters. When the characters take on realistic human shapes, however, we become more sensitive to the motion looking right. This means adapting it to be consistent with the physical constraints imposed by different body shapes. We show how to take realistic 3D human shapes, approximate them using a simplified representation, and animate them so that they move realistically using physically-based retargeting. We develop a novel spacetime optimization approach that learns and robustly adapts physical controllers to new bodies and constraints. The approach automatically adapts the motion of the mocap subject to the body shape of a target subject. This motion respects the physical properties of the new body and every body shape results in a different and appropriate movement. This makes it easy to create a varied set of motions from a single mocap sequence by simply varying the characters. In an interactive environment, successful retargeting requires adapting the motion to unexpected external forces. We achieve robustness to such forces using a novel LQR-tree formulation. We show that the simulated motions look appropriate to each character’s anatomy and their actions are robust to perturbations.

ps mg

pdf video [BibTex]



Probabilistic Recurrent State-Space Models

Doerr, A., Daniel, C., Schiegg, M., Nguyen-Tuong, D., Schaal, S., Toussaint, M., Trimpe, S.

In Proceedings of the International Conference on Machine Learning (ICML), July 2018 (inproceedings) Accepted

Abstract
State-space models (SSMs) are a highly expressive model class for learning patterns in time series data and for system identification. Deterministic versions of SSMs (e.g., LSTMs) proved extremely successful in modeling complex time-series data. Fully probabilistic SSMs, however, unfortunately often prove hard to train, even for smaller problems. To overcome this limitation, we propose a scalable initialization and training algorithm based on doubly stochastic variational inference and Gaussian processes. In the variational approximation we propose, in contrast to related approaches, to fully capture the latent state temporal correlations to allow for robust training.

am ics

arXiv pdf Project Page [BibTex]



Teaching a robot bimanual hand-clapping games via wrist-worn IMUs

Fitter, N. T., Kuchenbecker, K. J.

Frontiers in Robotics and AI, 5(85), July 2018 (article)

Abstract
Colleagues often shake hands in greeting, friends connect through high fives, and children around the world rejoice in hand-clapping games. As robots become more common in everyday human life, they will have the opportunity to join in these social-physical interactions, but few current robots are intended to touch people in friendly ways. This article describes how we enabled a Baxter Research Robot to both teach and learn bimanual hand-clapping games with a human partner. Our system monitors the user's motions via a pair of inertial measurement units (IMUs) worn on the wrists. We recorded a labeled library of 10 common hand-clapping movements from 10 participants; this dataset was used to train an SVM classifier to automatically identify hand-clapping motions from previously unseen participants with a test-set classification accuracy of 97.0%. Baxter uses these sensors and this classifier to quickly identify the motions of its human gameplay partner, so that it can join in hand-clapping games. This system was evaluated by N = 24 naïve users in an experiment that involved learning sequences of eight motions from Baxter, teaching Baxter eight-motion game patterns, and completing a free interaction period. The motion classification accuracy in this less structured setting was 85.9%, primarily due to unexpected variations in motion timing. The quantitative task performance results and qualitative participant survey responses showed that learning games from Baxter was significantly easier than teaching games to Baxter, and that the teaching role caused users to consider more teamwork aspects of the gameplay. Over the course of the experiment, people felt more understood by Baxter and became more willing to follow the example of the robot. Users felt uniformly safe interacting with Baxter, and they expressed positive opinions of Baxter and reported fun interacting with the robot. Taken together, the results indicate that this robot achieved credible social-physical interaction with humans and that its ability to both lead and follow systematically changed the human partner's experience.
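
A sketch of the classification pipeline on synthetic data (the features, window length, and scikit-learn SVC settings are assumptions; the study used its own feature set over 10 motion classes):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def featurize(window):                 # window: (T, 6) accel + gyro axes
    return np.concatenate([window.mean(0), window.std(0),
                           np.abs(window).max(0)])

rng = np.random.default_rng(0)
n, T = 600, 50
X_raw = rng.standard_normal((n, T, 6))
y = rng.integers(0, 10, n)             # 10 hand-clapping motion classes
X_raw += y[:, None, None] * 0.3        # inject a fake class-dependent signal
X = np.array([featurize(w) for w in X_raw])

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=10.0).fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))
```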

hi

DOI [BibTex]



Real-time Perception meets Reactive Motion Generation

(Best Systems Paper Finalists - Amazon Robotics Best Paper Awards in Manipulation)

Kappler, D., Meier, F., Issac, J., Mainprice, J., Garcia Cifuentes, C., Wüthrich, M., Berenz, V., Schaal, S., Ratliff, N., Bohg, J.

IEEE Robotics and Automation Letters, 3(3):1864-1871, July 2018 (article)

Abstract
We address the challenging problem of robotic grasping and manipulation in the presence of uncertainty. This uncertainty is due to noisy sensing, inaccurate models and hard-to-predict environment dynamics. Our approach emphasizes the importance of continuous, real-time perception and its tight integration with reactive motion generation methods. We present a fully integrated system where real-time object and robot tracking as well as ambient world modeling provides the necessary input to feedback controllers and continuous motion optimizers. Specifically, they provide attractive and repulsive potentials based on which the controllers and motion optimizer can online compute movement policies at different time intervals. We extensively evaluate the proposed system on a real robotic platform in four scenarios that exhibit either challenging workspace geometry or a dynamic environment. We compare the proposed integrated system with a more traditional sense-plan-act approach that is still widely used. In 333 experiments, we show the robustness and accuracy of the proposed system.
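
The attractive and repulsive potentials can be sketched with the classic potential-field construction; the gains, field shapes, and kinematic rollout below are illustrative, not the system's actual controllers:

```python
import numpy as np

def velocity_command(x, goal, obstacles, k_att=1.0, k_rep=0.2, d0=0.5):
    v = -k_att * (x - goal)                       # attractive potential gradient
    for o in obstacles:                           # repulsion only within d0
        diff = x - o
        d = np.linalg.norm(diff)
        if d < d0:
            v += k_rep * (1.0 / d - 1.0 / d0) / d ** 3 * diff
    return v

x = np.array([0.0, 0.0, 0.3])                     # end-effector position
goal = np.array([0.6, 0.2, 0.4])                  # from the perception system
obstacles = [np.array([0.3, 0.1, 0.35])]          # from ambient world modeling
for _ in range(100):                              # simple kinematic rollout
    x = x + 0.02 * velocity_command(x, goal, obstacles)
print("final distance to goal:", np.linalg.norm(x - goal))
```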

am

arxiv video video link (url) DOI Project Page [BibTex]


Innate turning preference of leaf-cutting ants in the absence of external orientation cues

Endlein, T., Sitti, M.

Journal of Experimental Biology, The Company of Biologists Ltd, June 2018 (article)

Abstract
Many ants use a combination of cues for orientation, but how do ants find their way when all external cues are suppressed? Do they walk in a random way or are their movements spatially oriented? Here we show for the first time that leaf-cutting ants (Acromyrmex lundii) have an innate preference for turning counter-clockwise (left) when external cues are precluded. We demonstrated this by allowing individual ants to run freely on the water surface of a newly-developed treadmill. The surface tension supported medium-sized workers but effectively prevented ants from reaching the wall of the vessel, which is important to avoid wall-following behaviour (thigmotaxis). Most ants ran for minutes on the spot but also slowly turned counter-clockwise in the absence of visual cues. Reconstructing the effectively walked path revealed a looping pattern which could be interpreted as a search strategy. A similar turning bias was shown for groups of ants in a symmetrical Y-maze where twice as many ants chose the left branch in the absence of optical cues. Wall-following behaviour was tested by inserting a coiled tube before the Y-fork. When ants traversed a left-coiled tube, more ants chose the left box and vice versa. Adding visual cues in the form of vertical black strips either outside the treadmill or on one branch of the Y-maze led to oriented walks towards the strips. It is suggested that both the turning bias and wall-following are employed as search strategies for an unknown environment, which can be overridden by visual cues.

pi

link (url) DOI [BibTex]



Motility and chemotaxis of bacteria-driven microswimmers fabricated using antigen 43-mediated biotin display

Schauer, O., Mostaghaci, B., Colin, R., Hürtgen, D., Kraus, D., Sitti, M., Sourjik, V.

Scientific Reports, 8(1):9801, Nature Publishing Group, June 2018 (article)

Abstract
Bacteria-driven biohybrid microswimmers (bacteriabots) combine synthetic cargo with motile living bacteria that enable propulsion and steering. Although fabrication and potential use of such bacteriabots have attracted much attention, existing methods of fabrication require an extensive sample preparation that can drastically decrease the viability and motility of bacteria. Moreover, chemotactic behavior of bacteriabots in a liquid medium with chemical gradients has remained largely unclear. To overcome these shortcomings, we designed Escherichia coli to autonomously display biotin on its cell surface via the engineered autotransporter antigen 43 and thus to bind streptavidin-coated cargo. We show that the cargo attachment to these bacteria is greatly enhanced by motility and occurs predominantly at the cell poles, which is greatly beneficial for the fabrication of motile bacteriabots. We further performed a systematic study to understand and optimize the ability of these bacteriabots to follow chemical gradients. We demonstrate that the chemotaxis of bacteriabots is primarily limited by the cargo-dependent reduction of swimming speed and show that the fabrication of bacteriabots using elongated E. coli cells can be used to overcome this limitation.

pi

link (url) DOI [BibTex]



Multifunctional ferrofluid-infused surfaces with reconfigurable multiscale topography

Wang, W., Timonen, J. V. I., Carlson, A., Drotlef, D., Zhang, C. T., Kolle, S., Grinthal, A., Wong, T., Hatton, B., Kang, S. H., Kennedy, S., Chi, J., Blough, R. T., Sitti, M., Mahadevan, L., Aizenberg, J.

Nature, June 2018 (article)

Abstract
Developing adaptive materials with geometries that change in response to external stimuli provides fundamental insights into the links between the physical forces involved and the resultant morphologies and creates a foundation for technologically relevant dynamic systems [1,2]. In particular, reconfigurable surface topography as a means to control interfacial properties [3] has recently been explored using responsive gels [4], shape-memory polymers [5], liquid crystals [6-8] and hybrid composites [9-14], including magnetically active slippery surfaces [12-14]. However, these designs exhibit a limited range of topographical changes and thus a restricted scope of function. Here we introduce a hierarchical magneto-responsive composite surface, made by infiltrating a ferrofluid into a microstructured matrix (termed ferrofluid-containing liquid-infused porous surfaces, or FLIPS). We demonstrate various topographical reconfigurations at multiple length scales and a broad range of associated emergent behaviours. An applied magnetic-field gradient induces the movement of magnetic nanoparticles suspended in the ferrofluid, which leads to microscale flow of the ferrofluid first above and then within the microstructured surface. This redistribution changes the initially smooth surface of the ferrofluid (which is immobilized by the porous matrix through capillary forces) into various multiscale hierarchical topographies shaped by the size, arrangement and orientation of the confining microstructures in the magnetic field. We analyse the spatial and temporal dynamics of these reconfigurations theoretically and experimentally as a function of the balance between capillary and magnetic pressures [15-19] and of the geometric anisotropy of the FLIPS system. Several interesting functions at three different length scales are demonstrated: self-assembly of colloidal particles at the micrometre scale; regulated flow of liquid droplets at the millimetre scale; and switchable adhesion and friction, liquid pumping and removal of biofilms at the centimetre scale. We envision that FLIPS could be used as part of integrated control systems for the manipulation and transport of matter, thermal management, microfluidics and fouling-release materials.

pi

link (url) DOI [BibTex]



Event-triggered Learning for Resource-efficient Networked Control

Solowjow, F., Baumann, D., Garcke, J., Trimpe, S.

In Proceedings of the 2018 American Control Conference (ACC), June 2018 (inproceedings)

ics

arXiv PDF [BibTex]



Reducing 3D Vibrations to 1D in Real Time

Park, G., Kuchenbecker, K. J.

Pisa, Italy, June 2018, Hands-on demonstration presented at EuroHaptics (misc)

Abstract
In this demonstration, you will hold two pen-shaped modules: an in-pen and an out-pen. The in-pen is instrumented with a high-bandwidth three-axis accelerometer, and the out-pen contains a one-axis voice coil actuator. Use the in-pen to interact with different surfaces; the measured 3D accelerations are continually converted into 1D vibrations and rendered with the out-pen for you to feel. You can test conversion methods that range from simply selecting a single axis to applying a discrete Fourier transform or principal component analysis for realistic and brisk real-time conversion.
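
A sketch of the principal-component variant of the conversion (window length and signals are illustrative):

```python
import numpy as np

def pca_1d(window):                    # window: (T, 3) accelerations
    centered = window - window.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[0]            # project onto the dominant axis

rng = np.random.default_rng(0)
t = np.linspace(0, 0.1, 200)           # 0.1 s window
direction = np.array([0.8, 0.5, 0.3])  # texture vibrates along one axis
accel = (np.sin(2 * np.pi * 150 * t)[:, None] * direction
         + 0.05 * rng.standard_normal((200, 3)))   # fake 3-axis signal
drive = pca_1d(accel)                  # 1-D signal for the voice coil
print(drive.shape, round(float(drive.std()), 3))
```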

hi

[BibTex]



Conditional Affordance Learning for Driving in Urban Environments

Sauer, A., Savinov, N., Geiger, A.

Conference on Robot Learning (CoRL), 2018 (conference)

avg

pdf suppmat Video [BibTex]


Oncilla robot: a versatile open-source quadruped research robot with compliant pantograph legs

Spröwitz, A., Tuleu, A., Ajallooeian, M., Vespignani, M., Moeckel, R., Eckert, P., D’Haene, M., Degrave, J., Nordmann, A., Schrauwen, B., Steil, J., Ijspeert, A. J.

Frontiers in Robotics and AI, 5(67), June 2018, arXiv: 1803.06259 (article)

Abstract
We present Oncilla robot, a novel mobile, quadruped legged locomotion machine. This large-cat sized, 5.1 kg robot is one of a kind of a recent, bioinspired legged robot class designed with the capability of model-free locomotion control. Animal legged locomotion in rough terrain is clearly shaped by sensor feedback systems. Results with Oncilla robot show that agile and versatile locomotion is possible without sensory signals to some extent, and tracking becomes robust when feedback control is added (Ajallooeian 2015). By incorporating mechanical and control blueprints inspired from animals, and by observing the resulting robot locomotion characteristics, we aim to understand the contribution of individual components. Legged robots have a wide mechanical and control design parameter space, and a unique potential as research tools to investigate principles of biomechanics and legged locomotion control. But the hardware and controller design can be a steep initial hurdle for academic research. To facilitate the easy start and development of legged robots, Oncilla-robot's blueprints are available through open-source. [...]

dlg

link (url) DOI [BibTex]



Designing a Haptic Empathetic Robot Animal for Children with Autism

Burns, R., Kuchenbecker, K. J.

Workshop paper (4 pages) at the RSS Workshop on Robot-Mediated Autism Intervention: Hardware, Software and Curriculum, June 2018 (misc)

Abstract
Children with autism often endure sensory overload, may be nonverbal, and have difficulty understanding and relaying emotions. These experiences result in heightened stress during social interaction. Animal-assisted intervention has been found to improve the behavior of children with autism during social interaction, but live animal companions are not always feasible. We are thus in the process of designing a robotic animal to mimic some successful characteristics of animal-assisted intervention while trying to improve on others. The over-arching hypothesis of this research is that an appropriately designed robot animal can reduce stress in children with autism and empower them to engage in social interaction.

hi

link (url) [BibTex]



Soft Multi-axis Boundary-Electrode Tactile Sensors for Whole-Body Robotic Skin

Lee, H., Kim, J., Kuchenbecker, K. J.

Workshop paper (2 pages) at the RSS Pioneers Workshop, June 2018 (misc)

hi

[BibTex]



Learning from Outside the Viability Kernel: Why we Should Build Robots that can Fail with Grace

Heim, S., Spröwitz, A.

Proceedings of the 2018 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR), pages: 55-61, IEEE, May 2018 (conference)

dlg

link (url) DOI [BibTex]



Self-Sensing Paper Actuators Based on Graphite–Carbon Nanotube Hybrid Films

Amjadi, M., Sitti, M.

Advanced Science, pages: 1800239, May 2018 (article)

Abstract
Soft actuators have demonstrated potential in a range of applications, including soft robotics, artificial muscles, and biomimetic devices. However, the majority of current soft actuators suffer from the lack of real-time sensory feedback, prohibiting their effective sensing and multitask function. Here, a promising strategy is reported to design bilayer electrothermal actuators capable of simultaneous actuation and sensation (i.e., self-sensing actuators), merely through two input electric terminals. Decoupled electrothermal stimulation and strain sensation is achieved by the optimal combination of graphite microparticles and carbon nanotubes (CNTs) in the form of hybrid films. By finely tuning the charge transport properties of hybrid films, the signal-to-noise ratio (SNR) of self-sensing actuators is remarkably enhanced to over 66. As a result, self-sensing actuators can actively track their displacement and distinguish the touch of soft and hard objects.

pi

link (url) DOI [BibTex]



Bioinspired microrobots

Palagi, S., Fischer, P.

Nature Reviews Materials, 3, pages: 113–124, May 2018 (article)

Abstract
Microorganisms can move in complex media, respond to the environment and self-organize. The field of microrobotics strives to achieve these functions in mobile robotic systems of sub-millimetre size. However, miniaturization of traditional robots and their control systems to the microscale is not a viable approach. A promising alternative strategy in developing microrobots is to implement sensing, actuation and control directly in the materials, thereby mimicking biological matter. In this Review, we discuss design principles and materials for the implementation of robotic functionalities in microrobots. We examine different biological locomotion strategies, and we discuss how they can be artificially recreated in magnetic microrobots and how soft materials improve control and performance. We show that smart, stimuli-responsive materials can act as on-board sensors and actuators and that ‘active matter’ enables autonomous motion, navigation and collective behaviours. Finally, we provide a critical outlook for the field of microrobotics and highlight the challenges that need to be overcome to realize sophisticated microrobots, which one day might rival biological machines.

pf

link (url) DOI [BibTex]
